ERIC Educational Resources Information Center
Gomez, Fabinton Sotelo; Ordóñez, Armando
2016-01-01
Previously, a framework for integrating web resources that provide educational services in dotLRN was presented. The present paper describes the application of this framework in a rural school in Cauca, Colombia. The case study includes two web resources on the topic of waves (physics), oriented toward secondary education. Web classes and…
Waagmeester, Andra; Pico, Alexander R.
2016-01-01
The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web. PMID:27336457
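The triple model behind this SPARQL endpoint can be illustrated with a minimal in-memory sketch. The pathway, gene, and identifier values below are hypothetical placeholders, not actual WikiPathways content, and the pattern matcher stands in for what a SPARQL engine does:

```python
# Minimal sketch of the RDF triple model: each statement is a
# (subject, predicate, object) tuple, and a query is a pattern
# in which None acts as a wildcard (like a SPARQL variable).

TRIPLES = [
    ("wp:WP1", "rdf:type", "wp:Pathway"),               # hypothetical pathway
    ("wp:WP1", "wp:hasParticipant", "gene:BRCA2"),
    ("gene:BRCA2", "wp:bdbEnsembl", "ensembl:ENSG00000139618"),
]

def match(pattern, triples=TRIPLES):
    """Return all triples matching a (s, p, o) pattern; None = wildcard."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which genes participate in pathway wp:WP1?
participants = [o for _, _, o in match(("wp:WP1", "wp:hasParticipant", None))]
print(participants)  # ['gene:BRCA2']
```

The standard-identifier triples (here the `wp:bdbEnsembl` link) are what let a query join pathway participants against other semantic web resources.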
Project MERLOT: Bringing Peer Review to Web-Based Educational Resources
ERIC Educational Resources Information Center
Cafolla, Ralph
2006-01-01
The unprecedented growth of the World Wide Web has resulted in a profusion of educational resources. The challenge for faculty is finding these resources and integrating them into their instruction. Even after the resource is found, the instructor must assess the effectiveness of the resource. As the number of educational web sites mount into the…
The semantic web in translational medicine: current applications and future directions
Machado, Catia M.; Rebholz-Schuhmann, Dietrich; Freitas, Ana T.; Couto, Francisco M.
2015-01-01
Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. The semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, and to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into clinical practice. PMID:24197933
The NIF DISCO Framework: facilitating automated integration of neuroscience content on the web.
Marenco, Luis; Wang, Rixin; Shepherd, Gordon M; Miller, Perry L
2010-06-01
This paper describes the capabilities of DISCO, an extensible approach that supports integrative Web-based information dissemination. DISCO is a component of the Neuroscience Information Framework (NIF), an NIH Neuroscience Blueprint initiative that facilitates integrated access to diverse neuroscience resources via the Internet. DISCO facilitates the automated maintenance of several distinct capabilities using a collection of files 1) that are maintained locally by the developers of participating neuroscience resources and 2) that are "harvested" on a regular basis by a central DISCO server. This approach allows central NIF capabilities to be updated as each resource's content changes over time. DISCO currently supports the following capabilities: 1) resource descriptions, 2) "LinkOut" to a resource's data items from NCBI Entrez resources such as PubMed, 3) Web-based interoperation with a resource, 4) sharing a resource's lexicon and ontology, 5) sharing a resource's database schema, and 6) participation by the resource in neuroscience-related RSS news dissemination. The developers of a resource are free to choose which DISCO capabilities their resource will participate in. Although DISCO is used by NIF to facilitate neuroscience data integration, its capabilities have general applicability to other areas of research.
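The harvest-from-locally-maintained-files pattern described above can be sketched as follows. The file layout, field names, and resource identifiers are illustrative assumptions, not the actual DISCO file formats:

```python
import json

# Sketch: each participating resource maintains a small description file
# locally; a central harvester fetches these on a schedule and rebuilds
# its registry, so central capabilities track each resource's own updates.

def harvest(raw_files):
    """raw_files: mapping of resource id -> JSON text of its description file."""
    registry = {}
    for resource_id, text in raw_files.items():
        desc = json.loads(text)
        registry[resource_id] = {
            "name": desc.get("name", resource_id),
            # Each resource opts into only the capabilities it chooses.
            "capabilities": sorted(desc.get("capabilities", [])),
        }
    return registry

# Simulated fetch results from two hypothetical resources:
files = {
    "neurodb": '{"name": "NeuroDB", "capabilities": ["linkout", "description"]}',
    "synapse": '{"name": "SynapseAtlas", "capabilities": ["rss"]}',
}
registry = harvest(files)
print(registry["neurodb"]["capabilities"])  # ['description', 'linkout']
```

Re-running the harvest after a resource edits its local file would replace that resource's registry entry, which is the update mechanism the paper describes.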
Integrating Mathematics, Science, and Language Arts Instruction Using the World Wide Web.
ERIC Educational Resources Information Center
Clark, Kenneth; Hosticka, Alice; Kent, Judi; Browne, Ron
1998-01-01
Addresses issues of access to World Wide Web sites, mathematics and science content-resources available on the Web, and methods for integrating mathematics, science, and language arts instruction. (Author/ASK)
Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S
2007-01-01
Background: Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results: The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion: The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453
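The microformat idea, embedding tool-readable structured data inside ordinary HTML, can be sketched with the standard library's HTML parser. The class name `gaggle-data` and the markup below are invented for illustration; they are not the actual Gaggle microformat:

```python
from html.parser import HTMLParser

# Sketch: scan an HTML page for elements whose class marks them as
# embedded structured data, and collect their text content.

class MicroformatScraper(HTMLParser):
    def __init__(self, marker_class):
        super().__init__()
        self.marker_class = marker_class
        self._in_marker = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if self.marker_class in classes:
            self._in_marker = True

    def handle_endtag(self, tag):
        self._in_marker = False

    def handle_data(self, data):
        if self._in_marker and data.strip():
            self.items.append(data.strip())

# Hypothetical results page embedding two gene identifiers:
page = ('<html><body><p>Results:</p>'
        '<span class="gaggle-data">VNG0001</span>'
        '<span class="gaggle-data">VNG0002</span></body></html>')
scraper = MicroformatScraper("gaggle-data")
scraper.feed(page)
print(scraper.items)  # ['VNG0001', 'VNG0002']
```

A browser extension doing this kind of scan can hand the collected identifiers to a desktop tool without the user ever exporting a file.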
Integrated and Applied Curricula Discussion Group and Data Base Project. Final Report.
ERIC Educational Resources Information Center
Wisconsin Univ. - Stout, Menomonie. Center for Vocational, Technical and Adult Education.
A project was conducted to compile integrated and applied curriculum resources, develop databases on the World Wide Web, and encourage networking for high school and technical college educators through an Internet discussion group. Activities conducted during the project include the creation of a web page to guide users to resource banks…
Moby and Moby 2: creatures of the deep (web).
Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D
2009-03-01
Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
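One way to picture the Deep Web problem described above is that data produced on demand has no stable address. A minimal sketch of the remedy is a resolver that maps predictable URIs onto the computations that generate the data; the URI scheme and the stand-in analysis function below are hypothetical, not the CardioSHARE/Moby-2 design:

```python
# Sketch: give dynamically generated data a predictable URI by routing
# URIs to the computations that produce the data on demand.

RESOLVERS = {}

def resolves(prefix):
    """Register a function as the resolver for URIs under a prefix."""
    def register(func):
        RESOLVERS[prefix] = func
        return func
    return register

@resolves("urn:example:gc-content/")
def gc_content(sequence):
    """A stand-in analytical tool: GC fraction of a DNA sequence."""
    return (sequence.count("G") + sequence.count("C")) / len(sequence)

def dereference(uri):
    """Resolve a URI by invoking the computation behind it."""
    for prefix, func in RESOLVERS.items():
        if uri.startswith(prefix):
            return func(uri[len(prefix):])
    raise KeyError(uri)

print(dereference("urn:example:gc-content/ATGC"))  # 0.5
```

Once such results are addressable, URI-centric Semantic Web machinery (RDF statements, SPARQL queries) can refer to them like any static resource.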
SAS- Semantic Annotation Service for Geoscience resources on the web
NASA Astrophysics Data System (ADS)
Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.
2015-12-01
There is a growing need for increased integration across the data and model resources that are disseminated on the web to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision of pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, enabling queries and analysis over annotations from a single environment (the web). SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports integration between models and data and allows semantically heterogeneous resources to interact with minimal human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and in annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web and to allow users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.
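The core annotation operation, attaching standard-vocabulary predicates to a web resource, can be sketched as follows. The standard names, resource URI, and validation rule are illustrative assumptions, not the actual SAS vocabulary or API:

```python
# Sketch: a semantic annotation binds a resource URI to predicate/value
# pairs, where predicates are restricted to a controlled vocabulary of
# standard names (here a tiny hypothetical subset).

STANDARD_NAMES = {"csn:air_temperature", "csn:soil_moisture"}

def annotate(resource_uri, predicate, value, annotations):
    """Add one annotation triple, rejecting non-standard predicates."""
    if predicate not in STANDARD_NAMES:
        raise ValueError(f"unknown standard name: {predicate}")
    annotations.append((resource_uri, predicate, value))

annotations = []
annotate("http://example.org/dataset/42", "csn:air_temperature", "degC",
         annotations)

# Query: which resources are annotated with a given standard name?
hits = [s for s, p, _ in annotations if p == "csn:air_temperature"]
print(hits)  # ['http://example.org/dataset/42']
```

Rejecting predicates outside the controlled vocabulary is what keeps annotations queryable across independently published resources.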
WheatGenome.info: A Resource for Wheat Genomics.
Lai, Kaitao
2016-01-01
An integrated database named WheatGenome.info, hosting wheat genome and genomic data through a variety of Web-based systems, has been developed to support wheat research and crop improvement. The resource includes multiple Web-based applications: a GBrowse2-based wheat genome viewer with a BLAST search portal, TAGdb for searching wheat second-generation genome sequence data, wheat autoSNPdb, links to wheat genetic maps using CMap and CMap3D, and a wheat genome Wiki to allow interaction between diverse wheat genome sequencing activities. The portal also provides links to a variety of wheat genome resources hosted at other research organizations. This integrated database aims to accelerate wheat genome research and is freely accessible via the web interface at http://www.wheatgenome.info/.
Using EMBL-EBI services via Web interface and programmatically via Web Services
Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish
2015-01-01
The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. PMID:25501941
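The REST flavour of programmatic access described in this unit amounts to composing a request from a base URL, a tool path, and form-encoded parameters. The sketch below only builds the request; the tool name, path layout, and parameter names are placeholders, not the documented EMBL-EBI API, so consult the service's own documentation for the real ones:

```python
from urllib.parse import urlencode, urljoin

# Sketch of REST-style programmatic access: compose a job-submission
# request as base URL + tool path + form-encoded parameters.  No network
# call is made here; a client would POST `body` to `url`.

BASE = "https://www.ebi.ac.uk/Tools/services/rest/"

def submission_request(tool, **params):
    """Return (url, body) for a hypothetical job-submission POST."""
    url = urljoin(BASE, f"{tool}/run")
    body = urlencode(sorted(params.items()))
    return url, body

url, body = submission_request("exampletool",
                               email="user@example.org",
                               sequence="MKTAYIAKQR")
print(url)
print(body)
```

The same request can of course be issued from any language with an HTTP client, which is the point of exposing the tools as Web Services.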
MendelWeb: An Electronic Science/Math/History Resource for the WWW.
ERIC Educational Resources Information Center
Blumberg, Roger B.
This paper describes a hypermedia resource, called MendelWeb, that integrates elementary biology, discrete mathematics, and the history of science. MendelWeb is constructed from Gregor Mendel's 1865 paper, "Experiments in Plant Hybridization". An English translation of Mendel's paper, which is considered to mark the birth of classical and…
SeWeR: a customizable and integrated dynamic HTML interface to bioinformatics services.
Basu, M K
2001-06-01
Sequence analysis using Web Resources (SeWeR) is an integrated, Dynamic HTML (DHTML) interface to commonly used bioinformatics services available on the World Wide Web. It is highly customizable, extendable, platform neutral, completely server-independent and can be hosted as a web page as well as being used as stand-alone software running within a web browser.
Luxton, David D; Armstrong, Christina M; Fantelli, Emily E; Thomas, Elissa K
2011-09-01
Web-based self-care resources have a number of potential benefits for military service members (SMs) and their families, such as convenience, anonymity, and immediate 24/7 access to useful information. There are, however, limited data available regarding SM and military healthcare provider use of online self-care resources. Our goal in this study was to conduct a preliminary survey assessment of self-care Web site awareness, general attitudes about use, and usage behaviors of Web-based self-care resources among SMs and military healthcare providers. Results show that the majority of SMs and providers use the Internet often, use Internet self-care resources, and are willing to use additional Web-based resources and capabilities. SMs and providers also indicated a preference for Web-based self-care resources as adjunct tools to face-to-face/in-person care. Data from this preliminary study are useful for informing additional research and best practices for integrating Web-based self-care for the military community.
EuroPhenome and EMPReSS: online mouse phenotyping resource
Mallon, Ann-Marie; Hancock, John M.
2008-01-01
EuroPhenome (http://www.europhenome.org) and EMPReSS (http://empress.har.mrc.ac.uk/) form an integrated resource to provide access to data and procedures for mouse phenotyping. EMPReSS describes 96 Standard Operating Procedures for mouse phenotyping. EuroPhenome contains data resulting from carrying out EMPReSS protocols on four inbred laboratory mouse strains. As well as web interfaces, both resources support web services to enable integration with other mouse phenotyping and functional genetics resources, and are committed to initiatives to improve integration of mouse phenotype databases. EuroPhenome will be the repository for a recently initiated effort to carry out large-scale phenotyping on a large number of knockout mouse lines (EUMODIC). PMID:17905814
Web-Based Learning Information System for Web 3.0
NASA Astrophysics Data System (ADS)
Rego, Hugo; Moreira, Tiago; García-Peñalvo, Francisco Jose
With the emergence of Web/eLearning 3.0, we have been developing and adjusting AHKME to face this great challenge. One of our goals is to allow the instructional designer and teacher to access standardized resources and to evaluate the possibility of integrating and reusing them in eLearning systems, covering not only content but also the learning strategy. We have also integrated collaborative tools for the adaptation of resources, as well as for collecting feedback from users to feed back into the system. We further provide tools for the instructional designer to create and customize specifications/ontologies that give structure and meaning to resources; manual and automatic search with recommendation of resources and instructional design based on context; and recommendation of adaptations in learning resources. Finally, we consider mobility and mobile technology applied to eLearning, allowing teachers and students to access learning resources regardless of time and space.
The Effectiveness of Lecture-Integrated, Web-Supported Case Studies in Large Group Teaching
ERIC Educational Resources Information Center
Azzawi, May; Dawson, Maureen M.
2007-01-01
The effectiveness of lecture-integrated and web-supported case studies in supporting a large and academically diverse group of undergraduate students was evaluated in the present study. Case studies and resource (web)-based learning were incorporated as two complementary interactive learning strategies into the traditional curriculum. A truncated…
ERIC Educational Resources Information Center
Levitt, Roberta; Piro, Joseph
2014-01-01
Technology integration and Information and Communication Technology (ICT)-based education have enhanced the teaching and learning process by introducing a range of web-based instructional resources for classroom practitioners to deepen and extend instruction. One of the most durable of these resources has been the WebQuest. Introduced around the…
Delivering an Alternative Medicine Resource to the User's Desktop via World Wide Web.
ERIC Educational Resources Information Center
Li, Jie; Wu, Gang; Marks, Ellen; Fan, Weiyu
1998-01-01
Discusses the design and implementation of a World Wide Web-based alternative medicine virtual resource. This homepage integrates regional, national, and international resources and delivers library services to the user's desktop. Goals, structure, and organizational schemes of the system are detailed, and design issues for building such a…
Case Studies in Describing Scientific Research Efforts as Linked Data
NASA Astrophysics Data System (ADS)
Gandara, A.; Villanueva-Rosales, N.; Gates, A.
2013-12-01
The Web is growing rich in scientific resources, prompting increased efforts in information management to consider their integration and exchange. Scientists have many options for sharing scientific resources on the Web; however, existing options provide limited support for annotating and relating the resources that result from a scientific research effort. Moreover, there is no systematic approach to documenting scientific research and sharing it on the Web. This research proposes the Collect-Annotate-Refine-Publish (CARP) Methodology as an approach for guiding documentation of scientific research on the Semantic Web as scientific collections. Scientific collections are structured descriptions of scientific research that make scientific results accessible based on context. In addition, scientific collections enhance the Linked Data data space and can be queried by machines. Three case studies were conducted on research efforts at the Cyber-ShARE Research Center of Excellence in order to assess the effectiveness of the methodology for creating scientific collections. The case studies exposed the challenges and benefits of leveraging the Semantic Web and Linked Data data space to facilitate access, integration, and processing of Web-accessible scientific resources and research documentation. As such, we present the case study findings and lessons learned in documenting scientific research using CARP.
Web-Based Learning Materials for Higher Education: The MERLOT Repository
ERIC Educational Resources Information Center
Orhun, Emrah
2004-01-01
MERLOT (Multimedia Educational Resource for Learning and Online Teaching) is a web-based open resource designed primarily for faculty and students in higher education. The resources in MERLOT include over 8,000 learning materials and support materials from a wide variety of disciplines that can be integrated within the context of a larger course.…
Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience
Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi
2011-01-01
Objective: Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite. Methods: Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, enabling their integration into a growing collection of biomedical information resources. Conclusion: We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477
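The cross-ontology mapping described above, linking a local class to classes in external ontologies, can be sketched as explicit assertions with a lookup that follows equivalence links. The class identifiers below are placeholders, not actual SenseLab, BIRNLex, or BFO terms:

```python
# Sketch: ontology mappings as (local class, relation, external class)
# assertions, with a lookup that follows owl:equivalentClass links in
# either direction.

MAPPINGS = [
    ("senselab:Neuron", "owl:equivalentClass", "ext:NeuronCell"),
    ("senselab:Neuron", "rdfs:subClassOf", "bfo:MaterialEntity"),
]

def equivalents(cls, mappings=MAPPINGS):
    """Classes asserted equivalent to cls (equivalence is symmetric)."""
    out = set()
    for s, p, o in mappings:
        if p == "owl:equivalentClass":
            if s == cls:
                out.add(o)
            elif o == cls:
                out.add(s)
    return sorted(out)

print(equivalents("senselab:Neuron"))  # ['ext:NeuronCell']
```

Keeping equivalence separate from subclassing matters: a query may substitute equivalent classes freely, but may only generalize, not specialize, along `rdfs:subClassOf`.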
A case study of data integration for aquatic resources using semantic web technologies
Gordon, Janice M.; Chkhenkeli, Nina; Govoni, David L.; Lightsom, Frances L.; Ostroff, Andrea C.; Schweitzer, Peter N.; Thongsavanh, Phethala; Varanka, Dalia E.; Zednik, Stephan
2015-01-01
Use cases, information modeling, and linked data techniques are Semantic Web technologies used to develop a prototype system that integrates scientific observations from four independent USGS and cooperator data systems. The techniques were tested with a use case goal of creating a data set for use in exploring potential relationships among freshwater fish populations and environmental factors. The resulting prototype extracts data from the BioData Retrieval System, the Multistate Aquatic Resource Information System, the National Geochemical Survey, and the National Hydrography Dataset. A prototype user interface allows a scientist to select observations from these data systems and combine them into a single data set in RDF format that includes explicitly defined relationships and data definitions. The project was funded by the USGS Community for Data Integration and undertaken by the Community for Data Integration Semantic Web Working Group in order to demonstrate use of Semantic Web technologies by scientists. This allows scientists to simultaneously explore data that are available in multiple, disparate systems beyond those they traditionally have used.
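The core move in the USGS prototype above is extracting records from independent systems and recombining them as triples with explicitly defined relationships. The sketch below illustrates that idea only; the record fields, source names, and predicate names are invented for illustration and are not the prototype's actual schema.

```python
# Illustrative sketch (not the USGS prototype's code): flatten records from
# two hypothetical source systems into subject-predicate-object triples and
# join them on a shared sampling site.

# Hypothetical records from two independent systems.
biodata = [{"site": "site42", "species": "brook trout", "count": 17}]
geochem = [{"site": "site42", "analyte": "arsenic", "ppm": 0.003}]

def to_triples(records, source):
    """Flatten flat-file style records into (subject, predicate, object) triples."""
    triples = []
    for rec in records:
        subject = f"{source}/{rec['site']}"
        for key, value in rec.items():
            triples.append((subject, key, value))
        # Explicitly defined relationship back to the shared sampling site.
        triples.append((subject, "observedAt", rec["site"]))
    return triples

# A single combined "data set" spanning both systems.
graph = to_triples(biodata, "biodata") + to_triples(geochem, "geochem")

# A SPARQL-like join: find every observation made at the same site.
site_obs = sorted(s for (s, p, o) in graph if p == "observedAt" and o == "site42")
print(site_obs)
```

The real prototype emits RDF rather than Python tuples, but the join-on-a-shared-identifier pattern is the same.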
Beveridge, Allan
2006-01-01
The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.
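The "scan existing web interfaces" step that Beveridge describes amounts to parsing a page's form markup to discover its query parameters. A minimal stdlib sketch of that step is below; the HTML snippet and field names are hypothetical, and the original system generated Java retrieval code rather than Python.

```python
# Hedged sketch of scanning a web interface: collect each form's action URL
# and input field names, from which retrieval code could later be generated.
from html.parser import HTMLParser

class FormScanner(HTMLParser):
    """Record the action URL and named input fields of an HTML form."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.fields = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.action = attrs.get("action")
        elif tag == "input" and attrs.get("name"):
            self.fields.append(attrs["name"])

# Hypothetical CGI search form on a biological database.
page = '<form action="/cgi-bin/search"><input name="gene"><input name="organism"></form>'
scanner = FormScanner()
scanner.feed(page)
print(scanner.action, scanner.fields)
```

The discovered action and field names are exactly the inputs a rule set would need to emit a data-retrieval function automatically.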
Parikh, Priti P; Minning, Todd A; Nguyen, Vinh; Lalithsena, Sarasi; Asiaee, Amir H; Sahoo, Satya S; Doshi, Prashant; Tarleton, Rick; Sheth, Amit P
2012-01-01
Research on the biology of parasites requires a sophisticated and integrated computational platform to query and analyze large volumes of data, representing both unpublished (internal) and public (external) data sources. Effective analysis of an integrated data resource using knowledge discovery tools would significantly aid biologists in conducting their research, for example, through identifying various intervention targets in parasites and in deciding the future direction of ongoing as well as planned projects. A key challenge in achieving this objective is the heterogeneity between the internal lab data, usually stored as flat files, Excel spreadsheets or custom-built databases, and the external databases. Reconciling the different forms of heterogeneity and effectively integrating data from disparate sources is a nontrivial task for biologists and requires a dedicated informatics infrastructure. Thus, we developed an integrated environment using Semantic Web technologies that may provide biologists the tools for managing and analyzing their data, without the need for acquiring in-depth computer science knowledge. We developed a semantic problem-solving environment (SPSE) that uses ontologies to integrate internal lab data with external resources in a Parasite Knowledge Base (PKB), which has the ability to query across these resources in a unified manner. The SPSE includes Web Ontology Language (OWL)-based ontologies, experimental data with its provenance information represented using the Resource Description Format (RDF), and a visual querying tool, Cuebee, that features integrated use of Web services. We demonstrate the use and benefit of SPSE using example queries for identifying gene knockout targets of Trypanosoma cruzi for vaccine development. Answers to these queries involve looking up multiple sources of data, linking them together and presenting the results. 
The SPSE helps parasitologists leverage the growing, but disparate, parasite data resources by offering an integrative platform that utilizes Semantic Web techniques, while keeping any increase in their workload minimal.
Our Town Integrated Studies: A Resource.
ERIC Educational Resources Information Center
North Carolina State Dept. of Public Education, Raleigh.
This integrated state curriculum guide was developed by North Carolina fourth grade teachers, principals, and supervisors during a workshop which explored methods of integrating curriculum objectives from multiple instructional areas by using the community as both a resource and a subject of study and by introducing the concept of webbing, an…
NASA Astrophysics Data System (ADS)
Wang, Jian
2017-01-01
In order to change the traditional PE teaching mode and realize the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The database design takes Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, and adopts NAS technology for data storage and streaming technology for video service. Analysis of the system design and implementation shows that the dynamic PE teaching information resource sharing platform based on Web Services can realize loosely coupled collaboration and dynamic, active integration, and offers good integrability, openness and encapsulation. The distance PE teaching platform based on Web Services and the design scheme of the PE teaching information resource database can effectively realize the interconnection, interworking and sharing of PE teaching resources and meet the demands of informatization in PE teaching.
The ChEMBL database as linked open data
2013-01-01
Background: Making data available as Linked Data using the Resource Description Framework (RDF) promotes integration with other web resources. RDF documents can natively link to related data, and others can link back using Uniform Resource Identifiers (URIs). RDF makes the data machine-readable and uses extensible vocabularies for additional information, making it easier to scale up inference and data analysis. Results: This paper describes recent developments in an ongoing project converting data from the ChEMBL database into RDF triples. Relative to earlier versions, this updated version of ChEMBL-RDF uses recently introduced ontologies, including CHEMINF and CiTO; exposes more information from the database; and is now available as dereferenceable, linked data. To demonstrate these new features, we present novel use cases showing further integration with other web resources, including Bio2RDF, Chem2Bio2RDF, and ChemSpider, and showing the use of standard ontologies for querying. Conclusions: We have illustrated the advantages of using open standards and ontologies to link the ChEMBL database to other databases. Using those links and the knowledge encoded in standards and ontologies, the ChEMBL-RDF resource creates a foundation for integrated semantic web cheminformatics applications, such as the presented decision support. PMID:23657106
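Converting database records into RDF triples, as ChEMBL-RDF does, boils down to emitting one subject URI per record plus predicate-object pairs that either carry literal values or link out to other resources. The sketch below shows that serialization in Turtle syntax; the URIs and predicate names are invented for illustration and are not ChEMBL-RDF's actual vocabulary.

```python
# Hedged sketch: serialize one record as a Turtle block, distinguishing
# literal values from links to other web resources (the linked-data part).
def record_to_turtle(subject_uri, properties):
    """Emit a Turtle block: subject, then semicolon-separated predicate-object pairs."""
    pairs = []
    for predicate, obj in properties:
        if obj.startswith("http"):
            pairs.append(f"    <{predicate}> <{obj}>")   # URI: link to another resource
        else:
            pairs.append(f'    <{predicate}> "{obj}"')   # plain literal value
    return f"<{subject_uri}>\n" + " ;\n".join(pairs) + " .\n"

# Hypothetical molecule record with one literal and one outbound link.
ttl = record_to_turtle(
    "http://example.org/molecule/CHEMBL25",
    [("http://www.w3.org/2000/01/rdf-schema#label", "aspirin"),
     ("http://example.org/seeAlso", "http://example.org/chemspider/2157")],
)
print(ttl)
```

Because the object of the second pair is itself a URI, a consumer can dereference it to walk from this record into the linked resource, which is the integration mechanism the abstract describes.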
TryTransDB: A web-based resource for transport proteins in Trypanosomatidae.
Sonar, Krushna; Kabra, Ritika; Singh, Shailza
2018-03-12
TryTransDB is a web-based resource that stores transport protein data, which can be retrieved using a standalone BLAST tool. We have attempted to create an integrated database that can be a one-stop shop for researchers working with transport proteins of the Trypanosomatidae family. TryTransDB (Trypanosomatidae Transport Protein Database) is a comprehensive web-based resource that can run a BLAST search against most of the transport protein sequences (protein and nucleotide) from Trypanosomatidae family organisms. This web resource further allows users to compute a phylogenetic tree by performing multiple sequence alignment (MSA) using the CLUSTALW suite embedded in it. Cross-linking to other databases also helps in gathering more information about a given transport protein from a single website.
NASA Astrophysics Data System (ADS)
Elag, M.; Kumar, P.
2016-12-01
Hydrologists today have to integrate resources such as data and models, which originate and reside in multiple autonomous and heterogeneous repositories over the Web. Several resource management systems have emerged within geoscience communities for sharing long-tail data, which are collected by individuals or small research groups, and long-tail models, which are developed by scientists or small modeling communities. While these systems have increased the availability of resources within geoscience domains, deficiencies remain due to the heterogeneity of the methods used to describe, encode, and publish information about resources over the Web. This heterogeneity limits our ability to access the right information in the right context so that it can be efficiently retrieved and understood without the hydrologist's mediation. A primary challenge of the Web today is the lack of semantic interoperability among the massive number of resources that already exist and are continually being generated at rapid rates. To address this challenge, we have developed a decentralized GeoSemantic (GS) framework, which provides three sets of micro-web services to support (i) semantic annotation of resources, (ii) semantic alignment between the metadata of two resources, and (iii) semantic mediation among Standard Names. Here we present the design of the framework and demonstrate its application for semantic integration between data and models used in the IML-CZO. First, we show how the IML-CZO data are annotated using the Semantic Annotation Services. Then we illustrate how the Resource Alignment Services and Knowledge Integration Services are used to create a semantic workflow between the TopoFlow model, a spatially distributed hydrologic model, and the annotated data.
Results of this work are (i) a demonstration of how the GS framework advances the integration of heterogeneous data and models in water-related disciplines by seamlessly handling their semantic heterogeneity, (ii) the introduction of a new paradigm for reusing existing and new standards, tools and models without the need to reimplement them in the cyberinfrastructures of water-related disciplines, and (iii) an investigation of a methodology by which distributed models can be coupled in a workflow using the GS services.
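At its core, the semantic alignment service described above maps a dataset's metadata terms onto the standard names a model expects. The GS services are web services; the sketch below reduces the idea to a plain function, and both the mapping table and the field names are assumptions made for illustration, not the framework's actual vocabulary.

```python
# Hedged sketch of semantic alignment: map dataset metadata terms to the
# standard names a model consumes, reporting any terms with no known match.
ALIGNMENT = {
    # dataset metadata term -> model-side standard name (both hypothetical)
    "precip_mm":     "atmosphere_water__precipitation_volume_flux",
    "soil_moisture": "soil_water__volume_fraction",
}

def align(dataset_fields):
    """Return (matched mapping, list of unmatched fields) for a dataset's fields."""
    matched = {f: ALIGNMENT[f] for f in dataset_fields if f in ALIGNMENT}
    unmatched = [f for f in dataset_fields if f not in ALIGNMENT]
    return matched, unmatched

matched, unmatched = align(["precip_mm", "wind_dir"])
print(matched, unmatched)
```

Unmatched fields are exactly where a mediation step (or a human) has to intervene, which is why the framework exposes mediation among Standard Names as a separate service.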
Sahoo, Satya S.; Bodenreider, Olivier; Rutter, Joni L.; Skinner, Karen J.; Sheth, Amit P.
2008-01-01
Objectives: This paper illustrates how Semantic Web technologies (especially RDF, OWL, and SPARQL) can support information integration and make it easy to create semantic mashups (semantically integrated resources). In the context of understanding the genetic basis of nicotine dependence, we integrate gene and pathway information and show how three complex biological queries can be answered by the integrated knowledge base. Methods: We use an ontology-driven approach to integrate two gene resources (Entrez Gene and HomoloGene) and three pathway resources (KEGG, Reactome and BioCyc), for five organisms, including humans. We created the Entrez Knowledge Model (EKoM), an information model in OWL for the gene resources, and integrated it with the extant BioPAX ontology designed for pathway resources. The integrated schema is populated with data from the pathway resources, publicly available in BioPAX-compatible format, and gene resources for which a population procedure was created. The SPARQL query language is used to formulate queries over the integrated knowledge base to answer the three biological queries. Results: Simple SPARQL queries could easily identify hub genes, i.e., those genes whose gene products participate in many pathways or interact with many other gene products. The identification of the genes expressed in the brain turned out to be more difficult, due to the lack of a common identification scheme for proteins. Conclusion: Semantic Web technologies provide a valid framework for information integration in the life sciences. Ontology-driven integration represents a flexible, sustainable and extensible solution to the integration of large volumes of information. Additional resources, which enable the creation of mappings between information sources, are required to compensate for heterogeneity across namespaces. Resource page: http://knoesis.wright.edu/research/lifesci/integration/structured_data/JBI-2008/ PMID:18395495
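The "hub gene" query above is run as SPARQL over an RDF knowledge base; the pure-Python sketch below shows the same selection criterion on toy data. The gene and pathway names are invented for illustration and are not drawn from the paper's results.

```python
# Hedged sketch of the hub-gene criterion: a gene counts as a hub when its
# products participate in at least min_pathways pathways (toy data only).
gene_pathways = {
    "GENE_A": ["nicotine addiction", "calcium signaling", "cholinergic synapse"],
    "GENE_B": ["tyrosine metabolism"],
    "GENE_C": ["dopaminergic synapse", "nicotine addiction", "cAMP signaling"],
}

def hub_genes(gene_pathways, min_pathways=2):
    """Return genes participating in at least min_pathways pathways, sorted."""
    return sorted(g for g, ps in gene_pathways.items() if len(ps) >= min_pathways)

print(hub_genes(gene_pathways))  # -> ['GENE_A', 'GENE_C']
```

In the integrated knowledge base the equivalent SPARQL query groups pathway memberships by gene and filters on the count, which is why the abstract calls it a "simple" query once the data are integrated.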
Leveraging Web 2.0 in the Redesign of a Graduate-Level Technology Integration Course
ERIC Educational Resources Information Center
Oliver, Kevin
2007-01-01
In the emerging era of the "read-write" web, students can not only research and collect information from existing web resources, but also collaborate and create new information on the web in a surprising number of ways. Web 2.0 is an umbrella term for many individual tools that have been created with web collaboration, sharing, and/or new…
McLean, Michelle; Murrell, Kathy
2002-03-01
WebCT, front-end software for Internet-delivered material, became an integral part of a problem-based learning, student-centred curriculum introduced in January 2001 at the Nelson R. Mandela School of Medicine (South Africa). A template for six curriculum and two supplementary modules was developed. Organiser and Tool pages were added and files uploaded as each module progressed. This study provides feedback from students with regard to the value of WebCT in their curriculum, as well as discussing the value of WebCT for the delivery of digitized material (e.g., images, videos, PowerPoint presentations). In an anonymous survey following the completion of the first module, students, apparently irrespective of their level of computer literacy, responded positively to the communication facility between staff and students and amongst students, the resources and the URLs. Based on these preliminary responses, WebCT courses for all six modules were developed during 2001. With Faculty support, WebCT will probably be integrated into the rest of the MBChB programme. It will be particularly useful when students are off campus, undertaking electives and community service in the later years.
ERIC Educational Resources Information Center
El-Tigi, Manal Aziz-El-Din
This study examined college students' perceptions of course Web sites as an instructional resource for classroom-based courses. The focus was on identifying functions on the sites that students perceived as supporting and fostering their learning experiences. Subjects were 142 students responding to a 60-item questionnaire and open-ended…
WebQuest on Conic Sections as a Learning Tool for Prospective Teachers
ERIC Educational Resources Information Center
Kurtulus, Aytac; Ada, Tuba
2012-01-01
WebQuests incorporate technology with educational concepts through integrating online resources with student-centred and activity-based learning. In this study, we describe and evaluate a WebQuest based on conic sections, which we have used with a group of prospective mathematics teachers. The WebQuest entitled: "Creating a Carpet Design Using…
Integrating DXplain into a clinical information system using the World Wide Web.
Elhanan, G; Socratous, S A; Cimino, J J
1996-01-01
The World Wide Web (WWW) offers a cross-platform environment and standard protocols that enable integration of various applications available on the Internet. The authors use the Web to facilitate interaction between their Web-based Clinical Information System and a decision-support system, DXplain, at the Massachusetts General Hospital, using local architecture and Common Gateway Interface programs. The current application translates patients' laboratory test results into DXplain's terms to generate diagnostic hypotheses. Two different access methods are utilized for this model: Hypertext Transfer Protocol (HTTP) and TCP/IP function calls. While clinical aspects cannot yet be evaluated, the model demonstrates the potential of Web-based applications for interaction and integration, and how local architecture, with a controlled vocabulary server, can further facilitate such integration. This model also serves to demonstrate some of the limitations of current WWW technology and identifies issues such as control over Web resources, their utilization, and liability as possible obstacles to further integration.
WebVR: an interactive web browser for virtual environments
NASA Astrophysics Data System (ADS)
Barsoum, Emad; Kuester, Falko
2005-03-01
The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D Web Browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web-browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static and streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Zhang, Xiaoyu; Cai, Hongming; Xu, Boyi
Enacting a supply-chain process involves various partners and different IT systems. REST is receiving increasing attention for distributed systems with loosely coupled resources. Nevertheless, resource model incompatibilities and conflicts prevent effective process modeling and deployment in resource-centric Web service environments. In this paper, a Petri-net-based framework for supply-chain process integration is proposed. A resource meta-model is constructed to represent the basic information of resources. Based on this meta-model, XML schemas and documents are derived, which represent resources and their states in the Petri net. Thereafter, XML-net, a high-level Petri net, is employed for modeling the control and data flow of the process. From the process model in XML-net, RESTful services and choreography descriptions are deduced. Thus, a unified resource representation and RESTful service description are proposed for more effective cross-system integration. A case study is given to illustrate the approach, and its desirable features are discussed.
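A "unified resource representation" in a RESTful setting pairs an addressable URI with a state document derived from the resource meta-model. The sketch below illustrates that pairing; the element names, field names, and URI layout are hypothetical and not the paper's actual schema.

```python
# Hedged sketch: a meta-model instance rendered as a RESTful URI plus an XML
# state document, the two artifacts a REST partner would exchange.
from dataclasses import dataclass
from xml.etree import ElementTree as ET

@dataclass
class Resource:
    kind: str    # resource type from the meta-model, e.g. "order"
    rid: str     # resource identifier
    state: str   # current state token (a place marking in the Petri net)

    def uri(self, base="http://example.org"):
        """Derive the RESTful address of this resource."""
        return f"{base}/{self.kind}s/{self.rid}"

    def to_xml(self):
        """Serialize the resource and its state as an XML document."""
        root = ET.Element("resource", {"type": self.kind, "id": self.rid})
        ET.SubElement(root, "state").text = self.state
        return ET.tostring(root, encoding="unicode")

order = Resource("order", "42", "shipped")
print(order.uri())
print(order.to_xml())
```

A state transition in the XML-net would then correspond to a PUT of an updated state document to the same URI, which keeps the process model and the REST interface aligned.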
Pathview Web: user friendly pathway visualization and data integration
Pant, Gaurav; Bhavnasi, Yeshvant K.; Blanchard, Steven G.; Brouwer, Cory
2017-01-01
Pathway analysis is widely used in omics studies, and pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview, which maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. PMID:28482075
A semantic web ontology for small molecules and their biological targets.
Choi, Jooyoung; Davis, Melissa J; Newman, Andrew F; Ragan, Mark A
2010-05-24
A wide range of data on sequences, structures, pathways, and networks of genes and gene products is available for hypothesis testing and discovery in biological and biomedical research. However, data describing the physical, chemical, and biological properties of small molecules have not been well-integrated with these resources. Semantically rich representations of chemical data, combined with Semantic Web technologies, have the potential to enable the integration of small molecule and biomolecular data resources, expanding the scope and power of biomedical and pharmacological research. We employed the Semantic Web technologies Resource Description Framework (RDF) and Web Ontology Language (OWL) to generate a Small Molecule Ontology (SMO) that represents concepts and provides unique identifiers for biologically relevant properties of small molecules and their interactions with biomolecules, such as proteins. We instanced SMO using data from three public data sources, i.e., DrugBank, PubChem and UniProt, and converted to RDF triples. Evaluation of SMO by use of predetermined competency questions implemented as SPARQL queries demonstrated that data from chemical and biomolecular data sources were effectively represented and that useful knowledge can be extracted. These results illustrate the potential of Semantic Web technologies in chemical, biological, and pharmacological research and in drug discovery.
Bernard, André; Langille, Morgan; Hughes, Stephanie; Rose, Caren; Leddin, Desmond; Veldhuyzen van Zanten, Sander
2007-09-01
The Internet is a widely used information resource for patients with inflammatory bowel disease, but there is variation in the quality of Web sites that have patient information regarding Crohn's disease and ulcerative colitis. The purpose of the current study is to systematically evaluate the quality of these Web sites. The top 50 Web sites appearing in Google using the terms "Crohn's disease" or "ulcerative colitis" were included in the study. Web sites were evaluated using a (a) Quality Evaluation Instrument (QEI) that awarded Web sites points (0-107) for specific information on various aspects of inflammatory bowel disease, (b) a five-point Global Quality Score (GQS), (c) two reading grade level scores, and (d) a six-point integrity score. Thirty-four Web sites met the inclusion criteria, 16 Web sites were excluded because they were portals or non-IBD oriented. The median QEI score was 57 with five Web sites scoring higher than 75 points. The median Global Quality Score was 2.0 with five Web sites achieving scores of 4 or 5. The average reading grade level score was 11.2. The median integrity score was 3.0. There is marked variation in the quality of the Web sites containing information on Crohn's disease and ulcerative colitis. Many Web sites suffered from poor quality but there were five high-scoring Web sites.
PaaS for web applications with OpenShift Origin
NASA Astrophysics Data System (ADS)
Lossent, A.; Rodriguez Peon, A.; Wagner, A.
2017-10-01
The CERN Web Frameworks team has deployed OpenShift Origin to facilitate deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.
ERIC Educational Resources Information Center
Yang, Shu Ching
2001-01-01
Describes the integration of Web resources as instructional and learning tools in an EFL (English as a Foreign Language) class in Taiwan. Highlights include challenges and advantages of using the Web; learners' perceptions; intentional and incidental learning; disorientation and cognitive overload; and information seeking as problem-solving. A…
Web-based resources for critical care education.
Kleinpell, Ruth; Ely, E Wesley; Williams, Ged; Liolios, Antonios; Ward, Nicholas; Tisherman, Samuel A
2011-03-01
To identify, catalog, and critically evaluate Web-based resources for critical care education. A multilevel search strategy was utilized. Literature searches were conducted (from 1996 to September 30, 2010) using OVID-MEDLINE, PubMed, and the Cumulative Index to Nursing and Allied Health Literature with the terms "Web-based learning," "computer-assisted instruction," "e-learning," "critical care," "tutorials," "continuing education," "virtual learning," and "Web-based education." The Web sites of relevant critical care organizations (American College of Chest Physicians, American Society of Anesthesiologists, American Thoracic Society, European Society of Intensive Care Medicine, Society of Critical Care Medicine, World Federation of Societies of Intensive and Critical Care Medicine, American Association of Critical Care Nurses, and World Federation of Critical Care Nurses) were reviewed for the availability of e-learning resources. Finally, Internet searches and e-mail queries to critical care medicine fellowship program directors and members of national and international acute/critical care listserves were conducted to 1) identify the use of and 2) review and critique Web-based resources for critical care education. To ensure credibility of Web site information, Web sites were reviewed by three independent reviewers on the basis of the criteria of authority, objectivity, authenticity, accuracy, timeliness, relevance, and efficiency in conjunction with suggested formats for evaluating Web sites in the medical literature. Literature searches using OVID-MEDLINE, PubMed, and the Cumulative Index to Nursing and Allied Health Literature resulted in >250 citations. 
Those pertinent to critical care provide examples of the integration of e-learning techniques, the development of specific resources, reports of the use of types of e-learning, including interactive tutorials, case studies, and simulation, and reports of student or learner satisfaction, among other general reviews of the benefits of utilizing e-learning. Review of the Web sites of relevant critical care organizations revealed the existence of a number of e-learning resources, including online critical care courses, tutorials, podcasts, webcasts, slide sets, and continuing medical education resources, some requiring membership or a fee to access. Respondents to listserve queries (>100) and critical care medicine fellowship director and advanced practice nursing educator e-mail queries (>50) identified the use of a number of tutorials, self-directed learning modules, and video-enhanced programs for critical care education and practice. In all, >135 Web-based education resources exist, including video Web resources for critical care education in a variety of e-learning formats, such as tutorials, self-directed learning modules, interactive case studies, webcasts, podcasts, and video-enhanced programs. As identified by critical care educators and practitioners, e-learning is actively being integrated into critical care medicine and nursing training programs for continuing medical education and competency training purposes. Knowledge of available Web-based educational resources may enhance critical care practitioners' ongoing learning and clinical competence, although this has not been objectively measured to date.
Distributed spatial information integration based on web service
NASA Astrophysics Data System (ADS)
Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng
2008-10-01
Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous, and independent from each other. As a result, many isolated spatial information islands form, reducing the efficiency of information utilization. To address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous invocation of web services and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the web service dispatcher and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegation mechanism, so all of the web map services can execute at the same time. When each web map service completes, an image is returned to the dispatcher; after all of the web services are done, the images are transparently overlaid together in the dispatcher. Users can then browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
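The dispatcher pattern described above (parallel, asynchronous invocation of registered map services, followed by an overlay step) can be sketched in a few lines of Python. The service functions and registry below are hypothetical stand-ins; a real dispatcher would issue remote WMS GetMap requests based on its registration database.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for remote web map services recorded in the
# dispatcher's registration database; each returns an "image" for a bounding box.
def fetch_roads(bbox):
    return {"layer": "roads", "bbox": bbox}

def fetch_rivers(bbox):
    return {"layer": "rivers", "bbox": bbox}

REGISTRY = [fetch_roads, fetch_rivers]

def dispatch(bbox):
    """Invoke every registered map service in parallel and collect the results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(svc, bbox) for svc in REGISTRY]
        # Results are gathered in registration order once all services finish;
        # the dispatcher would then transparently overlay them into one map.
        return [f.result() for f in futures]

layers = [img["layer"] for img in dispatch((0, 0, 10, 10))]
```

The key point of the design is that no service waits for another: each invocation runs concurrently, and the overlay happens only after all futures resolve.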
USDA-ARS?s Scientific Manuscript database
Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...
IntegratedMap: a Web interface for integrating genetic map data.
Yang, Hongyu; Wang, Hongyu; Gingle, Alan R
2005-05-01
IntegratedMap is a Web application and database schema for storing and interactively displaying genetic map data. Its Web interface includes a menu for direct chromosome/linkage group selection, a search form for selection based on mapped object location and linkage group displays. An overview display provides convenient access to the full range of mapped and anchored object types with genetic locus details, such as numbers, types and names of mapped/anchored objects displayed in a compact scrollable list box that automatically updates based on selected map location and object type. Also, multilinkage group and localized map views are available along with links that can be configured for integration with other Web resources. IntegratedMap is implemented in C#/ASP.NET and the package, including a MySQL schema creation script, is available from http://cggc.agtec.uga.edu/Data/download.asp
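The lookup described above (select a linkage group, then list its mapped objects filtered by type and ordered by locus position) is essentially a parameterized query over the map database. A minimal sketch follows, using SQLite in place of MySQL; the table and column names are illustrative assumptions, not the published IntegratedMap schema.

```python
import sqlite3

# Toy stand-in for an IntegratedMap-style database (names are illustrative).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mapped_object (
    name TEXT, obj_type TEXT, linkage_group TEXT, position REAL)""")
con.executemany(
    "INSERT INTO mapped_object VALUES (?, ?, ?, ?)",
    [("marker1", "SSR", "LG1", 12.5),
     ("geneA",   "gene", "LG1", 14.0),
     ("marker2", "SSR", "LG2", 3.2)])

def objects_on(linkage_group, obj_type=None):
    """List mapped objects on a linkage group, optionally filtered by type,
    ordered by map position -- as in the scrollable overview display."""
    sql = "SELECT name, position FROM mapped_object WHERE linkage_group = ?"
    args = [linkage_group]
    if obj_type:
        sql += " AND obj_type = ?"
        args.append(obj_type)
    return con.execute(sql + " ORDER BY position", args).fetchall()
```

The "automatically updates based on selected map location and object type" behavior amounts to re-running this query whenever the user's selection changes.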
CORC--Cooperative Online Resource Catalog.
ERIC Educational Resources Information Center
Hickey, Thomas B.
2001-01-01
Describes OCLC's CORC (Cooperative Online Resource Catalog) that is being developed to explore the cooperative creation of a catalog of Internet resources that will support both MARC and less formal metadata. Explains the catalog design which will allow dynamic generation of Web pages with resources for integration with libraries' portal pages.…
Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo
2015-01-01
Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
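The hub role of ortholog triples described above can be illustrated with a toy in-memory triple store: statements about genes in different organisms are joined through a shared ortholog group, the way a SPARQL query would join graph patterns. The subject, predicate, and object names below are invented for illustration and are not OrthO or MBGD terms.

```python
# Toy triple store: each statement is a (subject, predicate, object) tuple.
TRIPLES = {
    ("geneA_ecoli", "inOrthologGroup", "OG1"),
    ("geneA_bsub",  "inOrthologGroup", "OG1"),
    ("geneA_ecoli", "hasGOAnnotation", "GO:0006260"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None plays the role of a SPARQL variable."""
    return [(ts, tp, to) for (ts, tp, to) in TRIPLES
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# "Which genes belong to ortholog group OG1?" -- the hub-style join that lets
# an annotation on one organism's gene be transferred to its orthologs.
members = sorted(ts for ts, _, _ in query(p="inOrthologGroup", o="OG1"))
```

A real client would express the same pattern as a SPARQL basic graph pattern against the MBGD endpoint; the point here is only that ortholog statements act as the join key between otherwise unconnected datasets.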
WebQuest Learning as Perceived by Higher-Education Learners
ERIC Educational Resources Information Center
Zheng, Robert; Stucky, Bradd; McAlack, Matt; Menchaca, Mike; Stoddart, Sue
2005-01-01
The WebQuest as an inquiry-oriented approach in web learning has gained considerable attention from educators and has been integrated widely into curricula in K-12 and higher education. It is considered to be an effective way to organize chaotic internet resources and help learners gain new knowledge through a guided learning environment.…
NASA Technical Reports Server (NTRS)
McCarthy, Marianne C.; Grabowski, Barbara L.; Koszalka, Tiffany
2003-01-01
Over a three-year period, researchers and educators from the Pennsylvania State University (PSU), University Park, Pennsylvania, and the NASA Dryden Flight Research Center (DFRC), Edwards, California, worked together to analyze, develop, implement and evaluate materials and tools that enable teachers to use NASA Web resources effectively for teaching science, mathematics, technology and geography. Two conference publications and one technical paper have already been published as part of this educational research series on Web-based instruction and learning. This technical paper, Web-Enhanced Instruction and Learning: Findings of a Short- and Long-Term Impact Study, is the culminating report in this educational research series and is based on the final report submitted to NASA. This report describes the broad spectrum of data gathered from teachers about their experiences using NASA Web resources in the classroom. It also describes participating teachers' responses and feedback about the use of the NASA Web-Enhanced Learning Environment Strategies reflection tool on their teaching practices. The reflection tool was designed to help teachers merge the vast array of NASA resources with the best teaching methods, taking into consideration grade levels, subject areas and teaching preferences. The teachers described their attitudes toward technology and innovation in the classroom and their experiences and perceptions as they attempted to integrate Web resources into science, mathematics, technology and geography instruction.
Pathview Web: user friendly pathway visualization and data integration.
Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory
2017-07-03
Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
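Programmatic access through a RESTful API typically reduces to constructing a parameterized HTTP request. The sketch below shows the general shape of such a call; the endpoint path and parameter names are illustrative assumptions, not the documented Pathview Web API.

```python
from urllib.parse import urlencode

# Hypothetical endpoint path -- the real API routes may differ.
BASE = "https://pathview.uncc.edu/api/analysis"

def build_request(pathway_id, genes):
    """Assemble a GET-style request URL for rendering gene data on a pathway.
    Parameter names here are illustrative, not the documented API."""
    params = {"pathway.id": pathway_id, "gene.data": ",".join(genes)}
    return BASE + "?" + urlencode(params)

url = build_request("hsa04110", ["TP53", "CDK2"])
```

A third-party workflow would send this request (e.g. with `urllib.request` or any HTTP client) and receive the rendered pathway graph or analysis result in the response.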
Montano, Blanca San José; Garcia Carretero, Rafael; Varela Entrecanales, Manuel; Pozuelo, Paz Martin
2010-09-01
Research in hospital settings faces several difficulties. Information technologies and certain Web 2.0 tools may provide new models to tackle these problems, allowing for a collaborative approach and bridging the gap between clinical practice, teaching and research. We aim to gather a community of researchers involved in the development of a network of learning and investigation resources in a hospital setting. A multi-disciplinary work group analysed the needs of the research community. We studied the opportunities provided by Web 2.0 tools and finally we defined the spaces that would be developed, describing their elements, members and different access levels. WIKINVESTIGACION is a collaborative web space with the aim of integrating the management of all the hospital's teaching and research resources. It is composed of five spaces, with different access privileges. The spaces are: Research Group Space 'wiki for each individual research group', Learning Resources Centre devoted to the Library, News Space, Forum and Repositories. The Internet, and most notably the Web 2.0 movement, is introducing some overwhelming changes in our society. Research and teaching in the hospital setting will join this current and take advantage of these tools to socialise and improve knowledge management.
A Real-Time Web of Things Framework with Customizable Openness Considering Legacy Devices
Zhao, Shuai; Yu, Le; Cheng, Bo
2016-01-01
With the development of the Internet of Things (IoT), resources and applications based on it have emerged on a large scale. However, most efforts are “silo” solutions where devices and applications are tightly coupled. Infrastructures are needed to connect sensors to the Internet, open up and break the current application silos and move to a horizontal application mode. Based on the concept of Web of Things (WoT), many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as no real-time guarantee, lack of fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices, hinder their widespread and practical use. To address these issues, this paper proposes a WoT resource framework that provides the infrastructures for the customizable openness and sharing of users’ data and resources under the premise of ensuring the real-time behavior of their own applications. The proposed framework is validated by actual systems and experimental evaluations. PMID:27690038
Exploring weight loss services in primary care and staff views on using a web-based programme.
Ware, Lisa J; Williams, Sarah; Bradbury, Katherine; Brant, Catherine; Little, Paul; Hobbs, F D Richard; Yardley, Lucy
2012-01-01
Demand is increasing for primary care to deliver effective weight management services to patients, but research suggests that staff feel inadequately resourced for such a role. Supporting service delivery with a free and effective web-based weight management programme could maximise primary care resource and provide cost-effective support for patients. However, integration of e-health into primary care may face challenges. To explore primary care staff experiences of delivering weight management services and their perceptions of a web-based weight management programme to aid service delivery. Focus groups were conducted with primary care physicians, nurses and healthcare assistants (n = 36) involved in delivering weight loss services. Data were analysed using inductive thematic analysis. Participants thought that primary care should be involved in delivering weight management, especially when weight was aggravating health problems. However, they felt under-resourced to deliver these services and unsure as to the effectiveness of their input, as routine services were not evaluated. Beliefs that current services were ineffective resulted in staff reluctance to allocate more resources. Participants were hopeful that supplementing practice with a web-based weight management programme would enhance patient services and promote service evaluation. Although primary care staff felt they should deliver weight loss services, low levels of faith in the efficacy of current treatments resulted in provision of under-resourced and 'ad hoc' services. Integration of a web-based weight loss programme that promotes service evaluation and provides a cost-effective option for supporting patients may encourage practices to invest more in weight management services.
Information integration from heterogeneous data sources: a Semantic Web approach.
Kunapareddy, Narendra; Mirhaji, Parsa; Richards, David; Casscells, S Ward
2006-01-01
Although the decentralized and autonomous implementation of health information systems has made it possible to extend the reach of surveillance systems to a variety of contextually disparate domains, these systems were not primarily designed with public health use of their data in mind. The Semantic Web has been proposed to address both representational and semantic heterogeneity in distributed and collaborative environments. We introduce a semantic approach for the integration of health data using the Resource Description Framework (RDF) and the Simple Knowledge Organization System (SKOS) developed by the Semantic Web community.
On-Line Literature: The Challenge of Integrating Web-Based Materials
ERIC Educational Resources Information Center
Ruzich, Constance M.
2012-01-01
This classroom research discusses the challenges of integrating face-to-face interactions with the use of on-line resources in secondary English classrooms. Examining the lesson plans of pre-service and early career teachers in the US, I found that the uses of on-line resources were frequently neither coherent nor consistent with the goals and…
ExPASy: SIB bioinformatics resource portal.
Artimo, Panu; Jonnalagedda, Manohar; Arnold, Konstantin; Baratin, Delphine; Csardi, Gabor; de Castro, Edouard; Duvaud, Séverine; Flegel, Volker; Fortier, Arnaud; Gasteiger, Elisabeth; Grosdidier, Aurélien; Hernandez, Céline; Ioannidis, Vassilios; Kuznetsov, Dmitry; Liechti, Robin; Moretti, Sébastien; Mostaguir, Khaled; Redaschi, Nicole; Rossier, Grégoire; Xenarios, Ioannis; Stockinger, Heinz
2012-07-01
ExPASy (http://www.expasy.org) has a worldwide reputation as one of the main bioinformatics resources for proteomics. It has now evolved into an extensible and integrative portal accessing many scientific resources, databases and software tools in different areas of life sciences. Scientists can now seamlessly access a wide range of resources in many different domains, such as proteomics, genomics, phylogeny/evolution, systems biology, population genetics and transcriptomics. The individual resources (databases, web-based and downloadable software tools) are hosted in a 'decentralized' way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions. Specifically, a single web portal provides a common entry point to a wide range of resources developed and operated by different SIB groups and external institutions. The portal features a search function across 'selected' resources. Additionally, the availability and usage of resources are monitored. The portal is aimed at both expert users and people who are not familiar with a specific domain in life sciences. The new web interface provides, in particular, visual guidance for newcomers to ExPASy.
Paterson, Trevor; Law, Andy
2009-08-14
Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by performing comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Datasources range in scale and scope from small bespoke resources for particular organisms, through larger web-resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access and integration of these disparate resources. Consequently data integration and comparison must be performed in an ad hoc manner. We have developed a simple generic XML schema (GenomicMappingData.xsd - GMD) to allow export and exchange of mapping data in a common lightweight XML document format. This schema represents the various types of data objects commonly described across mapping datasources and provides a mechanism for recording relationships between data objects. The schema is sufficiently generic to allow representation of any map type (for example genetic linkage maps, radiation hybrid maps, sequence maps and physical maps). It also provides mechanisms for recording data provenance and for cross-referencing external datasources (including, for example, ENSEMBL, PubMed and GenBank). The schema is extensible via the inclusion of additional datatypes, which can be achieved by importing further schemas, e.g. a schema defining relationship types. We have built demonstration web services that export data from our ArkDB database according to the GMD schema, facilitating the integration of data retrieval into Taverna workflows.
The data exchange standard we present here provides a useful generic format for transfer and integration of genomic and genetic mapping data. The extensibility of our schema allows for inclusion of additional data and provides a mechanism for typing mapping objects via third party standards. Web services retrieving GMD-compliant mapping data demonstrate that use of this exchange standard provides a practical mechanism for achieving data integration, by facilitating syntactically and semantically-controlled access to the data.
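Exporting mapping data in a lightweight XML format, as the GMD services do, can be sketched with the standard library's ElementTree. The element and attribute names below are illustrative guesses in the spirit of the schema, not the published GenomicMappingData.xsd.

```python
import xml.etree.ElementTree as ET

# Sketch of serializing a genetic map as a lightweight XML document.
# Element/attribute names are illustrative, not the real GMD schema.
def export_map(map_name, map_type, markers):
    root = ET.Element("genomicMappingData")
    m = ET.SubElement(root, "map", name=map_name, type=map_type)
    for name, position in markers:
        ET.SubElement(m, "mappedObject", name=name, position=position)
    return ET.tostring(root, encoding="unicode")

doc = export_map("chr1_linkage", "genetic",
                 [("mk1", "12.5"), ("mk2", "30.0")])
```

A consuming service would validate such a document against the published XSD and map each `mappedObject` back into its own data model, which is what makes the format a practical interchange layer between otherwise incompatible resources.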
WheatGenome.info: an integrated database and portal for wheat genome information.
Lai, Kaitao; Berkman, Paul J; Lorenc, Michal Tadeusz; Duran, Chris; Smits, Lars; Manoli, Sahana; Stiller, Jiri; Edwards, David
2012-02-01
Bread wheat (Triticum aestivum) is one of the most important crop plants, globally providing staple food for a large proportion of the human population. However, improvement of this crop has been limited due to its large and complex genome. Advances in genomics are supporting wheat crop improvement. We provide a variety of web-based systems hosting wheat genome and genomic data to support wheat research and crop improvement. WheatGenome.info is an integrated database resource which includes multiple web-based applications. These include a GBrowse2-based wheat genome viewer with BLAST search portal, TAGdb for searching wheat second-generation genome sequence data, wheat autoSNPdb, links to wheat genetic maps using CMap and CMap3D, and a wheat genome Wiki to allow interaction between diverse wheat genome sequencing activities. This system includes links to a variety of wheat genome resources hosted at other research organizations. This integrated database aims to accelerate wheat genome research and is freely accessible via the web interface at http://www.wheatgenome.info/.
Enriching and improving the quality of linked data with GIS
NASA Astrophysics Data System (ADS)
Iwaniak, Adam; Kaczmarek, Iwona; Strzelecki, Marek; Lukowicz, Jaromar; Jankowski, Piotr
2016-06-01
Standardization of methods for data exchange in GIS has a long history predating the creation of the World Wide Web. The advent of the World Wide Web brought the emergence of new solutions for data exchange and sharing including, more recently, standards proposed by the W3C for data exchange involving Semantic Web technologies and linked data. Despite the growing interest in integration, GIS and linked data are still two separate paradigms for describing and publishing spatial data on the Web. At the same time, both paradigms offer complementary ways of representing real world phenomena and means of analysis using different processing functions. The complementarity of linked data and GIS can be leveraged to synergize both paradigms, resulting in richer data content and more powerful inferencing. The article presents an approach aimed at integrating linked data with GIS. The approach relies on the use of GIS tools for integration, verification and enrichment of linked data. The GIS tools are employed to enrich linked data by furnishing access to collections of data resources, defining relationships between data resources, and subsequently facilitating GIS data integration with linked data. The proposed approach is demonstrated with examples using data from DBpedia, OSM, and tools developed by the authors for standard GIS software.
Climate Change, Nutrition, and Bottom-Up and Top-Down Food Web Processes.
Rosenblatt, Adam E; Schmitz, Oswald J
2016-12-01
Climate change ecology has focused on climate effects on trophic interactions through the lenses of temperature effects on organismal physiology and phenological asynchronies. Trophic interactions are also affected by the nutrient content of resources, but this topic has received less attention. Using concepts from nutritional ecology, we propose a conceptual framework for understanding how climate affects food webs through top-down and bottom-up processes impacted by co-occurring environmental drivers. The framework integrates climate effects on consumer physiology and feeding behavior with effects on resource nutrient content. It illustrates how studying responses of simplified food webs to simplified climate change might produce erroneous predictions. We encourage greater integrative complexity of climate change research on trophic interactions to resolve patterns and enhance predictive capacities. Copyright © 2016 Elsevier Ltd. All rights reserved.
ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii.
May, Patrick; Christian, Jan-Ole; Kempa, Stefan; Walther, Dirk
2009-05-04
The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc) was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.
The Neuroscience Information Framework: A Data and Knowledge Environment for Neuroscience
Akil, Huda; Ascoli, Giorgio A.; Bowden, Douglas M.; Bug, William; Donohue, Duncan E.; Goldberg, David H.; Grafstein, Bernice; Grethe, Jeffrey S.; Gupta, Amarnath; Halavi, Maryam; Kennedy, David N.; Marenco, Luis; Martone, Maryann E.; Miller, Perry L.; Müller, Hans-Michael; Robert, Adrian; Shepherd, Gordon M.; Sternberg, Paul W.; Van Essen, David C.; Williams, Robert W.
2009-01-01
With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience’s Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic, inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come on line. PMID:18946742
SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services
Gessler, Damian DG; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T
2009-01-01
Background SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous, disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. Results There are currently over 2,400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remainder are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a public discovery server uses the description logic reasoner Pellet to integrate semantic resources; the platform also hosts an interactive guide to the protocol, developer tools, and a portal to third-party ontologies. Conclusion SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantics, and semantic discovery) while addressing three technology limitations common in distributed service systems: i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation.
SSWAP is novel in establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, discovery servers to offer semantically rich search engines, clients to discover and invoke those resources, and providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs. PMID:19775460
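The interaction the abstract describes — a provider publishes a graph describing its resource, and a discovery server reasons over subsumption to match client data to services — can be sketched over a plain triple set. The vocabulary, URIs, and hierarchy below are hypothetical stand-ins, not the actual SSWAP protocol terms.

```python
triples = {
    # hypothetical service description published by a provider
    ("ex:QTLService", "rdf:type", "sswap:Resource"),
    ("ex:QTLService", "sswap:inputClass", "ex:GeneticMarker"),
    # hypothetical, provider-defined subsumption hierarchy
    ("ex:SSRMarker", "rdfs:subClassOf", "ex:GeneticMarker"),
}

def is_subclass(graph, cls, ancestor):
    """Transitive rdfs:subClassOf check over a triple set."""
    if cls == ancestor:
        return True
    parents = {o for s, p, o in graph
               if s == cls and p == "rdfs:subClassOf"}
    return any(is_subclass(graph, p, ancestor) for p in parents)

def discover(graph, data_class):
    """Find resources whose declared input class subsumes `data_class`."""
    return sorted(
        s for s, p, o in graph
        if p == "sswap:inputClass" and is_subclass(graph, data_class, o))

print(discover(triples, "ex:SSRMarker"))
```

A real discovery server would delegate the subsumption check to a reasoner such as Pellet over full OWL DL semantics; the toy transitive closure above only illustrates why a mutable class hierarchy makes discovery flexible.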
Data management integration for biomedical core facilities
NASA Astrophysics Data System (ADS)
Zhang, Guo-Qiang; Szymanski, Jacek; Wilson, David
2007-03-01
We present the design, development, and pilot-deployment experiences of MIMI, a web-based, Multi-modality Multi-Resource Information Integration environment for biomedical core facilities. This is an easily customizable, web-based software tool that integrates scientific and administrative support for a biomedical core facility involving a common set of entities: researchers; projects; equipment and devices; support staff; services; samples and materials; experimental workflow; large and complex data. With this software, one can: register users; manage projects; schedule resources; bill services; perform site-wide search; archive, back-up, and share data. With its customizable, expandable, and scalable characteristics, MIMI not only provides a cost-effective solution to the overarching data management problem of biomedical core facilities unavailable in the marketplace, but also lays a foundation for data federation to facilitate and support discovery-driven research.
Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke
2011-10-24
The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies. PMID:22024447
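The core SADI design pattern — a service consumes an instance of its input OWL class and returns the same individual decorated with the properties that define its output class — can be sketched with plain dictionaries standing in for OWL individuals. The class and property names here are hypothetical illustrations, not SADI vocabulary.

```python
INPUT_CLASS = {"hasSequence"}      # properties required on an input individual
OUTPUT_PROPERTY = "hasGCContent"   # property the service attaches on output

def gc_content_service(instance):
    """Attach a GC-content annotation to an input individual (SADI-style)."""
    if not INPUT_CLASS <= instance.keys():
        raise ValueError("instance does not satisfy the input class")
    seq = instance["hasSequence"].upper()
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    # decorate the *same* individual rather than minting a new one,
    # so the output stays linked to the input URI
    return {**instance, OUTPUT_PROPERTY: gc}

out = gc_content_service({"uri": "ex:seq1", "hasSequence": "GATC"})
print(out["hasGCContent"])  # 0.5
```

Because the output individual keeps the input's identity and merely gains properties, a client can chain such services into workflows by matching one service's output class against the next service's input class.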
Public health, GIS, and the internet.
Croner, Charles M
2003-01-01
Internet access and use of georeferenced public health information for GIS application will be an important and exciting development for the nation's Department of Health and Human Services and other health agencies in this new millennium. Technological progress toward public health geospatial data integration, analysis, and visualization of space-time events using the Web portends eventual robust use of GIS by public health and other sectors of the economy. Increasing Web resources from distributed spatial data portals and global geospatial libraries, and a growing suite of Web integration tools, will provide new opportunities to advance disease surveillance, control, and prevention, and ensure public access and community empowerment in public health decision making. Emerging supercomputing, data mining, compression, and transmission technologies will play increasingly critical roles in national emergency, catastrophic planning and response, and risk management. Web-enabled public health GIS will be guided by Federal Geographic Data Committee spatial metadata, OpenGIS Web interoperability, and GML/XML geospatial Web content standards. Public health will become a responsive and integral part of the National Spatial Data Infrastructure.
Scholarly Context Adrift: Three out of Four URI References Lead to Changed Content
Tobin, Richard; Grover, Claire
2016-01-01
Increasingly, scholarly articles contain URI references to “web at large” resources including project web sites, scholarly wikis, ontologies, online debates, presentations, blogs, and videos. Authors reference such resources to provide essential context for the research they report on. A reader who visits a web at large resource by following a URI reference in an article, some time after its publication, is led to believe that the resource’s content is representative of what the author originally referenced. However, due to the dynamic nature of the web, that may very well not be the case. We reuse a dataset from a previous study in which several authors of this paper were involved, and investigate to what extent the textual content of web at large resources referenced in a vast collection of Science, Technology, and Medicine (STM) articles published between 1997 and 2012 has remained stable since the publication of the referencing article. We do so in a two-step approach that relies on various well-established similarity measures to compare textual content. In a first step, we use 19 web archives to find snapshots of referenced web at large resources that have textual content that is representative of the state of the resource around the time of publication of the referencing paper. We find that representative snapshots exist for about 30% of all URI references. In a second step, we compare the textual content of representative snapshots with that of their live web counterparts. We find that for over 75% of references the content has drifted away from what it was when referenced. These results raise significant concerns regarding the long term integrity of the web-based scholarly record and call for the deployment of techniques to combat these problems. PMID:27911955
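The study's second step compares the textual content of an archived snapshot with its live counterpart using well-established similarity measures. One such measure, token-based Jaccard similarity, can be sketched as follows; the example texts and the drift threshold are illustrative, not the paper's actual parameters.

```python
def tokens(text):
    """Normalize a document to a set of lowercase word tokens."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity between the token sets of two texts (0.0-1.0)."""
    ta, tb = tokens(a), tokens(b)
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

snapshot = "project homepage describing the flood sensor deployment"
live = "domain for sale contact our brokers today"

# below a chosen threshold, the live page no longer represents what
# the author originally referenced (threshold is illustrative)
drifted = jaccard(snapshot, live) < 0.5
print(drifted)
```

In practice a study like this would combine several measures (e.g., normalized edit distance or cosine similarity over term vectors) and strip HTML boilerplate before comparison, since navigation menus and ads inflate apparent similarity.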
The Service Environment for Enhanced Knowledge and Research (SEEKR) Framework
NASA Astrophysics Data System (ADS)
King, T. A.; Walker, R. J.; Weigel, R. S.; Narock, T. W.; McGuire, R. E.; Candey, R. M.
2011-12-01
The Service Environment for Enhanced Knowledge and Research (SEEKR) Framework is a configurable service-oriented framework to enable the discovery, access and analysis of data shared in a community. The SEEKR framework integrates many existing independent services through the use of web technologies and standard metadata. Services are hosted on systems by using an application server and are callable by using REpresentational State Transfer (REST) protocols. Messages and metadata are transferred with eXtensible Markup Language (XML) encoding which conforms to a published XML schema. Space Physics Archive Search and Extract (SPASE) metadata is central to utilizing the services. Resources (data, documents, software, etc.) are described with SPASE and the associated Resource Identifier is used to access and exchange resources. The configurable options for the service can be set by using a web interface. Services are packaged as web application resource (WAR) files for direct deployment on application servers such as Tomcat or Jetty. We discuss the composition of the SEEKR framework, how new services can be integrated and the steps necessary to deploy the framework. The SEEKR Framework emerged from NASA's Virtual Magnetospheric Observatory (VMO) and other systems and we present an overview of these systems from a SEEKR Framework perspective.
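The metadata flow the abstract describes — a SPASE description is parsed and its Resource Identifier extracted for use in access and exchange — can be sketched with the standard library's XML parser. The element layout below is a simplified stand-in, not the full SPASE schema.

```python
import xml.etree.ElementTree as ET

# Simplified SPASE-style resource description (illustrative, not the
# full schema; the ResourceID value is a made-up example)
doc = """
<Spase>
  <NumericalData>
    <ResourceID>spase://EXAMPLE/NumericalData/Mag/PT1M</ResourceID>
    <ResourceHeader>
      <ResourceName>Example magnetometer data</ResourceName>
    </ResourceHeader>
  </NumericalData>
</Spase>
"""

root = ET.fromstring(doc)
# the Resource Identifier is the handle clients use to request the resource
resource_id = root.findtext(".//ResourceID")
resource_name = root.findtext(".//ResourceName")
print(resource_id, "-", resource_name)
```

A SEEKR-style client would pass such an identifier to a REST endpoint; real SPASE documents additionally carry a namespace and version attribute that a production parser must handle.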
MAPI: a software framework for distributed biomedical applications
2013-01-01
Background The number of web-based resources (databases, tools, etc.) in biomedicine has increased, but the integrated usage of those resources is complex due to differences in access protocols and data formats. However, distributed data processing is becoming inevitable in several domains, in particular in biomedicine, where researchers face rapidly increasing data sizes. This big data is difficult to process locally because of the large processing, memory and storage capacity required. Results This manuscript describes a framework, called MAPI, which provides a uniform representation of resources available over the Internet, in particular for Web Services. The framework enhances their interoperability and collaborative use by enabling uniform and remote access. The framework functionality is organized in modules that can be combined and configured in different ways to fulfil concrete development requirements. Conclusions The framework has been tested in the biomedical application domain where it has been a base for developing several clients that are able to integrate different web resources. The MAPI binaries and documentation are freely available at http://www.bitlab-es.com/mapi under the Creative Commons Attribution-No Derivative Works 2.5 Spain License. The MAPI source code is available by request (GPL v3 license). PMID:23311574
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.
2011-01-01
Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
Description of the U.S. Geological Survey Geo Data Portal data integration framework
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Lucido, Jessica M.
2012-01-01
The U.S. Geological Survey has developed an open-standard data integration framework for working efficiently and effectively with large collections of climate and other geoscience data. A web interface accesses catalog datasets to find data services. Data resources can then be rendered for mapping and dataset metadata are derived directly from these web services. Algorithm configuration and information needed to retrieve data for processing are passed to a server where all large-volume data access and manipulation takes place. The data integration strategy described here was implemented by leveraging existing free and open source software. Details of the software used are omitted; rather, emphasis is placed on how open-standard web services and data encodings can be used in an architecture that integrates common geographic and atmospheric data.
Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.
Kahn, Charles E
2008-09-01
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to incorporate easily a context-sensitive image gallery into their documents.
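The context-sensitive retrieval described here — match a document's concepts and keywords against an indexed image collection, then render the hits as an "inline" gallery — can be sketched as follows. The index records, scoring, and markup are simplified illustrations, not the GoldMiner engine or its API.

```python
# Hypothetical image index: (thumbnail URL, indexed keywords)
IMAGE_INDEX = [
    ("thumb/pneumothorax1.jpg", {"pneumothorax", "chest", "radiograph"}),
    ("thumb/femur-fx.jpg", {"femur", "fracture"}),
]

def gallery_html(doc_keywords, index, limit=3):
    """Score images by keyword overlap and emit an inline thumbnail gallery."""
    scored = [(len(doc_keywords & kw), url) for url, kw in index]
    hits = [url for score, url in sorted(scored, reverse=True) if score > 0]
    imgs = "".join(f'<img src="{u}">' for u in hits[:limit])
    return f'<div class="inline-gallery">{imgs}</div>'

html = gallery_html({"pneumothorax", "radiograph"}, IMAGE_INDEX)
print(html)
```

In the deployed system each thumbnail additionally links to the full-size image at its original site, and retrieval happens on demand as the page renders, so the gallery tracks the document without manual curation.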
Desiderata for an authoritative Representation of MeSH in RDF.
Winnenburg, Rainer; Bodenreider, Olivier
2014-01-01
The Semantic Web provides a framework for the integration of resources on the web, which facilitates information integration and interoperability. RDF is the main representation format for Linked Open Data (LOD). However, datasets are not always made available in RDF by their producers and the Semantic Web community has had to convert some of these datasets to RDF in order for these datasets to participate in the LOD cloud. As a result, the LOD cloud sometimes contains outdated, partial and even inaccurate RDF datasets. We review the LOD landscape for one of these resources, MeSH, and analyze the characteristics of six existing representations in order to identify desirable features for an authoritative version, for which we create a prototype. We illustrate the suitability of this prototype on three common use cases. NLM intends to release an authoritative representation of MeSH in RDF (beta version) in the Fall of 2014. PMID:25954433
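To make concrete what "MeSH in RDF" looks like, the sketch below emits N-Triples for a descriptor. The base URI and property names are illustrative assumptions, and the second triple's vocabulary term is an example only; the authoritative NLM representation may differ.

```python
def ntriple(s, p, o):
    """Serialize one triple in N-Triples syntax (literal vs. URI object)."""
    obj = f'"{o}"' if not o.startswith("http") else f"<{o}>"
    return f"<{s}> <{p}> {obj} ."

MESH = "http://id.nlm.nih.gov/mesh/"   # assumed base URI, for illustration
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = [
    # D009369 is the MeSH descriptor for "Neoplasms"
    ntriple(MESH + "D009369", RDFS_LABEL, "Neoplasms"),
    # class name below is an illustrative vocabulary term
    ntriple(MESH + "D009369", RDF_TYPE, MESH + "vocab#TopicalDescriptor"),
]
print("\n".join(triples))
```

The desiderata the paper discusses (freshness, completeness, accuracy) are exactly the properties lost when third parties run conversions like this one ad hoc, which is the argument for a producer-maintained, authoritative RDF release.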
Finding gene regulatory network candidates using the gene expression knowledge base.
Venkatesan, Aravind; Tripathi, Sushil; Sanz de Galdeano, Alejandro; Blondé, Ward; Lægreid, Astrid; Mironov, Vladimir; Kuiper, Martin
2014-12-10
Network-based approaches for the analysis of large-scale genomics data have become well established. Biological networks provide a knowledge scaffold against which the patterns and dynamics of 'omics' data can be interpreted. The background information required for the construction of such networks is often dispersed across a multitude of knowledge bases in a variety of formats. The seamless integration of this information is one of the main challenges in bioinformatics. The Semantic Web offers powerful technologies for the assembly of integrated knowledge bases that are computationally comprehensible, thereby providing a potentially powerful resource for constructing biological networks and network-based analysis. We have developed the Gene eXpression Knowledge Base (GeXKB), a semantic web technology based resource that contains integrated knowledge about gene expression regulation. To affirm the utility of GeXKB we demonstrate how this resource can be exploited for the identification of candidate regulatory network proteins. We present four use cases that were designed from a biological perspective in order to find candidate members relevant for the gastrin hormone signaling network model. We show how a combination of specific query definitions and additional selection criteria derived from gene expression data and prior knowledge concerning candidate proteins can be used to retrieve a set of proteins that constitute valid candidates for regulatory network extensions. Semantic web technologies provide the means for processing and integrating various heterogeneous information sources. The GeXKB offers biologists such an integrated knowledge resource, allowing them to address complex biological questions pertaining to gene expression. This work illustrates how GeXKB can be used in combination with gene expression results and literature information to identify new potential candidates that may be considered for extending a gene regulatory network.
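The selection workflow the abstract describes — a knowledge-base query yields candidate regulators, which are then filtered by gene expression evidence and prior-knowledge criteria — can be sketched offline. The rows below stand in for SPARQL results; the gene names, scores, and threshold are illustrative, not GeXKB output.

```python
# Pretend rows returned by a GeXKB-style query for regulators of a target
query_results = [
    {"protein": "TF_A", "regulates": "GAST"},
    {"protein": "TF_B", "regulates": "GAST"},
    {"protein": "TF_C", "regulates": "OTHER"},
]

# Hypothetical expression scores from the analyst's own assay
expressed = {"TF_A": 4.2, "TF_B": 0.3}

def candidates(rows, expression, target, min_level=1.0):
    """Keep query hits for `target` that also pass the expression cutoff."""
    return sorted(
        r["protein"] for r in rows
        if r["regulates"] == target
        and expression.get(r["protein"], 0.0) >= min_level)

print(candidates(query_results, expressed, "GAST"))  # ['TF_A']
```

The division of labor mirrors the paper's use cases: the semantic query encodes the biological question (who can regulate gastrin signaling?), while local data narrows the answer to candidates that are plausible in the analyst's own experimental context.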
Integrated web visualizations for protein-protein interaction databases.
Jeanquartier, Fleur; Jean-Quartier, Claire; Holzinger, Andreas
2015-06-16
Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has produced a great number of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. There are numerous and huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. We selected M=10 out of N=53 resources supporting visualization and tested them against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality as well as the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via web. The supplementary table can be accessed on http://tinyurl.com/PPI-DB-Comparison-2015. Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interactions. Study results underline the necessity for further enhancements of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive features and visualization maturity.
ERIC Educational Resources Information Center
Becker, Bernd W.
2010-01-01
The author has discussed the Multimedia Educational Resource for Teaching and Online Learning site, MERLOT, in a recent Electronic Roundup column. In this article, he discusses an entirely new Web page development tool that MERLOT has added for its members. The new tool is called the MERLOT Content Builder and is directly integrated into the…
Webquests in Social Studies Education
ERIC Educational Resources Information Center
Vanguri, Pradeep R.; Sunal, Cynthia Szymanski; Wilson, Elizabeth K.; Wright, Vivian H.
2004-01-01
WebQuests provide the opportunity to combine technology with educational concepts and to incorporate inquiry-based learning. WebQuests also have the ability to integrate on-line resources with student-centered, activity-based learning. Three courses in the College of Education at The University of Alabama and at West Virginia University…
[Research on tumor information grid framework].
Zhang, Haowei; Qin, Zhu; Liu, Ying; Tan, Jianghao; Cao, Haitao; Chen, Youping; Zhang, Ke; Ding, Yuqing
2013-10-01
In order to realize tumor disease information sharing and unified management, we utilized grid technology to effectively integrate the data and software resources distributed across various medical institutions, making these heterogeneous resources consistent and interoperable in both semantics and syntax. This article describes the tumor grid framework, in which service types are described in Web Service Description Language (WSDL) and XML Schema Definition (XSD); clients use the serialized documents to operate on the distributed resources. Service objects can be modeled in Unified Modeling Language (UML) as middleware to create application programming interfaces. All of the grid resources are registered in the index and released in the form of Web Services based on the Web Services Resource Framework (WSRF). Using the system we can build a multi-center, large-sample, networked tumor disease resource sharing framework to improve the level of development in medical scientific research institutions and the patients' quality of life.
National Vulnerability Database (NVD)
National Institute of Standards and Technology Data Gateway
National Vulnerability Database (NVD) (Web, free access) NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard.
DOT National Transportation Integrated Search
2012-02-01
Many manuals, handbooks and web resources exist that provide guidance on planning for and designing bicycle and pedestrian facilities. However few of these resources emphasize program and infrastructure characteristics most desired by current (and po...
Job submission and management through web services: the experience with the CREAM service
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Fina, S. D.; Ronco, S. D.; Dorigo, A.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Sgaravatto, M.; Verlato, M.; Zangrando, L.; Corvo, M.; Miccio, V.; Sciaba, A.; Cesini, D.; Dongiovanni, D.; Grandi, C.
2008-07-01
Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution and Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs to a Local Resource Management System. We developed a special component, called ICE (Interface to CREAM Environment) to integrate CREAM in gLite. ICE transfers job submissions and cancellations from the Workload Management System, allowing users to manage CREAM jobs from the gLite User Interface. This paper describes some recent studies aimed at assessing the performance and reliability of CREAM and ICE; those tests have been performed as part of the acceptance tests for integration of CREAM and ICE in gLite. We also discuss recent work towards enhancing CREAM with a BES and JSDL compliant interface.
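A client on the submission path described above serializes a job description that the service endpoint accepts. The sketch below builds a simplified, namespace-free stand-in for a JSDL-style document with the standard library; it does not call the actual CREAM interface, whose operations and schemas are defined by the gLite middleware.

```python
import xml.etree.ElementTree as ET

def job_description(executable, args):
    """Build a minimal JSDL-like job description (illustrative layout only)."""
    job = ET.Element("JobDefinition")
    app = ET.SubElement(job, "Application")
    ET.SubElement(app, "Executable").text = executable
    for a in args:
        ET.SubElement(app, "Argument").text = a
    return ET.tostring(job, encoding="unicode")

xml_doc = job_description("/bin/echo", ["hello", "grid"])
print(xml_doc)
```

In a real deployment this document would be sent over an authenticated web service call, and a component like ICE would relay such submissions from the Workload Management System to the CREAM endpoint.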
Web Pages: An Effective Method of Providing CAI Resource Material in Histology.
ERIC Educational Resources Information Center
McLean, Michelle
2001-01-01
Presents research that introduces computer-aided instruction (CAI) resource material as an integral part of the second-year histology course at the University of Natal Medical School. Describes the ease with which this software can be developed, using limited resources and available skills, while providing students with valuable learning…
Kim, Changkug; Park, Dongsuk; Seol, Youngjoo; Hahn, Jangho
2011-01-01
The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web-based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage.
ERIC Educational Resources Information Center
Jackson, Mary E.
2002-01-01
Explains portals as tools that gather a variety of electronic information resources, including local library resources, into a single Web page. Highlights include cross-database searching; integration with university portals and course management software; the ARL (Association of Research Libraries) Scholars Portal Initiative; and selected vendors…
Khan, Mohd Shoaib; Gupta, Amit Kumar; Kumar, Manoj
2016-01-01
To develop a computational resource for viral epigenomic methylation profiles from diverse diseases. Methylation patterns of Epstein-Barr virus and hepatitis B virus genomic regions are provided through a web platform developed using the open-source Linux-Apache-MySQL-PHP (LAMP) bundle together with the programming and scripting languages HTML, JavaScript and Perl. A comprehensive and integrated web resource, ViralEpi v1.0, is developed, providing a well-organized compendium of methylation events and statistical analyses associated with several diseases. Additionally, it also facilitates a 'Viral EpiGenome Browser' for a user-friendly browsing experience using the JavaScript-based JBrowse. This web resource would be helpful for the research community engaged in studying epigenetic biomarkers for appropriate prognosis and diagnosis of diseases and their various stages.
Discovering, Indexing and Interlinking Information Resources
Celli, Fabrizio; Keizer, Johannes; Jaques, Yves; Konstantopoulos, Stasinos; Vudragović, Dušan
2015-01-01
The social media revolution is having a dramatic effect on the world of scientific publication. Scientists now publish their research interests, theories and outcomes across numerous channels, including personal blogs and other thematic web spaces where ideas, activities and partial results are discussed. Accordingly, information systems that facilitate access to scientific literature must learn to cope with this valuable and varied data, evolving to make this research easily discoverable and available to end users. In this paper we describe the incremental process of discovering web resources in the domain of agricultural science and technology. Making use of Linked Open Data methodologies, we interlink a wide array of custom-crawled resources with the AGRIS bibliographic database in order to enrich the user experience of the AGRIS website. We also discuss the SemaGrow Stack, a query federation and data integration infrastructure used to estimate the semantic distance between crawled web resources and AGRIS. PMID:26834982
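The interlinking step above estimates "semantic distance" between a crawled web resource and an AGRIS bibliographic record. One simple instance of that idea is cosine similarity over term-frequency vectors, sketched below; the SemaGrow Stack's actual method is not specified here, and the example texts are made up.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between term-frequency vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

record = "rice blast resistance gene expression"
page = "blog post on rice blast resistance in upland rice"
print(round(cosine(record, page), 2))
```

A pipeline like the one described would compute such scores between each crawled page and candidate bibliographic records, keeping links above a threshold as Linked Open Data assertions that enrich the AGRIS website.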
A grid-enabled web service for low-resolution crystal structure refinement.
O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr
2012-03-01
Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.
A Semantic Sensor Web for Environmental Decision Support Applications
Gray, Alasdair J. G.; Sadler, Jason; Kit, Oles; Kyzirakos, Kostis; Karpathiotakis, Manos; Calbimonte, Jean-Paul; Page, Kevin; García-Castro, Raúl; Frazer, Alex; Galpin, Ixent; Fernandes, Alvaro A. A.; Paton, Norman W.; Corcho, Oscar; Koubarakis, Manolis; De Roure, David; Martinez, Kirk; Gómez-Pérez, Asunción
2011-01-01
Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real-time, and visualising on screen and interacting with the data. Our approach makes extensive use of web service standards for querying and accessing data, and semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea-state around the coast of England. PMID:22164110
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Grange, B.; Morton, J. J.; Soule, S. A.; Carbotte, S. M.; Lehnert, K.
2016-12-01
The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remotely Operated Vehicle (ROV) Jason, and the Autonomous Underwater Vehicle (AUV) Sentry. These vehicles are deployed throughout the global oceans to acquire sensor data and physical samples for a variety of interdisciplinary science programs. As part of the EarthCube Integrative Activity Alliance Testbed Project (ATP), new web services were developed to improve access to existing online NDSF data and metadata resources. These services make use of tools and infrastructure developed by the Interdisciplinary Earth Data Alliance (IEDA) and enable programmatic access to metadata and data resources as well as the development of new service-driven user interfaces. The Alvin Frame Grabber and Jason Virtual Van enable the exploration of frame-grabbed images derived from video cameras on NDSF dives. Metadata available for each image includes time and vehicle position, data from environmental sensors, and scientist-generated annotations, and data are organized and accessible by cruise and/or dive. A new FrameGrabber web service and service-driven user interface were deployed to offer integrated access to these data resources through a single API and allow users to search across content curated in both systems. In addition, a new NDSF Dive Metadata web service and service-driven user interface were deployed to provide consolidated access to basic information about each NDSF dive (e.g., vehicle name, dive ID, location), which is important for linking distributed data resources curated in different data systems.
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic collection of bioinformatic resources (e.g., data and tools) available on the Internet today poses a major challenge for biologists, who must manage and visualize them, and for bioinformaticians, who need to rapidly create and execute in-silico experiments involving resources and activities spread across the WWW. Any framework that aims to integrate such resources as in a physical laboratory must tackle, and ideally handle in a transparent and uniform way, physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, consequently, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype with this objective. Several architectural features, such as being fully web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to act as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, and (iii) a transparent, automatic and distributed environment for correct experiment execution.
miRMaid: a unified programming interface for microRNA data resources
2010-01-01
Background MicroRNAs (miRNAs) are endogenous small RNAs that play a key role in post-transcriptional regulation of gene expression in animals and plants. The number of known miRNAs has increased rapidly over the years. The current release (version 14.0) of miRBase, the central online repository for miRNA annotation, comprises over 10,000 miRNA precursors from 115 different species. Furthermore, a large number of decentralized online resources are now available, each contributing important miRNA annotation and information. Results We have developed a software framework, designated here as miRMaid, with the goal of integrating miRNA data resources in a uniform web service interface that can be accessed and queried by researchers and, most importantly, by computers. miRMaid is built around data from miRBase and is designed to follow the official miRBase data releases. It exposes miRBase data as inter-connected web services. Third-party miRNA data resources can be modularly integrated as miRMaid plugins or they can loosely couple with miRMaid as individual entities in the World Wide Web. miRMaid is available as a public web service but is also easily installed as a local application. The software framework is freely available under the LGPL open source license for academic and commercial use. Conclusion miRMaid is an intuitive and modular software platform designed to unify miRBase and independent miRNA data resources. It enables miRNA researchers to computationally address complex questions involving the multitude of miRNA data resources. Furthermore, miRMaid constitutes a basic framework for further programming in which microRNA-interested bioinformaticians can readily develop their own tools and data sources. PMID:20074352
Teaching with the World Wide Web: Internet Resources for Educators in Illinois Schools.
ERIC Educational Resources Information Center
Barker, Bruce O.; Hall, Robert F.
1998-01-01
This report focuses on teaching with the World Wide Web. An introduction describes the Illinois State Board of Education's (ISBE's) efforts in urging local schools to integrate information technology into all aspects of their curriculum and in emphasizing the need for technology-focused staff development for Illinois teachers. ISBE supports…
The Study on Integrating WebQuest with Mobile Learning for Environmental Education
ERIC Educational Resources Information Center
Chang, Cheng-Sian; Chen, Tzung-Shi; Hsu, Wei-Hsiang
2011-01-01
This study is to demonstrate the impact of different teaching strategies on the learning performance of environmental education using quantitative methods. Students learned about resource recycling and classification through an instructional website based on the teaching tool of WebQuest. There were 103 sixth-grade students participating in this…
Karson, T. H.; Perkins, C.; Dixon, C.; Ehresman, J. P.; Mammone, G. L.; Sato, L.; Schaffer, J. L.; Greenes, R. A.
1997-01-01
A component-based health information resource, delivered on an intranet and the Internet using World Wide Web (WWW) technology, has been built to meet the needs of a large integrated delivery network (IDN). Called PartnerWeb, this resource is intended to provide a variety of health care and reference information to both practitioners and consumers/patients. The initial target audience has been providers. Content management for the numerous departments, divisions, and other organizational entities within the IDN is accomplished by a distributed authoring and editing environment. Structured entry into databases using a set of form tools facilitates consistency of information presentation, while empowering designated authors and editors in the various entities to be responsible for their own materials without requiring them to be technically skilled. Each form tool manages an encapsulated component. The output of each component can be a dynamically generated display on WWW platforms, or an appropriate interface to other presentation environments. The PartnerWeb project lays the foundation for both an internal and external communication infrastructure for the enterprise that can facilitate information dissemination. PMID:9357648
Benke, Arthur C
2018-03-31
The majority of food web studies are based on connectivity, top-down impacts, bottom-up flows, or trophic position (TP), and ecologists have argued for decades which is best. Rarely have any two been considered simultaneously. The present study uses a procedure that integrates the last three approaches based on taxon-specific secondary production and gut analyses. Ingestion flows are quantified to create a flow web and the same data are used to quantify TP for all taxa. An individual predator's impacts also are estimated using the ratio of its ingestion (I) of each prey to prey production (P) to create an I/P web. This procedure was applied to 41 invertebrate taxa inhabiting submerged woody habitat in a southeastern U.S. river. A complex flow web starting with five basal food resources had 462 flows >1 mg·m⁻²·yr⁻¹, providing far more information than a connectivity web. Total flows from basal resources to primary consumers/omnivores were dominated by allochthonous amorphous detritus and ranged from 1 to >50,000 mg·m⁻²·yr⁻¹. Most predator-prey flows were much lower (<50 mg·m⁻²·yr⁻¹), but some were >1,000 mg·m⁻²·yr⁻¹. The I/P web showed that 83% of individual predator impacts were weak (<10%), whereas total predator impacts were often strong (e.g., 35% of prey sustained an impact >90%). Quantitative estimates of TP ranged from 2 to 3.7, contrasting sharply with seven integer-based trophic levels based on longest feeding chain. Traditional omnivores (TP = 2.4-2.9) played an important role by consuming more prey and exerting higher impacts on primary consumers than strict predators (TP ≥ 3). This study illustrates how simultaneous quantification of flow pathways, predator impacts, and TP together provide an integrated characterization of natural food webs. © 2018 by the Ecological Society of America.
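The two calculations this abstract combines, quantitative trophic position and the I/P predator-impact ratio, can be sketched as follows. The flow matrix, production values, and taxon names are hypothetical illustrations, not data from the study.

```python
# Sketch of the I/P (impact) and trophic-position calculations described above.
# Flows, production values, and taxa are hypothetical, not data from the study.

flows = {  # ingestion flows in mg·m⁻²·yr⁻¹: (consumer, resource) -> flow
    ("grazer", "detritus"): 5000.0,
    ("predator", "grazer"): 400.0,
}
production = {"grazer": 900.0, "predator": 120.0}  # secondary production
basal = {"detritus"}  # basal resources are assigned trophic position 1

def trophic_position(taxon):
    """TP = 1 + diet-weighted mean TP of the taxon's food items."""
    if taxon in basal:
        return 1.0
    diet = {r: f for (c, r), f in flows.items() if c == taxon}
    total = sum(diet.values())
    return 1.0 + sum(f / total * trophic_position(r) for r, f in diet.items())

def impact(predator, prey):
    """I/P ratio: predator ingestion of a prey relative to prey production."""
    return flows[(predator, prey)] / production[prey]

print(trophic_position("predator"))     # 3.0: two levels above detritus
print(impact("predator", "grazer"))     # ~0.44 of grazer production consumed
```

With a single food chain the TP values come out as integers; mixed diets (omnivory) yield the fractional positions (e.g., 2.4-2.9) the study reports.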
Functional Requirements for Information Resource Provenance on the Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCusker, James P.; Lebo, Timothy; Graves, Alvaro
We provide a means to formally explain the relationship between HTTP URLs and the representations returned when they are requested. According to existing World Wide Web architecture, the URL serves as an identifier for a semiotic referent while the document returned via HTTP serves as a representation of the same referent. This begins with two sides of a semiotic triangle; the third side is the relationship between the URL and the representation received. We complete this description by extending the library science resource model Functional Requirements for Bibliographic Records (FRBR) with cryptographic message and content digests to create a Functional Requirements for Information Resources (FRIR). We show how applying the FRIR model to HTTP GET and POST transactions disambiguates the many relationships between a given URL and all representations received from its request, provides fine-grained explanations that are complementary to existing explanations of web resources, and integrates easily into the emerging W3C provenance standard.
Shaping the Electronic Library--The UW-Madison Approach.
ERIC Educational Resources Information Center
Dean, Charles W., Ed.; Frazier, Ken; Pope, Nolan F.; Gorman, Peter C.; Dentinger, Sue; Boston, Jeanne; Phillips, Hugh; Daggett, Steven C.; Lundquist, Mitch; McClung, Mark; Riley, Curran; Allan, Craig; Waugh, David
1998-01-01
This special theme section describes the University of Wisconsin-Madison's experience building its Electronic Library. Highlights include integrating resources and services; the administrative framework; the public electronic library, including electronic publishing capability and access to World Wide Web-based and other electronic resources;…
Scholarly Context Not Found: One in Five Articles Suffers from Reference Rot
Klein, Martin; Van de Sompel, Herbert; Sanderson, Robert; Shankar, Harihar; Balakireva, Lyudmila; Zhou, Ke; Tobin, Richard
2014-01-01
The emergence of the web has fundamentally affected most aspects of information communication, including scholarly communication. The immediacy that characterizes publishing information to the web, as well as accessing it, allows for a dramatic increase in the speed of dissemination of scholarly knowledge. But the transition from a paper-based to a web-based scholarly communication system also poses challenges. In this paper, we focus on reference rot, the combination of link rot and content drift to which references to web resources included in Science, Technology, and Medicine (STM) articles are subject. We investigate the extent to which reference rot impacts the ability to revisit the web context that surrounds STM articles some time after their publication. We do so on the basis of a vast collection of articles from three corpora that span publication years 1997 to 2012. For over one million references to web resources extracted from over 3.5 million articles, we determine whether the HTTP URI is still responsive on the live web and whether web archives contain an archived snapshot representative of the state the referenced resource had at the time it was referenced. We observe that the fraction of articles containing references to web resources is growing steadily over time. We find one out of five STM articles suffering from reference rot, meaning it is impossible to revisit the web context that surrounds them some time after their publication. When only considering STM articles that contain references to web resources, this fraction increases to seven out of ten. We suggest that, in order to safeguard the long-term integrity of the web-based scholarly record, robust solutions to combat the reference rot problem are required. In conclusion, we provide a brief insight into the directions that are explored in this regard in the context of the Hiberlink project. PMID:25541969
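The two per-reference checks described here (is the cited URI still responsive on the live web, and does a web archive hold a snapshot near the citing date?) can be sketched as follows. The snippet uses the Internet Archive's public Wayback availability API; the example URI is a hypothetical placeholder and error handling is minimal.

```python
# Sketch of the two reference-rot checks described above: a live-web probe and
# a Wayback Machine availability lookup. Example URI is a placeholder.

import json
import urllib.parse
import urllib.request

def availability_url(uri: str, timestamp: str) -> str:
    """Build a Wayback availability query for a URI near a YYYYMMDD timestamp."""
    qs = urllib.parse.urlencode({"url": uri, "timestamp": timestamp})
    return "https://archive.org/wayback/available?" + qs

def closest_snapshot(response_text: str):
    """Extract the closest archived snapshot URL from an availability response."""
    data = json.loads(response_text)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

def is_live(uri: str, timeout: float = 10.0) -> bool:
    """True if the URI still responds with a 2xx status on the live web."""
    try:
        with urllib.request.urlopen(uri, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

if __name__ == "__main__":
    uri = "http://example.com/dataset"  # hypothetical referenced resource
    print(availability_url(uri, "20120101"))
    # A reference "rots" when the live check fails and no snapshot is found.
```

Content drift (the page responds but no longer shows what was cited) requires comparing the live representation against the archived snapshot, which this sketch does not attempt.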
MycoCosm, an Integrated Fungal Genomics Resource
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shabalov, Igor; Grigoriev, Igor
2012-03-16
MycoCosm is a web-based interactive fungal genomics resource, which was first released in March 2010 in response to an urgent call from the fungal community for integration of all fungal genomes and analytical tools in one place (Pan-fungal data resources meeting, Feb 21-22, 2010, Alexandria, VA). MycoCosm integrates genomics data and analysis tools to navigate through over 100 fungal genomes sequenced at JGI and elsewhere. This resource allows users to explore fungal genomes in the context of both genome-centric analysis and comparative genomics, and promotes user community participation in data submission, annotation and analysis. MycoCosm has over 4,500 unique visitors/month or 35,000+ visitors/year as well as hundreds of registered users contributing their data and expertise to this resource. Its scalable architecture allows significant expansion of the data expected from the JGI Fungal Genomics Program, its users, and integration with external resources used by the fungal community.
Taverna: a tool for building and running workflows of services
Hull, Duncan; Wolstencroft, Katy; Stevens, Robert; Goble, Carole; Pocock, Mathew R.; Li, Peter; Oinn, Tom
2006-01-01
Taverna is an application that eases the use and integration of the growing number of molecular biology tools and databases available on the web, especially web services. It allows bioinformaticians to construct workflows or pipelines of services to perform a range of different analyses, such as sequence analysis and genome annotation. These high-level workflows can integrate many different resources into a single analysis. Taverna is available freely under the terms of the GNU Lesser General Public License (LGPL) from . PMID:16845108
Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo
2011-01-01
The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage. PMID:21887015
jORCA: easily integrating bioinformatics Web Services.
Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo
2010-02-15
Web services technology is becoming the option of choice to deploy bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness of this approach is that various Web Services differ in their definition and invocation protocols, as well as their communication and data formats, and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills, featuring double-click invocation of services in conjunction with advanced execution control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus a file-based browsing style and organization of favourite tools. The integration of bioinformatics Web Services is made easier to support a wider range of users.
SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.
Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T
2009-09-23
SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). 
SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantics, and semantic discovery) while addressing three technology limitations common in distributed service systems: (i) the fatal mutability of traditional interfaces, (ii) the rigidity and fragility of static subsumption hierarchies, and (iii) the confounding of content, structure, and presentation. SSWAP is novel in establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, discovery servers to offer semantically rich search engines, clients to discover and invoke those resources, and providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs.
Plotview Software For Retrieving Plot-Level Imagery and GIS Data Over The Web
Ken Boss
2001-01-01
The Minnesota Department of Natural Resources Division of Forestry Resource Assessment office has been cooperating with both the Forest Service's FIA and the Natural Resources Conservation Service's NRI inventory programs in researching methods to more tightly integrate the two programs. One aspect of these ongoing efforts has been to develop a prototype intranet...
Resource Discovery within the Networked "Hybrid" Library.
ERIC Educational Resources Information Center
Leigh, Sally-Anne
This paper focuses on the development, adoption, and integration of resource discovery, knowledge management, and/or knowledge sharing interfaces such as interactive portals, and the use of the library's World Wide Web presence to increase the availability and usability of information services. The introduction addresses changes in library…
Integration of Problem-Based Learning and Web-Based Multimedia to Enhance Soil Management Course
NASA Astrophysics Data System (ADS)
Strivelli, R.; Krzic, M.; Crowley, C.; Dyanatkar, S.; Bomke, A.; Simard, S.; Grand, S.
2012-04-01
In an attempt to address declining enrolment in soil science programs and the changing learning needs of 21st century students, several universities in North America and around the world have re-organized their soil science curriculum and adopted innovative educational approaches and web-based teaching resources. At the University of British Columbia, Canada, an interdisciplinary team set out to integrate teaching approaches to address this trend. The objective of this project was to develop an interactive web-based teaching resource, which combined a face-to-face problem-based learning (PBL) case study with multimedia to illustrate the impacts of three land-uses on soil transformation and quality. The Land Use Impacts (LUI) tool (http://soilweb.landfood.ubc.ca/luitool/) was a collaborative and concentrated effort to maximize the advantages of two educational approaches: (1) the web's interactivity, flexibility, adaptability and accessibility, and (2) PBL's ability to foster an authentic learning environment, encourage group work and promote the application of core concepts. The design of the LUI case study was guided by Herrington's development principles for web-based authentic learning. The LUI tool presented students with rich multimedia (streaming videos, text, data, photographs, maps, and weblinks) and real world tasks (site assessment and soil analysis) to encourage students to utilize knowledge of soil science in collaborative problem-solving. Preliminary student feedback indicated that the LUI tool effectively conveyed case study objectives and was appealing to students. The resource is intended primarily for students enrolled in an upper level undergraduate/graduate university course titled Sustainable Soil Management but it is flexible enough to be adapted to other natural resource courses. Project planning and an interactive overview of the tool will be given during the presentation.
NASA Astrophysics Data System (ADS)
Dias, S. B.; Yang, C.; Li, Z.; Xia, J.; Liu, K.; Gui, Z.; Li, W.
2013-12-01
Global climate change has become one of the biggest concerns for humankind in the 21st century due to its broad impacts on society and ecosystems across the world. The Arctic has been observed to be one of the regions most vulnerable to climate change. In order to understand the impacts of climate change on the natural environment, ecosystems, biodiversity and other aspects of the Arctic region, and thus to better support the planning and decision-making process, cross-disciplinary research is required to monitor and analyze changes in Arctic regions such as water, sea level, biodiversity and so on. Conducting such research demands the efficient utilization of various geospatially referenced data, web services and information related to the Arctic region. In this paper, we propose a cloud-enabled and service-oriented Spatial Web Portal (SWP) to support the discovery, integration and utilization of Arctic-related geospatial resources, serving as a building block of polar CI. This SWP leverages the following techniques: 1) a hybrid searching mechanism combining centralized local search, distributed catalogue search and specialized Internet search for effectively discovering Arctic data and web services from multiple sources; 2) a service-oriented quality-enabled framework for seamless integration and utilization of various geospatial resources; and 3) a cloud-enabled parallel spatial index building approach to facilitate near-real-time resource indexing and searching. A proof-of-concept prototype is developed to demonstrate the feasibility of the proposed SWP, using an example of analyzing Arctic snow cover change over the past 50 years.
AMBIT RESTful web services: an implementation of the OpenTox application programming interface.
Jeliazkova, Nina; Jeliazkov, Vedrin
2011-05-16
The AMBIT web services package is one of the several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; and iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and an arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows users to easily run predictions without installing any software, as well as to share datasets and models online.
The downloadable web application allows researchers to set up an arbitrary number of service instances for specific purposes and at suitable locations. These services could be used as a distributed framework for processing of resource-intensive tasks and data sharing or in a fully independent way, according to the specific needs. The advantage of exposing the functionality via the OpenTox API is seamless interoperability, not only within a single web application, but also in a network of distributed services. Last, but not least, the services provide a basis for building web mashups, end user applications with friendly GUIs, as well as embedding the functionalities in existing workflow systems.
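The "read data from a web address" paradigm described above can be sketched as a plain HTTP GET with content negotiation for an RDF representation; the host and compound path below are hypothetical placeholders, not actual AMBIT addresses.

```python
# Minimal sketch of the REST paradigm summarized above: every compound or
# dataset has its own web address, and its RDF representation is retrieved
# with a plain HTTP GET. Host and path below are hypothetical placeholders.

import urllib.request

def rdf_request(resource_uri: str) -> urllib.request.Request:
    """GET request asking the service for an RDF (Turtle) representation."""
    return urllib.request.Request(
        resource_uri,
        headers={"Accept": "text/turtle"},  # content negotiation selects RDF
        method="GET",
    )

def fetch_rdf(resource_uri: str, timeout: float = 10.0) -> str:
    """'Read data from a web address': fetch the resource's RDF serialization."""
    with urllib.request.urlopen(rdf_request(resource_uri), timeout=timeout) as r:
        return r.read().decode("utf-8")

if __name__ == "__main__":
    # Hypothetical compound address on an OpenTox-style service:
    req = rdf_request("https://ambit.example.org/compound/42")
    print(req.get_header("Accept"))  # text/turtle
```

The "write to a web address" half of the paradigm would be the analogous POST or PUT of a dataset to a service URL, which the abstract describes but is omitted here.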
AMBIT RESTful web services: an implementation of the OpenTox application programming interface
2011-01-01
The AMBIT web services package is one of the several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing a unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations. The AMBIT web services package has been developed as an extension of AMBIT modules, adding the ability to create (Quantitative) Structure-Activity Relationship (QSAR) models and providing an OpenTox API compliant interface. The representation of data and processing resources in W3C Resource Description Framework facilitates integrating the resources as Linked Data. By uploading datasets with chemical structures and arbitrary set of properties, they become automatically available online in several formats. The services provide unified interfaces to several descriptor calculation, machine learning and similarity searching algorithms, as well as to applicability domain and toxicity prediction models. All Toxtree modules for predicting the toxicological hazard of chemical compounds are also integrated within this package. The complexity and diversity of the processing is reduced to the simple paradigm "read data from a web address, perform processing, write to a web address". The online service allows to easily run predictions, without installing any software, as well to share online datasets and models. 
The downloadable web application allows researchers to set up an arbitrary number of service instances for specific purposes and at suitable locations. These services can be used as a distributed framework for processing resource-intensive tasks and sharing data, or in a fully independent way, according to specific needs. The advantage of exposing the functionality via the OpenTox API is seamless interoperability, not only within a single web application but also in a network of distributed services. Last, but not least, the services provide a basis for building web mashups and end-user applications with friendly GUIs, as well as for embedding the functionalities in existing workflow systems. PMID:21575202
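The closing paradigm, "read data from a web address, perform processing, write to a web address", can be sketched as plain HTTP request construction. This is a minimal illustration of the OpenTox-style REST idea; the host, paths, and parameter names below are placeholders, not actual AMBIT endpoints:

```python
# Sketch of the OpenTox-style REST paradigm: every compound, dataset, or model
# is a web address whose RDF representation is fetched with an Accept header.
# The host and paths below are placeholders, not actual AMBIT endpoints.

def rdf_request(resource_uri):
    """Build the (method, uri, headers) triple for fetching a resource as RDF."""
    return ("GET", resource_uri, {"Accept": "application/rdf+xml"})

def prediction_request(model_uri, dataset_uri):
    """Applying a model is just a POST to the model's address, pointing it at
    the dataset's address; the result appears at a new address."""
    return ("POST", model_uri, {"dataset_uri": dataset_uri})

base = "https://example.org/ambit"      # placeholder service root
compound = f"{base}/compound/42"        # hypothetical compound identifier

method, uri, headers = rdf_request(compound)
print(method, uri, headers["Accept"])
```

An actual client would hand these triples to an HTTP library; the point is that compounds, datasets, and models share one uniform addressing scheme.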
Global Conflicts On-line: Technoliteracy and Developing an Internet-Based Conflict Archive.
ERIC Educational Resources Information Center
Tuathail, Gearoid O; McCormack, Derek
1998-01-01
Reflects on the experience of teaching a large undergraduate course on the geography of global conflict. A World Wide Web site featuring an archive of conflicts around the globe was designed and integrated into the course. Discusses issues concerning the design and maintenance of the Web site and its usefulness as a learning resource. (MJP)
The EPA CompTox Chemistry Dashboard is a web-based application providing access to a set of data resources provided by the National Center for Computational Toxicology. Sitting on a foundation of chemistry data for ~750,000 chemical substances, the application integrates bioassay s...
ERIC Educational Resources Information Center
Chen, Yi-Cheng; Lin, Yi-Chien; Yeh, Ron Chuen; Lou, Shi-Jer
2013-01-01
With the accelerated progress of information and communication technologies (ICT), web-based instruction (WBI) is becoming a popular method for distributing and delivering education resources. This study was conducted to explore what factors influence college students' behavioral intentions to utilize WBI systems. To achieve this aim, a WBI system was…
Dinov, Ivo D; Rubin, Daniel; Lorensen, William; Dugan, Jonathan; Ma, Jeff; Murphy, Shawn; Kirschner, Beth; Bug, William; Sherman, Michael; Floratos, Aris; Kennedy, David; Jagadish, H V; Schmidt, Jeanette; Athey, Brian; Califano, Andrea; Musen, Mark; Altman, Russ; Kikinis, Ron; Kohane, Isaac; Delp, Scott; Parker, D Stott; Toga, Arthur W
2008-05-28
The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. 
We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.
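The searchable meta-data repository with its three resource types can be illustrated with a small, hypothetical registry. The records, field names, and keywords below are invented for illustration and are not the actual iTools schema:

```python
# Minimal sketch of a resource meta-data registry in the spirit of iTools:
# three resource types (data, software tools, web-services) with searchable
# meta-data. Records and field names are illustrative, not the iTools schema.

resources = [
    {"name": "GenomeDB",  "type": "data",        "keywords": {"genomics", "database"}},
    {"name": "AlignTool", "type": "software",    "keywords": {"alignment", "sequence"}},
    {"name": "BlastWS",   "type": "web-service", "keywords": {"alignment", "search"}},
]

def find(registry, rtype=None, keyword=None):
    """Filter the registry by resource type and/or meta-data keyword."""
    hits = registry
    if rtype is not None:
        hits = [r for r in hits if r["type"] == rtype]
    if keyword is not None:
        hits = [r for r in hits if keyword in r["keywords"]]
    return [r["name"] for r in hits]

print(find(resources, keyword="alignment"))   # -> ['AlignTool', 'BlastWS']
print(find(resources, rtype="data"))          # -> ['GenomeDB']
```

Classification and comparison across such a registry is what lets investigators, or programs using the machine interface, search and mine resource descriptions.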
ERIC Educational Resources Information Center
Bandara, H. M. N. Dilum
2012-01-01
Resource-rich computing devices, decreasing communication costs, and Web 2.0 technologies are fundamentally changing the way distributed applications communicate and collaborate. With these changes, we envision Peer-to-Peer (P2P) systems that will allow for the integration and collaboration of peers with diverse capabilities to a virtual community…
UBioLab: a web-laboratory for ubiquitous in-silico experiments.
Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo
2012-07-09
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available on the Internet today represents a big challenge for biologists, who must manage and visualize them, and for bioinformaticians, who want to rapidly create and execute in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming to integrate such resources as in a physical laboratory must tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The framework UBioLab has been designed and developed as a prototype with this objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, and (iii) a transparent, automatic and distributed environment for correct experiment executions.
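The co-existence of invocation interfaces (OGSA, SOAP, Java RMI) that must be handled "in a transparent and uniform way" is classically solved with an adapter layer: one call signature, one implementation per protocol. The classes below are stubs invented to illustrate the pattern, not UBioLab code:

```python
# Adapter-layer sketch: a workflow engine invokes activities through a single
# entry point, while per-protocol adapters hide invocation details. The
# adapters here return strings instead of performing real remote calls.

class SoapAdapter:
    def invoke(self, operation, payload):
        return f"SOAP:{operation}({payload})"     # stub for a Web Service call

class GridAdapter:
    def invoke(self, operation, payload):
        return f"OGSA:{operation}({payload})"     # stub for a Grid submission

ADAPTERS = {"soap": SoapAdapter(), "grid": GridAdapter()}

def run_activity(protocol, operation, payload):
    """Uniform entry point: the caller never sees protocol-specific details."""
    return ADAPTERS[protocol].invoke(operation, payload)

print(run_activity("soap", "blast", "ACGT"))   # -> SOAP:blast(ACGT)
```

Adding a new computational paradigm then means registering one more adapter, leaving workflow definitions untouched.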
Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows.
Sztromwasser, Pawel; Puntervoll, Pål; Petersen, Kjell
2011-07-26
Biological databases and computational biology tools are provided by research groups around the world, and made accessible on the Web. Combining these resources is a common practice in bioinformatics, but integration of heterogeneous and often distributed tools and datasets can be challenging. To date, this challenge has commonly been addressed in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable technique has been identified and proposed as the platform to tie bioinformatics resources together, namely Web Services. In the last decade Web Services have spread widely in bioinformatics and earned the title of recommended technology. However, in the era of high-throughput experimentation, a major concern regarding Web Services is their ability to handle large-scale data traffic. We propose a stream-like communication pattern for standard SOAP Web Services that enables efficient flow of large data traffic between a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy by comparing it with typical communication patterns on an example pipeline for genomic sequence annotation. The results show that data partitioning lowers the resource demands of services and increases their throughput, which in consequence allows in-silico experiments to be executed at genome scale using standard SOAP Web Services and workflows. As a proof of principle we annotated an RNA-seq dataset using a plain BPEL workflow engine.
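The data-partitioning strategy itself is simple to sketch: split the payload into service-sized chunks, invoke the service once per chunk, and reassemble the partial results. In this toy version, `annotate()` is a stand-in for a real SOAP annotation service, not the authors' pipeline:

```python
# Data-partitioning sketch: stream fixed-size chunks through a service and
# concatenate the partial results, instead of sending one huge message.

def annotate(chunk):
    """Stand-in for a SOAP annotation service call on one partition."""
    return chunk.lower()            # pretend 'annotation' = lowercasing

def partition(seq, size):
    """Split a large payload into service-sized chunks."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def run_partitioned(seq, size):
    """Send chunks through the service and reassemble the result."""
    return "".join(annotate(c) for c in partition(seq, size))

genome = "ACGT" * 6                  # toy 'genome-scale' payload (24 chars)
print(len(partition(genome, 5)))     # number of service calls made -> 5
print(run_partitioned(genome, 5))    # same result as one big call
```

Because each call carries only one chunk, peak memory at both orchestrator and service stays bounded by the chunk size rather than the dataset size.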
Flow Webs: Mechanism and Architecture for the Implementation of Sensor Webs
NASA Astrophysics Data System (ADS)
Gorlick, M. M.; Peng, G. S.; Gasster, S. D.; McAtee, M. D.
2006-12-01
The sensor web is a distributed, federated infrastructure much like its predecessors, the internet and the world wide web. It will be a federation of many sensor webs, large and small, under many distinct spans of control, that loosely cooperate and share information for many purposes. Realistically, it will grow piecemeal as distinct, individual systems are developed and deployed, some expressly built for a sensor web while many others were created for other purposes. Therefore, the architecture of the sensor web is of fundamental import, and architectural strictures that inhibit innovation, experimentation, sharing or scaling may prove fatal. Drawing upon the architectural lessons of the world wide web, we offer a novel system architecture, the flow web, that elevates flows, sequences of messages over a domain of interest and constrained in both time and space, to a position of primacy as a dynamic, real-time medium of information exchange for computational services. The flow web captures, in a single, uniform architectural style, the conflicting demands of the sensor web, including dynamic adaptation to changing conditions, ease of experimentation, rapid recovery from the failures of sensors and models, automated command and control, incremental development and deployment, and integration at multiple levels, in many cases at different times. Our conception of sensor webs, dynamic amalgamations of sensor webs each constructed within a flow web infrastructure, holds substantial promise for earth science missions in general, and for weather, air quality, and disaster management in particular. Flow webs are, by philosophy, design and implementation, a dynamic infrastructure that permits massive adaptation in real-time. Flows may be attached to and detached from services at will, even while information is in transit through the flow.
This concept, flow mobility, permits dynamic integration of earth science products and modeling resources in response to real-time demands. Flows are the connective tissue of flow webs—massive computational engines organized as directed graphs whose nodes are semi-autonomous components and whose edges are flows. The individual components of a flow web may themselves be encapsulated flow webs. In other words, a flow web subgraph may be presented to a yet larger flow web as a single, seamless component. Flow webs, at all levels, may be edited and modified while still executing. Within a flow web individual components may be added, removed, started, paused, halted, reparameterized, or inspected. The topology of a flow web may be changed at will. Thus, flow webs exhibit an extraordinary degree of adaptivity and robustness as they are explicitly designed to be modified on the fly, an attribute well suited for dynamic model interactions in sensor webs. We describe our concept for a sensor web, implemented as a flow web, in the context of a wildfire disaster management system for the southern California region. Comprehensive wildfire management requires cooperation among multiple agencies. Flow webs allow agencies to share resources in exactly the manner they choose. We will explain how to employ flow webs and agents to integrate satellite remote sensing data, models, in-situ sensors, UAVs and other resources into a sensor web that interconnects organizations and their disaster management tools in a manner that simultaneously preserves their independence and builds upon the individual strengths of agency-specific models and data sources.
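A flow web as described, a directed graph of semi-autonomous components whose edges can be attached and detached while messages keep moving, can be illustrated with a toy graph. This is an illustration of the architectural idea only, not the authors' implementation:

```python
# Toy flow web: components are message handlers, flows are directed edges,
# and the topology may be edited at runtime between message deliveries.

class FlowWeb:
    def __init__(self):
        self.edges = {}                 # component -> downstream components
        self.handlers = {}              # component -> message handler

    def add_component(self, name, handler):
        self.handlers[name] = handler
        self.edges.setdefault(name, [])

    def attach(self, src, dst):         # topology may change at will
        self.edges[src].append(dst)

    def detach(self, src, dst):
        self.edges[src].remove(dst)

    def emit(self, node, message):
        """Deliver a message to a component and propagate along its flows."""
        out = self.handlers[node](message)
        for nxt in self.edges[node]:
            self.emit(nxt, out)

log = []
web = FlowWeb()
web.add_component("sensor", lambda m: m + 1)              # e.g. calibration
web.add_component("model", lambda m: log.append(m) or m)  # record what arrives
web.attach("sensor", "model")
web.emit("sensor", 41)
print(log)                              # the model saw the calibrated reading
```

Detaching the `sensor -> model` flow and emitting again would leave `log` unchanged, which is the editability-while-executing property the abstract emphasizes.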
ERIC Educational Resources Information Center
Jenkins, Ann G.; Robin, Bernard R.
As educators increasingly integrate Web-based resources into their curriculum, there is a growing need for high quality, educationally relevant materials. This study evaluated the Bayou Bend Web site, the result of a collaboration between staff at the Museum of Fine Arts, Houston, Texas, and faculty and graduate students at the University of…
ERIC Educational Resources Information Center
Ehman, Lee H.
Modules to teach the appropriate integration of technology into social studies teaching were pilot-taught in a secondary social studies methods course. The seven modules emphasized the World Wide Web as a resource for teachers and students. Pre- and post-course surveys were conducted with the 24 students in the course. Both qualitative and…
The Need for Integration of Technology in K-12 School Settings in Kenya, Africa
ERIC Educational Resources Information Center
Momanyi, Lilian; Norby, RenaFaye; Strand, Sharon
2006-01-01
Many computer users around the world have access to the latest advances in technology and use of the World Wide Web (WWW or Web). However, for a variety of political, economic, and social reasons, some peoples of the world do not have access to these resources. The educational systems of developing countries have not completely missed the…
Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan
2015-01-01
Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
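The multi-stage workflow idea at the heart of JMS can be sketched as a sequential runner that feeds each stage's output to the next and tracks per-stage status, as a batch scheduler would. The stage names and status values are invented for illustration, not JMS's actual data model:

```python
# Minimal multi-stage workflow sketch in the spirit of JMS: stages run in
# order, each consuming the previous stage's output, with status tracking.

def run_workflow(stages, data):
    """Run (name, callable) stages sequentially, recording each stage's fate."""
    status = {}
    for name, stage in stages:
        try:
            data = stage(data)
            status[name] = "completed"
        except Exception:
            status[name] = "failed"     # stop the pipeline on first failure
            break
    return data, status

stages = [
    ("clean", str.strip),               # e.g. normalize raw input
    ("parse", str.split),               # e.g. tokenize into records
    ("count", len),                     # e.g. summarize
]
result, status = run_workflow(stages, "  one two three  ")
print(result, status)                   # -> 3 with all stages completed
```

In JMS proper, each stage would be submitted to the cluster's resource manager rather than called in-process, but the orchestration logic is of this shape.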
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources over HTTP or HTTPS. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
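The authentication and job-management calls that a RESTful HPC API of this kind exposes can be sketched as request construction: obtain a token, then send it on job-creation and monitoring calls. The host, paths, and token format below are placeholders, not the actual SCEAPI endpoints:

```python
# Sketch of a RESTful HPC API client in the SCEAPI style. build_request only
# composes what an HTTP client would send; no network traffic is performed.
# All endpoints, field names, and the token are hypothetical.

def build_request(method, path, token=None, body=None):
    """Compose an HTTPS request for a REST HPC API (placeholder host)."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"   # token-based auth
    return {"method": method, "url": "https://hpc.example.org" + path,
            "headers": headers, "body": body}

# Typical job lifecycle: authenticate, create/submit a job, poll its status.
login  = build_request("POST", "/auth/tokens", body={"user": "alice"})
create = build_request("POST", "/jobs", token="t0k3n",
                       body={"app": "atlas-sim", "cores": 64})
status = build_request("GET", "/jobs/1234", token="t0k3n")
print(create["headers"]["Authorization"], status["method"])
```

File transfer would follow the same pattern, with upload and download requests addressed to job-specific paths.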
Netbook - A Toolset in Support of a Collaborative Learning.
1997-01-31
Netbook is a software development research project being conducted for the DARPA Computer Aided Training Initiative (CEATI). As a part of the Smart...Navigators to Access and Integrated Resources (SNAIR) division of CEATI, Netbook concerns itself with the management of Internet resources. More...specifically, Netbook is a toolset that enables students, teachers, and administrators to navigate the World Wide Web, collect resources found there, index
SOCR: Statistics Online Computational Resource
Dinov, Ivo D.
2011-01-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, such as STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses, and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741
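A virtual probability experiment of the kind SOCR offers interactively can be sketched in a few lines: repeat a coin-flip experiment and compare the empirical proportion of heads with the theoretical value 0.5. This is an illustrative simulation, not SOCR's own (Java applet) code:

```python
# Virtual coin-flip experiment: as the number of flips grows, the empirical
# proportion of heads converges toward the theoretical probability 0.5.

import random

def coin_experiment(n_flips, seed=0):
    """Return the observed proportion of heads in n_flips seeded flips."""
    rng = random.Random(seed)               # seeded for reproducibility
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

for n in (10, 1000, 100000):
    print(n, round(coin_experiment(n), 3))  # proportion approaches 0.5
```

Seeing the proportion stabilize as trials accumulate is exactly the intuition-building that interactive experimentation tools aim for.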
Enrichment and Ranking of the YouTube Tag Space and Integration with the Linked Data Cloud
NASA Astrophysics Data System (ADS)
Choudhury, Smitashree; Breslin, John G.; Passant, Alexandre
The increase of personal digital cameras with video functionality and video-enabled camera phones has increased the amount of user-generated videos on the Web. People are spending more and more time viewing online videos as a major source of entertainment and "infotainment". Social websites allow users to assign shared free-form tags to user-generated multimedia resources, thus generating annotations for objects with a minimum amount of effort. Tagging allows communities to organise their multimedia items into browseable sets, but these tags may be poorly chosen and related tags may be omitted. Current techniques to retrieve, integrate and present this media to users are deficient and could be improved. In this paper, we describe a framework for semantic enrichment, ranking and integration of web video tags using Semantic Web technologies. Semantic enrichment of folksonomies can bridge the gap between the uncontrolled and flat structures typically found in user-generated content and structures provided by the Semantic Web. The enhancement of tag spaces with semantics has been accomplished through two major tasks: (1) a tag space expansion and ranking step; and (2) concept matching and integration with the Linked Data cloud. We have explored social, temporal and spatial contexts to enrich and extend the existing tag space. The resulting semantic tag space is modelled via a local graph based on co-occurrence distances for ranking. A ranked tag list is mapped and integrated with the Linked Data cloud through the DBpedia resource repository. Multi-dimensional context filtering for tag expansion means that tag ranking is much easier and it provides less ambiguous tag-to-concept matching.
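The co-occurrence-based ranking step can be sketched with a toy dataset: tags that co-occur with other tags across many videos rank higher. The weighting here is deliberately simplified to raw pair counts (the paper uses co-occurrence distances), and the tag sets are invented:

```python
# Tag ranking from co-occurrence: count, for each tag, how many co-occurrence
# pairs it participates in across a collection of videos, then rank by count.

from collections import Counter
from itertools import combinations

videos = [                                    # toy tag sets, one per video
    {"surf", "beach", "hawaii"},
    {"surf", "beach", "gopro"},
    {"surf", "hawaii"},
]

cooc = Counter()
for tags in videos:
    for a, b in combinations(sorted(tags), 2):
        cooc[a] += 1                          # each pair credits both tags
        cooc[b] += 1

ranked = [t for t, _ in cooc.most_common()]
print(ranked[0])                              # -> surf (co-occurs most)
```

In the full framework, the top-ranked tags would then be matched against DBpedia concepts for integration with the Linked Data cloud.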
Audiovisual Speech Web-Lab: an Internet teaching and research laboratory.
Gordon, M S; Rosenblum, L D
2001-05-01
Internet resources now enable laboratories to make full-length experiments available on line. A handful of existing web sites offer users the ability to participate in experiments and generate usable data. We have integrated this technology into a web site that also provides full discussion of the theoretical and methodological aspects of the experiments using text and simple interactive demonstrations. The content of the web site (http://www.psych.ucr.edu/avspeech/lab) concerns audiovisual speech perception and its relation to face perception. The site is designed to be useful for users of multiple interests and levels of expertise.
Poppenga, Sandra K.; Evans, Gayla; Gesch, Dean; Stoker, Jason M.; Queija, Vivian R.; Worstell, Bruce; Tyler, Dean J.; Danielson, Jeff; Bliss, Norman; Greenlee, Susan
2010-01-01
The mission of U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center Topographic Science is to establish partnerships and conduct research and applications that facilitate the development and use of integrated national and global topographic datasets. Topographic Science includes a wide range of research and applications that result in improved seamless topographic datasets, advanced elevation technology, data integration and terrain visualization, new and improved elevation derivatives, and development of Web-based tools. In cooperation with our partners, Topographic Science is developing integrated-science applications for mapping, national natural resource initiatives, hazards, and global change science. http://topotools.cr.usgs.gov/.
HR4EU--A Web-Portal for E-Learning of Croatian
ERIC Educational Resources Information Center
Filko, Matea; Farkaš, Daša; Hriberski, Diana
2016-01-01
In this paper, we present the HR4EU--a web portal for e-learning of Croatian. HR4EU is the first portal that offers Croatian language courses which are free-of-charge and developed by language professionals. Moreover, HR4EU also integrates bidirectional interaction with some of the previously developed language resources for Croatian. The HR4EU…
ERIC Educational Resources Information Center
Coello-Coutino, Gerardo; Ainsworth, Shirley; Escalante-Gonzalbo, Ana Marie
2002-01-01
Describes Hermes, a research tool that uses specially designed acquisition, parsing and presentation methods to integrate information resources on the Internet, from searching in disparate bibliographic databases, to accessing full text articles online, and developing a web of information associated with each reference via one common interface.…
Blodgett, David L.; Lucido, Jessica M.; Kreft, James M.
2016-01-01
Critical water-resources issues ranging from flood response to water scarcity make access to integrated water information, services, tools, and models essential. Since 1995 when the first water data web pages went online, the U.S. Geological Survey has been at the forefront of water data distribution and integration. Today, real-time and historical streamflow observations are available via web pages and a variety of web service interfaces. The Survey has built partnerships with Federal and State agencies to integrate hydrologic data providing continuous observations of surface and groundwater, temporally discrete water quality data, groundwater well logs, aquatic biology data, water availability and use information, and tools to help characterize the landscape for modeling. In this paper, we summarize the status and design patterns implemented for selected data systems. We describe how these systems contribute to a U.S. Federal Open Water Data Initiative and present some gaps and lessons learned that apply to global hydroinformatics data infrastructure.
ERIC Educational Resources Information Center
Ramírez-Verdugo, Maria Dolores
2012-01-01
This paper presents an overview of the research conducted within a funded Comenius project which aims at developing a virtual European CLIL Resource Centre for Web 2.0 Education. E-CLIL focuses on Content and Language Integrated Learning (CLIL), creativity and multiculturalism through digital resources. In this sense, our prior research on CLIL…
NASA Astrophysics Data System (ADS)
Yen, Y.-N.; Wu, Y.-W.; Weng, K.-H.
2013-07-01
E-learning assisted teaching and learning is the trend of the 21st century and has many advantages - freedom from the constraints of time and space, hypertext and multimedia rich resources - enhancing the interaction between students and the teaching materials. The purpose of this study is to explore how rich Internet resources assisted students with the Western Architectural History course. First, we explored the Internet resources which could assist teaching and learning activities. Second, according to course objectives, we built a web-based platform which integrated the Google spreadsheets form, SIMILE widget, Wikipedia and Google Maps, and applied it to the course of Western Architectural History. Finally, action research was applied to understand the effectiveness of this teaching/learning mode. Participants were the students of the Department of Architecture in the Private University of Technology in northern Taiwan. Results showed that students were willing to use the web-based platform to assist their learning. They found this platform to be useful in understanding the relationship between different periods of buildings. Through the map view mode, this platform also helped students expand their international perspective. However, we found that the information shared by students via the Internet was not completely correct. One possible reason was that students could easily acquire information on the Internet but could not determine the correctness of the information. To conclude, this study found some useful and rich resources that could be well-integrated, from which we built a web-based platform to collect information and present it in diverse modes to stimulate students' learning motivation. We recommend that future studies consider hiring teaching assistants in order to ease the burden on teachers and to assist in maintaining information quality.
Improving Geoscience Outreach Through Multimedia Enhanced Web Sites - An Example From Connecticut
NASA Astrophysics Data System (ADS)
Hyatt, J. A.; Coron, C. R.; Schroeder, T. J.; Fleming, T.; Drzewiecki, P. A.
2005-12-01
Although large governmental web sites (e.g. USGS, NASA etc.) are important resources, particularly in relation to phenomena with global to regional significance (e.g. recent Tsunami and Hurricane disasters), smaller academic web portals continue to make substantive contributions to web-based learning in the geosciences. The strength of "home-grown" web sites is that they can easily be tailored to specific classes, they often focus on local geologic content, and they potentially integrate classroom, laboratory, and field-based learning in ways that improve introductory classes. Furthermore, innovative multimedia techniques including virtual reality, image manipulations, and interactive streaming video can improve visualization and be particularly helpful for first-time geology students. This poster reports on one such web site, Learning Tools in Earth Science (LTES, http://www.easternct.edu/personal/faculty/hyattj/LTES-v2/), a site developed by geoscience faculty at two state institutions. In contrast to some large web sites with media development teams, LTES geoscientists, with strong support from media and IT service departments, are responsible for geologic content and verification, media development and editing, and web development and authoring. As such, we have considerable control over both content and design of this site. At present the main content modules for LTES include "mineral" and "virtual field trip" links. The mineral module includes an interactive mineral gallery, and a virtual mineral box of 24 unidentified samples that are identical to those used in some of our classes. Students navigate an intuitive web portal to manipulate images and view streaming video segments that explain and undertake standard mineral identification tests. New elements highlighted in our poster include links to a virtual petrographic microscope, in which users can manipulate images to simulate stage rotation in both plane- and cross-polarized light.
Virtual field trips include video-based excursions to sites in Georgia, Connecticut and Greenland. New to these VFTs is the integration of "virtual walks" in which users are able to navigate through some field sites in a virtual sense. Development of this resource is ongoing, but responses from students, faculty outside of Earth Science, and K-12 instructors indicate that this small web site can provide useful resources for educators utilizing web-based learning in their courses.
Creating a course-based web site in a university environment
NASA Astrophysics Data System (ADS)
Robin, Bernard R.; Mcneil, Sara G.
1997-06-01
The delivery of educational materials is undergoing a remarkable change from the traditional lecture method to dissemination of courses via the World Wide Web. This paradigm shift from a paper-based structure to an electronic one has profound implications for university faculty. Students are enrolling in classes with the expectation of using technology and logging on to the Internet, and professors are realizing that the potential of the Web can have a significant impact on classroom activities. An effective method of integrating electronic technologies into teaching and learning is to publish classroom materials on the World Wide Web. Already, many faculty members are creating their own home pages and Web sites for courses that include syllabi, handouts, and student work. Additionally, educators are finding value in adding hypertext links to a wide variety of related Web resources from online research and electronic journals to government and commercial sites. A number of issues must be considered when developing course-based Web sites. These include meeting the needs of a target audience, designing effective instructional materials, and integrating graphics and other multimedia components. There are also numerous technical issues that must be addressed in developing, uploading and maintaining HTML documents. This article presents a model for university faculty who want to begin using the Web in their teaching and is based on the experiences of two College of Education professors who are using the Web as an integral part of their graduate courses.
EntrezAJAX: direct web browser access to the Entrez Programming Utilities.
Loman, Nicholas J; Pallen, Mark J
2010-06-21
Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides functionality identical to Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
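The same-origin workaround works because the browser talks to the proxy's own host, which forwards the request to NCBI on the server side. As an illustrative sketch (not the EntrezAJAX API itself), the following builds a standard Entrez eUtils esearch URL of the kind such a proxy would relay:

```python
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def build_esearch_url(db, term, retmode="json"):
    """Build an Entrez eUtils esearch URL; a server-side proxy such as
    EntrezAJAX forwards requests like this on the browser's behalf,
    sidestepping same-origin restrictions."""
    params = urlencode({"db": db, "term": term, "retmode": retmode})
    return f"{EUTILS_BASE}/esearch.fcgi?{params}"

url = build_esearch_url("pubmed", "EntrezAJAX")
```

Because the browser only ever contacts the proxy's domain, no cross-origin request is issued from the client at all.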
BingEO: Enable Distributed Earth Observation Data for Environmental Research
NASA Astrophysics Data System (ADS)
Wu, H.; Yang, C.; Xu, Y.
2010-12-01
Our planet is facing great environmental challenges including global climate change, environmental vulnerability, extreme poverty, and a shortage of clean cheap energy. To address these problems, scientists are developing various models to analyze, forecast, and simulate geospatial phenomena to support critical decision making. These models not only challenge our computing technology, but also challenge us to meet the huge demand for earth observation data. Through various policies and programs, open and free sharing of earth observation data is advocated in earth science. Currently, thousands of data sources are freely available online through open standards such as Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS). Seamless sharing and access to these resources call for a spatial Cyberinfrastructure (CI) to enable the use of spatial data for the advancement of related applied sciences including environmental research. Based on the Microsoft Bing Search Engine and Bing Maps, a seamlessly integrated and visual tool is under development to bridge the gap between researchers/educators and earth observation data providers. With this tool, earth science researchers/educators can easily and visually find the best data sets for their research and education. The tool includes a registry and its related supporting module at the server side and an integrated portal as its client. The proposed portal, Bing Earth Observation (BingEO), is based on Bing Search and Bing Maps to: 1) use Bing Search to discover Web Map Service (WMS) resources available over the internet; 2) develop and maintain a registry to manage all the available WMS resources and constantly monitor their service quality; 3) allow users to manually register data services; 4) provide a Bing Maps-based Web application to visualize the data on a high-quality and easy-to-manipulate map platform and enable users to select the best data layers online.
Given the amount of observation data already accumulated and still growing, BingEO will allow these resources to be utilized more widely, intensively, efficiently and economically in earth science applications.
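The WMS resources such a registry catalogs are queried through standard OGC request parameters. A minimal sketch of assembling a WMS 1.1.1 GetMap request URL (the endpoint here is a placeholder, not a real service):

```python
from urllib.parse import urlencode

def build_getmap_url(endpoint, layer, bbox, width=512, height=512):
    """Assemble a GetMap request using standard OGC WMS 1.1.1 parameters.
    bbox is (min_lon, min_lat, max_lon, max_lat) in EPSG:4326."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return endpoint + "?" + urlencode(params)

url = build_getmap_url("https://example.org/wms", "elevation",
                       (-180, -90, 180, 90))
```

A client such as the portal described above would issue this URL and render the returned PNG tile on the base map.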
Teaching physiology and the World Wide Web: electrochemistry and electrophysiology on the Internet.
Dwyer, T M; Fleming, J; Randall, J E; Coleman, T G
1997-12-01
Students seek active learning experiences that can rapidly impart relevant information in the most convenient way possible. Computer-assisted education can now use the resources of the World Wide Web to convey the important characteristics of events as elemental as the physical properties of osmotically active particles in the cell and as complex as the nerve action potential or the integrative behavior of the intact organism. We have designed laboratory exercises that introduce first-year medical students to membrane and action potentials, as well as the more complex example of integrative physiology, using the dynamic properties of computer simulations. Two specific examples are presented. The first presents the physical laws that apply to osmotic, chemical, and electrical gradients, leading to the development of the concept of membrane potentials; this module concludes with the simulation of the ability of the sodium-potassium pump to establish chemical gradients and maintain cell volume. The second module simulates the action potential according to the Hodgkin-Huxley model, illustrating the concepts of threshold, inactivation, refractory period, and accommodation. Students can access these resources during the scheduled laboratories or on their own time via our Web site on the Internet (http://phys-main.umsmed.edu) by using the World Wide Web protocol. Accurate version control is possible because one valid, but easily edited, copy of the labs exists at the Web site. A common graphical interface is possible through the use of the Hypertext Markup Language. Platform independence is possible through the logical and arithmetic calculations inherent to graphical browsers and the JavaScript language. The initial success of this program indicates that medical education can be very effective both by the use of accurate simulations and by the existence of a universally accessible Internet resource.
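The membrane-potential module described above rests on the Nernst relation between a chemical gradient and its equilibrium potential. A minimal sketch, using typical mammalian K+ concentrations (the numbers are textbook values, not taken from the simulations described):

```python
import math

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol

def nernst(conc_out, conc_in, z=1, temp_k=310.0):
    """Equilibrium potential in millivolts for an ion of valence z,
    given extracellular and intracellular concentrations (same units)."""
    return 1000.0 * (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

# K+ at ~5 mM outside, ~140 mM inside, body temperature
e_k = nernst(5.0, 140.0)   # roughly -89 mV
```

The strongly negative value explains why the resting membrane potential sits close to the potassium equilibrium potential.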
NeuroMorpho.Org implementation of digital neuroscience: dense coverage and integration with the NIF
Halavi, Maryam; Polavaram, Sridevi; Donohue, Duncan E.; Hamilton, Gail; Hoyt, Jeffrey; Smith, Kenneth P.; Ascoli, Giorgio A.
2009-01-01
Neuronal morphology affects network connectivity, plasticity, and information processing. Uncovering the design principles and functional consequences of dendritic and axonal shape necessitates quantitative analysis and computational modeling of detailed experimental data. Digital reconstructions provide the required neuromorphological descriptions in a parsimonious, comprehensive, and reliable numerical format. NeuroMorpho.Org is the largest web-accessible repository service for digitally reconstructed neurons and one of the integrated resources in the Neuroscience Information Framework (NIF). Here we describe the NeuroMorpho.Org approach as an exemplary experience in designing, creating, populating, and curating a neuroscience digital resource. The simple three-tier architecture of NeuroMorpho.Org (web client, web server, and relational database) encompasses all necessary elements to support a large-scale, integrate-able repository. The data content, while heterogeneous in scientific scope and experimental origin, is unified in format and presentation by an in house standardization protocol. The server application (MRALD) is secure, customizable, and developer-friendly. Centralized processing and expert annotation yields a comprehensive set of metadata that enriches and complements the raw data. The thoroughly tested interface design allows for optimal and effective data search and retrieval. Availability of data in both original and standardized formats ensures compatibility with existing resources and fosters further tool development. Other key functions enable extensive exploration and discovery, including 3D and interactive visualization of branching, frequently measured morphometrics, and reciprocal links to the original PubMed publications. 
The integration of NeuroMorpho.Org with version-1 of the NIF (NIFv1) provides the opportunity to access morphological data in the context of other relevant resources and diverse subdomains of neuroscience, opening exciting new possibilities in data mining and knowledge discovery. The outcome of such coordination is the rapid and powerful advancement of neuroscience research at both the conceptual and technological level. PMID:18949582
NeuroMorpho.Org implementation of digital neuroscience: dense coverage and integration with the NIF.
Halavi, Maryam; Polavaram, Sridevi; Donohue, Duncan E; Hamilton, Gail; Hoyt, Jeffrey; Smith, Kenneth P; Ascoli, Giorgio A
2008-09-01
Neuronal morphology affects network connectivity, plasticity, and information processing. Uncovering the design principles and functional consequences of dendritic and axonal shape necessitates quantitative analysis and computational modeling of detailed experimental data. Digital reconstructions provide the required neuromorphological descriptions in a parsimonious, comprehensive, and reliable numerical format. NeuroMorpho.Org is the largest web-accessible repository service for digitally reconstructed neurons and one of the integrated resources in the Neuroscience Information Framework (NIF). Here we describe the NeuroMorpho.Org approach as an exemplary experience in designing, creating, populating, and curating a neuroscience digital resource. The simple three-tier architecture of NeuroMorpho.Org (web client, web server, and relational database) encompasses all necessary elements to support a large-scale, integrate-able repository. The data content, while heterogeneous in scientific scope and experimental origin, is unified in format and presentation by an in house standardization protocol. The server application (MRALD) is secure, customizable, and developer-friendly. Centralized processing and expert annotation yields a comprehensive set of metadata that enriches and complements the raw data. The thoroughly tested interface design allows for optimal and effective data search and retrieval. Availability of data in both original and standardized formats ensures compatibility with existing resources and fosters further tool development. Other key functions enable extensive exploration and discovery, including 3D and interactive visualization of branching, frequently measured morphometrics, and reciprocal links to the original PubMed publications. 
The integration of NeuroMorpho.Org with version-1 of the NIF (NIFv1) provides the opportunity to access morphological data in the context of other relevant resources and diverse subdomains of neuroscience, opening exciting new possibilities in data mining and knowledge discovery. The outcome of such coordination is the rapid and powerful advancement of neuroscience research at both the conceptual and technological level.
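NeuroMorpho.Org's standardized reconstructions are distributed in the SWC text format, in which each non-comment line describes one tracing point and its parent. A minimal parser sketch (the two-node sample morphology is invented for illustration):

```python
def parse_swc(text):
    """Parse SWC morphology lines into dicts; lines starting with '#'
    are comments. Columns: id, type, x, y, z, radius, parent (-1 = root)."""
    nodes = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        i, t, x, y, z, r, p = line.split()
        nodes.append({"id": int(i), "type": int(t),
                      "x": float(x), "y": float(y), "z": float(z),
                      "radius": float(r), "parent": int(p)})
    return nodes

sample = "# soma then one dendrite node\n1 1 0 0 0 5.0 -1\n2 3 10 0 0 1.0 1\n"
nodes = parse_swc(sample)
```

Morphometrics such as total dendritic length or branch order reduce to walks over the resulting parent-child tree.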
PaintOmics 3: a web resource for the pathway analysis and visualization of multi-omics data.
Hernández-de-Diego, Rafael; Tarazona, Sonia; Martínez-Mira, Carlos; Balzano-Nogueira, Leandro; Furió-Tarí, Pedro; Pappas, Georgios J; Conesa, Ana
2018-05-25
The increasing availability of multi-omic platforms poses new challenges to data analysis. Joint visualization of multi-omics data is instrumental in better understanding interconnections across molecular layers and in fully utilizing the multi-omic resources available to make biological discoveries. We present here PaintOmics 3, a web-based resource for the integrated visualization of multiple omic data types onto KEGG pathway diagrams. PaintOmics 3 combines server-end capabilities for data analysis with the potential of modern web resources for data visualization, providing researchers with a powerful framework for interactive exploration of their multi-omics information. Unlike other visualization tools, PaintOmics 3 covers a comprehensive pathway analysis workflow, including automatic feature name/identifier conversion, multi-layered feature matching, pathway enrichment, network analysis, interactive heatmaps, trend charts, and more. It accepts a wide variety of omic types, including transcriptomics, proteomics and metabolomics, as well as region-based approaches such as ATAC-seq or ChIP-seq data. The tool is freely available at www.paintomics.org.
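Pathway enrichment of the kind reported above is commonly computed with a one-sided hypergeometric test: how surprising is it that k of the n mapped features fall in a pathway containing K of the N features in the universe? A stdlib-only sketch (the exact test PaintOmics 3 applies may differ in detail):

```python
from math import comb

def hypergeom_pvalue(k, n, K, N):
    """P(X >= k) when drawing n features without replacement from a
    universe of N features, K of which belong to the pathway."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / total

# 8 of 20 differentially expressed genes hit a 40-gene pathway
# out of a 1000-gene universe: highly enriched
p = hypergeom_pvalue(8, 20, 40, 1000)
```

The expected overlap here is only 20 * 40 / 1000 = 0.8 genes, so observing 8 yields a very small p-value.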
Nowotka, Michał M; Gaulton, Anna; Mendez, David; Bento, A Patricia; Hersey, Anne; Leach, Andrew
2017-08-01
ChEMBL is a manually curated database of bioactivity data on small drug-like molecules, used by drug discovery scientists. Among many access methods, a REST API provides programmatic access, allowing the remote retrieval of ChEMBL data and its integration into other applications. This approach allows scientists to move from a world where they go to the ChEMBL web site to search for relevant data, to one where ChEMBL data can be simply integrated into their everyday tools and work environment. Areas covered: This review highlights some of the audiences who may benefit from using the ChEMBL API, and the goals they can address, through the description of several use cases. The examples cover a team communication tool (Slack), a data analytics platform (KNIME), batch job management software (Luigi) and Rich Internet Applications. Expert opinion: The advent of web technologies, cloud computing and microservices-oriented architectures has made REST APIs an essential ingredient of modern software development models. The widespread availability of tools consuming RESTful resources has made them useful for many groups of users. The ChEMBL API is a valuable resource of drug discovery bioactivity data for professional chemists, chemistry students, data scientists, scientific and web developers.
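ChEMBL REST resources are plain URLs, which is what makes the integrations above possible. A sketch of building two common request URLs; the endpoint pattern follows the public https://www.ebi.ac.uk/chembl/api/data base, though specific resource names and filters should be checked against the current API documentation:

```python
from urllib.parse import urlencode

CHEMBL_BASE = "https://www.ebi.ac.uk/chembl/api/data"

def molecule_url(chembl_id, fmt="json"):
    """URL for a single molecule record, e.g. CHEMBL25 (aspirin)."""
    return f"{CHEMBL_BASE}/molecule/{chembl_id}.{fmt}"

def activity_query_url(target_chembl_id, limit=20):
    """URL for bioactivities against one target, using the
    Django-style filter parameters the API accepts."""
    params = urlencode({"target_chembl_id": target_chembl_id,
                        "limit": limit})
    return f"{CHEMBL_BASE}/activity.json?{params}"

mol = molecule_url("CHEMBL25")
```

Any HTTP client, whether a Slack bot, a KNIME node, or a Luigi task, can consume these URLs directly, which is precisely the integration story the review describes.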
Semantic Web repositories for genomics data using the eXframe platform.
Merrill, Emily; Corlosquet, Stéphane; Ciccarese, Paolo; Clark, Tim; Das, Sudeshna
2014-01-01
With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as the number of databases in which it is stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases very difficult. To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second generation model now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data is automatically produced using these mappings and indexed in an RDF store with a built-in SPARQL Protocol and RDF Query Language (SPARQL) endpoint. Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate them with heterogeneous resources and make them interoperable with the vast Semantic Web of biomedical knowledge.
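The RDF produced by such ontology mappings reduces to subject-predicate-object triples. A toy serializer for the N-Triples syntax; the experiment URI is a made-up example, while dcterms:title is a real Dublin Core property:

```python
def triple(subject, predicate, obj):
    """Serialize one RDF statement as an N-Triples line: URIs go in
    angle brackets, plain strings become quoted literals."""
    o = f"<{obj}>" if obj.startswith("http") else f'"{obj}"'
    return f"<{subject}> <{predicate}> {o} ."

t = triple("http://example.org/experiment/1",
           "http://purl.org/dc/terms/title",
           "Stem cell microarray assay")
```

Lines like this are exactly what an RDF store ingests and what a SPARQL endpoint matches patterns against.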
The BioExtract Server: a web-based bioinformatic workflow platform
Lushbough, Carol M.; Jennewein, Douglas M.; Brendel, Volker P.
2011-01-01
The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet. PMID:21546552
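The record-then-replay workflow model described above can be sketched in a few lines. This is a hypothetical miniature, not BioExtract's implementation:

```python
class Workflow:
    """Record named tasks, then replay them later, optionally with
    modified inputs, mirroring the save-and-rerun idea of recorded
    bioinformatic workflows (illustrative API, not BioExtract's)."""
    def __init__(self, name):
        self.name = name
        self.tasks = []               # recorded (label, function, args)

    def record(self, label, func, *args):
        self.tasks.append((label, func, args))

    def replay(self, **overrides):
        results = {}
        for label, func, args in self.tasks:
            args = overrides.get(label, args)   # rerun with new inputs
            results[label] = func(*args)
        return results

wf = Workflow("demo")
wf.record("extract", lambda seq: seq.upper(), "acgt")
wf.record("length", len, "acgt")
out = wf.replay()
```

Replaying with `wf.replay(extract=("ttaa",))` reruns the same pipeline against a different input, which is the reproducibility property the abstract emphasizes.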
Computational toxicology using the OpenTox application programming interface and Bioclipse
2011-01-01
Background Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open standard that simplifies communication. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. 
This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers. PMID:22075173
ERIC Educational Resources Information Center
Huang, Tien-Chi; Chen, Chia-Chen
2013-01-01
With the advent of Web 2.0 technology, message transmission has become increasingly convenient, and the rising amount of information has become gradually diverse. A question raised by this trend is whether informal learning resources can be integrated into formal learning knowledge. This study attempts to integrate educational blog…
A Web GIS Enabled Comprehensive Hydrologic Information System for Indian Water Resources Systems
NASA Astrophysics Data System (ADS)
Goyal, A.; Tyagi, H.; Gosain, A. K.; Khosa, R.
2017-12-01
Hydrological systems across the globe are getting increasingly water stressed with each passing season due to climate variability & snowballing water demand. Hence, to safeguard food, livelihood & economic security, it becomes imperative to employ scientific studies for holistic management of an indispensable resource like water. However, hydrological study of any scale & purpose is heavily reliant on various spatio-temporal datasets which are not only difficult to discover/access but are also tough to use & manage. Besides, owing to the diversity of water sector agencies & the dearth of standard operating procedures, seamless information exchange is challenging for collaborators. Extensive research is being done worldwide to address these issues but regrettably not much has been done in developing countries like India. Therefore, the current study endeavours to develop a Hydrological Information System framework in a Web-GIS environment for empowering Indian water resources systems. The study attempts to harmonize the standards for metadata, terminology, symbology, versioning & archiving for effective generation, processing, dissemination & mining of data required for hydrological studies. Furthermore, modelers with humble computing resources at their disposal can consume this standardized data in high performance simulation modelling using cloud computing within the developed Web-GIS framework. They can also combine the inputs and outputs of different numerical models available on the platform and integrate their results for comprehensive analysis of the chosen hydrological system. Thus, the developed portal is an all-in-one framework that can facilitate decision makers, industry professionals & researchers in efficient water management.
NASA Technical Reports Server (NTRS)
Gawadiak, Yuri; Wong, Alan; Maluf, David; Bell, David; Gurram, Mohana; Tran, Khai Peter; Hsu, Jennifer; Yagi, Kenji; Patel, Hemil
2007-01-01
The Program Management Tool (PMT) is a comprehensive, Web-enabled business intelligence software tool for assisting program and project managers within NASA enterprises in gathering, comprehending, and disseminating information on the progress of their programs and projects. The PMT provides planning and management support for implementing NASA programmatic and project management processes and requirements. It provides an online environment for program and line management to develop, communicate, and manage their programs, projects, and tasks in a comprehensive tool suite. The information managed by use of the PMT can include data on goals, deliverables, milestones, business processes, personnel, task plans, monthly reports, and budgetary allocations. The PMT provides an intuitive and enhanced Web interface to automate the tedious process of gathering and sharing monthly progress reports, task plans, financial data, and other information on project resources based on technical, schedule, budget, and management criteria and merits. The PMT is consistent with the latest Web standards and software practices, including the use of Extensible Markup Language (XML) for exchanging data and the WebDAV (Web Distributed Authoring and Versioning) protocol for collaborative management of documents. The PMT provides graphical displays of resource allocations in the form of bar and pie charts using Microsoft Excel Visual Basic for Applications (VBA) libraries. The PMT has an extensible architecture that enables integration of PMT with other strategic-information software systems, including, for example, the Erasmus reporting system, now part of the NASA Integrated Enterprise Management Program (IEMP) tool suite, at NASA Marshall Space Flight Center (MSFC). 
The PMT data architecture provides automated and extensive software interfaces and reports to various strategic information systems to eliminate duplicative human entries and minimize data integrity issues among various NASA systems that impact schedules and planning.
An Integrated Web-based Decision Support System in Disaster Risk Management
NASA Astrophysics Data System (ADS)
Aye, Z. C.; Jaboyedoff, M.; Derron, M. H.
2012-04-01
Nowadays, web based decision support systems (DSS) play an essential role in disaster risk management because of their supporting abilities which help the decision makers to improve their performances and make better decisions without needing to solve complex problems while reducing human resources and time. Since the decision making process is one of the main factors which strongly influences the damages and losses suffered by society, it is extremely important to make right decisions at the right time by combining available risk information with advanced web technology of Geographic Information System (GIS) and Decision Support System (DSS). This paper presents an integrated web-based decision support system (DSS) of how to use risk information in risk management efficiently and effectively while highlighting the importance of a decision support system in the field of risk reduction. Beyond conventional systems, it allows users to define their own strategies from risk identification through risk reduction, which leads to an integrated approach in risk management. In addition, it also considers the complexity of changing environment from different perspectives and sectors with diverse stakeholders' involvement in the development process. The aim of this platform is to contribute a part towards the natural hazards and geosciences society by developing an open-source web platform where the users can analyze risk profiles and make decisions by performing cost benefit analysis, Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA) with the support of the other tools and resources provided. There are different access rights to the system depending on the user profiles and their responsibilities. The system is still under development and the current version provides maps viewing, basic GIS functionality, assessment of important infrastructures (e.g. bridge, hospital, etc.) 
affected by landslides and visualization of the impact-probability matrix in terms of socio-economic dimension.
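Cost-benefit analysis in such a DSS ultimately reduces to discounting costs and avoided losses to present value and comparing them. A minimal sketch with invented cashflows:

```python
def npv(rate, cashflows):
    """Net present value of yearly cashflows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def benefit_cost_ratio(rate, benefits, costs):
    """A measure is worthwhile (at this discount rate) when the ratio
    of discounted benefits to discounted costs exceeds 1."""
    return npv(rate, benefits) / npv(rate, costs)

# A mitigation measure costing 100 up front that avoids 30/year of
# landslide losses over five years, discounted at 5%
ratio = benefit_cost_ratio(0.05,
                           [0, 30, 30, 30, 30, 30],
                           [100, 0, 0, 0, 0, 0])
```

Here the discounted avoided losses total about 130 against a cost of 100, so the measure passes the cost-benefit test.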
Quicker, slicker, and better? An evaluation of a web-based human resource management system
NASA Astrophysics Data System (ADS)
Gibb, Stephen; McBride, Andrew
2001-10-01
This paper reviews the design and development of a web-based Human Resource Management (HRM) system which has as its foundation a 'capability profiler' tool for analysing individual or team roles in organisations. This provides a foundation for managing a set of integrated activities in recruitment and selection, performance and career management, and training and development for individuals, teams, and whole organisations. The challenges of representing and processing information about the human side of organisation encountered in the design and implementation of such systems are evident. There is a combination of legal, practical, technical and philosophical issues to be faced in the processes of defining roles, selecting staff, monitoring and managing the performance of employees in the design and implementation of such systems. The strengths and weaknesses of web-based systems in this context are evaluated. This evaluation highlights both the potential, given the evolution of broader Enterprise Resource Planning (ERP) systems and strategies in manufacturing, and concerns about the migration of HRM processes to such systems.
Scientists as Communicators: Inclusion of a Science/Education Liaison on Research Expeditions
NASA Astrophysics Data System (ADS)
Sautter, L. R.
2004-12-01
Communication of research and scientific results to an audience outside of one's field poses a challenge to many scientists. Many research scientists have a natural ability to address the challenge, while others may choose to seek assistance. Research cruise PIs may wish to consider including a Science/Education Liaison (SEL) on future grants. The SEL is a marine scientist whose job before, during and after the cruise is to work with the shipboard scientists to document the science conducted. The SEL's role is three-fold: (1) to communicate shipboard science activities near-real-time to the public via the web; (2) to develop a variety of web-based resources based on the scientific operations; and (3) to assist educators with the integration of these resources into classroom curricula. The first role involves at-sea writing and relaying from ship-to-shore (via email) a series of Daily Logs. NOAA Ocean Exploration (OE) has mastered the use of web-posted Daily Logs for their major expeditions (see their OceanExplorer website), introducing millions of users to deep sea exploration. Project Oceanica uses the OE daily log model to document research expeditions. In addition to writing daily logs and participating on OE expeditions, Oceanica's SEL also documents the cruise's scientific operations and preliminary findings using video and photos, so that web-based resources (photo galleries, video galleries, and PhotoDocumentaries) can be developed during and following the cruise, and posted on the expedition's home page within the Oceanica web site (see URL). We have created templates for constructing these science resources which allow the shipboard scientists to assist with web resource development. Bringing users to the site is achieved through email communications to a growing list of educators, scientists, and students, and through collaboration with the COSEE network. 
With a large research expedition-based inventory of web resources now available, Oceanica is training teachers and college faculty on the use and incorporation of these resources into middle school, high school and introductory college classrooms. Support for a SEL on shipboard expeditions serves to catalyze the dissemination of the scientific operations to a broad audience of users.
IMPACT web portal: oncology database integrating molecular profiles with actionable therapeutics.
Hintzsche, Jennifer D; Yoo, Minjae; Kim, Jihye; Amato, Carol M; Robinson, William A; Tan, Aik Choon
2018-04-20
With the advancement of next generation sequencing technology, researchers are now able to identify important variants and structural changes in DNA and RNA in cancer patient samples. With this information, we can now correlate specific variants and/or structural changes with actionable therapeutics known to inhibit these variants. We introduce the creation of the IMPACT Web Portal, a new online resource that connects molecular profiles of tumors to approved drugs, investigational therapeutics and pharmacogenetics associated drugs. IMPACT Web Portal contains a total of 776 drugs connected to 1326 target genes and 435 target variants, fusions, and copy number alterations. The online IMPACT Web Portal allows users to search for various genetic alterations and connects them to three levels of actionable therapeutics. The results are categorized into 3 levels: Level 1 contains approved drugs separated into two groups; Level 1A contains approved drugs with variant specific information while Level 1B contains approved drugs with gene level information. Level 2 contains drugs currently in oncology clinical trials. Level 3 provides pharmacogenetic associations between approved drugs and genes. IMPACT Web Portal allows for sequencing data to be linked to actionable therapeutics for translational and drug repurposing research. The IMPACT Web Portal online resource allows users to query genes and variants against approved and investigational drugs. We envision that this resource will be a valuable database for personalized medicine and drug repurposing. IMPACT Web Portal is freely available for non-commercial use at http://tanlab.ucdenver.edu/IMPACT .
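The three-level categorization can be pictured as a lookup that first tries a variant-specific match and then falls back to gene-level evidence. The gene/drug pairs below are illustrative only, not entries from the IMPACT database:

```python
# Toy database: (gene, variant) -> (level, drug); variant None means
# gene-level evidence. Pairs are illustrative examples, not IMPACT data.
EXAMPLE_DB = {
    ("BRAF", "V600E"): ("1A", "vemurafenib"),       # approved, variant-specific
    ("EGFR", None):    ("1B", "erlotinib"),          # approved, gene-level
    ("KRAS", "G12C"):  ("2",  "investigational inhibitor"),
}

def lookup(gene, variant):
    """Return (level, drug) for a variant, falling back from a
    variant-specific hit (Level 1A style) to gene-level (Level 1B style)."""
    hit = EXAMPLE_DB.get((gene, variant)) or EXAMPLE_DB.get((gene, None))
    return hit or ("none", None)

level, drug = lookup("BRAF", "V600E")
```

A query for an EGFR variant not listed explicitly still returns the gene-level drug, mirroring how Level 1B captures gene-level actionability.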
The Internet as a Means of Information Resources' Integration: The Regional Aspect.
ERIC Educational Resources Information Center
Elepov, Boris S.; Soboleva, Elena B.; Fedotova, Olga P.; Shabanov, Andrei V.
The presence of Siberian and Far Eastern libraries on the Internet has become a reality of today. Joining this community, they solve at least two main problems--the rational use of World Wide Web resources, and the provision of access to their own products. There is a system for tracking documentary streams that disclose regional problems. Each…
ERIC Educational Resources Information Center
Kahn, Russell L.
2013-01-01
This article develops and applies an analytic matrix for searching and using Web 2.0 resources along a learning continuum based on learning styles. This continuum applies core concepts of cognitive psychology, which places an emphasis on internal processes, such as motivation, thinking, attitudes, and reflection. A pilot study found that access to…
Benefits and Challenges in Using Computers and the Internet with Adult English Learners.
ERIC Educational Resources Information Center
Terrill, Lynda
Although resources and training vary from program to program, adult English as a Second or Other Language (ESOL) teachers and English learners across the country are integrating computers and Internet use with ESOL instruction. This can be seen in the growing number of ESOL resources available on the World Wide Web. There are very good reasons for…
Grid Enabled Geospatial Catalogue Web Service
NASA Technical Reports Server (NTRS)
Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush
2004-01-01
Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas, over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web Information Model, this paper proposes a new information model for Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and draws on the geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and on the service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and in particular can query on-demand data in the virtual community and retrieve it through data-related services providing functions such as subsetting, reformatting and reprojection. This work facilitates geospatial resource sharing and interoperation under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also allows researchers to focus on science rather than on issues of computing capacity, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.
The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies.
Katayama, Toshiaki; Wilkinson, Mark D; Micklem, Gos; Kawashima, Shuichi; Yamaguchi, Atsuko; Nakao, Mitsuteru; Yamamoto, Yasunori; Okamoto, Shinobu; Oouchida, Kenta; Chun, Hong-Woo; Aerts, Jan; Afzal, Hammad; Antezana, Erick; Arakawa, Kazuharu; Aranda, Bruno; Belleau, Francois; Bolleman, Jerven; Bonnal, Raoul Jp; Chapman, Brad; Cock, Peter Ja; Eriksson, Tore; Gordon, Paul Mk; Goto, Naohisa; Hayashi, Kazuhiro; Horn, Heiko; Ishiwata, Ryosuke; Kaminuma, Eli; Kasprzyk, Arek; Kawaji, Hideya; Kido, Nobuhiro; Kim, Young Joo; Kinjo, Akira R; Konishi, Fumikazu; Kwon, Kyung-Hoon; Labarga, Alberto; Lamprecht, Anna-Lena; Lin, Yu; Lindenbaum, Pierre; McCarthy, Luke; Morita, Hideyuki; Murakami, Katsuhiko; Nagao, Koji; Nishida, Kozo; Nishimura, Kunihiro; Nishizawa, Tatsuya; Ogishima, Soichi; Ono, Keiichiro; Oshita, Kazuki; Park, Keun-Joon; Prins, Pjotr; Saito, Taro L; Samwald, Matthias; Satagopam, Venkata P; Shigemoto, Yasumasa; Smith, Richard; Splendiani, Andrea; Sugawara, Hideaki; Taylor, James; Vos, Rutger A; Withers, David; Yamasaki, Chisato; Zmasek, Christian M; Kawamoto, Shoko; Okubo, Kosaku; Asai, Kiyoshi; Takagi, Toshihisa
2013-02-11
BioHackathon 2010 was the third in a series of meetings hosted by the Database Center for Life Sciences (DBCLS) in Tokyo, Japan. The overall goal of the BioHackathon series is to improve the quality and accessibility of life science research data on the Web by bringing together representatives from public databases, analytical tool providers, and cyber-infrastructure researchers to jointly tackle important challenges in the area of in silico biological research. The theme of BioHackathon 2010 was the 'Semantic Web', and all attendees gathered with the shared goal of producing Semantic Web data from their respective resources, and/or consuming or interacting with those data using their tools and interfaces. We discussed topics including guidelines for designing semantic data and the interoperability of resources, and we developed tools and clients for analysis and visualization. We provide a meeting report from BioHackathon 2010, in which we describe the discussions, decisions, and breakthroughs made as we moved towards compliance with Semantic Web technologies - from source provider, through middleware, to the end consumer.
The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies
2013-01-01
Background BioHackathon 2010 was the third in a series of meetings hosted by the Database Center for Life Sciences (DBCLS) in Tokyo, Japan. The overall goal of the BioHackathon series is to improve the quality and accessibility of life science research data on the Web by bringing together representatives from public databases, analytical tool providers, and cyber-infrastructure researchers to jointly tackle important challenges in the area of in silico biological research. Results The theme of BioHackathon 2010 was the 'Semantic Web', and all attendees gathered with the shared goal of producing Semantic Web data from their respective resources, and/or consuming or interacting with those data using their tools and interfaces. We discussed topics including guidelines for designing semantic data and the interoperability of resources, and we developed tools and clients for analysis and visualization. Conclusion We provide a meeting report from BioHackathon 2010, in which we describe the discussions, decisions, and breakthroughs made as we moved towards compliance with Semantic Web technologies - from source provider, through middleware, to the end consumer. PMID:23398680
MAPI: towards the integrated exploitation of bioinformatics Web Services.
Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo
2011-10-27
Bioinformatics is commonly presented as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, and their dispersion and heterogeneity, complicate the integrated exploitation of such data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for the uniform representation of Web Service metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of its functionality into different modules associated with specific tasks. This means that only the modules needed by a client have to be installed, and that module functionality can be extended without re-writing the software client. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
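The modular organization MAPI describes can be approximated with a short Python sketch: a uniform metadata descriptor for heterogeneous services, plus a client into which task-specific modules are installed only when needed. All names and fields below are hypothetical illustrations of the pattern, not MAPI's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDescriptor:
    """Uniform representation of a Web Service's metadata (illustrative fields)."""
    name: str
    endpoint: str
    protocol: str          # e.g. "SOAP", "BioMOBY", "REST"
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

class Client:
    """A client assembled from only the modules it needs."""
    def __init__(self):
        self.modules = {}

    def install(self, name, module):
        # New capabilities can be added without rewriting the client.
        self.modules[name] = module

    def run(self, name, *args):
        return self.modules[name](*args)

# A "discovery" module: filter a local catalogue by invocation protocol.
def discover(catalogue, protocol):
    return [s for s in catalogue if s.protocol == protocol]

catalogue = [
    ServiceDescriptor("blast", "http://example.org/blast", "SOAP"),
    ServiceDescriptor("align", "http://example.org/align", "BioMOBY"),
]
client = Client()
client.install("discover", discover)
soap_services = client.run("discover", catalogue, "SOAP")
```

An invocation or workflow-composition module could be installed the same way, which is the point of the design: each task lives in its own replaceable module behind a uniform descriptor.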
Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resseguie, David R
There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
Childs, Kevin L; Konganti, Kranti; Buell, C Robin
2012-01-01
Major feedstock sources for future biofuel production are likely to be high biomass producing plant species such as poplar, pine, switchgrass, sorghum and maize. One active area of research in these species is genome-enabled improvement of lignocellulosic biofuel feedstock quality and yield. To facilitate genomic-based investigations in these species, we developed the Biofuel Feedstock Genomic Resource (BFGR), a database and web-portal that provides high-quality, uniform and integrated functional annotation of gene and transcript assembly sequences from species of interest to lignocellulosic biofuel feedstock researchers. The BFGR includes sequence data from 54 species and permits researchers to view, analyze and obtain annotation at the gene, transcript, protein and genome level. Annotation of biochemical pathways permits the identification of key genes and transcripts central to the improvement of lignocellulosic properties in these species. The integrated nature of the BFGR in terms of annotation methods, orthologous/paralogous relationships and linkage to seven species with complete genome sequences allows comparative analyses for biofuel feedstock species with limited sequence resources. Database URL: http://bfgr.plantbiology.msu.edu.
Marenco, Luis; Ascoli, Giorgio A; Martone, Maryann E; Shepherd, Gordon M; Miller, Perry L
2008-09-01
This paper describes the NIF LinkOut Broker (NLB) that has been built as part of the Neuroscience Information Framework (NIF) project. The NLB is designed to coordinate the assembly of links to neuroscience information items (e.g., experimental data, knowledge bases, and software tools) that are (1) accessible via the Web, and (2) related to entries in the National Center for Biotechnology Information's (NCBI's) Entrez system. The NLB collects these links from each resource and passes them to the NCBI which incorporates them into its Entrez LinkOut service. In this way, an Entrez user looking at a specific Entrez entry can LinkOut directly to related neuroscience information. The information stored in the NLB can also be utilized in other ways. A second approach, which is operational on a pilot basis, is for the NLB Web server to create dynamically its own Web page of LinkOut links for each NCBI identifier in the NLB database. This approach can allow other resources (in addition to the NCBI Entrez) to LinkOut to related neuroscience information. The paper describes the current NLB system and discusses certain design issues that arose during its implementation.
Ascoli, Giorgio A.; Martone, Maryann E.; Shepherd, Gordon M.; Miller, Perry L.
2009-01-01
This paper describes the NIF LinkOut Broker (NLB) that has been built as part of the Neuroscience Information Framework (NIF) project. The NLB is designed to coordinate the assembly of links to neuroscience information items (e.g., experimental data, knowledge bases, and software tools) that are (1) accessible via the Web, and (2) related to entries in the National Center for Biotechnology Information’s (NCBI’s) Entrez system. The NLB collects these links from each resource and passes them to the NCBI which incorporates them into its Entrez LinkOut service. In this way, an Entrez user looking at a specific Entrez entry can LinkOut directly to related neuroscience information. The information stored in the NLB can also be utilized in other ways. A second approach, which is operational on a pilot basis, is for the NLB Web server to create dynamically its own Web page of LinkOut links for each NCBI identifier in the NLB database. This approach can allow other resources (in addition to the NCBI Entrez) to LinkOut to related neuroscience information. The paper describes the current NLB system and discusses certain design issues that arose during its implementation. PMID:18975149
Hoelzer, Simon; Schweiger, Ralf K; Rieger, Joerg; Meyer, Michael
2006-01-01
The organizational structures of web contents and electronic information resources must adapt to the demands of a growing volume of information and user requirements. Otherwise the information society will be threatened by disinformation. The biomedical sciences are especially vulnerable in this regard, since they are strongly oriented toward text-based knowledge sources. Here sustainable improvement can only be achieved by using a comprehensive, integrated approach that not only includes data management but also specifically incorporates the editorial processes, including structuring information sources and publication. The technical resources needed to effectively master these tasks are already available in the form of the data standards and tools of the Semantic Web. They include Rich Site Summaries (RSS), which have become an established means of distributing and syndicating conventional news messages and blogs. They can also provide access to the contents of the previously mentioned information sources, which are conventionally classified as 'deep web' content.
Masseroli, Marco; Stella, Andrea; Meani, Natalia; Alcalay, Myriam; Pinciroli, Francesco
2004-12-12
High-throughput technologies create the necessity to mine large amounts of gene annotations from diverse databanks, and to integrate the resulting data. Most databanks can be interrogated only via Web, for a single gene at a time, and query results are generally available only in the HTML format. Although some databanks provide batch retrieval of data via FTP, this requires expertise and resources for locally reimplementing the databank. We developed MyWEST, a tool aimed at researchers without extensive informatics skills or resources, which exploits user-defined templates to easily mine selected annotations from different Web-interfaced databanks, and aggregates and structures results in an automatically updated database. Using microarray results from a model system of retinoic acid-induced differentiation, MyWEST effectively gathered relevant annotations from various biomolecular databanks, highlighted significant biological characteristics and supported a global approach to the understanding of complex cellular mechanisms. MyWEST is freely available for non-profit use at http://www.medinfopoli.polimi.it/MyWEST/
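The user-defined templates MyWEST exploits can be sketched as a tiny template-to-regex extractor: the template marks the text surrounding the wanted field, and the value is captured from the page. The template syntax and the HTML fragment here are invented for illustration and are not MyWEST's actual format.

```python
import re

def extract(template, page):
    """Pull one annotation value out of a page using a literal template.

    "{field}" marks the value to capture; everything else must match
    the page text literally.
    """
    pattern = re.escape(template).replace(r"\{field\}", "(.+?)")
    m = re.search(pattern, page)
    return m.group(1) if m else None

page = "<tr><td>Gene name</td><td>CDK2</td></tr>"
value = extract("<td>Gene name</td><td>{field}</td>", page)
```

A batch tool built on this idea would apply one template per databank page layout and aggregate the captured values into a local database, which is essentially the workflow the abstract describes.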
DAVID-WS: a stateful web service to facilitate gene/protein list analysis
Jiao, Xiaoli; Sherman, Brad T.; Huang, Da Wei; Stephens, Robert; Baseler, Michael W.; Lane, H. Clifford; Lempicki, Richard A.
2012-01-01
Summary: The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. Availability: The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html. Contact: xiaoli.jiao@nih.gov; rlempicki@nih.gov PMID:22543366
DAVID-WS: a stateful web service to facilitate gene/protein list analysis.
Jiao, Xiaoli; Sherman, Brad T; Huang, Da Wei; Stephens, Robert; Baseler, Michael W; Lane, H Clifford; Lempicki, Richard A
2012-07-01
The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html.
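The URL-size limitation that motivated DAVID-WS is easy to demonstrate: a stateless GET API must carry the whole gene list in the query string, which outgrows common server and proxy URL caps. The endpoint and parameter names below are hypothetical stand-ins, not DAVID's actual API.

```python
from urllib.parse import urlencode

# Hypothetical stateless endpoint that accepts a gene list in the URL.
BASE = "http://example.org/api/addList?"

def gene_list_url(genes):
    return BASE + urlencode({"ids": ",".join(genes),
                             "type": "OFFICIAL_GENE_SYMBOL"})

small = gene_list_url(["TP53", "BRCA1", "EGFR"])
large = gene_list_url([f"GENE{i}" for i in range(2000)])

# Many servers and proxies cap URLs at roughly 2-8 KB, so a 2000-gene
# list cannot reliably travel in a query string; a stateful service that
# holds the uploaded list server-side sidesteps the limit entirely.
```

This is why the stateful web service, which keeps the submitted list and analysis state on the server between calls, is the more robust design for large gene/protein lists.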
Analysis and visualization of Arabidopsis thaliana GWAS using web 2.0 technologies.
Huang, Yu S; Horton, Matthew; Vilhjálmsson, Bjarni J; Seren, Umit; Meng, Dazhe; Meyer, Christopher; Ali Amer, Muhammad; Borevitz, Justin O; Bergelson, Joy; Nordborg, Magnus
2011-01-01
With large-scale genomic data becoming the norm in biological studies, the storing, integrating, viewing and searching of such data have become a major challenge. In this article, we describe the development of an Arabidopsis thaliana database that hosts the geographic information and genetic polymorphism data for over 6000 accessions and genome-wide association study (GWAS) results for 107 phenotypes representing the largest collection of Arabidopsis polymorphism data and GWAS results to date. Taking advantage of a series of the latest web 2.0 technologies, such as Ajax (Asynchronous JavaScript and XML), GWT (Google Web Toolkit), the MVC (Model-View-Controller) web framework and an Object-Relational Mapper, we have created a web-based application (web app) for the database, that offers an integrated and dynamic view of geographic information, genetic polymorphism and GWAS results. Essential search functionalities are incorporated into the web app to aid reverse genetics research. The database and its web app have proven to be a valuable resource to the Arabidopsis community. The whole framework serves as an example of how biological data, especially GWAS, can be presented and accessed through the web. In the end, we illustrate the potential to gain new insights through the web app by two examples, showcasing how it can be used to facilitate forward and reverse genetics research. Database URL: http://arabidopsis.usc.edu/
AlzPharm: integration of neurodegeneration data using RDF.
Lam, Hugo Y K; Marenco, Luis; Clark, Tim; Gao, Yong; Kinoshita, June; Shepherd, Gordon; Miller, Perry; Wu, Elizabeth; Wong, Gwendolyn T; Liu, Nian; Crasto, Chiquito; Morse, Thomas; Stephens, Susie; Cheung, Kei-Hoi
2007-05-09
Neuroscientists often need to access a wide range of data sets distributed over the Internet. These data sets, however, are typically neither integrated nor interoperable, resulting in a barrier to answering complex neuroscience research questions. Domain ontologies can enable the querying of heterogeneous data sets, but they are not sufficient for neuroscience since the data of interest commonly span multiple research domains. To this end, e-Neuroscience seeks to provide an integrated platform for neuroscientists to discover new knowledge through seamless integration of the very diverse types of neuroscience data. Here we present a Semantic Web approach to building this e-Neuroscience framework by using the Resource Description Framework (RDF) and its vocabulary description language, RDF Schema (RDFS), as a standard data model to facilitate both representation and integration of the data. We have constructed a pilot ontology for BrainPharm (a subset of SenseLab) using RDFS and then converted a subset of the BrainPharm data into RDF according to the ontological structure. We have also integrated the converted BrainPharm data with existing RDF hypothesis and publication data from a pilot version of SWAN (Semantic Web Applications in Neuromedicine). Our implementation uses the RDF Data Model in Oracle Database 10g release 2 for data integration, query, and inference, while our Web interface allows users to query the data and retrieve the results in a convenient fashion. Accessing and integrating biomedical data which cuts across multiple disciplines will be increasingly indispensable and beneficial to neuroscience researchers. The Semantic Web approach we undertook has demonstrated a promising way to semantically integrate data sets created independently. It also shows how advanced queries and inferences can be performed over the integrated data, which are hard to achieve using traditional data integration approaches.
Our pilot results suggest that our Semantic Web approach is suitable for realizing e-Neuroscience and generic enough to be applied in other biomedical fields.
AlzPharm: integration of neurodegeneration data using RDF
Lam, Hugo YK; Marenco, Luis; Clark, Tim; Gao, Yong; Kinoshita, June; Shepherd, Gordon; Miller, Perry; Wu, Elizabeth; Wong, Gwendolyn T; Liu, Nian; Crasto, Chiquito; Morse, Thomas; Stephens, Susie; Cheung, Kei-Hoi
2007-01-01
Background Neuroscientists often need to access a wide range of data sets distributed over the Internet. These data sets, however, are typically neither integrated nor interoperable, resulting in a barrier to answering complex neuroscience research questions. Domain ontologies can enable the querying of heterogeneous data sets, but they are not sufficient for neuroscience since the data of interest commonly span multiple research domains. To this end, e-Neuroscience seeks to provide an integrated platform for neuroscientists to discover new knowledge through seamless integration of the very diverse types of neuroscience data. Here we present a Semantic Web approach to building this e-Neuroscience framework by using the Resource Description Framework (RDF) and its vocabulary description language, RDF Schema (RDFS), as a standard data model to facilitate both representation and integration of the data. Results We have constructed a pilot ontology for BrainPharm (a subset of SenseLab) using RDFS and then converted a subset of the BrainPharm data into RDF according to the ontological structure. We have also integrated the converted BrainPharm data with existing RDF hypothesis and publication data from a pilot version of SWAN (Semantic Web Applications in Neuromedicine). Our implementation uses the RDF Data Model in Oracle Database 10g release 2 for data integration, query, and inference, while our Web interface allows users to query the data and retrieve the results in a convenient fashion. Conclusion Accessing and integrating biomedical data which cuts across multiple disciplines will be increasingly indispensable and beneficial to neuroscience researchers. The Semantic Web approach we undertook has demonstrated a promising way to semantically integrate data sets created independently. It also shows how advanced queries and inferences can be performed over the integrated data, which are hard to achieve using traditional data integration approaches.
Our pilot results suggest that our Semantic Web approach is suitable for realizing e-Neuroscience and generic enough to be applied in other biomedical fields. PMID:17493287
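The RDF data model used by AlzPharm reduces every statement to a (subject, predicate, object) triple, and a query is then just a pattern with variables over the triple store. The following dependency-free sketch illustrates that idea with toy data; the identifiers are invented and do not reflect AlzPharm's actual schema or a real SPARQL engine.

```python
# Toy triple store: each fact is one (subject, predicate, object) triple.
triples = {
    ("drug:memantine", "targets", "receptor:NMDA"),
    ("receptor:NMDA", "locatedIn", "region:hippocampus"),
    ("drug:memantine", "studiedIn", "pub:12345"),
}

def match(pattern):
    """Return all triples matching a pattern; None acts as a variable,
    loosely mimicking a single SPARQL triple pattern."""
    results = []
    for s, p, o in triples:
        if all(q is None or q == v for q, v in zip(pattern, (s, p, o))):
            results.append((s, p, o))
    return results

# "What does memantine target?"
hits = match(("drug:memantine", "targets", None))
```

Because data from independently created sources all flatten into the same triple shape, integration amounts to merging triple sets and querying across them, which is the integration benefit the abstract describes.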
NASA Astrophysics Data System (ADS)
Wang, J.; Song, J.; Gao, M.; Zhu, L.
2014-02-01
The trans-boundary area between Northern China, Mongolia and eastern Siberia (Russia) is a continuous geographical area located in northeastern Asia. Many common issues in this region need to be addressed based on a uniform resources and environmental data warehouse. Based on the practice of a joint scientific expedition, this paper presents a data integration solution comprising three steps: drawing up data collection standards and specifications, data reorganization and processing, and data warehouse design and development. A series of data collection standards and specifications was first drawn up, covering more than 10 domains. According to this uniform standard, 20 regional-scale resources and environmental survey databases and 11 in-situ observation databases were reorganized and integrated. The North East Asia Resources and Environmental Data Warehouse was designed with four layers: a resources layer, a core business logic layer, an internet interoperation layer, and a web portal layer. An initial prototype of the data warehouse has been developed and deployed, and all the integrated data for this area can be accessed online.
High-performance web services for querying gene and variant annotation.
Xin, Jiwen; Mark, Adam; Afrasiabi, Cyrus; Tsueng, Ginger; Juchler, Moritz; Gopal, Nikhil; Stupp, Gregory S; Putman, Timothy E; Ainscough, Benjamin J; Griffith, Obi L; Torkamani, Ali; Whetzel, Patricia L; Mungall, Christopher J; Mooney, Sean D; Su, Andrew I; Wu, Chunlei
2016-05-06
Efficient tools for data management and integration are essential for many aspects of high-throughput biology. In particular, annotations of genes and human genetic variants are commonly used but highly fragmented across many resources. Here, we describe MyGene.info and MyVariant.info, high-performance web services for querying gene and variant annotation information. These web services are currently accessed more than three million times per month. They also demonstrate a generalizable cloud-based model for organizing and querying biological annotation information. MyGene.info and MyVariant.info are provided as high-performance web services, accessible at http://mygene.info and http://myvariant.info . Both are offered free of charge to the research community.
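A client of a REST annotation service like MyGene.info typically just composes a query URL and parses the JSON response. The sketch below builds such a URL with the standard library; the path follows the style of MyGene.info's public v3 API, but treat the specific parameters as assumptions rather than a reference, and no network request is made here.

```python
from urllib.parse import urlencode

def gene_query_url(symbol, species="human",
                   fields="symbol,name,entrezgene"):
    """Compose a query URL for a MyGene.info-style annotation service.

    Parameter names mirror the service's documented query style but are
    assumptions for illustration; check the live API docs before use.
    """
    params = urlencode({"q": symbol, "species": species, "fields": fields})
    return f"http://mygene.info/v3/query?{params}"

url = gene_query_url("CDK2")
# In a real client: fetch `url` with urllib.request or requests and
# json-decode the response body.
```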
Freight information real-time system for transport (FIRST)
DOT National Transportation Integrated Search
2002-05-01
The FIRST Demonstration Project was funded and developed, in part, to provide unique solutions to freight transportation problems. FIRST is an Internet-based, real-time network that integrates many resources into a single, easy-to-use Web site on car...
Experimenting with an Evolving Ground/Space-based Software Architecture to Enable Sensor Webs
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Frye, Stuart
2005-01-01
A series of ongoing experiments is being conducted at the NASA Goddard Space Flight Center to explore integrated ground- and space-based software architectures enabling sensor webs. A sensor web, as defined by Steve Talabac at NASA Goddard Space Flight Center (GSFC), is a coherent set of distributed nodes, interconnected by a communications fabric, that collectively behave as a single, dynamically adaptive observing system. The nodes can be comprised of satellites, ground instruments, computing nodes, etc. Sensor web capability requires autonomous management of constellation resources. This becomes progressively more important as more and more satellites share resources, such as communication channels and ground stations, while automatically coordinating their activities. There have been five ongoing activities, including an effort to standardize a set of middleware. This paper describes one set of activities using the Earth Observing 1 satellite, which used a variety of ground and flight software along with other satellites and ground sensors to prototype a sensor web. This activity allowed us to explore the difficulties that occur in the assembly of sensor webs given today's technology. We present an overview of the software system architecture, some key experiments, and lessons learned to facilitate better sensor webs in the future.
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
EntrezAJAX: direct web browser access to the Entrez Programming Utilities
2010-01-01
Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides functionality identical to Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/ PMID:20565938
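The same-origin workaround that a proxy service such as EntrezAJAX provides can be sketched as a server-side wrapper that returns a payload either as JSONP (for script-tag injection, which predates CORS) or as JSON with a permissive CORS header. The function and field names below are our own illustration, not part of the EntrezAJAX API.

```python
import json

def wrap_for_browser(payload, callback=None):
    """Wrap a dict payload so a browser on another origin can consume it.

    With a callback name, emit JSONP: the browser executes callback({...})
    loaded via a <script> tag. Otherwise emit plain JSON plus an
    Access-Control-Allow-Origin header permitting cross-origin reads.
    """
    body = json.dumps(payload)
    if callback:
        return {"Content-Type": "application/javascript"}, f"{callback}({body})"
    return {"Content-Type": "application/json",
            "Access-Control-Allow-Origin": "*"}, body

headers, body = wrap_for_browser({"db": "pubmed", "count": 2}, callback="handle")
```

Either mechanism lets an AJAX page consume data the browser could not fetch directly from the third-party origin.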
A service-based framework for pharmacogenomics data integration
NASA Astrophysics Data System (ADS)
Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong
2010-08-01
Data are central to scientific research and practice. The advance of experimental methods and information retrieval technologies has led to explosive growth of scientific data and databases. However, due to heterogeneity in data formats, structures and semantics, it is hard to integrate the diversified data that grow explosively and analyse them comprehensively. As more and more public databases become accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration is becoming a major trend in managing and synthesising data that are stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using mashup technology. The framework separates the integration concerns from three perspectives: data, process and Web-based user interface. Each layer encapsulates the heterogeneity issues of one aspect. To facilitate the mapping and convergence of data, an ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information about users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and case studies are presented to illustrate the promising capabilities of the proposed approach.
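The context model the paper describes (information about users, tasks and services informing service selection) might be sketched, under our own heavily simplified assumptions, as tag-overlap scoring; all class names, service names and tags here are hypothetical, not from the paper.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A minimal user/task context, in the spirit of the paper's context model."""
    user_tags: set
    task_tags: set

@dataclass
class Service:
    name: str
    tags: set

def recommend(services, ctx, top_n=2):
    """Rank services by tag overlap with the combined user/task context."""
    def score(svc):
        return len(svc.tags & (ctx.user_tags | ctx.task_tags))
    return [s.name for s in sorted(services, key=score, reverse=True)[:top_n]]

catalog = [
    Service("DrugLookup",  {"pharmacogenomics", "drug"}),
    Service("SeqAlign",    {"sequence", "alignment"}),
    Service("SNPAnnotate", {"snp", "pharmacogenomics"}),
]
ctx = Context(user_tags={"drug"}, task_tags={"pharmacogenomics", "snp"})
picks = recommend(catalog, ctx)
```

A real implementation would score against richer context (task history, data types, provenance), but the selection step has this shape: context in, ranked candidate services out.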
CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.
Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J
2015-01-01
CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.
Semantic Web repositories for genomics data using the eXframe platform
2014-01-01
Background With the advent of inexpensive assay technologies, there has been unprecedented growth in genomics data as well as in the number of databases in which it is stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases, very difficult. Methods To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second-generation model now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data are automatically produced using these mappings and indexed in an RDF store with a built-in SPARQL (SPARQL Protocol and RDF Query Language) endpoint. Conclusions Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate them with heterogeneous resources and make them interoperable with the vast Semantic Web of biomedical knowledge. PMID:25093072
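The mapping-to-triples idea can be sketched without any RDF library: represent each statement as a (subject, predicate, object) tuple and answer basic graph patterns by filtering, which is what a SPARQL engine does for a single triple pattern. The namespace and predicate names are illustrative, not the actual eXframe ontology terms.

```python
# A toy triple store in the spirit of eXframe's RDF output.
EX = "http://example.org/"

def experiment_to_triples(exp):
    """Map a dict describing an experiment onto subject-predicate-object triples."""
    s = EX + exp["id"]
    triples = [(s, EX + "type", EX + "GenomicsExperiment"),
               (s, EX + "assay", exp["assay"])]
    for sample in exp["biomaterials"]:
        triples.append((s, EX + "hasBiomaterial", sample))
    return triples

def match(triples, s=None, p=None, o=None):
    """A SPARQL-like basic graph pattern match: None acts as a variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

graph = experiment_to_triples(
    {"id": "exp1", "assay": "microarray", "biomaterials": ["liver", "kidney"]})
samples = [o for _, _, o in match(graph, p=EX + "hasBiomaterial")]
```

Once the data is in triple form, integration with other resources reduces to sharing identifiers and joining on them, which is the point of publishing repositories as Linked Data.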
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Langer, Steven G.; Martin, Kelly P.
1999-07-01
The purpose of this paper is to integrate multiple DICOM image webservers into the currently existing enterprise-wide web-browsable electronic medical record. Over the last six years the University of Washington has created a clinical data repository (MIND), a distributed relational database combining information from multiple departmental databases. A character cell-based view of these data, called the Mini Medical Record (MMR), has been available for four years. MINDscape, unlike the text-based MMR, provides a platform-independent, dynamic, web browser view of the MIND database that can be easily linked with medical knowledge resources on the network, like PubMed and the Federated Drug Reference. There are over 10,000 MINDscape user accounts at the University of Washington Academic Medical Centers. The weekday average number of hits to MINDscape is 35,302 and the weekday average number of individual users is 1,252. DICOM images from multiple webservers are now being viewed through the MINDscape electronic medical record.
Jiang, Guoqian; Solbrig, Harold R; Chute, Christopher G
2011-01-01
A source of semantically coded Adverse Drug Event (ADE) data can be useful for identifying common phenotypes related to ADEs. We proposed a comprehensive framework for building a standardized ADE knowledge base (called ADEpedia) by combining an ontology-based approach with Semantic Web technology. The framework comprises four primary modules: 1) an XML2RDF transformation module; 2) a data normalization module based on the NCBO Open Biomedical Annotator; 3) an RDF-store-based persistence module; and 4) a front-end module based on a Semantic Wiki for review and curation. A prototype was successfully implemented to demonstrate the capability of the system to integrate multiple drug data and ontology resources and open web services for ADE data standardization. A preliminary evaluation was performed to demonstrate the usefulness of the system, including the performance of the NCBO annotator. In conclusion, Semantic Web technology provides a highly scalable framework for ADE data source integration and standard query services.
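The role of the XML2RDF transformation module might look, in a heavily simplified sketch with an invented namespace and report schema (not ADEpedia's actual formats), like this:

```python
import xml.etree.ElementTree as ET

ADE = "http://example.org/ade/"  # illustrative namespace, not ADEpedia's

def xml2rdf(xml_text):
    """Flatten a small ADE report into subject-predicate-object triples,
    mirroring the role of ADEpedia's XML2RDF transformation module."""
    root = ET.fromstring(xml_text)
    subject = ADE + root.findtext("id")
    return [(subject, ADE + "drug", root.findtext("drug")),
            (subject, ADE + "reaction", root.findtext("reaction"))]

report = """<report>
  <id>r42</id>
  <drug>warfarin</drug>
  <reaction>bleeding</reaction>
</report>"""
triples = xml2rdf(report)
```

In the real pipeline the object values would then pass through the normalization module, which maps free-text drug and reaction strings onto ontology terms before the triples are persisted.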
Teaching bioinformatics and neuroinformatics by using free web-based tools.
Grisham, William; Schottler, Natalie A; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson
2010-01-01
This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with anatomy (Mouse Brain Library), quantitative trait locus analysis (WebQTL from GeneNetwork), bioinformatics and gene expression analyses (University of California, Santa Cruz Genome Browser, National Center for Biotechnology Information's Entrez Gene, and the Allen Brain Atlas), and information resources (PubMed). Instructors can use these various websites in concert to teach genetics from the phenotypic level to the molecular level, aspects of neuroanatomy and histology, statistics, quantitative trait locus analysis, and molecular biology (including in situ hybridization and microarray analysis), and to introduce bioinformatic resources. Students use these resources to discover 1) the region(s) of chromosome(s) influencing the phenotypic trait, 2) a list of candidate genes (narrowed by expression data), 3) the in situ pattern of a given gene in the region of interest, 4) the nucleotide sequence of the candidate gene, and 5) articles describing the gene. Teaching materials such as a detailed student/instructor's manual, PowerPoint slides, sample exams, and links to free Web resources can be found at http://mdcune.psych.ucla.edu/modules/bioinformatics.
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent, integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
LDAP: a web server for lncRNA-disease association prediction.
Lan, Wei; Li, Min; Zhao, Kaijie; Liu, Jin; Wu, Fang-Xiang; Pan, Yi; Wang, Jianxin
2017-02-01
Increasing evidence has demonstrated that long noncoding RNAs (lncRNAs) play important roles in many human diseases. Therefore, predicting novel lncRNA-disease associations would help dissect the complex mechanisms of disease pathogenesis. Some computational methods have been developed to infer lncRNA-disease associations. However, most of these methods infer lncRNA-disease associations based only on a single data resource. In this paper, we propose a new computational method to predict lncRNA-disease associations by integrating multiple biological data resources. We then implement this method as a web server for lncRNA-disease association prediction (LDAP). The input of the LDAP server is the lncRNA sequence. LDAP predicts potential lncRNA-disease associations by using a bagging SVM classifier based on lncRNA similarity and disease similarity. The web server is available at http://bioinformatics.csu.edu.cn/ldap. Contact: jxwang@mail.csu.edu.cn. Supplementary data are available at Bioinformatics online.
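Bagging, the ensemble idea behind LDAP's classifier, can be sketched with a toy base learner standing in for the SVM: bootstrap-resample the training set, fit one learner per replicate, and take a majority vote. The nearest-centroid learner, the two similarity features and the data below are illustrative only, not LDAP's actual model or data.

```python
import random

def train_centroid(rows):
    """A stand-in base learner: per-class mean of the two similarity features."""
    cents = {}
    for x, y, label in rows:
        sx, sy, n = cents.get(label, (0.0, 0.0, 0))
        cents[label] = (sx + x, sy + y, n + 1)
    return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in cents.items()}

def predict(cents, x, y):
    """Assign the class whose centroid is nearest in feature space."""
    return min(cents, key=lambda lab: (cents[lab][0] - x) ** 2
                                      + (cents[lab][1] - y) ** 2)

def bagging_predict(data, x, y, rounds=9, rng=random.Random(0)):
    """Bootstrap-resample, fit one learner per replicate, majority-vote."""
    votes = [predict(train_centroid([rng.choice(data) for _ in data]), x, y)
             for _ in range(rounds)]
    return max(set(votes), key=votes.count)

# features: (lncRNA similarity, disease similarity) -> association label
train = [(0.90, 0.80, 1), (0.80, 0.90, 1), (0.85, 0.70, 1),
         (0.10, 0.20, 0), (0.20, 0.10, 0), (0.15, 0.25, 0)]
label = bagging_predict(train, 0.88, 0.82)
```

Swapping `train_centroid` for an SVM trained on each bootstrap replicate recovers the bagging-SVM scheme the abstract names; the vote-aggregation step is unchanged.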
The Virtual Learning Commons: Supporting Science Education with Emerging Technologies
NASA Astrophysics Data System (ADS)
Pennington, D. D.; Gandara, A.; Gris, I.
2012-12-01
The Virtual Learning Commons (VLC), funded by the National Science Foundation Office of Cyberinfrastructure CI-Team Program, is a combination of Semantic Web, mash up, and social networking tools that supports knowledge sharing and innovation across scientific disciplines in research and education communities and networks. The explosion of scientific resources (data, models, algorithms, tools, and cyberinfrastructure) challenges the ability of educators to be aware of resources that might be relevant to their classes. Even when aware, it can be difficult to understand enough about those resources to develop classroom materials. Often emerging data and technologies have little documentation, especially about their application. The VLC tackles this challenge by providing mechanisms for individuals and groups of educators to organize Web resources into virtual collections, and engage each other around those collections in order to a) learn about potentially relevant resources that are available; b) design classes that leverage those resources; and c) develop course syllabi. The VLC integrates Semantic Web functionality for structuring distributed information, mash up functionality for retrieving and displaying information, and social media for discussing/rating information. We are working to provide three views of information that support educators in different ways: 1. Innovation Marketplace: supports users as they find others teaching similar courses, where they are located, and who they collaborate with; 2. Conceptual Mapper: supports educators as they organize their thinking about the content of their class and related classes taught by others; 3. Curriculum Designer: supports educators as they generate a syllabus and find Web resources that are relevant. 
This presentation will discuss the innovation and learning theories that have informed design of the VLC, hypotheses about the use of emerging technologies to support innovation in classrooms, and will include a brief demonstration of these capabilities.
Grid computing enhances standards-compatible geospatial catalogue service
NASA Astrophysics Data System (ADS)
Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang
2010-04-01
A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By reference to the catalogue service profile for the Web, an innovative information model for a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of, and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards: the International Organization for Standardization (ISO) 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkit are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. This service makes it easy to share geospatial resources and make them interoperable by using Grid technology, and it extends Grid technology into the geoscience communities.
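The registry-and-query role of such a catalogue service can be sketched with a toy in-memory registry queried by spatial extent; the record fields are a drastically reduced stand-in for ISO 19115-style metadata, and the identifiers are invented.

```python
def bbox_intersects(a, b):
    """Axis-aligned test on (west, south, east, north) bounding boxes."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

class Catalogue:
    """A toy registry in the role of an OGC catalogue service: register
    records with minimal metadata, discover them by spatial extent."""
    def __init__(self):
        self.records = []

    def register(self, identifier, title, bbox):
        self.records.append({"id": identifier, "title": title, "bbox": bbox})

    def query(self, bbox):
        return [r["id"] for r in self.records
                if bbox_intersects(r["bbox"], bbox)]

cat = Catalogue()
cat.register("modis-1", "MODIS L1B granule", (-10, 35, 5, 45))
cat.register("etm-7", "Landsat ETM+ scene", (100, 20, 105, 25))
hits = cat.query((-5, 38, 0, 42))
```

The Grid-enabled version described above layers replica selection on top of this: after discovery, the Globus data management and information services pick the optimal replica of each matched resource.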
A resource-oriented architecture for a Geospatial Web
NASA Astrophysics Data System (ADS)
Mazzetti, Paolo; Nativi, Stefano
2010-05-01
In this presentation we discuss some architectural issues in the design of an architecture for a Geospatial Web, that is, an information system for sharing geospatial resources according to the Web paradigm. The success of the Web in building a multi-purpose information space has raised questions about the possibility of adopting the same approach for systems dedicated to the sharing of more specific resources, such as geospatial information, that is, information characterized by a spatial/temporal reference. To this aim, an investigation into the nature of the Web and the validity of its paradigm for geospatial resources is required. The Web was born in the early 90's to provide "a shared information space through which people and machines could communicate" [Berners-Lee 1996]. It was originally built around a small set of specifications (e.g. URI, HTTP, HTML, etc.); however, in the last two decades several other technologies and specifications have been introduced in order to extend its capabilities. Most of them (e.g. the SOAP family) actually aimed to transform the Web into a generic distributed computing infrastructure. While these efforts were definitely successful in enabling the adoption of service-oriented approaches for machine-to-machine interactions supporting complex business processes (e.g. for e-Government and e-Business applications), they do not fit the original concept of the Web. In 2000, R. T. Fielding, one of the designers of the original Web specifications, proposed a new architectural style for distributed systems, called REST (Representational State Transfer), aiming to capture the fundamental characteristics of the Web as it was originally conceived [Fielding 2000]. In this view, the nature of the Web lies not so much in the technologies as in the way they are used. Keeping the Web architecture conformant to the REST style would then assure the scalability, extensibility and low entry barrier of the original Web.
On the contrary, systems using the same Web technologies and specifications but according to a different architectural style, despite their usefulness, should not be considered part of the Web. If the REST style captures the significant Web characteristics, then, in order to build a Geospatial Web, it is necessary that its architecture satisfy all the REST constraints. One of them is of particular importance: the adoption of a uniform interface. It prescribes that all geospatial resources must be accessed through the same interface; moreover, according to the REST style, this interface must satisfy four further constraints: a) identification of resources; b) manipulation of resources through representations; c) self-descriptive messages; and d) hypermedia as the engine of application state. In the Web, the uniform interface provides basic operations which are meaningful for generic resources. They typically implement the CRUD pattern (Create-Retrieve-Update-Delete), which has proved flexible and powerful in several general-purpose contexts (e.g. filesystem management, SQL for database management systems, etc.). Restricting the scope to a subset of resources, it would be possible to identify other generic actions which are meaningful for all of them. For example, for geospatial resources, subsetting, resampling, interpolation and coordinate reference system transformations are candidate functionalities for a uniform interface. However, an investigation is needed to clarify the semantics of those actions for different resources and, consequently, whether they can really ascend to the role of generic interface operations. Concerning point a) (identification of resources), it is required that every resource addressable in the Geospatial Web have its own identifier (e.g. a URI). This makes it possible to implement citation and reuse of resources simply by providing the URI.
OPeNDAP and KVP encodings of OGC data access service specifications might provide a basis for it. Concerning point b) (manipulation of resources through representations), the Geospatial Web poses several issues. In fact, while the Web mainly handles semi-structured information, in the Geospatial Web the information is typically structured according to several possible data models (e.g. point series, gridded coverages, trajectories, etc.) and encodings. A possibility would be to simplify the interchange formats, choosing to support a subset of data models and formats. This is what the Web designers actually did in choosing to define a common format for hypermedia (HTML), even though the underlying protocol is generic. Concerning point c) (self-descriptive messages), the exchanged messages should describe themselves and their content. This would not be a major issue, considering the effort put in recent years into geospatial metadata models and specifications. Point d), hypermedia as the engine of application state, is actually where the Geospatial Web would differ most from existing geospatial information sharing systems. In fact, existing systems typically adopt a service-oriented architecture, where applications are built as a single service or as a workflow of services. In the Geospatial Web, on the other hand, applications should be built by following the path between interconnected resources. The links between resources should be made explicit as hyperlinks. The adoption of Semantic Web solutions would make it possible to define not only the existence of a link between two resources, but also the nature of the link. The implementation of a Geospatial Web would make it possible to build an information system with the same characteristics as the Web, sharing its points of strength and weaknesses. The main advantages would be the following: • The user would interact with the Geospatial Web according to the well-known Web navigation paradigm.
This would lower the barrier to access to geospatial applications for non-specialists (cf. the success of Google Maps and other Web mapping applications); • Successful Web and Web 2.0 applications (search engines, feeds, social networks) could be integrated or replicated in the Geospatial Web. The main drawbacks would be the following: • The uniform interface simplifies the overall system architecture (e.g. no service registry or service descriptors are required) but moves the complexity to the data representation. Moreover, since the interface must stay generic, it ends up very simple, and therefore complex interactions would require several transfers. • In the geospatial domain, some of the most valuable resources are processes (e.g. environmental models). How they can be modelled as resources accessed through the common interface is an open issue. Taking into account advantages and drawbacks, it seems that a Geospatial Web would be useful, but its use would be limited to specific use cases not covering all possible applications. The Geospatial Web architecture could be partly based on existing specifications, while other aspects need investigation. References [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future". IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Fielding 2000] Fielding, R. T. 2000. Architectural styles and the design of network-based software architectures. PhD Dissertation. Dept. of Information and Computer Science, University of California, Irvine.
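The uniform CRUD interface discussed above, extended with one candidate geospatial operation (subsetting a gridded coverage), might be sketched as follows; the URIs, the store class and the grid representation are our own illustration, not part of any OGC or REST specification.

```python
class GeoResourceStore:
    """A uniform interface over geospatial resources: every resource has a
    URI, and the same four verbs (the CRUD pattern described in the text)
    apply to all of them. Subsetting is sketched for gridded coverages."""
    def __init__(self):
        self._resources = {}

    def create(self, uri, representation):
        self._resources[uri] = representation

    def retrieve(self, uri):
        return self._resources.get(uri)

    def update(self, uri, representation):
        if uri not in self._resources:
            raise KeyError(uri)
        self._resources[uri] = representation

    def delete(self, uri):
        self._resources.pop(uri, None)

    def subset(self, uri, rows, cols):
        """A candidate generic geospatial operation: slice a gridded coverage."""
        grid = self._resources[uri]["grid"]
        return [row[cols] for row in grid[rows]]

store = GeoResourceStore()
store.create("http://example.org/coverages/sst",
             {"type": "coverage", "grid": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]})
patch = store.subset("http://example.org/coverages/sst",
                     slice(0, 2), slice(1, 3))
```

Note how the open issue raised in the text shows up immediately: `subset` is meaningful for coverages but not obviously for trajectories or processes, which is exactly the question of whether such operations can join the generic interface.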
Kinjo, Akira R.; Suzuki, Hirofumi; Yamashita, Reiko; Ikegawa, Yasuyo; Kudou, Takahiro; Igarashi, Reiko; Kengaku, Yumiko; Cho, Hasumi; Standley, Daron M.; Nakagawa, Atsushi; Nakamura, Haruki
2012-01-01
The Protein Data Bank Japan (PDBj, http://pdbj.org) is a member of the worldwide Protein Data Bank (wwPDB) and accepts and processes the deposited data of experimentally determined macromolecular structures. While maintaining the archive in collaboration with other wwPDB partners, PDBj also provides a wide range of services and tools for analyzing structures and functions of proteins, which are summarized in this article. To enhance the interoperability of the PDB data, we have recently developed PDB/RDF, PDB data in the Resource Description Framework (RDF) format, along with its ontology in the Web Ontology Language (OWL) based on the PDB mmCIF Exchange Dictionary. Being in the standard format for the Semantic Web, the PDB/RDF data provide a means to integrate the PDB with other biological information resources. PMID:21976737
The Electron Microscopy Outreach Program: A Web-based resource for research and education.
Sosinsky, G E; Baker, T S; Hand, G; Ellisman, M H
1999-01-01
We have developed a centralized World Wide Web (WWW)-based environment that serves as a resource of software tools and expertise for biological electron microscopy. A major focus is molecular electron microscopy, but the site also includes information and links on structural biology at all levels of resolution. This site serves to help integrate or link structural biology techniques in accordance with user needs. The WWW site, called the Electron Microscopy (EM) Outreach Program (URL: http://emoutreach.sdsc.edu), provides scientists with computational and educational tools for their research and edification. In particular, we have set up a centralized resource containing course notes, references, and links to image analysis and three-dimensional reconstruction software for investigators wanting to learn about EM techniques either within or outside of their fields of expertise. Copyright 1999 Academic Press.
Integration of bicycling and walking facilities into the infrastructure of urban communities.
DOT National Transportation Integrated Search
2012-02-01
Several manuals, handbooks and web resources exist to provide varied guidance on planning for and designing bicycle and pedestrian facilities, yet there are no specific indications about which of the varied treatments in these guides work well for us...
NEIBank: Genomics and bioinformatics resources for vision research
Peterson, Katherine; Gao, James; Buchoff, Patee; Jaworski, Cynthia; Bowes-Rickman, Catherine; Ebright, Jessica N.; Hauser, Michael A.; Hoover, David
2008-01-01
NEIBank is an integrated resource for genomics and bioinformatics in vision research. It includes expressed sequence tag (EST) data and sequence-verified cDNA clones for multiple eye tissues of several species, web-based access to human eye-specific SAGE data through EyeSAGE, and comprehensive, annotated databases of known human eye disease genes and candidate disease gene loci. All expression- and disease-related data are integrated in EyeBrowse, an eye-centric genome browser. NEIBank provides a comprehensive overview of current knowledge of the transcriptional repertoires of eye tissues and their relation to pathology. PMID:18648525
Userscripts for the life sciences.
Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J
2007-12-21
The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources is extracted using open Application Programming Interfaces, while common Uniform Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server.
This flexibility allows the scientists to tune the features of web resources to optimise their productivity.
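The deep-linking technique the paper describes (building links into external resources from identifiers found on a page via common URL schemes) can be sketched in a few lines; shown here in Python rather than the JavaScript the userscripts themselves use, and with URL templates that are examples of common schemes rather than a guaranteed-current list.

```python
# Deep-link templates keyed by identifier type; the userscripts in the paper
# do the same from JavaScript inside the browser page.
TEMPLATES = {
    "pmid":    "https://pubmed.ncbi.nlm.nih.gov/{id}/",
    "doi":     "https://doi.org/{id}",
    "uniprot": "https://www.uniprot.org/uniprot/{id}",
}

def deep_links(identifiers):
    """Turn (kind, id) pairs scraped from a page into clickable deep links,
    silently skipping identifier kinds we have no template for."""
    return [TEMPLATES[kind].format(id=value)
            for kind, value in identifiers if kind in TEMPLATES]

links = deep_links([("pmid", "18154664"), ("doi", "10.1000/182")])
```

In a real userscript, the (kind, id) pairs come from parsing the live DOM, and the resulting links are injected back into the page, which is the on-the-fly modification the abstract refers to.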
Userscripts for the Life Sciences
Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J
2007-01-01
Background The web has seen an explosion of chemistry and biology related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like Resource Description Framework and Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages, by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Results Several userscripts are presented that enrich biology and chemistry related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources are extracted using open Application Programming Interfaces, while common Universal Resource Locator schemes are used to make deep links to related information in that external resource. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. Conclusion This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. 
Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity. PMID:18154664
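The deep-linking idea in the abstract above, that identifiers found on one page can be turned into links into other resources via their stable URL schemes, can be sketched as follows. This is an illustrative sketch in Python rather than an actual userscript; the resource templates listed are examples of well-known URL schemes, not the set used by the paper.

```python
# Sketch of URL-scheme deep linking: many life-science resources expose
# stable URL patterns, so a page that mentions an identifier can link
# straight to the corresponding record elsewhere.
# The templates below are illustrative, not an exhaustive or official list.

URL_SCHEMES = {
    "pubmed": "https://pubmed.ncbi.nlm.nih.gov/{id}/",
    "pdb": "https://www.rcsb.org/structure/{id}",
    "uniprot": "https://www.uniprot.org/uniprotkb/{id}",
}

def deep_link(resource: str, identifier: str) -> str:
    """Build a deep link into an external resource for a given identifier."""
    try:
        template = URL_SCHEMES[resource]
    except KeyError:
        raise ValueError(f"no URL scheme registered for {resource!r}")
    return template.format(id=identifier)

print(deep_link("pubmed", "18154664"))
# → https://pubmed.ncbi.nlm.nih.gov/18154664/
```

A real userscript would scan the page for identifiers and rewrite the DOM; the mapping step shown here is the same either way.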
Globus | Informatics Technology for Cancer Research (ITCR)
Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.
The Development of GIS Educational Resources Sharing among Central Taiwan Universities
NASA Astrophysics Data System (ADS)
Chou, T.-Y.; Yeh, M.-L.; Lai, Y.-C.
2011-09-01
Using GIS in the classroom enhances students' computer skills and broadens their range of knowledge. This paper highlights GIS integration on an e-learning platform and introduces a variety of rich educational resources. The research project demonstrates tools for an e-learning environment and presents case studies of learning interaction from Central Taiwan universities. Feng Chia University (FCU) obtained a remarkable academic project subsidized by the Ministry of Education and developed an e-learning platform for excellence in teaching and learning programs among central Taiwan's universities. The aim of the project is to integrate the educational resources of 13 universities in central Taiwan, with FCU serving as the hub university. To overcome the problem of distance, e-platforms have been established to create experiences with collaboration-enhanced learning. The e-platforms coordinate web service access among the educational community and deliver GIS educational resources. Most GIS-related courses cover the development of GIS, principles of cartography, spatial data analysis and overlaying, terrain analysis, buffer analysis, 3D GIS applications, remote sensing, GPS technology, WebGIS, MobileGIS, and ArcGIS manipulation. In each GIS case study, students are taught to understand geographic meaning, collect spatial data, and then use ArcGIS software to analyze the spatial data. One of the e-learning platforms provides lesson plans and presentation slides, so students can learn ArcGIS online. As they analyze spatial data, they can connect to the GIS hub to get the data they need, including satellite images, aerial photos, and vector data. Moreover, the e-learning platforms provide solutions and resources. Different levels of image scales have been integrated into the systems. Multi-scale spatial development and analyses in central Taiwan integrate academic research resources among CTTLRC partners.
This establishes a decision-support mechanism for teaching and learning and accelerates communication, cooperation, and sharing among academic units.
Integrated Patient Education on U.S. Hospital Web Sites.
Huang, Edgar; Wu, Kerong; Edwards, Kelsey
2016-01-01
Based on a census of the 2015 Most Wired Hospitals, this content analysis aimed to find out how patient education has been integrated on these IT-leading hospitals' Web sites to serve the purposes of marketing and meeting online visitors' needs. This study will help hospitals understand where the weaknesses are in their interactive patient education implementation and develop a smart integration strategy. The study found that 70% of these hospitals had adopted interactive patient education content, 76.6% of such content was from a third-party developer, and only 20% of the hospitals linked their patient education content to one or more of the hospital's resources, while 26% cross-referenced such content. The authors concluded that more hospitals should take advantage of modern information communication technology to cross-reference their patient education content and to integrate such content into their overall online marketing strategy to benefit patients and themselves.
A Semantic Grid Oriented to E-Tourism
NASA Astrophysics Data System (ADS)
Zhang, Xiao Ming
With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies such as the semantic Web, Web services, agents, and grid computing have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization, and intelligent agents. The paper finally describes the implementation of the framework.
Development and implementation of an Integrated Water Resources Management System (IWRMS)
NASA Astrophysics Data System (ADS)
Flügel, W.-A.; Busch, C.
2011-04-01
One of the innovative objectives in the EC project BRAHMATWINN was the development of a stakeholder-oriented Integrated Water Resources Management System (IWRMS). The toolset integrates the findings of the project and presents them in a user-friendly way for decision support in sustainable integrated water resources management (IWRM) in river basins. IWRMS is a framework that integrates different types of basin information and supports the development of IWRM options for climate change mitigation. It is based on the River Basin Information System (RBIS) data models and delivers a graphical user interface for stakeholders. A special interface was developed for the integration of the enhanced DANUBIA model input and the NetSyMod model with its Mulino decision support system (mulino mDss) component. The web-based IWRMS contains and combines different types of data and methods to provide river basin data and information for decision support. IWRMS is based on a three-tier software framework which uses (i) HTML/JavaScript at the client tier, (ii) the PHP programming language to realize the application tier, and (iii) a PostgreSQL/PostGIS database tier to manage and store all data, except the DANUBIA modelling raw data, which are file based and registered in the database tier. All three tiers can reside on one or different computers and are adapted to the local hardware infrastructure. IWRMS as well as RBIS are based on Open Source Software (OSS) components, and flexible, time-saving access to the database is guaranteed by web-based interfaces for data visualization and retrieval. The IWRMS is accessible via the BRAHMATWINN homepage: http://www.brahmatwinn.uni-jena.de, and a user manual for the RBIS is available for download as well.
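The database-tier pattern described in this record, where raw model output stays on disk as files and the database only registers them, can be sketched in a few lines. SQLite stands in here for the PostgreSQL/PostGIS tier, and the table, column names, and file paths are invented for illustration; they are not taken from the IWRMS schema.

```python
# Minimal sketch of a "register files, don't store them" database tier.
# Raw model output (e.g. DANUBIA-style runs) remains file based; the
# database holds only metadata and paths. All names here are assumptions.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE model_runs (
    run_id INTEGER PRIMARY KEY,
    model  TEXT,
    basin  TEXT,
    path   TEXT)""")

# Register file-based outputs instead of loading their contents:
runs = [
    ("DANUBIA", "Upper Brahmaputra", "/data/runs/ub_2020_baseline.nc"),
    ("DANUBIA", "Upper Danube",      "/data/runs/ud_2020_baseline.nc"),
]
db.executemany(
    "INSERT INTO model_runs (model, basin, path) VALUES (?, ?, ?)", runs)

# The application tier can then resolve a basin to its raw-data files:
for (path,) in db.execute(
        "SELECT path FROM model_runs WHERE basin = ?", ("Upper Danube",)):
    print(path)
```

The separation keeps bulk data out of the database while still giving web interfaces a single place to query for what exists and where.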
Information Literacy and the Introductory Management Classroom
ERIC Educational Resources Information Center
Leigh, Jennifer S. A.; Gibbon, Cynthia A.
2008-01-01
This article proposes that the integration of information literacy standards into the management classroom can address underdeveloped student research strategies and promote effective use of print, digital, and free Web resources. Incorporating information literacy can support management educators in their need to balance disciplinary content,…
Health Oasis in the Desert Southwest.
ERIC Educational Resources Information Center
Barrett, Julia R.
2001-01-01
Community outreach and education at the Southwest Environmental Health Sciences Center (University of Arizona, Tucson) features a Web site on toxicology and environmental health with resources for secondary teachers and students, an integrated high school curriculum with an environmental health sciences theme, teacher workshops, outreach to…
Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring
Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.
2008-01-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real time; and 4) remote network management and reprogramming tools.
The space and in-situ control components of the system will be integrated such that each element is capable of autonomously tasking the other. Sensor-web data acquisition and dissemination will be accomplished through the use of the Open Geospatial Consortium Sensor Web Enablement protocols. The three-year project will demonstrate end-to-end system performance with the in-situ test-bed at Mount St. Helens and NASA's EO-1 platform. ©2008 IEEE.
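The priority-based bandwidth allocation mentioned in this record, where nodes rank packets by mission need and fill a limited bandwidth budget highest-priority-first, can be sketched as a simple greedy selection. This is an illustrative model, not the OASIS algorithm; the packet fields, priorities, and budget are invented.

```python
# Illustrative sketch (not the OASIS implementation) of priority-driven
# bandwidth allocation: fill a limited per-cycle byte budget with the
# highest-priority packets first, skipping packets that no longer fit.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str
    size_bytes: int
    priority: int  # higher = more mission-critical (e.g. eruption trigger)

def allocate(packets, budget_bytes):
    """Select packets to transmit this cycle, highest priority first."""
    chosen = []
    for pkt in sorted(packets, key=lambda p: p.priority, reverse=True):
        if pkt.size_bytes <= budget_bytes:
            chosen.append(pkt)
            budget_bytes -= pkt.size_bytes
    return chosen

queue = [
    Packet("node-3 seismic trigger", 400, priority=9),
    Packet("node-1 housekeeping",    300, priority=2),
    Packet("node-2 GPS position",    200, priority=5),
]
for pkt in allocate(queue, budget_bytes=650):
    print(pkt.source)
```

With a 650-byte budget, the seismic trigger (400 bytes) and GPS fix (200 bytes) are sent and the low-priority housekeeping packet waits for the next cycle; a real node would also fold in local bandwidth estimates when ranking.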
The RCSB protein data bank: integrative view of protein, gene and 3D structural information
Rose, Peter W.; Prlić, Andreas; Altunkaya, Ali; Bi, Chunxiao; Bradley, Anthony R.; Christie, Cole H.; Costanzo, Luigi Di; Duarte, Jose M.; Dutta, Shuchismita; Feng, Zukang; Green, Rachel Kramer; Goodsell, David S.; Hudson, Brian; Kalro, Tara; Lowe, Robert; Peisach, Ezra; Randle, Christopher; Rose, Alexander S.; Shao, Chenghua; Tao, Yi-Ping; Valasatava, Yana; Voigt, Maria; Westbrook, John D.; Woo, Jesse; Yang, Huangwang; Young, Jasmine Y.; Zardecki, Christine; Berman, Helen M.; Burley, Stephen K.
2017-01-01
The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB, http://rcsb.org), the US data center for the global PDB archive, makes PDB data freely available to all users, from structural biologists to computational biologists and beyond. New tools and resources have been added to the RCSB PDB web portal in support of a ‘Structural View of Biology.’ Recent developments have improved the User experience, including the high-speed NGL Viewer that provides 3D molecular visualization in any web browser, improved support for data file download and enhanced organization of website pages for query, reporting and individual structure exploration. Structure validation information is now visible for all archival entries. PDB data have been integrated with external biological resources, including chromosomal position within the human genome; protein modifications; and metabolic pathways. PDB-101 educational materials have been reorganized into a searchable website and expanded to include new features such as the Geis Digital Archive. PMID:27794042
IntegromeDB: an integrated system and biological search engine.
Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia
2012-01-19
With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.
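The Semantic Web data model underlying engines like this one (and the WikiPathways endpoint described earlier) stores knowledge as subject-predicate-object triples and answers queries by pattern matching. A minimal in-memory sketch follows; real systems use RDF stores and SPARQL, and the example triples are illustrative rather than drawn from IntegromeDB.

```python
# Minimal sketch of triple-based retrieval: facts are (subject, predicate,
# object) statements, and a query is a pattern where None is a wildcard.
# The gene/protein facts below are small illustrative examples.

TRIPLES = {
    ("TP53", "encodes", "Cellular tumor antigen p53"),
    ("TP53", "associated_with", "Li-Fraumeni syndrome"),
    ("TP53", "interacts_with", "MDM2"),
    ("MDM2", "encodes", "E3 ubiquitin-protein ligase Mdm2"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return sorted(
        (s, p, o) for (s, p, o) in TRIPLES
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    )

# Gene-centered retrieval: everything known about TP53.
for s, p, o in query(subject="TP53"):
    print(s, p, o)
```

A SPARQL pattern such as `?s ?p ?o` filters a graph in exactly this way, just at scale and across federated endpoints.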
The value of the Semantic Web in the laboratory.
Frey, Jeremy G
2009-06-01
The Semantic Web is beginning to have an impact on the wider chemical and physical sciences, beyond bioinformatics, which adopted it earlier. While useful in large-scale data-driven science with automated processing, these technologies can also help integrate the work of smaller-scale laboratories producing diverse data. The semantics aid discovery and reliable re-use of data, provide improved provenance, and facilitate automated processing through increased resilience to changes in presentation and reduced ambiguity. The Semantic Web, its tools and collections are not yet competitive with well-established solutions to current problems. It is in the reduced cost of instituting solutions to new problems that the versatility of Semantic Web-enabled data and resources will make its mark, once the more general-purpose tools become more available.
Soybean Knowledge Base (SoyKB): a Web Resource for Soybean Translational Genomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Trupti; Patil, Kapil; Fitzpatrick, Michael R.
2012-01-17
Background: Soybean Knowledge Base (SoyKB) is a comprehensive, all-inclusive web resource for soybean translational genomics. SoyKB is designed to handle the management and integration of soybean genomics, transcriptomics, proteomics and metabolomics data along with annotation of gene function and biological pathways. It contains information on four entities, namely genes, microRNAs, metabolites and single nucleotide polymorphisms (SNPs). Methods: SoyKB has many useful tools such as Affymetrix probe ID search, gene family search, multiple gene/metabolite search supporting co-expression analysis, and a protein 3D structure viewer, as well as download and upload capacity for experimental data and annotations. It has four tiers of registration, which control different levels of access to public and private data. It allows users of certain levels to share their expertise by adding comments to the data. It has a user-friendly web interface together with a genome browser and pathway viewer, which display data in an intuitive manner to soybean researchers, producers and consumers. Conclusions: SoyKB addresses the increasing need of the soybean research community to have a one-stop-shop functional and translational omics web resource for information retrieval and analysis in a user-friendly way. SoyKB can be publicly accessed at http://soykb.org/.
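The tiered registration scheme mentioned in this record, where higher tiers unlock access to more private data, reduces to a simple ordered comparison. The tier names and the rule below are assumptions for illustration; they are not SoyKB's actual policy.

```python
# Hedged sketch of tiered access control: a user may see data whose
# required visibility tier does not exceed their own registration tier.
# Tier names and ordering are invented for this example.

TIERS = {"public": 0, "registered": 1, "collaborator": 2, "owner": 3}

def can_access(user_tier: str, data_visibility: str) -> bool:
    """True if the user's tier is at least the data's required tier."""
    return TIERS[user_tier] >= TIERS[data_visibility]

print(can_access("registered", "public"))        # anyone sees public data
print(can_access("registered", "collaborator"))  # private data stays hidden
```

Real systems layer per-dataset ownership and sharing rules on top, but the ordered-tier comparison is the core of the model.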
COEUS: “semantic web in a box” for biomedical applications
2012-01-01
Background: As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results: COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions: The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/. PMID:23244467
COEUS: "semantic web in a box" for biomedical applications.
Lopes, Pedro; Oliveira, José Luís
2012-12-17
As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
An Improved Publication Process for the UMVF.
Renard, Jean-Marie; Brunetaud, Jean-Marc; Cuggia, Marc; Darmoni, Stephan; Lebeux, Pierre; Beuscart, Régis
2005-01-01
The "Université Médicale Virtuelle Francophone" (UMVF) is a federation of French medical schools. Its main goal is to share the production and use of pedagogic medical resources generated by academic medical teachers. We developed an Open-Source application based upon a workflow system which provides an improved publication process for the UMVF. For teachers, the tool permits easy and efficient upload of new educational resources. For web masters it provides a mechanism to easily locate and validate the resources. For both the teachers and the web masters, the utility provides the control and communication functions that define a workflow system. For all users, students in particular, the application improves the value of the UMVF repository by providing an easy way to find a detailed description of a resource and to check any resource from the UMVF to ascertain its quality and integrity, even if the resource is an old deprecated version. The server tier of the application is used to implement the main workflow functionalities and is deployed on certified UMVF servers using the PHP language, an LDAP directory and an SQL database. The client tier of the application provides both the workflow and the search and check functionalities and is implemented using a Java applet through a W3C compliant web browser. A unique signature for each resource was needed to provide security functionality and is implemented using the MD5 digest algorithm. The testing performed by Rennes and Lille verified the functionality and conformity with our specifications.
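The per-resource MD5 signature mentioned above is a content digest: anyone holding the published signature can later recompute it to check that a resource has not been altered. A minimal sketch of that idea (not the UMVF code) follows; note that MD5 is no longer collision-resistant and is shown here only because it is the algorithm the record names.

```python
# Sketch of resource integrity checking via a content digest.
# MD5 matches the abstract; modern systems would prefer SHA-256.

import hashlib

def resource_signature(content: bytes) -> str:
    """Return the MD5 digest of a resource's bytes as a hex string."""
    return hashlib.md5(content).hexdigest()

original = b"Lecture notes v1: cardiology basics"
signature = resource_signature(original)

# Later, recomputing the digest detects any modification:
assert resource_signature(original) == signature            # untouched
assert resource_signature(original + b"!") != signature     # tampered
print(signature)
```

Publishing the signature alongside the resource lets students and web masters verify quality and integrity even for old, deprecated versions, exactly the use case the abstract describes.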
Proposition and Organization of an Adaptive Learning Domain Based on Fusion from the Web
ERIC Educational Resources Information Center
Chaoui, Mohammed; Laskri, Mohamed Tayeb
2013-01-01
The Web allows self-navigated education through interaction with large amounts of Web resources. While enjoying the flexibility of Web tools, authors may struggle to find and filter Web resources when they face varied resource formats and complex structures. An adaptation of extracted Web resources must be assured by authors, to give…
Chapman, Ann LN; Darton, Thomas C; Foster, Rachel A
2013-01-01
Tuberculosis (TB) remains a global health emergency. Ongoing challenges include the coordination of national and international control programs, high levels of drug resistance in many parts of the world, and availability of accurate and rapid diagnostic tests. The increasing availability and reliability of Internet access throughout both affluent and resource-limited countries brings new opportunities to improve TB management and control through the integration of web-based technologies with traditional approaches. In this review, we explore current and potential future use of web-based tools in the areas of TB diagnosis, treatment, epidemiology, service monitoring, and teaching and training. PMID:24294008
Jue, J Jane S; Metlay, Joshua P
2011-11-01
Web-based health resources on college websites have the potential to reach a substantial number of college students. The objective of this study was to characterize how colleges use their websites to educate about and promote health. This study was a cross-sectional analysis of websites from a nationally representative sample of 426 US colleges. Reviewers abstracted information about Web-based health resources from college websites, namely health information, Web links to outside health resources, and interactive Web-based health programs. Nearly 60% of US colleges provided health resources on their websites, 49% provided health information, 48% provided links to outside resources, and 28% provided interactive Web-based health programs. The most common topics of Web-based health resources were mental health and general health. We found widespread presence of Web-based health resources available from various delivery modes and covering a range of health topics. Although further research in this new modality is warranted, Web-based health resources hold promise for reaching more US college students.
Yan, Xianghe; Peng, Yun; Meng, Jianghong; Ruzante, Juliana; Fratamico, Pina M; Huang, Lihan; Juneja, Vijay; Needleman, David S
2011-01-01
Several factors have hindered effective use of information and resources related to food safety: inconsistency among semantically heterogeneous data resources, lack of knowledge on profiling of food-borne pathogens, and knowledge gaps among research communities, government risk assessors/managers, and end-users of the information. This paper discusses technical aspects in the establishment of a comprehensive food safety information system consisting of the following steps: (a) computational collection and compiling of publicly available information, including published pathogen genomic, proteomic, and metabolomic data; (b) development of ontology libraries on food-borne pathogens and design of automatic algorithms with formal inference and fuzzy and probabilistic reasoning to address the consistency and accuracy of distributed information resources (e.g., PulseNet, FoodNet, OutbreakNet, PubMed, NCBI, EMBL, and other online genetic databases and information); (c) integration of collected pathogen profiling data, Foodrisk.org (http://www.foodrisk.org), PMP, ComBase, and other relevant information into a user-friendly, searchable, "homogeneous" information system available to scientists in academia, the food industry, and government agencies; and (d) development of a computational model in the semantic web for greater adaptability and robustness.
Chesapeake Bay Program Water Quality Database
The Chesapeake Information Management System (CIMS), designed in 1996, is an integrated, accessible information management system for the Chesapeake Bay Region. CIMS is an organized, distributed library of information and software tools designed to increase basin-wide public access to Chesapeake Bay information. The information delivered by CIMS includes technical and public information, educational material, environmental indicators, policy documents, and scientific data. Through the use of relational databases, web-based programming, and web-based GIS, a large number of Internet resources have been established. These resources include multiple distributed on-line databases, on-demand graphing and mapping of environmental data, and geographic searching tools for environmental information. Also available are baseline monitoring data, summarized data, and environmental indicators that document ecosystem status and trends and confirm linkages between water quality, habitat quality and abundance, and the distribution and integrity of biological populations. One of the major features of the CIMS network is the Chesapeake Bay Program's Data Hub, providing users access to a suite of long-term water quality and living resources databases. Chesapeake Bay mainstem and tidal tributary water quality, benthic macroinvertebrate, toxics, plankton, and fluorescence data can be obtained for a network of over 800 monitoring stations.
Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J
2011-07-01
The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.
Distribution of immunodeficiency fact files with XML--from Web to WAP.
Väliaho, Jouni; Riikonen, Pentti; Vihinen, Mauno
2005-06-26
Although biomedical information is growing rapidly, it is difficult to find and retrieve validated data, especially for rare hereditary diseases. There is an increased need for services capable of integrating and validating information as well as providing it in a logically organized structure. An XML-based language enables the creation of open source databases for storage, maintenance and delivery to different platforms. Here we present a new data model called the fact file and an XML-based specification, the Inherited Disease Markup Language (IDML), that were developed to facilitate disease information integration, storage and exchange. The data model was applied to primary immunodeficiencies, but it can be used for any hereditary disease. Fact files integrate biomedical, genetic and clinical information related to hereditary diseases. IDML and fact files were used to build a comprehensive Web- and WAP-accessible knowledge base, the ImmunoDeficiency Resource (IDR), available at http://bioinf.uta.fi/idr/. A fact file is a user-oriented interface that serves as a starting point to explore information on hereditary diseases. IDML enables the seamless integration and presentation of genetic and disease information resources on the Internet. IDML can be used to build information services for all kinds of inherited diseases. The open source specification and related programs are available at http://bioinf.uta.fi/idml/.
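The fact-file idea above, disease information kept as structured XML that any front end (Web or WAP) can parse and render, can be sketched with the standard library. The element and attribute names below are invented for illustration; the real IDML specification defines its own vocabulary.

```python
# Sketch of a disease "fact file" as structured XML: build it once,
# serialize it, and let any consumer parse the same document.
# Element/attribute names are assumptions, not the IDML vocabulary.

import xml.etree.ElementTree as ET

fact_file = ET.Element("factfile", disease="X-linked agammaglobulinemia")
gene = ET.SubElement(fact_file, "gene", symbol="BTK")
ET.SubElement(gene, "location").text = "Xq22.1"
ET.SubElement(fact_file, "inheritance").text = "X-linked recessive"

xml_text = ET.tostring(fact_file, encoding="unicode")
print(xml_text)

# Any consumer (Web page generator, WAP gateway, ...) parses it back:
parsed = ET.fromstring(xml_text)
print(parsed.find("gene").get("symbol"))
```

Because the markup separates content from presentation, the same fact file can feed both rich Web pages and the much leaner WAP views the paper targets.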
The Virtual Watershed Observatory: Cyberinfrastructure for Model-Data Integration and Access
NASA Astrophysics Data System (ADS)
Duffy, C.; Leonard, L. N.; Giles, L.; Bhatt, G.; Yu, X.
2011-12-01
The Virtual Watershed Observatory (VWO) is a concept whereby scientists, water managers, educators and the general public can create a virtual observatory from integrated hydrologic model results, national databases and historical or real-time observations via web services. In this paper, we propose a prototype for automated and virtualized web services software using national data products for climate reanalysis, soils, geology, terrain and land cover. The VWO has the broad purpose of making water resource simulations, real-time data assimilation, calibration and archival accessible at the scale of HUC 12 (Hydrologic Unit Code) watersheds anywhere in the continental US. Our prototype for model-data integration focuses on creating tools for fast data storage from selected national databases, as well as the computational resources necessary for a dynamic, distributed watershed simulation. The paper describes cyberinfrastructure tools and a workflow that attempt to resolve the problem of model-data accessibility and scalability, such that individuals, research teams, managers and educators can create a VWO in a desired context. Examples are given for the NSF-funded Shale Hills Critical Zone Observatory and the European Critical Zone Observatories within the SoilTrEC project. In the future, implementation of VWO services will benefit from the development of a cloud cyberinfrastructure as the prototype evolves to data- and model-intensive computation for continental-scale water resource predictions.
Vigi4Med Scraper: A Framework for Web Forum Structured Data Extraction and Semantic Representation
Audeh, Bissan; Beigbeder, Michel; Zimmermann, Antoine; Jaillon, Philippe; Bousquet, Cédric
2017-01-01
The extraction of information from social media is an essential yet complicated step for data analysis in multiple domains. In this paper, we present Vigi4Med Scraper, a generic open source framework for extracting structured data from web forums. Our framework is highly configurable; using a configuration file, the user can freely choose the data to extract from any web forum. The extracted data are anonymized and represented in a semantic structure using Resource Description Framework (RDF) graphs. This representation enables efficient manipulation by data analysis algorithms and allows the collected data to be directly linked to any existing semantic resource. To avoid server overload, an integrated proxy with caching functionality imposes a minimal delay between sequential requests. Vigi4Med Scraper represents the first step of Vigi4Med, a project funded by the French drug safety agency Agence Nationale de Sécurité du Médicament (ANSM) to detect adverse drug reactions (ADRs) in social networks. Vigi4Med Scraper has successfully extracted more than 200 gigabytes of data from the web forums of over 20 different websites. PMID:28122056
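Two steps this record describes, anonymizing forum authors and representing extracted posts as RDF, can be sketched together. The salted-hash pseudonym scheme and the vocabulary URIs below are illustrative assumptions, not Vigi4Med's actual design.

```python
# Hedged sketch: replace usernames with stable, non-reversible pseudonyms,
# then emit the post as N-Triples statements. The hashing scheme and the
# example.org vocabulary are assumptions made for this illustration.

import hashlib

def pseudonym(username: str, salt: str = "per-deployment-secret") -> str:
    """Map a username to a stable pseudonym without keeping the original."""
    return hashlib.sha256((salt + username).encode()).hexdigest()[:12]

def post_triples(author: str, post_id: str, text: str):
    subject = f"<http://example.org/post/{post_id}>"
    return [
        f'{subject} <http://example.org/author> "{pseudonym(author)}" .',
        f'{subject} <http://example.org/content> "{text}" .',
    ]

for line in post_triples("jane_doe", "42", "Headache after starting drug X"):
    print(line)
```

The same author always maps to the same pseudonym, so analyses can still follow a poster across threads, while the identity itself never enters the RDF graph.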
Science and Technology Resources on the Internet: Computer Security.
ERIC Educational Resources Information Center
Kinkus, Jane F.
2002-01-01
Discusses issues related to computer security, including confidentiality, integrity, and authentication or availability; and presents a selected list of Web sites that cover the basic issues of computer security under subject headings that include ethics, privacy, kids, antivirus, policies, cryptography, operating system security, and biometrics.…
A Case Study of Technology-Enhanced Historical Inquiry
ERIC Educational Resources Information Center
Yang, Shu Ching
2009-01-01
The paper describes the integration of web resources and technology as instructional and learning tools in oral history projects. The computer-mediated oral history project centred around interviews with community elders combined with new technologies to engage students in authentic historical inquiry. The study examined learners' affective…
Curating Virtual Data Collections
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Leon, Amanda; Ramapriyan, Hampapuram; Tsontos, Vardis; Shie, Chung-Lin; Liu, Zhong
2015-01-01
NASA's Earth Observing System Data and Information System (EOSDIS) contains a rich set of datasets and related services throughout its many elements. As a result, locating all the EOSDIS data and related resources relevant to a particular science theme can be daunting. This is largely because the organization of EOSDIS data is shaped more by the way the data are produced than by the expected end use. Virtual collections oriented around science themes can overcome this by presenting collections of data and related resources that are organized around the user's interest, not around the way the data were produced. Virtual collections consist of annotated web addresses (URLs) that point to data and related resources, thus avoiding the need to copy all of the relevant data to a single place. These URLs can be consumed by a variety of clients, ranging from basic URL downloaders (wget, curl) and web browsers to sophisticated data analysis programs such as the Integrated Data Viewer.
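The "annotated URLs" idea can be made concrete with a small standard-library Python sketch. The manifest format and field names below are invented for illustration; EOSDIS virtual collections are not specified in this abstract.

```python
import json

# A virtual collection: annotated URLs rather than copied data.
# Record fields ("url", "title", "themes") are illustrative, not an EOSDIS schema.
MANIFEST = json.loads("""
[
  {"url": "https://example.org/granule/a.nc", "title": "Precipitation L3",
   "themes": ["precipitation", "climate"]},
  {"url": "https://example.org/granule/b.nc", "title": "Soil moisture L2",
   "themes": ["hydrology"]},
  {"url": "https://example.org/doc/readme.txt", "title": "User guide",
   "themes": ["precipitation"]}
]
""")


def select_by_theme(records, theme):
    """Return the URLs of all resources annotated with the given science theme."""
    return [r["url"] for r in records if theme in r["themes"]]
```

The returned URL list could then be handed to wget, curl or a browser, exactly as the abstract describes for virtual-collection clients.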
A Web simulation of medical image reconstruction and processing as an educational tool.
Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos
2015-02-01
Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical image reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand their impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee to the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in pre- and post-training questionnaires to assess their knowledge gain. Initial feedback from a group of graduate medical students showed that the developed course was considered effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.
Matching Alternative Addresses: a Semantic Web Approach
NASA Astrophysics Data System (ADS)
Ariannamazi, S.; Karimipour, F.; Hakimpour, F.
2015-12-01
Rapid development of crowd-sourced or volunteered geographic information (VGI) provides opportunities for authorities that deal with geospatial information. Heterogeneity of multiple data sources and inconsistency of data types are key characteristics of VGI datasets. The expansion of cities has resulted in a growing number of POIs in OpenStreetMap, a well-known VGI source, which causes these datasets to become outdated in short periods of time. Changes made to spatial and aspatial attributes of features, such as names and addresses, can cause confusion or ambiguity in processes that rely on features' literal information, such as addressing and geocoding. VGI sources neither conform to specific vocabularies nor remain in a specific schema for long periods of time. As a result, the integration of VGI sources is crucial and inevitable in order to avoid duplication and the waste of resources. Information integration can be used to match features and qualify different annotation alternatives for disambiguation. This study enhances the search capabilities of geospatial tools with applications able to understand user terminology, pursuing an efficient way to find desired results. The semantic web is a capable tool for developing technologies that deal with lexical and numerical calculations and estimations. There is a vast amount of literal-spatial data demonstrating the capability of linguistic information in knowledge modeling, but these resources need to be harmonized based on Semantic Web standards. The process of making addresses homogeneous yields a helpful tool based on spatial data integration and lexical annotation matching and disambiguation.
Semantic Metadata for Heterogeneous Spatial Planning Documents
NASA Astrophysics Data System (ADS)
Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.
2016-09-01
Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland, these documents are published on the Web according to a prescribed, non-extendable XML schema designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland are presented to evaluate the approach's efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.
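As a concrete illustration of embedding machine-readable semantics in XHTML, the standard-library Python sketch below attaches RDFa-style `about`/`property` attributes to a fragment and reads them back as triples. The vocabulary URIs and zoning properties are invented; real spatial-planning documents would use the ontology described in the article, and a full RDFa processor handles many more constructs (prefixes, typed literals, nesting).

```python
import xml.etree.ElementTree as ET

# An XHTML fragment annotated with RDFa attributes (vocabulary is hypothetical).
XHTML = """<div xmlns="http://www.w3.org/1999/xhtml"
     about="http://example.org/plan/zone/MW1">
  <span property="http://example.org/plan#zoneType">residential</span>
  <span property="http://example.org/plan#maxHeight">12 m</span>
</div>"""


def extract_rdfa(xhtml):
    """Collect (subject, property, literal) triples from simple RDFa markup."""
    root = ET.fromstring(xhtml)
    subject = root.attrib["about"]          # the resource this fragment describes
    ns = "{http://www.w3.org/1999/xhtml}"
    return [(subject, el.attrib["property"], el.text)
            for el in root.iter(ns + "span") if "property" in el.attrib]
```

The extracted triples could then be loaded into any RDF store and queried alongside external resources, which is the integration benefit the article pursues.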
Boulos, Maged N Kamel; Maramba, Inocencio; Wheeler, Steve
2006-08-15
We have witnessed a rapid increase in the use of Web-based 'collaborationware' in recent years. These Web 2.0 applications, particularly wikis, blogs and podcasts, have been increasingly adopted by many online health-related professional and educational services. Because of their ease of use and rapidity of deployment, they offer the opportunity for powerful information sharing and ease of collaboration. Wikis are Web sites that can be edited by anyone who has access to them. The word 'blog' is a contraction of 'Web log' - an online Web journal that can offer a resource-rich multimedia environment. Podcasts are repositories of audio and video materials that can be "pushed" to subscribers, even without user intervention. These audio and video files can be downloaded to portable media players that can be taken anywhere, providing the potential for "anytime, anywhere" learning experiences (mobile learning). Wikis, blogs and podcasts are all relatively easy to use, which partly accounts for their proliferation. The fact that there are many free and Open Source versions of these tools may also be responsible for their explosive growth. Thus it would be relatively easy to implement any or all within a health professions' educational environment. Paradoxically, some of their disadvantages also relate to their openness and ease of use. With virtually anybody able to alter, edit or otherwise contribute to the collaborative Web pages, it can be problematic to gauge the reliability and accuracy of such resources. While arguably the very process of collaboration leads to a Darwinian-type 'survival of the fittest' of content within a Web page, the veracity of these resources can be assured through careful monitoring, moderation, and operation of the collaborationware in a closed and secure digital environment. Empirical research is still needed to build our pedagogic evidence base about the different aspects of these tools in the context of medical/health education.
If effectively deployed, wikis, blogs and podcasts could offer a way to enhance students', clinicians' and patients' learning experiences, and deepen levels of learners' engagement and collaboration within digital learning environments. Therefore, research should be conducted to determine the best ways to integrate these tools into existing e-Learning programmes for students, health professionals and patients, taking into account the different, but also overlapping, needs of these three audience classes and the opportunities of virtual collaboration between them. Of particular importance is research into novel integrative applications, to serve as the "glue" to bind the different forms of Web-based collaborationware synergistically in order to provide a coherent wholesome learning experience.
CAS-viewer: web-based tool for splicing-guided integrative analysis of multi-omics cancer data.
Han, Seonggyun; Kim, Dongwook; Kim, Youngjun; Choi, Kanghoon; Miller, Jason E; Kim, Dokyoon; Lee, Younghee
2018-04-20
The Cancer Genome Atlas (TCGA) project is a public resource that provides transcriptomic, DNA sequence, methylation, and clinical data for 33 cancer types. Transforming the large size and high complexity of TCGA cancer genome data into integrated knowledge can be useful to promote cancer research. Alternative splicing (AS) is a key regulatory mechanism of genes in human cancer development and in the interaction with epigenetic factors. Therefore, AS-guided integration of existing TCGA data sets will make it easier to gain insight into the genetic architecture of cancer risk and related outcomes. There are already existing tools for analyzing and visualizing alternative mRNA splicing patterns in large-scale RNA-seq experiments. However, these existing web-based tools are limited to analyzing one type of TCGA data set at a time, such as transcriptomic information alone. We implemented CAS-viewer (integrative analysis of Cancer genome data based on Alternative Splicing), a web-based tool leveraging multi-cancer omics data from TCGA. It illustrates alternative mRNA splicing patterns along with methylation, miRNAs, and SNPs, and then provides an analysis tool to link differential transcript expression ratios to methylation, miRNA, and splicing regulatory elements for 33 cancer types. Moreover, one can analyze AS patterns with clinical data to identify potential transcripts associated with different survival outcomes for each cancer. CAS-viewer is a web-based application for transcript isoform-driven integration of multi-omics data in multiple cancer types and will aid in the visualization and possible discovery of biomarkers for cancer by integrating multi-omics data from TCGA.
MED31/437: A Web-based Diabetes Management System: DiabNet
Zhao, N; Roudsari, A; Carson, E
1999-01-01
Introduction A web-based system (DiabNet) was developed to provide instant access to Electronic Diabetes Records (EDR) for end-users, and real-time information for healthcare professionals to facilitate their decision-making. It integrates a portable glucometer, handheld computer, mobile phone and Internet access as a combined telecommunication and mobile computing solution for diabetes management. Methods Active Server Pages (ASP) embedded with advanced ActiveX controls and VBScript were developed to allow remote data upload, retrieval and interpretation. Advisory and Internet-based learning features, together with a video teleconferencing component, make the DiabNet web site an informative platform for Web consultation. Results The system is being evaluated among several UK Internet diabetes discussion groups and the Diabetes Day Centre at Guy's & St. Thomas' Hospital. Much positive feedback has been received from the web site, demonstrating that DiabNet is an advanced web-based diabetes management system that can help patients keep closer control of self-monitored blood glucose remotely, and is an integrated diabetes information resource that offers telemedicine knowledge in diabetes management. Discussion In summary, DiabNet introduces innovative online diabetes management concepts, such as online appointment and consultation, enabling users to access diabetes management information without time or location limitations or security concerns.
NASA Astrophysics Data System (ADS)
Yu, Bailang; Wu, Jianping
2006-10-01
Spatial Information Grid (SIG) is an infrastructure that provides services for spatial information according to users' needs by collecting, sharing, organizing and processing massive distributed spatial information resources. This paper presents the architecture, technologies and implementation of the Shanghai City Spatial Information Application and Service System, a SIG-based platform that serves the administration, planning, construction and development of the city. The System covers ten categories of spatial information resources, including city planning, land use, real estate, river systems, transportation, municipal facility construction, environmental protection, sanitation, urban afforestation and basic geographic information data. In addition, spatial information processing services are offered as GIS Web Services. The resources and services are all distributed across different web-based nodes. A single database stores the metadata of all the spatial information. A portal site is published as the main user interface of the System, with three main functions. First, users can search the metadata and consequently acquire the distributed data from the search results. Second, spatial processing web applications developed with GIS Web Services, such as file format conversion, spatial coordinate transformation, cartographic generalization and spatial analysis, are offered for use. Third, GIS Web Services currently available in the System can be searched and new ones can be registered. The System has been working efficiently in the Shanghai Government Network since 2005.
Science Discoveries on the Net: An Integrated Approach.
ERIC Educational Resources Information Center
Fredericks, Anthony D.
This guide helps students and teachers use the Internet efficiently to find the latest information. Activities and units are divided into five categories: life science, physical science, earth science, space science, and the human body. Each unit contains an introduction, research questions, Web sites, literature resources, and activities and…
Teaching Bioinformatics and Neuroinformatics by Using Free Web-Based Tools
ERIC Educational Resources Information Center
Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson
2010-01-01
This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…
Advancing Collaboration through Hydrologic Data and Model Sharing
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.
2015-12-01
HydroShare is an online, collaborative system for open sharing of hydrologic data, analytical tools, and models. It supports the sharing of and collaboration around "resources" which are defined primarily by standardized metadata, content data models for each resource type, and an overarching resource data model based on the Open Archives Initiative's Object Reuse and Exchange (OAI-ORE) standard and a hierarchical file packaging system called "BagIt". HydroShare expands the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated to include geospatial and multidimensional space-time datasets commonly used in hydrology. HydroShare also includes new capability for sharing models, model components, and analytical tools and will take advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. It also supports web services and server/cloud based computation operating on resources for the execution of hydrologic models and analysis and visualization of hydrologic data. HydroShare uses iRODS as a network file system for underlying storage of datasets and models. Collaboration is enabled by casting datasets and models as "social objects". Social functions include both private and public sharing, formation of collaborative groups of users, and value-added annotation of shared datasets and models. The HydroShare web interface and social media functions were developed using the Django web application framework coupled to iRODS. Data visualization and analysis is supported through the Tethys Platform web GIS software stack. Links to external systems are supported by RESTful web service interfaces to HydroShare's content. This presentation will introduce the HydroShare functionality developed to date and describe ongoing development of functionality to support collaboration and integration of data and models.
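HydroShare's RESTful web service interface can be exercised with nothing more than a URL. The sketch below only builds query URLs with the standard library; the `/hsapi/resource/` path and the filter parameter names reflect the public HydroShare API as I understand it, but treat them as assumptions to verify against the current API documentation.

```python
from urllib.parse import urlencode, urljoin

BASE = "https://www.hydroshare.org/"  # public HydroShare instance


def list_resources_url(base=BASE, **filters):
    """Build a URL for a RESTful resource query.

    The 'hsapi/resource/' path and the filter names passed by callers are
    assumptions for illustration; check the live API docs before relying
    on them.
    """
    url = urljoin(base, "hsapi/resource/")
    if filters:
        # Sort for a deterministic, cache-friendly query string.
        url += "?" + urlencode(sorted(filters.items()))
    return url
```

A client would pass such a URL to any HTTP library and parse the JSON response; the point here is only that "resources" are addressable through plain web requests, which is what enables the third-party integrations the abstract mentions.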
Facilitating NCAR Data Discovery by Connecting Related Resources
NASA Astrophysics Data System (ADS)
Rosati, A.
2012-12-01
Linking datasets, creators, and users by employing the proper standards helps to increase the impact of funded research. In order for users to find a dataset, it must first be named. Data citations play the important role of giving datasets a persistent presence by assigning a formal "name" and location. This project focuses on the next step of the "name-find-use" sequence: enhancing discoverability of NCAR data by connecting related resources on the web. By reviewing metadata schemas that document datasets, I examined how Semantic Web approaches can help to reach the widest possible range of data users. The focus was to move from search engine optimization (SEO) to information connectivity. Two main markup types are very visible in the Semantic Web and applicable to scientific dataset discovery: the Open Archives Initiative-Object Reuse and Exchange (OAI-ORE - www.openarchives.org) and Microdata (HTML5 and www.schema.org). My project creates pilot aggregations of related resources using both markup types for three case studies: the North American Regional Climate Change Assessment Program (NARCCAP) dataset and related publications, the Palmer Drought Severity Index (PDSI) animation and image files from NCAR's Visualization Lab (VisLab), and the multidisciplinary data types and formats from the Advanced Cooperative Arctic Data and Information Service (ACADIS). This project documents the differences between these markups and how each creates connectedness on the web. My recommendations point toward the most efficient and effective markup schema for aggregating resources within the three case studies, based on the following assessment criteria: ease of use, current state of support and adoption of the technology, integration with typical web tools, available vocabularies and geoinformatic standards, interoperability with current repositories and access portals (e.g. ESG, Java), and relation to data citation tools and methods.
O'Brien, Nicola; Heaven, Ben; Teal, Gemma; Evans, Elizabeth H; Cleland, Claire; Moffatt, Suzanne; Sniehotta, Falko F; White, Martin; Mathers, John C; Moynihan, Paula
2016-08-03
Background Integrating stakeholder involvement in complex health intervention design maximizes acceptability and potential effectiveness. However, there is little methodological guidance about how to integrate evidence systematically from various sources in this process. Scientific evidence derived from different approaches can be difficult to integrate and the problem is compounded when attempting to include diverse, subjective input from stakeholders. Objective The intent of the study was to describe and appraise a systematic, sequential approach to integrate scientific evidence, expert knowledge and experience, and stakeholder involvement in the co-design and development of a complex health intervention. The development of a Web-based lifestyle intervention for people in retirement is used as an example. Methods Evidence from three systematic reviews, qualitative research findings, and expert knowledge was compiled to produce evidence statements (stage 1). Face validity of these statements was assessed by key stakeholders in a co-design workshop resulting in a set of intervention principles (stage 2). These principles were assessed for face validity in a second workshop, resulting in core intervention concepts and hand-drawn prototypes (stage 3). The outputs from stages 1-3 were translated into a design brief and specification (stage 4), which guided the building of a functioning prototype, Web-based intervention (stage 5). This prototype was de-risked resulting in an optimized functioning prototype (stage 6), which was subject to iterative testing and optimization (stage 7), prior to formal pilot evaluation.
Results The evidence statements (stage 1) highlighted the effectiveness of physical activity, dietary and social role interventions in retirement; the idiosyncratic nature of retirement and well-being; the value of using specific behavior change techniques including those derived from the Health Action Process Approach; and the need for signposting to local resources. The intervention principles (stage 2) included the need to facilitate self-reflection on available resources, personalization, and promotion of links between key lifestyle behaviors. The core concepts and hand-drawn prototypes (stage 3) had embedded in them the importance of time use and work exit planning, personalized goal setting, and acceptance of a Web-based intervention. The design brief detailed the features and modules required (stage 4), guiding the development of wireframes, module content and functionality, virtual mentors, and intervention branding (stage 5). Following an iterative process of intervention testing and optimization (stage 6), the final Web-based intervention prototype of LEAP (Living, Eating, Activity, and Planning in retirement) was produced (stage 7). The approach was resource intensive and required a multidisciplinary team. The design expert made an invaluable contribution throughout the process. Conclusions Our sequential approach fills an important methodological gap in the literature, describing the stages and techniques useful in developing an evidence-based complex health intervention. The systematic and rigorous integration of scientific evidence, expert knowledge and experience, and stakeholder input has resulted in an intervention likely to be acceptable and feasible. PMID:27489143
IntegromeDB: an integrated system and biological search engine
2012-01-01
Background With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Description Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. Conclusions The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback. PMID:22260095
The Plant Genome Integrative Explorer Resource: PlantGenIE.org.
Sundell, David; Mannapperuma, Chanaka; Netotea, Sergiu; Delhomme, Nicolas; Lin, Yao-Cheng; Sjödin, Andreas; Van de Peer, Yves; Jansson, Stefan; Hvidsten, Torgeir R; Street, Nathaniel R
2015-12-01
Accessing and exploring large-scale genomics data sets remains a significant challenge to researchers without specialist bioinformatics training. We present the integrated PlantGenIE.org platform for exploration of Populus, conifer and Arabidopsis genomics data, which includes expression networks and associated visualization tools. Standard features of a model organism database are provided, including genome browsers, gene list annotation, Blast homology searches and gene information pages. Community annotation updating is supported via integration of WebApollo. We have produced an RNA-sequencing (RNA-Seq) expression atlas for Populus tremula and have integrated these data within the expression tools. An updated version of the ComPlEx resource for performing comparative plant expression analyses of gene coexpression network conservation between species has also been integrated. The PlantGenIE.org platform provides intuitive access to large-scale and genome-wide genomics data from model forest tree species, facilitating both community contributions to annotation improvement and tools supporting use of the included data resources to inform biological insight. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
dada - a web-based 2D detector analysis tool
NASA Astrophysics Data System (ADS)
Osterhoff, Markus
2017-06-01
The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored by different detectors, in different file formats, and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines, from pixel binning and azimuthal integration to raster-scan processing. Users commonly interact with dada through a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI), which can also be written by hand or generated by scripts for batch processing.
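The URI-encoded analysis idea, every parameter of an analysis captured in a single shareable address, can be sketched with the standard library alone. The parameter names below are invented for illustration; the abstract does not specify dada's actual URI scheme.

```python
from urllib.parse import parse_qs, urlencode, urlsplit


def analysis_uri(base, detector, frame, **params):
    """Encode an analysis request into a URI so it can be bookmarked,
    hand-edited, or emitted by batch scripts (parameter names are
    hypothetical, not dada's real scheme)."""
    query = {"detector": detector, "frame": str(frame),
             **{k: str(v) for k, v in params.items()}}
    # Sort keys so the same analysis always yields the same URI.
    return base + "?" + urlencode(sorted(query.items()))


def decode_uri(uri):
    """Recover the analysis parameters from a URI produced above."""
    return {k: v[0] for k, v in parse_qs(urlsplit(uri).query).items()}
```

Because the full analysis is just a string, a batch job is nothing more than a loop that prints URIs, which is precisely the scripting convenience the abstract highlights.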
Gifford, Lida K; Carter, Lester G; Gabanyi, Margaret J; Berman, Helen M; Adams, Paul D
2012-06-01
The Technology Portal of the Protein Structure Initiative Structural Biology Knowledgebase (PSI SBKB; http://technology.sbkb.org/portal/ ) is a web resource providing information about methods and tools that can be used to relieve bottlenecks in many areas of protein production and structural biology research. Several useful features are available on the web site, including multiple ways to search the database of over 250 technological advances, a link to videos of methods on YouTube, and access to a technology forum where scientists can connect, ask questions, get news, and develop collaborations. The Technology Portal is a component of the PSI SBKB ( http://sbkb.org ), which presents integrated genomic, structural, and functional information for all protein sequence targets selected by the Protein Structure Initiative. Created in collaboration with the Nature Publishing Group, the SBKB offers an array of resources for structural biologists, such as a research library, editorials about new research advances, a featured biological system each month, and a functional sleuth for searching protein structures of unknown function. An overview of the various features and examples of user searches highlight the information, tools, and avenues for scientific interaction available through the Technology Portal.
DOORS to the semantic web and grid with a PORTAL for biomedical computing.
Taswell, Carl
2008-03-01
The semantic web remains in the early stages of development. It has not yet achieved the goals envisioned by its founders as a pervasive web of distributed knowledge and intelligence. Success will be attained when a dynamic synergism can be created between people and a sufficient number of infrastructure systems and tools for the semantic web, in analogy with those for the original web. The domain name system (DNS), web browsers, and the benefits of publishing web pages motivated many people to register domain names and publish web sites on the original web. An analogous resource label system, semantic search applications, and the benefits of collaborative semantic networks will motivate people to register resource labels and publish resource descriptions on the semantic web. The Domain Ontology Oriented Resource System (DOORS) and Problem Oriented Registry of Tags and Labels (PORTAL) are proposed as infrastructure systems for resource metadata within a paradigm that can serve as a bridge between the original web and the semantic web. The Internet Registry Information Service (IRIS) registers domain names while DNS publishes domain addresses with mapping of names to addresses for the original web. Analogously, PORTAL registers resource labels and tags while DOORS publishes resource locations and descriptions with mapping of labels to locations for the semantic web. BioPORT is proposed as a prototype PORTAL registry specific for the problem domain of biomedical computing.
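The registry/resolver analogy can be sketched in a few lines of Python; the two toy classes and the `bioport:blast` label below are illustrative assumptions, not the proposed DOORS or PORTAL interfaces.

```python
class PortalRegistry:
    """Toy registry of resource labels and tags (analogy: IRIS registering names)."""
    def __init__(self):
        self._labels = {}

    def register(self, label, tags):
        self._labels[label] = tags

    def tags(self, label):
        return self._labels.get(label, [])

class DoorsResolver:
    """Toy resolver mapping labels to locations and descriptions (analogy: DNS)."""
    def __init__(self):
        self._records = {}

    def publish(self, label, location, description):
        self._records[label] = (location, description)

    def resolve(self, label):
        return self._records.get(label)

portal = PortalRegistry()
doors = DoorsResolver()
portal.register("bioport:blast", ["sequence", "alignment"])
doors.publish("bioport:blast", "https://example.org/blast",
              "A sequence search resource")
location, description = doors.resolve("bioport:blast")
```

The split mirrors the paper's analogy: registration (who owns a label) is separate from publication (where the labeled resource lives and what it is).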
Research and application of knowledge resources network for product innovation.
Li, Chuan; Li, Wen-qiang; Li, Yan; Na, Hui-zhen; Shi, Qian
2015-01-01
In order to enhance the knowledge service capabilities of a product innovation design service platform, a method is proposed for acquiring knowledge resources that support product innovation from the Internet and actively pushing them to users. Through ontology-based knowledge modeling for product innovation, an integrated architecture of the knowledge resources network is put forward. The technology for acquiring network knowledge resources based on focused crawlers and web services is studied. Active knowledge push is provided through user behavior analysis and knowledge evaluation, in order to improve users' enthusiasm for participating in the platform. Finally, an application example is presented to demonstrate the effectiveness of the method.
BUSCA: an integrative web server to predict subcellular localization of proteins.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Profiti, Giuseppe; Casadio, Rita
2018-04-30
Here, we present BUSCA (http://busca.biocomp.unibo.it), a novel web server that integrates different computational tools for predicting protein subcellular localization. BUSCA combines methods for identifying signal and transit peptides (DeepSig and TPpred3), GPI-anchors (PredGPI) and transmembrane domains (ENSEMBLE3.0 and BetAware) with tools for discriminating subcellular localization of both globular and membrane proteins (BaCelLo, MemLoci and SChloro). Outcomes from the different tools are processed and integrated for annotating subcellular localization of both eukaryotic and bacterial protein sequences. We benchmark BUSCA against protein targets derived from recent CAFA experiments and other specific data sets, reporting state-of-the-art performance. BUSCA scores better than all other evaluated methods on 2732 targets from CAFA2, with an F1 value of 0.49, and is among the best methods when predicting targets from CAFA3. We propose BUSCA as an integrated and accurate resource for the annotation of protein subcellular localization.
BioModels.net Web Services, a free and integrated toolkit for computational modelling software.
Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille
2010-05-01
Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services bring researchers one step closer to simulating and understanding biological systems in their entirety, by allowing them to retrieve biological models into their own tools, combine queries in workflows and efficiently analyse models.
Inferring Metadata for a Semantic Web Peer-to-Peer Environment
ERIC Educational Resources Information Center
Brase, Jan; Painter, Mark
2004-01-01
Learning Objects Metadata (LOM) aims at describing educational resources in order to allow better reusability and retrieval. In this article we show how additional inference rules allow us to derive additional metadata from existing metadata. Additionally, using these rules as integrity constraints helps us to define the constraints on LOM elements,…
Promoting Student Autonomy and Competence Using a Hybrid Model for Teaching Physical Activity
ERIC Educational Resources Information Center
Bachman, Christine; Scherer, Rhonda
2015-01-01
For approximately twenty years, Web-enhanced learning environments have been popular in higher education. Much research has examined how best practices can integrate technology, pedagogical theories, and resources to enhance learning. Numerous studies of hybrid teaching have revealed mostly positive effects. Yet, very little research has examined…
Weaving the Web into Course Integrated Instruction.
ERIC Educational Resources Information Center
Wallach, Ruth; McCann, Linda
In the fall of 1995, a professor teaching an undergraduate course asked the Reference Center at the University of Southern California to conduct a research session on Dante-related resources on the Internet, and to show her students how to search the Dartmouth Dante Project. A simple homepage was created for the class, which listed the course…
Enhancing Mathematical Communication for Virtual Math Teams
ERIC Educational Resources Information Center
Stahl, Gerry; Çakir, Murat Perit; Weimar, Stephen; Weusijana, Baba Kofi; Ou, Jimmy Xiantong
2010-01-01
The Math Forum is an online resource center for pre-algebra, algebra, geometry and pre-calculus. Its Virtual Math Teams (VMT) service provides an integrated web-based environment for small teams of people to discuss math and to work collaboratively on math problems or explore interesting mathematical micro-worlds together. The VMT Project studies…
USDA-ARS?s Scientific Manuscript database
Over the last three decades, the rapid explosion of information and resources on human food-borne diseases and food safety has provided the ability to rapidly determine and interpret the mechanisms of survival and pathogenesis of food-borne pathogens. However, several factors have hindered effective...
Music Teacher Perceptions of a Model of Technology Training and Support in Virginia
ERIC Educational Resources Information Center
Welch, Lee Arthur
2013-01-01
A plethora of technology resources currently exists for the music classroom of the twenty-first century, including digital audio and video, music software, electronic instruments, Web 2.0 tools and more. Research shows a strong need for professional development for teachers to properly implement and integrate instructional technology resources…
Integrating Chemistry: Crossing the Millennium Divide.
Housecroft, Catherine E
2018-02-01
A personal account of the development of two University level chemistry books is presented. The account focuses on ways to integrate the traditional branches of chemistry into a textbook that captures the imagination of students and relates chemical principles and fundamental topics to environmental, medicinal, biological and industrial applications. The ways in which teaching methods have changed over two decades and how web-based resources can be used to improve the communication of chemical (in particular structural) concepts are highlighted.
Schoof, Heiko; Ernst, Rebecca; Nazarov, Vladimir; Pfeifer, Lukas; Mewes, Hans-Werner; Mayer, Klaus F. X.
2004-01-01
Arabidopsis thaliana is the most widely studied model plant. Functional genomics is intensively underway in many laboratories worldwide. Beyond the basic annotation of the primary sequence data, the annotated genetic elements of Arabidopsis must be linked to diverse biological data and higher order information such as metabolic or regulatory pathways. The MIPS Arabidopsis thaliana database MAtDB aims to provide a comprehensive resource for Arabidopsis as a genome model that serves as a primary reference for research in plants and is suitable for transfer of knowledge to other plants, especially crops. The genome sequence as a common backbone serves as a scaffold for the integration of data, while, in a complementary effort, these data are enhanced through the application of state-of-the-art bioinformatics tools. This information is visualized on a genome-wide and a gene-by-gene basis with access both for web users and applications. This report updates the information given in a previous report and provides an outlook on further developments. The MAtDB web interface can be accessed at http://mips.gsf.de/proj/thal/db. PMID:14681437
Student Interactives--A new Tool for Exploring Science.
NASA Astrophysics Data System (ADS)
Turner, C.
2005-05-01
Science NetLinks (SNL), a national program that provides online teacher resources created by the American Association for the Advancement of Science (AAAS), has proven to be a leader among educational resource providers in bringing free, high-quality, grade-appropriate materials to the national teaching community in a format that facilitates classroom integration. Now in its ninth year on the Web, Science NetLinks is part of the MarcoPolo Consortium of Web sites and associated state-based training initiatives that help teachers integrate Internet content into the classroom. SNL is a national presence in the K-12 science education community, serving over 700,000 teachers each year, who visit the site at least three times a month. SNL features: high-quality, innovative, original lesson plans aligned to Project 2061 Benchmarks for Science Literacy; original Internet-based interactives and learning challenges; reviewed Web resources and demonstrations; and award-winning, 60-second audio news features (Science Updates). Science NetLinks has an expansive and growing library of this educational material, aligned and sortable by grade band or benchmark. The program currently offers over 500 lessons, covering 72% of the Benchmarks for Science Literacy content areas in grades K-12. Over the past several years, there has been a strong movement to create online resources that support earth and space science education. Funding for various online educational materials has been available from many sources and has produced a variety of useful products for the education community. Teachers, through the Internet, potentially have access to thousands of activities, lessons and multimedia interactive applications for use in the classroom. But, with so many resources available, it is increasingly difficult for educators to locate quality resources that are aligned to standards and learning goals.
To ensure that the education community utilizes the resources, the material must conform to a format that allows easy understanding, evaluation and integration. Science NetLinks' material has been proven to satisfy these criteria and serves thousands of teachers every year. All online interactive materials that are created by AAAS are aligned to AAAS Project 2061 Benchmarks, which mirror the National Science Standards, and are developed based on a rigorous set of criteria. For the purpose of this forum we will provide an overview explaining the need for more of these materials in earth and space science education, review the criteria for creating these materials, and show examples of online materials created by AAAS that support earth and space science.
Accessing the SEED genome databases via Web services API: tools for programmers.
Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A
2010-06-14
The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided without requiring large volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
blend4php: a PHP API for galaxy
Wytko, Connor; Soto, Brian; Ficklin, Stephen P.
2017-01-01
Galaxy is a popular framework for the execution of complex analytical pipelines, typically for large data sets, and is commonly used for (but not limited to) genomic, genetic and related biological analyses. It provides a web front-end and integrates with high performance computing resources. Here we report the development of the blend4php library that wraps Galaxy’s RESTful API into a PHP-based library. PHP-based web applications can use blend4php to automate execution, monitoring and management of a remote Galaxy server, including its users, workflows, jobs and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0) at https://github.com/galaxyproject/blend4php. Database URL: https://github.com/galaxyproject/blend4php PMID:28077564
d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna
2009-06-01
We describe the GALT-Prot database and its related web-based application that have been developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the analysis results of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes the structures of over 100 single-point mutants simulated by means of a computational procedure, and each mutant was analysed with several bioinformatics programs in order to investigate the effect of the mutations. The web-based interface allows querying of the database, and several links are also provided in order to guarantee a high level of integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.
Teaching the principles of health management to first year veterinary students.
Duffield, Todd; Lissemore, Kerry; Sandals, David
2003-01-01
A course called Health Management 1 was created as part of a new DVM curriculum at the Ontario Veterinary College. This full-year course was designed to introduce students to basic concepts of health management, integrating the disciplines of epidemiology, ethology, and public health in the context of selected animal industries. The course comprised 60 lecture hours and four two-hour laboratories. A common definition of health management, incorporating five principles, was used throughout the course in order to reinforce the concepts and to maintain continuity between lecture blocks. Unlike in the years prior to the introduction of the new curriculum, epidemiology was presented as a tool of health management rather than as a separate discipline. To supplement the lecture and laboratory material, a Web-based resource was created and the students were required to review the appropriate section prior to each lecture block. Small quizzes, consisting of 10 questions each within WebCT, were used to stimulate self-directed learning. Overall, the course was well received by the students. The Web resources combined with the WebCT quizzes proved to be an effective method of stimulating students to prepare for lecture.
Shared Medical Imaging Repositories.
Lebre, Rui; Bastião, Luís; Costa, Carlos
2018-01-01
This article describes the implementation of a solution for integrating the ownership concept and access control over medical imaging resources, making possible the centralization of multiple instances of repositories. The proposed architecture allows the association of permissions with repository resources and the delegation of rights to third entities. It includes a programmatic interface for management of the proposed services, made available through web services, with the ability to create, read, update and remove all components resulting from the architecture. The resulting work is a role-based access control mechanism that was integrated with the Dicoogle Open-Source Project. The solution has several application scenarios, for instance collaborative research platforms and tele-radiology services deployed in the cloud.
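The core of role-based access control, roles mapping to permissions and users mapping to roles, can be sketched in a few lines; the role and permission names below are hypothetical illustrations, not Dicoogle's actual model.

```python
class RoleBasedAccess:
    """Minimal RBAC sketch: permission checks go user -> roles -> permissions."""
    def __init__(self):
        self.role_perms = {}   # role -> set of permissions
        self.user_roles = {}   # user -> set of roles

    def grant(self, role, permission):
        self.role_perms.setdefault(role, set()).add(permission)

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def allowed(self, user, permission):
        # A user is allowed if any of their roles carries the permission.
        return any(permission in self.role_perms.get(role, set())
                   for role in self.user_roles.get(user, set()))

acl = RoleBasedAccess()
acl.grant("researcher", "read")
acl.grant("admin", "delete")
acl.assign("alice", "researcher")
```

Indirection through roles is what makes delegation cheap: granting a role a new permission updates every user holding that role at once.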
A Parallel Trade Study Architecture for Design Optimization of Complex Systems
NASA Technical Reports Server (NTRS)
Kim, Hongman; Mullins, James; Ragon, Scott; Soremekun, Grant; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
Design of a successful product requires evaluating many design alternatives in a limited design cycle time. This can be achieved through leveraging design space exploration tools and available computing resources on the network. This paper presents a parallel trade study architecture to integrate trade study clients and computing resources on a network using Web services. The parallel trade study solution is demonstrated to accelerate design of experiments, genetic algorithm optimization, and a cost as an independent variable (CAIV) study for a space system application.
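The parallel evaluation pattern behind such a trade study can be sketched with a thread pool standing in for networked Web service calls; the design points and the figure of merit below are invented for illustration, not the paper's space system application.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_design(point):
    # Stand-in objective function; a real trade study would dispatch
    # this evaluation to an analysis Web service on the network.
    mass, thrust = point
    return thrust / mass  # a toy figure of merit

design_points = [(10.0, 50.0), (12.0, 66.0), (8.0, 32.0)]

# Evaluate all alternatives concurrently; map preserves input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(evaluate_design, design_points))

best = design_points[results.index(max(results))]
```

Design of experiments and genetic algorithms fit the same shape: each generates a batch of independent candidate points, which is exactly what a pool of networked workers can consume in parallel.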
Ontology-Based Administration of Web Directories
NASA Astrophysics Data System (ADS)
Horvat, Marko; Gledec, Gordan; Bogunović, Nikola
Administration of a Web directory and maintenance of its content and the associated structure is a delicate and labor-intensive task performed exclusively by human domain experts. Subsequently there is an imminent risk of a directory's structure becoming unbalanced, uneven and difficult to use for all but a few users proficient with the particular Web directory and its domain. These problems emphasize the need to address two important issues: i) generic and objective measures of Web directory structure quality, and ii) a mechanism for fully automated development of a Web directory's structure. In this paper we demonstrate how to formally and fully integrate Web directories with the Semantic Web vision. We propose a set of criteria for evaluation of a Web directory's structure quality. Some criterion functions are based on heuristics while others require the application of ontologies. We also suggest an ontology-based algorithm for the construction of Web directories. By using ontologies to describe the semantics of Web resources and of Web directories' categories, it is possible to define algorithms that can build or rearrange the structure of a Web directory. Assessment procedures can provide feedback and help steer the ontology-based construction process. The issues raised in the article can be equally applied to new and existing Web directories.
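One heuristic criterion function of the kind proposed, a balance score over a category tree, can be sketched as follows; the metric and the toy directory are assumptions for illustration, not the paper's actual criteria.

```python
def subtree_sizes(tree, node):
    """Number of categories in the subtree rooted at node."""
    return 1 + sum(subtree_sizes(tree, child) for child in tree.get(node, []))

def balance_score(tree, root):
    """Ratio of smallest to largest child subtree (1.0 = perfectly even).

    A low score flags an unbalanced category whose children split the
    directory's content very unevenly."""
    children = tree.get(root, [])
    if len(children) < 2:
        return 1.0
    sizes = [subtree_sizes(tree, child) for child in children]
    return min(sizes) / max(sizes)

# A toy Web directory: each category maps to its subcategories.
web_dir = {"root": ["arts", "science"],
           "science": ["physics", "biology", "chemistry"]}
```

An automated construction algorithm could use such scores as feedback, rearranging categories until every node clears a quality threshold.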
PDB-Metrics: a web tool for exploring the PDB contents.
Fileto, Renato; Kuser, Paula R; Yamagishi, Michel E B; Ribeiro, André A; Quinalia, Thiago G; Franco, Eduardo H; Mancini, Adauto L; Higa, Roberto H; Oliveira, Stanley R M; Santos, Edgard H; Vieira, Fabio D; Mazoni, Ivan; Cruz, Sergio A B; Neshich, Goran
2006-06-30
PDB-Metrics (http://sms.cbi.cnptia.embrapa.br/SMS/pdb_metrics/index.html) is a component of the Diamond STING suite of programs for the analysis of protein sequence, structure and function. It summarizes the characteristics of the collection of protein structure descriptions deposited in the Protein Data Bank (PDB) and provides a Web interface to search and browse the PDB, using a variety of alternative criteria. PDB-Metrics is a powerful tool for bioinformaticians to examine the data span in the PDB from several perspectives. Although other Web sites offer some similar resources to explore the PDB contents, PDB-Metrics is among those with the most complete set of such facilities, integrated into a single Web site. This program has been developed using SQLite, a C library that provides all the query facilities of a database management system.
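Since PDB-Metrics is implemented on SQLite, a summary query of the kind such a tool performs can be sketched with Python's built-in `sqlite3` module; the miniature schema and rows below are hypothetical, not the actual PDB-Metrics database.

```python
import sqlite3

# Hypothetical miniature schema for deposited structure descriptions.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entry (pdb_id TEXT, method TEXT, resolution REAL)")
con.executemany("INSERT INTO entry VALUES (?, ?, ?)", [
    ("1abc", "X-RAY", 1.8),
    ("2xyz", "X-RAY", 2.4),
    ("3nmr", "NMR", None),   # NMR entries have no resolution
])

# Summarize the collection by experimental method, as a metrics page might.
rows = con.execute(
    "SELECT method, COUNT(*), AVG(resolution) FROM entry "
    "GROUP BY method ORDER BY method").fetchall()
```

SQL aggregates like `COUNT` and `AVG` are exactly the "data span" summaries a metrics front-end renders, with the database engine doing the counting.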
Kamel Boulos, M N; Roudsari, A V; Gordon, C; Muir Gray, J A
2001-01-01
In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round the clock access to high quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge based communities, each concerned with some specific clinical and other health-related topics. This study is about the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, has been chosen as a starting point. Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has been also reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in knowledge-hallmarking instrument for the National electronic Library for Health is also proposed based on the reviewed quality benchmarking systems. 
Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space.
Using component technologies for web based wavelet enhanced mammographic image visualization.
Sakellaropoulos, P; Costaridou, L; Panayiotakis, G
2000-01-01
The poor contrast detectability of mammography can be dealt with by domain specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access the tool was redesigned by exploring component technologies, enabling the integration of stand alone domain specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real time wavelet based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
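A single-level Haar transform with soft thresholding of the detail coefficients, the basic ingredient of wavelet denoising, can be sketched in pure Python on a 1D signal; this is a generic textbook sketch, not the tool's fast-transform implementation.

```python
def haar_forward(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the signal from approximation and detail coefficients."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

def soft_threshold(coeffs, t):
    """Shrink small detail coefficients toward zero to suppress noise."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

approx, detail = haar_forward([4.0, 4.0, 8.0, 8.0])
denoised = haar_inverse(approx, soft_threshold(detail, 0.5))
```

Contrast enhancement works on the same decomposition by amplifying, rather than shrinking, selected detail coefficients before the inverse transform.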
PathCase-SB architecture and database design
2011-01-01
Background: Integrating metabolic pathway resources and regulatory metabolic network models, and deploying new tools on the integrated platform, can help perform more effective and more efficient systems biology research on understanding regulation in metabolic networks. Therefore, the tasks of (a) integrating regulatory metabolic networks and existing models under a single database environment, and (b) building tools to help with modeling and analysis, are desirable and intellectually challenging computational tasks.
Description: PathCase Systems Biology (PathCase-SB) has been built and released. The PathCase-SB database provides data and an API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, the BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes the architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database.
Conclusions: The PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889
Mining and integration of pathway diagrams from imaging data.
Kozhenkov, Sergey; Baitaluk, Michael
2012-03-01
Pathway diagrams from PubMed and World Wide Web (WWW) contain valuable highly curated information difficult to reach without tools specifically designed and customized for the biological semantics and high-content density of the images. There is currently no search engine or tool that can analyze pathway images, extract their pathway components (molecules, genes, proteins, organelles, cells, organs, etc.) and indicate their relationships. Here, we describe a resource of pathway diagrams retrieved from article and web-page images through optical character recognition, in conjunction with data mining and data integration methods. The recognized pathways are integrated into the BiologicalNetworks research environment linking them to a wealth of data available in the BiologicalNetworks' knowledgebase, which integrates data from >100 public data sources and the biomedical literature. Multiple search and analytical tools are available that allow the recognized cellular pathways, molecular networks and cell/tissue/organ diagrams to be studied in the context of integrated knowledge, experimental data and the literature. BiologicalNetworks software and the pathway repository are freely available at www.biologicalnetworks.org. Supplementary data are available at Bioinformatics online.
Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.
Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J
2017-10-30
An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, with all source code hosted on the GitHub platform and automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.
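The JSON round-trip between a server and a JavaScript web client can be sketched as follows; the field names are illustrative assumptions, not the platform's actual schema.

```python
import json

# A hypothetical calculation record: molecule geometry, method, results.
calculation = {
    "molecule": {
        "atoms": [{"element": "O", "coords": [0.0, 0.0, 0.0]},
                  {"element": "H", "coords": [0.757, 0.586, 0.0]},
                  {"element": "H", "coords": [-0.757, 0.586, 0.0]}],
    },
    "method": {"code": "NWChem", "theory": "DFT", "basis": "6-31G*"},
    "results": {"energy_hartree": -76.4},
}

encoded = json.dumps(calculation)   # what the server sends over the wire
decoded = json.loads(encoded)       # what a client parses; structure survives
```

Because JSON maps directly onto the native data structures of Python, JavaScript and most other languages, the same record needs no schema-specific parser on either end, which is the cross-language convenience the abstract describes over XML.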
Group Membership Based Authorization to CADC Resources
NASA Astrophysics Data System (ADS)
Damian, A.; Dowler, P.; Gaudet, S.; Hill, N.
2012-09-01
The Group Membership Service (GMS), implemented at the Canadian Astronomy Data Centre (CADC), is a prototype of what could eventually be an IVOA standard for a distributed and interoperable group membership protocol. Group membership is the core authorization concept that enables teamwork and collaboration amongst astronomers accessing distributed resources and services. The service integrates and complements other access control related IVOA standards such as single-sign-on (SSO) using X.509 proxy certificates and the Credential Delegation Protocol (CDP). The GMS has been used at CADC for several years now, initially as a subsystem and then as a stand-alone Web service. It is part of the authorization mechanism for controlling the access to restricted Web resources as well as the VOSpace service hosted by the CADC. We present the role that GMS plays within the access control system at the CADC, including the functionality of the service and how the different CADC services make use of it to assert user authorization to resources. We also describe the main advantages and challenges of using the service as well as future work to increase its robustness and functionality.
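The core idea, granting access based on group membership rather than per-user rules, can be sketched in a few lines. The data structures and function below are a toy illustration under made-up names; the real GMS is a standalone web service speaking an IVOA-style protocol, not an in-memory lookup.

```python
# Toy model of group-based authorization (illustrative only).
# Which groups each user belongs to:
memberships = {
    "alice": {"cadc-staff", "survey-team"},
    "bob": {"survey-team"},
}

# Access-control list: each resource names the groups allowed to use it.
acl = {
    "vospace:/survey/raw": {"survey-team"},
    "vospace:/internal/logs": {"cadc-staff"},
}

def is_authorized(user: str, resource: str) -> bool:
    """Grant access if the user belongs to any group on the resource's ACL."""
    return bool(memberships.get(user, set()) & acl.get(resource, set()))
```

Keeping the membership check in one service means every resource-hosting service (restricted web resources, VOSpace) can delegate the same question to it rather than maintaining its own user lists.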
The EPA CompTox Chemistry Dashboard - an online resource ...
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data-driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts, the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences, including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data. Recent work has focused on the development of a new architecture that assembles the resources into a single platform. With a focus on delivering access to open data streams, web service integration and a user-friendly web application, the CompTox Dashboard provides access to data associated with ~720,000 chemical substances. These data include research data in the form of bioassay screening data associated with the ToxCast program, experimental and predicted physicochemical properties, product and functional use information and related data of value to environmental scientists. This presentation will provide an overview of the CompTox Dashboard and its va
Shen, Lishuang; Diroma, Maria Angela; Gonzalez, Michael; Navarro-Gomez, Daniel; Leipzig, Jeremy; Lott, Marie T; van Oven, Mannis; Wallace, Douglas C; Muraresku, Colleen Clarke; Zolkipli-Cunningham, Zarazuela; Chinnery, Patrick F; Attimonelli, Marcella; Zuchner, Stephan; Falk, Marni J; Gai, Xiaowu
2016-06-01
MSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes, genes, and variants. A central Web portal (https://mseqdr.org) integrates community knowledge from expert-curated databases with genomic and phenotype data shared by clinicians and researchers. MSeqDR also functions as a centralized application server for Web-based tools to analyze data across both mitochondrial and nuclear DNA, including investigator-driven whole exome or genome dataset analyses through MSeqDR-Genesis. MSeqDR-GBrowse genome browser supports interactive genomic data exploration and visualization with custom tracks relevant to mtDNA variation and mitochondrial disease. MSeqDR-LSDB is a locus-specific database that currently manages 178 mitochondrial diseases, 1,363 genes associated with mitochondrial biology or disease, and 3,711 pathogenic variants in those genes. MSeqDR Disease Portal allows hierarchical tree-style disease exploration to evaluate their unique descriptions, phenotypes, and causative variants. Automated genomic data submission tools are provided that capture ClinVar compliant variant annotations. PhenoTips will be used for phenotypic data submission on deidentified patients using human phenotype ontology terminology. The development of a dynamic informed patient consent process to guide data access is underway to realize the full potential of these resources. © 2016 WILEY PERIODICALS, INC.
Outreach for Outreach: Targeting social media audiences to promote a NASA kids’ web site
NASA Astrophysics Data System (ADS)
Pham, C. C.
2009-12-01
The Space Place is a successful NASA web site that benefits upper elementary school students and educators by providing games, activities, and resources to stimulate interest in science, technology, engineering, and mathematics, as well as to inform the audience of NASA’s contributions. As online social networking grows to be a central component of modern communication, The Space Place has explored the benefits of integrating social networks with the web site to increase awareness of materials the web site offers. This study analyzes the capabilities of social networks, and specifically the demographics of Twitter and Facebook. It then compares these results with the content, audience, and perceived demographics of The Space Place web site. Based upon the demographic results, we identified a target constituency that would benefit from the integration of social networks into The Space Place web site. As a result of this study, a Twitter feed has been established that releases a daily tweet from The Space Place. In addition, a Facebook page has been created to showcase new content and prompt interaction among fans of The Space Place. Currently, plans are under way to populate the Space Place Facebook page. Each social network has been utilized in an effort to spark excitement about the content on The Space Place, as well as to attract followers to the main NASA Space Place web site. To pursue this idea further, a plan has been developed to promote NASA Space Place’s social media tools among the target audience.
Falk, Marni J; Shen, Lishuang; Gonzalez, Michael; Leipzig, Jeremy; Lott, Marie T; Stassen, Alphons P M; Diroma, Maria Angela; Navarro-Gomez, Daniel; Yeske, Philip; Bai, Renkui; Boles, Richard G; Brilhante, Virginia; Ralph, David; DaRe, Jeana T; Shelton, Robert; Terry, Sharon F; Zhang, Zhe; Copeland, William C; van Oven, Mannis; Prokisch, Holger; Wallace, Douglas C; Attimonelli, Marcella; Krotoski, Danuta; Zuchner, Stephan; Gai, Xiaowu
2015-03-01
Success rates for genomic analyses of highly heterogeneous disorders can be greatly improved if a large cohort of patient data is assembled to enhance collective capabilities for accurate sequence variant annotation, analysis, and interpretation. Indeed, molecular diagnostics requires the establishment of robust data resources to enable data sharing that informs accurate understanding of genes, variants, and phenotypes. The "Mitochondrial Disease Sequence Data Resource (MSeqDR) Consortium" is a grass-roots effort facilitated by the United Mitochondrial Disease Foundation to identify and prioritize specific genomic data analysis needs of the global mitochondrial disease clinical and research community. A central Web portal (https://mseqdr.org) facilitates the coherent compilation, organization, annotation, and analysis of sequence data from both nuclear and mitochondrial genomes of individuals and families with suspected mitochondrial disease. This Web portal provides users with a flexible and expandable suite of resources to enable variant-, gene-, and exome-level sequence analysis in a secure, Web-based, and user-friendly fashion. Users can also elect to share data with other MSeqDR Consortium members, or even the general public, either by custom annotation tracks or through the use of a convenient distributed annotation system (DAS) mechanism. A range of data visualization and analysis tools are provided to facilitate user interrogation and understanding of genomic, and ultimately phenotypic, data of relevance to mitochondrial biology and disease. Currently available tools for nuclear and mitochondrial gene analyses include an MSeqDR GBrowse instance that hosts optimized mitochondrial disease and mitochondrial DNA (mtDNA) specific annotation tracks, as well as an MSeqDR locus-specific database (LSDB) that curates variant data on more than 1300 genes that have been implicated in mitochondrial disease and/or encode mitochondria-localized proteins. 
MSeqDR is integrated with a diverse array of mtDNA data analysis tools that are both freestanding and incorporated into an online exome-level dataset curation and analysis resource (GEM.app) that is being optimized to support the needs of the MSeqDR community. In addition, MSeqDR supports mitochondrial disease phenotyping and ontology tools, and provides variant pathogenicity assessment features that enable community review, feedback, and integration with the public ClinVar variant annotation resource. A centralized Web-based informed consent process is being developed, with implementation of a Global Unique Identifier (GUID) system to integrate data deposited on a given individual from different sources. Community-based data deposition into MSeqDR has already begun. Future efforts will enhance capabilities to incorporate phenotypic data that enrich genomic data analyses. MSeqDR will fill the existing void in bioinformatics tools and centralized knowledge that are necessary to enable efficient nuclear and mtDNA genomic data interpretation by a range of stakeholders across both clinical diagnostic and research settings. Ultimately, MSeqDR is focused on empowering the global mitochondrial disease community to better define and explore mitochondrial diseases. Copyright © 2014 Elsevier Inc. All rights reserved.
SEEK: a systems biology data and model management platform.
Wolstencroft, Katherine; Owen, Stuart; Krebs, Olga; Nguyen, Quyen; Stanford, Natalie J; Golebiewski, Martin; Weidemann, Andreas; Bittkowski, Meik; An, Lihua; Shockley, David; Snoep, Jacky L; Mueller, Wolfgang; Goble, Carole
2015-07-11
Systems biology research typically involves the integration and analysis of heterogeneous data types in order to model and predict biological processes. Researchers therefore require tools and resources to facilitate the sharing and integration of data, and for linking data to systems biology models. There are a large number of public repositories for storing biological data of a particular type, for example transcriptomics or proteomics, and there are several model repositories. However, this silo-type storage of data and models is not conducive to systems biology investigations. Interdependencies between multiple omics datasets, and between datasets and models, are essential. Researchers require an environment that will allow the management and sharing of heterogeneous data and models in the context of the experiments which created them. The SEEK is a suite of tools to support the management, sharing and exploration of data and models in systems biology. The SEEK platform provides an access-controlled, web-based environment for scientists to share and exchange data and models for day-to-day collaboration and for public dissemination. A plug-in architecture allows the linking of experiments, their protocols, data, models and results in a configurable system that is available 'off the shelf'. Tools to run model simulations, plot experimental data and assist with data annotation and standardisation combine to produce a collection of resources that support analysis as well as sharing. Underlying semantic web resources additionally extract and serve SEEK metadata in RDF (Resource Description Framework). SEEK RDF enables rich semantic queries, both within SEEK and between related resources in the web of Linked Open Data. The SEEK platform has been adopted by many systems biology consortia across Europe. It is a data management environment that has a low barrier to uptake and provides rich resources for collaboration.
This paper provides an update on the functions and features of the SEEK software, and describes the use of the SEEK in the SysMO consortium (Systems biology for Micro-organisms), and the VLN (virtual Liver Network), two large systems biology initiatives with different research aims and different scientific communities.
Boulos, Maged N Kamel; Maramba, Inocencio; Wheeler, Steve
2006-01-01
Background We have witnessed a rapid increase in the use of Web-based 'collaborationware' in recent years. These Web 2.0 applications, particularly wikis, blogs and podcasts, have been increasingly adopted by many online health-related professional and educational services. Because of their ease of use and rapidity of deployment, they offer the opportunity for powerful information sharing and ease of collaboration. Wikis are Web sites that can be edited by anyone who has access to them. The word 'blog' is a contraction of 'Web log', an online Web journal that can offer a resource-rich multimedia environment. Podcasts are repositories of audio and video materials that can be "pushed" to subscribers, even without user intervention. These audio and video files can be downloaded to portable media players that can be taken anywhere, providing the potential for "anytime, anywhere" learning experiences (mobile learning). Discussion Wikis, blogs and podcasts are all relatively easy to use, which partly accounts for their proliferation. The fact that there are many free and open-source versions of these tools may also be responsible for their explosive growth. Thus it would be relatively easy to implement any or all of them within a health professions' educational environment. Paradoxically, some of their disadvantages also relate to their openness and ease of use. With virtually anybody able to alter, edit or otherwise contribute to the collaborative Web pages, it can be problematic to gauge the reliability and accuracy of such resources. While, arguably, the very process of collaboration leads to a Darwinian-type 'survival of the fittest' of content within a Web page, the veracity of these resources can be assured through careful monitoring, moderation, and operation of the collaborationware in a closed and secure digital environment. 
Empirical research is still needed to build our pedagogic evidence base about the different aspects of these tools in the context of medical/health education. Summary and conclusion If effectively deployed, wikis, blogs and podcasts could offer a way to enhance students', clinicians' and patients' learning experiences, and deepen levels of learners' engagement and collaboration within digital learning environments. Therefore, research should be conducted to determine the best ways to integrate these tools into existing e-Learning programmes for students, health professionals and patients, taking into account the different, but also overlapping, needs of these three audience classes and the opportunities of virtual collaboration between them. Of particular importance is research into novel integrative applications, to serve as the "glue" to bind the different forms of Web-based collaborationware synergistically in order to provide a coherent wholesome learning experience. PMID:16911779
A Security Architecture for Grid-enabling OGC Web Services
NASA Astrophysics Data System (ADS)
Angelini, Valerio; Petronzio, Luca
2010-05-01
In the proposed presentation we describe an architectural solution for enabling secure access to Grids, and possibly other large-scale on-demand processing infrastructures, through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia to collect and coordinate experiences in the enablement of OWSs on top of the gLite Grid middleware. G-OWS investigates the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security, the integration of OWS-compliant infrastructures and gLite Grids must address relevant challenges arising from their respective design principles: OWSs are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing (mostly Web-based) security systems with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure. 
Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set: respectively, the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specification and on the OGC GeoRM architectural framework. This makes it possible to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure chained web service requests. The typical use case is a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to G-OWS services and resources. Inside the G-OWS security system, access restrictions are applied using the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case his/her identity is translated into a temporary Grid security token using Short Lived Credential Services (IGTF standard). In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and also by the various gLite middleware elements to verify the user's grants. 
Building on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.
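The chained identity translation in the use case above (home-organization login, then a G-OWS identity with access attributes, then a short-lived Grid token) can be summarised as a small pipeline. The sketch below is a deliberately simplified model: every function name, field and value is assumed for illustration, and it omits certificates, GeoXACML evaluation and all real protocol details.

```python
# Simplified three-tier identity flow (all names/values are illustrative).

def authenticate_home_org(username: str, password: str) -> dict:
    """Stand-in for the user's organization security system (tier 1)."""
    if password != "secret":  # toy check only, no real credential handling
        raise PermissionError("authentication failed")
    return {"subject": username, "org": "example-university"}

def translate_to_gows(org_identity: dict) -> dict:
    """Map the federated identity to a G-OWS identity plus attributes (tier 2)."""
    return {
        "gows_id": f"gows:{org_identity['org']}/{org_identity['subject']}",
        "attributes": {"can_use_grid": True, "vo": "earth-science"},
    }

def issue_grid_token(gows_identity: dict) -> dict:
    """Issue a short-lived credential carrying the VO attribute, if entitled (tier 3)."""
    if not gows_identity["attributes"]["can_use_grid"]:
        raise PermissionError("not entitled to Grid access")
    return {
        "bearer": gows_identity["gows_id"],
        "vo": gows_identity["attributes"]["vo"],
        "lifetime_hours": 12,
    }

# A request crosses all three tiers before reaching the Grid.
token = issue_grid_token(translate_to_gows(authenticate_home_org("alice", "secret")))
```

The design point the sketch preserves is that each tier only trusts the tier before it: the Grid never sees the user's original credentials, only the short-lived token with the VO attribute attached.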
[HyperPsych--resources for medicine and psychology on the World Wide Web].
Laszig, P
1997-07-01
Progress in research on interactive communication technology, together with ever faster processing and transmission of information, has driven the development of computer networks that allow global access to scientific information and services. The best-known such network is the internet. With its integrative structure as both a communication-oriented and an information-oriented medium, the internet helps researchers design and conduct scientific work. Medicine and psychology in particular, as information-dependent scientific disciplines, can profit from this technology. As a means of coordinating access to the vast amount of medical and psychological data around the globe and of communicating with researchers worldwide, it opens up innovative possibilities for research, diagnosis and therapy. Currently, the World Wide Web is regarded as the most user-friendly and practical of the internet's resources. Following a systematic introduction to the applications of the WWW, this article discusses relevant resources, points out the possibilities and limits of network-supported scientific research, and proposes many uses of this new medium.
Linked data and provenance in biological data webs.
Zhao, Jun; Miles, Alistair; Klyne, Graham; Shotton, David
2009-03-01
The Web is now being used as a platform for publishing and linking life science data. The Web's linking architecture can be exploited to join heterogeneous data from multiple sources. However, as data are frequently being updated in a decentralized environment, provenance information becomes critical to providing reliable and trustworthy services to scientists. This article presents design patterns for representing and querying provenance information relating to mapping links between heterogeneous data from sources in the domain of functional genomics. We illustrate the use of named resource description framework (RDF) graphs at different levels of granularity to make provenance assertions about linked data, and demonstrate that these assertions are sufficient to support requirements including data currency, integrity, evidential support and historical queries.
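The named-graph pattern described above, where provenance is asserted *about* a graph of mapping links rather than about individual triples, can be illustrated with a toy quad store in plain Python. Real deployments would use an RDF library and SPARQL; every identifier below is made up for illustration.

```python
# Each statement is a quad: (named_graph, subject, predicate, object).
quads = [
    # Two versions of a probe-to-gene mapping, each in its own named graph.
    ("graph:mapping-v1", "probe:123", "maps_to", "gene:456"),
    ("graph:mapping-v2", "probe:123", "maps_to", "gene:789"),
    # Provenance assertions treat the named graphs themselves as subjects.
    ("graph:provenance", "graph:mapping-v1", "derived_from", "release:41"),
    ("graph:provenance", "graph:mapping-v2", "derived_from", "release:42"),
]

def triples_in(graph: str):
    """All (s, p, o) triples asserted inside one named graph."""
    return [(s, p, o) for g, s, p, o in quads if g == graph]

def provenance_of(graph: str):
    """Provenance statements whose subject is the given named graph."""
    return [(p, o) for s, p, o in triples_in("graph:provenance") if s == graph]
```

Because the granularity of a named graph is up to the publisher, the same mechanism supports currency queries ("which mapping came from the latest release?") and historical queries ("what did the mapping say under release 41?") without touching the mapping triples themselves.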
Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen
2015-01-01
The utility of web browsers for general-purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals; organizes and presents information in a modern, feed-like interface; provides access to a growing library of plugins that process these data, typically on a connected high-performance compute cluster; allows for easy data sharing between users and instances of ChRIS; and provides powerful 3D visualization and real-time collaboration.
Web-based platform for collaborative medical imaging research
NASA Astrophysics Data System (ADS)
Rittner, Leticia; Bento, Mariana P.; Costa, André L.; Souza, Roberto M.; Machado, Rubens C.; Lotufo, Roberto A.
2015-03-01
Medical imaging research depends fundamentally on the availability of large image collections, image processing and analysis algorithms, hardware, and a multidisciplinary research team. It has to be reproducible, free of errors, fast, accessible through a large variety of devices spread around research centers, and conducted simultaneously by a multidisciplinary team. Therefore, we propose a collaborative research environment, named Adessowiki, where tools and datasets are integrated and readily available on the Internet through a web browser. Moreover, processing history and all intermediate results are stored and displayed in automatically generated web pages for each object in the research project or clinical study. It requires no installation or configuration on the client side and offers centralized tools and specialized hardware resources, since processing takes place in the cloud.
Columba: an integrated database of proteins, structures, and annotations.
Trissl, Silke; Rother, Kristian; Müller, Heiko; Steinke, Thomas; Koch, Ina; Preissner, Robert; Frömmel, Cornelius; Leser, Ulf
2005-03-31
Structural and functional research often requires the computation of sets of protein structures based on certain properties of the proteins, such as sequence features, fold classification, or functional annotation. Compiling such sets using current web resources is tedious because the necessary data are spread over many different databases. To facilitate this task, we have created COLUMBA, an integrated database of annotations of protein structures. COLUMBA currently integrates twelve different databases, including PDB, KEGG, Swiss-Prot, CATH, SCOP, the Gene Ontology, and ENZYME. The database can be searched using either keyword search or data source-specific web forms. Users can thus quickly select and download PDB entries that, for instance, participate in a particular pathway, are classified as containing a certain CATH architecture, are annotated as having a certain molecular function in the Gene Ontology, and whose structures have a resolution under a defined threshold. The results of queries are provided in both machine-readable XML and human-readable format. The structures themselves can be viewed interactively on the web. The COLUMBA database facilitates the creation of protein structure data sets for many structure-based studies. It allows users to combine queries across a number of structure-related databases not covered by other projects at present. Thus, information on both many and few protein structures can be used efficiently. The web interface for COLUMBA is available at http://www.columba-db.de.
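The kind of multi-criteria selection COLUMBA supports can be illustrated with a small filter over annotated entries. The entries, field names and thresholds below are invented for the example and do not reflect COLUMBA's actual schema.

```python
# Hypothetical annotated PDB entries combining pathway, fold
# classification and experimental resolution from several sources.
entries = [
    {"pdb": "1ABC", "kegg_pathway": "glycolysis", "cath": "3.40.50", "resolution": 1.8},
    {"pdb": "2XYZ", "kegg_pathway": "tca-cycle", "cath": "3.40.50", "resolution": 2.9},
]

def select(entries, pathway=None, cath=None, max_resolution=None):
    """Return PDB IDs matching all supplied criteria (None = no constraint)."""
    out = []
    for e in entries:
        if pathway and e["kegg_pathway"] != pathway:
            continue
        if cath and not e["cath"].startswith(cath):
            continue
        if max_resolution and e["resolution"] > max_resolution:
            continue
        out.append(e["pdb"])
    return out
```

The point of the integrated database is that a single conjunctive query like `select(entries, pathway="glycolysis", max_resolution=2.0)` replaces manual cross-referencing across several web resources.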
The DIMA web resource--exploring the protein domain network.
Pagel, Philipp; Oesterheld, Matthias; Stümpflen, Volker; Frishman, Dmitrij
2006-04-15
Conserved domains represent essential building blocks of most known proteins. Owing to their role as modular components carrying out specific functions, they form a network based both on functional relations and direct physical interactions. We have previously shown that domain interaction networks provide substantially novel information with respect to networks built on full-length protein chains. In this work we present a comprehensive web resource for interactively exploring the Domain Interaction MAp (DIMA). The tool aims at integration of multiple data sources and prediction techniques, two of which have been implemented so far: domain phylogenetic profiling and experimentally demonstrated domain contacts from known three-dimensional structures. A powerful yet simple user interface enables the user to compute, visualize, navigate and download domain networks based on specific search criteria. http://mips.gsf.de/genre/proj/dima
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Bri-Mathias
2016-04-08
The primary objective of this work was to create a state-of-the-art national wind resource data set and to provide detailed wind plant output data for specific sites based on that data set. Corresponding retrospective wind forecasts were also included at all selected locations. The combined information from these activities was used to create the Wind Integration National Dataset (WIND), and an extraction tool was developed to allow web-based data access.
Hette-Tronquart, Nicolas; Oberdorff, Thierry; Tales, Evelyne; Zahm, Amandine; Belliard, Jérôme
2017-03-23
As part of the landscape, streams are influenced by land use. Here, we contributed to the understanding of the biological impacts of land use on streams, investigating how landscape effects vary with spatial scales (local vs. regional). We adopted a food web approach integrating both biological structure and functioning, to focus on the overall effect of land use on stream biocœnosis. We selected 17 sites of a small tributary of the Seine River (France) for their contrasted land use, and conducted a natural experiment by sampling three organic matter sources, three macroinvertebrate taxa, and most of the fish community. Using stable isotope analysis, we calculated three food web metrics evaluating two major dimensions of the trophic diversity displayed by the fish community: (i) the diversity of exploited resources and (ii) the trophic level richness. The idea was to examine whether (1) land-use effects varied according to spatial scales, (2) land use affected food webs through an effect on community structure and (3) land use affected food webs through an effect on available resources. Besides an increase in trophic diversity from upstream to downstream, our empirical data showed that food webs were influenced by land use in the riparian corridors (local scale). The effect was complex, and depended on each site's position along the upstream-downstream gradient. By contrast, land use in the catchment (regional scale) did not influence stream biocœnosis. At the local scale, community structure was weakly influenced by land use, and thus played a minor role in explaining food web modifications. Our results suggested that the amount of available resources at the base of the food web was partly responsible for food web modifications. In addition, changes in biological functioning (i.e. feeding interactions) can also explain another part of the land-use effect.
These results highlight the role played by riparian corridors as a buffer zone, and argue that riparian corridors should be at the centre of water-management attention.
Beyond accuracy: creating interoperable and scalable text-mining web services.
Wei, Chih-Hsuan; Leaman, Robert; Lu, Zhiyong
2016-06-15
The biomedical literature is a knowledge-rich resource and an important foundation for future research. With over 24 million articles in PubMed and an increasing growth rate, research in automated text processing is becoming increasingly important. We report here our recently developed web-based text mining services for biomedical concept recognition and normalization. Unlike most text-mining software tools, our web services integrate several state-of-the-art entity tagging systems (DNorm, GNormPlus, SR4GN, tmChem and tmVar) and offer a batch-processing mode able to process arbitrary text input (e.g. scholarly publications, patents and medical records) in multiple formats (e.g. BioC). We support multiple standards to make our service interoperable and allow simpler integration with other text-processing pipelines. To maximize scalability, we have preprocessed all PubMed articles, and use a computer cluster for processing large requests of arbitrary text. Our text-mining web service is freely available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#curl. Contact: Zhiyong.Lu@nih.gov. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
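A batch submission to a service like this amounts to POSTing raw text to a concept-specific endpoint. The sketch below only constructs the request without sending it; the endpoint layout (a placeholder host and a trigger/Submit path) is purely an assumption for illustration and is not the documented tmTools interface.

```python
import urllib.parse
import urllib.request

# Placeholder host; the real service URL and path scheme differ.
BASE = "https://example.org/tmtools/"

def build_batch_request(trigger, text):
    """Prepare (but do not send) a POST carrying raw text for annotation.

    `trigger` names the tagging system to invoke (e.g. "tmChem");
    the "/Submit/" suffix is a hypothetical convention for this sketch.
    """
    url = BASE + urllib.parse.quote(trigger) + "/Submit/"
    return urllib.request.Request(url, data=text.encode("utf-8"), method="POST")

req = build_batch_request("tmChem", "Aspirin inhibits COX-2.")
```

A client would then poll or fetch the annotated result; the interoperability point in the abstract is that the same plain-text-in, standard-format-out shape works for any of the integrated taggers.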
Casimage project: a digital teaching files authoring environment.
Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman
2004-04-01
The goal of the Casimage project is to offer an authoring and editing environment integrated with Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. This software is based on a client/server architecture allowing users remote access to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools needed to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections into the RSNA web network dedicated to teaching files. This software can be installed on any PACS workstation, allowing users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including a thoracic imaging collection that was subsequently made available on CD-ROM, on our web site, and through the MIRC network for public access.
COMAN: a web server for comprehensive metatranscriptomics analysis.
Ni, Yueqiong; Li, Jun; Panagiotou, Gianni
2016-08-11
Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding of microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. The COMAN pipeline includes quality control of raw reads and removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/. COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amounts of reads into data tables and high-standard figures. It is expected to help researchers with less expertise in bioinformatics answer microbiota-related biological questions, and to increase the accessibility and interpretation of microbiota RNA-Seq data.
The Web Resource Collaboration Center
ERIC Educational Resources Information Center
Dunlap, Joanna C.
2004-01-01
The Web Resource Collaboration Center (WRCC) is a web-based tool developed to help software engineers build their own web-based learning and performance support systems. Designed using various online communication and collaboration technologies, the WRCC enables people to: (1) build a learning and professional development resource that provides…
Design and performance of an integrated ground and space sensor web for monitoring active volcanoes.
NASA Astrophysics Data System (ADS)
Lahusen, Richard; Song, Wenzhan; Kedar, Sharon; Shirazi, Behrooz; Chien, Steve; Doubleday, Joshua; Davies, Ashley; Webb, Frank; Dzurisin, Dan; Pallister, John
2010-05-01
An interdisciplinary team of computer, earth and space scientists collaborated to develop a sensor web system for rapid deployment at active volcanoes. The primary goals of this Optimized Autonomous Space In situ Sensorweb (OASIS) are to: 1) integrate complementary space and in situ (ground-based) elements into an interactive, autonomous sensor web; 2) advance sensor web power and communication resource management technology; and 3) enable scalability for seamless addition of sensors and other satellites into the sensor web. This three-year project began with a rigorous multidisciplinary interchange that resulted in definition of system requirements to guide the design of the OASIS network and to achieve the stated project goals. Based on those guidelines, we have developed fully self-contained in situ nodes that integrate GPS, seismic, infrasonic and lightning (ash) detection sensors. The nodes in the wireless sensor network are linked to the ground control center through a mesh network that is highly optimized for remote geophysical monitoring. OASIS also features autonomous bidirectional interaction between ground nodes and instruments on the EO-1 space platform through continuous analysis and messaging capabilities at the command and control center. Data from both the in situ sensors and satellite-borne hyperspectral imaging sensors stream into a common database for real-time visualization and analysis by earth scientists. We have successfully completed a field deployment of 15 nodes within the crater and on the flanks of Mount St. Helens, Washington. The deployment demonstrated that sensor web technology facilitates rapid network deployments and enables real-time, continuous data acquisition. We are now optimizing component performance and improving user interaction for additional deployments at erupting volcanoes in 2010.
What's in a story? A text analysis of burn survivors' web-posted narratives.
Badger, Karen; Royse, David; Moore, Kelly
2011-01-01
Story-telling has been found to be beneficial following trauma, suggesting a potential intervention for burn survivors, who frequently make use of "telling their story" as part of their recovery. This study is the first to examine the word content of burn survivors' Web-posted narratives to explore their perceptions of the event, supportive resources, their post-burn well-being, and re-integration, using a comparison group and text analysis software developed by James Pennebaker. Suggestions for using expressive writing or story-telling as a guided psychosocial intervention with burn survivors are made.
A Linked Data-Based Collaborative Annotation System for Increasing Learning Achievements
ERIC Educational Resources Information Center
Zarzour, Hafed; Sellami, Mokhtar
2017-01-01
With the emergence of the Web 2.0, collaborative annotation practices have become more mature in the field of learning. In this context, several recent studies have shown the powerful effects of the integration of annotation mechanism in learning process. However, most of these studies provide poor support for semantically structured resources,…
Learning Engines - A Functional Object Model for Developing Learning Resources for the WWW.
ERIC Educational Resources Information Center
Fritze, Paul; Ip, Albert
The Learning Engines (LE) model, developed at the University of Melbourne (Australia), supports the integration of rich learning activities into the World Wide Web. The model is concerned with the practical design, educational value, and reusability of software components. The model is focused on the academic teacher who is in the best position to…
Building a Community of Readers: BookSpace
ERIC Educational Resources Information Center
Peterson, Glenn; McGlinn, Sharon Hilts
2008-01-01
In the spring of 2006, Hennepin County Library (HCL) in Minneapolis set out to create a compelling new reader's advisory website. Inspired by Web 2.0 principles, the library wanted to engage users as active participants, to integrate resources in ways that made sense to them, and to blend staff- and patron-contributed content to create an…
Research and Application of Knowledge Resources Network for Product Innovation
Li, Chuan; Li, Wen-qiang; Li, Yan; Na, Hui-zhen; Shi, Qian
2015-01-01
In order to enhance the capabilities of knowledge service in a product innovation design service platform, a method of acquiring knowledge resources supporting product innovation from the Internet and actively pushing knowledge to users is proposed. Through knowledge modeling for product innovation based on ontology, the integrated architecture of a knowledge resources network is put forward. The technology for the acquisition of network knowledge resources based on focused crawlers and web services is studied. Active knowledge push is provided by analysing user behavior and evaluating knowledge, in order to improve users' enthusiasm for participation in the platform. Finally, an application example is presented to demonstrate the effectiveness of the method. PMID:25884031
Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur
2013-03-01
Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user's query, advanced data searching based on the specified user's query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. 
search GenBank extends standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in search GenBank is a unique feature with great potential. The potential will further grow in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/.
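A macro in this sense is essentially a fixed choreography of eUtils calls. The sketch below builds the request URLs for an esearch-then-elink sequence using the documented eUtils query parameters (db, term, dbfrom, id); it only constructs the URLs and performs no network calls, and the example term and UID are illustrative.

```python
from urllib.parse import urlencode

# Documented base URL for NCBI Entrez Programming Utilities (eUtils).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/"

def esearch_url(db, term):
    """URL for finding UIDs in a database matching a query term."""
    return EUTILS + "esearch.fcgi?" + urlencode({"db": db, "term": term})

def elink_url(dbfrom, db, uid):
    """URL for traversing from a UID in one database to related records in another."""
    return EUTILS + "elink.fcgi?" + urlencode({"dbfrom": dbfrom, "db": db, "id": uid})

# A two-step "macro": search nucleotide records, then hop to related proteins.
macro = [
    esearch_url("nucleotide", "BRCA1[Gene] AND human[Organism]"),
    elink_url("nucleotide", "protein", "1234567"),
]
```

Saving such a sequence and replaying it with new inputs is what lets a macro automatically discover related data across databases, as the abstract describes.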
Web server for priority ordered multimedia services
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund
2001-10-01
In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content-addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams, which are fed into an autoregressive moving average (ARMA) based traffic-shaping circuitry before being transmitted through the network.
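The priority ordering described in this abstract can be sketched with a binary heap, where each service class maps to a numeric priority and requests within a class are served first-in, first-out. This is an illustrative sketch of the scheduling idea, not the paper's implementation; the class names are paraphrased from the abstract's priority order.

```python
import heapq

# Priority levels paraphrased from the abstract's ordering:
# admin read/write highest, Web write lowest.
PRIORITY = {
    "admin_rw": 0,
    "hot_multicast": 1,
    "cm_read": 2,
    "web_read": 3,
    "cm_write": 4,
    "web_write": 5,
}

class Scheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0  # monotonic counter: FIFO tie-break within a level

    def submit(self, kind, request):
        heapq.heappush(self._heap, (PRIORITY[kind], self._seq, request))
        self._seq += 1

    def next_request(self):
        """Pop the highest-priority (lowest-numbered) pending request."""
        return heapq.heappop(self._heap)[2]
```

The same pattern applies at each stage of the pipeline the abstract describes (service manager, disk scheduler, transmission scheduler), each consuming the priority-ordered output of the previous stage.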
Workflow and web application for annotating NCBI BioProject transcriptome data
Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A.; Barrero, Luz S.; Landsman, David
2017-01-01
The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. Database URL: http://www.ncbi.nlm.nih.gov/projects/physalis/ PMID:28605765
Linked Data: what does it offer Earth Sciences?
NASA Astrophysics Data System (ADS)
Cox, Simon; Schade, Sven
2010-05-01
'Linked Data' is a current buzz-phrase promoting access to various forms of data on the internet. It starts from the two principles that have underpinned the architecture and scalability of the World Wide Web: 1. Uniform Resource Identifiers - using the http protocol, which is supported by the DNS system. 2. Hypertext - in which URIs of related resources are embedded within a document. Browsing is the key mode of interaction, with traversal of links between resources under control of the client. Linked Data also adds, or re-emphasizes: • Content negotiation - whereby the client uses http headers to tell the service what representation of a resource is acceptable, • Semantic Web principles - formal semantics for links, following the RDF data model and encoding, and • The 'mashup' effect - in which original and unexpected value may emerge from reuse of data, even if published in raw or unpolished form. Linked Data promotes typed links to all kinds of data, so it is where the semantic web meets the 'deep web', i.e. resources which may be accessed using web protocols but are in representations not indexed by search engines. Earth sciences are data rich, but with a strong legacy of specialized formats managed and processed by disconnected applications. However, most contemporary research problems require a cross-disciplinary approach, in which the heterogeneity resulting from that legacy is a significant challenge. In this context, Linked Data clearly has much to offer the earth sciences. But there are some important questions to answer. What is a resource? Most earth science data is organized in arrays and databases. A subset useful for a particular study is usually identified by a parameterized query. The Linked Data paradigm emerged from the world of documents, and will often only resolve data-sets. It is impractical to create even nested navigation resources containing links to all potentially useful objects or subsets.
From the viewpoint of human user interfaces, the browse metaphor, which has been such an important part of the success of the web, must be augmented with other interaction mechanisms, including query. What are the impacts on search and metadata? Hypertext provides links selected by the page provider. However, science should endeavor to be exhaustive in its use of data. Resource discovery through links must be supplemented by more systematic data discovery through search. Conversely, the crawlers that generate search indexes must be fed by resource providers (a) serving navigation pages with links to every dataset (b) adding enough 'metadata' (semantics) on each link to effectively populate the indexes. Linked Data makes this easier due to its integration with semantic web technologies, including structured vocabularies. What is the relation between structured data and Linked Data? Linked Data has focused on web-pages (primarily HTML) for human browsing, and RDF for semantics, assuming that other representations are opaque. However, this overlooks the wealth of XML data on the web, some of which is structured according to XML Schemas that provide semantics. Technical applications can use content-negotiation to get a structured representation, and exploit its semantics. Particularly relevant for earth sciences are data representations based on OGC Geography Markup Language (GML), such as GeoSciML, O&M and MOLES. GML was strongly influenced by RDF, and typed links are intrinsic: xlink:href plays the role that rdf:resource does in RDF representations. Services which expose GML-formatted resources (such as OGC Web Feature Service) are a prototype of Linked Data. Giving credit where it is due. Organizations investing in data collection may be reluctant to publish the raw data prior to completing an initial analysis. 
To encourage early data publication the system must provide suitable incentives, and citation analysis must recognize the increasing diversity of publication routes and forms. Linked Data makes it easier to include rich citation information when data is both published and used.
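Content negotiation, named in this abstract as a Linked Data cornerstone, can be sketched as choosing the first acceptable representation from an HTTP Accept header. For simplicity this sketch honours header order and ignores q-value weighting, which a full implementation (per RFC 7231) would respect; the list of available media types is an invented example.

```python
# Representations a hypothetical Linked Data server can produce.
AVAILABLE = ["application/rdf+xml", "text/html"]

def negotiate(accept_header, available=AVAILABLE):
    """Return the first media type in the Accept header that the server
    can produce, treating */* as 'anything'; None if nothing matches.
    Simplification: ignores q-value weights and subtype wildcards."""
    for part in accept_header.split(","):
        media = part.split(";")[0].strip()
        if media in available:
            return media
        if media == "*/*":
            return available[0]
    return None
```

This is the mechanism by which a browser receives HTML while a technical client asking for `application/rdf+xml` receives the structured representation of the same resource URI.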
NASA Astrophysics Data System (ADS)
Arias, Carolina; Brovelli, Maria Antonia; Moreno, Rafael
2015-04-01
We are in an age when water resources are increasingly scarce and the impacts of human activities on them are ubiquitous. These problems don't respect administrative or political boundaries, and they must be addressed by integrating information from multiple sources at multiple spatial and temporal scales. Communication, coordination and data sharing are critical for addressing the water conservation and management issues of the 21st century. However, different countries, provinces, local authorities and agencies dealing with water resources have diverse organizational, socio-cultural, economic, environmental and information technology (IT) contexts that raise challenges to the creation of information systems capable of integrating and distributing information across their areas of responsibility in an efficient and timely manner. Tight and disparate financial resources, and dissimilar IT infrastructures (data, hardware, software and personnel expertise) further complicate the creation of these systems. There is a pressing need for distributed interoperable water information systems that are user friendly, easily accessible and capable of managing and sharing large volumes of spatial and non-spatial data. In a distributed system, data and processes are created and maintained in different locations, each with competitive advantages to carry out specific activities. Open Data (data that can be freely distributed) is available in the water domain, and it should be further promoted across countries and organizations. Compliance with Open Specifications for data collection, storage and distribution is the first step toward the creation of systems that are capable of interacting and exchanging data in a seamless (interoperable) way. The features of Free and Open Source Software (FOSS) offer low access cost, which facilitates scalability and long-term viability of information systems.
The World Wide Web (the Web) will be the platform of choice to deploy and access these systems. Geospatial capabilities for mapping, visualization, and spatial analysis will be important components of this new generation of Web-based interoperable information systems in the water domain. The purpose of this presentation is to increase the awareness of scientists, IT personnel and agency managers about the advantages offered by the combined use of Open Data, Open Specifications for geospatial and water-related data collection, storage and sharing, as well as mature FOSS projects for the creation of interoperable Web-based information systems in the water domain. A case study is used to illustrate how these principles and technologies can be integrated to create a system with the previously mentioned characteristics for managing and responding to flood events.
Atlas Basemaps in Web 2.0 Epoch
NASA Astrophysics Data System (ADS)
Chabaniuk, V.; Dyshlyk, O.
2016-06-01
The authors have analyzed their experience producing various Electronic Atlases (EA) and Atlas Information Systems (AtIS) of the so-called "classical type". These EA/AtIS were implemented in the past decade in the Web 1.0 architecture (e.g., the National Atlas of Ukraine, the Atlas of radioactive contamination of Ukraine, and others). One of the main distinguishing features of these atlases was their static nature: the end user could not change the content of the EA/AtIS. Base maps are a very important element of any EA/AtIS. In classical-type EA/AtIS they were static datasets consisting of two parts: topographic data at a fixed scale and data on the administrative-territorial division of Ukraine. It is important to note that the technique of topographic data production was based on direct channels of topographic entity observation (such as aerial photography) for the selected scale. Changes in the information technology of the past half-decade are characterized by the advent of the "Web 2.0 epoch". As a result, phenomena such as "neo-cartography" and various mapping platforms like OpenStreetMap have appeared in cartography. These changes have forced developers of EA/AtIS to use new atlas basemaps. Our approach is described in this article. The phenomena of neo-cartography and/or Web 2.0 cartography are analysed by the authors using the previously developed Conceptual framework of EA/AtIS. This framework logically explains the relations among cartographic phenomena of three formations: Web 1.0, Web 1.0x1.0 and Web 2.0. Atlas basemaps of the Web 2.0 epoch are integrated information systems. We use several ways to integrate separate atlas basemaps into an information system, by building a weakly integrated information system, a structured system, or a meta-system. The resulting integrated information system consists of several basemaps and falls under the definition of "big data".
In real projects, basemaps of three strata are already in use: Conceptual, Application and Operational. Several variants of the basemap can be used for each stratum. Furthermore, the developed integration methods allow the application of different types of basemaps within a specific EA/AtIS to be logically coordinated. For example, variants of the Conceptual stratum basemap, such as the National Map of Ukraine of our own production and external resources such as OpenStreetMap, are used with the help of meta-system replacement procedures. The authors propose a Conceptual framework of the basemap, which consists of the Conceptual solutions framework of the basemap and a few Application solutions frameworks of the basemap. The Conceptual framework is intended to be reused in many projects and to significantly reduce the resources required. We differentiate Application frameworks for mobile and non-mobile environments. The results of the research have been applied in several EA produced in 2014-2015 at the Institute of Geography of the National Academy of Sciences of Ukraine. One of them is the Atlas of emergency situations, which includes elements that work on mobile devices. At its core, it is a "ubiquitous" subset of the Atlas.
DisGeNET: a discovery platform for the dynamical exploration of human diseases and their genes.
Piñero, Janet; Queralt-Rosinach, Núria; Bravo, Àlex; Deu-Pons, Jordi; Bauer-Mehren, Anna; Baron, Martin; Sanz, Ferran; Furlong, Laura I
2015-01-01
DisGeNET is a comprehensive discovery platform designed to address a variety of questions concerning the genetic underpinning of human diseases. DisGeNET contains over 380,000 associations between >16,000 genes and 13,000 diseases, which makes it one of the largest repositories of its kind currently available. DisGeNET integrates expert-curated databases with text-mined data, covers information on Mendelian and complex diseases, and includes data from animal disease models. It features a score based on the supporting evidence to prioritize gene-disease associations. It is an open access resource available through a web interface, a Cytoscape plugin and as a Semantic Web resource. The web interface supports user-friendly data exploration and navigation. DisGeNET data can also be analysed via the DisGeNET Cytoscape plugin, and enriched with the annotations of other plugins of this popular network analysis software suite. Finally, the information contained in DisGeNET can be expanded and complemented using Semantic Web technologies and linked to a variety of resources already present in the Linked Data cloud. Hence, DisGeNET offers one of the most comprehensive collections of human gene-disease associations and a valuable set of tools for investigating the molecular mechanisms underlying diseases of genetic origin, designed to fulfill the needs of different user profiles, including bioinformaticians, biologists and health-care practitioners. Database URL: http://www.disgenet.org/ © The Author(s) 2015. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Yamada, Hiroshi; Kawaguchi, Akira
Grid computing and web service technologies enable us to use networked resources in a coordinated manner. An integrated service is made of individual services running on coordinated resources. In order to achieve such coordinated services autonomously, the initiator of a coordinated service needs to know detailed service resource information. This information ranges from static attributes like the IP address of the application server to highly dynamic ones like the CPU load. The best-known wide-area service discovery mechanism based on names is DNS. Its hierarchical tree organization and caching methods take advantage of the static nature of the information managed. However, in order to integrate business applications in a virtual enterprise, we need a discovery mechanism that searches for the optimal resources based on a given set of criteria (search keys). In this paper, we propose a communication protocol for exchanging service resource information among wide-area systems. We introduce the concept of the service domain, which consists of service providers managed under the same management policy. This concept is similar to that of autonomous systems (ASs). In each service domain, the service resource information provider manages the service resource information of the service providers that exist in that domain, and exchanges this information with the service resource information providers belonging to different service domains. We also verified the protocol's behavior and effectiveness using a simulation model developed for the proposed protocol.
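The criteria-based selection described in this abstract can be sketched roughly as follows. The registry contents, attribute names (cpu_load, domain) and the tie-breaking rule are illustrative assumptions, not part of the proposed protocol.

```python
# Hypothetical sketch of selecting an optimal service resource from
# exchanged service resource information. Attribute names and the
# registry contents are made-up examples, not the paper's protocol.

def select_service(registry, criteria):
    """Return the service whose attributes satisfy every criterion,
    preferring the lowest CPU load among the matches."""
    matches = [
        (attrs["cpu_load"], name)
        for name, attrs in registry.items()
        if all(attrs.get(k) == v for k, v in criteria.items())
    ]
    return min(matches)[1] if matches else None

registry = {
    "app-server-1": {"service": "billing", "domain": "east", "cpu_load": 0.7},
    "app-server-2": {"service": "billing", "domain": "east", "cpu_load": 0.2},
    "app-server-3": {"service": "billing", "domain": "west", "cpu_load": 0.1},
}

# Search keys: a billing service inside the "east" service domain.
best = select_service(registry, {"service": "billing", "domain": "east"})
```

In this toy form, static attributes (service type, domain) act as exact-match search keys while a dynamic attribute (CPU load) ranks the candidates, mirroring the static/dynamic split the abstract draws.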
Restful Implementation of Catalogue Service for Geospatial Data Provenance
NASA Astrophysics Data System (ADS)
Jiang, L. C.; Yue, P.; Lu, X. C.
2013-10-01
Provenance, also known as lineage, is important in understanding the derivation history of data products. Geospatial data provenance helps data consumers to evaluate the quality and reliability of geospatial data. In a service-oriented environment, where data are often consumed or produced by distributed services, provenance can be managed by following the same service-oriented paradigm. The Open Geospatial Consortium (OGC) Catalogue Service for the Web (CSW) is used for the registration and query of geospatial data provenance by extending the ebXML Registry Information Model (ebRIM). Recent advances in the REpresentational State Transfer (REST) paradigm have shown great promise for the easy integration of distributed resources. RESTful Web Services aim to provide a standard way for Web clients to communicate with servers based on REST principles. The existing approach to the provenance catalogue service could be improved by adopting a RESTful design. This paper presents the design and implementation of a catalogue service for geospatial data provenance following the RESTful architecture style. A middleware named REST Converter is added on top of the legacy catalogue service to support a RESTful style interface. The REST Converter is composed of a resource request dispatcher and six resource handlers. A prototype service is developed to demonstrate the applicability of the approach.
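A resource request dispatcher of the kind the REST Converter uses can be sketched as a mapping from (HTTP method, path pattern) pairs to handler functions. The route pattern and handler below are illustrative assumptions, not the prototype's actual interface.

```python
import re

# Hypothetical sketch of a RESTful resource request dispatcher: each
# (method, path pattern) pair is routed to a resource handler, in the
# spirit of the REST Converter described above. The provenance route
# and its response shape are made-up examples.

ROUTES = []

def route(method, pattern):
    """Decorator registering a handler for an HTTP method and path."""
    def register(handler):
        ROUTES.append((method, re.compile(pattern + "$"), handler))
        return handler
    return register

@route("GET", r"/provenance/(?P<record_id>\w+)")
def get_provenance(record_id):
    # A real handler would query the catalogue's ebRIM registry.
    return {"id": record_id, "lineage": "placeholder"}

def dispatch(method, path):
    """Find the first matching route and invoke its handler."""
    for m, pattern, handler in ROUTES:
        match = pattern.match(path)
        if m == method and match:
            return handler(**match.groupdict())
    return {"error": 404}

result = dispatch("GET", "/provenance/ds42")
```

The point of the middleware design is visible even at this scale: the legacy service is untouched, and the dispatcher translates resource-style URLs into calls against it.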
NASA Astrophysics Data System (ADS)
Foster, S. Q.; Carbone, L.; Gardiner, L.; Johnson, R.; Russell, R.; Advisory Committee, S.; Ammann, C.; Lu, G.; Richmond, A.; Maute, A.; Haller, D.; Conery, C.; Bintner, G.
2005-12-01
The Climate Discovery Exhibit at the National Center for Atmospheric Research (NCAR) Mesa Lab provides an exciting conceptual outline for the integration of several EPO activities with other well-established NCAR educational resources and programs. The exhibit is organized into four topic areas intended to build understanding among NCAR's 80,000 annual visitors, including 10,000 school children, about Earth system processes and the scientific methods contributing to a growing body of knowledge about climate and global change. These topics include: 'Sun-Earth Connections,' 'Climate Now,' 'Climate Past,' and 'Climate Future.' Exhibit text, graphics, film and electronic media, and interactives are developed and updated through collaborations between NCAR's climate research scientists and staff in the Office of Education and Outreach (EO) at the University Corporation for Atmospheric Research (UCAR). With funding from NCAR, paleoclimatologists have contributed data and ideas for a new exhibit Teachers' Guide unit about 'Climate Past.' This collection of middle-school-level, standards-aligned lessons is intended to help students gain understanding of how scientists use proxy data and direct observations to describe past climates. Two NASA EPOs have funded the development of 'Sun-Earth Connection' lessons, visual media, and tips for scientists and teachers. Integrated with related content and activities from the NASA-funded Windows to the Universe web site, these products have been adapted to form a second unit in the Climate Discovery Teachers' Guide about the Sun's influence on Earth's climate. Other lesson plans, previously developed through ongoing efforts of EO staff and NSF's previously funded Project Learn program, are providing content for a third Teachers' Guide unit on 'Climate Now': the dynamic atmospheric and geological processes that regulate Earth's climate.
EO plans to collaborate with NCAR climatologists and computer modelers in the next year to develop lessons and ancillary exhibit interactives and visualizations for the final Teachers' Guide unit about 'Climate Future.' Units developed so far are available in downloadable format on the NCAR EO and Windows to the Universe web sites for dissemination to educators and the general public. Those web sites are, respectively, http://eo.ucar.edu/educators/ClimateDiscovery and http://www.windows.ucar.edu. Encouragement from funding agencies to integrate and relate resources, and growing pressure to implement efficiencies in educational programs, have created excellent opportunities, which will be described from the viewpoints of EO staff and scientists. Challenges related to public and student perceptions of climate and global change and the scientific endeavor, and how to establish successful dialogues between educators and scientists, will also be discussed.
CerealsDB 3.0: expansion of resources and data integration.
Wilkinson, Paul A; Winfield, Mark O; Barker, Gary L A; Tyrrell, Simon; Bian, Xingdong; Allen, Alexandra M; Burridge, Amanda; Coghill, Jane A; Waterfall, Christy; Caccamo, Mario; Davey, Robert P; Edwards, Keith J
2016-06-24
The increase in human populations around the world has put pressure on resources, and as a consequence food security has become an important challenge for the 21st century. Wheat (Triticum aestivum) is one of the most important crops in human and livestock diets, and the development of wheat varieties that produce higher yields, combined with increased resistance to pests and resilience to changes in climate, has meant that wheat breeding has become an important focus of scientific research. In an attempt to facilitate these improvements in wheat, plant breeders have employed molecular tools to help them identify genes for important agronomic traits that can be bred into new varieties. Modern molecular techniques have ensured that the rapid and inexpensive characterisation of SNP markers, and their validation with modern genotyping methods, has produced a valuable resource that can be used in marker-assisted selection. CerealsDB was created as a means of quickly disseminating this information to breeders and researchers around the globe. CerealsDB version 3.0 is an online resource that contains a wide range of genomic datasets for wheat that will assist plant breeders and scientists to select the most appropriate markers for use in marker-assisted selection. CerealsDB includes a database which currently contains in excess of a million putative varietal SNPs, of which several hundred thousand have been experimentally validated. In addition, CerealsDB also contains new data on functional SNPs predicted to have a major effect on protein function, and we have constructed a web service to encourage data integration and high-throughput programmatic access. CerealsDB is an open access website that hosts information on SNPs that are considered useful for both plant breeders and research scientists.
The recent inclusion of web services designed to federate genomic data resources allows the information on CerealsDB to be more fully integrated with the WheatIS network and other biological databases.
Student Web Use, Columbia Earthscape, and Their Implications for Online Earth Science Resources
NASA Astrophysics Data System (ADS)
Haber, J.; Luby, M.; Wittenberg, K.
2002-12-01
For three years, Columbia Earthscape, www.earthscape.org, has served as a test bed for the development and evaluation of Web-based geoscience education. Last fall (EOS Trans. AGU, 82(47), Fall Meet. Suppl., Abstract ED11A-11, 2001), we described how librarian, scientist, instructor, and student feedback led to sweeping changes in interface and acquisitions. Further assessment has looked at the value of a central online resource for Earth-system science education in light of patterns of study. Columbia Earthscape aimed to create an authoritative resource that reflects the interconnectedness of the Internet, of the disciplines of Earth-systems science, and of research, education, and public policy. Evaluation thus has three parts. The editors and editorial advisory board have evaluated projects for the site for accuracy and relevance to the project's original context of Earth issues and topical mini-courses. Second, our research sought patterns of student use and library acquisition of Internet sources. Last, we asked if and how students benefit from Columbia Earthscape. We found, first, that while libraries are understandably reluctant to add online resources to strained budgets, almost all students work online; they vary almost solely in personal Web use. Second, Web use does not discourage use of print. Third, researchers often search Columbia Earthscape, but students, especially in schools, prefer browsing by topic of interest. Fourth, if they did not have this resource, most would surf, but many feel lost on the Web, and few say they can judge the quality of materials they used. Fifth, students found Columbia Earthscape helpful, relevant, and current, but most often for its research and policy materials. Many commented on issue-related collections original to Columbia Earthscape.
While we indeed intended our Classroom Models and Sample Syllabi primarily as aids to instructor course design, we conclude, first, that students stick to assigned materials and projects anyway. Second, these assignments put students in need of materials not originally meant for education and not easy for students to evaluate. Third, an online resource must not simply choose between a card-catalog model and a search model. In short, many have asked how scientists can support education and outreach and how curricula can integrate research and policy; but students already demand those connections, and a central online resource can help scientists, students, and the public by making them itself.
Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R
2012-01-01
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.
VizieR Data Extraction Disseminated through Widgets
NASA Astrophysics Data System (ADS)
Landais, G.; Boch, T.; Ochsenbein, F.; Simon, A.-C.
2015-09-01
The CDS widgets are a collection of web applications easily embeddable in web pages. The Apache Shindig framework, relying on the OpenSocial specification, makes it possible to reuse code in any web page by providing interactive output and broadcasting capabilities: for instance, the result of a search widget can be used to populate other widgets. Some of these widgets are already used in the VizieR web application. The “plot widget” is used to illustrate associated data, such as time series or spectra, coming from publications. The data, extracted with a SQL-like language (which can operate on different types of resources, such as FITS or ASCII files), are then disseminated in a “plot widget” that is ergonomic and offers advanced customization capabilities. The VizieR photometry viewer is the result of filter gathering and pipeline automation: its interface uses a dedicated widget that integrates three linked views: a photometry plot, a sky chart and the VizieR tabular data.
Meeting the challenge of finding resources for ophthalmic nurses on the World Wide Web.
Duffel, P G
1998-12-01
The World Wide Web ("the Web") is a macrocosm of resources that can be overwhelming. Often the sheer volume of material available causes one to give up in despair before finding information of any use. The Web is such a popular resource that it cannot be ignored. Two of the biggest challenges to finding good information on the Web are knowing where to start and judging whether the information gathered is pertinent and credible. This article addresses these two challenges and introduces the reader to a variety of ophthalmology and vision science resources on the World Wide Web.
Health and medication information resources on the World Wide Web.
Grossman, Sara; Zerilli, Tina
2013-04-01
Health care practitioners have increasingly used the Internet to obtain health and medication information. The vast number of Internet Web sites providing such information, and concerns about their reliability, make it essential for users to carefully select and evaluate Web sites prior to use. To this end, this article reviews the general principles to consider in this process. Moreover, as cost may limit access to subscription-based health and medication information resources with established reputability, freely accessible online resources that may serve as an invaluable addition to one's reference collection are highlighted. These include government- and organization-sponsored resources (eg, the US Food and Drug Administration Web site and the American Society of Health-System Pharmacists' Drug Shortage Resource Center Web site, respectively) as well as commercial Web sites (eg, Medscape, Google Scholar). Familiarity with such online resources can assist health care professionals in their ability to efficiently navigate the Web and may potentially expedite the information gathering and decision-making process, thereby improving patient care.
32 CFR 701.102 - Online resources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... online Web site (http://www.privacy.navy.mil). This Web site supplements this subpart and subpart G. It...) Web site (http://www.doncio.navy.mil). This Web site provides detailed guidance on PIAs. (c) DOD's PA Web site (http://www.defenselink.mil/privacy). This Web site is an excellent resource that contains a...
VectorBase: a data resource for invertebrate vector genomics
Lawson, Daniel; Arensburger, Peter; Atkinson, Peter; Besansky, Nora J.; Bruggner, Robert V.; Butler, Ryan; Campbell, Kathryn S.; Christophides, George K.; Christley, Scott; Dialynas, Emmanuel; Hammond, Martin; Hill, Catherine A.; Konopinski, Nathan; Lobo, Neil F.; MacCallum, Robert M.; Madey, Greg; Megy, Karine; Meyer, Jason; Redmond, Seth; Severson, David W.; Stinson, Eric O.; Topalis, Pantelis; Birney, Ewan; Gelbart, William M.; Kafatos, Fotis C.; Louis, Christos; Collins, Frank H.
2009-01-01
VectorBase (http://www.vectorbase.org) is an NIAID-funded Bioinformatic Resource Center focused on invertebrate vectors of human pathogens. VectorBase annotates and curates vector genomes providing a web accessible integrated resource for the research community. Currently, VectorBase contains genome information for three mosquito species: Aedes aegypti, Anopheles gambiae and Culex quinquefasciatus, a body louse Pediculus humanus and a tick species Ixodes scapularis. Since our last report VectorBase has initiated a community annotation system, a microarray and gene expression repository and controlled vocabularies for anatomy and insecticide resistance. We have continued to develop both the software infrastructure and tools for interrogating the stored data. PMID:19028744
A Semantic Web Management Model for Integrative Biomedical Informatics
Deus, Helena F.; Stanislaus, Romesh; Veiga, Diogo F.; Behrens, Carmen; Wistuba, Ignacio I.; Minna, John D.; Garner, Harold R.; Swisher, Stephen G.; Roth, Jack A.; Correa, Arlene M.; Broom, Bradley; Coombes, Kevin; Chang, Allen; Vogel, Lynn H.; Almeida, Jonas S.
2008-01-01
Background Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defy automated articulation among complementary efforts. The additional need in this field to manage property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high-throughput molecular data. Methodology/Principal Findings The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available as open source at www.s3db.org, was developed, and its proposed design has been made publicly available as an open-source instrument for shared, distributed data management. Conclusions/Significance Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development, we can expect both general-purpose productivity software and domain-specific software installed on our personal computers to become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis. PMID:18698353
SPARQL-enabled identifier conversion with Identifiers.org
Wimalaratne, Sarala M.; Bolleman, Jerven; Juty, Nick; Katayama, Toshiaki; Dumontier, Michel; Redaschi, Nicole; Le Novère, Nicolas; Hermjakob, Henning; Laibe, Camille
2015-01-01
Motivation: On the semantic web, in life sciences in particular, data is often distributed via multiple resources. Each of these sources is likely to use their own International Resource Identifier for conceptually the same resource or database record. The lack of correspondence between identifiers introduces a barrier when executing federated SPARQL queries across life science data. Results: We introduce a novel SPARQL-based service to enable on-the-fly integration of life science data. This service uses the identifier patterns defined in the Identifiers.org Registry to generate a plurality of identifier variants, which can then be used to match source identifiers with target identifiers. We demonstrate the utility of this identifier integration approach by answering queries across major producers of life science Linked Data. Availability and implementation: The SPARQL-based identifier conversion service is available without restriction at http://identifiers.org/services/sparql. Contact: sarala@ebi.ac.uk PMID:25638809
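The on-the-fly identifier conversion this abstract describes can be illustrated with a small sketch: one (namespace, local identifier) pair is expanded into every known URI form so that federated queries can match records across resources. The URI templates below are illustrative assumptions; the real Identifiers.org Registry holds far more patterns per namespace.

```python
# Sketch of identifier-variant generation in the spirit of the
# Identifiers.org service. The two UniProt URI templates are
# illustrative assumptions, not an exhaustive registry.

URI_TEMPLATES = {
    "uniprot": [
        "http://identifiers.org/uniprot/{id}",
        "http://purl.uniprot.org/uniprot/{id}",
    ],
}

def identifier_variants(prefix, local_id):
    """Expand one (namespace, local id) pair into all known URI forms,
    so a federated query can match source and target identifiers."""
    return [t.format(id=local_id) for t in URI_TEMPLATES.get(prefix, [])]

variants = identifier_variants("uniprot", "P12345")
```

In the actual service this expansion happens inside SPARQL via a federated sub-query against http://identifiers.org/services/sparql rather than in client code; the sketch only shows the pattern-to-variant idea.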
Ferderer, David A.
2001-01-01
Documented, reliable, and accessible data and information are essential building blocks supporting scientific research and applications that enhance society's knowledge base (fig. 1). The U.S. Geological Survey (USGS), a leading provider of science data, information, and knowledge, is uniquely positioned to integrate science and natural resource information to address societal needs. The USGS Central Energy Resources Team (USGS-CERT) provides critical information and knowledge on the quantity, quality, and distribution of the Nation's and the world's oil, gas, and coal resources. By using a life-cycle model, the USGS-CERT Data Management Project is developing an integrated data management system to (1) promote access to energy data and information, (2) increase data documentation, and (3) streamline product delivery to the public, scientists, and decision makers. The project incorporates web-based technology, data cataloging systems, data processing routines, and metadata documentation tools to improve data access, enhance data consistency, and increase office efficiency.
Sig2BioPAX: Java tool for converting flat files to BioPAX Level 3 format.
Webb, Ryan L; Ma'ayan, Avi
2011-03-21
The World Wide Web plays a critical role in enabling molecular, cell, systems and computational biologists to exchange, search, visualize, integrate, and analyze experimental data. Such efforts can be further enhanced through the development of semantic web concepts. The semantic web idea is to enable machines to understand data through protocol-free data exchange formats such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). These standards provide formal descriptors of objects, object properties and their relationships within a specific knowledge domain. However, the overhead of converting datasets typically stored in data tables such as Excel, text or PDF files into RDF or OWL formats is not trivial for non-specialists and as such creates a barrier to seamless data exchange between researchers, databases and analysis tools. This problem is of particular importance in the field of network systems biology, where biochemical interactions between genes and their protein products are abstracted into networks. For the purpose of converting biochemical interactions into the BioPAX format, the leading standard developed by the computational systems biology community, we developed an open-source command-line tool that takes as input tabular data describing different types of molecular biochemical interactions. The tool converts such interactions into the BioPAX Level 3 OWL format. We used the tool to convert several existing and new mammalian networks of protein interactions, signalling pathways, and transcriptional regulatory networks into BioPAX. Some of these networks were deposited into Pathway Commons, a repository for consolidating and organizing biochemical networks. The software tool Sig2BioPAX is a resource that enables experimental and computational systems biologists to contribute their identified networks and pathways of molecular interactions for integration and reuse by the rest of the research community.
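The core idea behind a flat-file-to-triples converter like the one described can be sketched as follows: each row of a tab-separated interaction table becomes one subject-predicate-object statement. Real BioPAX Level 3 output is far richer (typed classes, controlled vocabularies); the predicate names and URI scheme here are illustrative assumptions, not Sig2BioPAX's actual output.

```python
# Minimal sketch of converting a tabular interaction file into
# subject-predicate-object triples, the general idea behind tools
# like Sig2BioPAX. The example rows and the example.org URI base
# are made-up illustrations.

ROWS = """\
EGFR\tphosphorylates\tERK2
TP53\tactivates\tCDKN1A"""

def rows_to_triples(tsv, base="http://example.org/"):
    """Turn each tab-separated (source, interaction, target) row
    into one RDF-style triple of URIs."""
    triples = []
    for line in tsv.splitlines():
        subj, pred, obj = line.split("\t")
        triples.append((base + subj, base + pred, base + obj))
    return triples

triples = rows_to_triples(ROWS)
```

The barrier the abstract describes is precisely everything this sketch omits: mapping free-text interaction types onto BioPAX classes and emitting well-formed OWL rather than bare tuples.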
CHEMICAL STRUCTURE INDEXING OF TOXICITY DATA ON ...
Standardized chemical structure annotation of public toxicity databases and information resources is playing an increasingly important role in the 'flattening' and integration of diverse sets of biological activity data on the Internet. This review discusses public initiatives that are accelerating the pace of this transformation, with particular reference to toxicology-related chemical information. Chemical content annotators, structure locator services, large structure/data aggregator web sites, structure browsers, International Union of Pure and Applied Chemistry (IUPAC) International Chemical Identifier (InChI) codes, toxicity data models and public chemical/biological activity profiling initiatives are all playing a role in overcoming barriers to the integration of toxicity data, and are bringing researchers closer to the reality of a mineable chemical Semantic Web. An example of this integration of data is provided by the collaboration among researchers involved with the Distributed Structure-Searchable Toxicity (DSSTox) project, the Carcinogenic Potency Project, projects at the National Cancer Institute and the PubChem database.
Teaching and Learning: Web Engagement--Are We at the Next Level?
ERIC Educational Resources Information Center
Lindeman, Cheryl A.
2011-01-01
The challenge for those who are working with talented STEM students is to engage them with like-minded science leaders through direct contact and by using meaningful web resources. The author discovered new web resources by attending a workshop and by reading an alumni magazine. She introduced both web resources to her senior classes and…
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments to utilize the computing power of the millions of computers on the Internet, and use them towards running large scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models, and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation across thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.
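The queue-management pattern described above can be sketched server-side as a work queue that hands small spatial units to volunteer nodes and collects their results. A toy Python version (the actual framework runs JavaScript in volunteers' browsers against a relational database; all names here are hypothetical):

```python
from collections import deque

# Toy server-side work queue: the spatial domain is split into small work
# units, checked out by volunteer nodes, and results are aggregated.
# In the real framework the clients are browsers running JavaScript.

class WorkQueue:
    def __init__(self, cells):
        self.pending = deque(cells)      # work units not yet assigned
        self.results = {}                # cell id -> simulated value

    def checkout(self):
        """A volunteer node asks for the next work unit."""
        return self.pending.popleft() if self.pending else None

    def submit(self, cell, value):
        self.results[cell] = value

queue = WorkQueue(cells=[f"cell-{i}" for i in range(4)])

# Simulate volunteer nodes, each running a (fake) hydrologic kernel:
while (cell := queue.checkout()) is not None:
    queue.submit(cell, hash(cell) % 100)  # stand-in for a model run

total_cells_done = len(queue.results)
```

A production queue would also need lease timeouts and re-issuing of units whose volunteers disconnect, which is where the relational database mentioned in the abstract comes in.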
Web-based integrated public healthcare information system of Korea: development and performance.
Ryu, Seewon; Park, Minsu; Lee, Jaegook; Kim, Sung-Soo; Han, Bum Soo; Mo, Kyoung Chun; Lee, Hyung Seok
2013-12-01
The Web-based integrated public healthcare information system (PHIS) of Korea was planned and developed from 2005 to 2010, and it is being used in 3,501 regional health organizations. This paper introduces and discusses the development and performance of the system. We reviewed and examined documents about the development process and performance of the newly integrated PHIS. The resources we analyzed included the national plan for public healthcare, the information strategy for PHIS, and usage and performance reports of the system. The integrated PHIS included 19 functional business areas, 47 detailed health programs, and 48 inter-organizational tasks. The new PHIS improved the efficiency and effectiveness of the business process and inter-organizational business, and enhanced user satisfaction. Economic benefits were obtained from five categories: labor, health education and monitoring, clinical information management, administration and civil service, and system maintenance. The system received a patent from the Korean Intellectual Property Office and ISO 9001 accreditation. It was also reviewed and received preliminary comments about its originality, advancement, and business applicability under the Patent Cooperation Treaty. It has been found to enhance the quality of policy decision-making about regional healthcare at the self-governing local government level. PHIS, a Web-based integrated system, has contributed to the improvement of regional healthcare services in Korea. However, for the system to evolve appropriately, the needs and changing environments of community-level healthcare services and IT infrastructure should be analyzed properly in advance.
Web-Based Integrated Public Healthcare Information System of Korea: Development and Performance
Park, Minsu; Lee, Jaegook; Kim, Sung-Soo; Han, Bum Soo; Mo, Kyoung Chun; Lee, Hyung Seok
2013-01-01
Objectives The Web-based integrated public healthcare information system (PHIS) of Korea was planned and developed from 2005 to 2010, and it is being used in 3,501 regional health organizations. This paper introduces and discusses the development and performance of the system. Methods We reviewed and examined documents about the development process and performance of the newly integrated PHIS. The resources we analyzed included the national plan for public healthcare, the information strategy for PHIS, and usage and performance reports of the system. Results The integrated PHIS included 19 functional business areas, 47 detailed health programs, and 48 inter-organizational tasks. The new PHIS improved the efficiency and effectiveness of the business process and inter-organizational business, and enhanced user satisfaction. Economic benefits were obtained from five categories: labor, health education and monitoring, clinical information management, administration and civil service, and system maintenance. The system received a patent from the Korean Intellectual Property Office and ISO 9001 accreditation. It was also reviewed and received preliminary comments about its originality, advancement, and business applicability under the Patent Cooperation Treaty. It has been found to enhance the quality of policy decision-making about regional healthcare at the self-governing local government level. Conclusions PHIS, a Web-based integrated system, has contributed to the improvement of regional healthcare services in Korea. However, for the system to evolve appropriately, the needs and changing environments of community-level healthcare services and IT infrastructure should be analyzed properly in advance. PMID:24523997
Developing a kidney and urinary pathway knowledge base
2011-01-01
Background Chronic renal disease is a global health problem. The identification of suitable biomarkers could facilitate early detection and diagnosis and allow better understanding of the underlying pathology. One of the challenges in meeting this goal is the necessary integration of experimental results from multiple biological levels for further analysis by data mining. Data integration in the life sciences is still a struggle, and many groups are looking to the benefits promised by the Semantic Web for data integration. Results We present a Semantic Web approach to developing a knowledge base that integrates data from high-throughput experiments on kidney and urine. A specialised KUP ontology is used to tie the various layers together, whilst background knowledge from external databases is incorporated by conversion into RDF. Using SPARQL as a query mechanism, we are able to query for proteins expressed in urine and place these back into the context of genes expressed in regions of the kidney. Conclusions The KUPKB gives KUP biologists the means to ask queries across many resources in order to aggregate knowledge that is necessary for answering biological questions. The Semantic Web technologies we use, together with the background knowledge from the domain’s ontologies, allow both rapid conversion and integration of this knowledge base. The KUPKB is still relatively small, and questions remain about scalability, maintenance and availability of the knowledge itself. Availability The KUPKB may be accessed via http://www.e-lico.eu/kupkb. PMID:21624162
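The query described, finding proteins expressed in urine and relating them back to kidney regions, boils down to matching triple patterns. A self-contained Python toy with a hand-rolled matcher over hypothetical KUP-style triples (the real KUPKB uses SPARQL over RDF; the predicate and identifiers below are illustrative, not the KUP ontology):

```python
# Toy triple store illustrating the SPARQL-style query behind the KUPKB use
# case. Predicates and identifiers are hypothetical, not the KUP ontology.

triples = [
    ("UMOD", "expressedIn", "urine"),
    ("UMOD", "expressedIn", "thick_ascending_limb"),
    ("AQP2", "expressedIn", "urine"),
    ("AQP2", "expressedIn", "collecting_duct"),
    ("ALB",  "expressedIn", "plasma"),
]

def match(pattern, store):
    """Match a triple pattern; '?' marks a variable position."""
    s, p, o = pattern
    return [(ts, tp, to) for ts, tp, to in store
            if (s == "?" or s == ts)
            and (p == "?" or p == tp)
            and (o == "?" or o == to)]

# "Which proteins are found in urine, and in which kidney regions are they
# also expressed?" -- the two-step join a SPARQL query would express.
urine_proteins = {s for s, _, _ in match(("?", "expressedIn", "urine"), triples)}
regions = {s: [o for _, _, o in match((s, "expressedIn", "?"), triples)
               if o != "urine"]
           for s in urine_proteins}
```

In SPARQL the same join is a single graph pattern with two triple patterns sharing a variable; the hand matcher above just makes the mechanics visible.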
Web-Based Tools for Data Visualization and Decision Support for South Asia
NASA Astrophysics Data System (ADS)
Jones, N.; Nelson, J.; Pulla, S. T.; Ames, D. P.; Souffront, M.; David, C. H.; Zaitchik, B. F.; Gatlin, P. N.; Matin, M. A.
2017-12-01
The objective of the NASA SERVIR project is to assist developing countries in using information provided by Earth observing satellites to assess and manage climate risks, land use, and water resources. We present a collection of web apps that integrate earth observations and in situ data to facilitate deployment of data and water resources models as decision-making tools in support of this effort. The interactive nature of web apps makes this an excellent medium for creating decision support tools that harness cutting-edge modeling techniques. Thin-client apps hosted in a cloud portal eliminate the need for decision makers to procure and maintain the high-performance hardware required by the models, deal with issues related to software installation and platform incompatibilities, or monitor and install software updates, a problem that is exacerbated for many of the regional SERVIR hubs where both financial and technical capacity may be limited. All that is needed to use the system is an Internet connection and a web browser. We take advantage of these technologies to develop tools which can be centrally maintained but openly accessible. Advanced mapping and visualization make results intuitive and the derived information actionable. We also take advantage of the emerging standards for sharing water information across the web using the OGC- and WMO-approved WaterML standards. This makes our tools interoperable and extensible via application programming interfaces (APIs) so that tools and data from other projects can both consume and share the tools developed in our project. Our approach enables the integration of multiple types of data and models, thus facilitating collaboration between science teams in SERVIR. The apps developed thus far by our team process time-varying netCDF files from Earth observations and large-scale computer simulations and allow visualization and exploration via raster animation and extraction of time series at selected points and/or regions.
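Extracting a point time series from gridded model output, as the apps described above do with netCDF files, reduces to locating the nearest grid cell and slicing along the time axis. A dependency-free Python sketch with a synthetic grid (real implementations would read netCDF via libraries such as netCDF4 or xarray):

```python
# Nearest-gridpoint time-series extraction from a (time, lat, lon) grid.
# Pure-Python sketch with a synthetic grid; production code would read
# actual netCDF files via netCDF4/xarray.

lats = [10.0, 10.5, 11.0]
lons = [76.0, 76.5, 77.0]
# data[t][i][j]: value at time step t, lat index i, lon index j
data = [[[t + i + j for j in range(3)] for i in range(3)] for t in range(4)]

def nearest_index(value, axis):
    """Index of the axis coordinate closest to the requested value."""
    return min(range(len(axis)), key=lambda k: abs(axis[k] - value))

def point_series(lat, lon):
    i = nearest_index(lat, lats)
    j = nearest_index(lon, lons)
    return [data[t][i][j] for t in range(len(data))]

series = point_series(10.6, 76.9)  # snaps to grid cell (10.5, 77.0)
```

Region-averaged series, the other extraction mode mentioned, would loop the same lookup over all cells inside the region's bounds and average per time step.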
Web-based courses. More than curriculum.
Mills, M E; Fisher, C; Stair, N
2001-01-01
Online program development depends on an educationally and technologically sound curriculum supported by a solid infrastructure. Creation of a virtual environment through design of online registration and records, financial aid, orientation, advisement, resources, and evaluation and assessment provides students with access and program integrity. The planning of an academic support system as an electronic environment provides challenges and institutional issues requiring systematic analysis.
Pre-Service Teachers and Search Engines: Prior Knowledge and Instructional Implications
ERIC Educational Resources Information Center
Colaric, Susan M.; Fine, Bethann; Hofmann, William
2004-01-01
There is a wealth of information available on the World Wide Web that can assist pre-service teachers in their course studies. Yet observation of students in a technology integration class indicated that students were not able to find resources efficiently or reliably. The purpose of this study was to establish a baseline of what undergraduate,…
ERIC Educational Resources Information Center
Baskerville, Delia
2012-01-01
Continuing emphasis given to computer technology resourcing in schools presents potential for web-based initiatives which focus on quality arts teaching and learning, as ways to improve arts outcomes for all students. An arts e-learning collaborative research project between specialist on-line teacher/researchers and generalist primary teachers…
RESTFul based heterogeneous Geoprocessing workflow interoperation for Sensor Web Service
NASA Astrophysics Data System (ADS)
Yang, Chao; Chen, Nengcheng; Di, Liping
2012-10-01
Advanced sensors on board satellites offer detailed Earth observations. A workflow is one approach for designing, implementing and constructing a flexible and live link between these sensors' resources and users. It can coordinate, organize and aggregate distributed Sensor Web services to meet the requirements of a complex Earth observation scenario. A RESTful-based workflow interoperation method is proposed to integrate heterogeneous workflows into an interoperable unit. The Atom protocols are applied to describe and manage workflow resources. The XML Process Definition Language (XPDL) and Business Process Execution Language (BPEL) workflow standards are applied to structure a workflow that accesses sensor information and one that processes it separately. Then, a scenario for nitrogen dioxide (NO2) from a volcanic eruption is used to investigate the feasibility of the proposed method. The RESTful-based workflow interoperation system can describe, publish, discover, access and coordinate heterogeneous Geoprocessing workflows.
Bittorf, A.; Diepgen, T. L.
1996-01-01
The World Wide Web (WWW) is becoming the major way of acquiring information in all scientific disciplines as well as in business. It is very well suited for fast distribution and exchange of up-to-date teaching resources. However, to date most teaching applications on the Web do not use its full power by integrating interactive components. We have set up a computer-based training (CBT) framework for Dermatology, which consists of dynamic lecture scripts, case reports, an atlas and a quiz system. All these components heavily rely on an underlying image database that permits the creation of dynamic documents. We used a daemon process that keeps the database open and can be accessed over HTTP, achieving better performance and avoiding the overhead of starting CGI processes. The result of our evaluation was very encouraging. PMID:8947625
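The performance idea here, keeping the database connection open in a long-lived daemon rather than paying CGI's per-request process startup cost, can be sketched as a request handler that closes over a single open connection. Illustrative modern Python with sqlite3 standing in for the original image database (the 1996 system naturally predates this stack; table and field names are hypothetical):

```python
import sqlite3

# Sketch of the daemon idea: one long-lived process holds the database open
# and serves many requests, instead of a fresh CGI process (and a fresh DB
# connection) being started per request.

conn = sqlite3.connect(":memory:")       # opened once, at daemon startup
conn.execute("CREATE TABLE images (id INTEGER, diagnosis TEXT)")
conn.executemany("INSERT INTO images VALUES (?, ?)",
                 [(1, "psoriasis"), (2, "eczema"), (3, "psoriasis")])

def handle_request(diagnosis):
    """Serve one 'HTTP request' by reusing the already-open connection."""
    rows = conn.execute(
        "SELECT id FROM images WHERE diagnosis = ? ORDER BY id",
        (diagnosis,)).fetchall()
    return [r[0] for r in rows]

# Many requests, zero per-request connection setup:
first = handle_request("psoriasis")
second = handle_request("eczema")
```

This is the same trade-off that later motivated FastCGI and today's long-running application servers: amortize expensive setup across requests instead of repeating it.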
Proteopedia: Exciting Advances in the 3D Encyclopedia of Biomolecular Structure
NASA Astrophysics Data System (ADS)
Prilusky, Jaime; Hodis, Eran; Sussman, Joel L.
Proteopedia is a collaborative, 3D web-encyclopedia of protein, nucleic acid and other structures. Proteopedia ( http://www.proteopedia.org ) presents 3D biomolecule structures in a broadly accessible manner to a diverse scientific audience through easy-to-use molecular visualization tools integrated into a wiki environment that anyone with a user account can edit. We describe recent advances in the web resource in the areas of content and software. In terms of content, we describe a large growth in user-added content as well as improvements in automatically-generated content for all PDB entry pages in the resource. In terms of software, we describe new features ranging from the capability to create pages hidden from public view to the capability to export pages for offline viewing. New software features also include an improved file-handling system and availability of biological assemblies of protein structures alongside their asymmetric units.
Pereira, Marta Cristiane Alves; Melo, Márcia Regina Antonietto da Costa; Silva, Adriana Serafim Bispo E; Evora, Yolanda Dora Martinez
2010-01-01
The learning process mediated by information and communication technology has considerable importance in the current context. This study describes the evaluation of a WebQuest on the theme "Management of Material Resources in Nursing". It was developed in three stages: Stage 1 consisted of its pedagogical aspect, that is, elaboration and definition of content; Stage 2 involved the organization of content, inclusion of images and completion; Stage 3 corresponded to its availability to students. Results confirm the importance of information technology as an instrument for a mediating teaching practice that integrates valid knowledge with the complex and dynamic reality of health services. Given the students' favorable evaluation of its closeness to the reality of nursing work and their satisfaction at performing the activity successfully, the WebQuest method was considered valid and innovative for the teaching-learning process.
The RCSB Protein Data Bank: views of structural biology for basic and applied research and education
Rose, Peter W.; Prlić, Andreas; Bi, Chunxiao; Bluhm, Wolfgang F.; Christie, Cole H.; Dutta, Shuchismita; Green, Rachel Kramer; Goodsell, David S.; Westbrook, John D.; Woo, Jesse; Young, Jasmine; Zardecki, Christine; Berman, Helen M.; Bourne, Philip E.; Burley, Stephen K.
2015-01-01
The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine. PMID:25428375
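Mining the archive's data files, as the web services and software components above support, often starts with the classic fixed-column PDB format, where each ATOM record encodes fields at fixed character positions. A minimal parsing sketch (column positions follow the PDB format specification; for production work, maintained parsers such as Biopython's are preferable):

```python
# Parse PDB-format ATOM records by fixed column positions (per the PDB
# format specification). Minimal sketch with two synthetic records; real
# pipelines should use a maintained parser such as Biopython's.

PDB_LINES = [
    "ATOM      1  N   MET A   1      38.012  12.013  -4.912  1.00 20.00           N",
    "ATOM      2  CA  MET A   1      39.308  12.567  -4.512  1.00 19.50           C",
]

def parse_atom(line):
    return {
        "serial":  int(line[6:11]),      # columns 7-11
        "name":    line[12:16].strip(),  # columns 13-16
        "resname": line[17:20].strip(),  # columns 18-20
        "chain":   line[21],             # column 22
        "resseq":  int(line[22:26]),     # columns 23-26
        "xyz":     (float(line[30:38]),  # columns 31-38
                    float(line[38:46]),  # columns 39-46
                    float(line[46:54])), # columns 47-54
    }

atoms = [parse_atom(l) for l in PDB_LINES if l.startswith("ATOM")]
```

The same fields are available with less fragility from the archive's mmCIF files and web services, which is one reason the resource emphasizes its APIs for programmatic mining.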
Promoting Science-based Governance through Monitoring Changes to Agencies' Public Presentation
NASA Astrophysics Data System (ADS)
Rinberg, A.; Bergman, A.
2017-12-01
As the scientific basis for the missions of agencies like the Environmental Protection Agency has come under attack this year, political appointees and career staff alike have made changes to public-facing agency web content and resources. These changes have resulted in scientific information and findings being removed from the government's web presence and have reduced access to resources that inform the public about important topics like climate change and clean water. But these removals also obscure the work that the federal government has done and the role it is legally required to play, which is often based squarely on key scientific findings. By monitoring changes to federal agency websites and ensuring that the public continues to be informed about the government's role in using science to improve the public's well-being, we can help retain the integrity of these important agencies and bolster their public support.
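The monitoring workflow described, detecting when public-facing pages change and what was removed, can be sketched as snapshot hashing plus line diffing. A stdlib-only Python illustration (real monitoring efforts use crawlers and archived snapshots; the page text below is a hard-coded stand-in):

```python
import difflib
import hashlib

# Sketch of website-change monitoring: hash each crawled snapshot to detect
# change cheaply, then diff the text to see what was removed. The snapshots
# here are hard-coded stand-ins for crawled page content.

snapshot_jan = "EPA works to address climate change.\nClean water rules apply.\n"
snapshot_jun = "Clean water rules apply.\n"

def fingerprint(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

changed = fingerprint(snapshot_jan) != fingerprint(snapshot_jun)

removed_lines = [
    line[2:] for line in difflib.ndiff(
        snapshot_jan.splitlines(), snapshot_jun.splitlines())
    if line.startswith("- ")
]
```

Hashing first keeps the expensive diff step limited to pages that actually changed, which matters when tracking thousands of agency URLs over time.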
Web-based education in anesthesiology: a critical overview.
Doyle, D John
2008-12-01
The purpose of this review is to discuss the rise of web-based educational resources available to the anesthesiology community. Recent developments of particular importance include the growth of 'Web 2.0' resources, the development of the concepts of 'open access' and 'information philanthropy', and the expansion of web-based medical simulation software products. In addition, peer review of online educational resources has now come of age. The worldwide web has made available a large variety of valuable medical information and education resources only dreamed of two decades ago. To a large extent, these developments represent a shift in the focus of medical education resources to emphasize free access to materials and to encourage collaborative development efforts.
TISSUES 2.0: an integrative web resource on mammalian tissue expression
Palasca, Oana; Santos, Alberto; Stolte, Christian; Gorodkin, Jan; Jensen, Lars Juhl
2018-01-01
Abstract Physiological and molecular similarities between organisms make it possible to translate findings from simpler experimental systems—model organisms—into more complex ones, such as human. This translation facilitates the understanding of biological processes under normal or disease conditions. Researchers aiming to identify the similarities and differences between organisms at the molecular level need resources collecting multi-organism tissue expression data. We have developed a database of gene–tissue associations in human, mouse, rat and pig by integrating multiple sources of evidence: transcriptomics covering all four species and proteomics (human only), manually curated and mined from the scientific literature. Through a scoring scheme, these associations are made comparable across all sources of evidence and across organisms. Furthermore, the scoring produces a confidence score assigned to each of the associations. The TISSUES database (version 2.0) is publicly accessible through a user-friendly web interface and as part of the STRING app for Cytoscape. In addition, we analyzed the agreement between datasets, across and within organisms, and identified that the agreement is mainly affected by the quality of the datasets rather than by the technologies used or organisms compared. Database URL: http://tissues.jensenlab.org/ PMID:29617745
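One simple way to combine heterogeneous evidence channels into a single confidence value, in the spirit of the scoring scheme described above (though explicitly not the actual TISSUES 2.0 formula), is a noisy-OR over per-channel scores in [0, 1]:

```python
# Noisy-OR combination of per-channel evidence scores into one confidence
# value. Illustrative only -- this is NOT the actual TISSUES 2.0 scheme;
# channel names and scores are hypothetical.

def combine(scores):
    """Confidence that at least one evidence channel is correct."""
    p_all_wrong = 1.0
    for s in scores.values():
        p_all_wrong *= (1.0 - s)
    return 1.0 - p_all_wrong

evidence = {
    "transcriptomics": 0.6,   # e.g. RNA-seq support
    "proteomics":      0.5,   # human only
    "text_mining":     0.3,
}
confidence = combine(evidence)
```

The key property any such scheme needs, and which the database's own scoring provides, is that scores from different technologies and organisms end up on one comparable scale before being combined.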
NASA Astrophysics Data System (ADS)
Cuttler, R. T. H.; Tonner, T. W. W.; Al-Naimi, F. A.; Dingwall, L. M.; Al-Hemaidi, N.
2013-07-01
The development of the Qatar National Historic Environment Record (QNHER) by the Qatar Museums Authority and the University of Birmingham in 2008 was based on a customised, bilingual Access database and ArcGIS. While both platforms are stable and well supported, neither was designed for the documentation and retrieval of cultural heritage data. As a result it was decided to develop a custom application using Open Source code. The core module of this application is now completed and is orientated towards the storage and retrieval of geospatial heritage data for the curation of heritage assets. Based on MIDAS Heritage data standards and regionally relevant thesauri, it is a truly bilingual system. Significant attention has been paid to the user interface, which is user-friendly and intuitive. Based on a suite of web services and accessed through a web browser, the system makes full use of internet resources such as Google Maps and Bing Maps. The application avoids long-term vendor "tie-ins" and, as a fully integrated data management system, is now an important tool for both cultural resource managers and heritage researchers in Qatar.
OLIVER: an online library of images for veterinary education and research.
McGreevy, Paul; Shaw, Tim; Burn, Daniel; Miller, Nick
2007-01-01
As part of a strategic move by the University of Sydney toward increased flexibility in learning, the Faculty of Veterinary Science undertook a number of developments involving Web-based teaching and assessment. OLIVER underpins them by providing a rich, durable repository for learning objects. To integrate Web-based learning, case studies, and didactic presentations for veterinary and animal science students, we established an online library of images and other learning objects for use by academics in the Faculties of Veterinary Science and Agriculture. The objectives of OLIVER were to maximize the use of the faculty's teaching resources by providing a stable archiving facility for graphic images and other multimedia learning objects that allows flexible and precise searching, integrating indexing standards, thesauri, pull-down lists of preferred terms, and linking of objects within cases. OLIVER offers a portable and expandable Web-based shell that facilitates ongoing storage of learning objects in a range of media. Learning objects can be downloaded in common, standardized formats so that they can be easily imported for use in a range of applications, including Microsoft PowerPoint, WebCT, and Microsoft Word. OLIVER now contains more than 9,000 images relating to many facets of veterinary science; these are annotated and supported by search engines that allow rapid access to both images and relevant information. The Web site is easily updated and adapted as required.
RAIN: RNA–protein Association and Interaction Networks
Junge, Alexander; Refsgaard, Jan C.; Garde, Christian; Pan, Xiaoyong; Santos, Alberto; Alkan, Ferhat; Anthon, Christian; von Mering, Christian; Workman, Christopher T.; Jensen, Lars Juhl; Gorodkin, Jan
2017-01-01
Protein association networks can be inferred from a range of resources including experimental data, literature mining and computational predictions. These types of evidence are emerging for non-coding RNAs (ncRNAs) as well. However, integration of ncRNAs into protein association networks is challenging due to data heterogeneity. Here, we present a database of ncRNA–RNA and ncRNA–protein interactions and its integration with the STRING database of protein–protein interactions. These ncRNA associations cover four organisms and have been established from curated examples, experimental data, interaction predictions and automatic literature mining. RAIN uses an integrative scoring scheme to assign a confidence score to each interaction. We demonstrate that RAIN outperforms the underlying microRNA-target predictions in inferring ncRNA interactions. RAIN can be operated through an easily accessible web interface and all interaction data can be downloaded. Database URL: http://rth.dk/resources/rain PMID:28077569
MetaSEEk: a content-based metasearch engine for images
NASA Astrophysics Data System (ADS)
Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu
1997-12-01
Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time-consuming. The integration of such search tools enables users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.
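The selection mechanism described, ranking target engines by past performance per query class and refining the ranking with user feedback, can be sketched with an exponentially weighted score per (engine, query class) pair. A toy Python version (hypothetical engine names and update rule, not MetaSEEk's actual algorithm):

```python
# Toy meta-search engine selector: keep a running performance score per
# (engine, query class), update it from user feedback, and recommend the
# best engines for the next query of that class. Names are hypothetical.

ALPHA = 0.3  # feedback learning rate

scores = {}  # (engine, query_class) -> score in [0, 1]

def feedback(engine, query_class, satisfaction):
    """Blend new user feedback (0..1) into the running score."""
    key = (engine, query_class)
    old = scores.get(key, 0.5)          # neutral prior for unseen pairs
    scores[key] = (1 - ALPHA) * old + ALPHA * satisfaction

def recommend(query_class, engines, top=2):
    return sorted(engines,
                  key=lambda e: scores.get((e, query_class), 0.5),
                  reverse=True)[:top]

engines = ["engineA", "engineB", "engineC"]
feedback("engineA", "texture", 1.0)   # user liked engineA's texture results
feedback("engineB", "texture", 0.0)   # disliked engineB's
best = recommend("texture", engines)
```

The baseline system the paper compares against corresponds to skipping the `feedback` updates entirely, so every engine keeps the neutral prior and selection is effectively unranked.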
Della Seta, Maurella; Sellitri, Cinzia
2004-01-01
The research project "Collection and dissemination of bioethical information through an integrated electronic system", started in 2001 by the Istituto Superiore di Sanità (ISS), had among its objectives the realization of an integrated system for data collection and exchange of documents related to bioethics. The system should act as a reference tool for those research activities impacting on citizens' health and welfare. This paper aims at presenting some initiatives developed in the project framework in order to establish an Italian documentation network, among which: a) exchange of ISS publications with Italian institutions active in this field; b) a survey, through a questionnaire, aimed at assessing Italian informative resources and the state of the art and holdings of documentation centres and ethical committees; c) analysis of Italian Internet resources. The results of the survey, together with the analysis of web sites, show that at present in Italy there are many interesting initiatives for collecting and disseminating documentation in the bioethics field, but there is an urgent need for integration of such resources. Generally speaking, ethical committees need greater availability of documents, while there is good potential for the establishment of an electronic network for document retrieval and delivery.
Use of Web Resources in the Journal Literature 2001 and 2007: A Cross-Disciplinary Study
ERIC Educational Resources Information Center
Zhang, Li
2011-01-01
This article examines Web resources in research articles from 30 scholarly journals in disciplines across the sciences, social sciences, and humanities. The purpose of the study is to report the degree to which scholars make use of Web-based resources in the journal literature and to identify Web citation characteristics within different subject…
GDA, a web-based tool for Genomics and Drugs integrated analysis.
Caroli, Jimmy; Sorrentino, Giovanni; Forcato, Mattia; Del Sal, Giannino; Bicciato, Silvio
2018-05-25
Several major screenings of genetic profiling and drug testing in cancer cell lines proved that the integration of genomic portraits and compound activities is effective in discovering new genetic markers of drug sensitivity and clinically relevant anticancer compounds. Although most genetic and drug response data are publicly available, the availability of user-friendly tools for their integrative analysis remains limited, thus hampering an effective exploitation of this information. Here, we present GDA, a web-based tool for Genomics and Drugs integrated Analysis that combines drug response data for >50 800 compounds with mutations and gene expression profiles across 73 cancer cell lines. Genomic and pharmacological data are integrated through a modular architecture that allows users to identify compounds active towards cancer cell lines bearing a specific genomic background and, conversely, the mutational or transcriptional status of cells responding or not responding to a specific compound. Results are presented through intuitive graphical representations and supplemented with information obtained from public repositories. As both personalized targeted therapies and drug repurposing are gaining increasing attention, GDA represents a resource to formulate hypotheses on the interplay between genomic traits and drug response in cancer. GDA is freely available at http://gda.unimore.it/.
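The core query this kind of tool supports, which compounds are preferentially active in cell lines with a given genomic background, can be sketched as a group-and-compare over a response table. A toy Python example with made-up cell lines, mutation status, and IC50-like values (GDA's real analysis is statistical and operates on much larger data):

```python
# Toy genomics/drug integration: compare drug sensitivity between cell lines
# that carry a mutation and those that do not. All cell lines, mutation
# flags, and IC50-like values are made up for illustration.

cell_lines = {
    "lineA": {"BRAF_mut": True,  "drugX_ic50": 0.1},
    "lineB": {"BRAF_mut": True,  "drugX_ic50": 0.2},
    "lineC": {"BRAF_mut": False, "drugX_ic50": 5.0},
    "lineD": {"BRAF_mut": False, "drugX_ic50": 4.0},
}

def mean_ic50(lines, mutant):
    vals = [v["drugX_ic50"] for v in lines.values()
            if v["BRAF_mut"] is mutant]
    return sum(vals) / len(vals)

mut_mean = mean_ic50(cell_lines, mutant=True)   # lower IC50 = more sensitive
wt_mean = mean_ic50(cell_lines, mutant=False)
drug_x_selective = mut_mean < wt_mean           # active in BRAF-mutant lines
```

The converse query mentioned in the abstract, finding the mutational status of responders versus non-responders, is the same table read in the other direction: group lines by response to the compound and compare mutation frequencies.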
Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data
NASA Astrophysics Data System (ADS)
Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii
2013-04-01
Over the last decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood Risk Directive). In order to enable operational flood monitoring and assessment of flood risk, it is required to provide an infrastructure with standardized interfaces and services. Grid and Sensor Web can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, integration of the Grid and Sensor Web approaches is proposed [1]. The Grid represents a distributed environment that integrates heterogeneous computing and storage resources administrated by multiple organizations. The Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, triggering events by observed or predicted conditions, remote data access and processing capabilities to generate and deliver data products. The Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Different practical issues regarding integration of the Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit.
The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the SPS service deployed for NASA's EO-1 satellite, it is possible to automatically task the sensor, with re-imaging in less than 8 h. Therefore, with the computational and storage services provided by the Grid and cloud infrastructure, it was possible to generate flood maps within 24-48 h after the trigger was raised. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.
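The trigger chain described, an alert or model estimate crossing a threshold leading to an automatic satellite tasking request, reduces to a simple rule evaluation. A schematic Python sketch (site names, the threshold, and the request format are hypothetical; the real pipeline issues OGC Sensor Planning Service requests to EO-1):

```python
# Schematic flood-alert trigger: when a rainfall/flood-potential estimate
# exceeds its threshold, emit a tasking request for a satellite re-image.
# Sites, threshold, and request format are hypothetical stand-ins for the
# real OGC SPS requests sent to EO-1.

THRESHOLD_MM = 100.0  # hypothetical 24 h rainfall threshold

observations = [
    {"site": "Caprivi", "rain_mm_24h": 140.0},
    {"site": "Kunene",  "rain_mm_24h": 35.0},
    {"site": "Zambezi", "rain_mm_24h": 180.0},
]

def tasking_requests(obs, threshold=THRESHOLD_MM):
    """One tasking request per site whose estimate exceeds the threshold."""
    return [{"target": o["site"], "action": "re-image", "priority": "high"}
            for o in obs if o["rain_mm_24h"] > threshold]

tasked_sites = [r["target"] for r in tasking_requests(observations)]
```

In the deployed system the rule inputs are the TRMM-derived estimates and GDACS/CREST alerts, and the downstream effect of each emitted request is the sub-8-hour EO-1 re-image described above.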
Roudsari, AV; Gordon, C; Gray, JA Muir
2001-01-01
Background In 1998, the U.K. National Health Service Information for Health Strategy proposed the implementation of a National electronic Library for Health to provide clinicians, healthcare managers and planners, patients and the public with easy, round-the-clock access to high-quality, up-to-date electronic information on health and healthcare. The Virtual Branch Libraries are among the most important components of the National electronic Library for Health. They aim at creating online knowledge-based communities, each concerned with specific clinical and other health-related topics. Objectives This study concerns the envisaged Dermatology Virtual Branch Libraries of the National electronic Library for Health. It aims at selecting suitable dermatology Web resources for inclusion in the forthcoming Virtual Branch Libraries, after establishing preliminary quality benchmarking rules for this task. Psoriasis, being a common dermatological condition, was chosen as a starting point. Methods Because quality is a principal concern of the National electronic Library for Health, the study includes a review of the major quality benchmarking systems available today for assessing health-related Web sites. The methodology of developing a quality benchmarking system has also been reviewed. Aided by metasearch Web tools, candidate resources were hand-selected in light of the reviewed benchmarking systems and specific criteria set by the authors. Results Over 90 professional and patient-oriented Web resources on psoriasis and dermatology in general are suggested for inclusion in the forthcoming Dermatology Virtual Branch Libraries. The idea of an all-in-one knowledge-hallmarking instrument for the National electronic Library for Health is also proposed, based on the reviewed quality benchmarking systems.
Conclusions Skilled, methodical, organized human reviewing, selection and filtering based on well-defined quality appraisal criteria seems likely to be the key ingredient in the envisaged National electronic Library for Health service. Furthermore, by promoting the application of agreed quality guidelines and codes of ethics by all health information providers and not just within the National electronic Library for Health, the overall quality of the Web will improve with time and the Web will ultimately become a reliable and integral part of the care space. PMID:11720947
POLYVIEW-MM: web-based platform for animation and analysis of molecular simulations
Porollo, Aleksey; Meller, Jaroslaw
2010-01-01
Molecular simulations offer important mechanistic and functional clues in studies of proteins and other macromolecules. However, interpreting the results of such simulations increasingly requires tools that can combine information from multiple structural databases and other web resources, and provide highly integrated and versatile analysis tools. Here, we present a new web server that integrates high-quality animation of molecular motion (MM) with structural and functional analysis of macromolecules. The new tool, dubbed POLYVIEW-MM, enables animation of trajectories generated by molecular dynamics and related simulation techniques, as well as visualization of alternative conformers, e.g. obtained as a result of protein structure prediction methods or small molecule docking. To facilitate structural analysis, POLYVIEW-MM combines interactive view and analysis of conformational changes using Jmol and its tailored extensions, publication-quality animation using PyMol, and customizable 2D summary plots that provide an overview of MM, e.g. in terms of changes in secondary structure states and relative solvent accessibility of individual residues in proteins. Furthermore, POLYVIEW-MM integrates visualization with various structural annotations, including automated mapping of known interaction sites from structural homologs, mapping of cavities and ligand binding sites, transmembrane regions and protein domains. URL: http://polyview.cchmc.org/conform.html. PMID:20504857
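The 2D summary plots mentioned above condense per-residue changes across a trajectory. A minimal stand-in for that idea, assuming per-frame secondary-structure strings (H = helix, E = strand, C = coil) as input, counts how often each residue changes state between consecutive frames; the input data are invented:

```python
def ss_change_profile(frames):
    """Per-residue count of secondary-structure changes between
    consecutive frames. `frames` is a list of equal-length strings,
    one character per residue."""
    n = len(frames[0])
    changes = [0] * n
    for prev, cur in zip(frames, frames[1:]):
        for i, (a, b) in enumerate(zip(prev, cur)):
            if a != b:
                changes[i] += 1
    return changes

# Four hypothetical frames of a five-residue peptide.
profile = ss_change_profile(["HHHCC", "HHCCC", "HHCCE", "HHCCE"])
print(profile)
```

A real tool would derive the per-frame assignments from the trajectory (e.g. with DSSP) and render the profile graphically; the counting logic is the same.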
Ontology for Transforming Geo-Spatial Data for Discovery and Integration of Scientific Data
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.
2013-12-01
Discovery of and access to geo-spatial scientific data across heterogeneous repositories and multi-discipline datasets can present challenges for scientists. We propose to build a workflow for transforming geo-spatial datasets into a semantic environment by using relationships to describe resources with the OWL Web Ontology Language, RDF, and a proposed geo-spatial vocabulary. We will present methods for transforming traditional scientific datasets, the use of a semantic repository, and querying using SPARQL to integrate and access datasets. This unique repository will enable discovery of scientific data by geospatial bounds or other criteria.
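The record describes querying a semantic repository with SPARQL. As a self-contained sketch of the underlying mechanism (triple patterns with variables matched against RDF-style statements), the following implements a single-pattern match in plain Python; the dataset identifiers and predicates are made up for illustration and a real deployment would use a SPARQL endpoint:

```python
def match(triples, pattern):
    """Match one (subject, predicate, object) pattern against a list of
    triples; strings beginning with '?' are variables. Returns a list of
    variable bindings, one per matching triple."""
    out = []
    for t in triples:
        binding = {}
        for term, value in zip(pattern, t):
            if term.startswith("?"):
                if binding.get(term, value) != value:
                    break          # same variable bound to two values
                binding[term] = value
            elif term != value:
                break              # constant term does not match
        else:
            out.append(binding)
    return out

# Hypothetical geo-spatial metadata triples.
triples = [
    ("ds:ceres1", "geo:boundedBy", "box:-90,-180,90,180"),
    ("ds:ceres1", "dc:subject",    "clouds"),
    ("ds:modis3", "dc:subject",    "aerosols"),
]
print(match(triples, ("?d", "dc:subject", "clouds")))
```

SPARQL adds joins over multiple patterns, filters, and graph naming on top of exactly this kind of pattern matching.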
E-Government Goes Semantic Web: How Administrations Can Transform Their Information Processes
NASA Astrophysics Data System (ADS)
Klischewski, Ralf; Ukena, Stefan
E-government applications and services are built mainly on access to, retrieval of, integration of, and delivery of relevant information to citizens, businesses, and administrative users. In order to perform such information processing automatically through the Semantic Web, machine-readable enhancements of web resources are needed, based on an understanding of the content and context of the information in focus. While these enhancements are far from trivial to produce, administrations in their role of information and service providers so far find little guidance on how to migrate their web resources and enable a new quality of information processing; even research is still seeking best practices. Therefore, the underlying research question of this chapter is: what are the appropriate approaches which guide administrations in transforming their information processes toward the Semantic Web? In search of answers, this chapter analyzes the challenges and possible solutions from the perspective of administrations: (a) the information processing of e-government is reconstructed in terms of how semantic technologies must be employed to support information provision and consumption through the Semantic Web; (b) the required contribution to the transformation is compared to the capabilities and expectations of administrations; and (c) available experience with the steps of transformation is reviewed and discussed as to what extent it can be expected to successfully drive e-government to the Semantic Web. This research builds on a study of the case of Schleswig-Holstein, Germany, where semantic technologies have been used within the frame of the Access-eGov project in order to semantically enhance electronic service interfaces with the aim of providing a new way of accessing and combining e-government services.
Global opportunities on 239 general surgery residency Web sites.
Wackerbarth, Joel J; Campbell, Timothy D; Wren, Sherry; Price, Raymond R; Maier, Ronald V; Numann, Patricia; Kushner, Adam L
2015-09-01
Many general surgical residency programs lack a formal international component. We hypothesized that most surgery programs either do not have international training or do not present information about electives or programs to prospective applicants in an easily accessible manner via Web-based resources. Individual general surgery program Web sites and the American College of Surgeons residency tool were used to identify 239 residencies. The homepages were examined for specific mention of international or global health programs, and ease of access was also considered. Global surgery specific pages or centers were noted. Programs were assessed for length of rotation, presence of a research component, and mention of benefits to residents and their respective institutions. Of 239 programs, 24 (10%) mentioned international experiences on their home page and 42 (18%) contained information about global surgery. Of those with information available, 69% were easily accessible. Academic programs were more likely than independent programs to have information about international opportunities on their home page (13.7% versus 4.0%, P = 0.006) and more likely to have a dedicated program or pathway Web site (18.8% versus 2.0%, P < 0.0001). Half of the residencies with global surgery information did not list the length of rotation. Research was mentioned by only 29% of the Web sites. Benefits to high-income country residents were discussed more than benefits to low- and middle-income country residents (57% versus 17%). General surgery residency programs do not effectively communicate international opportunities to prospective residents through Web-based resources and should seriously consider integrating international options into their curriculum and better presenting them on department Web sites. Copyright © 2015 Elsevier Inc. All rights reserved.
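The comparison of academic versus independent programs (13.7% versus 4.0%, P = 0.006) is a standard two-proportion test. The sketch below shows that calculation with a pooled two-proportion z-test, using hypothetical counts chosen only to reproduce the reported percentages, since the abstract does not give the group sizes:

```python
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate.
    Returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return z, erfc(abs(z) / sqrt(2))               # normal-tail p-value

# Hypothetical counts: 19/139 academic (13.7%) vs 4/100 independent (4.0%).
z, p = two_proportion_z(19, 139, 4, 100)
print(round(z, 2), round(p, 4))
```

With plausible group sizes the difference is indeed significant at the 5% level, consistent with the reported P = 0.006 order of magnitude; the exact published value depends on the true counts and test used.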
Workflow and web application for annotating NCBI BioProject transcriptome data.
Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo
2017-01-01
The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as the Transcriptome Shotgun Assembly Sequence Database (TSA) and the Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data, such as sequencing reads and BLAST alignments, which are available through the web application. Both are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.
Improving information retrieval with multiple health terminologies in a quality-controlled gateway.
Soualmia, Lina F; Sakji, Saoussen; Letord, Catherine; Rollin, Laetitia; Massari, Philippe; Darmoni, Stéfan J
2013-01-01
The Catalog and Index of French-language Health Internet resources (CISMeF) is a quality-controlled health gateway, primarily for Web resources in French (n=89,751). Recently, we achieved a major improvement in the structure of the catalogue by setting up multiple terminologies, based on twelve health terminologies available in French, to overcome the potential weaknesses of the MeSH thesaurus, which has been the main and pivotal terminology used for indexing and retrieval since 1995. The main aim of this study was to estimate the added value of exploiting several terminologies and their semantic relationships to improve Web resource indexing and retrieval in CISMeF, in order to provide additional health resources that meet users' expectations. Twelve terminologies were integrated into the CISMeF information system to set up multiple-terminology indexing and retrieval. The same set of thirty queries was run: (i) by exploiting the hierarchical structure of the MeSH, and (ii) by exploiting the additional twelve terminologies and their semantic links. The two search modes were evaluated and compared. The overall coverage of the multiple-terminology search mode was improved by comparison to the coverage of using the MeSH alone (16,283 vs. 14,159; +15%). Of the additional results, an estimated 56.6% were relevant, 24.7% intermediate and 18.7% irrelevant. The multiple-terminology approach improved information retrieval. These results suggest that integrating additional health terminologies was able to improve recall. Since performing the study, 21 other terminologies have been added, which should enable us to make broader studies in multiple-terminology information retrieval.
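The gain from multiple terminologies comes from expanding a query with synonyms drawn from several vocabularies before matching indexed resources. A toy sketch of that expansion, with invented terminology contents (real MeSH or SNOMED mappings are far richer, and CISMeF's own mappings differ):

```python
# Toy terminology maps: preferred term -> synonyms in that vocabulary.
TERMINOLOGIES = {
    "MeSH":   {"myocardial infarction": {"heart attack"}},
    "SNOMED": {"myocardial infarction": {"MI", "cardiac infarction"}},
}

def expand_query(term):
    """Union of the term and its synonyms across all terminologies."""
    expanded = {term}
    for vocab in TERMINOLOGIES.values():
        expanded |= vocab.get(term, set())
    return expanded

def search(resources, term):
    """Return resources indexed under any expanded form of the term."""
    terms = expand_query(term)
    return [name for name, keywords in resources.items() if keywords & terms]

resources = {
    "doc1": {"heart attack"},          # found only via the expansion
    "doc2": {"myocardial infarction"},
    "doc3": {"diabetes"},
}
print(sorted(search(resources, "myocardial infarction")))
```

The +15% coverage reported in the study corresponds to documents like "doc1" here: indexed under a synonym that the pivot terminology alone would not reach.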
Scientific Workflows and the Sensor Web for Virtual Environmental Observatories
NASA Astrophysics Data System (ADS)
Simonis, I.; Vahed, A.
2008-12-01
Virtual observatories have matured beyond their original domain and are becoming common practice for earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community, where virtual observatories provide universal access to the available astronomical data archives of space and ground-based observatories. As those virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to becoming one of the major virtual observatories outside of the astronomical research community. Like the original observatory, which consists of a number of telescopes, each observing a specific part of the wave spectrum, together with a collection of astronomical instruments, the Sensor Web provides a multi-eye perspective on the current, past, and future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent and consolidated sensor data collection, fusion and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous and multi-mode observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, or natural disasters, on local, regional, and global scales. The Sensor Web concept has been well established through ongoing global research and deployment of Sensor Web middleware and standards, and it represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS).
The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet through standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically follows in exploration, discovery and, ultimately, transformation of raw data into publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Such platforms support the entire process from capturing data, through sharing and integrating, to requesting additional observations. Multiple sites and organizations will participate on single platforms, and scientists from different countries and organizations will interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction will free scientists from dealing with underlying data, processing, or storage peculiarities. The vision is automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn would identify potentially related resources, schedule processing tasks, and assemble all parts into workflows that may satisfy the query.
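A scientific workflow of the kind described is, at its core, a dependency-ordered execution of processing steps. A minimal sketch using Kahn's topological-sort algorithm, with hypothetical step names standing in for Sensor Web operations such as an SOS data request:

```python
from collections import deque

def run_workflow(tasks, deps):
    """Execute the callables in `tasks` respecting `deps`
    (task name -> set of prerequisite task names); returns the
    execution order. Raises on cyclic dependencies."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        tasks[t]()                       # run this workflow step
        for u in tasks:                  # release dependents of t
            if t in deps.get(u, ()):
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    if len(order) != len(tasks):
        raise ValueError("cycle in workflow")
    return order

log = []
tasks = {
    "fetch_obs": lambda: log.append("SOS request"),
    "process":   lambda: log.append("transform raw data"),
    "publish":   lambda: log.append("publish product"),
}
deps = {"process": {"fetch_obs"}, "publish": {"process"}}
order = run_workflow(tasks, deps)
print(order)
```

Real workflow engines add provenance recording, distributed scheduling, and failure recovery on top of this ordering core.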
InteGO2: A web tool for measuring and visualizing gene semantic similarities using Gene Ontology
Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; ...
2016-08-31
Here, the Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, it is difficult to interpret results. As a result, we present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. In conclusion, InteGO2 is an easy-to-use HTML5 based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface.
InteGO2: a web tool for measuring and visualizing gene semantic similarities using Gene Ontology.
Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; Juan, Liran; Jiang, Qinghua; Wang, Yadong; Chen, Jin
2016-08-31
The Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, it is difficult to interpret results. We present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. InteGO2 is an easy-to-use HTML5 based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface. InteGO2 can be accessed via http://mlg.hit.edu.cn:8089/ .
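InteGO2's integrative measurement combines seven GO-based similarity measures, but its exact formula is not given in the abstract. The sketch below therefore uses a simple average of min-max-normalized scores purely to illustrate the idea of fusing similarity measures that live on different scales; the gene pairs and score values are invented:

```python
def integrate(scores):
    """Combine several similarity measures over the same gene pairs by
    averaging min-max-normalized scores. This is an illustrative
    stand-in for InteGO2's integrative measurement, not its formula."""
    pairs = scores[0].keys()
    combined = {p: 0.0 for p in pairs}
    for measure in scores:
        lo, hi = min(measure.values()), max(measure.values())
        for p in pairs:
            combined[p] += (measure[p] - lo) / (hi - lo) if hi > lo else 0.0
    return {p: v / len(scores) for p, v in combined.items()}

# Two hypothetical measures on three gene pairs: an information-content
# style score (unbounded) and a graph-based score in [0, 1].
resnik = {("BRCA1", "BRCA2"): 5.2, ("BRCA1", "TP53"): 3.1, ("TP53", "MDM2"): 4.0}
wang   = {("BRCA1", "BRCA2"): 0.9, ("BRCA1", "TP53"): 0.5, ("TP53", "MDM2"): 0.7}
combined = integrate([resnik, wang])
print(combined)
```

Normalization before fusion is the essential step: without it, the unbounded measure would dominate the bounded one.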
IAServ: an intelligent home care web services platform in a cloud for aging-in-place.
Su, Chuan-Jun; Chiang, Chang-Yu
2013-11-12
As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of healthcare service delivery through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, one that synergizes service providers with patients' needs, needs to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform), which provides personalized healthcare services ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet.
IAServ: An Intelligent Home Care Web Services Platform in a Cloud for Aging-in-Place
Su, Chuan-Jun; Chiang, Chang-Yu
2013-01-01
As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of healthcare service delivery through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, one that synergizes service providers with patients' needs, needs to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform), which provides personalized healthcare services ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet. PMID:24225647
A Prototype Flood Early Warning SensorWeb System for Namibia
NASA Astrophysics Data System (ADS)
Sohlberg, R. A.; Mandl, D.; Frye, S. W.; Cappelaere, P. G.; Szarzynski, J.; Policelli, F.; van Langenhove, G.
2010-12-01
During the past two years there have been extensive floods in Namibia, Africa, which have affected up to a quarter of the population. Through a collaboration between a NASA group funded by the Earth Science Technology Office (ESTO) that has been performing various SensorWeb prototyping activities for disasters, the Department of Hydrology in Namibia, and the United Nations Space-based Information for Disaster and Emergency Response (UN-SPIDER), experiments were conducted on how to integrate various satellite resources into a SensorWeb architecture, together with in situ sensors such as river gauges and rain gauges, to form a flood early warning system. The SensorWeb includes a global flood model and a higher-resolution, basin-specific flood model. Furthermore, flood extent and status are monitored by optical and radar satellites and integrated with some automation. We have taken a practical approach, finding out how to create a working system by selectively using the components that provide good results. The vision for the future is to combine this with the countrywide dwelling unit database to create risk maps that provide specific warnings to houses within high-risk areas based on near-term predictions. This presentation will show some of the highlights of the effort thus far, plus our future plans.
A web-portal for interactive data exploration, visualization, and hypothesis testing
Bartsch, Hauke; Thompson, Wesley K.; Jernigan, Terry L.; Dale, Anders M.
2014-01-01
Clinical research studies generate data that need to be shared and statistically analyzed by their participating institutions. The distributed nature of research and the different domains involved present major challenges to data sharing, exploration, and visualization. The Data Portal infrastructure was developed to support ongoing research in the areas of neurocognition, imaging, and genetics. Researchers benefit from the integration of data sources across domains, the explicit representation of knowledge from domain experts, and user interfaces providing convenient access to project specific data resources and algorithms. The system provides an interactive approach to statistical analysis, data mining, and hypothesis testing over the lifetime of a study and fulfills a mandate of public sharing by integrating data sharing into a system built for active data exploration. The web-based platform removes barriers for research and supports the ongoing exploration of data. PMID:24723882
phiGENOME: an integrative navigation throughout bacteriophage genomes.
Stano, Matej; Klucar, Lubos
2011-11-01
phiGENOME is a web-based genome browser generating dynamic and interactive graphical representations of phage genomes stored in phiSITE, a database of gene regulation in bacteriophages. phiGENOME is an integral part of the phiSITE web portal (http://www.phisite.org/phigenome) and was optimised for visualisation of phage genomes with an emphasis on gene regulatory elements. phiGENOME consists of three components: (i) a genome map viewer built using Adobe Flash technology, providing dynamic and interactive graphical display of phage genomes; (ii) a sequence browser based on precisely formatted HTML tags, providing detailed exploration of genome features at the sequence level; and (iii) a regulation illustrator, based on Scalable Vector Graphics (SVG) and designed for graphical representation of gene regulation. By bringing together 542 complete genome sequences accompanied by rich annotations and references, phiGENOME constitutes a unique information resource in the field of phage genomics. Copyright © 2011 Elsevier Inc. All rights reserved.
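The regulation illustrator described above emits SVG. A much-reduced sketch of generating an SVG genome map with the standard library follows; the gene names and coordinates are illustrative only, and a real browser would add strands, arrows, and regulatory-element glyphs:

```python
import xml.etree.ElementTree as ET

def genome_map_svg(genome_length, genes, width=600, height=80):
    """Render genes as scaled rectangles along a baseline, in the
    spirit of a (much simplified) phage genome map.
    `genes` is a list of (name, start, end) tuples."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(width), height=str(height))
    scale = width / genome_length
    # genome baseline
    ET.SubElement(svg, "line", x1="0", y1="40", x2=str(width), y2="40",
                  stroke="black")
    for name, start, end in genes:
        ET.SubElement(svg, "rect", x=str(round(start * scale, 1)), y="30",
                      width=str(round((end - start) * scale, 1)),
                      height="20", fill="steelblue")
        label = ET.SubElement(svg, "text",
                              x=str(round(start * scale, 1)), y="25")
        label.text = name      # gene label above its box
    return ET.tostring(svg, encoding="unicode")

# Hypothetical two-gene map on a ~48.5 kb phage genome.
svg = genome_map_svg(48502, [("cI", 37230, 37940), ("cro", 38041, 38241)])
print(svg[:60])
```

Because SVG is plain XML, the same coordinate-scaling logic serves both static export and interactive in-browser rendering.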
NASA Astrophysics Data System (ADS)
Narock, T.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Finin, T.; Hitzler, P.; Krisnadhi, A.; Raymond, L. M.; Shepherd, A.; Wiebe, P. H.
2014-12-01
A wide spectrum of maturing methods and tools, collectively characterized as the Semantic Web, is helping to vastly improve the dissemination of scientific research. Creating semantic integration requires input from both domain and cyberinfrastructure scientists. OceanLink, an NSF EarthCube Building Block, is demonstrating semantic technologies through the integration of geoscience data repositories, library holdings, conference abstracts, and funded research awards. Meeting project objectives involves applying semantic technologies to support data representation, discovery, sharing and integration. Our semantic cyberinfrastructure components include ontology design patterns, Linked Data collections, semantic provenance, and associated services to enhance data and knowledge discovery, interoperation, and integration. We discuss how these components are integrated, the continued automated and semi-automated creation of semantic metadata, and techniques we have developed to integrate ontologies, link resources, and preserve provenance and attribution.
NASA Astrophysics Data System (ADS)
Sivolella, A.; Ferreira, F.; Maidantchik, C.; Solans, C.; Solodkov, A.; Burghgrave, B.; Smirnov, Y.
2015-12-01
The ATLAS Tile Calorimeter collaboration assesses the quality of calibration data in order to ensure proper operation of the detector. A number of tasks are performed by executing several tools and accessing web systems that were independently developed to meet distinct collaboration requirements and are not necessarily connected with each other. Thus, to meet the collaboration's needs, several programs are usually implemented without a global perspective of the detector, each requiring the same basic software features. In addition, functionalities may overlap in their objectives and frequently replicate resource-retrieval mechanisms. Tile-in-ONE is a platform designed and implemented to assemble the various web systems used by the calorimeter community within a single framework and a standard technology. It provides an infrastructure to support code implementation, avoiding duplication of work while offering an overall view of the detector status. Database connectors smooth the process of information access, since developers do not need to be aware of where records are located or how to extract them. Within the environment, a dashboard corresponds to a particular aspect of Tile operation and brings together plug-ins, i.e. software components that add specific features to an existing application. A server contains the platform core, which represents the basic environment to handle configuration, manage user settings, and load plug-ins at runtime. A web middleware assists users in developing their own plug-ins, testing them, and integrating them into the platform as a whole. Backends are employed so that any type of application can be interpreted and displayed in a uniform way. This paper describes the Tile-in-ONE web platform.
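The plug-in mechanism described above (components registered with the platform core and loaded into dashboards at runtime) can be sketched with a small registry; all names here are hypothetical and do not reflect Tile-in-ONE's actual API:

```python
# Minimal plug-in registry of the kind a dashboard platform could use
# to resolve components at runtime (illustrative only).
PLUGINS = {}

def plugin(name):
    """Decorator registering a plug-in class under a dashboard name."""
    def register(cls):
        PLUGINS[name] = cls
        return cls
    return register

@plugin("calibration")
class CalibrationView:
    def render(self):
        return "calibration status: OK"

@plugin("data-quality")
class DataQualityView:
    def render(self):
        return "DQ histograms loaded"

def load_dashboard(name):
    """Resolve and render a plug-in at runtime, as the platform core
    would when a user opens a dashboard."""
    cls = PLUGINS[name]
    return cls().render()

print(load_dashboard("calibration"))
```

The registry decouples the core from the plug-ins: new detector views can be added without touching the dispatch code, which is the duplication-avoidance benefit the abstract emphasizes.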
Surfing the World Wide Web to Education Hot-Spots.
ERIC Educational Resources Information Center
Dyrli, Odvard Egil
1995-01-01
Provides a brief explanation of Web browsers and their use, as well as technical information for those considering access to the WWW (World Wide Web). Curriculum resources and addresses to useful Web sites are included. Sidebars show sample searches using Yahoo and Lycos search engines, and a list of recommended Web resources. (JKP)
NASA Astrophysics Data System (ADS)
Bos, Nathan Daniel
This dissertation investigates the emerging affordance of the World Wide Web as a place for high school students to become authors and publishers of information. Two empirical studies lay groundwork for student publishing by examining learning issues related to audience adaptation in writing, motivation and engagement with hypermedia, design, problem-solving, and critical evaluation. Two models of student publishing on the World Wide Web were investigated over the course of two 11th-grade project-based science curricula. In the first curricular model, students worked in pairs to design informative hypermedia projects about infectious diseases that were published on the Web. Four case studies were written, drawing on both product- and process-related data sources. Four theoretically important findings are illustrated through these cases: (1) multimedia, especially graphics, seemed to catalyze some students' design processes by affecting the sequence of their design process and by providing a connection between the science content and their personal interest areas; (2) hypermedia design can demand high levels of analysis and synthesis of science content; (3) students can learn to think about science content representation through engagement with challenging design tasks; and (4) students' consideration of an outside audience can be facilitated by teacher-given design principles. The second Web-publishing model examines how students critically evaluate scientific resources on the Web, and how students can contribute to the Web's organization and usability by publishing critical reviews. Students critically evaluated Web resources using a four-part scheme: summarization of content, evaluation of credibility, evaluation of organizational structure, and evaluation of appearance.
Content analyses comparing students' reviews and reviewed Web documents showed that students were proficient at summarizing the content of Web documents, identifying their publishing source, and evaluating their organizational features; however, students struggled to identify scientific evidence, bias, or sophisticated use of media in Web pages. Shortcomings were shown to be partly due to deficiencies in the Web pages themselves and partly due to students' inexperience with the medium or lack of critical evaluation skills. Future directions are discussed, including how students' reviews have been integrated into a current digital library development project.
Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia
2016-09-09
Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpouring of omics datasets that capture these changes. However, separate analyses of these various data provide only a fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent need for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server (http://mergeomics.idre.ucla.edu/). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA) can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed in real time to yield biologically interpretable results, which can be viewed online and downloaded for later use. 
Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators, biological pathways, and gene networks.
Iyappan, Anandhi; Kawalia, Shweta Bagewadi; Raschka, Tamara; Hofmann-Apitius, Martin; Senger, Philipp
2016-07-08
Neurodegenerative diseases are incurable and debilitating conditions with huge social and economic impact, where much is still to be learnt about the underlying molecular events. Mechanistic disease models could offer a knowledge framework to help decipher the complex interactions that occur at molecular and cellular levels. This motivates the need for an approach that integrates highly curated and heterogeneous data into a disease model spanning different regulatory data layers. Although several disease models exist, they often do not consider the quality of the underlying data. Moreover, even with the current advancements in semantic web technology, we still do not have a cure for complex diseases like Alzheimer's disease. One key reason for this could be the increasing gap between generated data and derived knowledge. In this paper, we describe an approach, called NeuroRDF, to develop an integrative framework for modeling curated knowledge in the area of complex neurodegenerative diseases. The core of this strategy lies in the usage of well-curated and context-specific data for integration into one single semantic web-based framework, RDF. This increases the probability of the derived knowledge being novel and reliable in a specific disease context. This infrastructure integrates highly curated data from databases (BIND, IntAct, etc.), literature (PubMed), and gene expression resources (such as GEO and ArrayExpress). We illustrate the effectiveness of our approach by asking real-world biomedical questions that link these resources to prioritize plausible biomarker candidates. Among the 13 prioritized candidate genes, we identified MIF to be a potential emerging candidate due to its role as a pro-inflammatory cytokine. We additionally report on the effort and challenges faced during generation of such an indication-specific knowledge base comprising curated and quality-controlled data. 
Although many alternative approaches have been proposed and practiced for modeling diseases, semantic web technology is a flexible and well-established solution for harmonized aggregation. The benefit of this work, using high-quality and context-specific data, becomes apparent in surfacing previously unattended biomarker candidates around a well-known mechanism, which can be further leveraged for experimental investigation.
TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling
NASA Astrophysics Data System (ADS)
Nelson, J.; Jones, N.; Ames, D. P.
2015-12-01
Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive, they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leveraging these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, which have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open-source resource- and job-management software HTCondor to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.
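The batch-scheduling step described above amounts to generating an HTCondor submit description for a set of model runs. The following is a hedged sketch of that idea in plain Python; it does not claim to reproduce CondorPy's actual API (the helper name and file-naming scheme are illustrative), but the submit-description keywords (`executable`, `arguments`, `queue`) are standard HTCondor.

```python
# Sketch: build an HTCondor submit description queuing n_jobs model runs,
# similar in spirit to what CondorPy automates. Helper name is invented.

def make_submit_description(executable, arguments, n_jobs):
    """Return an HTCondor submit-description string queuing n_jobs runs."""
    lines = [
        f"executable = {executable}",
        f"arguments = {arguments}",
        "log = job_$(Cluster).log",        # one log per cluster of jobs
        "output = out_$(Process).txt",     # per-job stdout
        "error = err_$(Process).txt",      # per-job stderr
        "should_transfer_files = YES",
        f"queue {n_jobs}",
    ]
    return "\n".join(lines)

# Ten runs of a hypothetical model script, one scenario per job index.
desc = make_submit_description("run_model.sh", "--scenario $(Process)", 10)
print(desc)
```

A tool like CondorPy layers job monitoring and input/output data management on top of this kind of description, so the modeler never edits submit files by hand.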
WebGIS based on semantic grid model and web services
NASA Astrophysics Data System (ADS)
Zhang, WangFei; Yue, CaiRong; Gao, JianGuo
2009-10-01
As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web on one side and by the characteristics of GIS on the other, traditional WebGIS faces several prominent problems: it cannot achieve interoperability among heterogeneous spatial databases, and it cannot provide cross-platform data access. The appearance of Web Services and Grid technology has brought great change to the WebGIS field. A Web Service provides an interface that gives sites the ability to share data and communicate with one another. The goal of Grid technology is to turn the Internet into one large supercomputer that enables efficient sharing of computing resources, storage resources, data resources, information resources, knowledge resources, and expert resources. For WebGIS, however, merely connecting data and information physically is far from enough. Because experts in different fields understand the world differently and follow different professional regulations, policies, and habits, they reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity arises: the same concept can differ greatly across fields. A WebGIS that does not account for this semantic heterogeneity will answer users' questions wrongly, or not at all. To solve this problem, this paper puts forward and tests an effective method of combining the semantic grid and Web Services technology to develop WebGIS. 
In this paper, we studied methods for constructing ontologies and for combining Grid technology with Web Services, and, with a detailed analysis of the computing characteristics and application model of distributed data, we designed an ontology-driven WebGIS query system based on Grid technology and Web Services.
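The semantic-heterogeneity problem the abstract describes is, at its smallest, a term-mediation problem: different communities use different words for the same geographic concept, and an ontology maps them onto one shared concept before the query is dispatched. A toy sketch, with an entirely invented vocabulary:

```python
# Toy ontology-driven query mediation: resolve field-specific terms to a
# shared concept so one query spans heterogeneous sources. The vocabulary
# below is invented for illustration only.

ONTOLOGY = {
    "watercourse": {"river", "stream", "creek"},   # hydrology vs. cartography terms
    "settlement": {"city", "town", "village"},
}

def canonical_concept(term):
    """Resolve a user's term to the shared ontology concept, if any."""
    for concept, synonyms in ONTOLOGY.items():
        if term == concept or term in synonyms:
            return concept
    return None

print(canonical_concept("creek"))  # → watercourse
```

A real WebGIS mediator would do this against a full OWL ontology rather than a dictionary, but the query-rewriting step is the same shape.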
Barton, G; Abbott, J; Chiba, N; Huang, DW; Huang, Y; Krznaric, M; Mack-Smith, J; Saleem, A; Sherman, BT; Tiwari, B; Tomlinson, C; Aitman, T; Darlington, J; Game, L; Sternberg, MJE; Butcher, SA
2008-01-01
Background Microarray experimentation requires the application of complex analysis methods as well as the use of non-trivial computer technologies to manage the resultant large data sets. This, together with the proliferation of tools and techniques for microarray data analysis, makes it very challenging for a laboratory scientist to keep up-to-date with the latest developments in this field. Our aim was to develop a distributed e-support system for microarray data analysis and management. Results EMAAS (Extensible MicroArray Analysis System) is a multi-user rich internet application (RIA) providing simple, robust access to up-to-date resources for microarray data storage and analysis, combined with integrated tools to optimise real-time user support and training. The system leverages the power of distributed computing to perform microarray analyses, and provides seamless access to resources located at various remote facilities. The EMAAS framework allows users to import microarray data from several sources to an underlying database, to pre-process, quality assess and analyse the data, to perform functional analyses, and to track data analysis steps, all through a single, easy-to-use web portal. This interface offers distance support to users both in the form of video tutorials and via live screen feeds using the web conferencing tool EVO. A number of analysis packages, including R-Bioconductor and Affymetrix Power Tools, have been integrated on the server side and are available programmatically through the Postgres-PLR library or on grid compute clusters. Integrated distributed resources include the functional annotation tool DAVID, GeneCards and the microarray data repositories GEO, CELSIUS and MiMiR. EMAAS currently supports analysis of Affymetrix 3' and Exon expression arrays, and the system is extensible to cater for other microarray and transcriptomic platforms. 
Conclusion EMAAS enables users to track and perform microarray data management and analysis tasks through a single easy-to-use web application. The system architecture is flexible and scalable to allow new array types, analysis algorithms and tools to be added with relative ease and to cope with large increases in data volume. PMID:19032776
An Ontology-Based Approach to Incorporate User-Generated Geo-Content Into Sdi
NASA Astrophysics Data System (ADS)
Deng, D.-P.; Lemmens, R.
2011-08-01
The Web is changing the way people share and communicate information because of the emergence of various Web technologies that enable people to contribute information on the Web. User-Generated Geo-Content (UGGC) is a potential resource of geographic information. Due to its different production methods, UGGC often cannot fit into formal geographic information models; there is a semantic gap between UGGC and formal geographic information. To integrate UGGC into geographic information, this study conducts an ontology-based process to bridge this semantic gap. The process includes five steps: Collection, Extraction, Formalization, Mapping, and Deployment. In addition, this study applies the process to Twitter messages relevant to the Japan earthquake disaster. Using this process, we extract disaster relief information from Twitter messages and develop a knowledge base for GeoSPARQL queries on disaster relief information.
Graph Mining Meets the Semantic Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkeun; Sukumar, Sreenivas R; Lim, Seung-Hwan
The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible, schema-free data interchange on the Semantic Web. Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through implementation of three popular iterative graph mining algorithms (triangle count, connected component analysis, and PageRank). We implement these algorithms as SPARQL queries, wrapped within Python scripts. We evaluate the performance of our implementation on six real-world data sets and show that graph mining algorithms (those that have a linear-algebra formulation) can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.
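To make the "graph mining as a SPARQL query" idea concrete, here is a sketch of triangle counting in both forms: a SPARQL query (shown as a string; the predicate `ex:linksTo` is invented) and a plain-Python equivalent run on a toy undirected graph. This is an illustration of the technique, not the paper's actual code.

```python
# Triangle counting expressed as a SPARQL pattern: three edges that close
# a cycle, with a FILTER to count each triangle once. ex:linksTo is a
# placeholder predicate.
TRIANGLE_SPARQL = """
SELECT (COUNT(*) AS ?triangles) WHERE {
  ?a ex:linksTo ?b .
  ?b ex:linksTo ?c .
  ?c ex:linksTo ?a .
  FILTER(STR(?a) < STR(?b) && STR(?b) < STR(?c))
}
"""

def count_triangles(edges):
    """Count triangles in an undirected graph given as a set of node pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    count = 0
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if b not in adj[a]:
                continue
            # any common neighbour c > b closes a triangle a-b-c exactly once
            count += sum(1 for c in adj[a] & adj[b] if c > b)
    return count

edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
print(count_triangles(edges))  # → 1 (the a-b-c triangle)
```

In the paper's setting the query runs on the SPARQL endpoint itself, so the graph never has to be exported before it is mined.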
Predictive Systems for Customer Interactions
NASA Astrophysics Data System (ADS)
Vijayaraghavan, Ravi; Albert, Sam; Singh, Vinod Kumar; Kannan, Pallipuram V.
With the coming of age of the web as a mainstream customer service channel, B2C companies have invested substantial resources in enhancing their web presence. Today customers can interact with a company not only through the traditional phone channel but also through chat, email, SMS, or web self-service. Each of these channels is best suited for some services and ill-matched for others. Customer service organizations today struggle with the challenge of delivering seamlessly integrated services through these different channels. This paper will evaluate some of the key challenges in multi-channel customer service. It will address the challenge of creating the right channel mix, i.e., providing the right choice of channels for a given customer/behavior/issue profile. It will also provide strategies for optimizing the performance of a given channel in creating the right customer experience.
de la Calle, Guillermo; García-Remesal, Miguel; Chiesa, Stefano; de la Iglesia, Diana; Maojo, Victor
2009-10-07
The rapid evolution of Internet technologies and the collaborative approaches that dominate the field have stimulated the development of numerous bioinformatics resources. To address this new framework, several initiatives have tried to organize these services and resources. In this paper, we present the BioInformatics Resource Inventory (BIRI), a new approach for automatically discovering and indexing available public bioinformatics resources using information extracted from the scientific literature. The index generated can be automatically updated by adding additional manuscripts describing new resources. We have developed web services and applications to test and validate our approach. It has not been designed to replace current indexes but to extend their capabilities with richer functionalities. We developed a web service to provide a set of high-level query primitives to access the index. The web service can be used by third-party web services or web-based applications. To test the web service, we created a pilot web application to access a preliminary knowledge base of resources. We tested our tool using an initial set of 400 abstracts. Almost 90% of the resources described in the abstracts were correctly classified. More than 500 descriptions of functionalities were extracted. These experiments suggest the feasibility of our approach for automatically discovering and indexing current and future bioinformatics resources. Given the domain-independent characteristics of this tool, it is currently being applied by the authors in other areas, such as medical nanoinformatics. BIRI is available at http://edelman.dia.fi.upm.es/biri/.
An Integrated Korean Biodiversity and Genetic Information Retrieval System
Lim, Jeongheui; Bhak, Jong; Oh, Hee-Mock; Kim, Chang-Bae; Park, Yong-Ha; Paek, Woon Kee
2008-01-01
Background On-line biodiversity information databases are growing quickly and being integrated into general bioinformatics systems due to the advances of fast gene sequencing technologies and the Internet. These can reduce the cost and effort of performing biodiversity surveys and genetic searches, which allows scientists to spend more time researching and less time collecting and maintaining data. This will increase the rate of knowledge build-up and improve conservation efforts. The biodiversity databases in Korea have been scattered among several institutes and local natural history museums with incompatible data types. Therefore, a comprehensive database and a nationwide web portal for biodiversity information are necessary in order to integrate diverse information resources, including molecular and genomic databases. Results The Korean Natural History Research Information System (NARIS) was built and serviced as the central biodiversity information system to collect and integrate the biodiversity data of various institutes and natural history museums in Korea. This database aims to be an integrated resource that contains additional biological information, such as genome sequences and molecular level diversity. Currently, twelve institutes and museums in Korea are integrated via the DiGIR (Distributed Generic Information Retrieval) protocol, with the Darwin Core 2.0 format as its metadata standard for data exchange. Data quality control and statistical analysis functions have been implemented. In particular, integrating molecular and genetic information from the National Center for Biotechnology Information (NCBI) databases with NARIS was recently accomplished. NARIS can also be extended to accommodate other institutes abroad, and the whole system can be exported to establish local biodiversity management servers. Conclusion A Korean data portal, NARIS, has been developed to efficiently manage and utilize biodiversity data, which includes genetic resources. 
NARIS aims to be integral in maximizing bio-resource utilization for conservation, management, research, education, industrial applications, and integration with other bioinformation data resources. It can be found at . PMID:19091024
An Optimized Autonomous Space In-situ Sensorweb (OASIS) for Volcano Monitoring
NASA Astrophysics Data System (ADS)
Song, W.; Shirazi, B.; Lahusen, R.; Chien, S.; Kedar, S.; Webb, F.
2006-12-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, we are developing a prototype real-time Optimized Autonomous Space In-situ Sensorweb. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been in continuous eruption since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real-time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be integrated such that each element is capable of triggering the other. Sensor-web data acquisition and dissemination will be accomplished through the use of SensorML language standards for geospatial information. The three-year project will demonstrate end-to-end system performance with the in-situ test-bed at Mount St. Helens and NASA's EO-1 platform.
biochem4j: Integrated and extensible biochemical knowledge through graph databases.
Swainston, Neil; Batista-Navarro, Riza; Carbonell, Pablo; Dobson, Paul D; Dunstan, Mark; Jervis, Adrian J; Vinaixa, Maria; Williams, Alan R; Ananiadou, Sophia; Faulon, Jean-Loup; Mendes, Pedro; Kell, Douglas B; Scrutton, Nigel S; Breitling, Rainer
2017-01-01
Biologists and biochemists have at their disposal a number of excellent, publicly available data resources such as UniProt, KEGG, and NCBI Taxonomy, which catalogue biological entities. Despite the usefulness of these resources, they remain fundamentally unconnected. While links may appear between entries across these databases, users are typically only able to follow such links by manual browsing or through specialised workflows. Although many of the resources provide web-service interfaces for computational access, performing federated queries across databases remains a non-trivial but essential activity in interdisciplinary systems and synthetic biology programmes. What is needed are integrated repositories to catalogue both biological entities and, crucially, the relationships between them. Such a resource should be extensible, such that newly discovered relationships (for example, those between novel, synthetic enzymes and non-natural products) can be added over time. With the introduction of graph databases, the barrier to the rapid generation, extension and querying of such a resource has been lowered considerably. With a particular focus on metabolic engineering as an illustrative application domain, biochem4j, freely available at http://biochem4j.org, is introduced to provide an integrated, queryable database that warehouses chemical, reaction, enzyme and taxonomic data from a range of reliable resources. The biochem4j framework establishes a starting point for the flexible integration and exploitation of an ever-wider range of biological data sources, from public databases to laboratory-specific experimental datasets, for the benefit of systems biologists, biosystems engineers and the wider community of molecular biologists and biological chemists.
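The advantage of a graph database here is that the relationships the abstract emphasises become first-class query paths. The sketch below builds a Cypher-style query string linking a chemical to the enzymes that catalyse reactions consuming it; the labels and relationship names are illustrative placeholders, not biochem4j's actual schema.

```python
# Hedged sketch: the kind of relationship-following query a graph database
# makes natural. Labels (Chemical, Reaction, Enzyme) and relationship names
# (HAS_SUBSTRATE, CATALYSES) are invented for illustration.

def enzymes_for_compound(chebi_id):
    """Build a Cypher-style query (and parameters) from chemical to enzymes."""
    query = (
        "MATCH (c:Chemical {chebi: $chebi})"
        "<-[:HAS_SUBSTRATE]-(r:Reaction)"
        "<-[:CATALYSES]-(e:Enzyme) "
        "RETURN e.name"
    )
    return query, {"chebi": chebi_id}

query, params = enzymes_for_compound("CHEBI:17234")
print(query)
print(params)
```

In a relational warehouse the same question would take several joins across cross-reference tables; in the graph form it is a single path pattern.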
biochem4j: Integrated and extensible biochemical knowledge through graph databases
Batista-Navarro, Riza; Dunstan, Mark; Jervis, Adrian J.; Vinaixa, Maria; Ananiadou, Sophia; Faulon, Jean-Loup; Kell, Douglas B.
2017-01-01
Biologists and biochemists have at their disposal a number of excellent, publicly available data resources such as UniProt, KEGG, and NCBI Taxonomy, which catalogue biological entities. Despite the usefulness of these resources, they remain fundamentally unconnected. While links may appear between entries across these databases, users are typically only able to follow such links by manual browsing or through specialised workflows. Although many of the resources provide web-service interfaces for computational access, performing federated queries across databases remains a non-trivial but essential activity in interdisciplinary systems and synthetic biology programmes. What is needed are integrated repositories to catalogue both biological entities and, crucially, the relationships between them. Such a resource should be extensible, such that newly discovered relationships (for example, those between novel, synthetic enzymes and non-natural products) can be added over time. With the introduction of graph databases, the barrier to the rapid generation, extension and querying of such a resource has been lowered considerably. With a particular focus on metabolic engineering as an illustrative application domain, biochem4j, freely available at http://biochem4j.org, is introduced to provide an integrated, queryable database that warehouses chemical, reaction, enzyme and taxonomic data from a range of reliable resources. The biochem4j framework establishes a starting point for the flexible integration and exploitation of an ever-wider range of biological data sources, from public databases to laboratory-specific experimental datasets, for the benefit of systems biologists, biosystems engineers and the wider community of molecular biologists and biological chemists. PMID:28708831
Creating a Pilot Educational Psychiatry Website: Opportunities, Barriers, and Next Steps.
Torous, John; O'Connor, Ryan; Franzen, Jamie; Snow, Caitlin; Boland, Robert; Kitts, Robert
2015-11-05
While medical students and residents may be utilizing websites as online learning resources, medical trainees and educators now have the opportunity to create such educational websites and digital tools on their own. However, the process and theory of building educational websites for medical education have not yet been fully explored. Our objective was to understand the opportunities, barriers, and process of creating a novel medical educational website. We created a pilot psychiatric educational website to better understand the options, opportunities, challenges, and processes involved in the creation of a psychiatric educational website. We sought to integrate visual and interactive Web design elements to underscore the potential of such Web technology. A pilot website (PsychOnCall) was created to demonstrate the potential of Web technology in medical and psychiatric education. Creating an educational website is now technically easier than ever before, and the primary challenge no longer is technology but rather the creation, validation, and maintenance of information for such websites as well as translating text-based didactics into visual and interactive tools. Medical educators can influence the design and implementation of online educational resources through creating their own websites and engaging medical students and residents in the process.
Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko
2014-07-01
TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using a SPARQL query language. Because TogoTable uses RDF, it can integrate annotations from not only the reference database to which the IDs originally belong, but also externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
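TogoTable's core move, turning a column of database IDs into a single SPARQL lookup against an RDF endpoint, can be sketched with a `VALUES` block. The prefix `ex:` and the default predicate below are placeholders for illustration, not TogoTable's actual queries.

```python
# Illustrative sketch: one SPARQL query fetching an annotation for each ID
# in an uploaded table column, via a VALUES block. Prefix and predicate
# are placeholders, not TogoTable's real query templates.

def annotation_query(ids, predicate="rdfs:label"):
    """Build a SPARQL query fetching one annotation per input ID."""
    values = " ".join(f"ex:{i}" for i in ids)
    return (
        "SELECT ?id ?annotation WHERE {\n"
        f"  VALUES ?id {{ {values} }}\n"
        f"  ?id {predicate} ?annotation .\n"
        "}"
    )

print(annotation_query(["P12345", "Q67890"]))
```

Because the data are RDF, the same pattern can walk `owl:sameAs` or cross-reference links into externally linked databases, which is what lets TogoTable annotate a GeneID column with, say, Protein Data Bank entries via the UniProt RDF.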
Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.
2012-01-01
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544
Creating a Pilot Educational Psychiatry Website: Opportunities, Barriers, and Next Steps
O'Connor, Ryan; Franzen, Jamie; Snow, Caitlin; Boland, Robert; Kitts, Robert
2015-01-01
Background While medical students and residents may be utilizing websites as online learning resources, medical trainees and educators now have the opportunity to create such educational websites and digital tools on their own. However, the process and theory of building educational websites for medical education have not yet been fully explored. Objective To understand the opportunities, barriers, and process of creating a novel medical educational website. Methods We created a pilot psychiatric educational website to better understand the options, opportunities, challenges, and processes involved in the creation of a psychiatric educational website. We sought to integrate visual and interactive Web design elements to underscore the potential of such Web technology. Results A pilot website (PsychOnCall) was created to demonstrate the potential of Web technology in medical and psychiatric education. Conclusions Creating an educational website is now technically easier than ever before, and the primary challenge no longer is technology but rather the creation, validation, and maintenance of information for such websites as well as translating text-based didactics into visual and interactive tools. Medical educators can influence the design and implementation of online educational resources through creating their own websites and engaging medical students and residents in the process. PMID:27731837
Optimized autonomous space in-situ sensor web for volcano monitoring
Song, W.-Z.; Shirazi, B.; Huang, R.; Xu, M.; Peterson, N.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.; Kedar, S.; Chien, S.; Webb, F.; Kiely, A.; Doubleday, J.; Davies, A.; Pieri, D.
2010-01-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) has developed a prototype of a dynamic and scalable hazard-monitoring sensor web and applied it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) has two-way communication capability between ground and space assets, uses both space and ground data for optimal allocation of limited bandwidth resources on the ground, and uses smart management of competing demands for limited space assets. It also enables scalability and seamless infusion of future space and in-situ assets into the sensor-web. The space and in-situ control components of the system are integrated such that each element is capable of autonomously tasking the other. The ground in-situ component was deployed into the craters and around the flanks of Mount St. Helens in July 2009, and linked to the command and control of the Earth Observing One (EO-1) satellite. © 2010 IEEE.
A Framework for Integrating Oceanographic Data Repositories
NASA Astrophysics Data System (ADS)
Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.
2010-12-01
Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. 
S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.
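S2S builds on OpenSearch descriptions of search services, so the mechanics are worth a concrete sketch: an OpenSearch 1.1 description document declares a URL template with placeholders such as `{searchTerms}`, which a client expands to query the service. The sample description and endpoint below are made up; the placeholder mechanics follow the OpenSearch 1.1 specification.

```python
# Sketch: reading an OpenSearch 1.1 description document and expanding its
# URL template. The sample document and endpoint are illustrative only;
# the {searchTerms}/{startPage?} placeholders follow the OpenSearch spec.
import xml.etree.ElementTree as ET
from urllib.parse import quote

DESCRIPTION = """<?xml version="1.0"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Ocean Profiles</ShortName>
  <Url type="application/atom+xml"
       template="http://example.org/search?q={searchTerms}&amp;page={startPage?}"/>
</OpenSearchDescription>"""

NS = {"os": "http://a9.com/-/spec/opensearch/1.1/"}

def build_query(description_xml: str, terms: str, page: int = 1) -> str:
    """Expand the first Url template in an OpenSearch description."""
    root = ET.fromstring(description_xml)
    template = root.find("os:Url", NS).attrib["template"]
    # Fill the required placeholder and the optional (trailing '?') one.
    return (template.replace("{searchTerms}", quote(terms))
                    .replace("{startPage?}", str(page)))

url = build_query(DESCRIPTION, "salinity profile", page=2)
print(url)  # http://example.org/search?q=salinity%20profile&page=2
```

Because every S2S-style search service publishes such a machine-readable description, a generic client can construct queries against repositories it has never seen before.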
WebCHECK: The Website Evaluation Instrument
ERIC Educational Resources Information Center
Small, Ruth V.; Arnone, Marilyn P.
2014-01-01
Just as with print resources, as the number of Web-based resources continues to soar, the need to evaluate them has become a critical information skill for both children and adults. This is particularly true for schools where librarians often are called on to recommend Web resources to classroom teachers, parents, and students, and to support…
DW3 Classical Music Resources: Managing Mozart on the Web.
ERIC Educational Resources Information Center
Fineman, Yale
2001-01-01
Discusses the development of DW3 (Duke World Wide Web) Classical Music Resources, a vertical portal that comprises the most comprehensive collection of classical music resources on the Web with links to more than 2800 non-commercial pages/sites in over a dozen languages. Describes the hierarchical organization of subject headings and considers…
A Comparison of Web Resource Access Experiments: Planning for the New Millennium.
ERIC Educational Resources Information Center
Greenberg, Jane
This paper reports on research that compared five leading experiments that aim to improve access to the growing number of information resources on the World Wide Web. The objective was to identify characteristics of success and considerations for improvement in experiments providing access to Web resources via bibliographic control methods. The…
Combining data from multiple sources using the CUAHSI Hydrologic Information System
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Ames, D. P.; Horsburgh, J. S.; Goodall, J. L.
2012-12-01
The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has developed a Hydrologic Information System (HIS) to provide better access to data by enabling the publication, cataloging, discovery, retrieval, and analysis of hydrologic data using web services. The CUAHSI HIS is an Internet-based system composed of hydrologic databases and servers connected through web services, as well as software for data publication, discovery and access. The HIS metadata catalog lists close to 100 web services registered to provide data through this system, ranging from large federal agency data sets to experimental watersheds managed by University investigators. The system's flexibility in storing and enabling public access to similarly formatted data and metadata has created a community data resource from governmental and academic data that might otherwise remain private or be analyzed only in isolation. Comprehensive understanding of hydrology requires integration of this information from multiple sources. HydroDesktop is the client application developed as part of HIS to support data discovery and access through this system. HydroDesktop is founded on an open-source GIS client and has a plug-in architecture that has enabled the integration of modeling and analysis capability with the functionality for data discovery and access. Model integration is possible through a plug-in built on the OpenMI standard, and data visualization and analysis are supported by an R plug-in. This presentation will demonstrate HydroDesktop, showing how it provides an analysis environment within which data from multiple sources can be discovered, accessed and integrated.
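HIS services return observations as WaterML-style XML time series, which clients such as HydroDesktop parse into tabular data. The sketch below parses a deliberately simplified WaterML-like response; the element names and structure are simplified assumptions for illustration, not the exact CUAHSI WaterML schema.

```python
# Sketch: parsing a simplified WaterML-style time-series response, the kind
# of document a CUAHSI HIS GetValues call returns. Element names here are
# simplified assumptions, not the exact CUAHSI schema (which is namespaced).
import xml.etree.ElementTree as ET

RESPONSE = """<timeSeriesResponse>
  <timeSeries>
    <variable>Discharge</variable>
    <values>
      <value dateTime="2012-06-01T00:00:00">12.4</value>
      <value dateTime="2012-06-01T01:00:00">12.9</value>
    </values>
  </timeSeries>
</timeSeriesResponse>"""

def parse_series(xml_text: str):
    """Extract the variable name and (timestamp, value) pairs."""
    root = ET.fromstring(xml_text)
    variable = root.findtext(".//variable")
    points = [(v.attrib["dateTime"], float(v.text))
              for v in root.iter("value")]
    return variable, points

variable, points = parse_series(RESPONSE)
print(variable, points)
```

The value of a shared response format is exactly this: one parser serves data published by a federal agency and by a university watershed alike.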
dbSUPER: a database of super-enhancers in mouse and human genome
Khan, Aziz; Zhang, Xuegong
2016-01-01
Super-enhancers are clusters of transcriptional enhancers that drive cell-type-specific gene expression and are crucial to cell identity. Many disease-associated sequence variations are enriched in super-enhancer regions of disease-relevant cell types. Thus, super-enhancers can be used as potential biomarkers for disease diagnosis and therapeutics. Current studies have identified super-enhancers in more than 100 cell types and demonstrated their functional importance. However, a centralized resource to integrate all these findings is not currently available. We developed dbSUPER (http://bioinfo.au.tsinghua.edu.cn/dbsuper/), the first integrated and interactive database of super-enhancers, with the primary goal of providing a resource for assistance in further studies related to transcriptional control of cell identity and disease. dbSUPER provides a responsive and user-friendly web interface to facilitate efficient and comprehensive search and browsing. The data can be easily sent to Galaxy instances, GREAT and Cistrome web-servers for downstream analysis, and can also be visualized in the UCSC genome browser where custom tracks can be added automatically. The data can be downloaded and exported in a variety of formats. Furthermore, dbSUPER lists genes associated with super-enhancers and also links to external databases such as GeneCards, UniProt and Entrez. dbSUPER also provides an overlap analysis tool to annotate user-defined regions. We believe dbSUPER is a valuable resource for the biology and genetic research communities. PMID:26438538
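Super-enhancer calls are commonly distributed as BED intervals, and the overlap analysis dbSUPER offers amounts to interval intersection. A minimal sketch, using made-up coordinates and region names rather than real dbSUPER records:

```python
# Sketch of the kind of overlap analysis dbSUPER provides: intersect a
# user-defined region with super-enhancer intervals stored in BED format.
# Coordinates and names below are made-up illustrative data.
BED = """chr1\t1000\t5000\tSE_001
chr1\t8000\t12000\tSE_002
chr2\t3000\t7000\tSE_003"""

def load_bed(text):
    """Parse 4-column BED lines into (chrom, start, end, name) tuples."""
    rows = []
    for line in text.strip().splitlines():
        chrom, start, end, name = line.split("\t")
        rows.append((chrom, int(start), int(end), name))
    return rows

def overlapping(rows, chrom, start, end):
    """Half-open interval overlap test, per the BED coordinate convention."""
    return [name for c, s, e, name in rows
            if c == chrom and s < end and start < e]

hits = overlapping(load_bed(BED), "chr1", 4000, 9000)
print(hits)  # ['SE_001', 'SE_002']
```

Real analyses would use an interval tree or bedtools for scale, but the intersection logic is the same.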
A Web-Based Earth-Systems Knowledge Portal and Collaboration Platform
NASA Astrophysics Data System (ADS)
D'Agnese, F. A.; Turner, A. K.
2010-12-01
In support of complex water-resource sustainability projects in the Great Basin region of the United States, Earth Knowledge, Inc. has developed several web-based data management and analysis platforms that have been used by its scientists, clients, and the public to facilitate information exchanges, collaborations, and decision making. These platforms support accurate water-resource decision-making by combining second-generation internet (Web 2.0) technologies with traditional 2D GIS and web-based 2D and 3D mapping systems such as Google Maps and Google Earth. Most data management and analysis systems use traditional software systems to address the data needs and usage behavior of the scientific community. In contrast, these platforms employ more accessible open-source and “off-the-shelf” consumer-oriented, hosted web-services. They exploit familiar software tools using industry standard protocols, formats, and APIs to discover, process, fuse, and visualize earth, engineering, and social science datasets. Thus, they respond to the information needs and web-interface expectations of both subject-matter experts and the public. Because the platforms continue to gather and store all the contributions of their broad spectrum of users, each new assessment leverages the data, information, and expertise derived from previous investigations. In the last year, Earth Knowledge completed a conceptual system design and feasibility study for a platform comprising a Knowledge Portal, which gives users access to information and knowledge developed by the science enterprise, and a Collaboration Environment Module, a framework that links the user-access functions to a Technical Core supporting technical and scientific analyses (Data Management, Analysis and Modeling, and Decision Management) and to essential system administrative functions within an Administrative Module.
The overriding technical challenge is the design and development of a single technical platform that is accessed through a flexible series of knowledge portal and collaboration environment styles reflecting the information needs and user expectations of a diverse community of users. Recent investigations have defined the information needs and expectations of the major end-users and also have reviewed and assessed a wide variety of modern web-based technologies. Combining these efforts produced design specifications and recommendations for the selection and integration of web- and client-based tools. When fully developed, the resulting platform will:
- Support new, advanced information systems and decision environments that take full advantage of multiple data sources and platforms;
- Provide a distribution network tailored to the timely delivery of products to a broad range of users that are needed to support applications in disaster management, resource management, energy, and urban sustainability;
- Establish new integrated multiple-user requirements and knowledge databases that support researchers and promote infusion of successful technologies into existing processes; and
- Develop new decision support strategies and presentation methodologies for applied earth science applications to reduce risk, cost, and time.
Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G
2016-01-01
The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed, and all reviewers (n = 3) responded very positively to the evaluation questions on usability and on the system's capability to meet the requirements (average score: 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models in the intersection of health care and clinical research.
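The template-generation step (query an RDF store for an entity's attributes and their ISO 21090 data types, then assemble a template) can be sketched conceptually without an RDF library. The triples and vocabulary below are invented for illustration; the real system uses an RDF store queried with SPARQL, not this in-memory pattern matcher.

```python
# Conceptual sketch of deriving a domain-specific template from RDF-style
# triples, as the BRIDG-based generator does with a real RDF store and
# SPARQL queries. The triples and predicate names are invented examples.
TRIPLES = [
    ("Study", "hasAttribute", "studyTitle"),
    ("Study", "hasAttribute", "startDate"),
    ("studyTitle", "hasDatatype", "ISO21090:ST"),
    ("startDate", "hasDatatype", "ISO21090:TS"),
]

def match(triples, s=None, p=None, o=None):
    """Return triples matching an (s, p, o) pattern; None is a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

def template_for(entity):
    """Build an {attribute: datatype} template for one modeled entity."""
    attrs = [o for _, _, o in match(TRIPLES, s=entity, p="hasAttribute")]
    return {a: match(TRIPLES, s=a, p="hasDatatype")[0][2] for a in attrs}

template = template_for("Study")
print(template)
```

In SPARQL the same lookup would be a single query joining the two predicates; the pattern-with-wildcards idea is identical.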
Li, Rhea; Raber, Margaret; Chandra, Joya
2015-03-31
Obesity has been a growing problem among children and adolescents in the United States for a number of decades. Childhood cancer survivors (CCS) are more susceptible to the downstream health consequences of obesity such as cardiovascular disease, endocrine issues, and risk of cancer recurrence due to late effects of treatment and suboptimal dietary and physical activity habits. The objective of this study was to document the development of a Web-based cookbook of healthy recipes and nutrition resources to help enable pediatric cancer patients and survivors to lead healthier lifestyles. The Web-based cookbook, named "@TheTable", was created by a committee of researchers, a registered dietitian, patients and family members, a hospital chef, and community advisors and donors. Recipes were collected from several sources including recipe contests and social media. We incorporated advice from current patients, parents, and CCS. Over 400 recipes, searchable by several categories and with accompanying nutritional information, are currently available on the website. In addition to healthy recipes, social media functionality and cooking videos are integrated into the website. The website also features nutrition information resources including nutrition and cooking tip sheets available on several subjects. The "@TheTable" website is a unique resource for promoting healthy lifestyles spanning pediatric oncology prevention, treatment, and survivorship. Through evaluations of the website's current and future use, as well as incorporation into interventions designed to promote energy balance, we will continue to adapt and build this unique resource to serve cancer patients, survivors, and the general public.
Enabling Discoveries in Earth Sciences Through the Geosciences Network (GEON)
NASA Astrophysics Data System (ADS)
Seber, D.; Baru, C.; Memon, A.; Lin, K.; Youn, C.
2005-12-01
Taking advantage of state-of-the-art information technology resources, GEON researchers are building a cyberinfrastructure designed to enable data sharing, semantic data integration, high-end computations and 4D visualization in easy-to-use web-based environments. The GEON Network currently allows users to search and register Earth science resources such as data sets (GIS layers, GMT files, geoTIFF images, ASCII files, relational databases, etc.), software applications, or ontologies. Portal-based access mechanisms enable developers to build dynamic user interfaces to conduct advanced processing and modeling efforts across distributed computers and supercomputers. Researchers and educators can access the networked resources through the GEON portal and its portlets, which were developed to support better and more comprehensive scientific and educational studies. For example, the SYNSEIS portlet in GEON enables users to access in near-real time seismic waveforms from the IRIS Data Management Center, easily build a 3D geologic model within the area of the seismic station(s) and the epicenter, and perform a 3D synthetic seismogram analysis to understand the lithospheric structure and earthquake source parameters for any given earthquake in the US. Similarly, GEON's workbench area enables users to create their own work environment; copy, visualize, and analyze any data sets within the network; and create subsets of the data sets for their own purposes. Since all these resources are built as part of a Service-Oriented Architecture (SOA), they can also be used in other development platforms. One such platform is the Kepler workflow system, which can access web-service-based resources and provides users with graphical programming interfaces to build a model to conduct computations and/or visualization efforts using the networked resources.
Developments in the area of semantic integration of the networked datasets continue to advance, and prototype studies can be accessed via the GEON portal at www.geongrid.org.
[Design and implementation of Chinese materia medica resources survey results display system].
Wang, Hui; Zhang, Xiao-Bo; Ge, Xiao-Guang; Jin, Yan; Wang, Ling; Zhao, Yan-Ping; Jing, Zhi-Xian; Guo, Lan-Ping; Huang, Lu-Qi
2017-11-01
Since the fourth national census of traditional Chinese medicine resources began in 2011, a large amount of data has been collected and compiled, including wild medicinal plant resource data, medicinal plant cultivation information, traditional knowledge, and specimen information. The traditional paper-based recording method is inconvenient for query and application. A display platform for the results of the fourth national census was designed and developed using the B/S architecture, a Java web framework, and a service-oriented architecture (SOA). Through data integration and sorting, the platform provides users with integrated data services and data query and display solutions. It implements fine-grained data classification and offers simple data retrieval and statistical analysis functions. Using Echarts components, GeoServer, OpenLayers, and other technologies, the platform provides a variety of data display forms, such as charts, maps, and other visualizations, intuitively reflecting the quantity, distribution, and types of Chinese materia medica resources. It meets the data mapping requirements of users at different levels and provides support for management decision-making. Copyright© by the Chinese Pharmaceutical Association.
PhytoPath: an integrative resource for plant pathogen genomics.
Pedro, Helder; Maheswari, Uma; Urban, Martin; Irvine, Alistair George; Cuzick, Alayne; McDowall, Mark D; Staines, Daniel M; Kulesha, Eugene; Hammond-Kosack, Kim Elizabeth; Kersey, Paul Julian
2016-01-04
PhytoPath (www.phytopathdb.org) is a resource for genomic and phenotypic data from plant pathogen species that integrates phenotypic data for genes from PHI-base, an expertly curated catalog of genes with experimentally verified pathogenicity, with the Ensembl tools for data visualization and analysis. The resource focuses on fungal, protist (oomycete), and bacterial plant pathogens whose genomes have been sequenced and annotated. Genes with associated PHI-base data can be easily identified across all plant pathogen species using a BioMart-based query tool and visualized in their genomic context on the Ensembl genome browser. The PhytoPath resource contains data for 135 genomic sequences from 87 plant pathogen species, and 1364 genes curated for their role in pathogenicity and as targets for chemical intervention. Support for community annotation of gene models is provided using the WebApollo online gene editor, and we are working with interested communities to improve reference annotation for selected species. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
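BioMart tools such as the one PhytoPath exposes are driven by XML query documents that name a dataset, filters, and attributes. The sketch below composes such a query; the general XML shape follows the BioMart convention, but the dataset, filter, and attribute names here are hypothetical placeholders, not the actual PhytoPath schema.

```python
# Sketch of composing a BioMart-style XML query, the kind of request a
# BioMart query tool issues. Dataset, filter, and attribute names below
# are hypothetical placeholders, not PhytoPath's real schema.
import xml.etree.ElementTree as ET

def biomart_query(dataset, filters, attributes):
    """Build a BioMart-convention XML query as a string."""
    query = ET.Element("Query", virtualSchemaName="default",
                       formatter="TSV", header="0")
    ds = ET.SubElement(query, "Dataset", name=dataset, interface="default")
    for name, value in filters.items():
        ET.SubElement(ds, "Filter", name=name, value=value)
    for attr in attributes:
        ET.SubElement(ds, "Attribute", name=attr)
    return ET.tostring(query, encoding="unicode")

xml_query = biomart_query(
    "fgraminearum_gene",                          # hypothetical dataset
    {"phi_base_phenotype": "reduced_virulence"},  # hypothetical filter
    ["ensembl_gene_id", "description"],
)
print(xml_query)
```

The resulting document would typically be POSTed to a BioMart service endpoint, which streams back the matching rows in the requested format.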
Shakespeare Goes Online: Web Resources for Teaching Shakespeare.
ERIC Educational Resources Information Center
Schuetz, Carol L.
This annotated bibliography contains five sections and 62 items. The first section lists general resources including six Web site addresses; the second section, on Shakespeare's works, contains five Web site addresses; the third section, on Shakespeare and the Globe Theatre, provides five Web site addresses; the fourth section presents classroom…
SoyBase Simple Semantic Web Architecture and Protocol (SSWAP) Services
USDA-ARS?s Scientific Manuscript database
Semantic web technologies offer the potential to link internet resources and data by shared concepts without having to rely on absolute lexical matches. Thus two web sites or web resources which are concerned with similar data types could be identified based on similar semantics. In the biological...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wanderer, Thomas, E-mail: thomas.wanderer@dlr.de; Herle, Stefan, E-mail: stefan.herle@rwth-aachen.de
2015-04-15
Because renewable energy resources are spatially distributed, their profitability and impacts are highly correlated with the geographic locations of power plant deployments. A web-based Spatial Decision Support System (SDSS) based on a Multi-Criteria Decision Analysis (MCDA) approach has been implemented for identifying preferable locations for solar power plants based on user preferences. The designated areas found serve as input for scenario development in a subsequent integrated Environmental Impact Assessment. The capabilities of the SDSS service are showcased for Concentrated Solar Power (CSP) plants in the region of Andalusia, Spain. The resulting spatial patterns of possible power plant sites are an important input to the procedural chain of assessing impacts of renewable energies in an integrated effort. The applied methodology and the implemented SDSS are applicable to other renewable technologies as well. Highlights:
- The proposed tool facilitates well-founded CSP plant siting decisions.
- Spatial MCDA methods are implemented in a WebGIS environment.
- GIS-based SDSS can contribute to a modern integrated impact assessment workflow.
- The conducted case study proves the suitability of the methodology.
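At the core of such an SDSS is a weighted-sum MCDA: each candidate cell gets normalized criterion scores, user-supplied weights encode preferences, and constraint areas are masked out. A minimal sketch with illustrative criteria, weights, and scores (not the study's actual data):

```python
# Sketch of weighted-sum MCDA for site suitability: normalized criterion
# scores per cell, user-preference weights, and a constraint mask.
# All criteria, weights, and scores below are illustrative assumptions.
CELLS = {
    # cell id: {criterion: normalized suitability score in [0, 1]}
    "A": {"irradiance": 0.9, "slope": 0.8, "grid_distance": 0.6},
    "B": {"irradiance": 0.7, "slope": 0.9, "grid_distance": 0.9},
    "C": {"irradiance": 0.95, "slope": 0.2, "grid_distance": 0.8},
}
WEIGHTS = {"irradiance": 0.5, "slope": 0.3, "grid_distance": 0.2}
EXCLUDED = {"C"}  # constraint mask, e.g. a protected area

def suitability(cells, weights, excluded):
    """Weighted sum per cell, skipping constraint-masked cells."""
    return {cid: round(sum(weights[c] * s for c, s in scores.items()), 3)
            for cid, scores in cells.items() if cid not in excluded}

ranking = sorted(suitability(CELLS, WEIGHTS, EXCLUDED).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking)
```

In a raster WebGIS setting the same arithmetic runs per pixel over criterion layers; the weights are what the user adjusts in the SDSS interface.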
Resource Management Scheme Based on Ubiquitous Data Analysis
Lee, Heung Ki; Jung, Jaehee
2014-01-01
Resource management of the main memory and process handler is critical to enhancing the performance of a web server. Because of the transaction delay affecting incoming requests from web clients, web server systems pregenerate several web processes in anticipation of future requests. This reduces page-generation time because enough processes are available to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated-process mechanisms are required to handle clients' requests. Unfortunately, it is difficult to predict how many requests a web server system will receive. If a web server system builds too many web processes, it wastes a considerable amount of memory, and performance suffers. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, so that the web process management scheme consumes the fewest possible web transaction resources. In experiments, real web trace data were used to demonstrate the improved performance of the proposed scheme. PMID:25197692
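The adaptive idea can be sketched simply: predict the next interval's request count from recent log counts, then size the worker pool to match. The moving-average predictor, per-worker capacity, and clamping thresholds below are illustrative assumptions, not the paper's actual log-mining model.

```python
# Sketch of an adaptive web-process manager: a moving average over recent
# request counts predicts the next interval's load, and the pool is sized
# accordingly. Capacity and bounds are illustrative, not the paper's values.
from collections import deque

class AdaptiveProcessManager:
    def __init__(self, per_worker=50, min_workers=2, max_workers=32, window=5):
        self.per_worker = per_worker        # requests one worker can absorb
        self.min_workers = min_workers
        self.max_workers = max_workers
        self.history = deque(maxlen=window) # recent per-interval counts

    def observe(self, requests_in_interval: int) -> int:
        """Record one interval's request count; return the new pool size."""
        self.history.append(requests_in_interval)
        predicted = sum(self.history) / len(self.history)  # moving average
        needed = -(-int(predicted) // self.per_worker)     # ceiling division
        return max(self.min_workers, min(self.max_workers, needed))

mgr = AdaptiveProcessManager()
sizes = [mgr.observe(n) for n in [100, 300, 500, 80, 40]]
print(sizes)  # [2, 4, 6, 5, 5]
```

The window length trades responsiveness against stability: a short window tracks bursts quickly but churns the pool, while a long window smooths load at the cost of lag.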
Developing A Large-Scale, Collaborative, Productive Geoscience Education Network
NASA Astrophysics Data System (ADS)
Manduca, C. A.; Bralower, T. J.; Egger, A. E.; Fox, S.; Ledley, T. S.; Macdonald, H.; Mcconnell, D. A.; Mogk, D. W.; Tewksbury, B. J.
2012-12-01
Over the past 15 years, the geoscience education community has grown substantially and developed broad and deep capacity for collaboration and dissemination of ideas. While this community is best viewed as emergent from complex interactions among changing educational needs and opportunities, we highlight the role of several large projects in the development of a network within this community. In the 1990s, three NSF projects came together to build a robust web infrastructure to support the production and dissemination of on-line resources: On The Cutting Edge (OTCE), Earth Exploration Toolbook, and Starting Point: Teaching Introductory Geoscience. Along with the contemporaneous Digital Library for Earth System Education, these projects engaged geoscience educators nationwide in exploring professional development experiences that produced lasting on-line resources, collaborative authoring of resources, and models for web-based support for geoscience teaching. As a result, a culture developed in the 2000s in which geoscience educators anticipated that resources for geoscience teaching would be shared broadly and that collaborative authoring would be productive and engaging. By this time, a diverse set of examples demonstrated the power of the web infrastructure in supporting collaboration, dissemination and professional development. Building on this foundation, more recent work has expanded both the size of the network and the scope of its work. Many large research projects initiated collaborations to disseminate resources supporting educational use of their data. Research results from the rapidly expanding geoscience education research community were integrated into the Pedagogies in Action website and OTCE. Projects engaged faculty across the nation in large-scale data collection and educational research. The Climate Literacy and Energy Awareness Network and OTCE engaged community members in reviewing the expanding body of on-line resources.
Building Strong Geoscience Departments sought to create for departments the same type of shared information base that was supporting individual faculty. The Teach the Earth portal and its underlying web development tools were used by NSF-funded projects in education to disseminate their results. Leveraging these funded efforts, the Climate Literacy Network has expanded this geoscience education community to include individuals broadly interested in fostering climate literacy. Most recently, the InTeGrate project is implementing inter-institutional collaborative authoring, testing and evaluation of curricular materials. While these projects represent only a fraction of the activity in geoscience education, they are important drivers in the development of a large, national, coherent geoscience education network with the ability to collaborate and disseminate information effectively. Importantly, the community is open and defined by active participation. Key mechanisms for engagement have included alignment of project activities with participants' needs and goals; productive face-to-face and virtual workshops, events, and series; stipends for completion of large products; and strong supporting staff to keep projects moving and assist with product production. One measure of its success is the adoption and adaptation of resources and models by emerging projects, which results in the continued growth of the network.
U.S. Geological Survey Community for Data Integration-NWIS Web Services Snapshot Tool for ArcGIS
Holl, Sally
2011-01-01
U.S. Geological Survey (USGS) data resources are so vast that many scientists are unaware of data holdings that may be directly relevant to their research. Data are also difficult to access and large corporate databases, such as the National Water Information System (NWIS) that houses hydrologic data for the Nation, are challenging to use without considerable expertise and investment of time. The USGS Community for Data Integration (CDI) was established in 2009 to address data and information management issues affecting the proficiency of earth science research. A CDI workshop convened in 2009 identified common data integration needs of USGS scientists and targeted high value opportunities that might address these needs by leveraging existing projects in USGS science centers, in-kind contributions, and supplemental funding. To implement this strategy, CDI sponsored a software development project in 2010 to facilitate access and use of NWIS data with ArcGIS, a widely used Geographic Information System. The resulting software product, the NWIS Web Services Snapshot Tool for ArcGIS, is presented here.
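The NWIS web services the Snapshot Tool wraps are also reachable directly via the USGS Water Services REST API. The sketch below only constructs a request URL for the instantaneous-values service; the site and parameter codes are examples, and the Snapshot Tool itself accesses these services through ArcGIS rather than this exact call. Consult the USGS REST documentation for the full parameter set.

```python
# Sketch: building a USGS Water Services instantaneous-values request URL,
# the style of NWIS web service the Snapshot Tool wraps for ArcGIS users.
# Site and parameter codes are examples; see the USGS REST docs for details.
from urllib.parse import urlencode

BASE = "https://waterservices.usgs.gov/nwis/iv/"

def nwis_url(sites, parameter_cd="00060", period="P7D", fmt="json"):
    """00060 = discharge; period is an ISO-8601 duration (P7D = 7 days)."""
    params = {"format": fmt, "sites": ",".join(sites),
              "parameterCd": parameter_cd, "period": period}
    return BASE + "?" + urlencode(params)

url = nwis_url(["01646500"])  # Potomac River near Washington, D.C.
print(url)
```

Fetching that URL (e.g. with `urllib.request`) returns WaterML/JSON time series that a GIS client can join to station geometries, which is essentially the integration the Snapshot Tool automates.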
Haney, Gillian; Cocoros, Noelle; Cranston, Kevin; DeMaria, Alfred
2014-01-01
The Massachusetts Virtual Epidemiologic Network (MAVEN) was deployed in 2006 by the Massachusetts Department of Public Health, Bureau of Infectious Disease to serve as an integrated, Web-based disease surveillance and case management system. MAVEN replaced program-specific, siloed databases, which were inaccessible to local public health agencies and unable to integrate electronic reporting. Disease events are automatically created without human intervention when a case or laboratory report is received and triaged in real time to state and local public health personnel. Events move through workflows for initial notification, case investigation, and case management. Initial development was completed within 12 months, and recent state regulations mandate the use of MAVEN by all 351 jurisdictions. More than 300 local boards of health are using MAVEN, there are approximately one million events, and 70 laboratories report electronically. MAVEN has demonstrated responsiveness and flexibility to emerging diseases while also streamlining routine surveillance processes and improving timeliness of notifications and data completeness, although the long-term resource requirements are significant. PMID:24587547
Autumn leaf subsidies influence spring dynamics of freshwater plankton communities.
Fey, Samuel B; Mertens, Andrew N; Cottingham, Kathryn L
2015-07-01
While ecologists primarily focus on the immediate impact of ecological subsidies, understanding the importance of ecological subsidies requires quantifying the long-term temporal dynamics of subsidies on recipient ecosystems. Deciduous leaf litter transferred from terrestrial to aquatic ecosystems exerts both immediate and lasting effects on stream food webs. Recently, deciduous leaf additions have also been shown to be important subsidies for planktonic food webs in ponds during autumn; however, the inter-seasonal effects of autumn leaf subsidies on planktonic food webs have not been studied. We hypothesized that autumn leaf drop will affect the spring dynamics of freshwater pond food webs by altering the availability of resources, water transparency, and the metabolic state of ponds. We created leaf-added and no-leaf-added field mesocosms in autumn 2012, allowed mesocosms to ice-over for the winter, and began sampling the physical, chemical, and biological properties of mesocosms immediately following ice-off in spring 2013. At ice-off, leaf additions reduced dissolved oxygen, elevated total phosphorus concentrations and dissolved materials, and did not alter temperature or total nitrogen. These initial abiotic effects contributed to higher bacterial densities and lower chlorophyll concentrations, but by the end of spring, the abiotic environment, chlorophyll and bacterial densities converged. By contrast, zooplankton densities diverged between treatments during the spring, with leaf additions stimulating copepods but inhibiting cladocerans. We hypothesized that these differences between zooplankton orders resulted from resource shifts following leaf additions. These results suggest that leaf subsidies can alter both the short- and long-term dynamics of planktonic food webs, and highlight the importance of fully understanding how ecological subsidies are integrated into recipient food webs.
VectorBase: a home for invertebrate vectors of human pathogens
Lawson, Daniel; Arensburger, Peter; Atkinson, Peter; Besansky, Nora J.; Bruggner, Robert V.; Butler, Ryan; Campbell, Kathryn S.; Christophides, George K.; Christley, Scott; Dialynas, Emmanuel; Emmert, David; Hammond, Martin; Hill, Catherine A.; Kennedy, Ryan C.; Lobo, Neil F.; MacCallum, M. Robert; Madey, Greg; Megy, Karine; Redmond, Seth; Russo, Susan; Severson, David W.; Stinson, Eric O.; Topalis, Pantelis; Zdobnov, Evgeny M.; Birney, Ewan; Gelbart, William M.; Kafatos, Fotis C.; Louis, Christos; Collins, Frank H.
2007-01-01
VectorBase is a web-accessible data repository for information about invertebrate vectors of human pathogens. VectorBase annotates and maintains vector genomes, providing an integrated resource for the research community. Currently, VectorBase contains genome information for two organisms: Anopheles gambiae, a vector for the Plasmodium protozoan agent causing malaria, and Aedes aegypti, a vector for the flaviviral agents causing yellow fever and dengue fever. PMID:17145709
Integration of cardiac proteome biology and medicine by a specialized knowledgebase.
Zong, Nobel C; Li, Haomin; Li, Hua; Lam, Maggie P Y; Jimenez, Rafael C; Kim, Christina S; Deng, Ning; Kim, Allen K; Choi, Jeong Ho; Zelaya, Ivette; Liem, David; Meyer, David; Odeberg, Jacob; Fang, Caiyun; Lu, Hao-Jie; Xu, Tao; Weiss, James; Duan, Huilong; Uhlen, Mathias; Yates, John R; Apweiler, Rolf; Ge, Junbo; Hermjakob, Henning; Ping, Peipei
2013-10-12
Omics sciences enable a systems-level perspective in characterizing cardiovascular biology. Integration of diverse proteomics data via a computational strategy will catalyze the assembly of contextualized knowledge, foster discoveries through multidisciplinary investigations, and minimize unnecessary redundancy in research efforts. The goal of this project is to develop a consolidated cardiac proteome knowledgebase with a novel bioinformatics pipeline and Web portals, thereby serving as a new resource to advance cardiovascular biology and medicine. We created the Cardiac Organellar Protein Atlas Knowledgebase (COPaKB; www.HeartProteome.org), a centralized platform of high-quality cardiac proteomic data, bioinformatics tools, and relevant cardiovascular phenotypes. Currently, COPaKB features 8 organellar modules, comprising 4203 LC-MS/MS experiments from human, mouse, Drosophila, and Caenorhabditis elegans, as well as expression images of 10,924 proteins in human myocardium. In addition, the Java-coded bioinformatics tools provided by COPaKB enable cardiovascular investigators in all disciplines to retrieve and analyze pertinent organellar protein properties of interest. COPaKB provides an innovative and interactive resource that connects research interests with new biological discoveries in protein sciences. With an array of intuitive tools in this unified Web server, nonproteomics investigators can conveniently collaborate with proteomics specialists to dissect the molecular signatures of cardiovascular phenotypes.
Rangel, Luiz Thibério; Novaes, Jeniffer; Durham, Alan M.; Madeira, Alda Maria B. N.; Gruber, Arthur
2013-01-01
Parasites of the genus Eimeria infect a wide range of vertebrate hosts, including chickens. We have recently reported a comparative analysis of the transcriptomes of Eimeria acervulina, Eimeria maxima and Eimeria tenella, integrating ORESTES data produced by our group and publicly available Expressed Sequence Tags (ESTs). All cDNA reads have been assembled, and the reconstructed transcripts have been submitted to a comprehensive functional annotation pipeline. Additional studies included orthology assignment across apicomplexan parasites and clustering analyses of gene expression profiles among different developmental stages of the parasites. To make all this body of information publicly available, we constructed the Eimeria Transcript Database (EimeriaTDB), a web repository that provides access to sequence data, annotation and comparative analyses. Here, we describe the web interface, available sequence data sets and query tools implemented on the site. The main goal of this work is to offer a public repository of sequence and functional annotation data of reconstructed transcripts of parasites of the genus Eimeria. We believe that EimeriaTDB will represent a valuable and complementary resource for the Eimeria scientific community and for those researchers interested in comparative genomics of apicomplexan parasites. Database URL: http://www.coccidia.icb.usp.br/eimeriatdb/ PMID:23411718
Learning about the Human Genome. Part 2: Resources for Science Educators. ERIC Digest.
ERIC Educational Resources Information Center
Haury, David L.
This ERIC Digest identifies how the human genome project fits into the "National Science Education Standards" and lists Human Genome Project Web sites found on the World Wide Web. It is a resource companion to "Learning about the Human Genome. Part 1: Challenge to Science Educators" (Haury 2001). The Web resources and…
A health analytics semantic ETL service for obesity surveillance.
Poulymenopoulou, M; Papakonstantinou, D; Malamateniou, F; Vassilacopoulos, G
2015-01-01
The increasingly large amounts of data produced in healthcare (e.g. collected through health information systems such as electronic medical records (EMRs), or through novel data sources such as personal health records (PHRs), social media, and web resources) enable the creation of detailed records about people's health, sentiments, and activities (e.g. physical activity, diet, sleep quality) that can be used in public health, among other areas. However, despite the transformative potential of big data in public health surveillance, several challenges remain in integrating such data. In this paper, the interoperability challenge is tackled and a semantic Extract-Transform-Load (ETL) service is proposed that semantically annotates big data to yield data that are valuable for analysis. This service is conceived as part of a cloud-based health analytics engine that interacts with existing healthcare information exchange networks, such as Integrating the Healthcare Enterprise (IHE), as well as PHRs, sensors, mobile applications, and other web resources, to retrieve patient health, behavioral, and daily activity data. The semantic ETL service aims at semantically integrating big data for use by analytic mechanisms. An illustrative implementation of the service on big data potentially relevant to human obesity enables the use of appropriate analytic techniques (e.g. machine learning, text mining) that are expected to assist in identifying patterns and contributing factors (e.g. genetic background, social, environmental) for this social phenomenon and, hence, drive health policy changes and promote healthy behaviors where residents live, work, learn, shop and play.
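The core of such a semantic ETL step is lifting flat records into subject-predicate-object triples. The sketch below illustrates the idea only; the vocabulary URIs and record fields are invented placeholders, not the ontology used by the described service:

```python
# Illustrative sketch: a minimal "semantic ETL" transform that annotates a
# flat activity record as RDF-style triples. The vocabulary namespace and
# field names are hypothetical placeholders.
EX = "http://example.org/vocab#"

def to_triples(record: dict) -> list:
    """Transform one PHR/sensor record into (subject, predicate, object) triples."""
    subject = f"http://example.org/person/{record['id']}"
    return [
        (subject, EX + "steps", str(record["steps"])),
        (subject, EX + "sleepHours", str(record["sleep_hours"])),
        (subject, EX + "dietScore", str(record["diet_score"])),
    ]

triples = to_triples({"id": "p1", "steps": 8500, "sleep_hours": 6.5, "diet_score": 3})
for t in triples:
    print(t)
```

In a real pipeline the predicates would come from a shared ontology so that the loaded triples can be joined with data from other sources.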
GeoSearch: A lightweight broking middleware for geospatial resources discovery
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; Liu, K.; Xia, J.
2012-12-01
With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from these massive and heterogeneous holdings. Developments over past decades have produced many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge, for the following reasons. 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats and GUI styles to organize, present and publish metadata, and it is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt a periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and the overhead of maintaining data consistency. 3) Heterogeneous semantics complicate data discovery. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution, but integrating them with existing services is challenging due to expandability limitations of the service frameworks and metadata templates. 4) The capabilities to help users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore and analyze search results.
Furthermore, the presentation of value-added additional information (such as service quality and user feedback), which conveys important decision-supporting information, is missing. To address these issues, we prototyped a distributed search engine, GeoSearch, based on a brokering middleware framework to search, integrate and visualize heterogeneous geospatial resources. Specifically: 1) A lightweight discovery broker is developed to conduct distributed search; the broker retrieves metadata records for geospatial resources and additional information from dispersed services (portals and catalogues) and other systems on the fly. 2) A quality monitoring and evaluation broker (i.e., QoS Checker) is developed and integrated to provide quality information for geospatial web services. 3) Semantics-assisted search and relevance evaluation functions are implemented by loosely interoperating with the ESIP Testbed component. 4) Sophisticated information and data visualization functionalities and tools are assembled to improve user experience and assist resource selection.
e-Infrastructures for Astronomy: An Integrated View
NASA Astrophysics Data System (ADS)
Pasian, F.; Longo, G.
2010-12-01
As for other disciplines, the capability of performing “Big Science” in astrophysics requires the availability of large facilities. In the field of ICT, computational resources (e.g. HPC) are important, but are far from being enough for the community: as a matter of fact, the whole set of e-infrastructures (network, computing nodes, data repositories, applications) need to work in an interoperable way. This implies the development of common (or at least compatible) user interfaces to computing resources, transparent access to observations and numerical simulations through the Virtual Observatory, integrated data processing pipelines, data mining and semantic web applications. Achieving this interoperability goal is a must to build a real “Knowledge Infrastructure” in the astrophysical domain. Also, the emergence of new professional profiles (e.g. the “astro-informatician”) is necessary to allow defining and implementing properly this conceptual schema.
Utilizing Social Bookmarking Tag Space for Web Content Discovery: A Social Network Analysis Approach
ERIC Educational Resources Information Center
Wei, Wei
2010-01-01
Social bookmarking has gained popularity since the advent of Web 2.0. Keywords known as tags are created to annotate web content, and the resulting tag space composed of the tags, the resources, and the users arises as a new platform for web content discovery. Useful and interesting web resources can be located through searching and browsing based…
Dynamic Space for Rent: Using Commercial Web Hosting to Develop a Web 2.0 Intranet
ERIC Educational Resources Information Center
Hodgins, Dave
2010-01-01
The explosion of Web 2.0 into libraries has left many smaller academic libraries (and other libraries with limited computing resources or support) to work in the cloud using free Web applications. The use of commercial Web hosting is an innovative approach to the problem of inadequate local resources. While the idea of insourcing IT will seem…
G6PDdb, an integrated database of glucose-6-phosphate dehydrogenase (G6PD) mutations.
Kwok, Colin J; Martin, Andrew C R; Au, Shannon W N; Lam, Veronica M S
2002-03-01
G6PDdb (http://www.rubic.rdg.ac.uk/g6pd/ or http://www.bioinf.org.uk/g6pd/) is a newly created web-accessible locus-specific mutation database for the human Glucose-6-phosphate dehydrogenase (G6PD) gene. The relational database integrates up-to-date mutational and structural data from various databanks (GenBank, Protein Data Bank, etc.) with biochemically characterized variants and their associated phenotypes obtained from published literature and the Favism website. An automated analysis of the mutations likely to have a significant impact on the structure of the protein has been performed using a recently developed procedure. The database may be queried online and the full results of the analysis of the structural impact of mutations are available. The web page provides a form for submitting additional mutation data and is linked to resources such as the Favism website, OMIM, HGMD, HGVBASE, and the PDB. This database provides insights into the molecular aspects and clinical significance of G6PD deficiency for researchers and clinicians and the web page functions as a knowledge base relevant to the understanding of G6PD deficiency and its management. Copyright 2002 Wiley-Liss, Inc.
A web-based rapid assessment tool for production publishing solutions
NASA Astrophysics Data System (ADS)
Sun, Tong
2010-02-01
Solution assessment is a critical first step in understanding and measuring the business process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task that involves substantial domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios, and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of various estimated performance metrics (e.g. throughput, turnaround time, resource utilization) and operational cost. By integrating a digital publishing workflow ontology and an activity-based costing model with a Petri-net-based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides organizations with a low-cost and rapid assessment before committing to any purchase. The tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.
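The Petri-net execution idea underlying such a workflow simulation engine can be sketched in a few lines: a transition fires when every input place holds a token, consuming input tokens and producing output tokens. The place and transition names below model a toy publishing flow and are invented for illustration, not taken from the described tool:

```python
# Minimal Petri-net firing sketch. A marking maps place names to token
# counts; a transition is (input_places, output_places).
def fire(marking, transition):
    """Fire one transition: consume input tokens, produce output tokens."""
    inputs, outputs = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None  # transition not enabled
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# Toy flow: a queued job is prepped, then printed on a shared press.
t_prepress = (["queued"], ["prepped"])
t_print = (["prepped", "press_free"], ["done", "press_free"])

m = {"queued": 1, "press_free": 1}
m = fire(m, t_prepress)
m = fire(m, t_print)  # the press token is consumed and returned
print(m)
```

A simulation engine layers timing and cost annotations on top of this firing rule to estimate throughput, turnaround time and resource utilization.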
blend4php: a PHP API for galaxy.
Wytko, Connor; Soto, Brian; Ficklin, Stephen P
2017-01-01
Galaxy is a popular framework for the execution of complex analytical pipelines, typically for large data sets, and is commonly used for (but not limited to) genomic, genetic and related biological analysis. It provides a web front-end and integrates with high-performance computing resources. Here we report the development of the blend4php library, which wraps Galaxy's RESTful API in a PHP-based library. PHP-based web applications can use blend4php to automate execution, monitoring and management of a remote Galaxy server, including its users, workflows, jobs and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0). Database URL: https://github.com/galaxyproject/blend4php. © The Author(s) 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
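Because blend4php wraps a RESTful API, the same interactions can be sketched in any language: listing a server's workflows, for instance, is a GET against the /api/workflows endpoint with an API key. The sketch below only constructs the URL and headers (the server URL and key are placeholders, the header name follows Galaxy's key-based authentication, and no request is sent), so treat the details as assumptions rather than a blend4php equivalent:

```python
# Sketch: assemble a Galaxy REST API call the way a client library would.
# Server URL and API key are hypothetical placeholders.
def galaxy_request(server: str, endpoint: str, api_key: str):
    """Build the URL and headers for a Galaxy API call (not executed here)."""
    url = f"{server.rstrip('/')}/api/{endpoint.lstrip('/')}"
    headers = {"x-api-key": api_key}  # Galaxy's key-based auth header
    return url, headers

url, headers = galaxy_request("https://usegalaxy.org/", "workflows", "SECRET")
print(url)
```

A library such as blend4php adds, on top of this, response parsing and typed wrappers for users, workflows, jobs and histories.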
Warming and Resource Availability Shift Food Web Structure and Metabolism
O'Connor, Mary I.; Piehler, Michael F.; Leech, Dina M.; Anton, Andrea; Bruno, John F.
2009-01-01
Climate change disrupts ecological systems in many ways. Many documented responses depend on species' life histories, contributing to the view that climate change effects are important but difficult to characterize generally. However, systematic variation in metabolic effects of temperature across trophic levels suggests that warming may lead to predictable shifts in food web structure and productivity. We experimentally tested the effects of warming on food web structure and productivity under two resource supply scenarios. Consistent with predictions based on universal metabolic responses to temperature, we found that warming strengthened consumer control of primary production when resources were augmented. Warming shifted food web structure and reduced total biomass despite increases in primary productivity in a marine food web. In contrast, at lower resource levels, food web production was constrained at all temperatures. These results demonstrate that small temperature changes could dramatically shift food web dynamics and provide a general, species-independent mechanism for ecological response to environmental temperature change. PMID:19707271
Development of a web application for water resources based on open source software
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.
2014-01-01
This article presents the research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services: (1) managing, presenting and storing geospatial data, (2) supporting water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, jQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available at all times and accessible from everywhere; it creates a real-time multi-user collaboration platform; the programming-language code and components are interoperable and designed to work in a distributed computer environment; it is flexible for adding additional components and services; and it is scalable depending on the workload. The application was successfully tested in a case study with concurrent multi-user access.
Metadata for Web Resources: How Metadata Works on the Web.
ERIC Educational Resources Information Center
Dillon, Martin
This paper discusses bibliographic control of knowledge resources on the World Wide Web. The first section sets the context of the inquiry. The second section covers the following topics related to metadata: (1) definitions of metadata, including metadata as tags and as descriptors; (2) metadata on the Web, including general metadata systems,…
Replacement of SSE with NASA's POWER Project GIS-enabled Web Data Portal
Atmospheric Science Data Center
2018-04-30
Replacement of SSE (Release 6) with NASA's Prediction of Worldwide Energy Resource (POWER) Project GIS-enabled Web Data Portal. The POWER Project is funded largely by the NASA Earth Applied Sciences program.
Web usage mining at an academic health sciences library: an exploratory study.
Bracke, Paul J
2004-10-01
This paper explores the potential of multinomial logistic regression analysis to perform Web usage mining for an academic health sciences library Website. Usage of database-driven resource gateway pages was logged for a six-month period, including information about users' network addresses, referring uniform resource locators (URLs), and types of resource accessed. It was found that referring URL did vary significantly by two factors: whether a user was on-campus and what type of resource was accessed. Although the data available for analysis are limited by the nature of the Web and concerns for privacy, this method demonstrates the potential for gaining insight into Web usage that supplements Web log analysis. It can be used to improve the design of static and dynamic Websites today and could be used in the design of more advanced Web systems in the future.
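The modeling idea in the study above, relating categorical log features such as on/off-campus status and referrer to the type of resource accessed, is a multinomial logistic (softmax) regression. The sketch below fits one by gradient descent on entirely synthetic data; the features, labels and coefficients are fabricated for illustration and do not reproduce the paper's analysis:

```python
# Exploratory sketch on synthetic web-log data: softmax regression relating
# two binary predictors to a 3-class resource type. All data are fabricated.
import numpy as np

rng = np.random.default_rng(0)
# features: [on_campus, referrer_is_catalog]; labels: 0=database, 1=ejournal, 2=other
X = rng.integers(0, 2, size=(300, 2)).astype(float)
true_W = np.array([[2.0, -1.0], [-1.5, 2.0], [0.0, 0.0]])  # invented ground truth
logits = X @ true_W.T
y = np.array([rng.choice(3, p=np.exp(l) / np.exp(l).sum()) for l in logits])

# fit weights by gradient descent on the multinomial cross-entropy loss
W = np.zeros((3, 2))
for _ in range(500):
    P = np.exp(X @ W.T)
    P /= P.sum(axis=1, keepdims=True)        # predicted class probabilities
    Y = np.eye(3)[y]                         # one-hot labels
    W += 0.1 * (Y - P).T @ X / len(X)        # gradient step

pred = np.argmax(X @ W.T, axis=1)
print("training accuracy:", (pred == y).mean())
```

With real logs, the fitted coefficients (rather than raw accuracy) are the interesting output: they quantify how much each factor, such as being on-campus, shifts the odds of reaching each resource type.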
Integration of robotic resources into FORCEnet
NASA Astrophysics Data System (ADS)
Nguyen, Chinh; Carroll, Daniel; Nguyen, Hoa
2006-05-01
The Networked Intelligence, Surveillance, and Reconnaissance (NISR) project integrates robotic resources into Composeable FORCEnet to control and exploit unmanned systems over extremely long distances. The foundations are built upon FORCEnet-the U.S. Navy's process to define C4ISR for net-centric operations-and the Navy Unmanned Systems Common Control Roadmap to develop technologies and standards for interoperability, data sharing, publish-and-subscribe methodology, and software reuse. The paper defines the goals and boundaries for NISR with focus on the system architecture, including the design tradeoffs necessary for unmanned systems in a net-centric model. Special attention is given to two specific scenarios demonstrating the integration of unmanned ground and water surface vehicles into the open-architecture web-based command-and-control information-management system of Composeable FORCEnet. Planned spiral development for NISR will improve collaborative control, expand robotic sensor capabilities, address multiple domains including underwater and aerial platforms, and extend distributive communications infrastructure for battlespace optimization for unmanned systems in net-centric operations.
Web based aphasia test using service oriented architecture (SOA)
NASA Astrophysics Data System (ADS)
Voos, J. A.; Vigliecca, N. S.; Gonzalez, E. A.
2007-11-01
Based on an aphasia test for Spanish speakers that analyzes a patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury with its anatomical and physiological characteristics, etc.) that are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following a service-oriented architecture and implemented in a web site containing a test suite, which allows both integrating the aphasia test with other neuropsychological instruments and increasing the available site information for scientific research. The test design, the database and the study of its psychometric properties (validity, reliability and objectivity) were made in conjunction with neuropsychological researchers, who participated actively in the software design based on feedback from patients and other subjects of investigation.
Rose, Peter W; Prlić, Andreas; Bi, Chunxiao; Bluhm, Wolfgang F; Christie, Cole H; Dutta, Shuchismita; Green, Rachel Kramer; Goodsell, David S; Westbrook, John D; Woo, Jesse; Young, Jasmine; Zardecki, Christine; Berman, Helen M; Bourne, Philip E; Burley, Stephen K
2015-01-01
The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Models and Simulations as a Service: Exploring the Use of Galaxy for Delivering Computational Models
Walker, Mark A.; Madduri, Ravi; Rodriguez, Alex; Greenstein, Joseph L.; Winslow, Raimond L.
2016-01-01
We describe the ways in which Galaxy, a web-based reproducible research platform, can be used for web-based sharing of complex computational models. Galaxy allows users to seamlessly customize and run simulations on cloud computing resources, a concept we refer to as Models and Simulations as a Service (MaSS). To illustrate this application of Galaxy, we have developed a tool suite for simulating a high spatial-resolution model of the cardiac Ca2+ spark that requires supercomputing resources for execution. We also present tools for simulating models encoded in the SBML and CellML model description languages, thus demonstrating how Galaxy’s reproducible research features can be leveraged by existing technologies. Finally, we demonstrate how the Galaxy workflow editor can be used to compose integrative models from constituent submodules. This work represents an important novel approach, to our knowledge, to making computational simulations more accessible to the broader scientific community. PMID:26958881
A journey to Semantic Web query federation in the life sciences.
Cheung, Kei-Hoi; Frost, H Robert; Marshall, M Scott; Prud'hommeaux, Eric; Samwald, Matthias; Zhao, Jun; Paschke, Adrian
2009-10-01
As interest in adopting the Semantic Web in the biomedical domain continues to grow, Semantic Web technology has been evolving and maturing. A variety of technological approaches including triplestore technologies, SPARQL endpoints, Linked Data, and Vocabulary of Interlinked Datasets have emerged in recent years. In addition to the data warehouse construction, these technological approaches can be used to support dynamic query federation. As a community effort, the BioRDF task force, within the Semantic Web for Health Care and Life Sciences Interest Group, is exploring how these emerging approaches can be utilized to execute distributed queries across different neuroscience data sources. We have created two health care and life science knowledge bases. We have explored a variety of Semantic Web approaches to describe, map, and dynamically query multiple datasets. We have demonstrated several federation approaches that integrate diverse types of information about neurons and receptors that play an important role in basic, clinical, and translational neuroscience research. Particularly, we have created a prototype receptor explorer which uses OWL mappings to provide an integrated list of receptors and executes individual queries against different SPARQL endpoints. We have also employed the AIDA Toolkit, which is directed at groups of knowledge workers who cooperatively search, annotate, interpret, and enrich large collections of heterogeneous documents from diverse locations. We have explored a tool called "FeDeRate", which enables a global SPARQL query to be decomposed into subqueries against the remote databases offering either SPARQL or SQL query interfaces. Finally, we have explored how to use the vocabulary of interlinked Datasets (voiD) to create metadata for describing datasets exposed as Linked Data URIs or SPARQL endpoints. 
We have demonstrated the use of a set of novel and state-of-the-art Semantic Web technologies in support of a neuroscience query federation scenario. We have identified both the strengths and weaknesses of these technologies. While the Semantic Web offers a global data model including the use of Uniform Resource Identifiers (URIs), the proliferation of semantically equivalent URIs hinders large-scale data integration. Our work helps direct research and tool development, which will be of benefit to this community.
A journey to Semantic Web query federation in the life sciences
Cheung, Kei-Hoi; Frost, H Robert; Marshall, M Scott; Prud'hommeaux, Eric; Samwald, Matthias; Zhao, Jun; Paschke, Adrian
2009-01-01
Background As interest in adopting the Semantic Web in the biomedical domain continues to grow, Semantic Web technology has been evolving and maturing. A variety of technological approaches including triplestore technologies, SPARQL endpoints, Linked Data, and Vocabulary of Interlinked Datasets have emerged in recent years. In addition to the data warehouse construction, these technological approaches can be used to support dynamic query federation. As a community effort, the BioRDF task force, within the Semantic Web for Health Care and Life Sciences Interest Group, is exploring how these emerging approaches can be utilized to execute distributed queries across different neuroscience data sources. Methods and results We have created two health care and life science knowledge bases. We have explored a variety of Semantic Web approaches to describe, map, and dynamically query multiple datasets. We have demonstrated several federation approaches that integrate diverse types of information about neurons and receptors that play an important role in basic, clinical, and translational neuroscience research. Particularly, we have created a prototype receptor explorer which uses OWL mappings to provide an integrated list of receptors and executes individual queries against different SPARQL endpoints. We have also employed the AIDA Toolkit, which is directed at groups of knowledge workers who cooperatively search, annotate, interpret, and enrich large collections of heterogeneous documents from diverse locations. We have explored a tool called "FeDeRate", which enables a global SPARQL query to be decomposed into subqueries against the remote databases offering either SPARQL or SQL query interfaces. Finally, we have explored how to use the vocabulary of interlinked Datasets (voiD) to create metadata for describing datasets exposed as Linked Data URIs or SPARQL endpoints. 
Conclusion: We have demonstrated the use of a set of novel and state-of-the-art Semantic Web technologies in support of a neuroscience query federation scenario, and we have identified both the strengths and weaknesses of these technologies. While the Semantic Web offers a global data model including the use of Uniform Resource Identifiers (URIs), the proliferation of semantically equivalent URIs hinders large-scale data integration. Our work helps direct research and tool development, which will be of benefit to this community. PMID:19796394
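The federation idea described above, binding variables against triple patterns at several endpoints and joining the results, can be sketched in miniature. The following is an illustrative toy, not the BioRDF tooling; the data, prefixes, and matcher are invented for this example:

```python
# Toy federated query: match triple patterns against two in-memory
# "endpoints" and join the bindings, mimicking (in miniature) what a
# federated SPARQL query does across real endpoints. All data invented.

def match(triples, pattern):
    """Return variable bindings ('?'-prefixed terms) for matching triples."""
    results = []
    for triple in triples:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                break  # constant term mismatch: triple does not match
        else:
            results.append(binding)
    return results

# Two invented datasets standing in for remote SPARQL endpoints.
receptors = [("ex:GABRA1", "rdf:type", "ex:Receptor"),
             ("ex:HTR2A",  "rdf:type", "ex:Receptor")]
expression = [("ex:GABRA1", "ex:expressedIn", "ex:Hippocampus"),
              ("ex:DRD2",   "ex:expressedIn", "ex:Striatum")]

# "Federated" query: ?r a ex:Receptor . ?r ex:expressedIn ?region
step1 = match(receptors, ("?r", "rdf:type", "ex:Receptor"))
joined = []
for b in step1:
    for b2 in match(expression, (b["?r"], "ex:expressedIn", "?region")):
        joined.append({**b, **b2})

print(joined)  # [{'?r': 'ex:GABRA1', '?region': 'ex:Hippocampus'}]
```

Real federation engines additionally plan which endpoint answers which subquery; here that decision is hard-coded by querying each dataset in turn.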
Scholarly context not found: One in five articles suffers from reference rot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Martin; Van de Sompel, Herbert; Sanderson, Robert
The emergence of the web has fundamentally affected most aspects of information communication, including scholarly communication. The immediacy that characterizes publishing information to the web, as well as accessing it, allows for a dramatic increase in the speed of dissemination of scholarly knowledge. But the transition from a paper-based to a web-based scholarly communication system also poses challenges. In this paper, we focus on reference rot, the combination of link rot and content drift to which references to web resources included in Science, Technology, and Medicine (STM) articles are subject. We investigate the extent to which reference rot impacts the ability to revisit the web context that surrounds STM articles some time after their publication. We do so on the basis of a vast collection of articles from three corpora that span publication years 1997 to 2012. For over one million references to web resources extracted from over 3.5 million articles, we determine whether the HTTP URI is still responsive on the live web and whether web archives contain an archived snapshot representative of the state the referenced resource had at the time it was referenced. We observe that the fraction of articles containing references to web resources is growing steadily over time. We find one out of five STM articles suffering from reference rot, meaning it is impossible to revisit the web context that surrounds them some time after their publication. When only considering STM articles that contain references to web resources, this fraction increases to seven out of ten.
Scholarly context not found: One in five articles suffers from reference rot
Klein, Martin; Van de Sompel, Herbert; Sanderson, Robert; ...
2014-12-26
Web 2.0 and internet social networking: a new tool for disaster management?--lessons from Taiwan.
Huang, Cheng-Min; Chan, Edward; Hyder, Adnan A
2010-10-06
Internet social networking tools and the emerging Web 2.0 technologies are providing a new way for web users and health workers to share information and disseminate knowledge. Because they are immediate, two-way, and large in scale of impact, internet social networking tools have been utilized in emergency response during disasters. This paper highlights the use of internet social networking in disaster emergency response and in the public health management of disasters by focusing on a case study of the typhoon Morakot disaster in Taiwan. In that case, internet social networking and mobile technology were found to be helpful for community residents, professional emergency rescuers, and government agencies in gathering and disseminating real-time information regarding volunteer recruitment and relief supplies allocation. We note that if internet tools are to be integrated into the development of an emergency response system, its accessibility, accuracy, validity, feasibility, privacy, and scalability should be carefully considered, especially when applying it in resource-poor settings. This paper seeks to promote an internet-based emergency response system that integrates internet social networking and information communication technology into a central government disaster management system. Web-based networking provides two-way communication, establishing a reliable and accessible channel for proximal and distal users in disaster preparedness and management.
ERIC Educational Resources Information Center
Bodily, Robert; Wood, Steven
2017-01-01
This paper presents the technical infrastructure required to track student use of web-based resources in an introductory chemistry course, the design of a student dashboard, and the results from analyzing student web-based resource use. Students were tracked as they interacted with online homework problems and high quality course content videos.…
NASA Astrophysics Data System (ADS)
Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.
2017-12-01
As spatial datasets are increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study, the computing team developed a custom machine-learning, big-data computing tool designed to parse the web and return priority datasets to appropriate servers to develop an open-source global oil and gas infrastructure database. The results of this spatial smart search approach were validated against expert-driven, manual search results, which required a team of seven spatial scientists three months to produce. The custom machine learning tool parsed online, open systems, including zip files, ftp sites, and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical, and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions from EDX with these advanced spatio-temporal models has culminated in an integrated web-based decision-support tool.
This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental, social, and economic impacts, highlight knowledge or technology gaps, and reduce uncertainty for a range of 'what if' scenarios relevant to oil spill prevention efforts. These examples illustrate EDX's growing capabilities for advanced spatial data search and analysis to support geo-data science needs.
NASA Technical Reports Server (NTRS)
Hinke, Thomas H.
2004-01-01
Grid technology consists of middleware that permits distributed computations, data, and sensors to be seamlessly integrated into a secure, single-sign-on processing environment. In this environment, a user has to identify and authenticate himself once to the grid middleware, and can then utilize any of the distributed resources to which he has been granted access. Grid technology allows resources that exist in enterprises under different administrative control to be securely integrated into a single processing environment. The grid community has adopted commercial web services technology as a means for implementing persistent, re-usable grid services that sit on top of the basic distributed processing environment that grids provide. These grid services can then form building blocks for even more complex grid services. Each grid service is characterized using the Web Service Description Language, which provides a description of the interface and how other applications can access it. The emerging Semantic Grid work seeks to associate sufficient semantic information with each grid service such that applications will be able to automatically select, compose, and if necessary substitute available equivalent services in order to assemble collections of services that are most appropriate for a particular application. Grid technology has been used to provide limited support to various Earth and space science applications. Looking to the future, this emerging grid service technology can provide a cyberinfrastructure for both the Earth and space science communities. Groups within these communities could transform applications that have community-wide applicability into persistent grid services that are made widely available to their respective communities.
In concert with grid-enabled data archives, users could easily create complex workflows that extract desired data from one or more archives and process it through an appropriate set of widely distributed grid services discovered using semantic grid technology. As required, high-end computational resources could be drawn from available grid resource pools. Using grid technology, this confluence of data, services, and computational resources could easily be harnessed to transform data from many different sources into a desired product that is delivered to a user's workstation or to a web portal through which it could be accessed by its intended audience.
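The semantic selection and composition described above can be caricatured as matching service descriptions by the concepts they consume and produce. Everything below (service names, concept labels, the greedy chaining strategy) is a hypothetical sketch, not an actual grid middleware API:

```python
# Hypothetical semantic service composition: each "grid service" advertises
# the concept it consumes and the concept it produces; a processing chain
# is assembled by matching outputs to inputs. All names are invented.

services = [
    {"name": "ExtractGranule", "consumes": "ArchiveQuery",      "produces": "RawGranule"},
    {"name": "Calibrate",      "consumes": "RawGranule",        "produces": "CalibratedGranule"},
    {"name": "MapProject",     "consumes": "CalibratedGranule", "produces": "MapProduct"},
]

def compose(goal, start, catalog):
    """Greedily chain services from a starting concept to a goal concept."""
    chain, current = [], start
    while current != goal:
        nxt = next((s for s in catalog if s["consumes"] == current), None)
        if nxt is None:
            raise ValueError(f"no service consumes {current}")
        chain.append(nxt["name"])
        current = nxt["produces"]
    return chain

print(compose("MapProduct", "ArchiveQuery", services))
# ['ExtractGranule', 'Calibrate', 'MapProject']
```

A real semantic grid would reason over richer ontologies (subsumption between concepts, equivalent services) rather than exact string matches, but the output-to-input matching shown is the core of the composition step.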
NASA Astrophysics Data System (ADS)
Poland, M. P.; Teasdale, R.; Kraft, K.
2010-12-01
Internet-accessible real- and near-real-time Earth science datasets are an important resource for geoscience education, but relatively few comprehensive datasets are available, and background information to aid interpretation is often lacking. In response to this need, the U.S. Geological Survey’s (USGS) Hawaiian Volcano Observatory, in collaboration with the National Aeronautics and Space Administration and the University of Hawai‘i, Mānoa, established the Volcanoes Exploration Project: Pu‘u ‘O‘o (VEPP). The VEPP Web site provides access, in near-real time, to geodetic, seismic, and geologic data from the Pu‘u ‘O‘o eruptive vent on Kilauea Volcano, Hawai‘i. On the VEPP Web site, a time series query tool provides a means of interacting with continuous geophysical data. In addition, results from episodic kinematic GPS campaigns and lava flow field maps are posted as data are collected, and archived Webcam images from Pu‘u ‘O‘o crater are available as a tool for examining visual changes in volcanic activity over time. A variety of background information on volcano surveillance and the history of the 1983-present Pu‘u ‘O‘o-Kupaianaha eruption puts the available monitoring data in context. The primary goal of the VEPP Web site is to take advantage of high-visibility monitoring data that are seldom organized well enough to constitute an established educational resource. In doing so, the VEPP project provides a geoscience education resource that demonstrates the dynamic nature of volcanoes and promotes excitement about the process of scientific discovery through hands-on learning. To support use of the VEPP Web site, a week-long workshop was held at Kilauea Volcano in July 2010, which included 25 participants from the United States and Canada.
The participants represented a diverse cross-section of higher learning, from community colleges to research universities, and included faculty who teach both large introductory non-major classes and seminar-style upper division and graduate-level classes. Overall workshop goals were for participants to learn how to interpret each of the VEPP data types, become proficient in the use of the VEPP Web site, provide feedback on site content, and create teaching modules that integrate the site into college and university geoscience curriculum. By the end of the workshop, over 20 new teaching modules were developed and the VEPP Web site was modified based on participant feedback. Teaching activities are available via the VEPP Workshop section of the Science Education Resource Center (SERC) Web site (http://www.nagt.org/nagt/vepp/index.html).
BioPortal: An Open-Source Community-Based Ontology Repository
NASA Astrophysics Data System (ADS)
Noy, N.; NCBO Team
2011-12-01
Advances in computing power and new computational techniques have changed the way researchers approach science. In many fields, one of the most fruitful approaches has been to use semantically aware software to break down the barriers among disparate domains, systems, data sources, and technologies. Such software facilitates data aggregation, improves search, and ultimately allows the detection of new associations that were previously not detectable. Achieving these analyses requires software systems that take advantage of the semantics and that can intelligently negotiate domains and knowledge sources, identifying commonality across systems that use different and conflicting vocabularies, while understanding apparent differences that may be concealed by the use of superficially similar terms. An ontology, a semantically rich vocabulary for a domain of interest, is the cornerstone of software for bridging systems, domains, and resources. However, as ontologies become the foundation of all semantic technologies in e-science, we must develop an infrastructure for sharing ontologies, finding and evaluating them, integrating and mapping among them, and using ontologies in applications that help scientists process their data. BioPortal [1] is an open-source online community-based ontology repository that has been used as a critical component of semantic infrastructure in several domains, including biomedicine and bio-geochemical data. BioPortal uses social approaches in the Web 2.0 style to bring structure and order to the collection of biomedical ontologies. It enables users to provide and discuss a wide array of knowledge components, from submitting the ontologies themselves, to commenting on and discussing classes in the ontologies, to reviewing ontologies in the context of their own ontology-based projects, to creating mappings between overlapping ontologies and discussing and critiquing the mappings.
Critically, it provides web-service access to all its content, enabling its integration in semantically enriched applications. [1] Noy, N.F., Shah, N.H., et al., BioPortal: ontologies and integrated data resources at the click of a mouse. Nucleic Acids Res, 2009. 37(Web Server issue): p. W170-3.
Web party effect: a cocktail party effect in the web environment
Gerbino, Walter
2015-01-01
In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others. PMID:25802803
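As a rough illustration of the kind of measure the abstract describes, the number of links embedded within web navigation elements weighted by the number and type of embedding elements, here is a toy calculation. The element types and weights are invented for illustration; the paper's actual weighting scheme may differ:

```python
# Back-of-envelope sketch of a navigation-based complexity measure:
# each navigation element contributes its link count, weighted by an
# (invented) per-element-type weight.

def nav_complexity(nav_elements, weights):
    """Sum of link counts, each weighted by its container's element type."""
    return sum(weights[e["type"]] * e["links"] for e in nav_elements)

weights = {"menu": 1.0, "sidebar": 1.5, "footer": 0.5}  # invented weights
home_page = [
    {"type": "menu",    "links": 8},
    {"type": "sidebar", "links": 4},
    {"type": "footer",  "links": 10},
]
print(nav_complexity(home_page, weights))  # 8*1.0 + 4*1.5 + 10*0.5 = 19.0
```

The point of such a measure, per the abstract, is that only links inside navigation elements (not every link, word, or image on the page) predict information seeking time.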
Web party effect: a cocktail party effect in the web environment.
Rigutti, Sara; Fantoni, Carlo; Gerbino, Walter
2015-01-01
Automatically exposing OpenLifeData via SADI semantic Web Services.
González, Alejandro Rodríguez; Callahan, Alison; Cruz-Toledo, José; Garcia, Adrian; Egaña Aranguren, Mikel; Dumontier, Michel; Wilkinson, Mark D
2014-01-01
Two distinct trends are emerging with respect to how data is shared, collected, and analyzed within the bioinformatics community. First, Linked Data, exposed as SPARQL endpoints, promises to make data easier to collect and integrate by moving towards the harmonization of data syntax, descriptive vocabularies, and identifiers, as well as providing a standardized mechanism for data access. Second, Web Services, often linked together into workflows, normalize data access and create transparent, reproducible scientific methodologies that can, in principle, be re-used and customized to suit new scientific questions. Constructing queries that traverse semantically-rich Linked Data requires substantial expertise, yet traditional RESTful or SOAP Web Services cannot adequately describe the content of a SPARQL endpoint. We propose that content-driven Semantic Web Services can enable facile discovery of Linked Data, independent of their location. We use a well-curated Linked Dataset - OpenLifeData - and utilize its descriptive metadata to automatically configure a series of more than 22,000 Semantic Web Services that expose all of its content via the SADI set of design principles. The OpenLifeData SADI services are discoverable via queries to the SHARE registry and easy to integrate into new or existing bioinformatics workflows and analytical pipelines. We demonstrate the utility of this system through comparison of Web Service-mediated data access with traditional SPARQL, and note that this approach not only simplifies data retrieval, but simultaneously provides protection against resource-intensive queries. We show, through a variety of different clients and examples of varying complexity, that data from the myriad OpenLifeData can be recovered without any need for prior-knowledge of the content or structure of the SPARQL endpoints. We also demonstrate that, via clients such as SHARE, the complexity of federated SPARQL queries is dramatically reduced.
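The idea of mechanically deriving many small services from dataset metadata can be sketched as follows. The descriptor shape, dataset name, and predicates are invented for illustration and do not reflect the real SADI or OpenLifeData interfaces:

```python
# Invented sketch: generate one service descriptor per predicate found in a
# dataset's metadata, loosely echoing how a Linked Dataset's content could
# be exposed as many small input->output services.

def services_from_metadata(dataset, predicates):
    """One hypothetical SADI-style descriptor per (dataset, predicate) pair."""
    return [
        {
            "name": f"{dataset}_{pred.split(':')[1]}",
            "input": f"{pred} subject",
            "output": f"{pred} object",
        }
        for pred in predicates
    ]

descriptors = services_from_metadata("drugbank", ["ex:target", "ex:indication"])
print([d["name"] for d in descriptors])  # ['drugbank_target', 'drugbank_indication']
```

Scaled up over a dataset's full set of predicates, this mechanical generation is how a corpus the size of OpenLifeData can yield tens of thousands of services without hand-writing each one.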
Stellefson, Michael L; Shuster, Jonathan J; Chaney, Beth H; Paige, Samantha R; Alber, Julia M; Chaney, J Don; Sriram, P S
2017-09-05
Many people living with Chronic Obstructive Pulmonary Disease (COPD) have low general health literacy; however, there is little information available on these patients' eHealth literacy, or their ability to seek, find, understand, and appraise online health information and apply this knowledge to address or solve disease-related health concerns. A nationally representative sample of patients registered in the COPD Foundation's National Research Registry (N = 1,270) was invited to complete a web-based survey to assess socio-demographic (age, gender, marital status, education), health status (generic and lung-specific health-related quality of life), and socio-cognitive (social support, self-efficacy, COPD knowledge) predictors of eHealth literacy, measured using the 8-item eHealth literacy scale (eHEALS). Over 50% of the respondents (n = 176) were female (n = 89), with a mean age of 66.19 (SD = 9.47). Overall, participants reported moderate levels of eHealth literacy, with more than 70% feeling confident in their ability to find helpful health resources on the Internet. However, respondents were much less confident in their ability to distinguish between high- and low-quality sources of web-based health information. Very severe versus less severe COPD (β = 4.15), lower lung-specific health-related quality of life (β = -0.19), and greater COPD knowledge (β = 0.62) were significantly associated with higher eHealth literacy. Higher COPD knowledge was also significantly associated with greater knowledge (ρ = 0.24, p = .001) and use (ρ = 0.24, p = .001) of web-based health resources. Findings emphasize the importance of integrating skill-building activities into comprehensive patient education programs that enable patients with severe cases of COPD to identify high-quality sources of web-based health information. 
Additional research is needed to understand how new social technologies can be used to help medically underserved COPD patients benefit from web-based self-management support resources.
Chapter 8: Web-based Tools - CARNIVORE
NASA Astrophysics Data System (ADS)
Graham, M. J.
Registries are an integral part of the VO infrastructure, yet the greatest exposure that most users will ever need to have to one is discovering resources through a registry portal. Some users, however, will have resources of their own that they need to register and will go to an existing registry to do so, but a small number will want to set up their own registry. They may have too many resources to register with an existing registry; they may want more control over their resource metadata than an existing registry will afford; or they may want to set up a specialized registry, e.g. a subject-specific one. CARNIVORE is designed to offer those who want their own registry the functionality they require in an off-the-shelf implementation. This chapter describes how to set up your own registry using CARNIVORE.
Indexing method of digital audiovisual medical resources with semantic Web integration.
Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre
2005-03-01
Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform that enables encoding and gives access to audiovisual resources in streaming mode.
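A minimal sketch of the kind of Dublin Core record such an indexing system might produce for a teaching video, using Python's standard library XML support. The metadata values and the MeSH subject are invented examples, and the real system's record layout may differ:

```python
# Minimal Dublin Core record for a fictitious medical teaching video,
# serialized with the standard dc: element names.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for name, value in [
    ("title",   "Echocardiography: basic views"),   # invented example
    ("creator", "Teaching Hospital AV Dept."),      # invented example
    ("subject", "Echocardiography"),                # MeSH descriptor (example)
    ("type",    "MovingImage"),                     # DCMI type vocabulary term
    ("format",  "video/mp4"),
]:
    ET.SubElement(record, f"{{{DC}}}{name}").text = value

xml = ET.tostring(record, encoding="unicode")
print(xml)
```

Keying the dc:subject values to MeSH descriptors, as the paper proposes, is what enables conceptual navigation across the indexed videos.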
ERIC Educational Resources Information Center
Web Feet K-8, 2001
2001-01-01
This annotated subject guide to Web sites and additional resources focuses on biomes. Specifies age levels for resources that include Web sites, CD-ROMs and software, videos, books, audios, and magazines; includes professional resources; and presents a relevant class activity. (LRW)
Why Can't I Find Newton's Third Law? Case Studies of Students' Use of the Web as a Science Resource.
ERIC Educational Resources Information Center
MaKinster, James G.; Beghetto, Ronald A.; Plucker, Jonathan A.
2002-01-01
Examines the searching patterns of students using the Web as a science information resource. Attempts to provide detailed accounts of how students use the Web as a science resource and to illuminate how different levels of domain knowledge, expertise, and situational interest impact students' ability to find useful and relevant information on the…
ERIC Educational Resources Information Center
Fry, Amy; Rich, Linda
2011-01-01
In early 2010, library staff at Bowling Green State University (BGSU) in Ohio designed and conducted a usability study of key parts of the library web site, focusing on the web pages generated by the library's electronic resources management system (ERM) that list and describe the library's databases. The goal was to discover how users find and…
EO Domain Specific Knowledge Enabled Services (KES-B)
NASA Astrophysics Data System (ADS)
Varas, J.; Busto, J.; Torguet, R.
2004-09-01
This paper recovers and describes a number of major statements with respect to the vision, mission, and technological approaches of the Technological Research Project (TRP) "EO Domain Specific Knowledge Enabled Services" (project acronym KES-B), which is currently under development at the European Space Research Institute (ESRIN) under contract "16397/02/I-SB". Resulting from the on-going R&D activities, the KES-B project aims to demonstrate with a prototype system the feasibility of applying innovative knowledge-based technologies to provide services for easy, scheduled, and controlled exploitation of EO resources (e.g., data, algorithms, procedures, storage, processors, ...), to automate the generation of products, and to support users in easily identifying and accessing the required information or products by using their own vocabulary, domain knowledge, and preferences. The ultimate goals of KES-B are summarized in the provision of the two main types of KES services: first, the Search service (also referred to as Product Exploitation or Information Retrieval); and second, the Production service (also referred to as Information Extraction), with the strategic advantage that they are enabled by knowledge consolidated (formalized) within the system. The KES-B system technical solution approach is driven by a strong commitment to the adoption of industry (XML-based) language standards, aiming to have an interoperable, scalable, and flexible operational prototype. In that sense, the Search KES services build on the adoption of consolidated and/or emergent W3C semantic-web standards. Notably, the languages/models Dublin Core (DC), Uniform Resource Identifier (URI), Resource Description Framework (RDF), and Web Ontology Language (OWL), and COTS like Protege [1] and JENA [2], are being integrated into the system as building blocks for the construction of the KES-based Search services.
On the other hand, the Production KES services build on top of workflow management standards and tools. On this side, the Business Process Execution Language (BPEL), the Web Services Definition Language (WSDL), and the Collaxa [3] COTS tool for workflow management are being integrated for the construction of the KES-B Production services. The KES-B platform (web portal and web server) architecture is built on the basis of the J2EE reference architecture. These languages are the means for codifying the different types of knowledge that are to be formalized in the system, constituting the ontological architecture of the system. This in fact enables interoperability with other KES-based systems that also commit to those standards. The motivation behind this vision points towards the construction of a Semantic Web-based GRID supply-chain infrastructure for EO services, in line with the INSPIRE initiative's suggestions.
An Online Course for Instruction in the Responsible Conduct of Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael Kalichman
2004-10-12
Responsible Conduct of Research (RCR) is the process by which regulations, guidelines, standards and ethics are reconciled to promote integrity in research. The development of this online resource, with contributions from the Department of Health and Human Services (DHHS), will allow the DOE system to offer state-of-the-art education in RCR to its sites. The intent of the project is to establish basic RCR content websites, publicize for public use and review, revise as recommended or as ethics change, and to continue supplementing with new material. The resulting resources will be posted on the Web (http://rcrec.org/r)
BP-Broker use-cases in the UncertWeb framework
NASA Astrophysics Data System (ADS)
Roncella, Roberto; Bigagli, Lorenzo; Schulz, Michael; Stasch, Christoph; Proß, Benjamin; Jones, Richard; Santoro, Mattia
2013-04-01
The UncertWeb framework is a distributed, Web-based Information and Communication Technology (ICT) system to support scientific data modeling in the presence of uncertainty. We designed and prototyped a core component of the UncertWeb framework: the Business Process Broker. The BP-Broker implements several functionalities, such as discovery of available processes/BPs, preprocessing of a BP into its executable form (EBP), publication of EBPs, and their execution through a workflow engine. According to the Composition-as-a-Service (CaaS) approach, the BP-Broker supports discovery and chaining of modeling resources (and processing resources in general), providing the necessary interoperability services for creating, validating, editing, storing, publishing, and executing scientific workflows. The UncertWeb project targeted several scenarios, which were used to evaluate and test the BP-Broker. The scenarios cover the following environmental application domains: biodiversity and habitat change, land use and policy modeling, local air quality forecasting, and individual activity in the environment. This work reports on the study of a number of use-cases, by means of the BP-Broker, namely:
- eHabitat use-case: implements a Monte Carlo simulation performed on a deterministic ecological model; an extended use-case supports inter-comparison of model outputs;
- FERA use-case: is composed of a set of models for predicting land-use and crop yield response to climatic and economic change;
- NILU use-case: is composed of a Probabilistic Air Quality Forecasting model for predicting concentrations of air pollutants;
- Albatross use-case: includes two model services for simulating activity-travel patterns of individuals in time and space;
- Overlay use-case: integrates the NILU scenario with the Albatross scenario to calculate the exposure to air pollutants of individuals.
Our aim was to prove the feasibility of describing composite modeling processes with a high-level, abstract notation (i.e. BPMN 2.0), delegating the resolution of technical issues (e.g. I/O matching) as much as possible to an external service. The experimental results indicate that this approach facilitates the integration of environmental model workflows into the standard geospatial Web Services framework (e.g. the GEOSS Common Infrastructure), mitigating its inherent complexity. The research leading to these results received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
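The chaining idea can be sketched in miniature: each step declares its inputs, and a broker-like runner wires available outputs to declared inputs by name before execution, the kind of I/O matching the BP-Broker automates. This is a hypothetical illustration only (model names and the wiring convention are invented, not the BP-Broker API):

```python
def match_io(outputs, inputs):
    """Return the output->input name pairs that can be wired directly."""
    return {name: name for name in inputs if name in outputs}

def run_chain(steps, initial_data):
    """Run a linear chain of (declared_inputs, func) steps on a data dict."""
    data = dict(initial_data)
    for declared_inputs, func in steps:
        wiring = match_io(data.keys(), declared_inputs)
        if set(wiring) != set(declared_inputs):
            raise ValueError(f"unresolved inputs: {set(declared_inputs) - set(wiring)}")
        # call the step with only the inputs it declared, merge its outputs back
        data.update(func(**{k: data[k] for k in wiring}))
    return data

# Hypothetical model services: an air-quality model feeding an exposure model,
# loosely echoing the NILU + Albatross "Overlay" use-case.
def air_quality(emissions):
    return {"no2_field": emissions * 0.4}

def exposure(no2_field, activity_pattern):
    return {"exposure": no2_field * activity_pattern}

result = run_chain(
    [(["emissions"], air_quality), (["no2_field", "activity_pattern"], exposure)],
    {"emissions": 10.0, "activity_pattern": 0.5},
)
```

In the real framework the same wiring decision is made from service metadata rather than Python dictionaries, which is what lets the workflow stay abstract in BPMN 2.0.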
Integration of Sustainable Practices into Standard Army MILCON Designs
2011-09-01
IsoWeb: A Bayesian Isotope Mixing Model for Diet Analysis of the Whole Food Web
Kadoya, Taku; Osada, Yutaka; Takimoto, Gaku
2012-01-01
Quantitative description of food webs provides fundamental information for the understanding of population, community, and ecosystem dynamics. Recently, stable isotope mixing models have been widely used to quantify the dietary proportions of different food resources to a focal consumer. Here we propose a novel mixing model (IsoWeb) that estimates the diet proportions of all consumers in a food web based on stable isotope information. IsoWeb requires a topological description of a food web and the stable isotope signatures of all consumers and resources in the web. A merit of IsoWeb is that it takes into account variation in trophic enrichment factors among different consumer-resource links. Sensitivity analysis using realistic hypothetical food webs suggests that IsoWeb is applicable to a wide variety of food webs differing in the number of species, connectance, sample size, and data variability. Sensitivity analysis based on real topological webs showed that IsoWeb can allow for a certain level of topological uncertainty in target food webs, including erroneously assuming false links, omission of existing links and species, and trophic aggregation into trophospecies. Moreover, using an illustrative application to a real food web, we demonstrated that IsoWeb can compare the plausibility of different candidate topologies for a focal web. These results suggest that IsoWeb provides a powerful tool to analyze food-web structure from stable isotope data. We provide R and BUGS code to aid efficient application of IsoWeb. PMID:22848427
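IsoWeb itself fits diet proportions for a whole web by Bayesian estimation (the authors supply R and BUGS code), but the core mixing idea can be illustrated with the classic deterministic one-isotope, two-source equation, including a trophic enrichment factor (TEF). The function and the numbers below are a hypothetical sketch, not part of IsoWeb:

```python
def two_source_proportion(d_consumer, d_source_a, d_source_b, tef_a, tef_b):
    """Diet proportion f of source A for one consumer and one isotope.

    Solves d_consumer = f*(d_source_a + tef_a) + (1 - f)*(d_source_b + tef_b),
    i.e. the consumer signature is a TEF-corrected mixture of the two sources.
    """
    a = d_source_a + tef_a
    b = d_source_b + tef_b
    if a == b:
        raise ValueError("sources are isotopically indistinguishable")
    f = (d_consumer - b) / (a - b)
    return min(1.0, max(0.0, f))  # clamp to a valid proportion

# Hypothetical delta-13C values (per mil): consumer -22.0, sources -24.0 and
# -18.0, with a TEF of +1.0 per mil on each consumer-resource link.
f_a = two_source_proportion(-22.0, -24.0, -18.0, 1.0, 1.0)
```

IsoWeb generalizes this to many sources and consumers at once, with link-specific TEFs treated as uncertain quantities rather than fixed constants.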
NASA Astrophysics Data System (ADS)
Bruckner, M. Z.; Manduca, C. A.; Egger, A. E.; Macdonald, H.
2014-12-01
The InTeGrate Student Portal is a suite of web pages that utilize InTeGrate resources to support student success by providing undergraduates with the tools and information necessary to be proactive in their career choices and development. Drawn from various InTeGrate workshops and programming, the Portal organizes these resources to illuminate a variety of career opportunities and pathways to both traditional and non-traditional jobs that support a sustainable future. Informed by a variety of sources, including employers, practitioners, faculty, students, reports, and articles, the pages explore five facets: (1) sustainability across the disciplines, (2) workforce preparation, (3) professional communication, (4) teaching and teaching careers, and (5) the future of green research and technology. The first three facets explore how sustainability is integrated across disciplines and how sustainability and 'green' jobs are available in a wide range of traditional and non-traditional workplaces within and beyond science. They provide students with guidance in preparing for the sustainability workforce, including where to learn about jobs and how to pursue them, advice for strengthening their job applications, and how to build a set of skills that employers seek. This advice encompasses classroom skills as well as those acquired and strengthened as part of extracurricular or workplace experiences. The fourth facet, aimed at teaching assistants with little or no experience as well as at students who are interested in pursuing teaching as a career, provides information and resources about teaching. The fifth facet explores future directions of technology and the need for innovations in the workforce of the future to address sustainability issues. We seek your input and invite you to explore the Portal at: serc.carleton.edu/integrate/students/
The Librarian's Internet Survival Guide: Strategies for the High-Tech Reference Desk.
ERIC Educational Resources Information Center
McDermott, Irene E.; Quint, Barbara, Ed.
This guide discusses the use of the World Wide Web for library reference service. Part 1, "Ready Reference on the Web: Resources for Patrons," contains chapters on searching and meta-searching the Internet, using the Web to find people, news on the Internet, quality reference resources on the Web, Internet sites for kids, free full-text…
Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices
ERIC Educational Resources Information Center
Schrimpsher, Dan
2011-01-01
As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…
The Impact of Web Based Resource Material on Learning Outcome in Open Distance Higher Education
ERIC Educational Resources Information Center
Masrur, Rehana
2010-01-01
One of the most powerful educational options in open and distance education is web-based learning. A blended (hybrid) course combines traditional face-to-face and web-based learning approaches in an educational environment that is nonspecific as to time and place. The study reported here investigated the impact of web based resource material…
ERIC Educational Resources Information Center
Web Feet K-8, 2001
2001-01-01
This annotated subject guide to Web sites and additional resources focuses on mythology. Specifies age levels for resources that include Web sites, CD-ROMs and software, videos, books, audios, and magazines; offers professional resources; and presents a relevant class activity. (LRW)
ERIC Educational Resources Information Center
Web Feet K-8, 2001
2001-01-01
This annotated subject guide to Web sites and additional resources focuses on space and astronomy. Specifies age levels for resources that include Web sites, CD-ROMs and software, videos, books, audios, and magazines; offers professional resources; and presents a relevant class activity. (LRW)
Course Management Systems: Traveling Beyond Powerpoint Slides Online
NASA Astrophysics Data System (ADS)
Gauthier, A. J.; Impey, C. D.
2004-12-01
Course management systems (CMS) like WebCT, Blackboard, Astronomica, etc., have reached and surpassed their tipping point in higher education. They are no longer a technology-trendy item to use in a course, but rather an expected supplement to undergraduate courses. There is a well-known disconnect between the student population of "digital natives" (1) and higher education instructors, the "digital immigrants" (1). What expectations and technology skills do the new generations of undergraduates have? How can instructors easily meet their students' needs? What needs do instructors have, and what resources are available to meet those needs? In the past, instructors would create their own HTML web pages to post class materials like PowerPoint slides, homework, and announcements. How does an instructor-created web resource differ from a secure university-run CMS? How can you make your university or college's CMS into a productive learning tool and not just a repository for class materials and grades? How can the astronomy instructor benefit from integrating a CMS into their course? What are common student attitudes regarding CMS usage in a course? How are instructors using CMSs in innovative ways? Where on your campus can you get free help designing and implementing a CMS resource for your students? This presentation aims to answer these questions. Extensive literature reviews, formal surveys, case study reports, and educational research from the instructional technology community inform our astronomy teaching community of the answers. Highlights from innovative systems and uses of CMSs in undergraduate Astro 101 classrooms will be presented. Resources and further references will be made available as handouts. (1) M. Prensky, "Digital Natives, Digital Immigrants," On The Horizon, Vol. 9, 2001.
Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela
2011-06-01
The number of health-related websites is increasing day by day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created to assist consumers in retrieving quality medical information. Consumers use search engines as the main tool to get health information; the major problem, however, is that the meaning of web content is not machine-readable, in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been several attempts to use Semantic Web tools to label health-related web resources to help Internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.
Creating an Integrated Community-Wide Effort to Enhance Diversity in the Geosciences
NASA Astrophysics Data System (ADS)
Manduca, C. A.; Weingroff, M.
2001-05-01
Supporting the development and sustenance of a diverse geoscience workforce and improving Earth system education for the full diversity of students are important goals for our community. There are numerous established programs and many new efforts beginning. However, these efforts can become more powerful if dissemination of opportunities, effective practices, and web-based resources enable synergies to develop throughout our community. The Digital Library for Earth System Education (DLESE; www.dlese.org) has developed a working group and a website to support these goals. The DLESE Diversity Working Group provides an open, virtual community for those interested in enhancing diversity in the geosciences. The working group has focused its initial effort on 1) creating a geoscience community engaged in supporting increased diversity that builds on and is integrated with work taking place in other venues; 2) developing a web resource designed to engage and support members of underrepresented groups in learning about the Earth; and 3) assisting in enhancing DLESE collections and services to better support learning experiences of students from underrepresented groups. You are invited to join the working group and participate in these efforts. The DLESE diversity website provides a mechanism for sharing information and resources. Serving as a community database, the website provides a structure in which community members can post announcements of opportunities, information on programs, and links to resources and services. Information currently available on the site includes links to professional society activities; mentoring opportunities; grant, fellowship, employment, and internship opportunities for students and educators; information on teaching students from underrepresented groups; and professional development opportunities of high interest to members of underrepresented groups. 
These tools provide a starting point for developing a community-wide effort to enhance diversity in the geosciences that builds on our collective experiences, knowledge, and resources, and on the work taking place in communities around us.
ERIC Educational Resources Information Center
Garfield, Joan; delMas, Robert
2010-01-01
The Assessment Resource Tools for Improving Statistical Thinking (ARTIST) Web site was developed to provide high-quality assessment resources for faculty who teach statistics at the tertiary level, but the resources are also useful to statistics teachers at the secondary level. This article describes some of the numerous ARTIST resources and suggests…
Gray, Kathleen Mary; Clarke, Ken; Alzougool, Basil; Hines, Carolyn; Tidhar, Gil; Frukhtman, Feodor
2014-03-10
The use of Internet protocol television (IPTV) as a channel for consumer health information is a relatively under-explored area of medical Internet research. IPTV may afford new opportunities for health care service providers to provide health information and for consumers, patients, and caretakers to access health information. The technologies of Web 2.0 add a new and even less explored dimension to IPTV's potential. Our research explored an application of Web 2.0 integrated with IPTV for personalized home-based health information in diabetes education, particularly for people with diabetes who are not strong computer and Internet users, and thus may miss out on Web-based resources. We wanted to establish whether this system could enable diabetes educators to deliver personalized health information directly to people with diabetes in their homes; and whether this system could encourage people with diabetes who make little use of Web-based health information to build their health literacy via the interface of a home television screen and remote control. This project was undertaken as design-based research in two stages. Stage 1 comprised a feasibility study into the technical work required to integrate an existing Web 2.0 platform with an existing IPTV system, populated with content and implemented for user trials in a laboratory setting. Stage 2 comprised an evaluation of the system by consumers and providers of diabetes information. The project succeeded in developing a Web 2.0 IPTV system for people with diabetes and low literacies and their diabetes educators. The performance of the system in the laboratory setting gave them the confidence to engage seriously in thinking about the actual and potential features and benefits of a more widely implemented system. In their feedback they pointed out a range of critical usability and usefulness issues related to Web 2.0 affordances and learning fundamentals. 
They also described their experiences with the system in terms that bode well for its educational potential, and they suggested many constructive improvements to the system. The integration of Web 2.0 and IPTV merits further technical development, business modeling, and health services and health outcomes research, as a solution to extend the reach and scale of home-based health care.
Web portal on environmental sciences "ATMOS''
NASA Astrophysics Data System (ADS)
Gordov, E. P.; Lykosov, V. N.; Fazliev, A. Z.
2006-06-01
The web portal ATMOS (http://atmos.iao.ru and http://atmos.scert.ru), developed under an INTAS grant, makes available to the international research community, environmental managers, and the interested public a bilingual information source for the domain of Atmospheric Physics and Chemistry, and the related application domain of air quality assessment and management. It offers access to integrated thematic information, experimental data, analytical tools and models, case studies, and related information and educational resources compiled, structured, and edited by the partners into a coherent and consistent thematic information resource. While offering the usual components of a thematic site, such as link collections, user group registration, a discussion forum, and a news section, the site is distinguished by its scientific information services and tools: on-line models and analytical tools, and data collections and case studies together with tutorial material. The portal is organized as a set of interrelated scientific sites addressing the basic branches of the Atmospheric Sciences and Climate Modeling as well as the applied domains of Air Quality Assessment and Management, Modeling, and Environmental Impact Assessment. Each scientific site is an information-computational system, open for external access and realized by means of Internet technologies. The main basic-science topics are devoted to Atmospheric Chemistry; Atmospheric Spectroscopy and Radiation; Atmospheric Aerosols; and Atmospheric Dynamics and Atmospheric Models, including climate models. The portal ATMOS reflects the ongoing transformation of the Environmental Sciences into exact (quantitative) sciences and is an effective example of the integration of modern Information Technologies with the Environmental Sciences. This makes the portal both an auxiliary instrument supporting interdisciplinary projects on the regional environment and an extensive educational resource in this important domain.
Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun
2017-01-01
Linked Data (LD) aims to achieve interconnected data by representing entities using Uniform Resource Identifiers (URIs) and sharing information using the Resource Description Framework (RDF) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI and then output RDF/eXtensible Markup Language (XML) for computer processing, or display the HTML information in a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. PMID:27733503
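As an illustration of the kind of custom query the SPARQL interface supports, the sketch below builds a label-search request with the Python standard library. The endpoint path and the result-format parameter name are assumptions to be verified against the live Ontobee site; no request is actually sent here:

```python
from urllib.parse import urlencode

# A simple label search across hosted ontologies (query text is illustrative).
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?term ?label WHERE {
  ?term rdfs:label ?label .
  FILTER (CONTAINS(LCASE(STR(?label)), "apoptosis"))
} LIMIT 10
"""

# Assumed endpoint path and parameter names; check the service documentation.
params = urlencode({"query": query, "format": "json"})
url = "http://www.ontobee.org/sparql?" + params
# urllib.request.urlopen(url) would then execute the query over HTTP GET.
```

Sending the query via HTTP GET with a percent-encoded `query` parameter is the standard SPARQL Protocol pattern, which is why a plain URL is all a client needs.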
EST-PAC a web package for EST annotation and protein sequence prediction
Strahm, Yvan; Powell, David; Lefèvre, Christophe
2006-01-01
With the decreasing cost of DNA sequencing technology and the vast diversity of biological resources, researchers increasingly face the basic challenge of annotating larger numbers of expressed sequence tags (ESTs) from a variety of species. This typically consists of a series of repetitive tasks, which should be automated and easy to use. The results of these annotation tasks need to be stored and organized in a consistent way. All these operations should be self-installing, platform independent, easy to customize and amenable to using distributed bioinformatics resources available on the Internet. In order to address these issues, we present EST-PAC, a web-oriented multi-platform software package for EST annotation. EST-PAC provides a solution for the administration of EST and protein sequence annotations accessible through a web interface. Three aspects of EST annotation are automated: 1) searching local or remote biological databases for sequence similarities using BLAST services, 2) predicting protein-coding sequences from EST data, and 3) annotating predicted protein sequences with functional domain predictions. In practice, EST-PAC integrates the BLASTALL suite, EST-Scan2 and HMMER in a relational database system accessible through a simple web interface. EST-PAC also takes advantage of the relational database to allow consistent storage, powerful queries of results, and management of the annotation process. The system allows users to customize annotation strategies and provides an open-source data-management environment for research and education in bioinformatics. PMID:17147782
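The first automated task, similarity search, ultimately reduces to composing a command line against the legacy BLASTALL suite. The sketch below assembles such a command with hypothetical file and database names; the flags shown (`-p`, `-d`, `-i`, `-e`, `-m 8`) are the classic `blastall` options, though EST-PAC's actual invocation may differ:

```python
import subprocess  # would be used to launch the composed command

def blastall_command(program, database, query_file, evalue=1e-05, tabular=True):
    """Compose a legacy blastall invocation; -m 8 requests tabular output."""
    cmd = ["blastall", "-p", program, "-d", database,
           "-i", query_file, "-e", str(evalue)]
    if tabular:
        cmd += ["-m", "8"]
    return cmd

# Hypothetical example: translated search of ESTs against a protein database.
cmd = blastall_command("blastx", "nr", "ests.fasta")
# subprocess.run(cmd, check=True)  # would run the search if blastall is installed
```

Keeping the command a plain list of strings makes it easy for a pipeline to log, store, and replay each annotation step, which fits the consistent-storage goal described above.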
Schweigkofler, U; Reimertz, C; Auhuber, T C; Jung, H G; Gottschalk, R; Hoffmann, R
2011-10-01
The outcome of injured patients depends on infrastructural circumstances as well as on the time until clinical treatment begins. Rapid patient allocation can only be achieved if information about the care capacity status of the medical centers is available. Considering this, an improvement at the interface between prehospital and clinical care seems possible. In 2010 in Frankfurt am Main, the announcement of free capacity (positive proof) was converted to a web-based negative proof of interdisciplinary care capacities. So-called closings are indicated in a web portal, recorded centrally, and registered with the local health authority and the management of the participating hospitals. Analyses of the allocations to hospitals of all professional disciplines from the years 2009 and 2010 showed an optimized use of resources. With the introduction of the clear care-capacity proof system, allocations by official order declined from 261 to 0. The health authorities, as the regulating body, rarely had to intervene (a decline from 400 to 7 cases). Surgical care in Frankfurt was guaranteed at all times by one of the large medical centers. The web-based care-capacity proof system introduced in 2010 meets the demand for optimal online resource use. Integration of this allocation system into the developing trauma networks can optimize the process for quick, high-quality care of severely injured patients, and it opens new approaches to improving the allocation of high numbers of casualties in disaster medicine.
Meyer, Michael J; Geske, Philip; Yu, Haiyuan
2016-05-15
Biological sequence databases are integral to efforts to characterize and understand biological molecules and share biological data. However, when analyzing these data, scientists are often left holding disparate biological currency: molecular identifiers from different databases. For downstream applications that require converting the identifiers themselves, there are many resources available, but analyzing associated loci and variants can be cumbersome if data are not given in a form amenable to particular analyses. Here we present BISQUE, a web server and customizable command-line tool for converting molecular identifiers and their contained loci and variants between different database conventions. BISQUE uses a graph traversal algorithm to generalize the conversion process for residues in the human genome, genes, transcripts and proteins, allowing for conversion across classes of molecules and in all directions through an intuitive web interface and a URL-based web service. BISQUE is freely available via the web using any major web browser (http://bisque.yulab.org/). Source code is available in a public GitHub repository (https://github.com/hyulab/BISQUE). Contact: haiyuan.yu@cornell.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
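The graph-traversal idea is simple to sketch: treat identifier namespaces as nodes, known database mappings as edges, and find a shortest conversion route by breadth-first search. The edge set below is hypothetical and far smaller than BISQUE's real mapping graph:

```python
from collections import deque

# Hypothetical namespace graph: edges are known identifier mappings.
EDGES = {
    "hgnc_symbol": ["ensembl_gene"],
    "ensembl_gene": ["hgnc_symbol", "ensembl_transcript"],
    "ensembl_transcript": ["ensembl_gene", "ensembl_protein"],
    "ensembl_protein": ["ensembl_transcript", "uniprot"],
    "uniprot": ["ensembl_protein"],
}

def conversion_path(source, target):
    """Breadth-first search for the shortest namespace-to-namespace route."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no known chain of mappings

path = conversion_path("hgnc_symbol", "uniprot")
```

Because the route is computed rather than hard-coded, adding one new mapping edge automatically enables every conversion that passes through it, which is what "all directions" means in the abstract.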
Jin, Wenquan; Kim, DoHyeun
2018-05-26
The Internet of Things comprises heterogeneous devices, applications, and platforms using multiple communication technologies to connect to the Internet and provide seamless services ubiquitously. To support the development of Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed; providing a consistent interface to access services across these environments is therefore difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy comprises a server and a client that forward messages between different communication environments using the virtual resources, which include a server for the message sender and a client for the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by clients as Open Connectivity Foundation resources. The virtual resources represent resources that expose services on the Internet through web service providers. Although the services are provided by web service providers on the Internet, the client can access them using the consistent communication protocol of the Open Connectivity Foundation network. To discover the resources needed to access services, the client also uses the consistent discovery interface to discover Open Connectivity Foundation devices and virtual resources.
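A toy sketch of the virtual-resource pattern (class and method names here are hypothetical, not the OCF or any IoTivity API): each virtual resource pairs an OCF-style URI with the upstream web service it stands in for, so a client can discover and retrieve it exactly like a local device resource while the proxy forwards the request:

```python
class VirtualResource:
    def __init__(self, ocf_uri, upstream_url, fetch):
        self.ocf_uri = ocf_uri            # what OCF clients discover
        self.upstream_url = upstream_url  # the bridged Internet web service
        self._fetch = fetch               # injected HTTP getter (testable offline)

    def retrieve(self):
        """Handle an OCF-style RETRIEVE by forwarding to the upstream service."""
        return self._fetch(self.upstream_url)

class Proxy:
    """Registry that exposes virtual resources under one discovery interface."""
    def __init__(self):
        self._resources = {}

    def register(self, resource):
        self._resources[resource.ocf_uri] = resource

    def discover(self):
        return sorted(self._resources)

    def handle(self, ocf_uri):
        return self._resources[ocf_uri].retrieve()

# Hypothetical bridged weather service; the fetch stub stands in for HTTP.
proxy = Proxy()
proxy.register(VirtualResource("/weather/temp", "https://example.org/api/temp",
                               lambda url: {"temperature": 21.5}))
reading = proxy.handle("/weather/temp")
```

The point of the indirection is that the client never sees the upstream URL or protocol; it only speaks the one consistent discovery-and-retrieve interface.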
Risk markers for disappearance of pediatric Web resources
Hernández-Borges, Angel A.; Jiménez-Sosa, Alejandro; Torres-Álvarez de Arcaya, Maria L.; Macías-Cervi, Pablo; Gaspar-Guardado, Maria A.; Ruíz-Rabaza, Ana
2005-01-01
Objectives: The authors sought to find out whether certain Webometric indexes of a sample of pediatric Web resources, and some tests based on them, could be helpful predictors of their disappearance. Methods: The authors performed a retrospective study of a sample of 363 pediatric Websites and pages they had followed for 4 years. Main measurements included: number of resources that disappeared, number of inbound links and their annual increment, average daily visits to the resources in the sample, sample compliance with the quality criteria of 3 international organizations, and online time of the Web resources. Results: On average, 11% of the sample disappeared annually. However, 13% of these were available again at the end of follow-up. Disappearing and surviving Websites did not show differences in the variables studied. However, surviving Web pages had a higher number of inbound links and a higher annual increment in inbound links. Similarly, Web pages that survived showed higher compliance with recognized sets of quality criteria than those that disappeared. A subset of 14 quality criteria whose compliance accounted for 90% of the probability of online permanence was identified. Finally, a progressive increment of inbound links was found to be a marker of good prognosis, showing high specificity and positive predictive value (88% and 94%, respectively). Conclusions: The number of inbound links and annual increment of inbound links could be useful markers of the permanence probability for pediatric Web pages. Strategies that assure the Web editors' awareness of their Web resources' popularity could stimulate them to improve the quality of their Websites. PMID:16059427
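The reported specificity (88%) and positive predictive value (94%) follow the standard confusion-matrix definitions. The counts below are hypothetical, chosen only to illustrate the arithmetic, not the study's actual data:

```python
def specificity_ppv(tp, fp, tn, fn):
    """Specificity = TN / (TN + FP); PPV = TP / (TP + FP)."""
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return specificity, ppv

# Hypothetical counts: "positive" = the marker fired (progressive increase
# in inbound links, predicting survival); "condition" = the page stayed online.
spec, ppv = specificity_ppv(tp=150, fp=10, tn=80, fn=60)
```

With these made-up counts, specificity is 80/90 (about 89%) and PPV is 150/160 (about 94%); note that a marker can have high specificity and PPV while still missing many surviving pages (the 60 false negatives), which sensitivity would capture.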
The Papillomavirus Episteme: a central resource for papillomavirus sequence data and analysis.
Van Doorslaer, Koenraad; Tan, Qina; Xirasagar, Sandhya; Bandaru, Sandya; Gopalan, Vivek; Mohamoud, Yasmin; Huyen, Yentram; McBride, Alison A
2013-01-01
The goal of the Papillomavirus Episteme (PaVE) is to provide an integrated resource for the analysis of papillomavirus (PV) genome sequences and related information. The PaVE is a freely accessible, web-based tool (http://pave.niaid.nih.gov) created around a relational database, which enables storage, analysis and exchange of sequence information. From a design perspective, the PaVE adopts an Open Source software approach and stresses the integration and reuse of existing tools. Reference PV genome sequences have been extracted from publicly available databases and reannotated using a custom-created tool. To date, the PaVE contains 241 annotated PV genomes, 2245 genes and regions, 2004 protein sequences and 47 protein structures, which users can explore, analyze or download. The PaVE provides scientists with the data and tools needed to accelerate scientific progress for the study and treatment of diseases caused by PVs.
Law, MeiYee; Shaw, David R
2018-01-01
Mouse Genome Informatics (MGI, http://www.informatics.jax.org/ ) web resources provide free access to meticulously curated information about the laboratory mouse. MGI's primary goal is to help researchers investigate the genetic foundations of human diseases by translating information from mouse phenotype and disease model studies to human systems. MGI provides comprehensive phenotypes for over 50,000 mutant alleles in mice and provides experimental model descriptions for over 1500 human diseases. Curated data from scientific publications are integrated with those from high-throughput phenotyping and gene expression centers. Data are standardized using defined, hierarchical vocabularies such as the Mammalian Phenotype (MP) Ontology, the Mouse Developmental Anatomy Ontology, and the Gene Ontology (GO). This chapter introduces you to Gene and Allele Detail pages and provides step-by-step instructions for simple searches and those that take advantage of the breadth of MGI data integration.
CHOmine: an integrated data warehouse for CHO systems biology and modeling
Hanscho, Michael; Ruckerbauer, David E.; Zanghellini, Jürgen; Borth, Nicole
2017-01-01
The last decade has seen a surge in published genome-scale information for Chinese hamster ovary (CHO) cells, which are the main production vehicles for therapeutic proteins. While a single access point is available at www.CHOgenome.org, the primary data are distributed over several databases at different institutions. Currently, research is frequently hampered by a plethora of gene names and IDs that vary between published draft genomes and databases, making systems biology analyses cumbersome and elaborate. Here we present CHOmine, an integrative data warehouse connecting data from various databases and linking to others. Furthermore, we introduce CHOmodel, a web-based resource that provides access to recently published CHO cell line-specific metabolic reconstructions. Both resources allow users to query CHO-relevant data and find interconnections between different types of data, and thus provide a simple, standardized entry point to the world of CHO systems biology. Database URL: http://www.chogenome.org PMID:28605771
The use of interactive graphical maps for browsing medical/health Internet information resources
Boulos, Maged N Kamel
2003-01-01
As online information portals accumulate metadata descriptions of Web resources, it becomes necessary to develop effective ways of visualising and navigating the resultant huge metadata repositories, as well as the different semantic relationships and attributes of the described Web resources. Graphical maps provide a good method to visualise, understand and navigate a world that is too large and complex to be seen directly, such as the Web. Several examples of maps designed as navigational aids for Web resources are presented in this review, with an emphasis on maps of medical and health-related resources. The latter include HealthCyberMap maps, which can be classified as conceptual information space maps, and the very abstract and geometric Visual Net maps of PubMed. Information resources can also be organised and navigated based on their geographic attributes. Some of the maps presented in this review use a Kohonen Self-Organising Map algorithm, and only HealthCyberMap uses a Geographic Information System to classify Web resource data and render the maps. Maps based on familiar metaphors taken from users' everyday life are much easier to understand. Associative and pictorial map icons that enable instant recognition and comprehension are preferred to geometric ones and are key to successful maps for browsing medical/health Internet information resources. PMID:12556244
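Several of the reviewed maps are built with a Kohonen Self-Organising Map. The sketch below shows the algorithm's two elementary operations, best-matching-unit lookup and neighbourhood update, on a toy one-dimensional map (the numbers are illustrative and unrelated to any cited system):

```python
def bmu_index(weights, x):
    """Best-matching unit: the node whose weight vector is nearest to x."""
    dists = [sum((w - xi) ** 2 for w, xi in zip(node, x)) for node in weights]
    return dists.index(min(dists))

def som_step(weights, x, lr=0.5, radius=1):
    """One training step: pull the BMU and its map neighbours toward x."""
    b = bmu_index(weights, x)
    for i, node in enumerate(weights):
        if abs(i - b) <= radius:  # simple rectangular neighbourhood
            weights[i] = [w + lr * (xi - w) for w, xi in zip(node, x)]
    return b

# Toy 1-D map of three nodes with 2-D feature vectors (e.g. two metadata scores).
weights = [[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]]
winner = som_step(weights, [1.0, 1.2])
```

Repeating this step over many resource feature vectors, with a shrinking learning rate and radius, is what arranges similar Web resources into neighbouring cells of the final map.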
Preferences of women for web-based nutritional information in pregnancy.
Kennedy, R A K; Mullaney, L; Reynolds, C M E; Cawley, S; McCartney, D M A; Turner, M J
2017-02-01
During pregnancy, women are increasingly turning to web-based resources for information. This study examined the use of web-based nutritional information by women during pregnancy and explored their preferences. Cross-sectional observational study. Women were enrolled at their convenience from a large maternity hospital. Clinical and sociodemographic details were collected and women's use of web-based resources was assessed using a detailed questionnaire. Of the 101 women, 41.6% were nulliparous and the mean age was 33.1 years (19-47 years). All women had internet access and only 3% did not own a smartphone. Women derived pregnancy-related nutritional information from a range of online resources, most commonly: What to Expect When You're Expecting (15.1%), Babycenter (12.9%), and Eumom (9.7%). However, 24.7% reported using Google searches. There was minimal use of publicly funded or academically supported resources. The features women wanted in a web-based application were recipes (88%), exercise advice (71%), personalized dietary feedback (37%), social features (35%), videos (24%) and cooking demonstrations (23%). This survey highlights the risk that pregnant women may get nutritional information from online resources which are not evidence-based. It also identifies features that women want from a web-based nutritional resource. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Do You Ignore Information Security in Your Journal Website?
Dadkhah, Mehdi; Borchardt, Glenn; Lagzian, Mohammad
2017-08-01
Nowadays, web-based applications extend to all businesses due to their advantages and ease of use. The most important issue in web-based applications is security. Because of these advantages, most academic journals now use such applications, with papers being submitted and published through their websites. As these websites are resources for knowledge, information security is essential for maintaining their integrity. In this opinion piece, we point out vulnerabilities in certain websites and introduce potential future threats. We intend to show how some journals are vulnerable and what can happen if a journal is compromised by attackers. This opinion piece is not a technical manual on information security; it is a short inspection we conducted to improve the security of academic journals.
Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize
2010-01-01
Background Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. Results In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. Conclusions CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. 
The web-based interface gives researchers different query options for mining the database across different types of experiments. The database is publicly available at http://agbase.msstate.edu. PMID:20946609
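The cross-dataset candidate-gene query that CFRAS-DB supports can be sketched with a tiny relational example. An in-memory SQLite database stands in for the MySQL backend, and the table layout, gene names and values are hypothetical, not the actual CFRAS-DB schema.

```python
import sqlite3

# Sketch of a cross-dataset query: find genes that are both
# differentially expressed (microarray) and located in a mapped QTL
# region. Schema and data are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE microarray (gene TEXT, fold_change REAL);
CREATE TABLE qtl        (gene TEXT, qtl_region TEXT);
INSERT INTO microarray VALUES ('PR10', 3.2), ('GLX1', 1.1);
INSERT INTO qtl        VALUES ('PR10', 'bin 4.08');
""")

# Candidate genes: expression evidence AND QTL evidence combined.
rows = con.execute("""
    SELECT m.gene, m.fold_change, q.qtl_region
    FROM microarray m JOIN qtl q ON m.gene = q.gene
    WHERE m.fold_change > 2.0
""").fetchall()
print(rows)  # [('PR10', 3.2, 'bin 4.08')]
```

The value of the integrated database is exactly this kind of join: each line of evidence alone is weak, but their intersection shortlists genes worth follow-up.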
Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize.
Kelley, Rowena Y; Gresham, Cathy; Harper, Jonathan; Bridges, Susan M; Warburton, Marilyn L; Hawkins, Leigh K; Pechanova, Olga; Peethambaran, Bela; Pechan, Tibor; Luthe, Dawn S; Mylroie, J E; Ankala, Arunkanth; Ozkan, Seval; Henry, W B; Williams, W P
2010-10-07
Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. 
The web-based interface gives researchers different query options for mining the database across different types of experiments. The database is publicly available at http://agbase.msstate.edu.
ERIC Educational Resources Information Center
Perez, Ernest
2000-01-01
Illustrates the possibilities of freely or inexpensively connecting a library to the Internet. Discusses the advertising-supported approach; local resources for free or budget Internet resources; public access catalogs; examples of free Web materials and of libraries that have achieved a low-budget Web presence; building an effective planning…
WikiHyperGlossary (WHG): an information literacy technology for chemistry documents.
Bauer, Michael A; Berleant, Daniel; Cornell, Andrew P; Belford, Robert E
2015-01-01
The WikiHyperGlossary is an information literacy technology that was created to enhance reading comprehension of documents by connecting them to socially generated multimedia definitions as well as semantically relevant data. The WikiHyperGlossary enhances reading comprehension by using the lexicon of a discipline to generate dynamic links in a document to external resources that can provide implicit information the document did not explicitly provide. Currently, the most common way to acquire additional information when reading a document is to access a search engine and browse the web. This may lead to skimming of multiple documents, with the novice never actually returning to the original document of interest. The WikiHyperGlossary automatically brings information to the user within the document they are currently reading, enhancing the potential for deeper document understanding. The WikiHyperGlossary allows users to submit a web URL or text to be processed against a chosen lexicon, returning the document with tagged terms. The selection of a tagged term results in the appearance of the WikiHyperGlossary Portlet containing a definition and, depending on the type of word, tabs to additional information and resources. Current types of content include multimedia-enhanced definitions, ChemSpider query results, 3D molecular structures, and 2D editable structures connected to ChemSpider queries. Existing glossaries can be bulk uploaded, locked for editing, and associated with multiple socially generated definitions. The WikiHyperGlossary leverages both social and semantic web technologies to bring relevant information to a document. This can not only aid reading comprehension, but also increase users' ability to obtain additional information within the document.
We have demonstrated a molecular-editor-enabled knowledge framework that can support a semantic web inductive reasoning process, and the integration of the WikiHyperGlossary into other software technologies, such as the Jikitou Biomedical Question and Answer system. Although this work was developed in the chemical sciences and took advantage of open science resources and initiatives, the technology is extensible to other knowledge domains. Through the DeepLit (Deeper Literacy: Connecting Documents to Data and Discourse) startup, we seek to extend WikiHyperGlossary technologies to other knowledge domains and integrate them into other knowledge acquisition workflows.
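The core WHG operation, processing a document against a lexicon and returning it with tagged terms, can be sketched in a few lines. The lexicon, URL scheme and markup below are illustrative stand-ins, not the actual WikiHyperGlossary implementation or API.

```python
import re

# Sketch of lexicon-driven term tagging: wrap each term found in a
# discipline lexicon with a link to a (hypothetical) glossary entry.
LEXICON = {"benzene", "titration"}

def tag_terms(text, base="https://example.org/glossary/"):
    """Return text with lexicon terms wrapped in glossary links."""
    def repl(match):
        word = match.group(0)
        if word.lower() in LEXICON:
            return '<a href="%s%s">%s</a>' % (base, word.lower(), word)
        return word
    return re.sub(r"[A-Za-z]+", repl, text)

html = tag_terms("Benzene is used before titration.")
print(html)
```

A production system would tokenize more carefully (multi-word terms, chemistry nomenclature) and render the link target in a portlet rather than a plain anchor, but the lexicon-matching step is the same.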
Semantic Web-based Vocabulary Broker for Open Science
NASA Astrophysics Data System (ADS)
Ritschel, B.; Neher, G.; Iyemori, T.; Murayama, Y.; Kondo, Y.; Koyama, Y.; King, T. A.; Galkin, I. A.; Fung, S. F.; Wharton, S.; Cecconi, B.
2016-12-01
Keyword vocabularies are used to tag and to identify data in science data repositories. Such vocabularies consist of controlled terms and the appropriate concepts, such as GCMD1 keywords or the ESPAS2 keyword ontology. The Semantic Web-based mash-up of domain-specific, cross-domain or even trans-domain vocabularies provides unique capabilities in the network of appropriate data resources. Based on a collaboration between GFZ3, the FHP4, the WDC for Geomagnetism5 and the NICT6, we developed the concept of a vocabulary broker for inter- and trans-disciplinary data detection and integration. Our prototype of the Semantic Web-based vocabulary broker uses OSF7 for the mash-up of geo and space research vocabularies, such as GCMD keywords, the ESPAS keyword ontology and the SPASE8 keyword vocabulary. The vocabulary broker starts the search with "free" keywords or terms of a specific vocabulary scheme. The vocabulary broker almost automatically connects the different science data repositories which are tagged by terms of the aforementioned vocabularies. Therefore the mash-up of the SKOS9-based vocabularies with appropriate metadata from different domains can be realized by addressing LOD10 resources or virtual SPARQL11 endpoints which map relational structures into the RDF format12. In order to demonstrate such a mash-up approach in real life, we installed and use a D2RQ13 server for the integration of IUGONET14 data, which are managed by a relational database. The OSF-based vocabulary broker and the D2RQ platform are installed on virtual Linux machines at Kyoto University. The vocabulary broker meets the standard of a main component of the WDS15 knowledge network.
The Web address of the vocabulary broker is http://wdcosf.kugi.kyoto-u.ac.jp
1 Global Change Master Directory; 2 Near earth space data infrastructure for e-science; 3 German Research Centre for Geosciences; 4 University of Applied Sciences Potsdam; 5 World Data Center for Geomagnetism, Kyoto; 6 National Institute of Information and Communications Technology, Tokyo; 7 Open Semantic Framework; 8 Space Physics Archive Search and Extract; 9 Simple Knowledge Organization System; 10 Linked Open Data; 11 SPARQL Protocol And RDF Query Language; 12 Resource Description Framework; 13 Database to RDF Query; 14 Inter-university Upper atmosphere Global Observation NETwork; 15 World Data System
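A vocabulary broker of the kind described typically translates a free keyword into a SKOS label lookup against a SPARQL endpoint. The sketch below only assembles such a query as a string; the FILTER-based matching strategy is one plausible approach, and no actual broker endpoint or query template is implied.

```python
# Sketch of a SKOS prefLabel lookup as a vocabulary broker might issue
# it. The query shape is illustrative, not the broker's actual template.
def skos_label_query(term, limit=10):
    """Build a SPARQL query matching concepts whose prefLabel contains term."""
    return """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label WHERE {
  ?concept skos:prefLabel ?label .
  FILTER (CONTAINS(LCASE(STR(?label)), "%s"))
} LIMIT %d
""" % (term.lower(), limit)

q = skos_label_query("Magnetosphere")
print(q)
```

The returned concept URIs would then be used to discover data records tagged with those terms across the federated repositories.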
Teaching with technology: free Web resources for teaching and learning.
Wink, Diane M; Smith-Stoner, Marilyn
2011-01-01
In this bimonthly series, the department editor examines how nurse educators can use Internet and Web-based computer technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. In this article, the department editor and her coauthor describe free Web-based resources that can be used to support teaching and learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuli, J.K.; Sonzogni,A.
The National Nuclear Data Center (NNDC) has provided remote access to the nuclear physics databases it maintains and to other resources since 1986. With considerable innovation, access is now mostly through the Web, and the NNDC Web pages have been modernized to provide a consistent, state-of-the-art style. The improved database services and other resources available from the NNDC site at www.nndc.bnl.gov are described.
CartograTree: connecting tree genomes, phenotypes and environment.
Vasquez-Gross, Hans A; Yu, John J; Figueroa, Ben; Gessler, Damian D G; Neale, David B; Wegrzyn, Jill L
2013-05-01
Today, researchers spend a tremendous amount of time gathering, formatting, filtering and visualizing data collected from disparate sources. Under the umbrella of forest tree biology, we seek to provide a platform that leverages modern technologies to connect biotic and abiotic data. Our goal is to provide an integrated web-based workspace that connects environmental, genomic and phenotypic data via geo-referenced coordinates. Here, we connect the web-based genomic query workspace DiversiTree and a novel geographical interface called CartograTree to data housed in the TreeGenes database. To accomplish this goal, we implemented Simple Semantic Web Architecture and Protocol to enable the primary genomics database, TreeGenes, to communicate with semantic web services regardless of platform or back-end technologies. The novelty of CartograTree lies in the interactive workspace that allows for geographical visualization and engagement of high performance computing (HPC) resources. The application provides a unique tool set to facilitate research on the ecology, physiology and evolution of forest tree species. CartograTree can be accessed at: http://dendrome.ucdavis.edu/cartogratree. © 2013 Blackwell Publishing Ltd.
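Linking records "via geo-referenced coordinates", as CartograTree does, boils down to spatial queries such as "which sampled trees lie within a radius of this point". A minimal sketch using the haversine great-circle distance is below; the tree records and query point are invented, not TreeGenes data.

```python
import math

# Sketch of geo-referenced joining: select sampled trees within a
# radius of a query coordinate. Records are hypothetical.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

trees = [("tree_001", 38.54, -121.74), ("tree_002", 45.00, -110.00)]
query_lat, query_lon = 38.5, -121.7
nearby = [t for t in trees
          if haversine_km(query_lat, query_lon, t[1], t[2]) < 50]
print(nearby)  # [('tree_001', 38.54, -121.74)]
```

A production system would delegate this to a spatial index (e.g. in a GIS or database layer) rather than a linear scan, but the join criterion is the same.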
UniPrime2: a web service providing easier Universal Primer design.
Boutros, Robin; Stokes, Nicola; Bekaert, Michaël; Teeling, Emma C
2009-07-01
The UniPrime2 web server is a publicly available online resource which automatically designs large sets of universal primers when given a gene reference ID or Fasta sequence input by a user. UniPrime2 works by automatically retrieving and aligning homologous sequences from GenBank, identifying regions of conservation within the alignment, and generating suitable primers that can be used to amplify variable genomic regions. In essence, UniPrime2 is a suite of publicly available software packages (Blastn, T-Coffee, GramAlign, Primer3), which reduces the laborious process of primer design by integrating these programs into a single software pipeline. Hence, UniPrime2 differs from previous primer design web services in that all steps are automated, linked, saved and phylogenetically delimited, requiring only a single user-defined gene reference ID or input sequence. We provide an overview of the web service and wet-laboratory validation of the primers generated. The system is freely accessible at: http://uniprime.batlab.eu. UniPrime2 is licensed under a Creative Commons Attribution Noncommercial-Share Alike 3.0 Licence.
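The "identifying regions of conservation within the alignment" step of such a pipeline can be sketched simply: scan the alignment columns for gap-free runs where all sequences agree, which are the candidate primer sites. The sequences and the minimum run length below are invented for illustration and do not reflect UniPrime2's actual algorithm or thresholds.

```python
# Sketch of conserved-region detection in a multiple alignment:
# report runs of columns that are identical (and gap-free) across
# all sequences. Alignment and min_len are illustrative.
def conserved_runs(alignment, min_len=4):
    """Return (start, end) column ranges conserved across all sequences."""
    cols = list(zip(*alignment))
    runs, start = [], None
    for i, col in enumerate(cols):
        same = len(set(col)) == 1 and "-" not in col
        if same and start is None:
            start = i
        elif not same and start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(cols) - start >= min_len:
        runs.append((start, len(cols)))
    return runs

aln = ["ACGTACGTTT",
       "ACGTACGATT",
       "ACGTACGCTT"]
print(conserved_runs(aln))  # [(0, 7)]
```

In a full pipeline, each conserved run would then be handed to a primer design tool (Primer3 in UniPrime2's case) to check melting temperature, GC content and self-complementarity.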