Moby and Moby 2: creatures of the deep (web).
Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D
2009-03-01
Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
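The abstract above describes making Deep Web resources queryable via SPARQL. As a minimal, hedged sketch of the client side of such a workflow, the snippet below issues a SPARQL query with the SPARQLWrapper library; the endpoint (DBpedia) and the query are purely illustrative and are not the CardioSHARE/Moby-2 service itself.

```python
# Minimal sketch of issuing a SPARQL query from Python with SPARQLWrapper.
# The endpoint and query are illustrative only (a public DBpedia endpoint),
# not the actual CardioSHARE/Moby-2 SPARQL service described in the abstract.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")  # illustrative endpoint
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?resource ?label WHERE {
        ?resource rdfs:label ?label .
        FILTER (lang(?label) = "en")
    } LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["resource"]["value"], "-", binding["label"]["value"])
```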
deepTools2: a next generation web server for deep-sequencing data analysis.
Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas
2016-07-08
We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continue to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
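As a hedged sketch of command-line usage of the suite mentioned above, the snippet below drives one typical deepTools step (BAM to bigWig coverage) from Python. It assumes deepTools is installed and on PATH; the file names are placeholders, and any flag beyond -b/-o should be checked against `bamCoverage --help` for the installed version.

```python
# Sketch only: run deepTools' bamCoverage via subprocess to produce a bigWig
# coverage track. Input/output names are placeholders; verify flags against
# the installed deepTools version before use.
import subprocess

cmd = [
    "bamCoverage",
    "-b", "aligned_reads.bam",   # placeholder input BAM
    "-o", "coverage.bw",         # bigWig output
    "--binSize", "25",           # bin size in bp (illustrative value)
]
subprocess.run(cmd, check=True)
```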
ERIC Educational Resources Information Center
Rouet, Jean-Francois; Ros, Christine; Goumi, Antonine; Macedo-Rouet, Monica; Dinet, Jerome
2011-01-01
Two experiments investigated primary and secondary school students' Web menu selection strategies using simulated Web search tasks. It was hypothesized that students' selections of websites depend on their perception and integration of multiple relevance cues. More specifically, students should be able to disentangle superficial cues (e.g.,…
Hybrid Schema Matching for Deep Web
NASA Astrophysics Data System (ADS)
Chen, Kerui; Zuo, Wanli; He, Fengling; Chen, Yongheng
Schema matching is the process of identifying semantic mappings, or correspondences, between two or more schemas. Schema matching is a first step and critical part of data integration. For schema matching of the deep web, most research has focused only on query interfaces, while rarely paying attention to the abundant schema information contained in query result pages. This paper proposes a hybrid schema matching technique that combines attributes appearing in the query interfaces and the query results of different data sources, and mines the matched schemas within them. Experimental results demonstrate the effectiveness of this method for improving the accuracy of schema matching.
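As a hedged illustration of the kind of attribute comparison such hybrid matching rests on, the sketch below scores pairs of attribute labels drawn from a query interface and from result pages with a token-based Jaccard similarity. The labels and the acceptance threshold are invented for illustration; they are not the similarity measures proposed in the paper.

```python
# Sketch of one plausible similarity measure for hybrid schema matching:
# attribute labels from a query interface and from result pages are tokenized
# and compared with Jaccard similarity. Labels and threshold are illustrative.
def tokens(label):
    return set(label.lower().replace("_", " ").split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

interface_attrs = ["departure city", "arrival_city", "travel date"]
result_attrs = ["city of departure", "destination city", "date"]

matches = [
    (i, r, round(jaccard(i, r), 2))
    for i in interface_attrs
    for r in result_attrs
    if jaccard(i, r) >= 0.5  # illustrative acceptance threshold
]
print(matches)
```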
Deep pelagic food web structure as revealed by in situ feeding observations.
Choy, C Anela; Haddock, Steven H D; Robison, Bruce H
2017-12-06
Food web linkages, or the feeding relationships between species inhabiting a shared ecosystem, are an ecological lens through which ecosystem structure and function can be assessed, and thus are fundamental to informing sustainable resource management. Empirical feeding datasets have traditionally been painstakingly generated from stomach content analysis, direct observations and from biochemical trophic markers (stable isotopes, fatty acids, molecular tools). Each approach carries inherent biases and limitations, as well as advantages. Here, using 27 years (1991-2016) of in situ feeding observations collected by remotely operated vehicles (ROVs), we quantitatively characterize the deep pelagic food web of central California within the California Current, complementing existing studies of diet and trophic interactions with a unique perspective. Seven hundred and forty-three independent feeding events were observed with ROVs from near-surface waters down to depths approaching 4000 m, involving an assemblage of 84 different predators and 82 different prey types, for a total of 242 unique feeding relationships. The greatest diversity of prey was consumed by narcomedusae, followed by physonect siphonophores, ctenophores and cephalopods. We highlight key interactions within the poorly understood 'jelly web', showing the importance of medusae, ctenophores and siphonophores as key predators, whose ecological significance is comparable to large fish and squid species within the central California deep pelagic food web. Gelatinous predators are often thought to comprise relatively inefficient trophic pathways within marine communities, but we build upon previous findings to document their substantial and integral roles in deep pelagic food webs. © 2017 The Authors.
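As a hedged sketch of how individual feeding observations like those described above can be aggregated into a food-web summary, the snippet below counts unique predator-prey links and prey diversity per predator. The observation records are invented for illustration and are not data from the study.

```python
# Sketch: turn (predator, prey) feeding observations into a food-web summary
# of unique trophic links and prey diversity per predator. Records are toy data.
from collections import defaultdict

observations = [
    ("narcomedusa", "copepod"),
    ("narcomedusa", "euphausiid"),
    ("physonect siphonophore", "copepod"),
    ("ctenophore", "larvacean"),
    ("narcomedusa", "larvacean"),
]

links = set(observations)                      # unique feeding relationships
prey_by_predator = defaultdict(set)
for predator, prey in observations:
    prey_by_predator[predator].add(prey)

print(f"{len(links)} unique trophic links")
for predator, prey in sorted(prey_by_predator.items(),
                             key=lambda kv: len(kv[1]), reverse=True):
    print(predator, "->", len(prey), "prey types")
```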
DeepBase: annotation and discovery of microRNAs and other noncoding RNAs from deep-sequencing data.
Yang, Jian-Hua; Qu, Liang-Hu
2012-01-01
Recent advances in high-throughput deep-sequencing technology have produced large numbers of short and long RNA sequences and enabled the detection and profiling of known and novel microRNAs (miRNAs) and other noncoding RNAs (ncRNAs) at unprecedented sensitivity and depth. In this chapter, we describe the use of deepBase, a database that we have developed to integrate all public deep-sequencing data and to facilitate the comprehensive annotation and discovery of miRNAs and other ncRNAs from these data. deepBase provides an integrative, interactive, and versatile web graphical interface to evaluate miRBase-annotated miRNA genes and other known ncRNAs, explores the expression patterns of miRNAs and other ncRNAs, and discovers novel miRNAs and other ncRNAs from deep-sequencing data. deepBase also provides a deepView genome browser to comparatively analyze these data at multiple levels. deepBase is available at http://deepbase.sysu.edu.cn/.
None Available
2018-02-06
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
Deep pelagic food web structure as revealed by in situ feeding observations
Haddock, Steven H. D.; Robison, Bruce H.
2017-01-01
Food web linkages, or the feeding relationships between species inhabiting a shared ecosystem, are an ecological lens through which ecosystem structure and function can be assessed, and thus are fundamental to informing sustainable resource management. Empirical feeding datasets have traditionally been painstakingly generated from stomach content analysis, direct observations and from biochemical trophic markers (stable isotopes, fatty acids, molecular tools). Each approach carries inherent biases and limitations, as well as advantages. Here, using 27 years (1991–2016) of in situ feeding observations collected by remotely operated vehicles (ROVs), we quantitatively characterize the deep pelagic food web of central California within the California Current, complementing existing studies of diet and trophic interactions with a unique perspective. Seven hundred and forty-three independent feeding events were observed with ROVs from near-surface waters down to depths approaching 4000 m, involving an assemblage of 84 different predators and 82 different prey types, for a total of 242 unique feeding relationships. The greatest diversity of prey was consumed by narcomedusae, followed by physonect siphonophores, ctenophores and cephalopods. We highlight key interactions within the poorly understood ‘jelly web’, showing the importance of medusae, ctenophores and siphonophores as key predators, whose ecological significance is comparable to large fish and squid species within the central California deep pelagic food web. Gelatinous predators are often thought to comprise relatively inefficient trophic pathways within marine communities, but we build upon previous findings to document their substantial and integral roles in deep pelagic food webs. PMID:29212727
Focused Crawling of the Deep Web Using Service Class Descriptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, D; Liu, L; Critchlow, T
2004-06-21
Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
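As a hedged sketch of the service-class matching idea, the snippet below scores a candidate query form against a service class description expressed as required and optional field terms. The SCD structure and scoring rule here are invented for illustration; the paper's actual SCD grammar and matching algorithm are richer.

```python
# Sketch of SCD-style service matching: a candidate form's fields are checked
# against a service class description. The SCD format and rule are invented.
scd_flight_search = {
    "required": {"origin", "destination", "date"},
    "optional": {"passengers", "class"},
}

def matches_scd(form_fields, scd, min_optional=0):
    fields = {f.lower() for f in form_fields}
    has_required = scd["required"] <= fields          # all required present
    optional_hits = len(fields & scd["optional"])
    return has_required and optional_hits >= min_optional

candidate_form = ["Origin", "Destination", "Date", "Passengers"]
print(matches_scd(candidate_form, scd_flight_search))  # True
```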
Automating Mid- and Long-Range Scheduling for the NASA Deep Space Network
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Tran, Daniel
2012-01-01
NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S³. This system was designed and deployed as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users of the DSN. These users represent not only NASA's deep space missions, but also international partners and ground-based science and calibration users. The initial implementation of S³ is complete and the system has been operational since July 2011. This paper describes some key aspects of the S³ system, the challenges of modeling complex scheduling requirements, and the ongoing extension of S³ to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.
Providing Multi-Page Data Extraction Services with XWRAPComposer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ling; Zhang, Jianjun; Han, Wei
2008-04-30
Dynamic Web data sources – sometimes known collectively as the Deep Web – increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DYNABOT, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DYNABOT has three unique characteristics. First, DYNABOT utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DYNABOT employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DYNABOT incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
Stratification-Based Outlier Detection over the Deep Web.
Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming
2016-01-01
For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over deep web. In the context of deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in deep web.
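As a hedged sketch of the stratification idea described above, the snippet below uses a pilot sample to define strata (quartile bins of one numeric attribute) and flags retrieved records that lie far from their stratum mean. The z-score rule, thresholds, and data are illustrative only and do not reproduce the paper's neighborhood or uncertainty sampling schemes.

```python
# Sketch of stratification-based outlier detection: a pilot sample defines
# strata, and retrieved records are flagged when far from the stratum mean.
# The z-score rule, threshold and synthetic data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
pilot = rng.normal(100, 10, size=200)          # pilot sample from the source
edges = np.quantile(pilot, [0.25, 0.5, 0.75])  # stratum boundaries

def stratum(x):
    return int(np.searchsorted(edges, x))

stats = {}
for s in range(4):
    vals = pilot[[stratum(x) == s for x in pilot]]
    stats[s] = (vals.mean(), vals.std())

retrieved = np.append(rng.normal(100, 10, size=50), [160.0])  # one outlier
for x in retrieved:
    mean, std = stats[stratum(x)]
    if std > 0 and abs(x - mean) / std > 3:
        print(f"outlier candidate: {x:.1f} (stratum {stratum(x)})")
```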
Stratification-Based Outlier Detection over the Deep Web
Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S.; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming
2016-01-01
For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over deep web. In the context of deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in deep web. PMID:27313603
WARCProcessor: An Integrative Tool for Building and Management of Web Spam Corpora.
Callón, Miguel; Fdez-Glez, Jorge; Ruano-Ordás, David; Laza, Rosalía; Pavón, Reyes; Fdez-Riverola, Florentino; Méndez, Jose Ramón
2017-12-22
In this work we present the design and implementation of WARCProcessor, a novel multiplatform integrative tool aimed at building scientific datasets that facilitate experimentation in web spam research. The application allows the user to specify multiple criteria that change the way in which new corpora are generated, whilst reducing the number of repetitive and error-prone tasks related to existing corpus maintenance. To this end, WARCProcessor supports up to six commonly used data sources for web spam research and is able to store the output corpus in standard WARC format together with complementary metadata files. Additionally, the application facilitates the automatic and concurrent download of web sites from the Internet, allowing configuration of the depth of the links to be followed as well as the behaviour when redirected URLs appear. WARCProcessor provides both an interactive GUI and a command-line utility for execution in the background.
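As a hedged sketch of the two crawl options highlighted above (link depth and redirect behaviour), the snippet below implements a tiny depth-limited fetcher with requests and BeautifulSoup. It is not WARCProcessor code: the seed URL is a placeholder and pages are kept in memory rather than written to WARC.

```python
# Sketch of a depth-limited fetcher with explicit redirect handling.
# Not WARCProcessor; seed URL is a placeholder, output is an in-memory dict.
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def crawl(seed, max_depth=2, follow_redirects=True):
    seen, queue, pages = set(), [(seed, 0)], {}
    while queue:
        url, depth = queue.pop(0)
        if url in seen or depth > max_depth:
            continue
        seen.add(url)
        resp = requests.get(url, allow_redirects=follow_redirects, timeout=10)
        pages[url] = resp.text
        if depth < max_depth:
            soup = BeautifulSoup(resp.text, "html.parser")
            for a in soup.find_all("a", href=True):
                queue.append((urljoin(url, a["href"]), depth + 1))
    return pages

# pages = crawl("https://example.org/", max_depth=1)  # placeholder seed
```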
WARCProcessor: An Integrative Tool for Building and Management of Web Spam Corpora
Callón, Miguel; Fdez-Glez, Jorge; Ruano-Ordás, David; Laza, Rosalía; Pavón, Reyes; Méndez, Jose Ramón
2017-01-01
In this work we present the design and implementation of WARCProcessor, a novel multiplatform integrative tool aimed at building scientific datasets that facilitate experimentation in web spam research. The application allows the user to specify multiple criteria that change the way in which new corpora are generated, whilst reducing the number of repetitive and error-prone tasks related to existing corpus maintenance. To this end, WARCProcessor supports up to six commonly used data sources for web spam research and is able to store the output corpus in standard WARC format together with complementary metadata files. Additionally, the application facilitates the automatic and concurrent download of web sites from the Internet, allowing configuration of the depth of the links to be followed as well as the behaviour when redirected URLs appear. WARCProcessor provides both an interactive GUI and a command-line utility for execution in the background. PMID:29271913
On Building a Search Interface Discovery System
NASA Astrophysics Data System (ADS)
Shestakov, Denis
A huge portion of the Web, known as the deep Web, is accessible via search interfaces to myriads of databases on the Web. While relatively good approaches for querying the contents of web databases have recently been proposed, they cannot be fully utilized while most search interfaces remain undiscovered. Thus, the automatic recognition of search interfaces to online databases is crucial for any application accessing the deep Web. This paper describes the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep web characterization surveys and for constructing directories of deep web resources.
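As a hedged sketch of recognizing search interfaces in HTML, the snippet below checks whether a page contains a form with free-text inputs and search-related wording. The features and keyword list are simple illustrations, not the I-Crawler's actual feature set or classifier.

```python
# Sketch of a heuristic check for a search interface on an HTML page.
# Feature choices (text-input count, keyword hints) are illustrative only.
from bs4 import BeautifulSoup

SEARCH_HINTS = {"search", "query", "find", "lookup"}

def looks_like_search_interface(html):
    soup = BeautifulSoup(html, "html.parser")
    for form in soup.find_all("form"):
        inputs = form.find_all("input")
        text_inputs = [i for i in inputs
                       if i.get("type", "text").lower() in ("text", "search")]
        words = " ".join(form.stripped_strings).lower()
        if text_inputs and any(hint in words for hint in SEARCH_HINTS):
            return True
    return False

html = '<form action="/q"><label>Search title</label><input type="text" name="q"></form>'
print(looks_like_search_interface(html))  # True
```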
NASA Astrophysics Data System (ADS)
Cardellini, Carlo; Frigeri, Alessandro; Lehnert, Kerstin; Ash, Jason; McCormick, Brendan; Chiodini, Giovanni; Fischer, Tobias; Cottrell, Elizabeth
2015-04-01
The release of volatiles from the Earth's interior takes place in both volcanic and non-volcanic areas of the planet. Understanding such a complex process and improving the current estimates of global carbon emissions will greatly benefit from the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are not linked and interoperable. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing interoperability between three data systems that will make their data accessible via the DECADE portal: (1) the Smithsonian Institution's Global Volcanism Program database (VOTW) of volcanic activity data, (2) EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions), which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. The DECADE web portal will create a powerful search engine over these databases from a single entry point and will return comprehensive multi-component datasets. A user will be able, for example, to obtain data relating to the compositions of emitted gases, the compositions and age of the erupted products, and coincident activity at a specific volcano. This level of capability requires complete synergy between the databases, including the availability of standards-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process, allowing users to explore Earth degassing-related datasets over previously unexplored spatial or temporal ranges.
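As a hedged sketch of the standards-based web services mentioned above, the snippet below builds an OGC WFS GetFeature request with standard query parameters. The endpoint URL and the feature type name are hypothetical placeholders, not the actual DECADE, MaGa, EarthChem, or VOTW services.

```python
# Sketch of a standards-based OGC WFS GetFeature request of the kind such a
# portal relies on. Endpoint and typeName are hypothetical placeholders.
import requests

endpoint = "http://example.org/geoserver/wfs"   # hypothetical WFS endpoint
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "decade:gas_emissions",          # hypothetical layer name
    "outputFormat": "application/json",
    "maxFeatures": 50,
}

resp = requests.get(endpoint, params=params, timeout=30)
resp.raise_for_status()
for feature in resp.json().get("features", [])[:5]:
    print(feature["properties"])
```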
A Framework for Transparently Accessing Deep Web Sources
ERIC Educational Resources Information Center
Dragut, Eduard Constantin
2010-01-01
An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Grange, B.; Morton, J. J.; Soule, S. A.; Carbotte, S. M.; Lehnert, K.
2016-12-01
The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remotely Operated Vehicle (ROV) Jason, and the Autonomous Underwater Vehicle (AUV) Sentry. These vehicles are deployed throughout the global oceans to acquire sensor data and physical samples for a variety of interdisciplinary science programs. As part of the EarthCube Integrative Activity Alliance Testbed Project (ATP), new web services were developed to improve access to existing online NDSF data and metadata resources. These services make use of tools and infrastructure developed by the Interdisciplinary Earth Data Alliance (IEDA) and enable programmatic access to metadata and data resources as well as the development of new service-driven user interfaces. The Alvin Frame Grabber and Jason Virtual Van enable the exploration of frame-grabbed images derived from video cameras on NDSF dives. Metadata available for each image includes time and vehicle position, data from environmental sensors, and scientist-generated annotations, and data are organized and accessible by cruise and/or dive. A new FrameGrabber web service and service-driven user interface were deployed to offer integrated access to these data resources through a single API and allow users to search across content curated in both systems. In addition, a new NDSF Dive Metadata web service and service-driven user interface were deployed to provide consolidated access to basic information about each NDSF dive (e.g., vehicle name, dive ID, location), which is important for linking distributed data resources curated in different data systems.
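As a hedged sketch of consuming a service-driven interface such as the dive metadata service described above, the snippet below queries a REST/JSON endpoint and prints a few fields. The URL, query parameters, and response field names are hypothetical; the real NDSF/IEDA API may differ.

```python
# Sketch of a client for a REST/JSON dive-metadata service. URL, parameters
# and field names are hypothetical placeholders, not the real NDSF API.
import requests

BASE = "https://example.org/ndsf/api/dives"     # hypothetical service URL

def dives_for_vehicle(vehicle, limit=20):
    resp = requests.get(BASE, params={"vehicle": vehicle, "limit": limit},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()

for dive in dives_for_vehicle("Alvin"):
    print(dive.get("dive_id"), dive.get("latitude"), dive.get("longitude"))
```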
An insight into the deep web; why it matters for addiction psychiatry?
Orsolini, Laura; Papanti, Duccio; Corkery, John; Schifano, Fabrizio
2017-05-01
Nowadays, the web is rapidly spreading and plays a significant role in the marketing, sale, and distribution of "quasi-legal" drugs, hence facilitating continuous changes in drug scenarios. The easily renewable and anarchic online drug market is indeed gradually transforming the drug market itself from a "street" market to a "virtual" one, with customers able to shop with relative anonymity in a 24-hr marketplace. The hidden "deep web" is facilitating this phenomenon. The paper aims to provide mental health and addiction professionals with an overview of current knowledge about pro-drug activities on the deep web. A nonparticipant netnographic qualitative study of a list of pro-drug websites (blogs, fora, and drug marketplaces) located on the surface web was carried out. A systematic Internet search was conducted on Duckduckgo® and Google® using the following keywords: "drugs" or "legal highs" or "Novel Psychoactive Substances" or "NPS" combined with the words "deep web". Four themes (e.g., "How to access into the deepweb"; "Darknet and the online drug trading sites"; "Grams-search engine for the deep web"; and "Cryptocurrencies") and 14 categories were generated and discussed. This paper represents a comprehensive and systematic guide to the deep web, specifically focusing on practical information about online drug marketplaces, useful for addiction professionals. Copyright © 2017 John Wiley & Sons, Ltd.
BUSCA: an integrative web server to predict subcellular localization of proteins.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Profiti, Giuseppe; Casadio, Rita
2018-04-30
Here, we present BUSCA (http://busca.biocomp.unibo.it), a novel web server that integrates different computational tools for predicting protein subcellular localization. BUSCA combines methods for identifying signal and transit peptides (DeepSig and TPpred3), GPI-anchors (PredGPI) and transmembrane domains (ENSEMBLE3.0 and BetAware) with tools for discriminating subcellular localization of both globular and membrane proteins (BaCelLo, MemLoci and SChloro). Outcomes from the different tools are processed and integrated for annotating subcellular localization of both eukaryotic and bacterial protein sequences. We benchmark BUSCA against protein targets derived from recent CAFA experiments and other specific data sets, reporting performance at the state-of-the-art. BUSCA scores better than all other evaluated methods on 2732 targets from CAFA2, with a F1 value equal to 0.49 and among the best methods when predicting targets from CAFA3. We propose BUSCA as an integrated and accurate resource for the annotation of protein subcellular localization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Haeryong; Lee, Eunyong; Jeong, YiYeong
The Korea Radioactive-waste Management Corporation (KRMC), established in 2009, has started a new project to collect information on the long-term stability of deep geological environments on the Korean Peninsula. The information has been built up in the integrated natural barrier database system available on the web (www.deepgeodisposal.kr). The database system also includes socially and economically important information, such as land use, mining areas, natural conservation areas, population density, and industrial complexes, because some of this information is used as exclusionary criteria during the site selection process for a deep geological repository for the safe and secure containment and isolation of spent nuclear fuel and other long-lived radioactive waste in Korea. Although the official site selection process has not yet started in Korea, it is believed that the current integrated natural barrier database system and socio-economic database will be effectively utilized to narrow down the number of sites where future investigation is most promising during the site selection process for a deep geological repository, and to enhance public acceptance by providing readily available, relevant scientific information on deep geological environments in Korea. (authors)
The Research on Automatic Construction of Domain Model Based on Deep Web Query Interfaces
NASA Astrophysics Data System (ADS)
JianPing, Gu
The integration of services is transparent, meaning that users no longer face millions of Web services, need not care where the required data are stored, and do not need to learn how to obtain these data. In this paper, we analyze the uncertainty of schema matching and then propose a series of similarity measures. To reduce the cost of execution, we propose a type-based optimization method and a schema-matching pruning method for numeric data. Based on the above analysis, we propose an uncertain schema matching method. The experiments demonstrate the effectiveness and efficiency of our method.
Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen
2015-01-01
The utility of web browsers for general-purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals, organizes and presents information in a modern feed-like interface, provides access to a growing library of plugins that process these data (typically on a connected High Performance Compute cluster), allows for easy data sharing between users and instances of ChRIS, and provides powerful 3D visualization and real-time collaboration.
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
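The paragraph above describes a crawler handing fetched pages to an indexer that extracts links and keywords. The snippet below is a greatly simplified, hedged sketch of that hand-off using BeautifulSoup and a word-frequency "index" held in memory; it is not any particular search engine's implementation.

```python
# Sketch of the crawler-to-indexer hand-off: parse a fetched page, extract its
# hyperlinks and most frequent terms, and store both in an in-memory index.
from collections import Counter
from urllib.parse import urljoin
from bs4 import BeautifulSoup

index = {}  # url -> {"links": [...], "keywords": [...]}

def index_page(url, html, top_n=10):
    soup = BeautifulSoup(html, "html.parser")
    links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
    words = [w.lower() for w in soup.get_text().split() if w.isalpha()]
    keywords = [w for w, _ in Counter(words).most_common(top_n)]
    index[url] = {"links": links, "keywords": keywords}

index_page("http://example.org/",
           "<p>Deep web resources</p><a href='/db'>database</a>")
print(index)
```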
NASA Astrophysics Data System (ADS)
Trumpy, Eugenio; Manzella, Adele
2017-02-01
The Italian National Geothermal Database (BDNG) is the largest collection of Italian geothermal data and was set up in the 1980s. It has since been updated both in terms of content and management tools: information on deep wells and thermal springs (with temperatures > 30 °C) is currently organized and stored in a PostgreSQL relational database management system, which guarantees high performance, data security and easy access through different client applications. The BDNG is the core of the Geothopica web site, whose webGIS tool allows different types of users to access geothermal data, to visualize multiple types of datasets, and to perform integrated analyses. The webGIS tool has recently been improved by two specially designed, programmed and implemented visualization tools to display data on well lithology and underground temperatures. This paper describes the contents of the database and its software and data updates, as well as the webGIS tool, including the new tools for lithology and temperature data visualization. The geoinformation organized in the database and accessible through Geothopica is of use not only for geothermal purposes, but also for any kind of georesource and CO2 storage project requiring the organization of, and access to, deep underground data. Geothopica also supports project developers, researchers, and decision makers in the assessment, management and sustainable deployment of georesources.
ERIC Educational Resources Information Center
Turner, Laura
2001-01-01
Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…
Hamed, Mohamed; Spaniol, Christian; Nazarieh, Maryam; Helms, Volkhard
2015-07-01
TFmiR is a freely available web server for deep and integrative analysis of combinatorial regulatory interactions between transcription factors, microRNAs and target genes that are involved in disease pathogenesis. Since the inner workings of cells rely on the correct functioning of an enormously complex system of activating and repressing interactions that can be perturbed in many ways, TFmiR helps to better elucidate cellular mechanisms at the molecular level from a network perspective. The provided topological and functional analyses promote TFmiR as a reliable systems biology tool for researchers across the life science communities. TFmiR web server is accessible through the following URL: http://service.bioinformatik.uni-saarland.de/tfmir. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Reconstructing a hydrogen-driven microbial metabolic network in Opalinus Clay rock.
Bagnoud, Alexandre; Chourey, Karuna; Hettich, Robert L; de Bruijn, Ino; Andersson, Anders F; Leupin, Olivier X; Schwyn, Bernhard; Bernier-Latmani, Rizlan
2016-10-14
The Opalinus Clay formation will host geological nuclear waste repositories in Switzerland. It is expected that gas pressure will build-up due to hydrogen production from steel corrosion, jeopardizing the integrity of the engineered barriers. In an in situ experiment located in the Mont Terri Underground Rock Laboratory, we demonstrate that hydrogen is consumed by microorganisms, fuelling a microbial community. Metagenomic binning and metaproteomic analysis of this deep subsurface community reveals a carbon cycle driven by autotrophic hydrogen oxidizers belonging to novel genera. Necromass is then processed by fermenters, followed by complete oxidation to carbon dioxide by heterotrophic sulfate-reducing bacteria, which closes the cycle. This microbial metabolic web can be integrated in the design of geological repositories to reduce pressure build-up. This study shows that Opalinus Clay harbours the potential for chemolithoautotrophic-based system, and provides a model of microbial carbon cycle in deep subsurface environments where hydrogen and sulfate are present.
Reconstructing a hydrogen-driven microbial metabolic network in Opalinus Clay rock
Bagnoud, Alexandre; Chourey, Karuna; Hettich, Robert L.; de Bruijn, Ino; Andersson, Anders F.; Leupin, Olivier X.; Schwyn, Bernhard; Bernier-Latmani, Rizlan
2016-01-01
The Opalinus Clay formation will host geological nuclear waste repositories in Switzerland. It is expected that gas pressure will build-up due to hydrogen production from steel corrosion, jeopardizing the integrity of the engineered barriers. In an in situ experiment located in the Mont Terri Underground Rock Laboratory, we demonstrate that hydrogen is consumed by microorganisms, fuelling a microbial community. Metagenomic binning and metaproteomic analysis of this deep subsurface community reveals a carbon cycle driven by autotrophic hydrogen oxidizers belonging to novel genera. Necromass is then processed by fermenters, followed by complete oxidation to carbon dioxide by heterotrophic sulfate-reducing bacteria, which closes the cycle. This microbial metabolic web can be integrated in the design of geological repositories to reduce pressure build-up. This study shows that Opalinus Clay harbours the potential for chemolithoautotrophic-based system, and provides a model of microbial carbon cycle in deep subsurface environments where hydrogen and sulfate are present. PMID:27739431
Parasites in food webs: the ultimate missing links
Lafferty, Kevin D; Allesina, Stefano; Arim, Matias; Briggs, Cherie J; De Leo, Giulio; Dobson, Andrew P; Dunne, Jennifer A; Johnson, Pieter T J; Kuris, Armand M; Marcogliese, David J; Martinez, Neo D; Memmott, Jane; Marquet, Pablo A; McLaughlin, John P; Mordecai, Erin A; Pascual, Mercedes; Poulin, Robert; Thieltges, David W
2008-01-01
Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists. PMID:18462196
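As a hedged sketch of the "add parasites as nodes" exercise described above, the snippet below stores trophic links as (consumer, resource) pairs and recomputes directed connectance C = L / S² after parasite links are added. The toy web and parasite links are invented for illustration only.

```python
# Sketch: add parasite nodes/links to a toy food web and recompute directed
# connectance C = L / S^2. The web and parasite links are invented examples.
def connectance(links):
    species = {s for link in links for s in link}
    return len(links) / len(species) ** 2

free_living = {("fish", "zooplankton"), ("zooplankton", "phytoplankton"),
               ("bird", "fish")}
parasite_links = {("trematode", "fish"), ("trematode", "zooplankton"),
                  ("bird", "trematode")}   # predation on a parasite stage

print(f"C without parasites: {connectance(free_living):.3f}")
print(f"C with parasites:    {connectance(free_living | parasite_links):.3f}")
```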
Parasites in food webs: the ultimate missing links.
Lafferty, Kevin D; Allesina, Stefano; Arim, Matias; Briggs, Cherie J; De Leo, Giulio; Dobson, Andrew P; Dunne, Jennifer A; Johnson, Pieter T J; Kuris, Armand M; Marcogliese, David J; Martinez, Neo D; Memmott, Jane; Marquet, Pablo A; McLaughlin, John P; Mordecai, Erin A; Pascual, Mercedes; Poulin, Robert; Thieltges, David W
2008-06-01
Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists.
Parasites in food webs: the ultimate missing links
Lafferty, Kevin D.; Allesina, Stefano; Arim, Matias; Briggs, Cherie J.; De Leo, Giulio A.; Dobson, Andrew P.; Dunne, Jennifer A.; Johnson, Pieter T.J.; Kuris, Armand M.; Marcogliese, David J.; Martinez, Neo D.; Memmott, Jane; Marquet, Pablo A.; McLaughlin, John P.; Mordecai, Eerin A.; Pascual, Mercedes; Poulin, Robert; Thieltges, David W.
2008-01-01
Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists.
de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos
2018-07-01
The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). This study imaging dataset consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with diagnosis of TMJ OA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age and sex matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire, blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ with 91% close agreement between the clinician consensus and the SVA classifier. The DSCI remotely ran with a novel application of a statistical analysis, the Multivariate Functional Shape Data Analysis, that computed high dimensional correlations between shape 3D coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.
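As a hedged sketch of a shape classifier in the spirit of the neural network described above, the snippet below trains a small multilayer perceptron on flattened 3D vertex coordinates. The data are synthetic and the architecture is illustrative; this is not the ShapeVariationAnalyzer model or the study's DSCI pipeline.

```python
# Sketch: train an MLP on flattened 3D shape coordinates (synthetic data).
# Architecture and data are illustrative, not the study's actual classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n_vertices = 100
X_control = rng.normal(0.0, 1.0, size=(105, n_vertices * 3))
X_oa = rng.normal(0.3, 1.0, size=(154, n_vertices * 3))   # shifted "OA" shapes
X = np.vstack([X_control, X_oa])
y = np.array([0] * 105 + [1] * 154)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```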
DECADE Web Portal: Integrating MaGa, EarthChem and GVP Will Further Our Knowledge on Earth Degassing
NASA Astrophysics Data System (ADS)
Cardellini, C.; Frigeri, A.; Lehnert, K. A.; Ash, J.; McCormick, B.; Chiodini, G.; Fischer, T. P.; Cottrell, E.
2014-12-01
The release of gases from the Earth's interior to the exosphere takes place in both volcanic and non-volcanic areas of the planet. Fully understanding this complex process requires the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are not linked and interoperable. We are developing interoperability between three of those, which will support more powerful synoptic studies of degassing. The three data systems that will make their data accessible via the DECADE portal are: (1) the Smithsonian Institution's Global Volcanism Program database (GVP) of volcanic activity data, (2) EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions) which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. These databases are developed and maintained by institutions or groups of experts in a specific field, and data are archived in formats specific to these databases. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing a web portal that will create a powerful search engine of these databases from a single entry point. The portal will return comprehensive multi-component datasets, based on the search criteria selected by the user. For example, a single geographic or temporal search will return data relating to compositions of emitted gases and erupted products, the age of the erupted products, and coincident activity at the volcano. The development of this level of capability for the DECADE Portal requires complete synergy between these databases, including availability of standard-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process. Other data systems can be easily plugged in using the existing framework. Our vision is to explore Earth degassing related datasets over previously unexplored spatial or temporal ranges.
Hoelzer, Simon; Schweiger, Ralf K; Rieger, Joerg; Meyer, Michael
2006-01-01
The organizational structures of web contents and electronic information resources must adapt to the demands of a growing volume of information and user requirements. Otherwise the information society will be threatened by disinformation. The biomedical sciences are especially vulnerable in this regard, since they are strongly oriented toward text-based knowledge sources. Here sustainable improvement can only be achieved by using a comprehensive, integrated approach that not only includes data management but also specifically incorporates the editorial processes, including structuring information sources and publication. The technical resources needed to effectively master these tasks are already available in the form of the data standards and tools of the Semantic Web. They include Rich Site Summaries (RSS), which have become an established means of distributing and syndicating conventional news messages and blogs. They can also provide access to the contents of the previously mentioned information sources, which are conventionally classified as 'deep web' content.
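As a hedged sketch of consuming the kind of RSS syndication channel proposed above, the snippet below parses a feed with the feedparser library and lists its entries. The feed URL is a placeholder, not a resource from the article.

```python
# Sketch: read an RSS feed with feedparser and print entry titles and links.
# The feed URL is a placeholder for illustration.
import feedparser

feed = feedparser.parse("https://example.org/updates.rss")  # placeholder URL
for entry in feed.entries[:5]:
    print(entry.get("title"), "-", entry.get("link"))
```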
Gulf of Mexico Deep-Sea Coral Ecosystem Studies, 2008-2011
Kellogg, Christina A.
2009-01-01
Most people are familiar with tropical coral reefs, located in warm, well-illuminated, shallow waters. However, corals also exist hundreds and even thousands of meters below the ocean surface, where it is cold and completely dark. These deep-sea corals, also known as cold-water corals, have become a topic of interest due to conservation concerns over the impacts of trawling, exploration for oil and gas, and climate change. Although the existence of these corals has been known since the 1800s, our understanding of their distribution, ecology, and biology is limited due to the technical difficulties of conducting deep-sea research. DISCOVRE (DIversity, Systematics, and COnnectivity of Vulnerable Reef Ecosystems) is a new U.S. Geological Survey (USGS) program focused on deep-water coral ecosystems in the Gulf of Mexico. This integrated, multidisciplinary, international effort investigates a variety of topics related to unique and fragile deep-sea coral ecosystems from the microscopic level to the ecosystem level, including components of microbiology, population genetics, paleoecology, food webs, taxonomy, community ecology, physical oceanography, and mapping.
deepTools: a flexible platform for exploring deep-sequencing data.
Ramírez, Fidel; Dündar, Friederike; Diehl, Sarah; Grüning, Björn A; Manke, Thomas
2014-07-01
We present a Galaxy based web server for processing and visualizing deeply sequenced data. The web server's core functionality consists of a suite of newly developed tools, called deepTools, that enable users with little bioinformatic background to explore the results of their sequencing experiments in a standardized setting. Users can upload pre-processed files with continuous data in standard formats and generate heatmaps and summary plots in a straight-forward, yet highly customizable manner. In addition, we offer several tools for the analysis of files containing aligned reads and enable efficient and reproducible generation of normalized coverage files. As a modular and open-source platform, deepTools can easily be expanded and customized to future demands and developments. The deepTools webserver is freely available at http://deeptools.ie-freiburg.mpg.de and is accompanied by extensive documentation and tutorials aimed at conveying the principles of deep-sequencing data analysis. The web server can be used without registration. deepTools can be installed locally either stand-alone or as part of Galaxy. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Rossi, Elena; Rosa, Manuela; Rossi, Lorenzo; Priori, Alberto; Marceglia, Sara
2014-12-01
The web-based systems available for multi-centre clinical trials do not combine clinical data collection (Electronic Health Records, EHRs) with signal processing storage and analysis tools. However, in pathophysiological research, the correlation between clinical data and signals is crucial for uncovering the underlying neurophysiological mechanisms. A specific example is the investigation of the mechanisms of action for Deep Brain Stimulation (DBS) used for Parkinson's Disease (PD); the neurosignals recorded from the DBS target structure and clinical data must be investigated. The aim of this study is the development and testing of a new system dedicated to a multi-centre study of Parkinson's Disease that integrates biosignal analysis tools and data collection in a shared and secure environment. We designed a web-based platform (WebBioBank) for managing the clinical data and biosignals of PD patients treated with DBS in different clinical research centres. Homogeneous data collection was ensured in the different centres (Operative Units, OUs). The anonymity of the data was preserved using unique identifiers associated with patients (ID BAC). The patients' personal details and their equivalent ID BACs were archived inside the corresponding OU and were not uploaded on the web-based platform; data sharing occurred using the ID BACs. The system allowed researchers to upload different signal processing functions (in a .dll extension) onto the web-based platform and to combine them to define dedicated algorithms. Four clinical research centres used WebBioBank for 1year. The clinical data from 58 patients treated using DBS were managed, and 186 biosignals were uploaded and classified into 4 categories based on the treatment (pharmacological and/or electrical). The user's satisfaction mean score exceeded the satisfaction threshold. WebBioBank enabled anonymous data sharing for a clinical study conducted at multiple centres and demonstrated the capabilities of the signal processing chain configuration as well as its effectiveness and efficiency for integrating the neurophysiological results with clinical data in multi-centre studies, which will allow the future collection of homogeneous data in large cohorts of patients. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cowie, B. R.; Lim, D. S.; Pendery, R.; Laval, B.; Slater, G. F.; Brady, A. L.; Dearing, W. L.; Downs, M.; Forrest, A.; Lees, D. S.; Lind, R. A.; Marinova, M.; Reid, D.; Seibert, M. A.; Shepard, R.; Williams, D.
2009-12-01
The Pavilion Lake Research Project (PLRP) is an international multi-disciplinary science and exploration effort to explain the origin and preservation potential of freshwater microbialites in Pavilion Lake, British Columbia, Canada. Using multiple exploration platforms including one person DeepWorker submersibles, Autonomous Underwater Vehicles, and SCUBA divers, the PLRP acts as an analogue research site for conducting science in extreme environments, such as the Moon or Mars. In 2009, the PLRP integrated several Web 2.0 technologies to provide a pilot-scale Education and Public Outreach (EPO) program targeting the internet savvy generation. The seamless integration of multiple technologies including Google Earth, Wordpress, Youtube, Twitter and Facebook, facilitated the rapid distribution of exciting and accessible science and exploration information over multiple channels. Field updates, science reports, and multimedia including videos, interactive maps, and immersive visualization were rapidly available through multiple social media channels, partly due to the ease of integration of these multiple technologies. Additionally, the successful application of videoconferencing via a readily available technology (Skype) has greatly increased the capacity of our team to conduct real-time education and public outreach from remote locations. The improved communication afforded by Web 2.0 has increased the quality of EPO provided by the PLRP, and has enabled a higher level of interaction between the science team and the community at large. Feedback from these online interactions suggest that remote communication via Web 2.0 technologies were effective tools for increasing public discourse and awareness of the science and exploration activity at Pavilion Lake.
Cyanide Suicide After Deep Web Shopping: A Case Report.
Le Garff, Erwan; Delannoy, Yann; Mesli, Vadim; Allorge, Delphine; Hédouin, Valéry; Tournel, Gilles
2016-09-01
Cyanide is a product known for its use in industrial and laboratory processes, as well as for intentional intoxication. The toxicity of cyanide is well described in humans: rapid inhibition of cellular aerobic metabolism after ingestion or inhalation leads to severe clinical effects that are frequently lethal. We report the case of a young white man found dead in a hotel room after self-poisoning with cyanide ordered on the deep Web. This case shows the probable use of a complex suicide kit including cyanide, as the lethal agent, and dextromethorphan, as a sedative and anxiolytic substance. It is an original example of the emerging role of deep Web shopping in illegal drug procurement.
Semantic integration of data on transcriptional regulation.
Baitaluk, Michael; Ponomarenko, Julia
2010-07-01
Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a 'one-stop shop' experience for users seeking information essential for deciphering and modeling gene regulatory networks. IntegromeDB, a semantic graph-based 'deep-web' data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org. Contact: baitaluk@sdsc.edu. Supplementary data are available at Bioinformatics online.
Astrophysical data mining with GPU. A case study: Genetic classification of globular clusters
NASA Astrophysics Data System (ADS)
Cavuoti, S.; Garofalo, M.; Brescia, M.; Paolillo, M.; Pescapè, A.; Longo, G.; Ventre, G.
2014-01-01
We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from our CPU serial implementation, named GAME (Genetic Algorithm Model Experiment). It was successfully tested and validated on the detection of candidate Globular Clusters in deep, wide-field, single band HST images. The GPU version of GAME will be made available to the community by integrating it into the web application DAMEWARE (DAta Mining Web Application REsource, http://dame.dsf.unina.it/beta_info.html), a public data mining service specialized on massive astrophysical data. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm leads to a speedup of a factor of 200× in the training phase with respect to the CPU based version.
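For readers unfamiliar with the approach, the following minimal genetic-algorithm sketch in Python (a toy objective, not the GAME or CUDA implementation) illustrates the population loop whose per-candidate fitness evaluations are embarrassingly parallel, which is what the GPGPU port exploits.
    # Minimal genetic-algorithm sketch (illustration only).
    import random

    def fitness(candidate):
        # Toy objective: maximize the sum of the genome.
        return sum(candidate)

    def evolve(pop_size=50, genome_len=20, generations=100, mut_rate=0.05):
        pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=fitness, reverse=True)
            parents = scored[: pop_size // 2]          # selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)  # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g + random.gauss(0, 0.1) if random.random() < mut_rate else g
                         for g in child]               # mutation
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    best = evolve()
    print(round(fitness(best), 2))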
2016-07-21
Today's internet has multiple webs. The surface web is what Google and other search engines index and pull based on links. Essentially, the surface...financial records, research and development), and personal data (medical records or legal documents). These are all deep web. Standard search engines don't
Research and Teaching About the Deep Earth
NASA Astrophysics Data System (ADS)
Williams, Michael L.; Mogk, David W.; McDaris, John
2010-08-01
Understanding the Deep Earth: Slabs, Drips, Plumes and More; Virtual Workshop, 17-19 February and 24-26 February 2010; Images and models of active faults, subducting plates, mantle drips, and rising plumes are spurring new excitement about deep-Earth processes and connections between Earth's internal systems and plate tectonics. The new results and the steady progress of Earthscope's USArray across the country are also providing a special opportunity to reach students and the general public. The pace of discoveries about the deep Earth is accelerating due to advances in experimental, modeling, and sensing technologies; new data processing capabilities; and installation of new networks, especially the EarthScope facility. EarthScope is an interdisciplinary program that combines geology and geophysics to study the structure and evolution of the North American continent. To explore the current state of deep-Earth science and ways in which it can be brought into the undergraduate classroom, 40 professors attended a virtual workshop given by On the Cutting Edge, a program that strives to improve undergraduate geoscience education through an integrated cooperative series of workshops and Web-based resources. The 6-day two-part workshop consisted of plenary talks, large and small group discussions, and development and review of new classroom and laboratory activities.
Xu, Huilei; Baroukh, Caroline; Dannenfelser, Ruth; Chen, Edward Y; Tan, Christopher M; Kou, Yan; Kim, Yujin E; Lemischka, Ihor R; Ma'ayan, Avi
2013-01-01
High content studies that profile mouse and human embryonic stem cells (m/hESCs) using various genome-wide technologies such as transcriptomics and proteomics are constantly being published. However, efforts to integrate such data to obtain a global view of the molecular circuitry in m/hESCs are lagging behind. Here, we present an m/hESC-centered database called Embryonic Stem Cell Atlas from Pluripotency Evidence integrating data from many recent diverse high-throughput studies including chromatin immunoprecipitation followed by deep sequencing, genome-wide inhibitory RNA screens, gene expression microarrays or RNA-seq after knockdown (KD) or overexpression of critical factors, immunoprecipitation followed by mass spectrometry proteomics and phosphoproteomics. The database provides web-based interactive search and visualization tools that can be used to build subnetworks and to identify known and novel regulatory interactions across various regulatory layers. The web-interface also includes tools to predict the effects of combinatorial KDs by additive effects controlled by sliders, or through simulation software implemented in MATLAB. Overall, the Embryonic Stem Cell Atlas from Pluripotency Evidence database is a comprehensive resource for the stem cell systems biology community. Database URL: http://www.maayanlab.net/ESCAPE
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-15
...-foot-wide, 20-foot-deep excavated power canal; (2) a 55-foot-long, 65-foot-wide, 8-foot-deep excavated... 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site http://www.ferc.gov/docs... Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the docket number (P-13743-000, 13753...
miRBase: integrating microRNA annotation and deep-sequencing data.
Kozomara, Ana; Griffiths-Jones, Sam
2011-01-01
miRBase is the primary online repository for all microRNA sequences and annotation. The current release (miRBase 16) contains over 15,000 microRNA gene loci in over 140 species, and over 17,000 distinct mature microRNA sequences. Deep-sequencing technologies have delivered a sharp rise in the rate of novel microRNA discovery. We have mapped reads from short RNA deep-sequencing experiments to microRNAs in miRBase and developed web interfaces to view these mappings. The user can view all read data associated with a given microRNA annotation, filter reads by experiment and count, and search for microRNAs by tissue- and stage-specific expression. These data can be used as a proxy for relative expression levels of microRNA sequences, provide detailed evidence for microRNA annotations and alternative isoforms of mature microRNAs, and allow us to revisit previous annotations. miRBase is available online at: http://www.mirbase.org/.
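The kind of filtering and relative-expression summary described above can be sketched as follows; the read mappings, experiment identifiers, and counts are invented, and the actual miRBase web interface is not reproduced here.
    # Hypothetical sketch: filter read mappings by experiment and minimum count,
    # and use counts as a proxy for relative expression within the selection.
    mappings = [
        {"mirna": "hsa-mir-21",  "experiment": "SRX001", "reads": 15000},
        {"mirna": "hsa-mir-21",  "experiment": "SRX002", "reads": 300},
        {"mirna": "hsa-let-7a",  "experiment": "SRX001", "reads": 8000},
        {"mirna": "hsa-mir-155", "experiment": "SRX001", "reads": 12},
    ]

    def filter_reads(mappings, experiment=None, min_reads=0):
        hits = [m for m in mappings
                if (experiment is None or m["experiment"] == experiment)
                and m["reads"] >= min_reads]
        total = sum(m["reads"] for m in hits) or 1
        return [{**m, "fraction": m["reads"] / total} for m in hits]

    for row in filter_reads(mappings, experiment="SRX001", min_reads=100):
        print(row["mirna"], row["reads"], round(row["fraction"], 3))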
Dining in the Deep: The Feeding Ecology of Deep-Sea Fishes
NASA Astrophysics Data System (ADS)
Drazen, Jeffrey C.; Sutton, Tracey T.
2017-01-01
Deep-sea fishes inhabit ˜75% of the biosphere and are a critical part of deep-sea food webs. Diet analysis and more recent trophic biomarker approaches, such as stable isotopes and fatty-acid profiles, have enabled the description of feeding guilds and an increased recognition of the vertical connectivity in food webs in a whole-water-column sense, including benthic-pelagic coupling. Ecosystem modeling requires data on feeding rates; the available estimates indicate that deep-sea fishes have lower per-individual feeding rates than coastal and epipelagic fishes, but the overall predation impact may be high. A limited number of studies have measured the vertical flux of carbon by mesopelagic fishes, which appears to be substantial. Anthropogenic activities are altering deep-sea ecosystems and their services, which are mediated by trophic interactions. We also summarize outstanding data gaps.
[Oncologic gynecology and the Internet].
Gizler, Robert; Bielanów, Tomasz; Kulikiewicz, Krzysztof
2002-11-01
A strategy for searching the World Wide Web for medical sites was presented in this article. Both "deep web" and "surface web" resources were searched. The 10 best sites related to gynecological oncology, in the authors' opinion, were presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-30
... wide by 50 feet long by 30 feet deep; (3) the existing 50-foot-long by 20-foot-wide by 30-foot- deep... Commission's Web site ( http://www.ferc.gov/docs-filing/ferconline.asp ) under the ``eFiling'' link. For a... 20426. For more information on how to submit these types of filings please go to the Commission's Web...
DEEPWATER AND NEARSHORE FOOD WEB CHARACTERIZATIONS IN LAKE SUPERIOR
Due to the difficulty associated with sampling deep aquatic systems, food web relationships among deepwater fauna are often poorly known. We are characterizing nearshore versus offshore habitats in the Great Lakes and investigating food web linkages among profundal, pelagic, and ...
Semantically-enabled Knowledge Discovery in the Deep Carbon Observatory
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.; Ma, X.; Erickson, J. S.; West, P.; Fox, P. A.
2013-12-01
The Deep Carbon Observatory (DCO) is a decadal effort aimed at transforming scientific and public understanding of carbon in the complex deep earth system from the perspectives of Deep Energy, Deep Life, Extreme Physics and Chemistry, and Reservoirs and Fluxes. Over the course of the decade DCO scientific activities will generate a massive volume of data across a variety of disciplines, presenting significant challenges in terms of data integration, management, analysis and visualization, and ultimately limiting the ability of scientists across disciplines to make insights and unlock new knowledge. The DCO Data Science Team (DCO-DS) is applying Semantic Web methodologies to construct a knowledge representation focused on the DCO Earth science disciplines, and use it together with other technologies (e.g. natural language processing and data mining) to create a more expressive representation of the distributed corpus of DCO artifacts including datasets, metadata, instruments, sensors, platforms, deployments, researchers, organizations, funding agencies, grants and various awards. The embodiment of this knowledge representation is the DCO Data Science Infrastructure, in which unique entities within the DCO domain and the relations between them are recognized and explicitly identified. The DCO-DS Infrastructure will serve as a platform for more efficient and reliable searching, discovery, access, and publication of information and knowledge for the DCO scientific community and beyond.
NASA Astrophysics Data System (ADS)
Lu, H.; Yi, D.
2010-12-01
Deep exploration is one of the important approaches to geoscience research. Work in this area began in the 1980s and has since produced a large volume of data. Researchers usually integrate data from both surface and deep exploration to study geological structures and represent the Earth's subsurface, and then analyze and interpret the integrated data. Because the exploration methods differ, the resulting data are heterogeneous, and data access has therefore long been a confusing issue for researchers. The problem of data sharing and interaction had to be solved during the development of the SinoProbe research project. Drawing on well-known domestic and international exploration projects and geoscience data platforms, this work explores a solution for data sharing and interaction. Based on a service-oriented architecture (SOA), we present a deep-exploration data-sharing framework that comprises three levels: the data level handles data storage and the integration of heterogeneous data; the middle level provides geophysics, geochemistry and other data services by means of Web services, and supports application composition using GIS middleware and Eclipse RCP; the interaction level gives professional and non-professional users access to data of different accuracy. The framework adopts the GeoSciML approach to data interaction. GeoSciML is a geoscience information markup language built as an application of the OpenGIS Consortium's (OGC) Geography Markup Language (GML). It maps heterogeneous data into a common Earth reference frame and enables inter-operation. In this article we discuss how heterogeneous data are integrated and shared in the SinoProbe project.
Dancing girl flap: a new flap suitable for web release.
Shinya, K
1999-12-01
To create a deep web, a flap must be designed to have a high elongation effect in one direction along the mid-lateral line of the finger and also to have a shortening effect in the other direction, crossing at a right angle to the mid-lateral line. The dancing girl flap is a modification of a four-flap Z-plasty with two additional Z-plasties. It has a high elongation effect in one direction (>550%) and a shortening effect in the other direction at a right angle (<33%), creating a deep, U-shaped surface. This new flap can be used to release severe scar contracture with a web, and is most suitable for incomplete syndactyly with webs as high as the proximal interphalangeal joint.
Domain-specific Web Service Discovery with Service Class Descriptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rocco, D; Caverlee, J; Liu, L
2005-02-14
This paper presents DynaBot, a domain-specific web service discovery system. The core idea of the DynaBot service discovery system is to use domain-specific service class descriptions powered by an intelligent Deep Web crawler. In contrast to current registry-based service discovery systems--like the several available UDDI registries--DynaBot promotes focused crawling of the Deep Web of services and discovers candidate services that are relevant to the domain of interest. It uses intelligent filtering algorithms to match services found by focused crawling with the domain-specific service class descriptions. We demonstrate the capability of DynaBot through the BLAST service discovery scenario and describe our initial experience with DynaBot.
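A rough sketch of the matching step, under the assumption that a service class description lists required inputs and domain keywords (the scoring weights, names, and threshold are invented; DynaBot's filtering algorithms are more elaborate):
    # Illustrative scoring of a crawled candidate service against a
    # domain-specific service class description.
    service_class = {
        "domain": "sequence similarity search",
        "required_inputs": {"sequence", "database"},
        "keywords": {"blast", "alignment", "nucleotide", "protein"},
    }

    def match_score(candidate, service_class):
        inputs = set(candidate.get("inputs", []))
        words = set(candidate.get("description", "").lower().split())
        input_cov = len(inputs & service_class["required_inputs"]) / len(service_class["required_inputs"])
        kw_cov = len(words & service_class["keywords"]) / len(service_class["keywords"])
        return 0.6 * input_cov + 0.4 * kw_cov

    candidate = {
        "url": "http://example.org/blast",
        "inputs": ["sequence", "database", "evalue"],
        "description": "BLAST nucleotide and protein alignment service",
    }
    print(round(match_score(candidate, service_class), 2))  # keep if above a chosen threshold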
NASA Astrophysics Data System (ADS)
Ruby, C.; Skarke, A. D.; Mesick, S.
2016-02-01
The Coastal and Marine Ecological Classification Standard (CMECS) is a network of common nomenclature that provides a comprehensive framework for organizing physical, biological, and chemical information about marine ecosystems. It was developed by the National Oceanic and Atmospheric Administration (NOAA) Coastal Services Center, in collaboration with other federal agencies and academic institutions, as a means for scientists to more easily access, compare, and integrate marine environmental data from a wide range of sources and time frames. CMECS has been endorsed by the Federal Geographic Data Committee (FGDC) as a national metadata standard. The research presented here is focused on the application of CMECS to deep-sea video and environmental data collected by the NOAA ROV Deep Discoverer and the NOAA Ship Okeanos Explorer in the Gulf of Mexico in 2011-2014. Specifically, a spatiotemporal index of the physical, chemical, biological, and geological features observed in ROV video records was developed in order to allow scientists, otherwise unfamiliar with the specific content of existing video data, to rapidly determine the abundance and distribution of features of interest, and thus evaluate the applicability of those video data to their research. CMECS units (setting, component, or modifier) for seafloor images extracted from high-definition ROV video data were established based upon visual assessment as well as analysis of coincident environmental sensor (temperature, conductivity), navigation (ROV position, depth, attitude), and log (narrative dive summary) data. The resulting classification units were integrated into easily searchable textual and geo-databases as well as an interactive web map. The spatial distribution and associations of deep-sea habitats as indicated by CMECS classifications are described, and optimized methodological approaches for application of CMECS to deep-sea video and environmental data are presented.
NOAA Operational Tsunameter Support for Research
NASA Astrophysics Data System (ADS)
Bouchard, R.; Stroker, K.
2008-12-01
In March 2008, the National Oceanic and Atmospheric Administration's (NOAA) National Data Buoy Center (NDBC) completed the deployment of the last of the 39-station network of deep-sea tsunameters. As part of NOAA's effort to strengthen tsunami warning capabilities, NDBC expanded the network from 6 to 39 stations and upgraded all stations to the second generation Deep-ocean Assessment and Reporting of Tsunamis technology (DART II). Consisting of a bottom pressure recorder (BPR) and a surface buoy, the tsunameters deliver water-column heights, estimated from pressure measurements at the sea floor, to Tsunami Warning Centers in less than 3 minutes. This network provides coastal communities in the Pacific, Atlantic, Caribbean, and the Gulf of Mexico with faster and more accurate tsunami warnings. In addition, both the coarse resolution real-time data and the high resolution (15-second) recorded data provide invaluable contributions to research, such as the detection of the 2004 Sumatran tsunami in the Northeast Pacific (Gower and González, 2006) and the experimental tsunami forecast system (Bernard et al., 2007). NDBC normally recovers the BPRs every 24 months and sends the recovered high resolution data to NOAA's National Geophysical Data Center (NGDC) for archive and distribution. NGDC edits and processes this raw binary format to obtain research-quality data. NGDC provides access to retrospective BPR data from 1986 to the present. The DART database includes pressure and temperature data from the ocean floor, stored in a relational database, enabling data integration with the global tsunami and significant earthquake databases. All data are accessible via the Web as tables, reports, interactive maps, OGC Web Map Services (WMS), and Web Feature Services (WFS) to researchers around the world. References: Gower, J. and F. González, 2006. U.S. Warning System Detected the Sumatra Tsunami, Eos Trans. AGU, 87(10). Bernard, E. N., C. Meinig, and A. Hilton, 2007. Deep Ocean Tsunami Detection: Third Generation DART, Eos Trans. AGU, 88(52), Fall Meet. Suppl., Abstract S51C-03.
The deep lymphatic anatomy of the hand.
Ma, Chuan-Xiang; Pan, Wei-Ren; Liu, Zhi-An; Zeng, Fan-Qiang; Qiu, Zhi-Qiang
2018-07-01
The deep lymphatic anatomy of the hand still remains the least described in medical literature. Eight hands were harvested from four nonembalmed human cadavers amputated above the wrist. A small amount of 6% hydrogen peroxide was employed to detect the lymphatic vessels around the superficial and deep palmar vascular arches, in webs from the index to little fingers, the thenar and hypothenar areas. A 30-gauge needle was inserted into the vessels and injected with a barium sulphate compound. Each specimen was dissected, photographed and radiographed to demonstrate deep lymphatic distribution of the hand. Five groups of deep collecting lymph vessels were found in the hand: superficial palmar arch lymph vessel (SPALV); deep palmar arch lymph vessel (DPALV); thenar lymph vessel (TLV); hypothenar lymph vessel (HTLV); deep finger web lymph vessel (DFWLV). Each group of vessels drained in different directions first, then all turned and ran towards the wrist in different layers. The deep lymphatic drainage of the hand has been presented. The results will provide an anatomical basis for clinical management, educational reference and scientific research. Copyright © 2018 Elsevier GmbH. All rights reserved.
Compilation and network analyses of cambrian food webs.
Dunne, Jennifer A; Williams, Richard J; Martinez, Neo D; Wood, Rachel A; Erwin, Douglas H
2008-04-29
A rich body of empirically grounded theory has developed about food webs--the networks of feeding relationships among species within habitats. However, detailed food-web data and analyses are lacking for ancient ecosystems, largely because of the low resolution of taxa coupled with uncertain and incomplete information about feeding interactions. These impediments appear insurmountable for most fossil assemblages; however, a few assemblages with excellent soft-body preservation across trophic levels are candidates for food-web data compilation and topological analysis. Here we present plausible, detailed food webs for the Chengjiang and Burgess Shale assemblages from the Cambrian Period. Analyses of degree distributions and other structural network properties, including sensitivity analyses of the effects of uncertainty associated with Cambrian diet designations, suggest that these early Paleozoic communities share remarkably similar topology with modern food webs. Observed regularities reflect a systematic dependence of structure on the numbers of taxa and links in a web. Most aspects of Cambrian food-web structure are well-characterized by a simple "niche model," which was developed for modern food webs and takes into account this scale dependence. However, a few aspects of topology differ between the ancient and recent webs: longer path lengths between species and more species in feeding loops in the earlier Chengjiang web, and higher variability in the number of links per species for both Cambrian webs. Our results are relatively insensitive to the exclusion of low-certainty or random links. The many similarities between Cambrian and recent food webs point toward surprisingly strong and enduring constraints on the organization of complex feeding interactions among metazoan species. The few differences could reflect a transition to more strongly integrated and constrained trophic organization within ecosystems following the rapid diversification of species, body plans, and trophic roles during the Cambrian radiation. More research is needed to explore the generality of food-web structure through deep time and across habitats, especially to investigate potential mechanisms that could give rise to similar structure, as well as any differences.
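The "niche model" mentioned above can be sketched compactly; the formulation below follows the commonly cited version (a niche value per species, a feeding range drawn from a beta distribution tuned to the target connectance, and a range centre), with purely illustrative parameter values rather than the Cambrian data.
    # Sketch of the niche model for generating food-web structure.
    import random

    def niche_model(n_species=30, connectance=0.1, seed=1):
        random.seed(seed)
        beta = 1.0 / (2.0 * connectance) - 1.0          # sets expected connectance
        niche = sorted(random.random() for _ in range(n_species))
        links = set()
        for i, n_i in enumerate(niche):
            r_i = n_i * random.betavariate(1.0, beta)   # feeding range
            c_i = random.uniform(r_i / 2.0, n_i)        # centre of the range
            for j, n_j in enumerate(niche):
                if c_i - r_i / 2.0 <= n_j <= c_i + r_i / 2.0:
                    links.add((j, i))                   # species i consumes j
        return niche, links

    species, links = niche_model()
    print(len(species), "species,", len(links), "feeding links")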
Colombo, Cinzia; Mosconi, Paola; Confalonieri, Paolo; Baroni, Isabella; Traversa, Silvia; Hill, Sophie J; Synnot, Anneliese J; Oprandi, Nadia; Filippini, Graziella
2014-07-24
Multiple sclerosis (MS) patients and their family members increasingly seek health information on the Internet. There has been little exploration of how MS patients integrate health information with their needs, preferences, and values for decision making. The INtegrating and Deriving Evidence, Experiences, and Preferences (IN-DEEP) project is a collaboration between Italian and Australian researchers and MS patients, aimed at making high-quality evidence accessible and meaningful to MS patients and families by developing a Web-based resource of evidence-based information starting from their information needs. The objective of this study was to analyze MS patients' and their family members' experience with Web-based health information, to evaluate how they assess this information, and how they integrate health information with personal values. We organized 6 focus groups, 3 with MS patients and 3 with family members, in the Northern, Central, and Southern parts of Italy (April-June 2011). They included 40 MS patients aged between 18 and 60, diagnosed as having MS at least 3 months earlier, and 20 family members aged 18 and over who were relatives of a person diagnosed with MS at least 3 months earlier. The focus groups were audio-recorded and transcribed verbatim (Atlas software, V 6.0). Data were analyzed from a conceptual point of view through a coding system. An online forum was hosted by the Italian MS society on its Web platform to widen the collection of information. Nine questions were posted covering searching behavior, use of Web-based information, and the truthfulness of Web information. At the end, posts were downloaded and transcribed. Information needs covered comprehensive communication of diagnosis, prognosis, and adverse events of treatments; MS causes or risk factors; new drugs; and practical and lifestyle-related information. The Internet is considered useful by MS patients; however, at the beginning or in a later stage of the disease, a refusal to actively search for information could occur. Participants typically searched the Web before or after their neurologist's visit or when a new therapy was proposed. Social networks are widely used to read others' stories and retrieve information about daily management. A critical issue was the difficulty of recognizing reliable information on the Web. Many sources were used, but the neurologist was mostly the final source of treatment decisions. MS patients used the Internet as a tool to integrate information about the illness. Information needs covered a wide spectrum, and the topics searched changed with progression of the disease. Criteria for evaluating Internet accuracy and credibility of information were often lacking or generic. This may limit the empowerment of patients in health care choices.
NASA Astrophysics Data System (ADS)
Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.
2011-06-01
Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
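The two integration options can be contrasted with a toy sketch (provider names and contents invented): a federated search queries every provider at request time, whereas a central cache is harvested once and then searched locally, which is the performance argument made above.
    # Federated search vs. central cache, in miniature.
    providers = {
        "db_A": [{"species": "Dreissena polymorpha", "country": "US"}],
        "db_B": [{"species": "Lythrum salicaria", "country": "CA"}],
    }

    def federated_search(term):
        # One round-trip to every provider for every user query.
        return [rec for records in providers.values()
                for rec in records if term.lower() in rec["species"].lower()]

    cache = [rec for records in providers.values() for rec in records]  # harvested once

    def cached_search(term):
        # Later queries touch only the local copy.
        return [rec for rec in cache if term.lower() in rec["species"].lower()]

    print(federated_search("dreissena"))
    print(cached_search("lythrum"))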
Geoinformatics in the public service: building a cyberinfrastructure across the geological surveys
Allison, M. Lee; Gundersen, Linda C.; Richard, Stephen M.; Keller, G. Randy; Baru, Chaitanya
2011-01-01
Advanced information technology infrastructure is increasingly being employed in the Earth sciences to provide researchers with efficient access to massive central databases and to integrate diversely formatted information from a variety of sources. These geoinformatics initiatives enable manipulation, modeling and visualization of data in a consistent way, and are helping to develop integrated Earth models at various scales, and from the near surface to the deep interior. This book uses a series of case studies to demonstrate computer and database use across the geosciences. Chapters are thematically grouped into sections that cover data collection and management; modeling and community computational codes; visualization and data representation; knowledge management and data integration; and web services and scientific workflows. Geoinformatics is a fascinating and accessible introduction to this emerging field for readers across the solid Earth sciences and an invaluable reference for researchers interested in initiating new cyberinfrastructure projects of their own.
Automating Mid- and Long-Range Scheduling for NASA's Deep Space Network
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Sorensen, Sugi; Tay, Peter; Carruth, Butch; Coffman, Adam; Wallace, Mike
2012-01-01
NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S³. This system is architected as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users who utilize the DSN (representing 37 projects including international partners and ground-based science and calibration users). The initial implementation of S³ is complete and the system has been operational since July 2011. S³ has been used for negotiating schedules since April 2011, including the baseline schedules for three launching missions in late 2011. S³ supports a distributed scheduling model, in which changes can potentially be made by multiple users based on multiple schedule "workspaces" or versions of the schedule. This has led to several challenges in the design of the scheduling database, and of a change proposal workflow that allows users to concur with or to reject proposed schedule changes, and then counter-propose with alternative or additional suggested changes. This paper describes some key aspects of the S³ system and lessons learned from its operational deployment to date, focusing on the challenges of multi-user collaborative scheduling in a practical and mission-critical setting. We will also describe the ongoing project to extend S³ to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.
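As a sketch of the collaborative change-proposal workflow described above (the data structure, field names, and states are hypothetical, not the actual S³ schema), a proposal collects concurrences or rejections from the affected users before it can be applied to the master schedule.
    # Hypothetical change-proposal record with concur/reject tracking.
    from dataclasses import dataclass, field

    @dataclass
    class ChangeProposal:
        activity_id: str
        proposed_start: str
        affected_users: set
        responses: dict = field(default_factory=dict)   # user -> "concur" | "reject"

        def respond(self, user, decision):
            if user in self.affected_users:
                self.responses[user] = decision

        def status(self):
            if any(d == "reject" for d in self.responses.values()):
                return "rejected (counter-proposal expected)"
            if self.affected_users <= set(self.responses):
                return "accepted"
            return "pending"

    cp = ChangeProposal("track-042", "2011-09-01T03:00Z", {"MSL", "GRAIL"})
    cp.respond("MSL", "concur")
    print(cp.status())      # pending until all affected users respond
    cp.respond("GRAIL", "concur")
    print(cp.status())      # accepted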
DCO-VIVO: A Collaborative Data Platform for the Deep Carbon Science Communities
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.; West, P.; Erickson, J. S.; Ma, X.; Fox, P. A.
2014-12-01
The Deep Carbon Observatory (DCO) is a decade-long scientific endeavor to understand carbon in the complex deep Earth system. Thousands of DCO scientists from institutions across the globe are organized into communities representing four domains of exploration: Extreme Physics and Chemistry, Reservoirs and Fluxes, Deep Energy, and Deep Life. Cross-community and cross-disciplinary collaboration is one of the most distinctive features of DCO's flexible research framework. VIVO is an open-source Semantic Web platform that facilitates cross-institutional researcher and research discovery. It includes a number of standard ontologies that interconnect people, organizations, publications, activities, locations, and other entities of research interest to enable browsing, searching, visualizing, and generating Linked Open (research) Data. The DCO-VIVO solution expedites research collaboration between DCO scientists and communities. Based on DCO's specific requirements, the DCO Data Science team developed a series of extensions to the VIVO platform, including an extended VIVO information model, extended querying over the semantic information within VIVO, integration with other open-source collaborative environments and data management systems, single sign-on, assignment of unique Handles to DCO objects, and extensions for ingesting publications and datasets from existing publication systems. We present here the iterative development of these capabilities, which are now in daily use by the DCO community of scientists for research reporting, information sharing, and resource discovery in support of research activities and program management.
BIOMETORE Project - Studying the Biodiversity in the Northeastern Atlantic Seamounts
NASA Astrophysics Data System (ADS)
Dos Santos, A.; Biscoito, M.; Campos, A.; Tuaty Guerra, M.; Meneses, G.; Santos, A. M. P. A.
2016-02-01
Understanding deep-sea ecosystem functioning is a key issue in ocean sciences. Bringing together researchers from several scientific domains, the BIOMETORE project aims to increase knowledge of deep-sea ecosystems and biodiversity at the Atlantic seamounts of the Madeira-Tore and Great Meteor geological complexes. The project outputs will provide important information for the understanding and sustainable management of the target seamount ecosystems, thus contributing to filling knowledge gaps in their biodiversity, from bacteria to mammals, and food webs, as well as to promoting future sustainable fisheries and sea-floor integrity. The plan includes the realization of eight multidisciplinary surveys, four conducted during the summer of 2015 and another four planned for the same season of 2016, at target seamounts: the Gorringe Bank, the Josephine, and others in the Madeira-Tore, and selected seamounts in the Great Meteor complex (northeastern Atlantic Ocean). The surveys cover a number of scientific areas in the domains of oceanography, ecology, integrative taxonomy, geology, fisheries and spatial mapping. We present and discuss BIOMETORE developments, the preliminary results from the four 2015 summer surveys, and the planning of the next four surveys.
ERIC Educational Resources Information Center
Lagoze, Carl; Neylon, Eamonn; Mooney, Stephen; Warnick, Walter L.; Scott, R. L.; Spence, Karen J.; Johnson, Lorrie A.; Allen, Valerie S.; Lederman, Abe
2001-01-01
Includes four articles that discuss Dublin Core metadata, digital rights management and electronic books, including interoperability; and directed query engines, a type of search engine designed to access resources on the deep Web that is being used at the Department of Energy. (LRW)
76 FR 67456 - Common Formats for Patient Safety Data Collection and Event Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-01
... Common Formats, can be accessed electronically at the following HHS Web site: http://www.PSO.AHRQ.gov... Thromboembolism (VTE), which includes Deep Vein Thrombosis (DVT) and Pulmonary Embolism (PE), will apply to both... available at the PSO Privacy Protection Center (PPC) Web site: https://www.psoppc.org/web/patientsafety...
Marčan, Marija; Pavliha, Denis; Kos, Bor; Forjanič, Tadeja; Miklavčič, Damijan
2015-01-01
Treatments based on electroporation are a new and promising approach to treating tumors, especially non-resectable ones. The success of the treatment is, however, heavily dependent on coverage of the entire tumor volume with a sufficiently high electric field. Ensuring complete coverage in the case of deep-seated tumors is not trivial and is best ensured by patient-specific treatment planning. The basis of the treatment planning process consists of two complex tasks: medical image segmentation, and numerical modeling and optimization. In addition to previously developed segmentation algorithms for several tissues (human liver, hepatic vessels, bone tissue and canine brain) and the algorithms for numerical modeling and optimization of treatment parameters, we developed a web-based tool to facilitate the translation of the algorithms and their application in the clinic. The developed web-based tool automatically builds a 3D model of the target tissue from the medical images uploaded by the user and then uses this 3D model to optimize treatment parameters. The tool enables the user to validate the results of the automatic segmentation and make corrections if necessary before delivering the final treatment plan. Evaluation of the tool was performed by five independent experts from four different institutions. During the evaluation, we gathered data concerning user experience and measured performance times for different components of the tool. Both user reports and performance times show a significant reduction in treatment-planning complexity and in time consumption, from 1-2 days to a few hours. The presented web-based tool is intended to facilitate the treatment planning process and reduce the time needed for it. It is crucial for facilitating expansion of electroporation-based treatments in the clinic and ensuring reliable treatment for patients. The additional value of the tool is the possibility of easy upgrade and integration of modules with new functionalities as they are developed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-12
... following methods: Government-wide rulemaking Web site: http://www.regulations.gov . Follow the instructions... irrigation system improvements outlined in this plan will provide more efficient use of this water. Deep... reduction of excess deep percolation passing below the plant root zone. Deep percolation of irrigation water...
ERIC Educational Resources Information Center
Wighting, Mervyn J.; Lucking, Robert A.; Christmann, Edwin P.
2004-01-01
Teachers search for ways to enhance oceanography units in the classroom. There are many online resources available to help one explore the mysteries of the deep. This article describes a collection of Web sites on this topic appropriate for middle level classrooms.
Semantic Annotations and Querying of Web Data Sources
NASA Astrophysics Data System (ADS)
Hornung, Thomas; May, Wolfgang
A large part of the Web, actually holding a significant portion of the useful information throughout the Web, consists of views on hidden databases, provided by numerous heterogeneous interfaces that are partly human-oriented via Web forms ("Deep Web"), and partly based on Web Services (only machine accessible). In this paper we present an approach for annotating these sources in a way that makes them citizens of the Semantic Web. We illustrate how queries can be stated in terms of the ontology, and how the annotations are used to select and access appropriate sources and to answer the queries.
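A minimal sketch of such an annotation and an ontology-level query, here using the rdflib library and invented URIs (the paper does not prescribe a particular implementation):
    # Annotate a Deep Web form and select matching sources with SPARQL.
    from rdflib import Graph, Namespace, Literal, URIRef

    EX = Namespace("http://example.org/annot#")
    g = Graph()
    src = URIRef("http://example.org/sources/flight-form")
    g.add((src, EX.providesConcept, EX.FlightConnection))
    g.add((src, EX.requiresInput, EX.DepartureAirport))
    g.add((src, EX.accessMethod, Literal("web form")))

    q = """
    PREFIX ex: <http://example.org/annot#>
    SELECT ?source WHERE {
        ?source ex:providesConcept ex:FlightConnection ;
                ex:requiresInput ex:DepartureAirport .
    }
    """
    for row in g.query(q):
        print(row.source)   # sources whose annotations satisfy the ontology-level query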
NASA Astrophysics Data System (ADS)
Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme
2016-04-01
We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright, spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
A Holistic, Similarity-Based Approach for Personalized Ranking in Web Databases
ERIC Educational Resources Information Center
Telang, Aditya
2011-01-01
With the advent of the Web, the notion of "information retrieval" has acquired a completely new connotation and currently encompasses several disciplines ranging from traditional forms of text and data retrieval in unstructured and structured repositories to retrieval of static and dynamic information from the contents of the surface and deep Web.…
Inside the Web: A Look at Digital Libraries and the Invisible/Deep Web
ERIC Educational Resources Information Center
Su, Mila C.
2009-01-01
The evolution of the Internet and the World Wide Web continually exceeds expectations with the "swift pace" of technological innovations. Information is added, and just as quickly becomes outdated at a rapid pace. Researchers have found that Digital materials can provide access to primary source materials and connect the researcher to institutions…
NASA Technical Reports Server (NTRS)
Royster, D. M.; Davis, R. C.; Shinn, J. M., Jr.; Bales, T. T.; Wiant, H. R.
1985-01-01
A study was made to investigate the feasibility of superplastically forming corrugated panels with beaded webs and to demonstrate the structural integrity of these panels by testing. The test panels in the study consist of superplastically formed titanium alloy Ti-6Al-4V half-hat elements that are joined by weld-brazing to titanium alloy Ti-6Al-4V caps to form either single-corrugation compression panels or multiple-corrugation compression panels. Stretching and subsequent thinning of the titanium sheet during superplastic forming is reduced by approximately 35 percent with a shallow half-hat die concept instead of a deep die concept and results in a more uniform thickness across the beaded webs. The complete panels are tested in end compression at room temperature and the results compared with analysis. The heavily loaded panels failed at loads approaching the yield strength of the titanium material. At maximum load, the caps wrinkled locally accompanied with separation of the weld-braze joint in the wrinkle. None of the panels tested, however, failed catastrophically in the weld-braze joint. Experimental test results are in good agreement with structural analysis of the panels.
Korvigo, Ilia; Afanasyev, Andrey; Romashchenko, Nikolay; Skoblov, Mikhail
2018-01-01
Many automatic classifiers were introduced to aid inference of phenotypical effects of uncategorised nsSNVs (nonsynonymous Single Nucleotide Variations) in theoretical and medical applications. Lately, several meta-estimators have been proposed that combine different predictors, such as PolyPhen and SIFT, to integrate more information in a single score. Although many advances have been made in feature design and machine learning algorithms used, the shortage of high-quality reference data along with the bias towards intensively studied in vitro models call for improved generalisation ability in order to further increase classification accuracy and handle records with insufficient data. Since a meta-estimator basically combines different scoring systems with highly complicated nonlinear relationships, we investigated how deep learning (supervised and unsupervised), which is particularly efficient at discovering hierarchies of features, can improve classification performance. While it is believed that one should only use deep learning for high-dimensional input spaces and other models (logistic regression, support vector machines, Bayesian classifiers, etc.) for simpler inputs, we still believe that the ability of neural networks to discover intricate structure in highly heterogeneous datasets can aid a meta-estimator. We compare the performance with various popular predictors, many of which are recommended by the American College of Medical Genetics and Genomics (ACMG), as well as available deep learning-based predictors. Thanks to hardware acceleration we were able to use a computationally expensive genetic algorithm to stochastically optimise hyper-parameters over many generations. Overfitting was hindered by noise injection and dropout, limiting coadaptation of hidden units. Although we stress that this work was not conceived as a tool comparison, but rather an exploration of the possibilities of deep learning application in ensemble scores, our results show that even relatively simple modern neural networks can significantly improve both prediction accuracy and coverage. We provide open access to our best model via the website: http://score.generesearch.ru/services/badmut/.
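As a purely illustrative meta-estimator (not the authors' network), a small neural classifier can combine scores from existing predictors such as PolyPhen and SIFT into a single score; the data below are synthetic and scikit-learn is assumed as the implementation library.
    # Toy meta-estimator: a small MLP over two predictor scores.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    polyphen = rng.uniform(0, 1, n)          # higher = more damaging (synthetic)
    sift = rng.uniform(0, 1, n)              # lower = more damaging (synthetic)
    X = np.column_stack([polyphen, sift])
    y = ((polyphen - sift + rng.normal(0, 0.2, n)) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    meta = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
    meta.fit(X_tr, y_tr)
    print("held-out accuracy:", round(meta.score(X_te, y_te), 3))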
The MIND PALACE: A Multi-Spectral Imaging and Spectroscopy Database for Planetary Science
NASA Astrophysics Data System (ADS)
Eshelman, E.; Doloboff, I.; Hara, E. K.; Uckert, K.; Sapers, H. M.; Abbey, W.; Beegle, L. W.; Bhartia, R.
2017-12-01
The Multi-Instrument Database (MIND) is the web-based home to a well-characterized set of analytical data collected by a suite of deep-UV fluorescence/Raman instruments built at the Jet Propulsion Laboratory (JPL). Samples derive from a growing body of planetary surface analogs, mineral and microbial standards, meteorites, spacecraft materials, and other astrobiologically relevant materials. In addition to deep-UV spectroscopy, datasets stored in MIND are obtained from a variety of analytical techniques obtained over multiple spatial and spectral scales including electron microscopy, optical microscopy, infrared spectroscopy, X-ray fluorescence, and direct fluorescence imaging. Multivariate statistical analysis techniques, primarily Principal Component Analysis (PCA), are used to guide interpretation of these large multi-analytical spectral datasets. Spatial co-referencing of integrated spectral/visual maps is performed using QGIS (geographic information system software). Georeferencing techniques transform individual instrument data maps into a layered co-registered data cube for analysis across spectral and spatial scales. The body of data in MIND is intended to serve as a permanent, reliable, and expanding database of deep-UV spectroscopy datasets generated by this unique suite of JPL-based instruments on samples of broad planetary science interest.
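A minimal sketch of the kind of PCA used to summarize large multi-analytical spectral datasets (the spectra below are synthetic, not MIND data; scikit-learn is assumed):
    # PCA over a stack of synthetic spectra.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(42)
    n_samples, n_channels = 60, 500
    base = np.exp(-0.5 * ((np.arange(n_channels) - 250) / 20.0) ** 2)   # shared band
    spectra = (rng.uniform(0.5, 1.5, (n_samples, 1)) * base
               + rng.normal(0, 0.02, (n_samples, n_channels)))          # noise

    pca = PCA(n_components=3)
    scores = pca.fit_transform(spectra)
    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
    print("scores shape:", scores.shape)   # one low-dimensional point per spectrum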
Structure, functioning, and cumulative stressors of Mediterranean deep-sea ecosystems
NASA Astrophysics Data System (ADS)
Tecchio, Samuele; Coll, Marta; Sardà, Francisco
2015-06-01
Environmental stressors, such as climate fluctuations, and anthropogenic stressors, such as fishing, are of major concern for the management of deep-sea ecosystems. Deep-water habitats are limited by primary productivity and are mainly dependent on the vertical input of organic matter from the surface. Global change over the latest decades is imparting variations in primary productivity levels across oceans, and thus it has an impact on the amount of organic matter landing on the deep seafloor. In addition, anthropogenic impacts are now reaching the deep ocean. The Mediterranean Sea, the largest enclosed basin on the planet, is not an exception. However, ecosystem-level studies of response to varying food input and anthropogenic stressors on deep-sea ecosystems are still scant. We present here a comparative ecological network analysis of three food webs of the deep Mediterranean Sea, with contrasting trophic structure. After modelling the flows of these food webs with the Ecopath with Ecosim approach, we compared indicators of network structure and functioning. We then developed temporal dynamic simulations varying the organic matter input to evaluate its potential effect. Results show that, following the west-to-east gradient in the Mediterranean Sea of marine snow input, organic matter recycling increases, net production decreases to negative values and trophic organisation is overall reduced. The levels of food-web activity followed the gradient of organic matter availability at the seafloor, confirming that deep-water ecosystems directly depend on marine snow and are therefore influenced by variations of energy input, such as climate-driven changes. In addition, simulations of varying marine snow arrival at the seafloor, combined with the hypothesis of a possible fishery expansion on the lower continental slope in the western basin, evidence that the trawling fishery may pose an impact which could be an order of magnitude stronger than a climate-driven reduction of marine snow.
Why Is My Voice Changing? (For Teens)
... enter puberty earlier or later than others. How Deep Will My Voice Get? How deep a guy's voice gets depends on his genes: ...
33 CFR 401.2 - Interpretation.
Code of Federal Regulations, 2014 CFR
2014-07-01
...: (a) Corporation means the Saint Lawrence Seaway Development Corporation; (b) E-business means web applications on the St. Lawrence Seaway Management Corporation Web site which provides direct electronic...) Seaway means the deep waterway between the Port of Montreal and Lake Erie and includes all locks, canals...
RaptorX-Property: a web server for protein structure property prediction.
Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo
2016-07-08
RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
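For reference, the Q3 metric quoted above is simply the fraction of residues whose 3-state secondary-structure label (H, E, C) is predicted correctly; a worked example with made-up sequences:
    # Q3 accuracy for a toy pair of true and predicted secondary-structure strings.
    true_ss = "CCHHHHHHCCEEEEECCC"
    pred_ss = "CCHHHHHCCCEEEEECCH"

    def q3(true_ss, pred_ss):
        assert len(true_ss) == len(pred_ss)
        correct = sum(t == p for t, p in zip(true_ss, pred_ss))
        return correct / len(true_ss)

    print(round(q3(true_ss, pred_ss), 3))   # 0.889 for this toy pair (16 of 18 residues)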
ERIC Educational Resources Information Center
Rodicio, Héctor García
2015-01-01
When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…
50 CFR 679.21 - Prohibited species bycatch management.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Region Web site (http://alaskafisheries.noaa.gov/). (c) Salmon taken in the BS pollock fisheries... GOA groundfish species or species group. (B) Deep-water species fishery. Fishing with trawl gear... the NMFS Alaska Region Web site (http://alaskafisheries.noaa.gov/): (A) The Chinook salmon PSC...
50 CFR 679.21 - Prohibited species bycatch management.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Region Web site (http://alaskafisheries.noaa.gov/). (c) Salmon taken in the BS pollock fisheries... GOA groundfish species or species group. (B) Deep-water species fishery. Fishing with trawl gear... the NMFS Alaska Region Web site (http://alaskafisheries.noaa.gov/): (A) The Chinook salmon PSC...
[Study on Information Extraction of Clinic Expert Information from Hospital Portals].
Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li
2015-12-01
Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to classify the search forms. This paper proposes a novel method based on a domain model, a tree structure constructed from the attributes of search interfaces. With this model, search interfaces can be classified into a domain and filled in with domain keywords. Another challenge is to extract information from the web pages returned by the search interfaces. To filter the noise on a web page, a block importance model is proposed. The experimental results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.
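The domain model described above classifies a deep-web search form by matching its interface attributes against a domain vocabulary. The following Python sketch illustrates the general idea with flat attribute sets and a made-up threshold; the paper's actual model is a tree structure, so this is a simplified stand-in rather than the published method.

```python
# Minimal sketch of classifying a deep-web search form to a domain by
# matching its input-field labels against domain attribute vocabularies.
# The flat attribute sets and the 0.5 threshold are assumptions made for
# illustration; the paper's domain model is a tree built from search
# interfaces, not this simplified overlap score.

DOMAIN_ATTRIBUTES = {
    "clinic_expert": {"name", "department", "specialty", "title", "hospital"},
    "flight_search": {"origin", "destination", "depart date", "return date"},
}

def classify_form(field_labels: set[str], threshold: float = 0.5) -> str | None:
    """Return the best-matching domain, or None if no domain matches well enough."""
    best_domain, best_score = None, 0.0
    for domain, attrs in DOMAIN_ATTRIBUTES.items():
        score = len(field_labels & attrs) / len(attrs)  # fraction of domain attrs covered
        if score > best_score:
            best_domain, best_score = domain, score
    return best_domain if best_score >= threshold else None

if __name__ == "__main__":
    form = {"name", "department", "title", "submit"}
    print(classify_form(form))  # -> "clinic_expert" (3/5 attributes matched)
```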
Bioinformatics data distribution and integration via Web Services and XML.
Li, Xiao; Zhang, Yizheng
2003-11-01
It is widely recognized that the exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
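As a concrete illustration of the XML-based exchange the paper advocates, the following Python sketch serializes and parses a small gene record using only the standard library; the element names are invented for the example and do not come from the paper.

```python
# Minimal sketch of exchanging a small bioinformatics record as XML using
# only the Python standard library. The element names (gene, symbol,
# organism) are illustrative, not a schema from the paper.
import xml.etree.ElementTree as ET

def to_xml(records: list[dict]) -> str:
    root = ET.Element("genes")
    for rec in records:
        gene = ET.SubElement(root, "gene", id=rec["id"])
        ET.SubElement(gene, "symbol").text = rec["symbol"]
        ET.SubElement(gene, "organism").text = rec["organism"]
    return ET.tostring(root, encoding="unicode")

def from_xml(document: str) -> list[dict]:
    root = ET.fromstring(document)
    return [
        {"id": g.get("id"), "symbol": g.findtext("symbol"), "organism": g.findtext("organism")}
        for g in root.findall("gene")
    ]

if __name__ == "__main__":
    payload = to_xml([{"id": "ENSG00000139618", "symbol": "BRCA2", "organism": "Homo sapiens"}])
    print(payload)
    print(from_xml(payload))
```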
50 CFR 679.21 - Prohibited species bycatch management.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Region Web site (http://alaskafisheries.noaa.gov/). (c) Salmon taken in the BS pollock fisheries... GOA groundfish species or species group. (B) Deep-water species fishery. Fishing with trawl gear... combine management of available trawl halibut PSC limits in the second season deep-water and shallow-water...
Deep Lake Explorer: Bringing citizen scientists to the underwater world of the Great Lakes
Deep Lake Explorer is a web application hosted on the Zooniverse platform that allows the public to interpret underwater video collected in the Great Lakes. Crowdsourcing image interpretation using the Zooniverse platform has proven successful for many projects, but few projects ...
50 CFR 679.21 - Prohibited species bycatch management.
Code of Federal Regulations, 2012 CFR
2012-10-01
... to 907-586-7465. Forms are available on the NMFS Alaska Region Web site (http://alaskafisheries.noaa... the retained aggregate amount of other GOA groundfish species or species group. (B) Deep-water species... the NMFS Alaska Region Web site (http://alaskafisheries.noaa.gov/): (A) The Chinook salmon PSC...
Asynchronous Discourse in a Web-Assisted Mathematics Education Course
ERIC Educational Resources Information Center
Li, Zhongxiao
2009-01-01
Fall term of 2006, a web-assisted undergraduate mathematics course was taught at the University of Idaho: Math 235 Mathematics for Elementary Teachers I. The course goals were: To foster a deep understanding of critical mathematical content; and to promote the development of mathematical communication and collaboration concepts, skills, and…
Search Interface Design Using Faceted Indexing for Web Resources.
ERIC Educational Resources Information Center
Devadason, Francis; Intaraksa, Neelawat; Patamawongjariya, Pornprapa; Desai, Kavita
2001-01-01
Describes an experimental system designed to organize and provide access to Web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. (AEF)
ERIC Educational Resources Information Center
Gupta, Amardeep
2005-01-01
Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…
Distributed spatial information integration based on web service
NASA Astrophysics Data System (ADS)
Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng
2008-10-01
Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous, and independent from each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web service. The method applies asynchronous invocation of web service and dynamic invocation of web service to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web service and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image will be returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
Distributed spatial information integration based on web service
NASA Astrophysics Data System (ADS)
Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng
2009-10-01
Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous, and independent from each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web service. The method applies asynchronous invocation of web service and dynamic invocation of web service to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web service and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image will be returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
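The dispatcher pattern described in the two entries above (asynchronous, parallel invocation of web map services followed by transparent overlay of the returned images) can be sketched as follows in Python. The endpoint URLs are placeholders, and requests plus Pillow stand in for the paper's web-service dispatcher and compositing logic.

```python
# Minimal sketch of a dispatcher that calls several web map services in
# parallel and overlays the returned images. The endpoint URLs are
# placeholders; requests and Pillow stand in for the paper's web-service
# dispatcher and image compositing. Layers are assumed to share size.
from concurrent.futures import ThreadPoolExecutor
from io import BytesIO

import requests
from PIL import Image

MAP_SERVICES = [
    "http://example.org/wms/base?request=GetMap&format=image/png",       # placeholder
    "http://example.org/wms/roads?request=GetMap&format=image/png",      # placeholder
    "http://example.org/wms/hydrology?request=GetMap&format=image/png",  # placeholder
]

def fetch_layer(url: str) -> Image.Image:
    """Invoke one map service and return its image as an RGBA layer."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return Image.open(BytesIO(response.content)).convert("RGBA")

def integrate(urls: list[str]) -> Image.Image:
    """Fetch all layers concurrently, then composite them in order."""
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        layers = list(pool.map(fetch_layer, urls))
    combined = layers[0]
    for layer in layers[1:]:
        combined = Image.alpha_composite(combined, layer)
    return combined

if __name__ == "__main__":
    integrate(MAP_SERVICES).save("integrated_map.png")
```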
Automatic Generation of Data Types for Classification of Deep Web Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngu, A H; Buttler, D J; Critchlow, T J
2005-02-14
A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.
Integrating WebQuests in Preservice Teacher Education
ERIC Educational Resources Information Center
Wang, Feng; Hannafin, Michael J.
2008-01-01
During the past decade, WebQuests have been widely used by teachers to integrate technology into teaching and learning. Recently, teacher educators have applied the WebQuest model with preservice teachers in order to develop technology integration skills akin to those used in everyday schools. Scaffolding, used to support the gradual acquisition…
Integrating the Web and continuous media through distributed objects
NASA Astrophysics Data System (ADS)
Labajo, Saul P.; Garcia, Narciso N.
1998-09-01
The Web has rapidly grown to become the standard for documents interchange on the Internet. At the same time the interest on transmitting continuous media flows on the Internet, and its associated applications like multimedia on demand, is also growing. Integrating both kinds of systems should allow building real hypermedia systems where all media objects can be linked from any other, taking into account temporal and spatial synchronization. A way to achieve this integration is using the Corba architecture. This is a standard for open distributed systems. There are also recent efforts to integrate Web and Corba systems. We use this architecture to build a service for distribution of data flows endowed with timing restrictions. We use to integrate it with the Web, by one side Java applets that can use the Corba architecture and are embedded on HTML pages. On the other side, we also benefit from the efforts to integrate Corba and the Web.
Implementing Distributed Operations: A Comparison of Two Deep Space Missions
NASA Technical Reports Server (NTRS)
Mishkin, Andrew; Larsen, Barbara
2006-01-01
Two very different deep space exploration missions--Mars Exploration Rover and Cassini--have made use of distributed operations for their science teams. In the case of MER, the distributed operations capability was implemented only after the prime mission was completed, as the rovers continued to operate well in excess of their expected mission lifetimes; Cassini, designed for a mission of more than ten years, had planned for distributed operations from its inception. The rapid command turnaround timeline of MER, as well as many of the operations features implemented to support it, have proven to be conducive to distributed operations. These features include: a single science team leader during the tactical operations timeline, highly integrated science and engineering teams, processes and file structures designed to permit multiple team members to work in parallel to deliver sequencing products, web-based spacecraft status and planning reports for team-wide access, and near-elimination of paper products from the operations process. Additionally, MER has benefited from the initial co-location of its entire operations team, and from having a single Principal Investigator, while Cassini operations have had to reconcile multiple science teams distributed from before launch. Cassini has faced greater challenges in implementing effective distributed operations. Because extensive early planning is required to capture science opportunities on its tour and because sequence development takes significantly longer than sequence execution, multiple teams are contributing to multiple sequences concurrently. The complexity of integrating inputs from multiple teams is exacerbated by spacecraft operability issues and resource contention among the teams, each of which has their own Principal Investigator. Finally, much of the technology that MER has exploited to facilitate distributed operations was not available when the Cassini ground system was designed, although later adoption of web-based and telecommunication tools has been critical to the success of Cassini operations.
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.
Hohman, Fred; Hodas, Nathan; Chau, Duen Horng
2017-05-01
Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.
A Query Integrator and Manager for the Query Web
Brinkley, James F.; Detwiler, Landon T.
2012-01-01
We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save and reuse queries over any web accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions. PMID:22531831
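QI's only interconnection requirement is that saved queries return XML, which is what lets queries chain off each other. A minimal Python sketch of that chaining idea follows; the endpoints and the <id> element name are placeholders, not QI's actual API.

```python
# Minimal sketch of the chaining idea: one saved query returns XML, and
# values extracted from that XML parameterize a second query. The
# endpoints and the <id> element name are placeholders, not QI's API.
import xml.etree.ElementTree as ET

import requests

FIRST_QUERY = "http://example.org/qi/queries/brain-regions"      # placeholder
SECOND_QUERY = "http://example.org/qi/queries/images-by-region"  # placeholder

def run_query(url: str, **params) -> ET.Element:
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return ET.fromstring(response.text)

def chained_results() -> list[ET.Element]:
    regions = run_query(FIRST_QUERY)
    # Each <id> from the first result feeds the second query.
    return [run_query(SECOND_QUERY, region=e.text) for e in regions.iter("id")]

if __name__ == "__main__":
    for result in chained_results():
        print(ET.tostring(result, encoding="unicode"))
```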
49 CFR 575.106 - Tire fuel efficiency consumer information program.
Code of Federal Regulations, 2013 CFR
2013-10-01
... tires, deep tread, winter-type snow tires, space-saver or temporary use spare tires, tires with nominal... deep tread, winter-type snow tires and limited production tires that it manufactures which are exempt... to have included in the database of information available to consumers on NHTSA's Web site. (ii...
49 CFR 575.106 - Tire fuel efficiency consumer information program.
Code of Federal Regulations, 2014 CFR
2014-10-01
... tires, deep tread, winter-type snow tires, space-saver or temporary use spare tires, tires with nominal... deep tread, winter-type snow tires and limited production tires that it manufactures which are exempt... to have included in the database of information available to consumers on NHTSA's Web site. (ii...
49 CFR 575.106 - Tire fuel efficiency consumer information program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... tires, deep tread, winter-type snow tires, space-saver or temporary use spare tires, tires with nominal... deep tread, winter-type snow tires and limited production tires that it manufactures which are exempt... to have included in the database of information available to consumers on NHTSA's Web site. (ii...
Enable Web-Based Tracking and Guiding by Integrating Location-Awareness with the World Wide Web
ERIC Educational Resources Information Center
Zhou, Rui
2008-01-01
Purpose: The aim of this research is to enable web-based tracking and guiding by integrating location-awareness with the Worldwide Web so that the users can use various location-based applications without installing extra software. Design/methodology/approach: The concept of web-based tracking and guiding is introduced and the relevant issues are…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoon Lee, Sang; Hong, Tianzhen; Sawaya, Geof
The paper presents a method and process to establish a database of energy efficiency performance (DEEP) to enable quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 35 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models are developed for a comprehensive assessment of building energy performance based on DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six construction vintages and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air-conditioning, plug-loads, and domestic hot water. DEEP contains the energy simulation results for individual retrofit measures as well as packages of measures to consider interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center of Lawrence Berkeley National Laboratory. The pre-simulated database is part of an on-going project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP with recommended measures, estimated energy savings and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment. The pre-simulated database and associated comprehensive measure analysis enhance the ability to perform assessments of retrofits to reduce energy use for small and medium buildings and business owners who typically do not have the resources to conduct a costly building energy audit. DEEP will be migrated into DEnCity - DOE's Energy City, a multi-purpose, open, and dynamic database that integrates large-scale energy data from diverse sources of existing simulation data.
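A retrofit toolkit built on a pre-simulated database of this kind essentially performs a keyed lookup and ranking. The sketch below shows one way such a query could look, assuming a SQLite table with hypothetical column names; it is not DEEP's actual schema.

```python
# Minimal sketch of querying a pre-simulated savings database like DEEP:
# look up measures for a building type / vintage / climate zone and rank
# them by a user-chosen criterion. The table layout and column names are
# assumptions for illustration, not DEEP's actual schema.
import sqlite3

def recommend(db_path: str, building_type: str, vintage: str, climate_zone: str,
              criterion: str = "energy_savings_kwh") -> list[tuple]:
    if criterion not in {"energy_savings_kwh", "cost_savings_usd", "payback_years"}:
        raise ValueError("unsupported ranking criterion")
    order = "ASC" if criterion == "payback_years" else "DESC"
    query = (
        "SELECT measure, energy_savings_kwh, cost_savings_usd, payback_years "
        "FROM measure_results "
        "WHERE building_type = ? AND vintage = ? AND climate_zone = ? "
        f"ORDER BY {criterion} {order}"
    )
    with sqlite3.connect(db_path) as conn:
        return conn.execute(query, (building_type, vintage, climate_zone)).fetchall()

if __name__ == "__main__":
    for row in recommend("deep.sqlite", "small_office", "pre-1980", "CZ12"):
        print(row)
```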
Flexible Web services integration: a novel personalised social approach
NASA Astrophysics Data System (ADS)
Metrouh, Abdelmalek; Mokhati, Farid
2018-05-01
Dynamic composition or integration remains one of the key objectives of Web services technology. This paper proposes an innovative approach to dynamic Web services composition based on functional and non-functional attributes and individual preferences. In this approach, social networks of Web services are used to maintain interactions between Web services in order to select and compose Web services that are more tightly related to users' preferences. We use the concept of a Web services community in a social network of Web services to reduce their search space considerably. These communities are created by the direct involvement of Web services providers.
NASA Astrophysics Data System (ADS)
Hornung, Thomas; Simon, Kai; Lausen, Georg
Combining information from different Web sources often results in a tedious and repetitive process; for example, even simple information requests might require iterating over the result list of one Web query and using each single result as input for a subsequent query. One approach to such chained queries is data-centric mashups, which allow users to visually model the data flow as a graph, where the nodes represent the data sources and the edges the data flow.
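A data-centric mashup of the kind described above can be represented as a small dataflow graph evaluated in dependency order. The Python sketch below uses toy stand-in functions in place of real Web queries; it illustrates the graph evaluation idea rather than the authors' system.

```python
# Minimal sketch of a data-centric mashup as a dataflow graph: nodes are
# callables standing in for Web queries, edges say whose output feeds whom,
# and the graph is evaluated in topological order. The node functions here
# are toy stand-ins, not real Web sources.
from graphlib import TopologicalSorter

def search_author(_inputs):           # stand-in for a Web query node
    return ["Hornung", "Simon", "Lausen"]

def publications_per_author(inputs):  # each upstream result feeds this node
    authors = inputs["search_author"]
    return {a: f"publications of {a}" for a in authors}

NODES = {"search_author": search_author,
         "publications_per_author": publications_per_author}
EDGES = {"publications_per_author": {"search_author"}}  # node -> its dependencies

def run_mashup():
    results = {}
    for node in TopologicalSorter(EDGES | {"search_author": set()}).static_order():
        upstream = {dep: results[dep] for dep in EDGES.get(node, set())}
        results[node] = NODES[node](upstream)
    return results

if __name__ == "__main__":
    print(run_mashup()["publications_per_author"])
```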
Comparison of Physics Frameworks for WebGL-Based Game Engine
NASA Astrophysics Data System (ADS)
Yogya, Resa; Kosala, Raymond
2014-03-01
Recently, a new technology called WebGL has shown a lot of potential for developing games. However, since this technology is still new, much of its potential in the game development area remains unexplored. This paper tries to uncover the potential of integrating physics frameworks with WebGL technology in a game engine for developing 2D or 3D games. Specifically, we integrated three open source physics frameworks: Bullet, Cannon, and JigLib into a WebGL-based game engine. Using experiments, we assessed these frameworks in terms of their correctness or accuracy, performance, completeness and compatibility. The results show that it is possible to integrate open source physics frameworks into a WebGL-based game engine, and Bullet is the best physics framework to be integrated into the WebGL-based game engine.
Development of anomaly detection models for deep subsurface monitoring
NASA Astrophysics Data System (ADS)
Sun, A. Y.
2017-12-01
Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring deep subsurface repositories for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or the system's nominal state, for that matter), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, including both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline experiment data (no leak) and then tested on the leak experiment data. The performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce the false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
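One of the simplest online detectors of the kind compared in such studies is a rolling z-score against a baseline window. The sketch below shows that idea on a synthetic pressure series; the window size and threshold are assumptions, and this is not the algorithm suite used in the study.

```python
# Minimal sketch of one simple online detector: flag a point when its
# z-score against a rolling baseline window exceeds a threshold. Window
# size and threshold are assumptions; this is not the study's algorithm suite.
from collections import deque
from statistics import mean, stdev

def online_anomalies(stream, window: int = 50, threshold: float = 4.0):
    """Yield (index, value) for points that deviate strongly from the recent baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
                continue  # do not let the anomaly contaminate the baseline
        history.append(value)

if __name__ == "__main__":
    pressures = [10.0 + 0.01 * (i % 5) for i in range(200)]
    pressures[150] = 12.5  # injected leak-like excursion
    print(list(online_anomalies(pressures)))  # -> [(150, 12.5)]
```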
Integrating DXplain into a clinical information system using the World Wide Web.
Elhanan, G; Socratous, S A; Cimino, J J
1996-01-01
The World Wide Web (WWW) offers a cross-platform environment and standard protocols that enable integration of various applications available on the Internet. The authors use the Web to facilitate interaction between their Web-based Clinical Information System and a decision-support system, DXplain, at the Massachusetts General Hospital, using local architecture and Common Gateway Interface programs. The current application translates patients' laboratory test results into DXplain's terms to generate diagnostic hypotheses. Two different access methods are utilized for this model: Hypertext Transfer Protocol (HTTP) and TCP/IP function calls. While clinical aspects cannot be evaluated as yet, the model demonstrates the potential of Web-based applications for interaction and integration and how local architecture, with a controlled vocabulary server, can further facilitate such integration. This model also serves to demonstrate some of the limitations of the current WWW technology and identifies issues such as control over Web resources and their utilization, as well as liability, as possible obstacles to further integration.
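The integration pattern described above (translate local laboratory results into decision-support terms, then submit them over HTTP) can be sketched as follows. The mapping table and endpoint URL are placeholders; this is not DXplain's actual vocabulary or protocol.

```python
# Minimal sketch of the integration pattern described: map local lab
# results onto decision-support terms, then submit them over HTTP. The
# mapping table and the endpoint URL are placeholders; this is not
# DXplain's actual vocabulary or protocol.
import requests

# local (test, abnormal-flag) -> decision-support finding term (illustrative)
TERM_MAP = {
    ("WBC", "high"): "leukocytosis",
    ("glucose", "high"): "hyperglycemia",
    ("hemoglobin", "low"): "anemia",
}

def to_findings(lab_results: list[tuple[str, str]]) -> list[str]:
    return [TERM_MAP[r] for r in lab_results if r in TERM_MAP]

def request_hypotheses(findings: list[str]) -> dict:
    response = requests.post("http://example.org/dxplain/gateway",  # placeholder
                             json={"findings": findings}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    findings = to_findings([("WBC", "high"), ("glucose", "high"), ("sodium", "normal")])
    print(findings)  # ['leukocytosis', 'hyperglycemia']
    # print(request_hypotheses(findings))  # would call the placeholder endpoint
```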
A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016
A framework for sharing and integrating remote sensing and GIS models based on Web service.
Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin
2014-01-01
Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.
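The "black box" publishing step in the two entries above amounts to wrapping an existing model behind a standard web interface so other services can call it. The sketch below does this for a toy NDVI model using Flask; the framework choice and endpoint are illustrative assumptions, not the paper's OGC-based implementation.

```python
# Minimal sketch of the "black box" publishing idea: wrap an existing
# model function behind an HTTP endpoint so other services can call it.
# Flask and the NDVI example are choices made for illustration; the paper
# publishes models as standard Web services, not necessarily this way.
from flask import Flask, jsonify, request

app = Flask(__name__)

def ndvi(nir: float, red: float) -> float:
    """Toy remote-sensing model: normalized difference vegetation index."""
    return (nir - red) / (nir + red)

@app.post("/models/ndvi")
def run_ndvi():
    payload = request.get_json(force=True)
    value = ndvi(float(payload["nir"]), float(payload["red"]))
    return jsonify({"ndvi": value})

if __name__ == "__main__":
    # Example call once running:
    #   curl -X POST localhost:5000/models/ndvi \
    #        -H 'Content-Type: application/json' -d '{"nir": 0.8, "red": 0.3}'
    app.run(port=5000)
```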
Customer Decision Making in Web Services with an Integrated P6 Model
NASA Astrophysics Data System (ADS)
Sun, Zhaohao; Sun, Junqing; Meredith, Grant
Customer decision making (CDM) is an indispensable factor for web services. This article examines CDM in web services with a novel P6 model, which consists of the 6 Ps: privacy, perception, propensity, preference, personalization and promised experience. This model integrates the existing 6 P elements of the marketing mix as the system environment of CDM in web services. The new integrated P6 model deals with the inner world of the customer and incorporates what the customer thinks during the decision-making process. The proposed approach will facilitate the research and development of web services and decision support systems.
Integrating Mathematics, Science, and Language Arts Instruction Using the World Wide Web.
ERIC Educational Resources Information Center
Clark, Kenneth; Hosticka, Alice; Kent, Judi; Browne, Ron
1998-01-01
Addresses issues of access to World Wide Web sites, mathematics and science content-resources available on the Web, and methods for integrating mathematics, science, and language arts instruction. (Author/ASK)
Construction of a Virginia short-span bridge with the Strongwell 36-inch double-web I-beam.
DOT National Transportation Integrated Search
2005-01-01
The Route 601 Bridge in Sugar Grove, VA, spans 39 ft over Dickey Creek. The bridge is the first to use the Strongwell 36-in-deep fiber-reinforced polymer (FRP) double-web beam (DWB) in a vehicular bridge superstructure. Construction of the new bridge...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-03
..., 50-foot-deep frame module fitted with a trash rack and containing 10 low-head bulb turbines each... electronically via the Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site... Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the docket number (P-13500-002...
ERIC Educational Resources Information Center
Mitsuhara, Hiroyuki; Kurose, Yoshinobu; Ochi, Youji; Yano, Yoneo
The authors developed a Web-based Adaptive Educational System (Web-based AES) named ITMS (Individualized Teaching Material System). ITMS adaptively integrates knowledge on the distributed Web pages and generates individualized teaching material that has various contents. ITMS also presumes the learners' knowledge levels from the states of their…
mirEX: a platform for comparative exploration of plant pri-miRNA expression data.
Bielewicz, Dawid; Dolata, Jakub; Zielezinski, Andrzej; Alaba, Sylwia; Szarzynska, Bogna; Szczesniak, Michal W; Jarmolowski, Artur; Szweykowska-Kulinska, Zofia; Karlowski, Wojciech M
2012-01-01
mirEX is a comprehensive platform for comparative analysis of primary microRNA expression data. RT-qPCR-based gene expression profiles are stored in a universal and expandable database scheme and wrapped by an intuitive user-friendly interface. A new way of accessing gene expression data in mirEX includes a simple mouse operated querying system and dynamic graphs for data mining analyses. In contrast to other publicly available databases, the mirEX interface allows a simultaneous comparison of expression levels between various microRNA genes in diverse organs and developmental stages. Currently, mirEX integrates information about the expression profile of 190 Arabidopsis thaliana pri-miRNAs in seven different developmental stages: seeds, seedlings and various organs of mature plants. Additionally, by providing RNA structural models, publicly available deep sequencing results, experimental procedure details and careful selection of auxiliary data in the form of web links, mirEX can function as a one-stop solution for Arabidopsis microRNA information. A web-based mirEX interface can be accessed at http://bioinfo.amu.edu.pl/mirex.
Walsh, Maureen G.; Boscarino, Brent T.; Marty, Jérôme; Johannsson, Ora E.
2012-01-01
Mysis diluviana and Hemimysis anomala are the only two species of mysid shrimps in the order Mysidacea that are present in the Laurentian Great Lakes of North America. M. diluviana has inhabited the deep, cold waters of this region since Pleistocene-era glacial retreat and is widely considered to have a central role in the functioning of offshore food webs in systems they inhabit. More recently, the Great Lakes were invaded by the Ponto-Caspian native Hemimysis, a species that inhabits warmer water and shallower depths relative to M. diluviana. Hemimysis has rapidly expanded throughout the Great Lakes region and has become integrated into nearshore food webs as both food for planktivorous fish and predators and competitors of zooplankton. This special issue is composed of 14 papers that represent the most recent advances in our understanding of the ecological importance of both species of mysids to lake and river ecosystems in the Great Lakes region of North America. Topics discussed in this special issue will inform future research in all systems influenced by mysid ecology.
FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN
NASA Astrophysics Data System (ADS)
Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando
2014-06-01
The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and JQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries and pushing the development efforts to a higher abstraction layer based on a scripting language allow for significant reduction in maintenance of the code in multi-platform environments compared to those currently used in C++ visualization plugins. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.
DyNAMiC Workbench: an integrated development environment for dynamic DNA nanotechnology
Grun, Casey; Werfel, Justin; Zhang, David Yu; Yin, Peng
2015-01-01
Dynamic DNA nanotechnology provides a promising avenue for implementing sophisticated assembly processes, mechanical behaviours, sensing and computation at the nanoscale. However, design of these systems is complex and error-prone, because the need to control the kinetic pathway of a system greatly increases the number of design constraints and possible failure modes for the system. Previous tools have automated some parts of the design workflow, but an integrated solution is lacking. Here, we present software implementing a three ‘tier’ design process: a high-level visual programming language is used to describe systems, a molecular compiler builds a DNA implementation and nucleotide sequences are generated and optimized. Additionally, our software includes tools for analysing and ‘debugging’ the designs in silico, and for importing/exporting designs to other commonly used software systems. The software we present is built on many existing pieces of software, but is integrated into a single package—accessible using a Web-based interface at http://molecular-systems.net/workbench. We hope that the deep integration between tools and the flexibility of this design process will lead to better experimental results, fewer experimental design iterations and the development of more complex DNA nanosystems. PMID:26423437
The Evolvable Advanced Multi-Mission Operations System (AMMOS): Making Systems Interoperable
NASA Technical Reports Server (NTRS)
Ko, Adans Y.; Maldague, Pierre F.; Bui, Tung; Lam, Doris T.; McKinney, John C.
2010-01-01
The Advanced Multi-Mission Operations System (AMMOS) provides a common Mission Operation System (MOS) infrastructure to NASA deep space missions. The evolution of AMMOS has been driven by two factors: increasingly challenging requirements from space missions, and the emergence of new IT technology. The work described in this paper focuses on three key tasks related to IT technology requirements: first, to eliminate duplicate functionality; second, to promote the use of loosely coupled application programming interfaces, text based file interfaces, web-based frameworks and integrated Graphical User Interfaces (GUI) to connect users, data, and core functionality; and third, to build, develop, and deploy AMMOS services that are reusable, agile, adaptive to project MOS configurations, and responsive to industrially endorsed information technology standards.
AggNet: Deep Learning From Crowds for Mitosis Detection in Breast Cancer Histology Images.
Albarqouni, Shadi; Baur, Christoph; Achilles, Felix; Belagiannis, Vasileios; Demirci, Stefanie; Navab, Nassir
2016-05-01
The lack of publicly available ground-truth data has been identified as the major challenge for transferring recent developments in deep learning to the biomedical imaging domain. Though crowdsourcing has enabled annotation of large scale databases for real world images, its application for biomedical purposes requires a deeper understanding and hence, more precise definition of the actual annotation task. The fact that expert tasks are being outsourced to non-expert users may lead to noisy annotations introducing disagreement between users. Despite being a valuable resource for learning annotation models from crowdsourcing, conventional machine-learning methods may have difficulties dealing with noisy annotations during training. In this manuscript, we present a new concept for learning from crowds that handles data aggregation directly as part of the learning process of the convolutional neural network (CNN) via an additional crowdsourcing layer (AggNet). In addition, we present an experimental study on learning from crowds designed to answer the following questions. 1) Can a deep CNN be trained with data collected from crowdsourcing? 2) How can the CNN be adapted to train on multiple types of annotation datasets (ground truth and crowd-based)? 3) How does the choice of annotation and aggregation affect the accuracy? Our experimental setup involved Annot8, a self-implemented web platform based on the Crowdflower API realizing image annotation tasks for a publicly available biomedical image database. Our results give valuable insights into the functionality of deep CNN learning from crowd annotations and demonstrate the necessity of integrating data aggregation.
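The core idea of adding an aggregation layer for crowd labels on top of a CNN can be sketched loosely as follows, assuming PyTorch: the network's class posterior is passed through a per-annotator confusion matrix and trained against each annotator's (possibly noisy) labels. Layer sizes and the confusion-matrix parameterization are assumptions and do not reproduce AggNet's exact architecture.

```python
# Loose sketch, assuming PyTorch, of learning from crowds with an
# aggregation layer on top of a CNN: the class posterior is mapped through
# a per-annotator confusion matrix and trained against that annotator's
# (possibly noisy) labels. Not AggNet's exact architecture.
import torch
import torch.nn as nn

class CrowdCNN(nn.Module):
    def __init__(self, n_classes: int = 2, n_annotators: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)
        # One (n_classes x n_classes) confusion matrix per annotator,
        # initialized near the identity (annotators assumed mostly correct).
        init = torch.eye(n_classes).repeat(n_annotators, 1, 1)
        self.crowd = nn.Parameter(init + 0.01 * torch.randn_like(init))

    def forward(self, x):
        latent = self.features(x).flatten(1)
        p_true = torch.softmax(self.classifier(latent), dim=1)   # (B, C)
        mixing = torch.softmax(self.crowd, dim=2)                # (A, C, C), rows sum to 1
        p_annot = torch.einsum("bc,acd->bad", p_true, mixing)    # (B, A, C)
        return p_true, p_annot

if __name__ == "__main__":
    model = CrowdCNN()
    images = torch.randn(4, 1, 28, 28)
    crowd_labels = torch.randint(0, 2, (4, 5))                   # labels from 5 annotators
    p_true, p_annot = model(images)
    loss = nn.NLLLoss()(torch.log(p_annot + 1e-8).flatten(0, 1), crowd_labels.flatten())
    loss.backward()
    print(p_true.shape, p_annot.shape, float(loss))
```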
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-15
... project (Project No. 13780-000) would consist of: (1) An 85-foot-long, 100-foot-wide, 14-foot-deep excavated power canal; (2) a 95-foot-long, 100-foot-wide, 10-foot-deep excavated tailrace; (3) a 100-foot...)(iii) and the instructions on the Commission's Web site http://www.ferc.gov/docs-filing/efiling.asp...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-16
... Advisory Board can be found at the EPA SAB Web site at http://www.epa.gov/sab . SUPPLEMENTARY INFORMATION..., and human health effects. The Deep Water Horizon spill identified the need for additional research on alternative spill response technologies; environmental impacts of chemical dispersants under deep sea...
NASA Astrophysics Data System (ADS)
Cardellini, C.; Chiodini, G.; Frigeri, A.; Bagnato, E.; Aiuppa, A.; McCormick, B.
2013-12-01
The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data, in order to characterize and quantify the phenomena at various spatial and temporal scales. Building on the Googas experience, we are now extending its capability, particularly on the user side, by developing a new web environment for collecting and publishing data. We have started to create a new and detailed web database (MAGA: MApping GAs emissions) for deep carbon degassing in the Mediterranean area. This project is part of the Deep Earth Carbon Degassing (DECADE) research initiative, launched in 2012 by the Deep Carbon Observatory (DCO) to improve the global budget of endogenous carbon from volcanoes. The MAGA database is planned to complement and integrate the work in progress within DECADE in developing the CARD (Carbon Degassing) database. The MAGA database will allow researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and a complete literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nysiros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores. For each geo-located gas emission site, the database holds images and a description of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitude. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts to researchers with expertise on the site. Data can be accessed over the network from a web interface or as a data-driven web service, where software clients can request data directly from the database. This way Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) can easily access the database, and data can be exchanged with other databases. In detail, the database now includes: i) more than 1000 flux data about volcanic plume degassing from Etna (4 summit craters and bulk degassing) and Stromboli volcanoes, with time-averaged CO2 fluxes of ~ 18000 and 766 t/d, respectively; ii) data from ~ 30 sites of diffuse soil degassing from the Neapolitan volcanoes, Azores, Canary Islands, Etna, Stromboli, and Vulcano Island, with a wide range of CO2 fluxes (from less than 1 to 1500 t/d); and iii) several data on fumarolic emissions (~ 7 sites) with CO2 fluxes up to 1340 t/day (i.e., Stromboli). When available, time series of compositional data have been archived in the database (e.g., for Campi Flegrei fumaroles). We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility to archive location and qualitative information for gas emission sites not yet investigated could stimulate future research in the scientific community and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.
Pathview Web: user friendly pathway visualization and data integration
Pant, Gaurav; Bhavnasi, Yeshvant K.; Blanchard, Steven G.; Brouwer, Cory
2017-01-01
Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server to make pathway visualization and data integration accessible to all scientists, including those without special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. PMID:28482075
Design for Connecting Spatial Data Infrastructures with Sensor Web (SENSDI)
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; M., M.
2016-06-01
Integrating Sensor Web with Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. It is about research to harness the sensed environment by utilizing domain-specific sensor data to create a generalized sensor web framework. The challenges are semantic enablement for Spatial Data Infrastructures and connecting the interfaces of SDI with the interfaces of the Sensor Web. The proposed research plan is to identify sensor data sources, set up an open source SDI, match the APIs and functions between Sensor Web and SDI, and carry out case studies such as hazard applications and urban applications. We take up co-operative development of SDI best practices to enable a new realm of a location-enabled and semantically enriched World Wide Web - the "Geospatial Web" or "Geosemantic Web" - by setting up a one-to-one correspondence between WMS, WFS, WCS, Metadata and the 'Sensor Observation Service' (SOS); the 'Sensor Planning Service' (SPS); the 'Sensor Alert Service' (SAS); and a service that facilitates asynchronous message interchange between users and services, and between two OGC-SWE services, called the 'Web Notification Service' (WNS). Hence, in conclusion, it is important for geospatial studies to integrate SDI with the Sensor Web. The integration can be done by merging the common OGC interfaces of SDI and the Sensor Web. Multi-usability studies to validate the integration have to be undertaken as future research.
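The one-to-one correspondence proposed above between SDI interfaces and Sensor Web Enablement services can be written down directly, together with the standard OGC GetCapabilities request used to interrogate either side. The pairing below, including the use of CSW for the metadata role and the example base URL, is an illustrative assumption rather than the paper's final mapping.

```python
# Minimal sketch of a correspondence between SDI interfaces and OGC Sensor
# Web Enablement services, plus a helper building the standard
# GetCapabilities request. The pairing (including CSW for the metadata
# role) and the example base URL are illustrative assumptions.
from urllib.parse import urlencode

SDI_TO_SWE = {
    "WMS": "SOS",  # map rendering   <-> Sensor Observation Service
    "WFS": "SPS",  # feature access  <-> Sensor Planning Service
    "WCS": "SAS",  # coverage access <-> Sensor Alert Service
    "CSW": "WNS",  # metadata/catalog <-> Web Notification Service
}

def get_capabilities_url(base_url: str, service: str) -> str:
    """Build a standard OGC GetCapabilities request for the given service type."""
    return f"{base_url}?{urlencode({'service': service, 'request': 'GetCapabilities'})}"

if __name__ == "__main__":
    for sdi, swe in SDI_TO_SWE.items():
        print(sdi, "<->", swe)
    print(get_capabilities_url("http://example.org/ows", "SOS"))  # placeholder endpoint
```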
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng
Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.
ERIC Educational Resources Information Center
Hao, Yungwei; Wang, Shiou-ling; Chang, Su-jen; Hsu, Yin-hung; Tang, Ren-yen
2013-01-01
Studies have indicated that Web 2.0 technologies can support learning. However, the integration of an innovation may create concerns among teachers because of its novel features. In this study, the innovation refers to the integration of Web 2.0 technology into instruction. To help pre-service teachers make the best use of the innovation in their future instruction, it…
Semantic web for integrated network analysis in biomedicine.
Chen, Huajun; Ding, Li; Wu, Zhaohui; Yu, Tong; Dhanapalan, Lavanya; Chen, Jake Y
2009-03-01
The Semantic Web technology enables integration of heterogeneous data on the World Wide Web by making the semantics of data explicit through formal ontologies. In this article, we survey the feasibility and state of the art of utilizing the Semantic Web technology to represent, integrate and analyze the knowledge in various biomedical networks. We introduce a new conceptual framework, semantic graph mining, to enable researchers to integrate graph mining with ontology reasoning in network data analysis. Through four case studies, we demonstrate how semantic graph mining can be applied to the analysis of disease-causal genes, Gene Ontology category cross-talks, drug efficacy analysis and herb-drug interactions analysis.
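Semantic graph mining of the kind surveyed above builds on the ability to query explicit RDF assertions. The sketch below, assuming the rdflib Python library, runs a small SPARQL aggregation over a made-up gene-disease graph; the data and namespace are invented for illustration.

```python
# Minimal sketch, assuming rdflib, of combining explicit RDF assertions
# with a SPARQL query over them, the kind of operation semantic graph
# mining builds on. The tiny gene-disease graph and namespace are made up.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/bio/")

g = Graph()
g.add((EX.BRCA2, EX.associatedWith, EX.BreastCancer))
g.add((EX.TP53, EX.associatedWith, EX.BreastCancer))
g.add((EX.TP53, EX.associatedWith, EX.LungCancer))

# Find genes associated with more than one disease.
query = """
PREFIX ex: <http://example.org/bio/>
SELECT ?gene (COUNT(?disease) AS ?n)
WHERE { ?gene ex:associatedWith ?disease }
GROUP BY ?gene
HAVING (COUNT(?disease) > 1)
"""

for gene, n in g.query(query):
    print(gene, n)  # -> http://example.org/bio/TP53 2
```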
Marceglia, Sara; Rossi, Elena; Rosa, Manuela; Cogiamanian, Filippo; Rossi, Lorenzo; Bertolasi, Laura; Vogrig, Alberto; Pinciroli, Francesco; Barbieri, Sergio; Priori, Alberto
2015-03-06
The increasing number of patients, the high costs of management, and the chronic progression of the disease, which prevents patients from performing even simple daily activities, make Parkinson disease (PD) a complex pathology with a high impact on society. In particular, patients implanted with deep brain stimulation (DBS) electrodes face a highly fragile stabilization period, requiring specific support at home. However, DBS patients are usually followed by untrained personnel (caregivers or family), without specific care pathways and supporting systems. This project aims to (1) create a reference consensus guideline and a shared requirements set for the homecare and monitoring of DBS patients, (2) define a set of biomarkers that provides alarms to caregivers for continuous home monitoring, and (3) implement an information system architecture allowing communication between health care professionals and caregivers and improving the quality of care for DBS patients. The definitions of the consensus care pathway and of caregiver needs will be obtained by analyzing the current practices for patient follow-up through focus groups and structured interviews involving health care professionals, patients, and caregivers. The results of this analysis will be represented in a formal graphical model of the process of DBS patient care at home. To define the neurophysiological biomarkers to be used to raise alarms during the monitoring process, neurosignals will be acquired from DBS electrodes through a new experimental system that records while DBS is turned ON and transmits signals by radiofrequency. Motor, cognitive, and behavioral protocols will be used to study possible feedback/alarms to be provided by the system. Finally, a set of mobile apps to support the caregiver at home in managing and monitoring the patient will be developed and tested in the community of caregivers that participated in the focus groups. The set of developed apps will be connected to the already existing WebBioBank Web-based platform allowing health care professionals to manage patient electronic health records and neurophysiological signals. New modules in the WebBioBank platform will be implemented to allow integration and data exchange with mobile health apps. The results of this project will provide a novel approach to long-term evaluation of patients with chronic, severe conditions in the homecare environment, based on caregiver empowerment and tailored applications developed according to consensus care pathways established by clinicians. The creation of a direct communication channel between health care professionals and caregivers can benefit large communities of patients and would represent a scalable experience in integrating data and information from the clinical setting with those from home monitoring.
Rossi, Elena; Rosa, Manuela; Cogiamanian, Filippo; Rossi, Lorenzo; Bertolasi, Laura; Vogrig, Alberto; Pinciroli, Francesco; Barbieri, Sergio; Priori, Alberto
2015-01-01
Background The increasing number of patients, the high costs of management, and the chronic progression of the disease, which prevents patients from performing even simple daily activities, make Parkinson disease (PD) a complex pathology with a high impact on society. In particular, patients implanted with deep brain stimulation (DBS) electrodes face a highly fragile stabilization period, requiring specific support at home. However, DBS patients are usually followed by untrained personnel (caregivers or family), without specific care pathways and supporting systems. Objective This project aims to (1) create a reference consensus guideline and a shared requirements set for the homecare and monitoring of DBS patients, (2) define a set of biomarkers that provides alarms to caregivers for continuous home monitoring, and (3) implement an information system architecture allowing communication between health care professionals and caregivers and improving the quality of care for DBS patients. Methods The definitions of the consensus care pathway and of caregiver needs will be obtained by analyzing the current practices for patient follow-up through focus groups and structured interviews involving health care professionals, patients, and caregivers. The results of this analysis will be represented in a formal graphical model of the process of DBS patient care at home. To define the neurophysiological biomarkers to be used to raise alarms during the monitoring process, neurosignals will be acquired from DBS electrodes through a new experimental system that records while DBS is turned ON and transmits signals by radiofrequency. Motor, cognitive, and behavioral protocols will be used to study possible feedback/alarms to be provided by the system. Finally, a set of mobile apps to support the caregiver at home in managing and monitoring the patient will be developed and tested in the community of caregivers that participated in the focus groups. The set of developed apps will be connected to the already existing WebBioBank Web-based platform allowing health care professionals to manage patient electronic health records and neurophysiological signals. New modules in the WebBioBank platform will be implemented to allow integration and data exchange with mobile health apps. Results The results of this project will provide a novel approach to long-term evaluation of patients with chronic, severe conditions in the homecare environment, based on caregiver empowerment and tailored applications developed according to consensus care pathways established by clinicians. Conclusions The creation of a direct communication channel between health care professionals and caregivers can benefit large communities of patients and would represent a scalable experience in integrating data and information from the clinical setting with those from home monitoring. PMID:25803512
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-16
.... Therefore, you should always check the Agency's Web site and call the appropriate advisory committee hot... currently approved for mid- to deep- dermal implantation for the correction of moderate to severe facial... material on its Web site prior to the meeting, the background material will be made publicly available at...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-05
...-deep, 3-mile-long canal carrying flows diverted from Cottonwood Creek by an existing diversion... on the Commission's Web site ( http://www.ferc.gov/docs-filing/ferconline.asp ) under the ``eFiling...Library'' link of Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the docket...
78 FR 72060 - Chimney Rock National Monument Management Plan; San Juan National Forest; Colorado
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-02
..., as well as objects of deep cultural and educational value. The plan will also provide for continued... Ranger District office in Pagosa Springs, Colorado, and on the San Juan National Forest Web site at www..., direct mailings, emails, and will be posted on the San Juan National Forest Web site. It is important...
78 FR 27405 - Anesthetic and Analgesic Drug Products Advisory Committee; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-10
... check the Agency's Web site at http://www.fda.gov/AdvisoryCommittees/default.htm and scroll down to the... proposed indications of routine reversal of moderate and deep neuromuscular blockade (NMB) induced by... meeting. If FDA is unable to post the background material on its Web site prior to the meeting, the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... provide timely notice. Therefore, you should always check the Agency's Web site at http://www.fda.gov...]derm Voluma XC is indicated for deep (dermal/subcutaneous and/or submuscular/ supraperiosteal... the background material on its Web site prior to the meeting, the background material will be made...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-11
... deep saline geologic formations for permanent geologic storage. DATES: DOE invites the public to...; or by fax (304) 285-4403. The Draft EIS is available on DOE's NEPA Web page at: http://nepa.energy.gov/DOE_NEPA_documents.htm ; and on the National Energy Technology Laboratory's Web page at: http...
ERIC Educational Resources Information Center
Gomez, Fabinton Sotelo; Ordóñez, Armando
2016-01-01
Previously, a framework for integrating web resources providing educational services into dotLRN was presented. The present paper describes the application of this framework in a rural school in Cauca--Colombia. The case study includes two web resources about the topic of waves (physics), which is taught in secondary education. Web classes and…
SeWeR: a customizable and integrated dynamic HTML interface to bioinformatics services.
Basu, M K
2001-06-01
Sequence analysis using Web Resources (SeWeR) is an integrated, Dynamic HTML (DHTML) interface to commonly used bioinformatics services available on the World Wide Web. It is highly customizable, extendable, platform neutral, completely server-independent and can be hosted as a web page as well as being used as stand-alone software running within a web browser.
ERIC Educational Resources Information Center
Fraser, Landon; Locatis, Craig
2001-01-01
Investigated the effects of link annotations on high school user search performance in Web hypertext environments having deep (layered) and shallow link structures. Results confirmed previous research that shallow link structures are better than deep (layered) link structures, and also showed that annotations had virtually no effect on search…
Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.
Kahn, Charles E
2008-09-01
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to easily incorporate a context-sensitive image gallery into their documents.
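The workflow described above (query an image index with a document's keywords, then render thumbnails that link back to the full-size source) can be sketched in a few lines. The endpoint URL, query parameters, and response fields below are hypothetical placeholders for illustration and are not the actual GoldMiner API.

    import requests

    SEARCH_URL = "https://example.org/image-search"   # hypothetical endpoint, not the real GoldMiner URL

    def inline_gallery(keywords, max_images=6):
        """Fetch images for a document's keywords and return an HTML thumbnail strip."""
        resp = requests.get(SEARCH_URL, params={"q": keywords, "n": max_images}, timeout=10)
        resp.raise_for_status()
        items = resp.json().get("images", [])          # assumed response shape
        cells = (
            '<a href="{src}"><img src="{thumb}" alt="{cap}"></a>'.format(
                src=item["source_url"], thumb=item["thumb_url"], cap=item["caption"])
            for item in items
        )
        return '<div class="inline-gallery">' + "".join(cells) + "</div>"

The returned fragment can then be injected into a target document on the server or with client-side script, which is essentially how the "inline" galleries described above are assembled.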
La Cono, Violetta; Ruggeri, Gioachino; Azzaro, Maurizio; Crisafi, Francesca; Decembrini, Franco; Denaro, Renata; La Spada, Gina; Maimone, Giovanna; Monticelli, Luis S.; Smedile, Francesco; Giuliano, Laura; Yakimov, Michail M.
2018-01-01
Covering two-thirds of our planet, the global deep ocean plays a central role in supporting life on Earth. Among other processes, Earth's largest ecosystem buffers the rise of atmospheric CO2. Although carbon sequestration in the deep ocean has been known for a long time, microbial activity in the meso- and bathypelagic realm via the “assimilation of bicarbonate in the dark” (ABD) has only recently been described in more detail. Based on recent findings, this process seems to be primarily the result of chemosynthetic and anaplerotic reactions driven by different groups of deep-sea prokaryoplankton. We quantified bicarbonate assimilation in relation to total prokaryotic abundance, prokaryotic heterotrophic production and respiration in the meso- and bathypelagic Mediterranean Sea. The measured ABD values, ranging from 133 to 370 μg C m−3 d−1, were among the highest reported worldwide for similar depths, likely due to the elevated temperature of the deep Mediterranean Sea (13–14°C even at abyssal depths). Integrated over the dark water column (≥200 m depth), bicarbonate assimilation in the deep sea ranged from 396 to 873 mg C m−2 d−1. This quantity of de novo organic carbon amounts to about 85–424% of the phytoplankton primary production and covers up to 62% of the deep-sea prokaryotic total carbon demand. Hence, the ABD process in the meso- and bathypelagic Mediterranean Sea might substantially contribute to the inorganic and organic pools and significantly sustain the deep-sea microbial food web. To elucidate the ABD key players, we established three actively nitrifying and CO2-fixing prokaryotic enrichments. The consortia were characterized by the co-occurrence of chemolithoautotrophic Thaumarchaeota and chemoheterotrophic proteobacteria. One of the enrichments, originating from Ionian bathypelagic waters (3,000 m depth) and supplemented with low concentrations of ammonia, was dominated by the Thaumarchaeota “low-ammonia-concentration” deep-sea ecotype, an enigmatic and ecologically important group of organisms, uncultured until this study. PMID:29403458
Biomagnification of persistent organic pollutants in a deep-sea, temperate food web.
Romero-Romero, Sonia; Herrero, Laura; Fernández, Mario; Gómara, Belén; Acuña, José Luis
2017-12-15
Polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs) and polychlorinated dibenzo-p-dioxins and -furans (PCDD/Fs) were measured in a temperate, deep-sea ecosystem, the Avilés submarine Canyon (AC; Cantabrian Sea, Southern Bay of Biscay). Contaminant concentration increased with the trophic level of the organisms, as calculated from stable nitrogen isotope data (δ15N). Such biomagnification was only significant for the pelagic food web, and its magnitude was highly dependent on the type of top predators included in the analysis. The trophic magnification factor (TMF) for PCB-153 in the pelagic food web (spanning four trophic levels) was 6.2 or 2.2, depending on whether homeotherm top predators (cetaceans and seabirds) were included in the analysis or not, respectively. Since body size is significantly correlated with δ15N, it can be used as a proxy to estimate trophic magnification, which could lead to a simple and convenient method for calculating the TMF. In spite of their lower biomagnification, deep-sea fishes showed higher concentrations than their shallower counterparts, although those differences were not significant. In summary, the AC fauna exhibits contaminant levels comparable to or lower than those reported in other systems. Copyright © 2017 Elsevier B.V. All rights reserved.
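As a worked illustration of the TMF calculation summarized above: trophic level is commonly derived from δ15N using a fixed per-level enrichment, and the TMF is 10 raised to the slope of log10(concentration) regressed on trophic level. The enrichment factor, baseline organism, and baseline trophic level below are conventional placeholder values, not necessarily those used in the study.

    import numpy as np

    DELTA_15N_PER_LEVEL = 3.4   # assumed per-trophic-level 15N enrichment (permil)

    def trophic_magnification_factor(d15n, conc, d15n_base, base_level=2.0):
        """Estimate a TMF from paired d15N and contaminant concentration data."""
        trophic_level = base_level + (np.asarray(d15n) - d15n_base) / DELTA_15N_PER_LEVEL
        slope, _intercept = np.polyfit(trophic_level, np.log10(conc), 1)
        return 10.0 ** slope

A TMF above 1 indicates biomagnification; the 6.2 versus 2.2 contrast reported above simply reflects how strongly the fitted slope depends on which top predators enter the regression.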
DeepSig: deep learning improves signal peptide detection in proteins.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Casadio, Rita
2018-05-15
The identification of signal peptides in protein sequences is an important step toward protein localization and function characterization. Here, we present DeepSig, an improved approach for signal peptide detection and cleavage-site prediction based on deep learning methods. Comparative benchmarks performed on an updated independent dataset of proteins show that DeepSig is the current best performing method, scoring better than other available state-of-the-art approaches on both signal peptide detection and precise cleavage-site identification. DeepSig is available as both standalone program and web server at https://deepsig.biocomp.unibo.it. All datasets used in this study can be obtained from the same website. pierluigi.martelli@unibo.it. Supplementary data are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macumber, Daniel L; Horowitz, Scott G; Schott, Marjorie
Across most industries, desktop applications are being rapidly migrated to web applications for a variety of reasons. Web applications are inherently cross platform, mobile, and easier to distribute than desktop applications. Fueling this trend are a wide range of free, open source libraries and frameworks that make it incredibly easy to develop powerful web applications. The building energy modeling community is just beginning to pick up on these larger trends, with a small but growing number of building energy modeling applications starting on or moving to the web. This paper presents a new, open source, web based geometry editor for Building Energy Modeling (BEM). The editor is written completely in JavaScript and runs in a modern web browser. The editor works on a custom JSON file format and is designed to be integrated into a variety of web and desktop applications. The web based editor is available to use as a standalone web application at: https://nrel.github.io/openstudio-geometry-editor/. An example integration is demonstrated with the OpenStudio desktop application. Finally, the editor can be easily integrated with a wide range of possible building energy modeling web applications.
Pathview Web: user friendly pathway visualization and data integration.
Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory
2017-07-03
Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, so as to make pathway visualization and data integration accessible to all scientists, including those without specialized computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Web conferencing in online classrooms.
Hart, Leigh
2014-01-01
Web conferencing is a promising tool for online education. A well-developed teaching strategy can lead to effective use of this technology to create a sense of community, engage students, and promote academic integrity in online courses. This article presents strategies for integrating Web conferencing into online nursing courses.
77 FR 74470 - Intent to Prepare an Environmental Impact Statement (EIS) for the Donlin Gold Project
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... added to the project mailing list and for additional information, please visit the following web site... miles long by 1 mile wide by 1,850 feet deep; a waste treatment facility (tailings impoundment... description of the proposed project will be posted on the project web site prior to these meetings to help the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-04
... capacity of 450 kilowatts; (4) an existing 10- foot-wide, 8-foot-deep intake canal; (5) new trash racks... Commission's Web site under the ``eFiling'' link. If unable to be filed electronically, documents may be... information on how to submit these types of filings please go to the Commission's Web site located at http...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... would consist of: (1) A new approximately 135-acre, 30-foot-deep upper reservoir constructed of enclosed... 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site under the ``eFiling... filings please go to the Commission's Web site located at http://www.ferc.gov/filing-comments.asp . More...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-05
... from http://www.regulations.gov or from the Alaska Region Web site at http://alaskafisheries.noaa.gov...) at 605 West 4th Avenue, Suite 306, Anchorage, AK 99501, phone 907-271-2809, or from the Council's Web... biomass trends for the following species are relatively stable: shallow-water flatfish, deep-water...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
... consist of: (1) A new approximately 135-acre, 30-foot-deep upper reservoir constructed of enclosed earth... Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site under the ``e... filings please go to the Commission's Web site located at http://www.ferc.gov/filing-comments.asp . More...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-23
...-foot-wide, 98-foot-deep concrete lined vertical shaft containing 10-foot-diameter siphon piping and a... via the Internet. See 18 CFR Sec. 385.2001(a)(1)(iii) and the instructions on the Commission's Web... Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the docket number (P-14360) in...
Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S
2007-01-01
Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453
Graham, Amanda L; Papandonatos, George D; Cha, Sarah; Erar, Bahar; Amato, Michael S; Cobb, Nathan K; Niaura, Raymond S; Abrams, David B
2017-03-01
Web-based smoking cessation interventions can deliver evidence-based treatments to a wide swath of the population, but effectiveness is often limited by insufficient adherence to proven treatment components. This study evaluated the impact of a social network (SN) intervention and free nicotine replacement therapy (NRT) on adherence to evidence-based components of smoking cessation treatment in the context of a Web-based intervention. A sample of adult U.S. smokers (N = 5290) was recruited via BecomeAnEX.org, a free smoking cessation Web site. Smokers were randomized to one of four arms: (1) an interactive, evidence-based smoking cessation Web site (WEB) alone; (2) WEB in conjunction with an SN intervention designed to integrate participants into the online community (WEB+SN); (3) WEB plus free NRT (WEB+NRT); and (4) the combination of all treatments (WEB+SN+NRT). Adherence outcomes assessed at 3-month follow-up were as follows: Web site utilization metrics, use of skills training components, intratreatment social support, and pharmacotherapy use. WEB+SN+NRT outperformed all others on Web site utilization metrics, use of practical counseling tools, intratreatment social support, and NRT use. It was the only intervention to promote the sending of private messages and the viewing of community pages over WEB alone. Both social network arms outperformed WEB on most metrics of online community engagement. Both NRT arms showed higher medication use compared to WEB alone. This study demonstrated the effectiveness of two approaches for improving adherence to evidence-based components of smoking cessation treatment. Integrated approaches to medication provision and social network engagement can enhance adherence to components known to improve cessation. This study demonstrated that an integrated approach to medication provision and social network integration, when delivered through an online program, can enhance adherence across all three recommended components of an evidence-based smoking cessation program (skills training, social support, and pharmacotherapy use). Nicotine replacement therapy-when provided as part of an integrated program-increases adherence to other program elements, which in turn augment its own therapeutic effects. An explicit focus on approaches to improve treatment adherence is an important first step to identifying leverage points for optimizing intervention effectiveness. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; Painho, M.
2017-09-01
The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart-city intelligence system (SMACiSYS) with sensor-web access (SENSDI), utilizing geomatics for sustainable societies. There has been a need to develop an automated, integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists that can disseminate messages after event evaluation in real time. This work formalizes the notion of an integrated, independent, generalized, and automated geo-event analysing system that makes use of geospatial data on a widely used platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize the sensor web dynamically and in real time for the smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart-city platforms with sensor web and spatial information, achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by the Open Geospatial Consortium (OGC), keeping the entire platform open access and open source. SENSDI is based on GeoNode, QGIS and Java, which bind most of the functionalities of the Internet, the sensor web and, nowadays, the Internet of Things (superseding the Internet of Sensors). In a nutshell, the project delivers a generalized, real-time accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.
STEPPE: Supporting collaborative research and education on Earth's deep-time sedimentary crust.
NASA Astrophysics Data System (ADS)
Smith, D. M.
2014-12-01
STEPPE—Sedimentary geology, Time, Environment, Paleontology, Paleoclimate, and Energy—is a National Science Foundation supported consortium whose mission is to promote multidisciplinary research and education on Earth's deep-time sedimentary crust. Deep-time sedimentary crust research includes many specialty areas—biology, geography, ecology, paleontology, sedimentary geology, stratigraphy, geochronology, paleoclimatology, sedimentary geochemistry, and more. In fact, the diversity of disciplines and size of the community (roughly one-third of Earth-science faculty in US universities) itself has been a barrier to the formation of collaborative, multidisciplinary teams in the past. STEPPE has been working to support new research synergies and the development of infrastructure that will encourage the community to think about the big problems that need to be solved and facilitate the formation of collaborative research teams to tackle these problems. Toward this end, STEPPE is providing opportunities for workshops, working groups and professional development training sessions, web-hosting and database services and an online collaboration platform that facilitates interaction among participants, the sharing of documentation and workflows and an ability to push news and reports to group participants and beyond using social media tools. As such, STEPPE is working to provide an interactive space that will serve as both a gathering place and clearinghouse for information, allowing for broader integration of research and education across all STEPPE-related sub disciplines.
Waagmeester, Andra; Kutmon, Martina; Riutta, Anders; Miller, Ryan; Willighagen, Egon L; Evelo, Chris T; Pico, Alexander R
2016-06-01
The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web.
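The SPARQL endpoint cited above can be queried programmatically. The sketch below lists a few pathways and their titles; the endpoint URL comes from the abstract, while the wp: and dc: terms reflect the commonly documented WikiPathways vocabulary and should be treated as assumptions here.

    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("http://sparql.wikipathways.org")
    endpoint.setQuery("""
        PREFIX wp: <http://vocabularies.wikipathways.org/wp#>
        PREFIX dc: <http://purl.org/dc/elements/1.1/>
        SELECT ?pathway ?title WHERE {
            ?pathway a wp:Pathway ;
                     dc:title ?title .
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()
    for row in results["results"]["bindings"]:
        print(row["pathway"]["value"], row["title"]["value"])

Because the content is exposed as triples, the same query pattern can be combined with other SPARQL endpoints (for example via the Open PHACTS platform mentioned above) instead of maintaining a local copy of the pathway database.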
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... implemented, this rule would remove the harvest and possession prohibition of six deep-water snapper-grouper... intent of this rule is to reduce the socio-economic impacts to fishermen harvesting deep-water snapper... obtained from the Southeast Regional Office Web site at http://sero.nmfs.noaa.gov . FOR FURTHER INFORMATION...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-21
... to 10-foot-deep canal extending between the head gates and the powerhouse; (3) a gate structure in...-foot-wide, 8 to 10-foot- deep canal extending between the head gates and the powerhouse; (3) a gate... Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site http://www.ferc...
78 FR 5421 - Mid-Atlantic Fishery Management Council (MAFMC); Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... 5 p.m. there will be a Scoping Hearing for the Deep Sea Corals Amendment. On Thursday February 14--A presentation on the new Council Web site will be held from 9 a.m. until 9:30 a.m. The Council will hold its... Fishery Management Plan (Deep Sea Corals Amendment) and review alternatives to be included in the...
Proposal for a Web Encoding Service (WES) for Spatial Data Transaction
NASA Astrophysics Data System (ADS)
Siew, C. B.; Peters, S.; Rahman, A. A.
2015-10-01
The use of web services in Spatial Data Infrastructure (SDI) is well established and has been standardized by the Open Geospatial Consortium (OGC). Similar web services for 3D SDI have also been established in recent years, with extended capabilities to handle 3D spatial data. The increasing popularity of the City Geography Markup Language (CityGML) for 3D city modelling applications leads to the need to handle large spatial datasets for data delivery. This paper revisits the available web services in the OGC Web Services (OWS) suite and proposes the background concepts and requirements for encoding spatial data via a Web Encoding Service (WES). Furthermore, the paper discusses the data flow of the encoder within a web service, e.g. possible integration with the Web Processing Service (WPS) or Web 3D Service (W3DS). This integration could be extended to other available web services for efficient handling of spatial data, especially 3D spatial data.
From projected species distribution to food-web structure under climate change.
Albouy, Camille; Velez, Laure; Coll, Marta; Colloca, Francesco; Le Loc'h, François; Mouillot, David; Gravel, Dominique
2014-03-01
Climate change is inducing deep modifications in species geographic ranges worldwide. However, the consequences of such changes on community structure are still poorly understood, particularly the impacts on food-web properties. Here, we propose a new framework, coupling species distribution and trophic models, to predict climate change impacts on food-web structure across the Mediterranean Sea. Sea surface temperature was used to determine the fish climate niches and their future distributions. Body size was used to infer trophic interactions between fish species. Our projections reveal that 54 fish species of 256 endemic and native species included in our analysis would disappear by 2080-2099 from the Mediterranean continental shelf. The number of feeding links between fish species would decrease on 73.4% of the continental shelf. However, the connectance of the overall fish web would increase on average, from 0.26 to 0.29, mainly due to a differential loss rate of feeding links and species richness. This result masks a systematic decrease in predator generality, estimated here as the number of prey species, from 30.0 to 25.4. Therefore, our study highlights large-scale impacts of climate change on marine food-web structure with potential deep consequences on ecosystem functioning. However, these impacts will likely be highly heterogeneous in space, challenging our current understanding of climate change impact on local marine ecosystems. © 2013 John Wiley & Sons Ltd.
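For reference, the two structural metrics quoted above follow standard food-web definitions: directed connectance is usually computed as C = L / S^2 (feeding links over squared species richness) and generality as the mean number of prey species per predator. A minimal sketch using these common definitions, not the paper's exact code:

    def connectance(num_links, num_species):
        """Directed connectance C = L / S**2."""
        return num_links / num_species ** 2

    def mean_generality(prey_counts_per_predator):
        """Average number of prey species per predator."""
        return sum(prey_counts_per_predator) / len(prey_counts_per_predator)

Because connectance divides links by S squared, it can rise (here from 0.26 to 0.29) even while both links and species are lost, as long as species richness falls faster than the number of feeding links.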
Collaborative Resource Allocation
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester
2007-01-01
Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure with corresponding database design combines object trees with multiple associated mortal instances and relational database to provide unprecedented traceability and simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.
A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services
NASA Astrophysics Data System (ADS)
Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.
2015-12-01
Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can remain in their own local environment, making it easy for modelers to maintain and update them. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid) and (3) a model integration framework. We present an architecture for coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, exposing their metadata through BMI functions. After a BMI-enabled model is deployed as a service, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing the model interface using BMI, as well as providing a set of utilities that smooth the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. Using the revised EMELI, an example will be presented on integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
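The coupling pattern described above rests on every component exposing the same small set of BMI calls. The method names below follow the CSDMS Basic Model Interface, but the toy model, its simplified signatures, and the driver loop are illustrative only and not the paper's actual service wrapper.

    import numpy as np

    class ToyBmiModel:
        """A trivial BMI-style component: linear decay of one state variable."""

        def initialize(self, config=None):
            self.time, self.dt, self.storage = 0.0, 1.0, 100.0

        def update(self):
            self.storage *= 0.9          # decay the state by 10% per step
            self.time += self.dt

        def get_current_time(self):
            return self.time

        def get_value(self, name):
            if name != "storage":
                raise KeyError(name)
            return np.array([self.storage])

        def finalize(self):
            pass

    # A framework such as the revised EMELI would drive each coupled component
    # through the same calls, locally or over HTTP when the component is a web service.
    model = ToyBmiModel()
    model.initialize()
    while model.get_current_time() < 5.0:
        model.update()
    print(model.get_value("storage"))
    model.finalize()

Because every component answers the same interface, the framework can discover variable names and units through BMI metadata calls and wire the outputs of one service to the inputs of another without bespoke adapters.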
The Effectiveness of Lecture-Integrated, Web-Supported Case Studies in Large Group Teaching
ERIC Educational Resources Information Center
Azzawi, May; Dawson, Maureen M.
2007-01-01
The effectiveness of lecture-integrated and web-supported case studies in supporting a large and academically diverse group of undergraduate students was evaluated in the present study. Case studies and resource (web)-based learning were incorporated as two complementary interactive learning strategies into the traditional curriculum. A truncated…
Evaluation of Webquest in Biology: Teachers' Perception
ERIC Educational Resources Information Center
Osman, Kamisah
2014-01-01
Teaching and learning based on web or web-based learning is a concept which integrates information and technology in education. Teachers and instructors have to assist their learners to learn to function in this information environment. However, teacher trainers and instructors have limited experience in the integration of ICT by using web in…
Cross-Cultural Language Learning and Web Design Complexity
ERIC Educational Resources Information Center
Park, Ji Yong
2015-01-01
Accepting the fact that culture and language are interrelated in second language learning (SLL), the web sites should be designed to integrate with the cultural aspects. Yet many SLL web sites fail to integrate with the cultural aspects and/or focus on language acquisition only. This study identified three issues: (1) anthropologists'…
The Acquisition of Integrated Science Process Skills in a Web-Based Learning Environment
ERIC Educational Resources Information Center
Saat, Rohaida Mohd
2004-01-01
Web-based learning is becoming prevalent in science learning. Some use specially designed programs, while others use materials available on the Internet. This qualitative case study examined the process of acquisition of integrated science process skills, particularly the skill of controlling variables, in a web-based learning environment among…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-29
...--County on January 28, 2011. The public notice is available on Charleston District's public Web site at... eight open mining pits over a twelve-year period, with pit depths ranging from 110 to 840 feet deep. The... of January 28, 2011, and are available on Charleston District's public Web site at http://www.sac...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
... electronically via the Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site...-6-foot-deep, 50-to-200-foot-wide headrace canal; (4) an existing 25-foot-long, 49-foot wide... the Web at http://www.ferc.gov using the ``eLibrary'' link. Enter the docket number excluding the last...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-11
...-deep intake canal; (5) new trash racks, head gates, and stop log structure; (6) an existing 6-foot... Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site http://www.ferc... copy of the application, can be viewed or printed on the ``eLibrary'' link of the Commission's Web site...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-15
... above mean sea level (msl); (3) an existing 12-foot-long, 16-foot-wide, 10-foot-deep head box and intake....2001(a)(1)(iii) and the instructions on the Commission's Web site http://www.ferc.gov/docs... Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp. Enter the docket number (P...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
...-deep, 24-foot-diameter vertical shaft to connect the upper and lower reservoir to the power tunnel; (6... electronically via the Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site...Library'' link of Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp. Enter...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
...) a new 130-foot-long, 20-foot-wide, 6-foot-deep concrete intake channel; (4) a new 10-foot-high, 20... on the Commission's Web site http://www.ferc.gov/docs-filing/efiling.asp . Commenters can submit... viewed or printed on the ``eLibrary'' link of Commission's Web site at http://www.ferc.gov/docs-filing...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-07
... Agency's Web site and call the appropriate advisory committee hot line/phone line to learn about possible... wrinkles in the face. The AQUAMID dermal filler is intended for use in mid-to-deep sub-dermal implantation... before the meeting. If FDA is unable to post the background material on its Web site prior to the meeting...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-10
....2001(a)(l)(iii) and the instructions on the Commission's Web site under the ``eFiling'' link. If unable... the Commission's Web site located at http://www.ferc.gov/filing-comments.asp . Please include the..., which will be dropped into a 8-foot-long, 6-foot-wide, and 6-foot-deep concrete diversion chamber that...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-16
...) filed an application for transfer of license of the Worthville Dam Project No. 3156, located on the Deep.... See 18 CFR 385.2001(a)(1)(iii)(2008) and the instructions on the Commission's Web site under the ``e... filings please go to the Commission's Web site located at http://www.ferc.gov/filing-comments.asp . More...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... structure; (9) a 12-foot-diameter, 2,842- foot-long concrete tunnel; (10) a 73-foot-deep forebay; (11) three... do not need to refile. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web...Library'' link of Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the docket...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-26
...-foot-wide and 10 to 12-foot-deep; (3) a new powerhouse equipped with a single 0.9 megawatt Kaplan... electronically via the Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site...Library'' link of the Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-09
... always check the Agency's Web site at http://www.fda.gov/AdvisoryCommittees/default.htm and scroll down... conditions by means other than the generation of deep heat within body tissues. On July 6, 2012 (77 FR 39953... than 2 business days before the meeting. If FDA is unable to post the background material on its Web...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-21
... within the proposed project boundary; (3) an existing 12-foot-long, 6.6-foot-wide, 6.6-foot-deep head box....2001(a)(1)(iii) and the instructions on the Commission's Web site http://www.ferc.gov/docs-filing... the application, can be viewed or printed on the ``eLibrary'' link of Commission's Web site at http...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-01
... powerhouse adjacent to the training walls; (7) a new 25-foot-wide, 5-foot-deep crest gate adjacent to the... electronically via the Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site... Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the docket number (P-13944) in...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-20
... Ramseur Project No. 11392 located on the Deep River in Randolph County, North Carolina. The transferor and...)(iii)(2009) and the instructions on the Commission's Web site under the ``e-Filing'' link. If unable to... the Commission's Web site located at http://www.ferc.gov/filing-comments.asp . More information about...
IntegratedMap: a Web interface for integrating genetic map data.
Yang, Hongyu; Wang, Hongyu; Gingle, Alan R
2005-05-01
IntegratedMap is a Web application and database schema for storing and interactively displaying genetic map data. Its Web interface includes a menu for direct chromosome/linkage group selection, a search form for selection based on mapped object location and linkage group displays. An overview display provides convenient access to the full range of mapped and anchored object types with genetic locus details, such as numbers, types and names of mapped/anchored objects displayed in a compact scrollable list box that automatically updates based on selected map location and object type. Also, multilinkage group and localized map views are available along with links that can be configured for integration with other Web resources. IntegratedMap is implemented in C#/ASP.NET and the package, including a MySQL schema creation script, is available from http://cggc.agtec.uga.edu/Data/download.asp
Urbach, E.; Vergin, K.L.; Larson, G.L.; Giovannoni, S.J.
2007-01-01
The distribution of bacterial and archaeal species in Crater Lake plankton varies dramatically over depth and with time, as assessed by hybridization of group-specific oligonucleotides to RNA extracted from lakewater. Nonmetric, multidimensional scaling (MDS) analysis of relative bacterial phylotype densities revealed complex relationships among assemblages sampled from depth profiles in July, August and September of 1997 through 1999. CL500-11 green nonsulfur bacteria (Phylum Chloroflexi) and marine Group I crenarchaeota are consistently dominant groups in the oxygenated deep waters at 300 and 500 m. Other phylotypes found in the deep waters are similar to surface and mid-depth populations and vary with time. Euphotic zone assemblages are dominated either by ??-proteobacteria or CL120-10 verrucomicrobia, and ACK4 actinomycetes. MDS analyses of euphotic zone populations in relation to environmental variables and phytoplankton and zooplankton population structures reveal apparent links between Daphnia pulicaria zooplankton population densities and microbial community structure. These patterns may reflect food web interactions that link kokanee salmon population densities to community structure of the bacterioplankton, via fish predation on Daphnia with cascading consequences to Daphnia bacterivory and predation on bacterivorous protists. These results demonstrate a stable bottom-water microbial community. They also extend previous observations of food web-driven changes in euphotic zone bacterioplankton community structure to an oligotrophic setting. © 2007 Springer Science+Business Media B.V.
Insect-damaged fossil leaves record food web response to ancient climate change and extinction.
Wilf, P
2008-01-01
Plants and herbivorous insects have dominated terrestrial ecosystems for over 300 million years. Uniquely in the fossil record, foliage with well-preserved insect damage offers abundant and diverse information both about producers and about ecological and sometimes taxonomic groups of consumers. These data are ideally suited to investigate food web response to environmental perturbations, and they represent an invaluable deep-time complement to neoecological studies of global change. Correlations between feeding diversity and temperature, between herbivory and leaf traits that are modulated by climate, and between insect diversity and plant diversity can all be investigated in deep time. To illustrate, I emphasize recent work on the time interval from the latest Cretaceous through the middle Eocene (67-47 million years ago (Ma)), including two significant events that affected life: the end-Cretaceous mass extinction (65.5 Ma) and its ensuing recovery; and globally warming temperatures across the Paleocene-Eocene boundary (55.8 Ma). Climatic effects predicted from neoecology generally hold true in these deep-time settings. Rising temperature is associated with increased herbivory in multiple studies, a result with major predictive importance for current global warming. Diverse floras are usually associated with diverse insect damage; however, recovery from the end-Cretaceous extinction reveals uncorrelated plant and insect diversity as food webs rebuilt chaotically from a drastically simplified state. Calibration studies from living forests are needed to improve interpretation of the fossil data.
50 CFR 679.28 - Equipment and operational requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... observer must be able to stand upright and have a work area at least 0.9 m deep in the area in front of the table and scale. (4) Table. The observer sampling station must include a table at least 0.6 m deep, 1.2... Station available on the NMFS Alaska Region Web site at http://www.fakr.noaa.gov. Inspections will be...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-03
...-wide, and 10-foot-deep head box and intake channel; (4) a new 6-foot-high, 14-foot-wide sluice gate...) an existing 375-foot-long, 20- foot-wide, and 4-foot-deep tailrace; (8) a new above ground 300-foot... instructions on the Commission's Web site http://www.ferc.gov/docs-filing/efiling.asp . Commenters can submit...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
...-kilowatt (kW) power recovery turbine; (4) a 25-foot-long, 8-foot- wide, 3-foot-deep cobble-lined tailrace... 150-foot-long, 8- foot-wide, 3-foot-deep cobble-lined tailrace discharging flows into Port Althorp... electronically via the Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-21
... approximately 350 acres and would include a three- berth, deep-water wharf. The proposed wharf would be 3,000 feet long and 105 feet wide, with access to suitably deep water provided by an approximately 1,100 foot... and at the Web site www.eisgatewaypacificwa.gov or can be requested by contacting the Corps, Seattle...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... about 416.7 feet above mean sea level; (3) an existing 31- foot-long, 12.9-foot-wide, and 10-foot-deep...-foot-long, 30-foot-wide, and 4-foot-deep tailrace; (8) a new above-ground 365-foot-long, 35-kilovolt... Commission's Web site http://www.ferc.gov/docs-filing/efiling.asp . Commenters can submit brief comments up...
Scientists as Communicators: Inclusion of a Science/Education Liaison on Research Expeditions
NASA Astrophysics Data System (ADS)
Sautter, L. R.
2004-12-01
Communication of research and scientific results to an audience outside of one's field poses a challenge to many scientists. Many research scientists have a natural ability to address the challenge, while others may choose to seek assistance. Research cruise PIs may wish to consider including a Science/Education Liaison (SEL) on future grants. The SEL is a marine scientist whose job before, during and after the cruise is to work with the shipboard scientists to document the science conducted. The SEL's role is three-fold: (1) to communicate shipboard science activities in near-real time to the public via the web; (2) to develop a variety of web-based resources based on the scientific operations; and (3) to assist educators with the integration of these resources into classroom curricula. The first role involves at-sea writing and relaying from ship-to-shore (via email) a series of Daily Logs. NOAA Ocean Exploration (OE) has mastered the use of web-posted Daily Logs for their major expeditions (see their OceanExplorer website), introducing millions of users to deep sea exploration. Project Oceanica uses the OE daily log model to document research expeditions. In addition to writing daily logs and participating on OE expeditions, Oceanica's SEL also documents the cruise's scientific operations and preliminary findings using video and photos, so that web-based resources (photo galleries, video galleries, and PhotoDocumentaries) can be developed during and following the cruise, and posted on the expedition's home page within the Oceanica web site (see URL). We have created templates for constructing these science resources which allow the shipboard scientists to assist with web resource development. Bringing users to the site is achieved through email communications to a growing list of educators, scientists, and students, and through collaboration with the COSEE network. With a large research expedition-based inventory of web resources now available, Oceanica is training teachers and college faculty on the use and incorporation of these resources into middle school, high school and introductory college classrooms. Support for a SEL on shipboard expeditions serves to catalyze the dissemination of the scientific operations to a broad audience of users.
HCLS 2.0/3.0: health care and life sciences data mashup using Web 2.0/3.0.
Cheung, Kei-Hoi; Yip, Kevin Y; Townsend, Jeffrey P; Scotch, Matthew
2008-10-01
We describe the potential of current Web 2.0 technologies to achieve data mashup in the health care and life sciences (HCLS) domains, and compare that potential to the nascent trend of performing semantic mashup. After providing an overview of Web 2.0, we demonstrate two scenarios of data mashup, facilitated by the following Web 2.0 tools and sites: Yahoo! Pipes, Dapper, Google Maps and GeoCommons. In the first scenario, we exploited Dapper and Yahoo! Pipes to implement a challenging data integration task in the context of DNA microarray research. In the second scenario, we exploited Yahoo! Pipes, Google Maps, and GeoCommons to create a geographic information system (GIS) interface that allows visualization and integration of diverse categories of public health data, including cancer incidence and pollution prevalence data. Based on these two scenarios, we discuss the strengths and weaknesses of these Web 2.0 mashup technologies. We then describe Semantic Web, the mainstream Web 3.0 technology that enables more powerful data integration over the Web. We discuss the areas of intersection of Web 2.0 and Semantic Web, and describe the potential benefits that can be brought to HCLS research by combining these two sets of technologies.
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention on the ocean and the rapid development of marine observation, there is an increasing demand for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing and a rapid grid-oriented strategy, a series of efficient and high-quality visualization methods, which can deal with large-scale and multi-dimensional marine data in different environmental circumstances, is proposed in this paper. Firstly, a high-quality seawater simulation is realized with an FFT algorithm, bump mapping and texture animation technology. Secondly, large-scale multi-dimensional marine hydrological environmental data are visualized with 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data are simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying marine environment simulation but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is built on the OSG 3D-rendering engine. It is integrated with the marine visualization methods mentioned above and shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. With such an application, valuable reference and decision-making information can be provided for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning and emergency response.
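A minimal sketch of the FFT idea behind the seawater simulation mentioned above: build a wave spectrum on a grid, randomize phases, and inverse-FFT to obtain a heightfield. The spectrum below is a toy stand-in under assumed grid and length parameters, not the authors' implementation.

```python
import numpy as np

N, L = 256, 512.0                            # grid size and patch length (m), assumed
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers along one axis
kx, ky = np.meshgrid(k, k)
k_mag = np.sqrt(kx**2 + ky**2)
k_mag[0, 0] = 1e-6                           # avoid division by zero at the DC term

spectrum = np.exp(-1.0 / (k_mag * 50.0) ** 2) / k_mag**4   # Phillips-like falloff
phases = np.exp(2j * np.pi * np.random.rand(N, N))         # random wave phases
height = np.real(np.fft.ifft2(np.sqrt(spectrum) * phases)) # water surface heights
print(height.shape, height.std())
```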
Frossard, Victor; Verneaux, Valérie; Millet, Laurent; Magny, Michel; Perga, Marie-Elodie
2015-06-01
Stable C isotope ratio (δ(13)C) values of chironomid remains (head capsules; HC) were used to infer changes in benthic C sources over the last 150 years for two French sub-Alpine lakes. The HCs were retrieved from a series of sediment cores from different depths. The HC δ(13)C values started to decrease with the onset of eutrophication. The HC δ(13)C temporal patterns varied among depths, which revealed spatial differences in the contribution of methanotrophic bacteria to benthic secondary production. The estimates of the methane (CH4)-derived C contribution to chironomid biomass ranged from a few percent prior to the 1930s to up to 30 % in recent times. The chironomid fluxes increased concomitantly with changes in HC δ(13)C values before a drastic decrease due to the development of hypoxic conditions. The hypoxia reinforced the importance of CH4-derived C transfer for chironomid production. In Lake Annecy, the HC δ(13)C values were negatively correlated with total organic C (TOC) content in the sediment (Corg), whereas no relationship was found in Lake Bourget. In Lake Bourget, chironomid abundances reached their maximum at TOC contents between 1 and 1.5 % Corg, which could constitute a threshold for change in chironomid abundance and consequently for the integration of CH4-derived C into the lake food webs. Our results indicated that the CH4-derived C contribution to the benthic food webs occurred at different depths in these two large, deep lakes (deep waters and sublittoral zone), and that the trophic transfer of this C was promoted in sublittoral zones where O2 gradients were dynamic.
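A hedged sketch of the two-end-member mixing model implied by the contribution estimates above: the fraction of chironomid biomass carbon derived from methane-oxidizing bacteria, estimated from head-capsule δ13C. The end-member values are illustrative defaults, not the study's calibrated values.

```python
def methane_carbon_fraction(d13c_hc, d13c_detritus=-30.0, d13c_mob=-65.0):
    """Fraction of C from methane-oxidizing bacteria (MOB), two end-member mixing."""
    f = (d13c_hc - d13c_detritus) / (d13c_mob - d13c_detritus)
    return min(max(f, 0.0), 1.0)   # clamp to the physically meaningful range

print(methane_carbon_fraction(-40.5))   # ~0.3, i.e. ~30 % CH4-derived C
```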
An Integrative Model of "Information Visibility" and "Information Seeking" on the Web
ERIC Educational Resources Information Center
Mansourian, Yazdan; Ford, Nigel; Webber, Sheila; Madden, Andrew
2008-01-01
Purpose: This paper aims to encapsulate the main procedure and key findings of a qualitative research on end-users' interactions with web-based search tools in order to demonstrate how the concept of "information visibility" emerged and how an integrative model of information visibility and information seeking on the web was constructed.…
Integrating Web 2.0-Based Informal Learning with Workplace Training
ERIC Educational Resources Information Center
Zhao, Fang; Kemp, Linzi J.
2012-01-01
Informal learning takes place in the workplace through connection and collaboration mediated by Web 2.0 applications. However, little research has yet been published that explores informal learning and how to integrate it with workplace training. We aim to address this research gap by developing a conceptual Web 2.0-based workplace learning and…
Evaluation of the Kloswall longwall mining system
NASA Astrophysics Data System (ADS)
Guay, P. J.
1982-04-01
A new longwall mining system specifically designed to extract a very deep web (48 inches or deeper) from a longwall panel was studied. Productivity and cost analysis comparing the new mining system with a conventional longwall operation taking a 30 inch wide web is presented. It is shown that the new system will increase annual production and return on investment in most cases. Conceptual drawings and specifications for a high capacity three drum shearer and a unique shield type of roof support specifically designed for very wide web operation are reported. The advantages and problems associated with wide web mining in general and as they relate specifically to the equipment selected for the new mining system are discussed.
NASA Astrophysics Data System (ADS)
Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.
2016-12-01
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and vehicles equipped with cameras. The web site "JAMSTEC E-library of Deep-sea Images: J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos by keywords, easy-to-understand icons, and dive information, because operations staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data, including videos and photos, are not only valuable academically but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new functions for 3-dimensional display that synchronize various dive survey data with videos. New functions: Users can search for dive survey data on 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and the associated environmental data, e.g. water temperature, salinity, and rock and biological sample photos, obtained during the dive survey. Users can browse a dive track visualized in a 3D virtual space using a WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at any point on the track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while the displays of dive survey data stay synchronized with the trace. Users can refer directly to additional information from other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page where a 3D virtual dive track is displayed. A 3D visualization of a dive track lets users experience a virtual dive survey. In addition, synchronizing a virtual dive track with videos makes it easy to understand the living organisms and geological environments at a dive point. Therefore, these functions will visually support understanding of deep-sea environments in lectures and educational activities.
NASA Astrophysics Data System (ADS)
Preciado, Izaskun; Cartes, Joan E.; Punzón, Antonio; Frutos, Inmaculada; López-López, Lucía; Serrano, Alberto
2017-03-01
Trophic interactions in the deep-sea fish community of the Galicia Bank seamount (NE Atlantic) were inferred by using stomach contents analyses (SCA) and stable isotope analyses (SIA) of 27 fish species and their main prey items. Samples were collected during three surveys performed in 2009, 2010 and 2011 between 625 and 1800 m depth. Three main trophic guilds were determined using SCA data: pelagic, benthopelagic and benthic feeders, respectively. Vertically migrating macrozooplankton and meso-bathypelagic shrimps were found to play a key role as pelagic prey for the deep-sea fish community of the Galicia Bank. Habitat overlap was rarely detected; in fact, when species coexisted, most of them showed low dietary overlap, indicating a high degree of resource partitioning. A high potential competition, however, was observed among benthopelagic feeders, i.e. Etmopterus spinax, Hoplostethus mediterraneus and Epigonus telescopus. A significant correlation was found between δ15N and δ13C for all the analysed species. When calculating Trophic Levels (TLs) for the main fish species, using both the SCA and SIA approaches, some discrepancies arose: TLs calculated from SIA were significantly higher than those obtained from SCA, probably indicating a higher consumption of benthic-suprabenthic prey in the previous months. During the summer, food web functioning in the Galicia Bank was more influenced by the assemblages dwelling in the water column than by deep-sea benthos, which was rather scarce in the summer samples. These discrepancies demonstrate the importance of using both approaches, SCA (snapshot of diet) and SIA (assimilated food in previous months), when undertaking trophic studies, if an overview of food web dynamics in different compartments of the ecosystem is to be obtained.
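A hedged sketch of how trophic level (TL) is commonly derived from δ15N in SIA studies: TL = TL_base + (δ15N_consumer − δ15N_base) / Δ15N, with a trophic enrichment factor Δ15N of roughly 3.4 ‰ per level. The baseline and consumer values below are placeholders, not the Galicia Bank measurements.

```python
def trophic_level(d15n_consumer, d15n_base=6.0, tl_base=2.0, enrichment=3.4):
    """Trophic level from nitrogen stable isotopes (standard SIA formulation)."""
    return tl_base + (d15n_consumer - d15n_base) / enrichment

for species, d15n in [("E. spinax", 12.1), ("H. mediterraneus", 11.3)]:
    print(species, round(trophic_level(d15n), 2))
```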
Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience
Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi
2011-01-01
Objective: Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Methods: Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion: We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477
Semantic SenseLab: Implementing the vision of the Semantic Web in neuroscience.
Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi
2010-01-01
Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/. 2009 Elsevier B.V. All rights reserved.
75 FR 37783 - DOE/NSF Nuclear Science Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... Science Foundation's Nuclear Physics Office. Technical Talk on Deep Underground Science and Engineering... Energy's Office of Nuclear Physics Web site for viewing. Rachel Samuel, Deputy Committee Management...
NASA Astrophysics Data System (ADS)
Kürten, Benjamin; Al-Aidaroos, Ali M.; Kürten, Saskia; El-Sherbiny, Mohsen M.; Devassy, Reny P.; Struck, Ulrich; Zarokanellos, Nikolaos; Jones, Burton H.; Hansen, Thomas; Bruss, Gerd; Sommer, Ulrich
2016-01-01
Although zooplankton occupy key roles in aquatic biogeochemical cycles, little is known about the pelagic food web and trophodynamics of zooplankton in the Red Sea. Natural abundance stable isotope analysis (SIA) of carbon (δ13C) and N (δ15N) is one approach to elucidating pelagic food web structures and diet assimilation. Integrating the combined effects of ecological processes and hydrography, ecohydrographic features often translate into geographic patterns in δ13C and δ15N values at the base of food webs. This is due, for example, to divergent 15N abundances in source end-members (deep water sources: high δ15N, diazotrophs: low δ15N). Such patterns in the spatial distributions of stable isotope values were coined isoscapes. Empirical data on the atmospheric, oceanographic, and biological processes that drive the ecohydrographic gradients of the oligotrophic Red Sea are under-explored, and some are anticipated rather than proven. Specifically, five processes underpin Red Sea gradients: (a) monsoon-related intrusions of nutrient-rich Indian Ocean water; (b) basin-scale thermohaline circulation; (c) mesoscale eddy activity that causes up-welling of deep water nutrients into the upper layer; (d) the biological fixation of atmospheric nitrogen (N2) by diazotrophs; and (e) the deposition of dust and aerosol-derived N. This study assessed relationships between environmental samples (nutrients, chlorophyll a), oceanographic data (temperature, salinity, current velocity [ADCP]), particulate organic matter (POM), and net-phytoplankton, and the δ13C and δ15N values of zooplankton collected in spring 2012 from 16°28′ to 26°57′N along the central axis of the Red Sea. The δ15N of bulk POM and most zooplankton taxa increased from North (Duba) to South (Farasan). The potential contribution of deep water nutrient-fueled phytoplankton, POM, and diazotrophs varied among sites. Estimates suggested higher diazotroph contributions in the North, a greater contribution of POM in the South, and of small phytoplankton in the central Red Sea. Consistent variation across taxonomic and trophic groups at the latitudinal scale, corresponding with patterns of nutrient stoichiometry and phytoplankton composition, indicates that the zooplankton ecology in the Red Sea is largely influenced by hydrographic features. It suggests that the primary ecohydrography of the Red Sea is driven not only by the thermohaline circulation, but also by mesoscale activities that transport nutrients to the upper water layers and interact with the general circulation pattern. Ecohydrographic features of the Red Sea, therefore, aid in explaining the observed configuration of its isoscape at the macroecological scale.
Academic Research Integration System
ERIC Educational Resources Information Center
Surugiu, Iula; Velicano, Manole
2008-01-01
This paper presents the results of the research activity carried out so far on enhanced web services and system integration. The objective of the paper is to define the software architecture for a coherent framework and methodology for enhancing existing web services into an integrated system. This document presents the research work that has…
NASA Technical Reports Server (NTRS)
Sanders, David B.
1997-01-01
Several important milestones were passed during the past year of our ISO observing program: (1) Our first ISO data were successfully obtained. ISOCAM data were taken for our primary deep field target in the 'Lockman Hole'. Thirteen hours of integration (taken over 4 contiguous orbits) were obtained in the LW2 filter of a 3′ × 3′ region centered on the position of minimum HI column density in the Lockman Hole. The data were obtained in microscanning mode. This is the deepest integration attempted to date (by almost a factor of 4 in time) with ISOCAM. (2) The deep survey data obtained for the Lockman Hole were received by the Japanese P.I. (Yoshi Taniguchi) in early December, 1996 (following release of the improved pipeline formatted data from Vilspa), and a copy was forwarded to Hawaii shortly thereafter. These data were processed independently by the Japan and Hawaii groups during the latter part of December 1996 and early January 1997. The Hawaii group made use of the U.S. ISO data center at IPAC/Caltech in Pasadena to carry out their data reduction, while the Japanese group used a copy of the ISOCAM data analysis package made available to them through an agreement with the head of the ISOCAM team, Catherine Cesarsky. (3) Results of our LW2 Deep Survey in the Lockman Hole were first reported at the ISO Workshop "Taking ISO to the Limits: Exploring the Faintest Sources in the Infrared" held at the ISO Science Operations Center in Villafranca, Spain (VILSPA) on 3-4 February, 1997. Yoshi Taniguchi gave an invited presentation summarizing the results of the U.S.-Japan team, and Dave Sanders gave an invited talk summarizing the results of the Workshop at the conclusion of the two-day meeting. The texts of the talks by Taniguchi and Sanders are included in the printed Workshop Proceedings and are published in full on the Web. By several independent accounts, the U.S.-Japan Deep Survey results were one of the highlights of the Workshop; these data showed conclusively that the ISOCAM signal-to-noise ratio continues to improve as the square root of integration time for periods as long as 13 hours.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-20
... Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site http:[sol][sol... following new features: (1) A 8-foot-long, 3-foot-wide, 3-foot-deep drop inlet structure; (2) a 2-foot... available for review at the Commission in the Public Reference Room or may be viewed on the Commission's Web...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-14
.... See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site at http://www.ferc..., 3.75-foot-wide, 2-foot-deep pond; (2) a 1-foot-high lumber diversion into 2.5-foot-high, 3.75-foot... public inspection. This filing may be viewed on the web at http://www.ferc.gov using the ``eLibrary...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-26
...) a 12-foot-diameter, 2,842-foot-long concrete tunnel; (10) a 73-foot-deep forebay; (11) three 5.4- to... Internet. See 18 CFR 385.2001(a)(1)(iii) and the instructions on the Commission's Web site http://www.ferc... Commission's Web site at http://www.ferc.gov/docs-filing/elibrary.asp . Enter the docket number (P-13804) in...
Integrating Thematic Web Portal Capabilities into the NASA Earthdata Web Infrastructure
NASA Technical Reports Server (NTRS)
Wong, Minnie; Baynes, Kathleen E.; Huang, Thomas; McLaughlin, Brett
2015-01-01
This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos and animations, an interactive tool to view and access sea level change data, and a dashboard showing sea level change indicators.
Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2007-01-01
This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and the secure interoperation between sensor webs owned and controlled by various entities. Select terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability between integrated, intelligent sensor webs, both among terrestrial assets and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.
Approaches to Linked Open Data at data.oceandrilling.org
NASA Astrophysics Data System (ADS)
Fils, D.
2012-12-01
The data.oceandrilling.org web application applies Linked Open Data (LOD) patterns to expose Deep Sea Drilling Project (DSDP), Ocean Drilling Program (ODP) and Integrated Ocean Drilling Program (IODP) data. Ocean drilling data are represented in a rich range of formats: high-resolution images, file-based data sets and sample-based data. This richness of data types has been well met by semantic approaches and will be demonstrated. Data have been extracted from CSV, HTML and RDBMS sources through custom software and existing packages for loading into a SPARQL 1.1 compliant triple store. Practices have been developed to streamline the maintenance of the RDF graphs and properly expose them using LOD approaches like VoID and HTML-embedded structured data. Custom and existing vocabularies are used to allow semantic relations between resources. Use of the W3C draft RDF Data Cube Vocabulary and other approaches for encoding time scales, taxonomic fossil data and other graphs will be shown. A software layer written in Google Go mediates the RDF-to-web pipeline. The approach used is general and can be applied to other similar environments like node.js or Python Twisted. To facilitate communication, user interface software libraries such as D3 and packages such as S2S and LodLive have been used. Additionally, OpenSearch APIs, structured data in HTML and SPARQL endpoints provide various access methods for applications. data.oceandrilling.org is not viewed as a web site but as an application that communicates with a range of clients. This approach helps guide the development more along software practices than along web site authoring approaches.
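A hedged sketch of how a client application might query a SPARQL 1.1 endpoint such as the one described above. The endpoint path and the generic triple pattern are assumptions for illustration; the site's VoID description would list the real graphs and vocabularies.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://data.oceandrilling.org/sparql")  # assumed path
endpoint.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```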
Sweetman, Andrew K; Smith, Craig R; Dale, Trine; Jones, Daniel O B
2014-12-07
Jellyfish blooms are common in many oceans, and anthropogenic changes appear to have increased their magnitude in some regions. Although mass falls of jellyfish carcasses have been observed recently at the deep seafloor, the dense necrophage aggregations and rapid consumption rates typical for vertebrate carrion have not been documented. This has led to a paradigm of limited energy transfer to higher trophic levels at jelly falls relative to vertebrate organic falls. We show from baited camera deployments in the Norwegian deep sea that dense aggregations of deep-sea scavengers (more than 1000 animals at peak densities) can rapidly form at jellyfish baits and consume entire jellyfish carcasses in 2.5 h. We also show that scavenging rates on jellyfish are not significantly different from fish carrion of similar mass, and reveal that scavenging communities typical for the NE Atlantic bathyal zone, including the Atlantic hagfish, galatheid crabs, decapod shrimp and lysianassid amphipods, consume both types of carcasses. These rapid jellyfish carrion consumption rates suggest that the contribution of gelatinous material to organic fluxes may be seriously underestimated in some regions, because jelly falls may disappear much more rapidly than previously thought. Our results also demonstrate that the energy contained in gelatinous carrion can be efficiently incorporated into large numbers of deep-sea scavengers and food webs, lessening the expected impacts (e.g. smothering of the seafloor) of enhanced jellyfish production on deep-sea ecosystems and pelagic-benthic coupling. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
ERIC Educational Resources Information Center
Tsai, Chia-Wen
2014-01-01
Innovative teaching methods integrated with web technologies have been increasingly used in higher education. However, there are few studies discussing effective web-mediated teaching methods for both students and teachers. To help students learn and develop their academic involvement in a blended course, and improve their thoughts regarding this…
Integration of Web 2.0 Tools in Learning a Programming Course
ERIC Educational Resources Information Center
Majid, Nazatul Aini Abd
2014-01-01
Web 2.0 tools are expected to assist students to acquire knowledge effectively in their university environment. However, the lack of effort from lecturers in planning the learning process can make it difficult for the students to optimize their learning experiences. The aim of this paper is to integrate Web 2.0 tools with learning strategy in…
Motivating Pre-Service Teachers in Technology Integration of Web 2.0 for Teaching Internships
ERIC Educational Resources Information Center
Kim, Hye Jeong; Jang, Hwan Young
2015-01-01
The aim of this study was to examine the predictors of pre-service teachers' use of Web 2.0 tools during a teaching internship, after a course that emphasized the use of the tools for instructional activities. Results revealed that integrating Web 2.0 tools during their teaching internship was strongly predicted by participants' perceived…
Integrating geo web services for a user driven exploratory analysis
NASA Astrophysics Data System (ADS)
Moncrieff, Simon; Turdukulov, Ulanbek; Gulland, Elizabeth-Kate
2016-04-01
In data exploration, several online data sources may need to be dynamically aggregated or summarised over a spatial region, time interval, or set of attributes. With respect to thematic data, web services are mainly used to present results, leading to a supplier-driven service model that limits exploration of the data. In this paper we propose a user-need-driven service model based on geo web processing services. The aim of the framework is to provide a method for scalable and interactive access to various geographic data sources on the web. The architecture combines a data query, a processing technique and a visualisation methodology to rapidly integrate and visually summarise properties of a dataset. We illustrate the environment with a health-related use case that derives the Age Standardised Rate - a dynamic index that requires integration of existing interoperable web services for demographic data in conjunction with standalone, non-spatial, secure database servers used in health research. Although the example is specific to the health field, the architecture and the proposed approach are relevant and applicable to other fields that require integration and visualisation of geo datasets from various web services, and thus, we believe, the approach is generic.
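A hedged sketch of the directly Age Standardised Rate (ASR) that the use case derives: age-specific rates are weighted by a standard population and expressed per 100,000. The age bands, counts and weights below are illustrative, not data from the services the paper integrates.

```python
def age_standardised_rate(cases, person_years, std_population):
    """Direct standardisation: weight age-specific rates by a standard population."""
    rates = [c / py for c, py in zip(cases, person_years)]        # age-specific rates
    weighted = sum(r * w for r, w in zip(rates, std_population))
    return 100000 * weighted / sum(std_population)                # per 100,000

cases        = [2, 15, 40, 120]                   # events per age band (illustrative)
person_years = [50000, 60000, 40000, 30000]
std_pop      = [20000, 30000, 25000, 10000]       # standard population weights
print(round(age_standardised_rate(cases, person_years, std_pop), 1))
```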
High-performance spider webs: integrating biomechanics, ecology and behaviour
Harmer, Aaron M. T.; Blackledge, Todd A.; Madin, Joshua S.; Herberstein, Marie E.
2011-01-01
Spider silks exhibit remarkable properties, surpassing most natural and synthetic materials in both strength and toughness. Orb-web spider dragline silk is the focus of intense research by material scientists attempting to mimic these naturally produced fibres. However, biomechanical research on spider silks is often removed from the context of web ecology and spider foraging behaviour. Similarly, evolutionary and ecological research on spiders rarely considers the significance of silk properties. Here, we highlight the critical need to integrate biomechanical and ecological perspectives on spider silks to generate a better understanding of (i) how silk biomechanics and web architectures interacted to influence spider web evolution along different structural pathways, and (ii) how silks function in an ecological context, which may identify novel silk applications. An integrative, mechanistic approach to understanding silk and web function, as well as the selective pressures driving their evolution, will help uncover the potential impacts of environmental change and species invasions (of both spiders and prey) on spider success. Integrating these fields will also allow us to take advantage of the remarkable properties of spider silks, expanding the range of possible silk applications from single threads to two- and three-dimensional thread networks. PMID:21036911
Schäuble, Sascha; Stavrum, Anne-Kristin; Bockwoldt, Mathias; Puntervoll, Pål; Heiland, Ines
2017-06-24
Systems Biology Markup Language (SBML) is the standard model representation and description language in systems biology. Enriching and analysing systems biology models by integrating the multitude of available data increases the predictive power of these models. This may be a daunting task, which commonly requires bioinformatic competence and scripting. We present SBMLmod, a Python-based web application and service that automates the integration of high-throughput data into SBML models. Subsequent steady state analysis is readily accessible via the web service COPASIWS. We illustrate the utility of SBMLmod by integrating gene expression data from different healthy tissues as well as from a cancer dataset into a previously published model of mammalian tryptophan metabolism. SBMLmod is a user-friendly platform for model modification and simulation. The web application is available at http://sbmlmod.uit.no , whereas the WSDL definition file for the web service is accessible via http://sbmlmod.uit.no/SBMLmod.wsdl . Furthermore, the entire package can be downloaded from https://github.com/MolecularBioinformatics/sbml-mod-ws . We envision that SBMLmod will make automated model modification and simulation available to a broader research community.
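A hedged sketch of the kind of modification SBMLmod automates: scaling kinetic parameters of an SBML model by tissue-specific gene expression. This is not the SBMLmod API; it only illustrates the idea with python-libsbml, and the file name, reaction IDs and parameter name are assumptions.

```python
import libsbml

expression = {"TDO2": 0.2, "IDO1": 1.8}                   # hypothetical expression factors

doc = libsbml.readSBMLFromFile("tryptophan_model.xml")    # hypothetical model file
model = doc.getModel()
for i in range(model.getNumReactions()):
    reaction = model.getReaction(i)
    factor = expression.get(reaction.getId())
    kl = reaction.getKineticLaw()
    if factor is None or kl is None:
        continue
    vmax = kl.getParameter("Vmax")                        # assumed local parameter name
    if vmax is not None:
        vmax.setValue(vmax.getValue() * factor)           # scale Vmax by expression

libsbml.writeSBMLToFile(doc, "tryptophan_model_modified.xml")
```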
NASA Astrophysics Data System (ADS)
Arabshahi, P.; Chao, Y.; Chien, S.; Gray, A.; Howe, B. M.; Roy, S.
2008-12-01
In many areas of Earth science, including climate change research, there is a need for near real-time integration of data from heterogeneous and spatially distributed sensors, in particular in-situ and space- based sensors. The data integration, as provided by a smart sensor web, enables numerous improvements, namely, 1) adaptive sampling for more efficient use of expensive space-based sensing assets, 2) higher fidelity information gathering from data sources through integration of complementary data sets, and 3) improved sensor calibration. The specific purpose of the smart sensor web development presented here is to provide for adaptive sampling and calibration of space-based data via in-situ data. Our ocean-observing smart sensor web presented herein is composed of both mobile and fixed underwater in-situ ocean sensing assets and Earth Observing System (EOS) satellite sensors providing larger-scale sensing. An acoustic communications network forms a critical link in the web between the in-situ and space-based sensors and facilitates adaptive sampling and calibration. After an overview of primary design challenges, we report on the development of various elements of the smart sensor web. These include (a) a cable-connected mooring system with a profiler under real-time control with inductive battery charging; (b) a glider with integrated acoustic communications and broadband receiving capability; (c) satellite sensor elements; (d) an integrated acoustic navigation and communication network; and (e) a predictive model via the Regional Ocean Modeling System (ROMS). Results from field experiments, including an upcoming one in Monterey Bay (October 2008) using live data from NASA's EO-1 mission in a semi closed-loop system, together with ocean models from ROMS, are described. Plans for future adaptive sampling demonstrations using the smart sensor web are also presented.
Sea Level Station Metadata for Tsunami Detection, Warning and Research
NASA Astrophysics Data System (ADS)
Stroker, K. J.; Marra, J.; Kari, U. S.; Weinstein, S. A.; Kong, L.
2007-12-01
The devastating earthquake and tsunami of December 26, 2004 has greatly increased recognition of the need for water level data both from the coasts and the deep-ocean. In 2006, the National Oceanic and Atmospheric Administration (NOAA) completed a Tsunami Data Management Report describing the management of data required to minimize the impact of tsunamis in the United States. One of the major gaps defined in this report is the access to global coastal water level data. NOAA's National Geophysical Data Center (NGDC) and National Climatic Data Center (NCDC) are working cooperatively to bridge this gap. NOAA relies on a network of global data, acquired and processed in real-time to support tsunami detection and warning, as well as high-quality global databases of archived data to support research and advanced scientific modeling. In 2005, parties interested in enhancing the access and use of sea level station data united under the NOAA NCDC's Integrated Data and Environmental Applications (IDEA) Center's Pacific Region Integrated Data Enterprise (PRIDE) program to develop a distributed metadata system describing sea level stations (Kari et. al., 2006; Marra et.al., in press). This effort started with pilot activities in a regional framework and is targeted at tsunami detection and warning systems being developed by various agencies. It includes development of the components of a prototype sea level station metadata web service and accompanying Google Earth-based client application, which use an XML-based schema to expose, at a minimum, information in the NOAA National Weather Service (NWS) Pacific Tsunami Warning Center (PTWC) station database needed to use the PTWC's Tide Tool application. As identified in the Tsunami Data Management Report, the need also exists for long-term retention of the sea level station data. NOAA envisions that the retrospective water level data and metadata will also be available through web services, using an XML-based schema. Five high-priority metadata requirements identified at a water level workshop held at the XXIV IUGG Meeting in Perugia will be addressed: consistent, validated, and well defined numbers (e.g. amplitude); exact location of sea level stations; a complete record of sea level data stored in the archive; identifying high-priority sea level stations; and consistent definitions. NOAA's National Geophysical Data Center (NGDC) and co-located World Data Center for Solid Earth Geophysics (including tsunamis) would hold the archive of the sea level station data and distribute the standard metadata. Currently, NGDC is also archiving and distributing the DART buoy deep-ocean water level data and metadata in standards based formats. Kari, Uday S., John J. Marra, Stuart A. Weinstein, 2006 A Tsunami Focused Data Sharing Framework For Integration of Databases that Describe Water Level Station Specifications. AGU Fall Meeting, 2006. San Francisco, California. Marra, John, J., Uday S. Kari, and Stuart A. Weinstein (in press). A Tsunami Detection and Warning-focused Sea Level Station Metadata Web Service. IUGG XXIV, July 2-13, 2007. Perugia, Italy.
50 CFR 679.28 - Equipment and operational requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... sampling table. The observer must be able to stand upright and have a work area at least 0.9 m deep in the... least 0.6 m deep, 1.2 m wide and 0.9 m high and no more than 1.1 m high. The entire surface area of the... Station available on the NMFS Alaska Region Web site at http://www.fakr.noaa.gov. Inspections will be...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
... new 3,000-foot-long, 1,000-foot- wide, 50- to 75-foot-deep upper reservoir, with a surface area of 50...-foot-long, 1,000-foot- wide, 50- to 75-foot-deep lower reservoir with a surface area of 80 acres and a... instructions on the Commission's Web site http://www.ferc.gov/docs-filing/efiling.asp . Commenters can submit...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-08
... the drill pad would measure 4 by 20 feet and be approximately 5 feet deep. An estimated 1.45 acres of... the drill pad would measure 8 by 10 feet and be approximately 6 feet deep. An estimated 42.64 acres of... the proposal will be posted on the project Web site at http://www.fs.fed.us/nepa/nepa_project_exp.php...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-30
...)(1)(iii) and the instructions on the Commission's Web site http://www.ferc.gov/docs-filing/efiling... conduit equipped with a 7-foot-high, 7-foot-wide gate; (e) a 16-foot-wide, 4-foot-deep, 200-foot-long... a 198 kW turbine generating unit; (d) a 14-foot-wide, 9-foot-deep, 100-foot-long tailrace; and (e...
Public health, GIS, and the internet.
Croner, Charles M
2003-01-01
Internet access and use of georeferenced public health information for GIS application will be an important and exciting development for the nation's Department of Health and Human Services and other health agencies in this new millennium. Technological progress toward public health geospatial data integration, analysis, and visualization of space-time events using the Web portends eventual robust use of GIS by public health and other sectors of the economy. Increasing Web resources from distributed spatial data portals and global geospatial libraries, and a growing suite of Web integration tools, will provide new opportunities to advance disease surveillance, control, and prevention, and insure public access and community empowerment in public health decision making. Emerging supercomputing, data mining, compression, and transmission technologies will play increasingly critical roles in national emergency, catastrophic planning and response, and risk management. Web-enabled public health GIS will be guided by Federal Geographic Data Committee spatial metadata, OpenGIS Web interoperability, and GML/XML geospatial Web content standards. Public health will become a responsive and integral part of the National Spatial Data Infrastructure.
Web services as applications' integration tool: QikProp case study.
Laoui, Abdel; Polyakov, Valery R
2011-07-15
Web services are a new technology that enables the integration of applications running on different platforms, primarily by using XML for communication among different computers over the Internet. A large number of applications were designed as stand-alone systems before the concept of Web services was introduced, and it is a challenge to integrate them into larger computational networks. A generally applicable method of wrapping stand-alone applications as Web services was developed and is described. To test the technology, it was applied to QikProp for DOS (Windows). Although the performance of the application did not change when it was delivered as a Web service, this form of deployment offered several advantages, such as simplified and centralized maintenance, a smaller number of licenses, and practically no training for the end user. Because the described approach can wrap almost any legacy application as a Web service, this form of delivery may be recommended as a global alternative to traditional deployment solutions. Copyright © 2011 Wiley Periodicals, Inc.
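A hedged sketch of the wrapping approach described above: exposing a legacy command-line application behind a web endpoint. QikProp's real arguments and the authors' XML interface are not reproduced; the executable name and input handling are placeholders for a generic HTTP wrapper.

```python
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_legacy_app():
    payload = request.get_json()
    with open("input.dat", "w") as fh:                 # stage the uploaded input
        fh.write(payload["input"])
    result = subprocess.run(["legacy_app.exe", "input.dat"],   # hypothetical binary
                            capture_output=True, text=True)
    return jsonify({"stdout": result.stdout, "returncode": result.returncode})

if __name__ == "__main__":
    app.run(port=8080)
```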
About Kennedy's Disease: Symptoms
Loss of sensation. Decreased or absent deep tendon reflexes (e.g. when a doctor taps the knee ...).
76 FR 24923 - National Science Board; Sunshine Act Meetings; Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-03
...: Some portions open, some portions closed. UPDATES: Please refer to the National Science Board Web site... Information Item: Status Deep Underground Science and Engineering Laboratory Information Item: High...
The Deep Impact Network Experiment Operations Center Monitor and Control System
NASA Technical Reports Server (NTRS)
Wang, Shin-Ywan (Cindy); Torgerson, J. Leigh; Schoolcraft, Joshua; Brenman, Yan
2009-01-01
The Interplanetary Overlay Network (ION) software at JPL is an implementation of Delay/Disruption Tolerant Networking (DTN), which has been proposed as an interplanetary protocol to support space communication. The JPL Deep Impact Network (DINET) is a technology development experiment intended to increase the technical readiness of the JPL-implemented ION suite. The DINET Experiment Operations Center (EOC) developed by JPL's Protocol Technology Lab (PTL) was critical in accomplishing the experiment. The EOC, containing all of the simulated space end nodes and one administrative node, exercised publish and subscribe functions for payload data among all end nodes to verify the effectiveness of data exchange over the ION protocol stacks. A Monitor and Control System was created and installed on the administrative node as a multi-tiered, internet-based Web application to support the Deep Impact Network Experiment by allowing monitoring and analysis of the data delivery and statistics from ION. This Monitor and Control System includes the capability of receiving protocol status messages, classifying and storing status messages into a database from the ION simulation network, and providing web interfaces for viewing the live results in addition to interactive database queries.
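A hedged sketch of the storage tier described above: classifying incoming status messages and storing them so a web tier can answer interactive queries. The message fields and table layout are assumptions for illustration, not the DINET EOC schema.

```python
import sqlite3

conn = sqlite3.connect("dinet_eoc.db")
conn.execute("""CREATE TABLE IF NOT EXISTS status_messages
                (node TEXT, msg_type TEXT, received_at TEXT, body TEXT)""")

def ingest(node, msg_type, received_at, body):
    """Store one classified protocol status message."""
    conn.execute("INSERT INTO status_messages VALUES (?, ?, ?, ?)",
                 (node, msg_type, received_at, body))
    conn.commit()

ingest("node-7", "bundle-delivered", "2009-01-15T12:00:00Z", "custody accepted")

# A web interface could expose interactive queries such as per-node delivery counts.
for row in conn.execute("SELECT node, COUNT(*) FROM status_messages GROUP BY node"):
    print(row)
```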
Leprosy: ongoing medical and social struggle in Vietnam.
Nguyen, Nhiem; Tat Nguyen, Thang; Hong Phan, Hai; Tam Tran, Tinh
2008-01-01
Until recently, leprosy had been prominent in 33 countries worldwide, and Vietnam was ranked among the top 14 endemic countries. The leprosy situation in Vietnam was reviewed as a sample of the worldwide ongoing medical and social struggle to assess the need for continued support for leprosy control activities and for social programs of integration of leprosy victims into the community. A search was conducted of official Vietnamese publications, World Health Organization documents, major electronic databases, and popular leprosy Web sites; as well, notes from visits to local leprosy clinics and interviews with health workers were checked. Important achievements were realized through national determination and international collaboration. In contrast with the impressive performance statistics at the national level, and despite strong government efforts for leprosy control, the results obtained at the province-city and district-commune levels still exhibit deficiencies in case detection, treatment, and socioeconomic integration of leprosy victims. The struggle to eliminate such a complex and destructive infectious disease as leprosy does not end with the cure. Deep-seated medical and social problems remain. These problems are best solved through community-based approaches.
Granulomatosis with Polyangiitis (GPA): Symptoms and Causes
... of the nose (saddling) caused by weakened cartilage; deep vein thrombosis.
Saint: a lightweight integration environment for model annotation.
Lister, Allyson L; Pocock, Matthew; Taschuk, Morgan; Wipat, Anil
2009-11-15
Saint is a web application which provides a lightweight annotation integration environment for quantitative biological models. The system enables modellers to rapidly mark up models with biological information derived from a range of data sources. Saint is freely available for use on the web at http://www.cisban.ac.uk/saint. The web application is implemented in Google Web Toolkit and Tomcat, with all major browsers supported. The Java source code is freely available for download at http://saint-annotate.sourceforge.net. The Saint web server requires an installation of libSBML and has been tested on Linux (32-bit Ubuntu 8.10 and 9.04).
Research on Web Search Behavior: How Online Query Data Inform Social Psychology.
Lai, Kaisheng; Lee, Yan Xin; Chen, Hao; Yu, Rongjun
2017-10-01
The widespread use of web searches in daily life has allowed researchers to study people's online social and psychological behavior. Using web search data has advantages in terms of data objectivity, ecological validity, temporal resolution, and unique application value. This review integrates existing studies on web search data that have explored topics including sexual behavior, suicidal behavior, mental health, social prejudice, social inequality, public responses to policies, and other psychosocial issues. These studies are categorized as descriptive, correlational, inferential, predictive, and policy evaluation research. The integration of theory-based hypothesis testing in future web search research will result in even stronger contributions to social psychology.
Assessing the Integrity of Web Sites Providing Data and Information on Corporate Behavior
ERIC Educational Resources Information Center
McLaughlin, Josetta; Pavelka, Deborah; McLaughlin, Gerald
2005-01-01
A significant trend in higher education evolving from the wide accessibility to the Internet is the availability of an ever-increasing supply of data on Web sites for use by professors, students, and researchers. As this usage by a wider variety of users grows, the ability to judge the integrity of the data, the related findings, and the Web site…
Bringing Web 2.0 to bioinformatics.
Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P
2009-01-01
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
1991-01-01
patterns and water circulations, normal water fluctuations, salinity, threatened and endangered species, fish or other aquatic organisms in the food web ... velocity and anchoring of sediments; habitat for aquatic organisms in the food web; habitat for resident and transient wildlife species; and ... The remaining anomalous areas, RA-2 and RA-5 (3 to 5+ feet deep), coincide with the location of former possible landfilling activities. However, it
Deep-Sea Microbes: Linking Biogeochemical Rates to -Omics Approaches
NASA Astrophysics Data System (ADS)
Herndl, G. J.; Sintes, E.; Bayer, B.; Bergauer, K.; Amano, C.; Hansman, R.; Garcia, J.; Reinthaler, T.
2016-02-01
Over the past decade substantial progress has been made in determining deep ocean microbial activity and resolving some of the enigmas in understanding the deep ocean carbon flux. Also, metagenomics approaches have shed light onto the dark ocean's microbes, but linking -omics approaches to biogeochemical rate measurements is generally rare in microbial oceanography and even more so for the deep ocean. In this presentation, we will show, by combining metagenomics, metaproteomics and biogeochemical rate measurements at the bulk and single-cell level, that deep-sea microbes exhibit characteristics of generalists with a large genome repertoire, versatile in utilizing substrates as revealed by metaproteomics. This is in striking contrast with the apparently rather uniform dissolved organic matter pool in the deep ocean. Combining the different -omics approaches with metabolic rate measurements, we will highlight some major inconsistencies and enigmas in our understanding of carbon cycling and microbial food web structure in the dark ocean.
Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences
Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi
2006-01-01
Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pilot project that illustrates how neuroscience databases can be expressed using the Web Ontology Language, which is a semantically-rich ontological language, as a common data representation language to facilitate complex cross-database queries. In this pilot project, an existing tool called “D2RQ” was used to translate two neuroscience databases (NeuronDB and CoCoDat) into OWL, and the resulting OWL ontologies were then merged. An OWL-based reasoner (Racer) was then used to provide a sophisticated query language (nRQL) to perform integrated queries across the two databases based on the merged ontology. This pilot project is one step toward exploring the use of semantic web technologies in the neurosciences. PMID:17238384
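A hedged sketch of the pilot's final step once the two database-derived ontologies exist: merge their graphs and run an integrated query. File names, the namespace and class names are assumptions; the project itself used D2RQ for translation and the Racer reasoner with nRQL rather than rdflib.

```python
from rdflib import Graph

merged = Graph()
merged.parse("neurondb.owl", format="xml")    # hypothetical OWL export of NeuronDB
merged.parse("cocodat.owl", format="xml")     # hypothetical OWL export of CoCoDat

query = """
PREFIX ex: <http://example.org/neuro#>
SELECT ?neuron ?property
WHERE { ?neuron a ex:Neuron ; ex:hasProperty ?property }
"""
for neuron, prop in merged.query(query):      # integrated cross-database query
    print(neuron, prop)
```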
A Semantic Sensor Web for Environmental Decision Support Applications
Gray, Alasdair J. G.; Sadler, Jason; Kit, Oles; Kyzirakos, Kostis; Karpathiotakis, Manos; Calbimonte, Jean-Paul; Page, Kevin; García-Castro, Raúl; Frazer, Alex; Galpin, Ixent; Fernandes, Alvaro A. A.; Paton, Norman W.; Corcho, Oscar; Koubarakis, Manolis; De Roure, David; Martinez, Kirk; Gómez-Pérez, Asunción
2011-01-01
Sensing devices are increasingly being deployed to monitor the physical world around us. One class of application for which sensor data is pertinent is environmental decision support systems, e.g., flood emergency response. For these applications, the sensor readings need to be put in context by integrating them with other sources of data about the surrounding environment. Traditional systems for predicting and detecting floods rely on methods that need significant human resources. In this paper we describe a semantic sensor web architecture for integrating multiple heterogeneous datasets, including live and historic sensor data, databases, and map layers. The architecture provides mechanisms for discovering datasets, defining integrated views over them, continuously receiving data in real-time, and visualising on screen and interacting with the data. Our approach makes extensive use of web service standards for querying and accessing data, and semantic technologies to discover and integrate datasets. We demonstrate the use of our semantic sensor web architecture in the context of a flood response planning web application that uses data from sensor networks monitoring the sea-state around the coast of England. PMID:22164110
Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study
Hosseinyalamdary, Siavash
2018-01-01
Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.
Hosseinyalamdary, Siavash
2018-04-24
Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.
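A hedged sketch of the idea described above: a standard Kalman filter for GNSS/IMU fusion with an extra modelling hook in which a learned model removes IMU error before the prediction step. The error model here is a trivial placeholder, not the paper's learned filter, and the measurement values are made up.

```python
import numpy as np

def learned_imu_correction(accel):
    return accel - 0.05                       # placeholder for a learned bias model

x = np.array([0.0, 0.0])                      # state: position, velocity
P = np.eye(2)
F = np.array([[1.0, 1.0], [0.0, 1.0]])        # constant-velocity transition (dt = 1 s)
H = np.array([[1.0, 0.0]])                    # GNSS observes position only
Q, R = 0.01 * np.eye(2), np.array([[4.0]])

for gnss_pos, imu_accel in [(1.1, 0.15), (2.3, 0.12), (3.2, 0.18)]:
    u = learned_imu_correction(imu_accel)     # modelling step added to the filter
    x = F @ x + np.array([0.5 * u, u])        # predict with corrected IMU input
    P = F @ P @ F.T + Q
    y = gnss_pos - H @ x                      # update with GNSS measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    print(np.round(x, 3))
```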
Monitor and Control of the Deep-Space network via Secure Web
NASA Technical Reports Server (NTRS)
Lamarra, N.
1997-01-01
(Viewgraph presentation.) NASA lead center for robotic space exploration; operating division of Caltech/Jet Propulsion Laboratory. Current missions: Voyager, Galileo, Pathfinder, Global Surveyor. Upcoming missions: Cassini, Mars and New Millennium.
75 FR 55617 - National Science Board; Sunshine Act Meetings Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-13
... to the National Science Board Web site http://www.nsf.gov/nsb for additional information and schedule... of Deep Underground Science and Engineering Laboratory (DUSEL) on South Dakota Graduate Education in...
ComplexContact: a web server for inter-protein contact prediction using deep learning.
Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo
2018-05-22
ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complexes and interact at the residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSA), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from the paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.
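A hedged sketch of the MSA-pairing step described above: aligned sequences for protein A and protein B are matched and concatenated so that cross-interface co-evolution signals can be computed. Real pipelines pair by genomic distance or reciprocal best hits; this toy version pairs by a species tag alone, and the sequences are made up.

```python
def pair_msas(msa_a, msa_b):
    """msa_a / msa_b: dicts mapping species -> aligned sequence."""
    paired = {}
    for species in set(msa_a) & set(msa_b):
        paired[species] = msa_a[species] + msa_b[species]   # concatenate the rows
    return paired

msa_a = {"E.coli": "MK-LV", "B.subtilis": "MKALV"}
msa_b = {"E.coli": "GH-TY", "H.sapiens": "GHQTY"}
print(pair_msas(msa_a, msa_b))   # only the shared species is paired
```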
Lepori, Fabio; Roberts, James J.
2017-01-01
We used monitoring data from Lake Lugano (Switzerland and Italy) to assess key ecosystem responses to three decades of nutrient management (1983–2014). We investigated whether reductions in external phosphorus loadings (Lext) caused declines in lake phosphorus concentrations (P) and phytoplankton biomass (Chl a), as assumed by the predictive models that underpinned the management plan. Additionally, we examined the hypothesis that deep lakes respond quickly to Lext reductions. During the study period, nutrient management reduced Lext by approximately a half. However, the effects of such reduction on P and Chl a were complex. Far from the scenarios predicted by classic nutrient-management approaches, the responses of P and Chl a did not only reflect changes in Lext, but also variation in internal P loadings (Lint) and food-web structure. In turn, Lint varied depending on basin morphometry and climatic effects, whereas food-web structure varied due to apparently stochastic events of colonization and near-extinction of key species. Our results highlight the complexity of the trajectory of deep-lake ecosystems undergoing nutrient management. From an applied standpoint, they also suggest that [i] the recovery of warm monomictic lakes may be slower than expected due to the development of Lint, and that [ii] classic P and Chl a models based on Lext may be useful in nutrient management programs only if their predictions are used as starting points within adaptive frameworks.
Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems
ERIC Educational Resources Information Center
Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul
2009-01-01
Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…
Hapgood, Jenny; Smucker Barnwell, Sara; McAfee, Tim
2008-01-01
Background Phone-based tobacco cessation programs have been proven effective and widely adopted. Web-based solutions exist; however, the evidence base is not yet well established. Many cessation treatments are commercially available, but few integrate the phone and Web for delivery and no published studies exist for integrated programs. Objective This paper describes a comprehensive integrated phone/Web tobacco cessation program and the characteristics, experience, and outcomes of smokers enrolled in this program from a real-world evaluation. Methods We tracked program utilization (calls completed, Web log-ins), quit status, satisfaction, and demographics of 11,143 participants who enrolled in the Free & Clear Quit For Life Program between May 2006 and October 2007. All participants received up to five proactive phone counseling sessions with Quit Coaches, unlimited access to an interactive website, up to 20 tailored emails, printed Quit Guides, and cessation medication information. The program was designed to encourage use of all program components rather than asking participants to choose which components they wanted to use while quitting. Results We found that participants tended to use phone services more than Web services. On average, participants completed 2-2.5 counseling calls and logged in to the online program 1-2 times. Women were more adherent to the overall program; women utilized Web and phone services significantly (P = .003) more than men. Older smokers (> 26 years) and moderate smokers (15-20 cigarettes/day) utilized services more (P < .001) than younger (< 26 years) and light or heavy smokers. Satisfaction with services was high (92% to 95%) and varied somewhat with Web utilization. Thirty-day quit rates at the 6-month follow-up were 41% using responder analysis and 21% using intent-to-treat analysis. Web utilization was significantly associated with increased call completion and tobacco abstinence rates at the 6-month follow-up evaluation. Conclusions This paper expands our understanding of a real-world treatment program combining two mediums, phone and Web. Greater adherence to the program, as defined by using both the phone and Web components, is associated with higher quit rates. This study has implications for reaching and treating tobacco users with an integrated phone/Web program and offers evidence regarding the effectiveness of integrated cessation programs. PMID:19017583
Two Stage Data Augmentation for Low Resourced Speech Recognition (Author’s Manuscript)
2016-09-12
Keywords: speech recognition, deep neural networks, data augmentation. [Fragmentary extract from the introduction and reference list; recoverable citations include "Enhancing low resource keyword spotting with automatically retrieved web documents" (Interspeech, 2015, pp. 839-843) and "Feature learning in deep neural networks - a study on speech recognition tasks" (International Conference on Learning Representations).]
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-04
... 385.2001(a)(1)(iii) and the instructions on the Commission's Web site http://www.ferc.gov/docs-filing... conduit equipped with a 7-foot-high, 7-foot-wide gate; (e) a 16-foot-wide, 4-foot-deep, 200-foot-long... generating unit; (d) and a 14- foot-wide, 9-foot-deep, 100-foot-long tailrace (e) six 900-foot-long, 600 volt...
ERIC Educational Resources Information Center
Jang, Syh-Jong
2009-01-01
The purpose of the study was to investigate how web-based technology could be utilized and integrated with real-life scientific materials to stimulate the creativity of secondary school students. One certified science teacher and 31 seventh graders participated in this study. Several real-life experience science sessions integrated with online…
Inquiry of Pre-Service Teachers' Concern about Integrating Web 2.0 into Instruction
ERIC Educational Resources Information Center
Hao, Yungwei; Lee, Kathryn S.
2017-01-01
To promote technology integration, it is essential to address pre-service teacher (PST) concerns about facilitating technology-enhanced learning environments. This study adopted the Concerns-Based Adoption Model to investigate PST concern on Web 2.0 integration. Four hundred and eighty-nine PSTs in a teacher education university in north Taiwan…
ERIC Educational Resources Information Center
Wang, Ling
2013-01-01
Web 2.0 tools may be able to close the digital gap between teachers and students if teachers can integrate the tools and change their pedagogy. The TPACK framework has outlined the elements needed to effect change, and research on Web 2.0 tools shows its potential as a change agent, but little research has looked at how the two interrelate. Using…
NASA Astrophysics Data System (ADS)
Giuliano, Jackie Alan
1998-11-01
This work presents a process for teaching environmental studies that is based on active engagement and participation with the world around us. In particular, the importance of recognizing our intimate connection to the natural world is stressed as an effective tool to learn about humans' role in the environment. Understanding our place in the natural world may be a pivotal awareness that must be developed if we are to heal the many wounds we are experiencing today. This work contains approaches to teaching that are based on critical thinking, problem solving, and nonlinear, non-patriarchal approaches to thinking, reasoning, and learning. With these tools, a learner is challenged to think and to understand diverse cultural, social, and intellectual perspectives and to perceive the natural world as an intimate and integral part of our lives. To develop this Deep Teaching Process principles were drawn from many elements including deep ecology, ecofeminism, despairwork, spiritual ecology, bioregionalism, critical thinking, movement therapy, and the author's own teaching experience with learners of all ages. The need for a deep teaching process is demonstrated through a discussion of a number of the environmental challenges we face today and how they affect a learner's perceptions. Two key items are vital to this process. First, 54 experiential learning experiences are presented that the author has developed or adapted to enhance the teaching of our relationship to the natural world. These experiences move the body and activate the creative impulses. Secondly, the author has developed workbooks for each class he has designed that provide foundational notes for each course. These workbooks insure that the student is present for the experience and not immersed in taking notes. The deep teaching process is a process to reawaken our senses. A reawakening of the senses and an intimate awareness of our connections to the natural world and the web of life may be the primary goal of any deep environmental studies educator.
Integrative and Deep Learning through a Learning Community: A Process View of Self
ERIC Educational Resources Information Center
Mahoney, Sandra; Schamber, Jon
2011-01-01
This study investigated deep learning produced in a community of general education courses. Student speeches on liberal education were analyzed for discovering a grounded theory of ideas about self. The study found that learning communities cultivate deep, integrative learning that makes the value of a liberal education relevant to students.…
AGU Launches Web Site for New Scientific Integrity and Professional Ethics Policy
NASA Astrophysics Data System (ADS)
Townsend, Randy
2013-03-01
AGU's Scientific Integrity and Professional Ethics policy, approved by the AGU Board of Directors and Council in December 2012, is now available online on a new Web site, http://ethics.agu.org. As the Web site states, the policy embodies a "set of guidelines for scientific integrity and professional ethics for the actions of the members and the governance of the Union in its internal activities; in its public persona; and most importantly, in the research and peer review processes of its scientific publications, its communications and outreach, and its scientific meetings."
46 CFR 154.440 - Allowable stress.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: (1) For tank web frames, stringers, or girders of carbon manganese steel or aluminum alloys, meet σB... in appendix A of this part. (c) Tank plating must meet the American Bureau of Shipping's deep tank...
46 CFR 154.440 - Allowable stress.
Code of Federal Regulations, 2013 CFR
2013-10-01
...: (1) For tank web frames, stringers, or girders of carbon manganese steel or aluminum alloys, meet σB... in appendix A of this part. (c) Tank plating must meet the American Bureau of Shipping's deep tank...
46 CFR 154.440 - Allowable stress.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: (1) For tank web frames, stringers, or girders of carbon manganese steel or aluminum alloys, meet σB... in appendix A of this part. (c) Tank plating must meet the American Bureau of Shipping's deep tank...
jORCA: easily integrating bioinformatics Web Services.
Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo
2010-02-15
Web services technology is becoming the option of choice to deploy bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness of this approach is that various Web Services differ in their definition and invocation protocols, as well as their communication and data formats, and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills, featuring double-click invocation of services in conjunction with advanced execution control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus a file-based browsing style and organization of favourite tools. The integration of bioinformatics Web Services is thereby made easier, supporting a wider range of users.
Devitt, Brian Meldan; Baker, Joseph F; Fitzgerald, Eilis; McCarthy, Conor
2010-01-01
A case of injury to the third web space of the right hand of a rugby player, as a result of buddy strapping with electrical insulating tape of the little and ring finger, is presented. A deep laceration of the web space and distal palmar fascia resulted, necessitating wound exploration and repair. This case highlights the danger of using electrical insulating tape as a means to buddy strap fingers. PMID:22736733
miRanalyzer: a microRNA detection and analysis tool for next-generation sequencing experiments.
Hackenberg, Michael; Sturm, Martin; Langenberger, David; Falcón-Pérez, Juan Manuel; Aransay, Ana M
2009-07-01
Next-generation sequencing now allows the sequencing of small RNA molecules and the estimation of their expression levels. Consequently, there will be a high demand for bioinformatics tools to cope with the several gigabytes of sequence data generated in each single deep-sequencing experiment. In this context, we developed miRanalyzer, a web server tool for the analysis of deep-sequencing experiments for small RNAs. The web server tool requires a simple input file containing a list of unique reads and their copy numbers (expression levels). Using these data, miRanalyzer (i) detects all known microRNA sequences annotated in miRBase, (ii) finds all perfect matches against other libraries of transcribed sequences and (iii) predicts new microRNAs. The prediction of new microRNAs is an especially important point as there are many species with very few known microRNAs. Therefore, we implemented a highly accurate machine learning algorithm for the prediction of new microRNAs that reaches AUC values of 97.9% and recall values of up to 75% on unseen data. The web tool summarizes all the described steps in a single output page, which provides a comprehensive overview of the analysis, adding links to more detailed output pages for each analysis module. miRanalyzer is available at http://web.bioinformatics.cicbiogune.es/microRNA/.
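The server's input is described as a list of unique reads with their copy numbers. A small sketch of producing such a file from a FASTA of raw small-RNA reads follows; the tab-separated "sequence<TAB>count" layout and the file names are assumptions, since the exact format expected by miRanalyzer is not specified here.

```python
from collections import Counter

def collapse_reads(fasta_path: str, out_path: str) -> None:
    """Collapse raw reads into unique sequences with copy numbers."""
    counts = Counter()
    with open(fasta_path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith(">"):   # skip FASTA headers
                counts[line] += 1
    with open(out_path, "w") as out:
        for seq, n in counts.most_common():
            out.write(f"{seq}\t{n}\n")              # assumed layout: sequence<TAB>copy number

# Hypothetical file names for illustration.
collapse_reads("small_rna_reads.fa", "unique_reads.txt")
```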
Web Instruction with the LBO Model.
ERIC Educational Resources Information Center
Agarwal, Rajshree; Day, A. Edward
2000-01-01
Presents a Web site that utilizes the Learning-by-Objective (LBO) model that integrates Internet tools for knowledge transmission, communication, and assessment of learning. Explains that the LBO model has been used in creating micro and macroeconomic course Web sites with WebCT software. (CMK)
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.
2011-01-01
Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web
Sigüenza, Álvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández
2012-01-01
Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound. PMID:22778643
Sharing human-generated observations by integrating HMI and the Semantic Sensor Web.
Sigüenza, Alvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández
2012-01-01
Current "Internet of Things" concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
Web Services Integration on the Fly
2008-12-01
[Table-of-contents excerpt] NetBeans 6.1 and Version Control: 1. NetBeans Integrated Development Environment (IDE); 2. Forward and Reverse Engineering; 3. Implementation using NetBeans; 4. Subversion (SVN) for Version Control in NetBeans. Protégé Authoring Tool for Semantic Web.
NASA Astrophysics Data System (ADS)
Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.
2017-12-01
Web based apps, web services and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency and reproducibility of modeling workflows and results. However, challenges still exist in real application of these capabilities and the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing system to enable model development and execution. The entire system comprised of the HydroShare app, HydroShare and HydroDS web services is open source and contributes to capability for web based modeling research.
AlzPharm: integration of neurodegeneration data using RDF.
Lam, Hugo Y K; Marenco, Luis; Clark, Tim; Gao, Yong; Kinoshita, June; Shepherd, Gordon; Miller, Perry; Wu, Elizabeth; Wong, Gwendolyn T; Liu, Nian; Crasto, Chiquito; Morse, Thomas; Stephens, Susie; Cheung, Kei-Hoi
2007-05-09
Neuroscientists often need to access a wide range of data sets distributed over the Internet. These data sets, however, are typically neither integrated nor interoperable, resulting in a barrier to answering complex neuroscience research questions. Domain ontologies can enable the querying of heterogeneous data sets, but they are not sufficient for neuroscience since the data of interest commonly span multiple research domains. To this end, e-Neuroscience seeks to provide an integrated platform for neuroscientists to discover new knowledge through seamless integration of the very diverse types of neuroscience data. Here we present a Semantic Web approach to building this e-Neuroscience framework by using the Resource Description Framework (RDF) and its vocabulary description language, RDF Schema (RDFS), as a standard data model to facilitate both representation and integration of the data. We have constructed a pilot ontology for BrainPharm (a subset of SenseLab) using RDFS and then converted a subset of the BrainPharm data into RDF according to the ontological structure. We have also integrated the converted BrainPharm data with existing RDF hypothesis and publication data from a pilot version of SWAN (Semantic Web Applications in Neuromedicine). Our implementation uses the RDF Data Model in Oracle Database 10g release 2 for data integration, query, and inference, while our Web interface allows users to query the data and retrieve the results in a convenient fashion. Accessing and integrating biomedical data which cuts across multiple disciplines will be increasingly indispensable and beneficial to neuroscience researchers. The Semantic Web approach we undertook has demonstrated a promising way to semantically integrate data sets created independently. It also shows how advanced queries and inferences can be performed over the integrated data, which are hard to achieve using traditional data integration approaches. Our pilot results suggest that our Semantic Web approach is suitable for realizing e-Neuroscience and generic enough to be applied in other biomedical fields.
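The core pattern here is merging independently produced RDF data sets and querying across them. As a generic illustration of that pattern (not the authors' Oracle-based implementation), the sketch below loads two RDF files into a single graph with the Python rdflib library and runs one SPARQL query over the merged data; the file names, namespace, and property names are hypothetical.

```python
from rdflib import Graph

g = Graph()
# Merge two independently created RDF data sets into one graph
# (file names are hypothetical placeholders).
g.parse("brainpharm_subset.rdf")
g.parse("swan_hypotheses.rdf")

# One SPARQL query over the merged data; the vocabulary below is
# illustrative, not the actual BrainPharm/SWAN ontologies.
query = """
PREFIX ex: <http://example.org/neuro#>
SELECT ?drug ?hypothesis
WHERE {
    ?drug       ex:targetsReceptor ?receptor .
    ?hypothesis ex:aboutReceptor   ?receptor .
}
"""
for drug, hypothesis in g.query(query):
    print(drug, hypothesis)
```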
AlzPharm: integration of neurodegeneration data using RDF
Lam, Hugo YK; Marenco, Luis; Clark, Tim; Gao, Yong; Kinoshita, June; Shepherd, Gordon; Miller, Perry; Wu, Elizabeth; Wong, Gwendolyn T; Liu, Nian; Crasto, Chiquito; Morse, Thomas; Stephens, Susie; Cheung, Kei-Hoi
2007-01-01
Background Neuroscientists often need to access a wide range of data sets distributed over the Internet. These data sets, however, are typically neither integrated nor interoperable, resulting in a barrier to answering complex neuroscience research questions. Domain ontologies can enable the querying heterogeneous data sets, but they are not sufficient for neuroscience since the data of interest commonly span multiple research domains. To this end, e-Neuroscience seeks to provide an integrated platform for neuroscientists to discover new knowledge through seamless integration of the very diverse types of neuroscience data. Here we present a Semantic Web approach to building this e-Neuroscience framework by using the Resource Description Framework (RDF) and its vocabulary description language, RDF Schema (RDFS), as a standard data model to facilitate both representation and integration of the data. Results We have constructed a pilot ontology for BrainPharm (a subset of SenseLab) using RDFS and then converted a subset of the BrainPharm data into RDF according to the ontological structure. We have also integrated the converted BrainPharm data with existing RDF hypothesis and publication data from a pilot version of SWAN (Semantic Web Applications in Neuromedicine). Our implementation uses the RDF Data Model in Oracle Database 10g release 2 for data integration, query, and inference, while our Web interface allows users to query the data and retrieve the results in a convenient fashion. Conclusion Accessing and integrating biomedical data which cuts across multiple disciplines will be increasingly indispensable and beneficial to neuroscience researchers. The Semantic Web approach we undertook has demonstrated a promising way to semantically integrate data sets created independently. It also shows how advanced queries and inferences can be performed over the integrated data, which are hard to achieve using traditional data integration approaches. Our pilot results suggest that our Semantic Web approach is suitable for realizing e-Neuroscience and generic enough to be applied in other biomedical fields. PMID:17493287
Effects of web-based electrocardiography simulation on strategies and learning styles.
Granero-Molina, José; Fernández-Sola, Cayetano; López-Domene, Esperanza; Hernández-Padilla, José Manuel; Preto, Leonel São Romão; Castro-Sánchez, Adelaida María
2015-08-01
To identify the association between the use of web simulation electrocardiography and the learning approaches, strategies and styles of nursing degree students. A descriptive and correlational design with a one-group pretest-posttest measurement was used. The study sample included 246 students in a Basic and Advanced Cardiac Life Support nursing class of nursing degree. No significant differences between genders were found in any dimension of learning styles and approaches to learning. After the introduction of web simulation electrocardiography, significant differences were found in some item scores of learning styles: theorist (p < 0.040), pragmatic (p < 0.010) and approaches to learning. The use of a web electrocardiogram (ECG) simulation is associated with the development of active and reflexive learning styles, improving motivation and a deep approach in nursing students.
Applying Semantic Web Services and Wireless Sensor Networks for System Integration
NASA Astrophysics Data System (ADS)
Berkenbrock, Gian Ricardo; Hirata, Celso Massaki; de Oliveira Júnior, Frederico Guilherme Álvares; de Oliveira, José Maria Parente
In environments like factories, buildings, and homes, automation services tend to change often during their lifetime. Changes concern business rules, process optimization, cost reduction, and so on. It is important to provide a smooth and straightforward way to deal with these changes so that they can be handled in a faster and lower-cost manner. Some prominent solutions use the flexibility of Wireless Sensor Networks and the meaningful description of Semantic Web Services to provide service integration. In this work, we give an overview of current solutions for machinery integration that combine both technologies, as well as a discussion of some perspectives and open issues when applying Wireless Sensor Networks and Semantic Web Services to automation service integration.
Graph-Based Semantic Web Service Composition for Healthcare Data Integration.
Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena
2017-01-01
Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
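The composition approach described above builds a dependency graph of services (edges where one service's outputs can satisfy another's inputs) and then searches it to answer a query. The sketch below shows a much-simplified version of that idea as forward chaining over "known concepts"; the service descriptions and the matching rule (exact concept-name equality) are simplifying assumptions, not the paper's semantic matchmaking rules.

```python
from typing import Dict, List, Set, Tuple

def compose(services: Dict[str, Tuple[Set[str], Set[str]]],
            provided: Set[str], goal: Set[str]) -> List[str]:
    """Forward-chaining composition: invoke any service whose inputs are
    already satisfied until the goal concepts are produced (sketch only)."""
    known, plan = set(provided), []
    progress = True
    while progress:
        if goal <= known:
            return plan
        progress = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and inputs <= known and not outputs <= known:
                plan.append(name)
                known |= outputs
                progress = True
                break   # re-check the goal after every invocation
    return []           # goal not reachable from the provided concepts

# Hypothetical healthcare services; concept names are illustrative only.
services = {
    "PatientLookup": ({"citizen_id"}, {"patient_id"}),
    "LabResults":    ({"patient_id"}, {"lab_report"}),
    "Imaging":       ({"patient_id"}, {"radiology_report"}),
}
print(compose(services, {"citizen_id"}, {"lab_report"}))  # ['PatientLookup', 'LabResults']
```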
Graph-Based Semantic Web Service Composition for Healthcare Data Integration
2017-01-01
Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602
Leveraging Web 2.0 in the Redesign of a Graduate-Level Technology Integration Course
ERIC Educational Resources Information Center
Oliver, Kevin
2007-01-01
In the emerging era of the "read-write" web, students can not only research and collect information from existing web resources, but also collaborate and create new information on the web in a surprising number of ways. Web 2.0 is an umbrella term for many individual tools that have been created with web collaboration, sharing, and/or new…
Semantic Web meets Integrative Biology: a survey.
Chen, Huajun; Yu, Tong; Chen, Jake Y
2013-01-01
Integrative Biology (IB) uses experimental or computational quantitative technologies to characterize biological systems at the molecular, cellular, tissue and population levels. IB typically involves the integration of the data, knowledge and capabilities across disciplinary boundaries in order to solve complex problems. We identify a series of bioinformatics problems posed by interdisciplinary integration: (i) data integration that interconnects structured data across related biomedical domains; (ii) ontology integration that brings the jargon, terminologies and taxonomies from various disciplines into a unified network of ontologies; (iii) knowledge integration that integrates disparate knowledge elements from multiple sources; (iv) service integration that builds applications out of services provided by different vendors. We argue that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies. The SW enables scientists to share content beyond the boundaries of applications and websites, resulting in a web of data that is meaningful and understandable to computers. In this review, we provide insight into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis. We present a rich set of case studies in systems biology, integrative neuroscience, bio-pharmaceutics and translational medicine, to highlight the technical features and benefits of SW applications in IB.
49 CFR 575.106 - Tire fuel efficiency consumer information program.
Code of Federal Regulations, 2010 CFR
2010-10-01
... tires, deep tread, winter-type snow tires, space-saver or temporary use spare tires, tires with nominal... Web site. (ii) Requirements for tire retailers. Subject to paragraph (e)(1)(iii) of this section, each...
49 CFR 575.106 - Tire fuel efficiency consumer information program.
Code of Federal Regulations, 2011 CFR
2011-10-01
... tires, deep tread, winter-type snow tires, space-saver or temporary use spare tires, tires with nominal... Web site. (ii) Requirements for tire retailers. Subject to paragraph (e)(1)(iii) of this section, each...
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Schrock, Mitchell; Baldwin, John R.; Borden, Charles S.
2010-01-01
The Ground Resource Allocation and Planning Environment (GRAPE 1.0) is a Web-based, collaborative team environment based on the Microsoft SharePoint platform, which provides Deep Space Network (DSN) resource planners tools and services for sharing information and performing analysis.
An Integrated Web-Based Assessment Tool for Assessing Pesticide Exposure and Risks
Background/Question/Methods We have created an integrated web-based tool designed to estimate exposure doses and ecological risks under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Endangered Species Act. This involved combining a number of disparat...
Oxygen, ecology, and the Cambrian radiation of animals
NASA Astrophysics Data System (ADS)
Sperling, Erik A.; Frieder, Christina A.; Raman, Akkur V.; Girguis, Peter R.; Levin, Lisa A.; Knoll, Andrew H.
2013-08-01
The Proterozoic-Cambrian transition records the appearance of essentially all animal body plans (phyla), yet to date no single hypothesis adequately explains both the timing of the event and the evident increase in diversity and disparity. Ecological triggers focused on escalatory predator-prey "arms races" can explain the evolutionary pattern but not its timing, whereas environmental triggers, particularly ocean/atmosphere oxygenation, do the reverse. Using modern oxygen minimum zones as an analog for Proterozoic oceans, we explore the effect of low oxygen levels on the feeding ecology of polychaetes, the dominant macrofaunal animals in deep-sea sediments. Here we show that low oxygen is clearly linked to low proportions of carnivores in a community and low diversity of carnivorous taxa, whereas higher oxygen levels support more complex food webs. The recognition of a physiological control on carnivory therefore links environmental triggers and ecological drivers, providing an integrated explanation for both the pattern and timing of Cambrian animal radiation.
Deep Learning Improves Antimicrobial Peptide Recognition.
Veltri, Daniel; Kamath, Uday; Shehu, Amarda
2018-03-24
Bacterial resistance to antibiotics is a growing concern. Antimicrobial peptides (AMPs), natural components of innate immunity, are popular targets for developing new drugs. Machine learning methods are now commonly adopted by wet-laboratory researchers to screen for promising candidates. In this work we utilize deep learning to recognize antimicrobial activity. We propose a neural network model with convolutional and recurrent layers that leverage primary sequence composition. Results show that the proposed model outperforms state-of-the-art classification models on a comprehensive data set. By utilizing the embedding weights, we also present a reduced-alphabet representation and show that reasonable AMP recognition can be maintained using nine amino-acid types. Models and data sets are made freely available through the Antimicrobial Peptide Scanner vr.2 web server at www.ampscanner.com. Contact: amarda@gmu.edu for general inquiries and dan.veltri@gmail.com for web server information. Supplementary data are available at Bioinformatics online.
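The described architecture combines an amino-acid embedding with convolutional and recurrent layers over the primary sequence. A minimal Keras sketch of such an architecture follows; the layer sizes, sequence length, and alphabet size are illustrative placeholders and do not reproduce the published network or its trained weights.

```python
import tensorflow as tf

MAX_LEN, VOCAB = 200, 21   # assumed padded peptide length and amino-acid alphabet size

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB, output_dim=16),   # learn residue embeddings
    tf.keras.layers.Conv1D(filters=64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(units=64),                               # recurrent layer over motifs
    tf.keras.layers.Dense(1, activation="sigmoid"),               # P(sequence is antimicrobial)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# Training would use integer-encoded peptides padded to MAX_LEN with binary AMP labels.
```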
A case study of data integration for aquatic resources using semantic web technologies
Gordon, Janice M.; Chkhenkeli, Nina; Govoni, David L.; Lightsom, Frances L.; Ostroff, Andrea C.; Schweitzer, Peter N.; Thongsavanh, Phethala; Varanka, Dalia E.; Zednik, Stephan
2015-01-01
Use cases, information modeling, and linked data techniques are Semantic Web technologies used to develop a prototype system that integrates scientific observations from four independent USGS and cooperator data systems. The techniques were tested with a use case goal of creating a data set for use in exploring potential relationships among freshwater fish populations and environmental factors. The resulting prototype extracts data from the BioData Retrieval System, the Multistate Aquatic Resource Information System, the National Geochemical Survey, and the National Hydrography Dataset. A prototype user interface allows a scientist to select observations from these data systems and combine them into a single data set in RDF format that includes explicitly defined relationships and data definitions. The project was funded by the USGS Community for Data Integration and undertaken by the Community for Data Integration Semantic Web Working Group in order to demonstrate use of Semantic Web technologies by scientists. This allows scientists to simultaneously explore data that are available in multiple, disparate systems beyond those they traditionally have used.
Web 2.0 and Marketing Education: Explanations and Experiential Applications
ERIC Educational Resources Information Center
Granitz, Neil; Koernig, Stephen K.
2011-01-01
Although both experiential learning and Web 2.0 tools focus on creativity, sharing, and collaboration, sparse research has been published integrating a Web 2.0 paradigm with experiential learning in marketing. In this article, Web 2.0 concepts are explained. Web 2.0 is then positioned as a philosophy that can advance experiential learning through…
Bell, James B; Woulds, Clare; Oevelen, Dick van
2017-09-20
Hydrothermal vents are highly dynamic ecosystems and are unusually energy-rich environments in the deep sea. In situ hydrothermal-based productivity combined with sinking photosynthetic organic matter in a soft-sediment setting creates geochemically diverse environments, which remain poorly studied. Here, we use a comprehensive set of new and existing field observations to develop a quantitative ecosystem model of a deep-sea chemosynthetic ecosystem from the most southerly hydrothermal vent system known. We find evidence of chemosynthetic production supplementing the metazoan food web both at vent sites and elsewhere in the Bransfield Strait. Endosymbiont-bearing fauna were very important in supporting the transfer of chemosynthetic carbon into the food web, particularly to higher trophic levels. Chemosynthetic production occurred at all sites to varying degrees but was generally only a small component of the total organic matter inputs to the food web, even in the most hydrothermally active areas, owing in part to a low and patchy density of vent-endemic fauna. Differences in the relative abundance of faunal functional groups, resulting from environmental variability, were clear drivers of differences in biogeochemical cycling and resulted in substantially different carbon processing patterns between habitats.
Biodiversity maintenance in food webs with regulatory environmental feedbacks.
Bagdassarian, Carey K; Dunham, Amy E; Brown, Christopher G; Rauscher, Daniel
2007-04-21
Although the food web is one of the most fundamental and oldest concepts in ecology, elucidating the strategies and structures by which natural communities of species persist remains a challenge to empirical and theoretical ecologists. We show that simple regulatory feedbacks between autotrophs and their environment when embedded within complex and realistic food-web models enhance biodiversity. The food webs are generated through the niche-model algorithm and coupled with predator-prey dynamics, with and without environmental feedbacks at the autotroph level. With high probability and especially at lower, more realistic connectance levels, regulatory environmental feedbacks result in fewer species extinctions, that is, in increased species persistence. These same feedback couplings, however, also sensitize food webs to environmental stresses leading to abrupt collapses in biodiversity with increased forcing. Feedback interactions between species and their material environments anchor food-web persistence, adding another dimension to biodiversity conservation. We suggest that the regulatory features of two natural systems, deep-sea tubeworms with their microbial consortia and a soil ecosystem manifesting adaptive homeostatic changes, can be embedded within niche-model food-web dynamics.
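The food webs in this study are generated with the niche-model algorithm before being coupled to predator-prey dynamics. The sketch below implements the commonly used niche-model construction (niche value, feeding range drawn from a beta distribution calibrated to the target connectance, and a range centre); the parameterization shown is the standard one and is an assumption here, not a description of the authors' exact code.

```python
import numpy as np

def niche_model(S: int = 30, C: float = 0.10, rng=None):
    """Generate an S-species food-web adjacency matrix with the niche model."""
    rng = rng or np.random.default_rng()
    n = rng.uniform(0.0, 1.0, S)                  # niche values
    beta = 1.0 / (2.0 * C) - 1.0                  # sets the expected connectance C
    r = n * rng.beta(1.0, beta, S)                # feeding-range widths
    c = rng.uniform(r / 2.0, n)                   # feeding-range centres
    low, high = c - r / 2.0, c + r / 2.0
    # A[i, j] = 1 if species i eats species j (n[j] falls inside i's range).
    A = ((n >= low[:, None]) & (n <= high[:, None])).astype(int)
    return A, n

A, n = niche_model()
print(A.sum() / A.size)   # realized connectance, roughly the target C
```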
Creating a course-based web site in a university environment
NASA Astrophysics Data System (ADS)
Robin, Bernard R.; Mcneil, Sara G.
1997-06-01
The delivery of educational materials is undergoing a remarkable change from the traditional lecture method to dissemination of courses via the World Wide Web. This paradigm shift from a paper-based structure to an electronic one has profound implications for university faculty. Students are enrolling in classes with the expectation of using technology and logging on to the Internet, and professors are realizing that the potential of the Web can have a significant impact on classroom activities. An effective method of integrating electronic technologies into teaching and learning is to publish classroom materials on the World Wide Web. Already, many faculty members are creating their own home pages and Web sites for courses that include syllabi, handouts, and student work. Additionally, educators are finding value in adding hypertext links to a wide variety of related Web resources from online research and electronic journals to government and commercial sites. A number of issues must be considered when developing course-based Web sites. These include meeting the needs of a target audience, designing effective instructional materials, and integrating graphics and other multimedia components. There are also numerous technical issues that must be addressed in developing, uploading and maintaining HTML documents. This article presents a model for a university faculty who want to begin using the Web in their teaching and is based on the experiences of two College of Education professors who are using the Web as an integral part of their graduate courses.
Testing Web Applications with Mutation Analysis
ERIC Educational Resources Information Center
Praphamontripong, Upsorn
2017-01-01
Web application software uses new technologies that have novel methods for integration and state maintenance that amount to new control flow mechanisms and new variables scoping. While modern web development technologies enhance the capabilities of web applications, they introduce challenges that current testing techniques do not adequately test…
SSWAP: A Simple Semantic Web Architecture and Protocol for Semantic Web Services
USDA-ARS?s Scientific Manuscript database
SSWAP (Simple Semantic Web Architecture and Protocol) is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP is the driving technology behind the Virtual Plant Information Network, an NSF-funded semantic w...
Managing the Web-Enhanced Geographic Information Service.
ERIC Educational Resources Information Center
Stephens, Denise
1997-01-01
Examines key management issues involved in delivering geographic information services on the World Wide Web, using the Geographic Information Center (GIC) program at the University of Virginia Library as a reference. Highlights include integrating the Web into services; building collections for Web delivery; and evaluating spatial information…
Carbon flows in the benthic food web at the deep-sea observatory HAUSGARTEN (Fram Strait)
NASA Astrophysics Data System (ADS)
van Oevelen, Dick; Bergmann, Melanie; Soetaert, Karline; Bauerfeind, Eduard; Hasemann, Christiane; Klages, Michael; Schewe, Ingo; Soltwedel, Thomas; Budaeva, Nataliya E.
2011-11-01
The HAUSGARTEN observatory is located in the eastern Fram Strait (Arctic Ocean) and used as long-term monitoring site to follow changes in the Arctic benthic ecosystem. Linear inverse modelling was applied to decipher carbon flows among the compartments of the benthic food web at the central HAUSGARTEN station (2500 m) based on an empirical data set consisting of data on biomass, prokaryote production, total carbon deposition and community respiration. The model resolved 99 carbon flows among 4 abiotic and 10 biotic compartments, ranging from prokaryotes up to megafauna. Total carbon input was 3.78±0.31 mmol C m -2 d -1, which is a comparatively small fraction of total primary production in the area. The community respiration of 3.26±0.20 mmol C m -2 d -1 is dominated by prokaryotes (93%) and has lower contributions from surface-deposit feeding macro- (1.7%) and suspension feeding megafauna (1.9%), whereas contributions from nematode and other macro- and megabenthic compartments were limited to <1%. The high prokaryotic contribution to carbon processing suggests that functioning of the benthic food web at the central HAUSGARTEN station is comparable to abyssal plain sediments that are characterised by strong energy limitation. Faunal diet compositions suggest that labile detritus is important for deposit-feeding nematodes (24% of their diet) and surface-deposit feeding macrofauna (˜44%), but that semi-labile detritus is more important in the diets of deposit-feeding macro- and megafauna. Dependency indices on these food sources were also calculated as these integrate direct (i.e. direct grazing and predator-prey interactions) and indirect (i.e. longer loops in the food web) pathways in the food web. Projected sea-ice retreats for the Arctic Ocean typically anticipate a decrease in the labile detritus flux to the already food-limited benthic food web. The dependency indices indicate that faunal compartments depend similarly on labile and semi-labile detritus, which suggests that the benthic biota may be more sensitive to changes in labile detritus inputs than when assessed from diet composition alone. Species-specific responses to different types of labile detritus inputs, e.g. pelagic algae versus sympagic algae, however, are presently unknown and are needed to assess the vulnerability of individual components of the benthic food web.
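Linear inverse food-web models of this kind express mass balances and field measurements as linear constraints on the unknown carbon flows and then solve for flow values consistent with them. The toy sketch below shows that setup with scipy for a three-flow, one-compartment example; the flows, equations, and numbers are invented for illustration and are not the HAUSGARTEN model.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Unknown flows (mmol C m-2 d-1), invented for illustration:
# f0: detritus deposition -> prokaryotes
# f1: prokaryotes -> respiration
# f2: prokaryotes -> fauna
A = np.array([
    [1.0,  0.0,  0.0],   # measured deposition:     f0           = 3.78
    [1.0, -1.0, -1.0],   # prokaryote mass balance: f0 - f1 - f2 = 0
    [0.0,  1.0,  0.0],   # measured respiration:    f1           = 3.0
])
b = np.array([3.78, 0.0, 3.0])

# Non-negative flows, solved in a least-squares sense.
sol = lsq_linear(A, b, bounds=(0.0, np.inf))
print(sol.x)   # e.g. approximately [3.78, 3.0, 0.78]
```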
Project CONVERGE: Impacts of local oceanographic processes on Adélie penguin foraging ecology
NASA Astrophysics Data System (ADS)
Kohut, J. T.; Bernard, K. S.; Fraser, W.; Oliver, M. J.; Statscewich, H.; Patterson-Fraser, D.; Winsor, P.; Cimino, M. A.; Miles, T. N.
2016-02-01
During the austral summer of 2014-2015, project CONVERGE deployed a multi-platform network to sample the Adélie penguin foraging hotspot associated with Palmer Deep Canyon along the Western Antarctic Peninsula. The focus of CONVERGE was to assess the impact of prey-concentrating ocean circulation dynamics on Adélie penguin foraging behavior. Food web links between phytoplankton and zooplankton abundance and penguin behavior were examined to better understand the within-season variability in Adélie foraging ecology. Since the High Frequency Radar (HFR) network installation in November 2014, the radial component current data from each of the three sites were combined to provide a high resolution (0.5 km) surface velocity maps. These hourly maps have revealed an incredibly dynamic system with strong fronts and frequent eddies extending across the Palmer Deep foraging area. A coordinated fleet of underwater gliders were used in concert with the HFR fields to sample the hydrography and phytoplankton distributions associated with convergent and divergent features. Three gliders mapped the along and across canyon variability of the hydrography, chlorophyll fluorescence and acoustic backscatter in the context of the observed surface currents and simultaneous penguin tracks. This presentation will highlight these synchronized measures of the food web in the context of the observed HFR fronts and eddies. The location and persistence of these features coupled with ecological sampling through the food web offer an unprecedented view of the Palmer Deep ecosystem. Specific examples will highlight how the vertical structure of the water column beneath the surface features stack the primary and secondary producers relative to observed penguin foraging behavior. The coupling from the physics through the food web as observed by our multi-platform network gives strong evidence for the critical role that distribution patterns of lower trophic levels have on Adélie foraging.
Duncan, R G; Saperia, D; Dulbandzhyan, R; Shabot, M M; Polaschek, J X; Jones, D T
2001-01-01
The advent of the World-Wide-Web protocols and client-server technology has made it easy to build low-cost, user-friendly, platform-independent graphical user interfaces to health information systems and to integrate the presentation of data from multiple systems. The authors describe a Web interface for a clinical data repository (CDR) that was moved from concept to production status in less than six months using a rapid prototyping approach, multi-disciplinary development team, and off-the-shelf hardware and software. The system has since been expanded to provide an integrated display of clinical data from nearly 20 disparate information systems.
Deep learning application: rubbish classification with aid of an android device
NASA Astrophysics Data System (ADS)
Liu, Sijiang; Jiang, Bo; Zhan, Jie
2017-06-01
Deep learning is currently a very active topic in pattern recognition and artificial intelligence research. Addressing the practical problem that people often do not know which category a given piece of rubbish belongs to, and building on the strong image classification ability of deep learning methods, we have designed a prototype system to help users classify rubbish. First, the CaffeNet model was adopted to train our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Second, an Android app was developed so that users can capture images of unclassified rubbish, upload them to the web server for analysis in the background, and retrieve the feedback, conveniently obtaining classification guidance on an Android device. Tests on our prototype system show that an image of a single type of rubbish in its original shape can be classified reliably, while an image containing several kinds of rubbish, or rubbish with an altered shape, may fail to yield a useful classification. Nevertheless, the system shows promise as an auxiliary tool for rubbish classification if the network training strategy is optimized further.
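The design puts the trained network behind a web server so that the Android client can upload an image and read back a predicted class. A hedged Flask sketch of that server side follows; the endpoint name, the `classify_image` helper, and the class labels are placeholders, and the actual system used a deployed Caffe model rather than the stub shown here.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

LABELS = ["recyclable", "kitchen waste", "hazardous", "other"]   # illustrative classes

def classify_image(image_bytes: bytes) -> str:
    """Placeholder for the trained classifier (the paper used a CaffeNet model)."""
    return LABELS[len(image_bytes) % len(LABELS)]   # stub, not a real prediction

@app.route("/classify", methods=["POST"])
def classify():
    # The Android client uploads the captured photo as multipart form data.
    image = request.files["image"].read()
    return jsonify({"label": classify_image(image)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```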
Integrated design, execution, and analysis of arrayed and pooled CRISPR genome-editing experiments.
Canver, Matthew C; Haeussler, Maximilian; Bauer, Daniel E; Orkin, Stuart H; Sanjana, Neville E; Shalem, Ophir; Yuan, Guo-Cheng; Zhang, Feng; Concordet, Jean-Paul; Pinello, Luca
2018-05-01
CRISPR (clustered regularly interspaced short palindromic repeats) genome-editing experiments offer enormous potential for the evaluation of genomic loci using arrayed single guide RNAs (sgRNAs) or pooled sgRNA libraries. Numerous computational tools are available to help design sgRNAs with optimal on-target efficiency and minimal off-target potential. In addition, computational tools have been developed to analyze deep-sequencing data resulting from genome-editing experiments. However, these tools are typically developed in isolation and oftentimes are not readily translatable into laboratory-based experiments. Here, we present a protocol that describes in detail both the computational and benchtop implementation of an arrayed and/or pooled CRISPR genome-editing experiment. This protocol provides instructions for sgRNA design with CRISPOR (computational tool for the design, evaluation, and cloning of sgRNA sequences), experimental implementation, and analysis of the resulting high-throughput sequencing data with CRISPResso (computational tool for analysis of genome-editing outcomes from deep-sequencing data). This protocol allows for design and execution of arrayed and pooled CRISPR experiments in 4-5 weeks by non-experts, as well as computational data analysis that can be performed in 1-2 d by both computational and noncomputational biologists alike using web-based and/or command-line versions.
Semantically-enabled sensor plug & play for the sensor web.
Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian
2011-01-01
Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the SensorWeb, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research.
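The plug-and-play infrastructure combines a publish/subscribe mechanism with semantic matchmaking so that a newly announced sensor is routed to the services whose requirements it satisfies. The deliberately simplified in-memory sketch below illustrates only that pattern; real SWE/ontology-based matchmaking reasons over formal sensor descriptions, whereas here a "match" is just a subset test over keyword sets.

```python
from typing import Callable, Dict, List, Set, Tuple

class SensorBroker:
    """Toy publish/subscribe broker with keyword-based 'matchmaking'."""

    def __init__(self) -> None:
        self._subs: List[Tuple[Set[str], Callable[[Dict], None]]] = []

    def subscribe(self, required: Set[str], callback: Callable[[Dict], None]) -> None:
        self._subs.append((required, callback))

    def publish_sensor(self, description: Dict) -> None:
        capabilities = set(description.get("observes", []))
        for required, callback in self._subs:
            if required <= capabilities:        # stand-in for semantic matchmaking
                callback(description)

broker = SensorBroker()
broker.subscribe({"hydrocarbon_concentration"},
                 lambda d: print("oil-spill service attached to", d["id"]))
broker.publish_sensor({"id": "buoy-7",
                       "observes": ["sea_surface_temperature",
                                    "hydrocarbon_concentration"]})
```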
Semantically-Enabled Sensor Plug & Play for the Sensor Web
Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian
2011-01-01
Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC’s Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the SensorWeb, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033
WikiHyperGlossary (WHG): an information literacy technology for chemistry documents.
Bauer, Michael A; Berleant, Daniel; Cornell, Andrew P; Belford, Robert E
2015-01-01
The WikiHyperGlossary is an information literacy technology that was created to enhance reading comprehension of documents by connecting them to socially generated multimedia definitions as well as semantically relevant data. The WikiHyperGlossary enhances reading comprehension by using the lexicon of a discipline to generate dynamic links in a document to external resources that can provide implicit information the document did not explicitly provide. Currently, the most common method to acquire additional information when reading a document is to access a search engine and browse the web. This may lead to skimming of multiple documents with the novice actually never returning to the original document of interest. The WikiHyperGlossary automatically brings information to the user within the current document they are reading, enhancing the potential for deeper document understanding. The WikiHyperGlossary allows users to submit a web URL or text to be processed against a chosen lexicon, returning the document with tagged terms. The selection of a tagged term results in the appearance of the WikiHyperGlossary Portlet containing a definition, and depending on the type of word, tabs to additional information and resources. Current types of content include multimedia enhanced definitions, ChemSpider query results, 3D molecular structures, and 2D editable structures connected to ChemSpider queries. Existing glossaries can be bulk uploaded, locked for editing and associated with multiple social generated definitions. The WikiHyperGlossary leverages both social and semantic web technologies to bring relevant information to a document. This can not only aid reading comprehension, but increases the users' ability to obtain additional information within the document. We have demonstrated a molecular editor enabled knowledge framework that can result in a semantic web inductive reasoning process, and integration of the WikiHyperGlossary into other software technologies, like the Jikitou Biomedical Question and Answer system. Although this work was developed in the chemical sciences and took advantage of open science resources and initiatives, the technology is extensible to other knowledge domains. Through the DeepLit (Deeper Literacy: Connecting Documents to Data and Discourse) startup, we seek to extend WikiHyperGlossary technologies to other knowledge domains, and integrate them into other knowledge acquisition workflows.
A web GIS based integrated flood assessment modeling tool for coastal urban watersheds
NASA Astrophysics Data System (ADS)
Kulkarni, A. T.; Mohanty, J.; Eldho, T. I.; Rao, E. P.; Mohan, B. K.
2014-03-01
Urban flooding has become an increasingly important issue in many parts of the world. In this study, an integrated flood assessment model (IFAM) is presented for coastal urban flood simulation. A web based GIS framework has been adopted to organize the spatial datasets for the study area considered and to run the model within this framework. The integrated flood model consists of a mass-balance-based 1-D overland flow model, a 1-D finite-element channel flow model based on the diffusion wave approximation, and a quasi-2-D raster flood inundation model based on the continuity equation. The model code is written in MATLAB and the application is integrated within a web GIS server product, Web Gram Server™ (WGS), developed at IIT Bombay, using Java, JSP and jQuery technologies. Its user interface is developed using OpenLayers and the attribute data are stored in the open-source MySQL DBMS. The model is integrated within WGS and is called via JavaScript. The application has been demonstrated for two coastal urban watersheds of Navi Mumbai, India. Simulated flood extents for the extreme rainfall event of 26 July 2005 in the two urban watersheds of Navi Mumbai city are presented and discussed. The study demonstrates the effectiveness of the flood simulation tool in a web GIS environment to facilitate data access and visualization of GIS datasets and simulation results.
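The quasi-2-D raster inundation component rests on the continuity (mass-conservation) principle. The Python fragment below is a simplified sketch of one such update step on a small invented grid; the actual IFAM code is written in MATLAB and is not reproduced here, and the relaxation factor and grid values are made up.

```python
# Simplified illustration of a continuity-based raster flood step (sketch only).
# Water above ground is moved toward lower water-surface neighbours while the
# total volume is conserved.
import numpy as np

def flood_step(ground, depth, relax=0.25):
    """One explicit continuity step on a 2-D raster (metres)."""
    wse = ground + depth                      # water surface elevation
    new_depth = depth.copy()
    rows, cols = ground.shape
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    head = wse[r, c] - wse[rr, cc]
                    if head > 0:
                        q = relax * min(depth[r, c], head / 2)   # transferable water
                        new_depth[r, c] -= q
                        new_depth[rr, cc] += q
    return new_depth

ground = np.array([[2.0, 1.5, 1.0], [2.0, 1.2, 0.8], [2.0, 1.0, 0.5]])
depth  = np.array([[0.5, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
print(flood_step(ground, depth).sum(), depth.sum())   # total volume is conserved
```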
Web-Mediated Knowledge Synthesis for Educators
ERIC Educational Resources Information Center
DeSchryver, Michael
2015-01-01
Ubiquitous and instant access to information on the Web is challenging what constitutes 21st century literacies. This article explores the notion of Web-mediated knowledge synthesis, an approach to integrating Web-based learning that may result in generative synthesis of ideas. This article describes the skills and strategies that may support…
Web-based Interspecies Correlation Estimation (Web-ICE) for Acute Toxicity: User Manual Version 3.1
Predictive toxicological models are integral to ecological risk assessment because data for most species are limited. Web-based Interspecies Correlation Estimation (Web-ICE) models are least square regressions that predict acute toxicity (LC50/LD50) of a chemical to a species, ge...
WEB-BASED INTERSPECIES CORRELATION ESTIMATION (WEB-ICE) FOR ACUTE TOXICITY: USER MANUAL V2
Predictive toxicological models are integral to environmental risk assessment where data for most species are limited. Web-based Interspecies Correlation Estimation (Web-ICE) models are least square regressions that predict acute toxicity (LC50/LD50) of a chemical to a species, ...
75 FR 61779 - National Science Board: Sunshine Act Meetings; Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-06
...:30 p.m. to 3 p.m. SUBJECT MATTER: Review of NSB Action Item (NSB/CPP-10-63) (Deep Underground Science... National Science Board Web site http://www.nsf.gov/nsb for additional information and schedule updates...
ERIC Educational Resources Information Center
Lee, Min-Hsien; Tsai, Chin-Chung
2010-01-01
Research in the area of educational technology has claimed that Web technology has driven online pedagogy such that teachers need to know how to use Web technology to assist their teaching. This study provides a framework for understanding teachers' Technological Pedagogical Content Knowledge-Web (TPCK-W), while integrating Web technology into…
NASA Astrophysics Data System (ADS)
Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.
2005-12-01
We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon on our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.
Description of the U.S. Geological Survey Geo Data Portal data integration framework
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Lucido, Jessica M.
2012-01-01
The U.S. Geological Survey has developed an open-standard data integration framework for working efficiently and effectively with large collections of climate and other geoscience data. A web interface accesses catalog datasets to find data services. Data resources can then be rendered for mapping and dataset metadata are derived directly from these web services. Algorithm configuration and information needed to retrieve data for processing are passed to a server where all large-volume data access and manipulation takes place. The data integration strategy described here was implemented by leveraging existing free and open source software. Details of the software used are omitted; rather, emphasis is placed on how open-standard web services and data encodings can be used in an architecture that integrates common geographic and atmospheric data.
Public-Sector Information Security: A Call to Action for Public-Sector CIOs
2002-10-01
scenarios. However, in a larger sense, it is a story for all public-sector CIOs, a story both prophetic and sobering. Deep in this story, however, there...information technology (IT), our way of life, and the values that lay deep in the core of our American culture. These values include rights to...defines roles and accountabilities. The Scope of the Problem Today there are 109.5 million Internet hosts on the World Wide Web . Five years ago there
The semantic web in translational medicine: current applications and future directions
Machado, Catia M.; Rebholz-Schuhmann, Dietrich; Freitas, Ana T.; Couto, Francisco M.
2015-01-01
Semantic web technologies offer an approach to data integration and sharing, even for resources developed independently or broadly distributed across the web. This approach is particularly suitable for scientific domains that profit from large amounts of data that reside in the public domain and that have to be exploited in combination. Translational medicine is such a domain, which in addition has to integrate private data from the clinical domain with proprietary data from the pharmaceutical domain. In this survey, we present the results of our analysis of translational medicine solutions that follow a semantic web approach. We assessed these solutions in terms of their target medical use case; the resources covered to achieve their objectives; and their use of existing semantic web resources for the purposes of data sharing, data interoperability and knowledge discovery. The semantic web technologies seem to fulfill their role in facilitating the integration and exploration of data from disparate sources, but it is also clear that simply using them is not enough. It is fundamental to reuse resources, to define mappings between resources, to share data and knowledge. All these aspects allow the instantiation of translational medicine at the semantic web-scale, thus resulting in a network of solutions that can share resources for a faster transfer of new scientific results into the clinical practice. The envisioned network of translational medicine solutions is on its way, but it still requires resolving the challenges of sharing protected data and of integrating semantic-driven technologies into the clinical practice. PMID:24197933
McLean, Michelle; Murrell, Kathy
2002-03-01
WebCT, front-end software for Internet-delivered material, became an integral part of a problem-based learning, student-centred curriculum introduced in January 2001 at the Nelson R. Mandela School of Medicine (South Africa). A template for six curriculum and two supplementary modules was developed. Organiser and Tool pages were added and files uploaded as each module progressed. This study provides feedback from students with regard to the value of WebCT in their curriculum, as well as discussing the value of WebCT for the delivery of digitized material (e.g., images, videos, PowerPoint presentations). In an anonymous survey following the completion of the first module, students, apparently irrespective of their level of computer literacy, responded positively to the communication facility between staff and students and amongst students, the resources and the URLs. Based on these preliminary responses, WebCT courses for all six modules were developed during 2001. With Faculty support, WebCT will probably be integrated into the rest of the MBChB programme. It will be particularly useful when students are off campus, undertaking electives and community service in the later years.
Integrating Streaming Media to Web-based Learning: A Modular Approach.
ERIC Educational Resources Information Center
Miltenoff, Plamen
2000-01-01
Explains streaming technology and discusses how to integrate it into Web-based instruction based on experiences at St. Cloud State University (Minnesota). Topics include a modular approach, including editing, copyright concerns, digitizing, maintenance, and continuing education needs; the role of the library; and how streaming can enhance…
Radiation tolerance of boron doped dendritic web silicon solar cells
NASA Technical Reports Server (NTRS)
Rohatgi, A.
1980-01-01
The potential of dendritic web silicon for giving radiation hard solar cells is compared with the float zone silicon material. Solar cells with n(+)-p-p(+) structure and approximately 15% (AM1) efficiency were subjected to 1 MeV electron irradiation. Radiation tolerance of web cell efficiency was found to be at least as good as that of the float zone silicon cell. A study of the annealing behavior of radiation-induced defects via deep level transient spectroscopy revealed that the E_v + 0.31 eV defect, attributed to a boron-oxygen-vacancy complex, is responsible for the reverse annealing of the irradiated cells in the temperature range of 150 to 350 °C.
Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke
2011-10-24
The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.
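The service pattern described above, consuming instances of an input OWL class and returning the same individuals decorated with output properties, can be sketched as follows. This is an illustrative fragment using rdflib with hypothetical class and property names, not code from the SADI codebases.

```python
# A sketch (not the SADI reference codebase) of the service pattern described:
# the service receives RDF describing instances of an input OWL class and
# returns the same individuals decorated with output properties. The namespace,
# class and property names are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/sadi-demo#")

INPUT_RDF = """
@prefix ex: <http://example.org/sadi-demo#> .
ex:P12345 a ex:ProteinRecord ; ex:hasSequence "MKTAYIAKQR" .
"""

def service(input_rdf: str) -> str:
    g = Graph()
    g.parse(data=input_rdf, format="turtle")
    for protein in g.subjects(RDF.type, EX.ProteinRecord):
        seq = str(g.value(protein, EX.hasSequence))
        # "Analysis": annotate the *same* URI with an output property.
        g.add((protein, EX.hasSequenceLength, Literal(len(seq))))
    return g.serialize(format="turtle")

print(service(INPUT_RDF))
```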
Bernard, André; Langille, Morgan; Hughes, Stephanie; Rose, Caren; Leddin, Desmond; Veldhuyzen van Zanten, Sander
2007-09-01
The Internet is a widely used information resource for patients with inflammatory bowel disease, but there is variation in the quality of Web sites that have patient information regarding Crohn's disease and ulcerative colitis. The purpose of the current study is to systematically evaluate the quality of these Web sites. The top 50 Web sites appearing in Google using the terms "Crohn's disease" or "ulcerative colitis" were included in the study. Web sites were evaluated using a (a) Quality Evaluation Instrument (QEI) that awarded Web sites points (0-107) for specific information on various aspects of inflammatory bowel disease, (b) a five-point Global Quality Score (GQS), (c) two reading grade level scores, and (d) a six-point integrity score. Thirty-four Web sites met the inclusion criteria, 16 Web sites were excluded because they were portals or non-IBD oriented. The median QEI score was 57 with five Web sites scoring higher than 75 points. The median Global Quality Score was 2.0 with five Web sites achieving scores of 4 or 5. The average reading grade level score was 11.2. The median integrity score was 3.0. There is marked variation in the quality of the Web sites containing information on Crohn's disease and ulcerative colitis. Many Web sites suffered from poor quality but there were five high-scoring Web sites.
Awareness and action for eliminating health care disparities in pain care: Web-based resources.
Fan, Ling; Thomas, Melissa; Deitrick, Ginna E; Polomano, Rosemary C
2008-01-01
Evidence shows that disparities in pain care exist, and this problem spans all health care settings. Health care disparities are complex, and stem from the health system climate, limitations imposed by laws and regulations, and discriminatory practices that are deep-seated in biases, stereotypes, and uncertainties surrounding communication and decision-making processes. A search of the Internet identified thousands of Web sites, documents, reports, and educational materials pertaining to health and pain disparities. Web sites for federal agencies, private foundations, and professional and consumer-oriented organizations provide useful information on disparities related to age, race, ethnicity, geography, socioeconomic status, and specific populations. The contents of 10 Web sites are examined for resources to assist health professionals and consumers in better understanding health and pain disparities and ways to overcome them in practice.
Extracting Databases from Dark Data with DeepDive.
Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng
2016-01-01
DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.
WebVR: an interactive web browser for virtual environments
NASA Astrophysics Data System (ADS)
Barsoum, Emad; Kuester, Falko
2005-03-01
The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D Web Browser (WebVR) that enables the user to search the Internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web-browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static as well as streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.
Beveridge, Allan
2006-01-01
The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.
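The first step of such a system, inspecting an existing HTML/CGI interface to learn which parameters a generated retrieval routine must supply, can be sketched in a few lines. The fragment below is illustrative only (the authors' system generates Java code, not Python), and the sample form is invented.

```python
# Rough sketch of interface scanning (not the authors' Java implementation):
# inspect an HTML/CGI search form and list the parameters that a generated
# retrieval routine would have to fill in. The sample form is invented.
from html.parser import HTMLParser

class FormScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []          # list of (action URL, [input names])

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.forms.append((attrs.get("action", ""), []))
        elif tag in ("input", "select", "textarea") and self.forms:
            name = attrs.get("name")
            if name:
                self.forms[-1][1].append(name)

SAMPLE_PAGE = """
<form action="/cgi-bin/blast">
  <input name="sequence"> <select name="database"></select>
  <input type="submit">
</form>
"""

scanner = FormScanner()
scanner.feed(SAMPLE_PAGE)
for action, fields in scanner.forms:
    print("CGI endpoint:", action, "expects parameters:", fields)
```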
The Design of Modular Web-Based Collaboration
NASA Astrophysics Data System (ADS)
Intapong, Ploypailin; Settapat, Sittapong; Kaewkamnerdpong, Boonserm; Achalakul, Tiranee
Online collaborative systems are popular communication channels as the systems allow people from various disciplines to interact and collaborate with ease. The systems provide communication tools and services that can be integrated on the web; consequently, the systems are more convenient to use and easier to install. Nevertheless, most of the currently available systems are designed according to some specific requirements and cannot be straightforwardly integrated into various applications. This paper provides the design of a new collaborative platform, which is component-based and re-configurable. The platform is called Modular Web-based Collaboration (MWC). MWC shares the same concept as computer-supported collaborative work (CSCW) and computer-supported collaborative learning (CSCL), but it provides configurable tools for online collaboration. Each tool module can be integrated into users' web applications freely and easily. This makes the collaborative system flexible, adaptable and suitable for online collaboration.
Ondex Web: web-based visualization and exploration of heterogeneous biological networks.
Taubert, Jan; Hassani-Pak, Keywan; Castells-Brooke, Nathalie; Rawlings, Christopher J
2014-04-01
Ondex Web is a new web-based implementation of the network visualization and exploration tools from the Ondex data integration platform. New features such as context-sensitive menus and annotation tools provide users with intuitive ways to explore and manipulate the appearance of heterogeneous biological networks. Ondex Web is open source, written in Java and can be easily embedded into Web sites as an applet. Ondex Web supports loading data from a variety of network formats, such as XGMML, NWB, Pajek and OXL. http://ondex.rothamsted.ac.uk/OndexWeb.
Bromberg, Yana; Yachdav, Guy; Ofran, Yanay; Schneider, Reinhard; Rost, Burkhard
2009-05-01
The rapidly increasing quantity of protein sequence data continues to widen the gap between available sequences and annotations. Comparative modeling suggests some aspects of the 3D structures of approximately half of all known proteins; homology- and network-based inferences annotate some aspect of function for a similar fraction of the proteome. For most known protein sequences, however, there is detailed knowledge about neither their function nor their structure. Comprehensive efforts towards the expert curation of sequence annotations have failed to meet the demand of the rapidly increasing number of available sequences. Only the automated prediction of protein function in the absence of homology can close the gap between available sequences and annotations in the foreseeable future. This review focuses on two novel methods for automated annotation, and briefly presents an outlook on how modern web software may revolutionize the field of protein sequence annotation. First, predictions of protein binding sites and functional hotspots, and the evolution of these into the most successful type of prediction of protein function from sequence will be discussed. Second, a new tool, comprehensive in silico mutagenesis, which contributes important novel predictions of function and at the same time prepares for the onset of the next sequencing revolution, will be described. While these two new sub-fields of protein prediction represent the breakthroughs that have been achieved methodologically, it will then be argued that a different development might further change the way biomedical researchers benefit from annotations: modern web software can connect the worldwide web in any browser with the 'Deep Web' (ie, proprietary data resources). The availability of this direct connection, and the resulting access to a wealth of data, may impact drug discovery and development more than any existing method that contributes to protein annotation.
Navigating the Web with a Typology of Corporate Uses.
ERIC Educational Resources Information Center
Hoger, Elizabeth A.; Cappel, James J.; Myerscough, Mark A.
1998-01-01
Describes a typology of business uses of the World Wide Web for electronic commerce. Gives examples of each type. Offers a sample assignment to show how the typology can be used in directing Web exploration, integrating the typology into an analytical assignment that analyzes a Web site using business communication concepts, and presenting the…
Web Accessibility Policies at Land-Grant Universities
ERIC Educational Resources Information Center
Bradbard, David A.; Peters, Cara; Caneva, Yoana
2010-01-01
The Web has become an integral part of postsecondary education within the United States. There are specific laws that legally mandate postsecondary institutions to have Web sites that are accessible for students with disabilities (e.g., the Americans with Disabilities Act (ADA)). Web accessibility policies are a way for universities to provide a…
The Adoption and Diffusion of Web Technologies into Mainstream Teaching.
ERIC Educational Resources Information Center
Hansen, Steve; Salter, Graeme
2001-01-01
Discusses various adoption and diffusion frameworks and methodologies to enhance the use of Web technologies by teaching staff. Explains the use of adopter-based models for product development; discusses the innovation-decision process; and describes PlatformWeb, a Web information system that was developed to help integrate a university's…
Library Web Site Administration: A Strategic Planning Model For the Smaller Academic Library
ERIC Educational Resources Information Center
Ryan, Susan M.
2003-01-01
Strategic planning provides a useful structure for creating and implementing library web sites. The planned integration of a library's web site into its mission and objectives ensures that the library's community of users will consider the web site one of the most important information tools the library offers.
Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo
2015-01-01
Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
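A client of such an endpoint would typically issue SPARQL queries over the ortholog groups. The following fragment is purely illustrative: the endpoint URL, prefix and property names are placeholders rather than the actual OrthO vocabulary or the MBGD endpoint.

```python
# Illustrative only: how a client might query an ortholog SPARQL endpoint of
# the kind described. Endpoint URL, prefix and properties are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/sparql")   # hypothetical endpoint
endpoint.setQuery("""
PREFIX orth: <http://example.org/ortho#>
SELECT ?gene ?orthologousGene WHERE {
  ?group a orth:OrthologGroup ;
         orth:member ?gene, ?orthologousGene .
  FILTER (?gene != ?orthologousGene)
} LIMIT 10
""")
endpoint.setReturnFormat(JSON)

# Only returns data against a live endpoint; shown here for the query shape.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["gene"]["value"], "<->", row["orthologousGene"]["value"])
```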
Web accessibility and open source software.
Obrenović, Zeljko
2009-07-01
A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse projects called Accessibility Tools Framework (ACTF), the aim of which is development of extensible infrastructure, upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.
75 FR 82072 - Notice of Lodging of a Consent Decree Under the Clean Water Act
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-29
... injunctive measures, including the construction of seven deep underground tunnel systems--to reduce its CSO... Decree, may also be examined on the following Department of Justice Web site, to http://www.usdoj.gov...
Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha
2016-02-27
Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
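The local-versus-cloud decision mentioned at the end can be illustrated with a back-of-the-envelope calculation. The functions below are not the paper's formulae; the linear scaling assumption, node count and prices are invented for the example.

```python
# Illustrative break-even reasoning for local serial vs. cloud parallel runs
# (not the paper's cost/benefit formulae; all numbers are made up).
def local_hours(n_subjects, hours_per_subject):
    """Serial execution on one workstation."""
    return n_subjects * hours_per_subject

def cloud_hours_and_cost(n_subjects, hours_per_subject, n_nodes,
                         usd_per_node_hour, overhead_hours=0.5):
    """Embarrassingly parallel execution across EC2-style nodes."""
    wall = (n_subjects / n_nodes) * hours_per_subject + overhead_hours
    cost = wall * n_nodes * usd_per_node_hour
    return wall, cost

n, h = 200, 1.5                      # e.g. 200 diffusion imaging subjects, 1.5 h each
print("local :", local_hours(n, h), "h wall time, no marginal cost")
print("cloud :", cloud_hours_and_cost(n, h, n_nodes=40, usd_per_node_hour=0.10))
```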
Adding Processing Functionality to the Sensor Web
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Pross, Benjamin; Jirka, Simon; Gräler, Benedikt
2017-04-01
The Sensor Web allows discovering, accessing and tasking different kinds of environmental sensors in the Web, ranging from simple in-situ sensors to remote sensing systems. However, (geo-)processing functionality needs to be applied to integrate data from different sensor sources and to generate higher level information products. Yet, a common standardized approach for processing sensor data in the Sensor Web is still missing and the integration differs from application to application. Standardizing not only the provision of sensor data, but also the processing facilitates sharing and re-use of processing modules, enables reproducibility of processing results, and provides a common way to integrate external scalable processing facilities or legacy software. In this presentation, we provide an overview of ongoing research projects that develop concepts for coupling standardized geoprocessing technologies with Sensor Web technologies. First, different architectures for coupling sensor data services with geoprocessing services are presented. Afterwards, profiles of the OGC Web Processing Service for linear regression and spatio-temporal interpolation, which allow consuming sensor data from and uploading predictions to Sensor Observation Services, are introduced. The profiles are implemented in processing services for the hydrological domain. Finally, we illustrate how the R software can be coupled with existing OGC Sensor Web and Geoprocessing Services and present an example of how a Web app can be built that allows exploring the results of environmental models in an interactive way using the R Shiny framework. All of the software presented is available as Open Source Software.
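For orientation, the sketch below shows how a client could talk to a geoprocessing service of the kind discussed using standard OGC WPS key-value-pair requests; the endpoint URL and process identifier are placeholders, not the services described above.

```python
# Sketch of discovering an OGC WPS geoprocessing service; the endpoint URL and
# process identifier are placeholders.
import requests

WPS_URL = "http://example.org/wps"            # hypothetical WPS endpoint

def get_capabilities():
    """Standard WPS key-value-pair GetCapabilities request."""
    r = requests.get(WPS_URL, params={"service": "WPS",
                                      "request": "GetCapabilities"})
    r.raise_for_status()
    return r.text                              # XML capabilities document

def describe_process(identifier):
    """Ask the service how a specific process (e.g. an interpolation) is invoked."""
    r = requests.get(WPS_URL, params={"service": "WPS",
                                      "version": "1.0.0",
                                      "request": "DescribeProcess",
                                      "identifier": identifier})
    r.raise_for_status()
    return r.text

if __name__ == "__main__":
    # Only meaningful against a live endpoint; shown for the request shape.
    print(get_capabilities()[:200])
    print(describe_process("interpolation.idw")[:200])   # hypothetical identifier
```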
Chang, Alexandre A.; Lobato, Rodolfo C.; Nakamoto, Hugo A.; Tuma, Paulo; Ferreira, Marcus C.
2014-01-01
Background: We consider the use of dermal matrix associated with a skin graft to cover deep wounds in the extremities when tendon and bone are exposed. The objective of this article was to evaluate the efficacy of covering acute deep wounds through the use of a dermal regeneration template (Integra) associated with vacuum therapy and subsequent skin grafting. Methods: Twenty patients were evaluated prospectively. All of them had acute (up to 3 weeks) deep wounds in the limbs. We consider a deep wound to be that with exposure of bone, tendon, or joint. Results: The average area of integration of the dermal regeneration template was 86.5%. There was complete integration of the skin graft over the dermal matrix in 14 patients (70%), partial integration in 5 patients (25%), and total loss in 1 case (5%). The wound has completely closed in 95% of patients. Conclusions: The use of Integra dermal template associated with negative-pressure therapy and skin grafting showed an adequate rate of resolution of deep wounds with low morbidity. PMID:25289363
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
First results of the wind evaluation breadboard for ELT primary mirror design
NASA Astrophysics Data System (ADS)
Reyes García-Talavera, Marcos; Viera, Teodora; Núñez, Miguel
2010-07-01
The Wind Evaluation Breadboard (WEB) is a primary mirror and telescope simulator formed by seven aluminium segments, including position sensors, electromechanical support systems and support structures. WEB has been developed to evaluate technologies for primary mirror wavefront control and to evaluate the performance of the control of wind buffeting disturbance on ELT segmented mirrors. For this purpose, the WEB electro-mechanical set-up simulates the real operational constraints applied to large segmented mirrors. This paper describes the WEB assembly, integration and verification, the instrument characterisation and closed-loop control design, including the dynamical characterization of the instrument and the control architecture. The performance of the new technologies developed for position sensing, actuation and control is evaluated. The integration of the instrument in the observatory and the results of the first experiments are summarised, with different wind conditions, elevation and azimuth angles of incidence. Conclusions are drawn with respect to the wind rejection performance and the control strategy for an ELT. WEB has been designed and developed by IAC, ESO, ALTRAN and JUPASA, with the integration of subsystems of FOGALE and TNO.
A Simulation Model that Decreases Faculty Concerns about Adopting Web-Based Instruction
ERIC Educational Resources Information Center
Song, Hae-Deok; Wang, Wei-Tsong; Liu, Chao-Yueh
2011-01-01
Faculty members have different concerns as they integrate new technology into their teaching practices. The integration of Web-Based Instruction in higher-education settings will not be successful if these faculty concerns are not addressed. Four main stages of faculty concern (information, personal, management, and impact) were identified based…
Integrated Functional and Executional Modelling of Software Using Web-Based Databases
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Marietta, Roberta
1998-01-01
NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and discusses a solution based on integrated modelling of software, use of automatic information extraction tools, web technology and databases.
EvalCOMIX®: A Web-Based Programme to Support Collaboration in Assessment
ERIC Educational Resources Information Center
Ibarra-Sáiz, María Soledad; Rodríguez-Gómez, Gregorio
2016-01-01
For many years assessment strategies and practices have emphasized on the one hand the importance of integrating assessment and learning and, secondly, the need to develop technological tools that facilitate this relationship and integration. In this paper, firstly we describe the EvalCOMIX® web service and then we present the opinions of…
Workshop D9. The Community for Integrated Modeling (CIEM): Current Status and future direction
At the 2010 iEMSs conference a workshop entitled "Web Portal for the Community for Integrated Environmental Modelling" was presented. The goals of the workshop were to introduce the CIEM web-portal (iemHUB.org) to the wider modelling community, obtain feedback and comments, and e...
The McLuhan Global Classroom: A Singapore-U.S. One-Year Instructional Interaction.
ERIC Educational Resources Information Center
Aune, Adonica Schultz; Lim, Dan
WebCT was integrated and modeled in a global Instructional Technology (IT) Certification Summer Institute offered through the University of Minnesota. Courses were first introduced with an on-site certification where technology integration was modeled in each course through the use of highly interactive web-based learning applications and games…
Exploring Faculty Incentives and Barriers to Participation in Web-Based Instruction
ERIC Educational Resources Information Center
Kinuthia, Wanjira
2006-01-01
The area of integration of technology in education is a continuous effort that revolves around looking for factors and practices that can be applied to encourage faculty to integrate technology into their areas of teaching. Web-based instruction (WBI) is one of the technologies affecting higher education, and historically Black colleges and…
Integrated and Applied Curricula Discussion Group and Data Base Project. Final Report.
ERIC Educational Resources Information Center
Wisconsin Univ. - Stout, Menomonie. Center for Vocational, Technical and Adult Education.
A project was conducted to compile integrated and applied curriculum resources, develop databases on the World Wide Web, and encourage networking for high school and technical college educators through an Internet discussion group. Activities conducted during the project include the creation of a web page to guide users to resource banks…
Using Web-Based Peer Benchmarking to Manage the Client-Based Project
ERIC Educational Resources Information Center
Raska, David; Keller, Eileen Weisenbach; Shaw, Doris
2013-01-01
The complexities of integrating client-based projects into marketing courses provide challenges for the instructor but produce richness of context and active learning for the student. This paper explains the integration of Web-based peer benchmarking as a means of improving student performance on client-based projects within a single semester in…
DOT National Transportation Integrated Search
2012-03-01
This report introduces the design and implementation of a Web-based bridge information visual analytics system. This : project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result : combines a GIS ...
Semantic web data warehousing for caGrid.
McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael
2009-10-01
The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.
Miles, Alistair; Zhao, Jun; Klyne, Graham; White-Cooper, Helen; Shotton, David
2010-10-01
Integrating heterogeneous data across distributed sources is a major requirement for in silico bioinformatics supporting translational research. For example, genome-scale data on patterns of gene expression in the fruit fly Drosophila melanogaster are widely used in functional genomic studies in many organisms to inform candidate gene selection and validate experimental results. However, current data integration solutions tend to be heavy weight, and require significant initial and ongoing investment of effort. Development of a common Web-based data integration infrastructure (a.k.a. data web), using Semantic Web standards, promises to alleviate these difficulties, but little is known about the feasibility, costs, risks or practical means of migrating to such an infrastructure. We describe the development of OpenFlyData, a proof-of-concept system integrating gene expression data on D. melanogaster, combining Semantic Web standards with light-weight approaches to Web programming based on Web 2.0 design patterns. To support researchers designing and validating functional genomic studies, OpenFlyData includes user-facing search applications providing intuitive access to and comparison of gene expression data from FlyAtlas, the BDGP in situ database, and FlyTED, using data from FlyBase to expand and disambiguate gene names. OpenFlyData's services are also openly accessible, and are available for reuse by other bioinformaticians and application developers. Semi-automated methods and tools were developed to support labour- and knowledge-intensive tasks involved in deploying SPARQL services. These include methods for generating ontologies and relational-to-RDF mappings for relational databases, which we illustrate using the FlyBase Chado database schema; and methods for mapping gene identifiers between databases. The advantages of using Semantic Web standards for biomedical data integration are discussed, as are open issues. In particular, although the performance of open source SPARQL implementations is sufficient to query gene expression data directly from user-facing applications such as Web-based data fusions (a.k.a. mashups), we found open SPARQL endpoints to be vulnerable to denial-of-service-type problems, which must be mitigated to ensure reliability of services based on this standard. These results are relevant to data integration activities in translational bioinformatics. The gene expression search applications and SPARQL endpoints developed for OpenFlyData are deployed at http://openflydata.org. FlyUI, a library of JavaScript widgets providing re-usable user-interface components for Drosophila gene expression data, is available at http://flyui.googlecode.com. Software and ontologies to support transformation of data from FlyBase, FlyAtlas, BDGP and FlyTED to RDF are available at http://openflydata.googlecode.com. SPARQLite, an implementation of the SPARQL protocol, is available at http://sparqlite.googlecode.com. All software is provided under the GPL version 3 open source license.
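The relational-to-RDF mapping step mentioned above can be pictured with a small example. The rows, namespace and property names below are invented for illustration; this is not the OpenFlyData tooling or the FlyBase Chado mapping.

```python
# Minimal illustration of a relational-to-RDF mapping step (not the OpenFlyData
# tooling); the table rows, namespace and property names are invented.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/flydemo#")

rows = [  # pretend these came from a relational gene-expression table
    {"gene_id": "FBgn0000001", "symbol": "geneA", "tissue": "wing disc", "level": 8.2},
    {"gene_id": "FBgn0000002", "symbol": "geneB", "tissue": "wing disc", "level": 5.1},
]

g = Graph()
for row in rows:
    obs = EX["obs/" + row["gene_id"] + "/" + row["tissue"].replace(" ", "_")]
    g.add((obs, RDF.type, EX.ExpressionObservation))
    g.add((obs, EX.gene, EX[row["gene_id"]]))
    g.add((obs, EX.geneSymbol, Literal(row["symbol"])))
    g.add((obs, EX.tissue, Literal(row["tissue"])))
    g.add((obs, EX.expressionLevel, Literal(row["level"])))

print(g.serialize(format="turtle"))
```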
Risk of Neurological Insult in Competitive Deep Breath-Hold Diving.
Tetzlaff, Kay; Schöppenthau, Holger; Schipke, Jochen D
2017-02-01
It has been widely believed that tissue nitrogen uptake from the lungs during breath-hold diving would be insufficient to cause decompression stress in humans. With competitive free diving, however, diving depths have been ever increasing over the past decades. A case is presented of a competitive free-diving athlete who suffered stroke-like symptoms after surfacing from his last dive of a series of 3 deep breath-hold dives. A literature and Web search was performed to screen for similar cases of subjects with serious neurological symptoms after deep breath-hold dives. A previously healthy 31-y-old athlete experienced right-sided motor weakness and difficulty speaking immediately after surfacing from a breathhold dive to a depth of 100 m. He had performed 2 preceding breath-hold dives to that depth with surface intervals of only 15 min. The presentation of symptoms and neuroimaging findings supported a clinical diagnosis of stroke. Three more cases of neurological insults were retrieved by literature and Web search; in all cases the athletes presented with stroke-like symptoms after single breath-hold dives of depths exceeding 100 m. Two of these cases only had a short delay to recompression treatment and completely recovered from the insult. This report highlights the possibility of neurological insult, eg, stroke, due to cerebral arterial gas embolism as a consequence of decompression stress after deep breath-hold dives. Thus, stroke as a clinical presentation of cerebral arterial gas embolism should be considered another risk of extreme breath-hold diving.
NASA Astrophysics Data System (ADS)
Orcutt, B. N.; Bowman, D.; Turner, A.; Inderbitzen, K. E.; Fisher, A. T.; Peart, L. W.; Iodp Expedition 327 Shipboard Party
2010-12-01
We launched the "Adopt a Microbe" project as part of Integrated Ocean Drilling Program (IODP) Expedition 327 in Summer 2010. This eight-week-long education and outreach effort was run by shipboard scientists and educators from the research vessel JOIDES Resolution, using a web site (https://sites.google.com/site/adoptamicrobe) to engage students of all ages in an exploration of the deep biosphere inhabiting the upper ocean crust. Participants were initially introduced to a cast of microbes (residing within an 'Adoption Center' on the project website) that live in the dark ocean and asked to select and virtually 'adopt' a microbe. A new educational activity was offered each week to encourage learning about microbiology, using the adopted microbe as a focal point. Activities included reading information and asking questions about the adopted microbes (with subsequent responses from shipboard scientists), writing haiku about the adopted microbes, making balloon and fabric models of the adopted microbes, answering math questions related to the study of microbes in the ocean, growing cultures of microbes, and examining the gases produced by microbes. In addition, the website featured regular text, photo and video updates about the science of the expedition using a toy microbe as narrator, as well as stories written by shipboard scientists from the perspective of deep ocean microbes accompanied by watercolor illustrations prepared by a shipboard artist. Assessment methods for evaluating the effectiveness of the Adopt a Microbe project included participant feedback via email and online surveys, website traffic monitoring, and online video viewing rates. Quantitative metrics suggest that the "Adopt A Microbe" project was successful in reaching target audiences and helping to encourage and maintain interest in topics related to IODP Expedition 327. The "Adopt A Microbe" project model can be adapted for future oceanographic expeditions to help connect the public at large to cutting-edge, exploratory research and for engaging students in active learning.
Forecasting influenza in Hong Kong with Google search queries and statistical model fusion.
Xu, Qinneng; Gel, Yulia R; Ramirez Ramirez, L Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung
2017-01-01
The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like-illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, in our approach we fuse multiple offline and online data sources. Four individual models are employed to forecast ILI-GOPC both one week and two weeks in advance: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with feedforward neural networks (FNN). The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of each individual forecasting model, we use statistical model fusion, namely Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and covariates. DL with FNN appears to deliver the most competitive predictive performance among the four considered individual models. Combining all four models in a comprehensive BMA framework further improves such predictive evaluation metrics as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting locations of influenza peaks. The proposed approach can be viewed as a feasible alternative to forecast ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient.
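The fusion step can be illustrated with a toy calculation. The sketch below combines four hypothetical one-week-ahead forecasts with Bayesian-model-averaging-style weights; all numbers are invented, and the real study derives the weights from each model's posterior probability given recent ILI data.

```python
# Toy fusion of four hypothetical one-week-ahead ILI forecasts with
# Bayesian-model-averaging-style weights; every number here is invented.
import numpy as np

forecasts = {"GLM": 412.0, "LASSO": 398.0, "ARIMA": 430.0, "DL-FNN": 405.0}

# Scores standing in for each model's recent predictive likelihood (illustrative).
scores = np.array([0.8, 1.1, 0.6, 1.5])
weights = scores / scores.sum()                      # normalise to BMA-style weights

combined = float(np.dot(weights, list(forecasts.values())))
print({model: round(w, 3) for model, w in zip(forecasts, weights)})
print("BMA-combined forecast:", round(combined, 1))
```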
Web-based network analysis and visualization using CellMaps
Salavert, Francisco; García-Alonso, Luz; Sánchez, Rubén; Alonso, Roberto; Bleda, Marta; Medina, Ignacio; Dopazo, Joaquín
2016-01-01
Summary: CellMaps is an HTML5 open-source web tool that allows displaying, editing, exploring and analyzing biological networks as well as integrating metadata into them. Computations and analyses are remotely executed in high-end servers, and all the functionalities are available through RESTful web services. CellMaps can easily be integrated in any web page by using an available JavaScript API. Availability and Implementation: The application is available at: http://cellmaps.babelomics.org/ and the code can be found in: https://github.com/opencb/cell-maps. The client is implemented in JavaScript and the server in C and Java. Contact: jdopazo@cipf.es Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27296979
Solar cells and modules from dendritic web silicon
NASA Technical Reports Server (NTRS)
Campbell, R. B.; Rohatgi, A.; Seman, E. J.; Davis, J. R.; Rai-Choudhury, P.; Gallagher, B. D.
1980-01-01
Some of the noteworthy features of the processes developed in the fabrication of solar cell modules are the handling of long lengths of web, the use of cost-effective dip coating of photoresist and antireflection coatings, selective electroplating of the grid pattern and ultrasonic bonding of the cell interconnect. Data on the cells are obtained by means of dark I-V analysis and deep level transient spectroscopy. A histogram of over 100 dendritic web solar cells fabricated in a number of runs using different web crystals shows an average efficiency of over 13%, with some efficiencies running above 15%. Lower cell efficiency is generally associated with low minority carrier lifetime due to recombination centers sometimes present in the bulk silicon. A cost analysis of the process sequence using a 25 MW production line indicates a selling price of $0.75/peak watt in 1986. It is concluded that the efficiency of dendritic web cells approaches that of float zone silicon cells, reduced somewhat by the lower bulk lifetime of the former.
PaaS for web applications with OpenShift Origin
NASA Astrophysics Data System (ADS)
Lossent, A.; Rodriguez Peon, A.; Wagner, A.
2017-10-01
The CERN Web Frameworks team has deployed OpenShift Origin to facilitate deployment of web applications and to improve efficiency in terms of computing resource usage. OpenShift leverages Docker containers and Kubernetes orchestration to provide a Platform-as-a-Service solution oriented toward web applications. We will review use cases and how OpenShift was integrated with other services such as source control, web site management and authentication services.
ERIC Educational Resources Information Center
Wood, Pamela L.; Quitadamo, Ian J.; DePaepe, James L.; Loverro, Ian
2007-01-01
The WebQuest is a four-step process integrated at appropriate points in the Animal Studies unit. Through the WebQuest, students create a series of habitat maps that build on the knowledge gained from conducting the various activities of the unit. The quest concludes with an evaluation using the WebQuest rubric and an oral presentation of a final…
WebQuest on Conic Sections as a Learning Tool for Prospective Teachers
ERIC Educational Resources Information Center
Kurtulus, Aytac; Ada, Tuba
2012-01-01
WebQuests incorporate technology with educational concepts through integrating online resources with student-centred and activity-based learning. In this study, we describe and evaluate a WebQuest based on conic sections, which we have used with a group of prospective mathematics teachers. The WebQuest entitled: "Creating a Carpet Design Using…
A web-based biosignal data management system for U-health data integration.
Ro, Dongwoo; Yoo, Sooyoung; Choi, Jinwook
2008-11-06
In the ubiquitous healthcare environment, biosignal data should be easily accessible and properly maintained. This paper describes a web-based data management system. It consists of a device interface, a data upload control, a central repository, and a web server. For the user-specific web services, an MFER Upload ActiveX Control was developed.
Depth-specific Analyses of the Lake Superior Food Web
Characteristics of large, deep aquatic systems include depth gradients in community composition, in the quality and distribution of food resources, and in the strategies that organisms use to obtain their nutrition. In Lake Superior, nearshore communities that rely upon a combina...
CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises
NASA Astrophysics Data System (ADS)
Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.
2011-12-01
JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify the data and samples that were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel and research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is the duplication of effort and the asynchrony of metadata across multiple distribution websites, caused by manual metadata entry into individual websites by administrators. The other is that each website uses different data types or representations of metadata. To solve these problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO comprises three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility toward any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is processed automatically via the EAI software. Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then, daily differential uptake of metadata from the XML database to the databases of several distribution websites is processed automatically using a converter defined by the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.
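The "daily differential uptake" step can be pictured as a high-water-mark synchronization. The sketch below is schematic only (it is not JAMSTEC's EAI configuration, and the cruise records are invented): it pushes to the XML database only those metadata records modified since the last successful run.

```python
# Schematic daily differential uptake (not JAMSTEC's EAI configuration): push only
# metadata records modified since the last successful run; records are invented.
from datetime import datetime

last_run = datetime(2011, 9, 30)
source_rows = [
    {"cruise_id": "CR-001", "vessel": "Vessel A", "modified": datetime(2011, 9, 29)},
    {"cruise_id": "CR-002", "vessel": "Vessel B", "modified": datetime(2011, 10, 1)},
]

delta = [row for row in source_rows if row["modified"] > last_run]
for row in delta:
    print("push to XML database:", row["cruise_id"])   # only CR-002 is pushed

last_run = datetime.now()  # new high-water mark for the next daily run
```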
Information integration from heterogeneous data sources: a Semantic Web approach.
Kunapareddy, Narendra; Mirhaji, Parsa; Richards, David; Casscells, S Ward
2006-01-01
Although the decentralized and autonomous implementation of health information systems has made it possible to extend the reach of surveillance systems to a variety of contextually disparate domains, public health use of data from these systems is not primarily anticipated. The Semantic Web has been proposed to address both representational and semantic heterogeneity in distributed and collaborative environments. We introduce a semantic approach for the integration of health data using the Resource Description Framework (RDF) and the Simple Knowledge Organization System (SKOS) developed by the Semantic Web community.
The deep web, dark matter, metabundles and the broadband elites: do you need an informaticist?
Holden, Gary; Rosenberg, Gary
2003-01-01
The World Wide Web (WWW) is growing in size and is becoming a substantial component of life. This seems especially true for US professionals, including social workers. It will require effort by these professionals to use the WWW effectively and efficiently. One of the main issues that these professionals will encounter in these efforts is the quality of materials located on the WWW. This paper reviews some of the factors related to improving the quality of information obtained from the WWW by social workers.
Software for Allocating Resources in the Deep Space Network
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester; Zendejas, Silvino; Baldwin, John
2003-01-01
TIGRAS 2.0 is a computer program designed to satisfy a need for improved means for analyzing the tracking demands of interplanetary space-flight missions upon the set of ground antenna resources of the Deep Space Network (DSN) and for allocating those resources. Written in Microsoft Visual C++, TIGRAS 2.0 provides a single rich graphical analysis environment for use by diverse DSN personnel, by connecting to various data sources (relational databases or files) based on the stages of the analyses being performed. Notable among the algorithms implemented by TIGRAS 2.0 are a DSN antenna-load-forecasting algorithm and a conflict-aware DSN schedule-generating algorithm. Computers running TIGRAS 2.0 can also be connected using SOAP/XML to a Web services server that provides analysis services via the World Wide Web. TIGRAS 2.0 supports multiple windows and multiple panes in each window for users to view and use information, all in the same environment, to eliminate repeated switching among various application programs and Web pages. TIGRAS 2.0 enables the use of multiple windows for various requirements, trajectory-based time intervals during which spacecraft are viewable, ground resources, forecasts, and schedules. Each window includes a time navigation pane, a selection pane, a graphical display pane, a list pane, and a statistics pane.
Food-Web Complexity in Guaymas Basin Hydrothermal Vents and Cold Seeps.
Portail, Marie; Olu, Karine; Dubois, Stanislas F; Escobar-Briones, Elva; Gelinas, Yves; Menot, Lénaick; Sarrazin, Jozée
In the Guaymas Basin, the presence of cold seeps and hydrothermal vents in close proximity, similar sedimentary settings and comparable depths offers a unique opportunity to assess and compare the functioning of these deep-sea chemosynthetic ecosystems. The food webs of five seep and four vent assemblages were studied using stable carbon and nitrogen isotope analyses. Although the two ecosystems shared similar potential basal sources, their food webs differed: seeps relied predominantly on methanotrophy and thiotrophy via the Calvin-Benson-Bassham (CBB) cycle and vents on petroleum-derived organic matter and thiotrophy via the CBB and reductive tricarboxylic acid (rTCA) cycles. In contrast to symbiotic species, the heterotrophic fauna exhibited high trophic flexibility among assemblages, suggesting weak trophic links to the metabolic diversity of chemosynthetic primary producers. At both ecosystems, food webs did not appear to be organised through predator-prey links but rather through weak trophic relationships among co-occurring species. Examples of trophic or spatial niche differentiation highlighted the importance of species-sorting processes within chemosynthetic ecosystems. Variability in food web structure, addressed through Bayesian metrics, revealed consistent trends across ecosystems. Food-web complexity significantly decreased with increasing methane concentrations, a common proxy for the intensity of seep and vent fluid fluxes. Although high fluid-fluxes have the potential to enhance primary productivity, they generate environmental constraints that may limit microbial diversity, colonisation of consumers and the structuring role of competitive interactions, leading to an overall reduction of food-web complexity and an increase in trophic redundancy. Heterogeneity provided by foundation species was identified as an additional structuring factor. According to their biological activities, foundation species may have the potential to partly release the competitive pressure within communities of low fluid-flux habitats. Finally, ecosystem functioning in vents and seeps was highly similar despite environmental differences (e.g. physico-chemistry, dominant basal sources) suggesting that ecological niches are not specifically linked to the nature of fluids. This comparison of seep and vent functioning in the Guaymas basin thus provides further supports to the hypothesis of continuity among deep-sea chemosynthetic ecosystems.
[A web-based integrated clinical database for laryngeal cancer].
E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu
2014-08-01
To establish an integrated database for laryngeal cancer and to provide an information platform for laryngeal cancer in clinical and fundamental research. This database also meets the needs of clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, laryngeal cancer specialist characteristics and tumor genetic information. A Web-based integrated clinical database for laryngeal carcinoma has been developed. This database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system utilizes clinical data standards and exchanges information with existing electronic medical record systems to avoid information silos. Furthermore, the database forms are integrated with laryngeal cancer specialist characteristics and tumor genetic information. The Web-based integrated clinical database for laryngeal carcinoma has comprehensive specialist information, strong expandability and high technical feasibility, and conforms to the clinical characteristics of the laryngeal cancer specialty. Using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and swiftly over the Internet.
[Dynamics and interactions between the university community and public health 2.0].
Rodríguez-Gómez, Rodolfo
2016-01-01
To explore the experiences of a group of participants in a university community with the web in general and with digital content on public health, to describe their motivations, and to understand how social networks influence their interaction with content on public health. Qualitative research. In-depth semi-structured interviews were conducted to understand the phenomenon. Five categories emerged from the study: socialization and internalization of cyberculture, social marketing linked to the web and public health, a culture of fear and distrust, the concept of health, and the health system and public health. Participants have internalized the web and attribute strong symbolic capital to it. The challenges of public health 2.0 are not only to achieve interaction with users and to secure a place in cyberspace, but also to fight against the stigma of the "public" and to take advantage of the influence of the web on small-world networks to communicate.
ERIC Educational Resources Information Center
Otamendi, Francisco Javier; Doncel, Luis Miguel
2013-01-01
Experimental teaching in general, and simulation in particular, have primarily been used in lecture rooms but in the future must also be adapted to e-learning. The integration of web simulators into virtual learning environments, coupled with specific supporting video documentation and the use of videoconference tools, results in robust…
ERIC Educational Resources Information Center
Chang, Kuo-En; Sung, Yao-Ting; Hou, Huei-Tse
2006-01-01
Educational software for teachers is an important, yet usually ignored, link for integrating information technology into classroom instruction. This study builds a web-based teaching material design and development system. The process in the system is divided into four stages, analysis, design, development, and practice. Eight junior high school…
The Urban-Rural Gap: Project-Based Learning with Web 2.0 among West Virginian Teachers
ERIC Educational Resources Information Center
Goh, Debbie; Kale, Ugur
2016-01-01
To overcome the digital divide in West Virginia, schools are urged to integrate emerging information and communication technologies (ICTs) such as Web 2.0 and alternative pedagogies to develop students' twenty-first-century skills. Yet, the potential effects of the digital divide on technology integration have not necessarily been part of planning…
ERIC Educational Resources Information Center
Scalise, Kathleen
2016-01-01
With the onset of Web 2.0 and 3.0--the social and semantic webs--a next wave for integration of educational technology into the classroom is occurring. The aim of this paper is to show how some teachers are increasingly bringing collaboration and shared meaning-making through technology environments into learning environments (Evergreen Education…
ERIC Educational Resources Information Center
Asuman, Baguma; Khan, Md. Shahadat Hossain; Clement, Che Kum
2018-01-01
This article reports on the barriers encountered by teachers and the possible solutions to the integration of web-based learning (WBL) into higher educational institutions in Uganda. A total of 50 teachers in the departments of ICT, management, and social sciences from five different universities were purposively selected. A self-designed…
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
2013-04-15
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
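The core of such a wrapper is a mapping from relational rows to RDF triples. The sketch below is a simplified, hypothetical relational-to-RDF mapping in that spirit; the table, columns, identifiers and ontology terms are invented and do not reflect BioSemantic's generated views.

```python
# Simplified, hypothetical relational-to-RDF mapping: rows of a Chado-like
# "feature" table become resources with a type and a label. All names are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/plantdb#")
g = Graph()
g.bind("ex", EX)

rows = [  # pretend result of: SELECT feature_id, uniquename, type FROM feature
    (101, "GENE0001", "gene"),
    (102, "GENE0002", "gene"),
]
for feature_id, uniquename, feature_type in rows:
    subject = URIRef(f"http://example.org/plantdb/feature/{feature_id}")
    g.add((subject, RDF.type, EX[feature_type.capitalize()]))
    g.add((subject, RDFS.label, Literal(uniquename)))

print(g.serialize(format="turtle"))
```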
Social network extraction based on Web: 3. the integrated superficial method
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitompul, O. S.; Noah, S. A.
2018-03-01
The Web as an information source has become part of the record of social behaviour. Although it relies only on the limited information disclosed by search engines, namely hit counts, snippets, and the URL addresses of web pages, the integrated extraction method produces a social network that is not only trustworthy but enriched. Unintegrated extraction methods may produce social networks without explanation, yielding poor supplemental information, or produce surmise-laden networks and consequently unrepresentative social structures. The integrated superficial method, in addition to generating the core social network, also generates an expanded network that reaches the full scope of relation clues, with a number of edges computationally approaching n(n - 1)/2 for n social actors.
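The edge-counting claim can be made concrete with a small worked example: for n actors there are n(n - 1)/2 candidate pairs, i.e. 3 pairs for n = 3. The sketch below scores pairwise relations from search-engine hit counts using a Jaccard-style overlap; the counts are invented and the formula is illustrative rather than the paper's exact measure.

```python
# Toy scoring of pairwise relations from search-engine hit counts (invented numbers,
# Jaccard-style overlap rather than the paper's exact measure).
from itertools import combinations

hits = {"A": 1200, "B": 800, "C": 450}                       # hits per actor
joint = {("A", "B"): 300, ("A", "C"): 40, ("B", "C"): 90}    # hits for both actors

n = len(hits)
print("candidate pairs:", n * (n - 1) // 2)                  # 3 pairs for n = 3

for a, b in combinations(sorted(hits), 2):
    co = joint[(a, b)]
    strength = co / (hits[a] + hits[b] - co)                 # overlap in [0, 1]
    print(f"{a} - {b}: {strength:.3f}")
```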
Jensen, Sigmund; Neufeld, Josh D; Birkeland, Nils-Kåre; Hovland, Martin; Murrell, John Colin
2008-11-01
Deep-water coral reefs are seafloor environments with diverse biological communities surrounded by cold permanent darkness. Sources of energy and carbon for the nourishment of these reefs are presently unclear. We investigated one aspect of the food web using DNA stable-isotope probing (DNA-SIP). Sediment from beneath a Lophelia pertusa reef off the coast of Norway was incubated until assimilation of 5 micromol 13CH4 g(-1) wet weight occurred. Extracted DNA was separated into 'light' and 'heavy' fractions for analysis of labelling. Bacterial community fingerprinting of PCR-amplified 16S rRNA gene fragments revealed two predominant 13C-specific bands. Sequencing of these bands indicated that carbon from 13CH4 had been assimilated by a Methylomicrobium and an uncultivated member of the Gammaproteobacteria. Cloning and sequencing of 16S rRNA genes from the heavy DNA, in addition to genes encoding particulate methane monooxygenase and methanol dehydrogenase, all linked Methylomicrobium with methane metabolism. Putative cross-feeders were affiliated with Methylophaga (Gammaproteobacteria), Hyphomicrobium (Alphaproteobacteria) and previously unrecognized methylotrophs of the Gammaproteobacteria, Alphaproteobacteria, Deferribacteres and Bacteroidetes. This first marine methane SIP study provides evidence for the presence of methylotrophs that participate in sediment food webs associated with deep-water coral reefs.
Integrating Data Distribution and Data Assimilation Between the OOI CI and the NOAA DIF
NASA Astrophysics Data System (ADS)
Meisinger, M.; Arrott, M.; Clemesha, A.; Farcas, C.; Farcas, E.; Im, T.; Schofield, O.; Krueger, I.; Klacansky, I.; Orcutt, J.; Peach, C.; Chave, A.; Raymer, D.; Vernon, F.
2008-12-01
The Ocean Observatories Initiative (OOI) is an NSF funded program to establish the ocean observing infrastructure of the 21st century benefiting research and education. It is currently approaching final design and promises to deliver cyber and physical observatory infrastructure components as well as substantial core instrumentation to study environmental processes of the ocean at various scales, from coastal shelf-slope exchange processes to the deep ocean. The OOI's data distribution network lies at the heart of its cyber- infrastructure, which enables a multitude of science and education applications, ranging from data analysis, to processing, visualization and ontology supported query and mediation. In addition, it fundamentally supports a class of applications exploiting the knowledge gained from analyzing observational data for objective-driven ocean observing applications, such as automatically triggered response to episodic environmental events and interactive instrument tasking and control. The U.S. Department of Commerce through NOAA operates the Integrated Ocean Observing System (IOOS) providing continuous data in various formats, rates and scales on open oceans and coastal waters to scientists, managers, businesses, governments, and the public to support research and inform decision-making. The NOAA IOOS program initiated development of the Data Integration Framework (DIF) to improve management and delivery of an initial subset of ocean observations with the expectation of achieving improvements in a select set of NOAA's decision-support tools. Both OOI and NOAA through DIF collaborate on an effort to integrate the data distribution, access and analysis needs of both programs. We present details and early findings from this collaboration; one part of it is the development of a demonstrator combining web-based user access to oceanographic data through ERDDAP, efficient science data distribution, and scalable, self-healing deployment in a cloud computing environment. ERDDAP is a web-based front-end application integrating oceanographic data sources of various formats, for instance CDF data files as aggregated through NcML or presented using a THREDDS server. The OOI-designed data distribution network provides global traffic management and computational load balancing for observatory resources; it makes use of the OpenDAP Data Access Protocol (DAP) for efficient canonical science data distribution over the network. A cloud computing strategy is the basis for scalable, self-healing organization of an observatory's computing and storage resources, independent of the physical location and technical implementation of these resources.
Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data
NASA Astrophysics Data System (ADS)
Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii
2013-04-01
Over the last decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood Risk Directive). In order to enable operational flood monitoring and assessment of flood risk, it is required to provide an infrastructure with standardized interfaces and services. Grid and Sensor Web can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, integration of Grid and Sensor Web approaches is proposed [1]. Grid represents a distributed environment that integrates heterogeneous computing and storage resources administrated by multiple organizations. Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, triggering events by observed or predicted conditions, remote data access and processing capabilities to generate and deliver data products. Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Different practical issues regarding integration of Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit. The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the deployed SPS service for NASA's EO-1 satellite it is possible to automatically task the sensor with re-imaging capability in less than 8 h. Therefore, with the computational and storage services provided by Grid and cloud infrastructure it was possible to generate flood maps within 24-48 h after an alert was triggered. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.
Uniformity testing: assessment of a centralized web-based uniformity analysis system.
Klempa, Meaghan C
2011-06-01
Uniformity testing is performed daily to ensure adequate camera performance before clinical use. The aim of this study is to assess the reliability of Beth Israel Deaconess Medical Center's locally built, centralized, Web-based uniformity analysis system by examining the differences between manufacturer and Web-based National Electrical Manufacturers Association integral uniformity calculations measured in the useful field of view (FOV) and the central FOV. Manufacturer and Web-based integral uniformity calculations measured in the useful FOV and the central FOV were recorded over a 30-d period for 4 cameras from 3 different manufacturers. These data were then statistically analyzed. The differences between the uniformity calculations were computed, in addition to the means and the SDs of these differences for each head of each camera. There was a correlation between the manufacturer and Web-based integral uniformity calculations in the useful FOV and the central FOV over the 30-d period. The average differences between the manufacturer and Web-based useful FOV calculations ranged from -0.30 to 0.099, with SD ranging from 0.092 to 0.32. For the central FOV calculations, the average differences ranged from -0.163 to 0.055, with SD ranging from 0.074 to 0.24. Most of the uniformity calculations computed by this centralized Web-based uniformity analysis system are comparable to the manufacturers' calculations, suggesting that this system is reasonably reliable and effective. This finding is important because centralized Web-based uniformity analysis systems are advantageous in that they test camera performance in the same manner regardless of the manufacturer.
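Integral uniformity in this context is conventionally computed as 100 × (max - min) / (max + min) over the pixel counts within the chosen field of view. The sketch below assumes the count matrix has already been acquired and smoothed per the NEMA procedure, uses synthetic data, and is not the medical center's Web-based system.

```python
# Minimal sketch of a NEMA-style integral uniformity calculation on synthetic data
# (assumes the count matrix has already been acquired and smoothed as required):
# IU = 100 * (max - min) / (max + min) over pixels in the chosen field of view.
import numpy as np

def integral_uniformity(counts: np.ndarray) -> float:
    cmax, cmin = float(counts.max()), float(counts.min())
    return 100.0 * (cmax - cmin) / (cmax + cmin)

ufov = np.random.default_rng(0).poisson(10_000, size=(64, 64)).astype(float)
cfov = ufov[8:56, 8:56]   # central FOV approximated as the inner 75% (illustrative)
print(f"UFOV IU: {integral_uniformity(ufov):.2f}%")
print(f"CFOV IU: {integral_uniformity(cfov):.2f}%")
```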
Computational knowledge integration in biopharmaceutical research.
Ficenec, David; Osborne, Mark; Pradines, Joel; Richards, Dan; Felciano, Ramon; Cho, Raymond J; Chen, Richard O; Liefeld, Ted; Owen, James; Ruttenberg, Alan; Reich, Christian; Horvath, Joseph; Clark, Tim
2003-09-01
An initiative to increase biopharmaceutical research productivity by capturing, sharing and computationally integrating proprietary scientific discoveries with public knowledge is described. This initiative involves both organisational process change and multiple interoperating software systems. The software components rely on mutually supporting integration techniques. These include a richly structured ontology, statistical analysis of experimental data against stored conclusions, natural language processing of public literature, secure document repositories with lightweight metadata, web services integration, enterprise web portals and relational databases. This approach has already begun to increase scientific productivity in our enterprise by creating an organisational memory (OM) of internal research findings, accessible on the web. Through bringing together these components it has also been possible to construct a very large and expanding repository of biological pathway information linked to this repository of findings which is extremely useful in analysis of DNA microarray data. This repository, in turn, enables our research paradigm to be shifted towards more comprehensive systems-based understandings of drug action.
1994-09-01
50 years ago as an imperative for a simple fighter-bomber escort team has since produced a highly sophisticated web of relationships between multip...encyclopedia of mission-specific objectives that are neither defined nor conceived at the operational level. The ATO cannot possibly cut this deep, nor...but they were nervous). [Meanwhile] F-15s are at their orbit point 150 miles deep in Iraq, waiting! We should do better than that. Responsiveness What
"WGL," a Web Laboratory for Geometry
ERIC Educational Resources Information Center
Quaresma, Pedro; Santos, Vanda; Maric, Milena
2018-01-01
The role of information and communication technologies (ICT) in education is nowadays well recognised. The "Web Geometry Laboratory" is an e-learning, collaborative and adaptive Web environment for geometry, integrating a well-known dynamic geometry system. In a collaborative session, teachers and students, engaged in solving…
9 CFR 94.12 - Pork and pork products from regions where swine vesicular disease exists.
Code of Federal Regulations, 2013 CFR
2013-01-01
... vesicular disease is maintained on the APHIS Web site at http://www.aphis.usda.gov/import_export/animals... 260 °C for approximately 210 minutes after which they must be cooked in hot oil (deep-fried) at a...
Depth-specific Analyses of the Lake Superior Food Web, oral presentation
Characteristics of large, deep aquatic systems include depth gradients in community composition, in the quality and distribution of food resources, and in the strategies that organisms use to obtain their nutrition. In Lake Superior, nearshore communities that rely upon a combina...
A flexible cruciform journal bearing mount
NASA Technical Reports Server (NTRS)
Frost, A. E.; Geiger, W. A.
1973-01-01
Flexible mount achieves low roll, pitch and yaw stiffnesses while maintaining high radial stiffness by holding bearing pad in fixed relationship to deep web cruciform member and holding this member in fixed relationship to bearing support. This mount has particular application in small, high performance gas turbines.
Ocean Drilling Program: Related Sites
) 306-0390; Web site: www.nsf.gov. Joint Oceanographic Institutions for Deep Earth Sampling (JOIDES) US Members: Columbia University, Lamont-Doherty Earth Observatory; Florida State University; Oregon State University, College of Oceanic and Atmospheric Sciences; Pennsylvania State University, College of Earth and
ICM: a web server for integrated clustering of multi-dimensional biomedical data.
He, Song; He, Haochen; Xu, Wenjian; Huang, Xin; Jiang, Shuai; Li, Fei; He, Fuchu; Bo, Xiaochen
2016-07-08
Large-scale efforts for parallel acquisition of multi-omics profiling continue to generate extensive amounts of multi-dimensional biomedical data. Thus, integrated clustering of multiple types of omics data is essential for developing individual-based treatments and precision medicine. However, while rapid progress has been made, methods for integrated clustering lack an intuitive web interface that supports biomedical researchers without sufficient programming skills. Here, we present a web tool, named Integrated Clustering of Multi-dimensional biomedical data (ICM), that provides an interface from which to fuse, cluster and visualize multi-dimensional biomedical data and knowledge. With ICM, users can explore the heterogeneity of a disease or a biological process by identifying subgroups of patients. The results obtained can then be interactively modified by using an intuitive user interface. Researchers can also exchange the results from ICM with collaborators via a web link containing a Project ID number that will directly pull up the analysis results being shared. ICM also supports incremental clustering, which allows users to add new sample data to the data of a previous study to obtain a clustering result. Currently, the ICM web server is available with no login requirement and at no cost at http://biotech.bmi.ac.cn/icm/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
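As a baseline illustration of what integrated clustering of multi-omics data means (and explicitly not ICM's own algorithm), the sketch below z-scores two synthetic omics layers, concatenates them per sample and applies k-means to identify patient subgroups.

```python
# Baseline illustration of integrated clustering (not ICM's algorithm): z-score two
# synthetic omics layers, concatenate per sample, then k-means into subgroups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
expression = rng.normal(size=(60, 500))    # 60 samples x 500 expression features
methylation = rng.normal(size=(60, 300))   # same 60 samples x 300 methylation features

fused = np.hstack([StandardScaler().fit_transform(expression),
                   StandardScaler().fit_transform(methylation)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(fused)
print("subgroup sizes:", np.bincount(labels))
```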
MAPI: towards the integrated exploitation of bioinformatics Web Services.
Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo
2011-10-27
Bioinformatics is commonly featured as a well assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and heterogeneity complicate the integrated exploitation of such data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Services metadata descriptors including their management and invocation protocols of the services which they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need for re-writing the software client. The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflows composition and asynchronous services calls to multiple types of Web Services including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resseguie, David R
There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, and two applications which illustrate our framework's ability to support general web-based robotic control, and we offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
Developing a Web-based system by integrating VGI and SDI for real estate management and marketing
NASA Astrophysics Data System (ADS)
Salajegheh, J.; Hakimpour, F.; Esmaeily, A.
2014-10-01
The importance of property in many respects, especially its impact on various sectors of the economy and on the country's macroeconomy, is clear. Because of the real, multi-dimensional and heterogeneous nature of housing as a commodity, the absence of an integrated system containing comprehensive property information, the lack of awareness among some actors in this field of such comprehensive information, and the lack of clear and comprehensive rules and regulations for trading and pricing, several problems arise for the people involved in this field. In this research, the implementation of a crowd-sourced Web-based real estate support system is pursued. Creating a Spatial Data Infrastructure (SDI) in this system for collecting, updating and integrating all official data about property is also an objective of this study. The system uses a Web 2.0 broker and technologies such as Web services and service composition. This work aims to provide comprehensive and diverse information about property from different sources. For this purpose, a five-level real estate support system architecture is used. The PostgreSQL DBMS is used to implement the system. GeoServer is used as the map server and reference implementation of OGC (Open Geospatial Consortium) standards, and the Apache server is used to serve web pages and user interfaces. Integrating the introduced methods and technologies provides a proper environment for various users to use the system and share their information. This goal can only be achieved through cooperation between all organizations involved in real estate, by implementing their required infrastructure in the form of interoperable Web services.
Zheng, Ling-Ling; Xu, Wei-Lin; Liu, Shun; Sun, Wen-Ju; Li, Jun-Hao; Wu, Jie; Yang, Jian-Hua; Qu, Liang-Hu
2016-07-08
tRNA-derived small RNA fragments (tRFs) are one class of small non-coding RNAs derived from transfer RNAs (tRNAs). tRFs play important roles in cellular processes and are involved in multiple cancers. High-throughput small RNA (sRNA) sequencing experiments can detect all the cellular expressed sRNAs, including tRFs. However, distinguishing genuine tRFs from RNA fragments generated by random degradation remains a major challenge. In this study, we developed an integrated web-based computing system, tRF2Cancer, to accurately identify tRFs from sRNA deep-sequencing data and evaluate their expression in multiple cancers. The binomial test was introduced to evaluate whether reads from a small RNA-seq data set represent tRFs or degraded fragments. A classification method was then used to annotate the types of tRFs based on their sites of origin in pre-tRNA or mature tRNA. We applied the pipeline to analyze 10 991 data sets from 32 types of cancers and identified thousands of expressed tRFs. A tool called 'tRFinCancer' was developed to facilitate the users to inspect the expression of tRFs across different types of cancers. Another tool called 'tRFBrowser' shows both the sites of origin and the distribution of chemical modification sites in tRFs on their source tRNA. The tRF2Cancer web server is available at http://rna.sysu.edu.cn/tRFfinder/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
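The binomial test mentioned in the abstract can be sketched as follows; the null model (reads falling uniformly along the tRNA) and all counts are assumptions for illustration and may differ from tRF2Cancer's exact formulation.

```python
# Illustrative binomial test: do reads concentrate in a candidate tRF window more
# than uniform degradation along the tRNA would predict? Counts are invented and
# the null model may differ from tRF2Cancer's exact formulation.
from scipy.stats import binomtest

trna_len = 76          # nucleotides in the mature tRNA
window_len = 22        # length of the candidate tRF window
reads_total = 500      # reads mapped anywhere on this tRNA
reads_in_window = 410  # reads whose 5' ends fall inside the window

p_null = window_len / trna_len   # expected fraction under uniform degradation
result = binomtest(reads_in_window, reads_total, p_null, alternative="greater")
print(f"null probability = {p_null:.3f}, p-value = {result.pvalue:.3g}")
```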
The impact of anticyclonic mesoscale structures on microbial food webs in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Christaki, U.; van Wambeke, F.; Lefevre, D.; Lagaria, A.; Prieur, L.; Pujo-Pay, M.; Grattepanche, J.-D.; Colombet, J.; Psarra, S.; Dolan, J. R.; Sime-Ngando, T.; Conan, P.; Weinbauer, M. G.; Moutin, T.
2011-01-01
The abundance and activity of the major members of the heterotrophic microbial community - from viruses to ciliates - were studied along a longitudinal transect across the Mediterranean Sea in the summer of 2008. The Mediterranean Sea is characterized by a west to the east gradient of deepening of DCM (deep chlorophyll maximum) and increasing oligotrophy reflected in gradients of heterotrophic microbial biomass and production. However, within this longitudinal trend, hydrological mesoscale features exist and likely influence microbial dynamics. We show here the importance of mesoscale structures by a description of the structure and function of the microbial food web through an investigation of 3 geographically distant eddies within a longitudinal transect. Three selected sites each located in the center of an anticyclonic eddy were intensively investigated: in the Algero-Provencal Basin (St. A), the Ionian Basin (St. B), and the Levantine Basin (St. C). The 3 geographically distant eddies showed the lowest values of the different heterotrophic compartments of the microbial food web, and except for viruses in site C, all stocks were higher in the neighboring stations outside the eddies. During our study the 3 eddies showed equilibrium between GCP (Gross Community Production) and DCR (Dark Community Respiration); moreover, the west-east (W-E) gradient was evident in terms of heterotrophic biomass but not in terms of production. Means of integrated PPp values were higher at site B (~190 mg C m-2 d-1) and about 15% lower at sites A and C (~160 mg C m-2 d-1). Net community production fluxes were similar at all three stations exhibiting equilibrium between gross community production and dark community respiration.
White Matter Tract Injury is Associated with Deep Gray Matter Iron Deposition in Multiple Sclerosis.
Bergsland, Niels; Tavazzi, Eleonora; Laganà, Maria Marcella; Baglio, Francesca; Cecconi, Pietro; Viotti, Stefano; Zivadinov, Robert; Baselli, Giuseppe; Rovaris, Marco
2017-01-01
With respect to healthy controls (HCs), increased iron concentrations in the deep gray matter (GM) and decreased white matter (WM) integrity are common findings in multiple sclerosis (MS) patients. The association between these features of the disease remains poorly understood. We investigated the relationship between iron deposition in the deep GM and WM injury in associated fiber tracts in MS patients. Sixty-six MS patients (mean age 50.0 years, median Expanded Disability Status Scale 5.25, mean disease duration 19.1 years) and 29 HCs, group-matched for age and sex, were imaged on a 1.5T scanner. Susceptibility-weighted imaging and diffusion tensor imaging (DTI) were used for assessing high-pass filtered phase values in the deep GM and normal-appearing WM (NAWM) integrity in associated fiber tracts, respectively. Correlation analyses investigated the associations between filtered phase values (suggestive of iron content) and WM damage. Areas indicative of increased iron levels were found in the left and right caudates as well as in the left thalamus. MS patients presented with decreased DTI-derived measures of tissue integrity in the associated WM tracts. Greater mean, axial and radial diffusivities were associated with increased iron levels in all three GM areas (r values .393 to .514 with corresponding P values .003 to <.0001). Global NAWM diffusivity measures were not related to mean filtered phase values within the deep GM. Increased iron concentration in the deep GM is associated with decreased tissue integrity of the connected WM in MS patients. Copyright © 2016 by the American Society of Neuroimaging.
Development of grid-like applications for public health using Web 2.0 mashup techniques.
Scotch, Matthew; Yip, Kevin Y; Cheung, Kei-Hoi
2008-01-01
Development of public health informatics applications often requires the integration of multiple data sources. This process can be challenging due to issues such as different file formats, schemas, naming systems, and having to scrape the content of web pages. A potential solution to these system development challenges is the use of Web 2.0 technologies. In general, Web 2.0 technologies are new internet services that encourage and value information sharing and collaboration among individuals. In this case report, we describe the development and use of Web 2.0 technologies including Yahoo! Pipes within a public health application that integrates animal, human, and temperature data to assess the risk of West Nile Virus (WNV) outbreaks. The results of development and testing suggest that while Web 2.0 applications are reasonable environments for rapid prototyping, they are not mature enough for large-scale public health data applications. The application, in fact a "system of systems," often failed due to varied timeouts for application response across web sites and services, internal caching errors, and software added to web sites by administrators to manage the load on their servers. In spite of these concerns, the results of this study demonstrate the potential value of grid computing and Web 2.0 approaches in public health informatics.
Research of marine sensor web based on SOA and EDA
NASA Astrophysics Data System (ADS)
Jiang, Yongguo; Dou, Jinfeng; Guo, Zhongwen; Hu, Keyong
2015-04-01
A great deal of ocean sensor observation data exists, for a wide range of marine disciplines, derived from in situ and remote observing platforms, in real-time, near-real-time and delayed mode. Ocean monitoring is routinely completed using sensors and instruments. Standardization is the key requirement for exchanging information about ocean sensors and sensor data and for comparing and combining information from different sensor networks. One or more sensors are often physically integrated into a single ocean `instrument' device, which brings many challenges related to diverse sensor data formats, parameter units, spatiotemporal resolutions, application domains, data quality and sensor protocols. Facing these challenges requires standardization efforts aimed at facilitating the so-called Sensor Web, which makes it easy to provide public access to sensor data and metadata. In this paper, a Marine Sensor Web, based on SOA and EDA and integrating MBARI's PUCK protocol, IEEE 1451 and OGC SWE 2.0, is illustrated with a five-layer architecture. The Web Service layer and Event Process layer are illustrated in detail with an actual example. The demonstration study showed that a standards-based system can be built to access globally distributed sensors and marine instruments through common Web browsers for monitoring environmental and oceanic conditions. Besides serving marine sensor data on the Web, this Marine Sensor Web framework can also play an important role in information integration for many other domains.
Evolving EO-1 Sensor Web Testbed Capabilities in Pursuit of GEOSS
NASA Technical Reports Server (NTRS)
Mandl, Dan; Ly, Vuong; Frye, Stuart; Younis, Mohamed
2006-01-01
A viewgraph presentation to evolve sensor web capabilities in pursuit of capabilities to support Global Earth Observing System of Systems (GEOSS) is shown. The topics include: 1) Vision to Enable Sensor Webs with "Hot Spots"; 2) Vision Extended for Communication/Control Architecture for Missions to Mars; 3) Key Capabilities Implemented to Enable EO-1 Sensor Webs; 4) One of Three Experiments Conducted by UMBC Undergraduate Class 12-14-05 (1 - 3); 5) Closer Look at our Mini-Rovers and Simulated Mars Landscape at GSFC; 6) Beginning to Implement Experiments with Standards-Vision for Integrated Sensor Web Environment; 7) Goddard Mission Services Evolution Center (GMSEC); 8) GMSEC Component Catalog; 9) Core Flight System (CFS) and Extension for GMSEC for Flight SW; 10) Sensor Modeling Language; 11) Seamless Ground to Space Integrated Message Bus Demonstration (completed December 2005); 12) Other Experiments in Queue; 13) Acknowledgements; and 14) References.
Integration of the NRL Digital Library.
ERIC Educational Resources Information Center
King, James
2001-01-01
The Naval Research Laboratory (NRL) Library has identified six primary areas that need improvement: infrastructure, InfoWeb, TORPEDO Ultra, journal data management, classified data, and linking software. It is rebuilding InfoWeb and TORPEDO Ultra as database-driven Web applications, upgrading the STILAS library catalog, and creating other support…
A web-enabled system for integrated assessment of watershed development
Dymond, R.; Lohani, V.; Regmi, B.; Dietz, R.
2004-01-01
Researchers at Virginia Tech have assembled the primary structure of a web-enabled integrated modeling system with the potential to serve as a planning tool that helps decision makers and stakeholders make appropriate watershed management decisions. This paper describes the integrated system, including data sources, collection, analysis methods, system software and design, and issues of integrating the various component models. The integrated system has three modeling components, namely hydrology, economics, and fish health, and is accompanied by descriptive 'help files.' Since all three components have a related spatial aspect, GIS technology provides the integration platform. When completed, a user will access the integrated system over the web and choose pre-selected land development patterns to create a 'what if' scenario using an easy-to-follow interface. The hydrologic model simulates effects of the scenario on annual runoff volume, flood peaks of various return periods, and ground water recharge. The economics model evaluates tax revenue and fiscal costs resulting from a new land development scenario. The fish health model evaluates the effects of new land uses within zones of influence on the health of fish populations in those areas. Copyright ASCE 2004.
Integrating web 2.0 in clinical research education in a developing country.
Amgad, Mohamed; AlFaar, Ahmad Samir
2014-09-01
The use of Web 2.0 tools in education and health care has received considerable attention over the past years. Over two consecutive years, Children's Cancer Hospital - Egypt 57357 (CCHE 57357), in collaboration with Egyptian universities, student bodies, and NGOs, conducted a summer course that helps undergraduate medical students bridge the gap between clinical practice and clinical research. This time, there was a greater emphasis on reaching out to the students using social media and other Web 2.0 tools, which were heavily used in the course, including Google Drive, Facebook, Twitter, YouTube, Mendeley, Google Hangout, Live Streaming, Research Electronic Data Capture (REDCap), and Dropbox. We wanted to investigate the usefulness of integrating Web 2.0 technologies into formal educational courses and modules. The evaluation survey was filled in by 156 respondents, 134 of whom were course candidates (response rate = 94.4%) and 22 of whom were course coordinators (response rate = 81.5%). The course participants came from 14 different universities throughout Egypt. Students' feedback was positive and supported the integration of Web 2.0 tools in academic courses and modules. Google Drive, Facebook, and Dropbox were found to be most useful.
Extracting Databases from Dark Data with DeepDive
Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng
2016-01-01
DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data — scientific papers, Web classified ads, customer service notes, and so on — were instead in a relational database, it would give analysts a massive and valuable new set of “big data.” DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference. PMID:28316365
ERIC Educational Resources Information Center
Oner, Diler; Adadan, Emine
2016-01-01
This study investigated the effectiveness of an integrated web-based portfolio system, namely the BOUNCE System, which primarily focuses on improving preservice teachers' reflective thinking skills. BOUNCE©, the software component of the system, was designed and developed to support a teaching practice model including a cycle of activities to be…
ERIC Educational Resources Information Center
Huang, Jian
2010-01-01
With the increasing wealth of information on the Web, information integration is ubiquitous as the same real-world entity may appear in a variety of forms extracted from different sources. This dissertation proposes supervised and unsupervised algorithms that are naturally integrated in a scalable framework to solve the entity resolution problem,…
ERIC Educational Resources Information Center
Carney, Robert D.
2010-01-01
This dissertation rationalizes the best use of Web-based instruction (WBI) for teaching music theory to private piano students in the later primary grades. It uses an integrative research methodology for defining, designing, and implementing a curriculum that includes WBI. Research from the fields of music education, educational technology,…
Semantic web data warehousing for caGrid
McCusker, James P; Phillips, Joshua A; Beltrán, Alejandra González; Finkelstein, Anthony; Krauthammer, Michael
2009-01-01
The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG® Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges. PMID:19796399
Analysis and visualization of Arabidopsis thaliana GWAS using web 2.0 technologies.
Huang, Yu S; Horton, Matthew; Vilhjálmsson, Bjarni J; Seren, Umit; Meng, Dazhe; Meyer, Christopher; Ali Amer, Muhammad; Borevitz, Justin O; Bergelson, Joy; Nordborg, Magnus
2011-01-01
With large-scale genomic data becoming the norm in biological studies, the storing, integrating, viewing and searching of such data have become a major challenge. In this article, we describe the development of an Arabidopsis thaliana database that hosts the geographic information and genetic polymorphism data for over 6000 accessions and genome-wide association study (GWAS) results for 107 phenotypes, representing the largest collection of Arabidopsis polymorphism data and GWAS results to date. Taking advantage of a series of the latest web 2.0 technologies, such as Ajax (Asynchronous JavaScript and XML), GWT (Google-Web-Toolkit), the MVC (Model-View-Controller) web framework and an Object Relationship Mapper, we have created a web-based application (web app) for the database that offers an integrated and dynamic view of geographic information, genetic polymorphism and GWAS results. Essential search functionalities are incorporated into the web app to aid reverse genetics research. The database and its web app have proven to be a valuable resource to the Arabidopsis community. The whole framework serves as an example of how biological data, especially GWAS, can be presented and accessed through the web. Finally, we illustrate the potential to gain new insights through the web app with two examples, showcasing how it can be used to facilitate forward and reverse genetics research. Database URL: http://arabidopsis.usc.edu/
Food web flows through a sub-arctic deep-sea benthic community
NASA Astrophysics Data System (ADS)
Gontikaki, E.; van Oevelen, D.; Soetaert, K.; Witte, U.
2011-11-01
The benthic food web of the deep Faroe-Shetland Channel (FSC) was modelled using the linear inverse modelling methodology. The reconstruction of carbon pathways by inverse analysis was based on benthic oxygen uptake rates, biomass data and the transfer of labile carbon through the food web as revealed by a pulse-chase experiment. Carbon deposition was estimated at 2.2 mmol C m⁻² d⁻¹. Approximately 69% of the deposited carbon was respired by the benthic community, with bacteria being responsible for 70% of the total respiration. The major fraction of the labile detritus flux was recycled within the microbial loop, leaving merely 2% of the deposited labile phytodetritus available for metazoan consumption. Bacteria assimilated carbon at high efficiency (0.55), but only 24% of bacterial production was grazed by metazoans; the remainder returned to the dissolved organic matter pool due to viral lysis. Refractory detritus was the basal food resource for nematodes, covering ∼99% of their carbon requirements. In contrast, macrofauna seemed to obtain the major part of their metabolic needs from bacteria (49% of macrofaunal consumption). Labile detritus transfer was well constrained, based on the data from the pulse-chase experiment, but appeared to be of limited importance to the diet of the examined benthic organisms (<1% and 5% of the carbon requirements of nematodes and macrofauna, respectively). Predation on nematodes was generally low, with the exception of sub-surface deposit-feeding polychaetes, which obtained 35% of their energy requirements from nematode ingestion. Carnivorous polychaetes also covered 35% of their carbon demand through predation, although the preferred prey in this case was other macrofaunal animals rather than nematodes. Bacteria and detritus contributed 53% and 12% to the total carbon ingestion of carnivorous polychaetes, suggesting a high degree of omnivory among higher consumers in the FSC benthic food web. Overall, this study provided a unique insight into the functioning of a deep-sea benthic community and demonstrated how conventional data can be exploited further when combined with state-of-the-art modelling approaches.
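The linear inverse modelling (LIM) approach referred to above solves for unknown, non-negative flows that satisfy mass-balance and measurement constraints. The fragment below is a deliberately tiny, generic sketch of that setup using bounded least squares; the flow list, balance equations and all numbers except the 2.2 mmol C m⁻² d⁻¹ deposition quoted in the abstract are illustrative assumptions, not the published FSC model.

```python
# Generic LIM-style sketch (not the FSC model): find non-negative flows x that
# satisfy simple mass-balance equations E x = f as closely as possible.
import numpy as np
from scipy.optimize import lsq_linear

# Unknown flows: deposition->bacteria, deposition->fauna, bacteria->respiration, fauna->respiration
E = np.array([
    [1.0, 1.0,  0.0,  0.0],   # deposited carbon is split between bacteria and fauna
    [1.0, 0.0, -1.0,  0.0],   # bacterial uptake balances bacterial respiration (growth ignored)
    [0.0, 1.0,  0.0, -1.0],   # faunal uptake balances faunal respiration (growth ignored)
])
f = np.array([2.2, 0.0, 0.0])  # 2.2 mmol C m-2 d-1 deposition, as in the abstract

sol = lsq_linear(E, f, bounds=(0.0, np.inf))   # non-negative least-squares solution
print(dict(zip(["dep->bac", "dep->fauna", "bac->resp", "fauna->resp"], sol.x.round(3))))
```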
9 CFR 94.9 - Pork and pork products from regions where classical swine fever exists.
Code of Federal Regulations, 2013 CFR
2013-01-01
... swine fever is maintained on the APHIS Web site at http://www.aphis.usda.gov/import_export/animals... which they must be cooked in hot oil (deep-fried) at a minimum of 104 °C for an additional 150 minutes...
9 CFR 94.12 - Pork and pork products from regions where swine vesicular disease exists.
Code of Federal Regulations, 2014 CFR
2014-01-01
... has declared free of swine vesicular disease is maintained on the APHIS Web site at http://www.aphis... which they must be cooked in hot oil (deep-fried) at a minimum of 104 °C for an additional 150 minutes...
Benke, Arthur C
2018-03-31
The majority of food web studies are based on connectivity, top-down impacts, bottom-up flows, or trophic position (TP), and ecologists have argued for decades about which is best. Rarely have any two been considered simultaneously. The present study uses a procedure that integrates the last three approaches based on taxon-specific secondary production and gut analyses. Ingestion flows are quantified to create a flow web, and the same data are used to quantify TP for all taxa. An individual predator's impacts are also estimated using the ratio of its ingestion (I) of each prey to prey production (P) to create an I/P web. This procedure was applied to 41 invertebrate taxa inhabiting submerged woody habitat in a southeastern U.S. river. A complex flow web starting with five basal food resources had 462 flows >1 mg·m⁻²·yr⁻¹, providing far more information than a connectivity web. Total flows from basal resources to primary consumers/omnivores were dominated by allochthonous amorphous detritus and ranged from 1 to >50,000 mg·m⁻²·yr⁻¹. Most predator-prey flows were much lower (<50 mg·m⁻²·yr⁻¹), but some were >1,000 mg·m⁻²·yr⁻¹. The I/P web showed that 83% of individual predator impacts were weak (<10%), whereas total predator impacts were often strong (e.g., 35% of prey sustained an impact >90%). Quantitative estimates of TP ranged from 2 to 3.7, contrasting sharply with seven integer-based trophic levels based on the longest feeding chain. Traditional omnivores (TP = 2.4-2.9) played an important role by consuming more prey and exerting higher impacts on primary consumers than strict predators (TP ≥ 3). This study illustrates how simultaneous quantification of flow pathways, predator impacts, and TP together provides an integrated characterization of natural food webs. © 2018 by the Ecological Society of America.
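For readers unfamiliar with flow-based trophic position, the usual definition is TP_i = 1 + Σ_j p_ij TP_j, where p_ij is the fraction of taxon i's ingestion obtained from taxon j. The sketch below solves that linear system for a toy diet matrix; the taxa and diet fractions are illustrative assumptions, not the published data.

```python
# Illustrative sketch of flow-based trophic position (TP): TP_i = 1 + sum_j p_ij * TP_j.
import numpy as np

taxa = ["detritus", "collector", "omnivore", "predator"]
# diet[i, j] = fraction of taxon i's ingestion obtained from taxon j (assumed values)
diet = np.array([
    [0.0, 0.0, 0.0, 0.0],   # detritus: basal resource
    [1.0, 0.0, 0.0, 0.0],   # collector feeds entirely on detritus
    [0.6, 0.4, 0.0, 0.0],   # omnivore mixes detritus and collectors
    [0.0, 0.7, 0.3, 0.0],   # predator eats collectors and omnivores
])

# TP = 1 + diet @ TP  =>  (I - diet) TP = 1, with basal resources coming out at TP = 1
tp = np.linalg.solve(np.eye(len(taxa)) - diet, np.ones(len(taxa)))
print(dict(zip(taxa, tp.round(2))))   # detritus 1.0, collector 2.0, omnivore 2.4, predator ~3.1
```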
Subotic-Kerry, Mirjana; King, Catherine; O'Moore, Kathleen; Achilles, Melinda; O'Dea, Bridianne
2018-03-23
Anxiety disorders and depression are prevalent among youth. General practitioners (GPs) are often the first point of professional contact for treating health problems in young people. A Web-based mental health service delivered in partnership with schools may facilitate increased access to psychological care among adolescents. However, for such a model to be implemented successfully, GPs' views need to be measured. This study aimed to examine the needs and attitudes of GPs toward a Web-based mental health service for adolescents, and to identify the factors that may affect the provision of this type of service and likelihood of integration. Findings will inform the content and overall service design. GPs were interviewed individually about the proposed Web-based service. Qualitative analysis of transcripts was performed using thematic coding. A short follow-up questionnaire was delivered to assess background characteristics, level of acceptability, and likelihood of integration of the Web-based mental health service. A total of 13 GPs participated in the interview and 11 completed a follow-up online questionnaire. Findings suggest strong support for the proposed Web-based mental health service. A wide range of factors were found to influence the likelihood of GPs integrating a Web-based service into their clinical practice. Coordinated collaboration with parents, students, school counselors, and other mental health care professionals were considered important by nearly all GPs. Confidence in Web-based care, noncompliance of adolescents and GPs, accessibility, privacy, and confidentiality were identified as potential barriers to adopting the proposed Web-based service. GPs were open to a proposed Web-based service for the monitoring and management of anxiety and depression in adolescents, provided that a collaborative approach to care is used, the feedback regarding the client is clear, and privacy and security provisions are assured. ©Mirjana Subotic-Kerry, Catherine King, Kathleen O'Moore, Melinda Achilles, Bridianne O'Dea. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 23.03.2018.
NASA Astrophysics Data System (ADS)
Maio, R.; Arko, R. A.; Lehnert, K.; Ji, P.
2017-12-01
Unlocking the full, rich network of links between the scientific literature and the real-world entities to which data correspond - such as field expeditions (cruises) on oceanographic research vessels and physical samples collected during those expeditions - remains a challenge for the geoscience community. Doing so would enable data reuse and integration on a broad scale, making it possible to inspect the network and discover, for example, all rock samples reported in the scientific literature found within 10 kilometers of an undersea volcano, and associated geochemical analyses. Such a capability could facilitate new scientific discoveries. The GeoDeepDive project provides negotiated access to 4.2+ million documents from scientific publishers, enabling text and document mining via a public API and cyberinfrastructure. We mined this corpus using entity linking techniques, which are inherently uncertain, and recorded provenance information about each link. This opens the entity linking methodology to scrutiny and enables downstream applications to make informed assessments about the suitability of an entity link for consumption. A major challenge is how to model and disseminate the provenance information. We present results from entity linking between journal articles, research vessels and cruises, and physical samples from the Petrological Database (PetDB), and incorporate Linked Data resources such as cruises in the Rolling Deck to Repository (R2R) catalog where possible. Our work demonstrates the value and potential of the GeoDeepDive cyberinfrastructure in combination with Linked Data infrastructure provided by the EarthCube GeoLink project. We present a research workflow to capture provenance information that leverages the World Wide Web Consortium (W3C) PROV Ontology recommendation.
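As a rough illustration of how an uncertain entity link and its provenance might be expressed with the W3C PROV Ontology, the sketch below uses rdflib to assert a link between an article and a cruise and attribute it to a linking activity. The URIs, the confidence property and the score are hypothetical and do not reproduce the GeoLink/GeoDeepDive schema.

```python
# Hedged sketch: record one article-to-cruise entity link with PROV-O provenance.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/linking/")                     # hypothetical namespace

g = Graph()
article = URIRef("http://example.org/articles/doi-10.1234-demo")  # hypothetical article URI
cruise = URIRef("http://example.org/cruises/AT26-10")             # hypothetical cruise URI
link = EX["link/001"]
run = EX["run/2017-07-01"]

g.add((link, RDF.type, PROV.Entity))
g.add((link, EX.subject, article))                                 # the article mentions...
g.add((link, EX.object, cruise))                                   # ...this cruise
g.add((link, EX.confidence, Literal(0.87, datatype=XSD.double)))   # linker score (assumed)
g.add((link, PROV.wasGeneratedBy, run))
g.add((run, RDF.type, PROV.Activity))
g.add((run, PROV.wasAssociatedWith, EX["tool/entity-linker"]))

print(g.serialize(format="turtle"))
```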
Web 2.0 Strategy in Libraries and Information Services
ERIC Educational Resources Information Center
Byrne, Alex
2008-01-01
Web 2.0 challenges libraries to change from their predominantly centralised service models with integrated library management systems at the hub. Implementation of Web 2.0 technologies and the accompanying attitudinal shifts will demand reconceptualisation of the nature of library and information service around a dynamic, ever changing, networked,…
Webquests for English-Language Learners: Essential Elements for Design
ERIC Educational Resources Information Center
Sox, Amanda; Rubinstein-Avila, Eliane
2009-01-01
The authors of this article advocate for the adaptation and use of WebQuests (web-based interdisciplinary collaborative learning units) to integrate technological competencies and content area knowledge development at the secondary level and to support the linguistic needs of English-language learners (ELLs). After examining eight WebQuests, the…
Web-Based Surveys Facilitate Undergraduate Research and Knowledge
ERIC Educational Resources Information Center
Grimes, Paul, Ed.; Steele, Scott R.
2008-01-01
The author presents Web-based surveying as a valuable tool for achieving quality undergraduate research in upper-level economics courses. Web-based surveys can be employed in efforts to integrate undergraduate research into the curriculum without overburdening students or faculty. The author discusses the value of undergraduate research, notes…
Web Geometry Laboratory: Case Studies in Portugal and Serbia
ERIC Educational Resources Information Center
Santos, Vanda; Quaresma, Pedro; Maric, Milena; Campos, Helena
2018-01-01
The role of information and communication technologies (ICT) in education is well recognised--learning environments where the ICT features included are being proposed for many years now. The Web Geometry Laboratory (WGL) innovates in proposing a blended learning, collaborative and adaptive learning Web-environment for geometry. It integrates a…
The UAH GeoIntegrator: A Web Mapping System for On-site Data Insertion and Viewing
NASA Astrophysics Data System (ADS)
He, M.; Hardin, D.; Sever, T.; Irwin, D.
2005-12-01
There is a growing need in the scientific community to combine data collected in the field with maps, imagery and other layered sources. For example, a biologist who has collected pollination data during a field study may want to see his data presented on a regional map. There are many commercial web mapping tools available, but they are expensive and may require advanced computer knowledge to operate. Researchers from the Information Technology and Systems Center at the University of Alabama in Huntsville are developing a web mapping system that will allow scientists to map their data easily. This system is called the UAH GeoIntegrator. The UAH GeoIntegrator is built on top of three open-source components: the Apache web server, MapServer, and the Chameleon viewer. Chameleon allows developers to customize its map viewer interface by adding widgets. These widgets provide unique functionality focused on the specific needs of the researcher. The UAH GeoIntegrator utilizes a suite of widgets that bring new functionality, focused on specific needs, to a typical web map viewer. Specifically, a common input text file format was defined and widgets were developed to convert users' field collections into web map layers. These layers can then be laid on top of other map layers to produce data products that are versatile, informative and easy to distribute via web services. The UAH GeoIntegrator is being developed as part of the SERVIR project. SERVIR (a Spanish acronym meaning 'to serve') is part of an international effort to preserve the remaining forested regions of Mesoamerica and to help establish sustainable development in the region. The National Aeronautics and Space Administration, along with the World Bank, the United States Agency for International Development and the Central American Commission for Environment and Development, are cooperating in this effort. The UAH GeoIntegrator is part of an advanced decision support system that will provide scientists, educators, and policy makers the capabilities needed to monitor and forecast ecological changes, respond to natural disasters, and better understand both natural and human-induced effects in Mesoamerica. In this paper, the architecture of the system, the data input format, and details of the widget suite will be presented.
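The general idea of converting a researcher's delimited field file into a web-mappable layer can be sketched as below. The column names and the GeoJSON output are assumptions chosen for illustration; they are not the UAH GeoIntegrator's actual input format or widget code.

```python
# Hedged sketch: turn a CSV of point observations into a GeoJSON layer a web map viewer could display.
import csv, json

def csv_to_geojson(path):
    features = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):                # assumes 'lat'/'lon' plus attribute columns
            lat, lon = float(row.pop("lat")), float(row.pop("lon"))
            features.append({
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": row,                    # remaining columns become attributes
            })
    return {"type": "FeatureCollection", "features": features}

# Example (hypothetical file): json.dumps(csv_to_geojson("pollination_survey.csv"), indent=2)
```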
Primary Science Teaching--Is It Integral and Deep Experience for Students?
ERIC Educational Resources Information Center
Timoštšuk, Inge
2016-01-01
Integral and deep pedagogical content knowledge can support future primary teachers' ability to follow ideas of education for sustainability in science class. Initial teacher education provides opportunity to learn what and how to teach but still the practical experiences of teaching can reveal uneven development of student teachers'…
Hocum, Jonah D; Battrell, Logan R; Maynard, Ryan; Adair, Jennifer E; Beard, Brian C; Rawlings, David J; Kiem, Hans-Peter; Miller, Daniel G; Trobridge, Grant D
2015-07-07
Analyzing the integration profile of retroviral vectors is a vital step in determining their potential genotoxic effects and developing safer vectors for therapeutic use. Identifying retroviral vector integration sites is also important for retroviral mutagenesis screens. We developed VISA, a vector integration site analysis server, to analyze next-generation sequencing data for retroviral vector integration sites. Sequence reads that contain a provirus are mapped to the human genome, sequence reads that cannot be localized to a unique location in the genome are filtered out, and then unique retroviral vector integration sites are determined based on the alignment scores of the remaining sequence reads. VISA offers a simple web interface to upload sequence files and results are returned in a concise tabular format to allow rapid analysis of retroviral vector integration sites.
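The post-alignment step described above (keep only uniquely placed junction reads, then collapse them into integration sites) might look roughly like the following. This is not the VISA code; the record format and the mapping-quality cutoff used as a uniqueness proxy are assumptions.

```python
# Hedged sketch of integration-site calling from pre-parsed junction alignments (not VISA itself).
from collections import Counter

def call_sites(alignments, min_mapq=30):
    """alignments: iterable of (read_id, chrom, pos, strand, mapq) for reads spanning
    the vector-genome junction; returns {(chrom, pos, strand): supporting read count}."""
    unique = [a for a in alignments if a[4] >= min_mapq]        # drop ambiguous placements
    return Counter((chrom, pos, strand) for _, chrom, pos, strand, _ in unique)

reads = [
    ("r1", "chr3", 1284501, "+", 60),
    ("r2", "chr3", 1284501, "+", 55),
    ("r3", "chr7",  904112, "-", 12),   # low-quality placement, filtered out
]
print(call_sites(reads))   # Counter({('chr3', 1284501, '+'): 2})
```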
A service-based framework for pharmacogenomics data integration
NASA Astrophysics Data System (ADS)
Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong
2010-08-01
Data are central to scientific research and practice. The advance of experimental methods and information retrieval technologies has led to explosive growth of scientific data and databases. However, due to heterogeneity in data formats, structures and semantics, it is hard to integrate the diverse, rapidly growing data and analyse them comprehensively. As more and more public databases become accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration has become a major trend for managing and synthesising data stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using the mashup technology. The framework separates the integration concerns from three perspectives: data, process and the Web-based user interface. Each layer encapsulates the heterogeneity issues of one aspect. To facilitate the mapping and convergence of data, an ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information about users, tasks and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented and case studies are presented to illustrate the promising capabilities of the proposed approach.
Integrated Functional and Executional Modelling of Software Using Web-Based Databases
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Marietta, Roberta
1998-01-01
NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, the use of automatic information extraction tools, web technology and databases. To appear in the Journal of Database Management.
Artemis: Integrating Scientific Data on the Grid (Preprint)
2004-07-01
Theseus execution engine [Barish and Knoblock 03] to efficiently execute the generated datalog program. The Theseus execution engine has a wide variety of operations to query databases, web sources, and web services. Theseus also contains a wide variety of relational operations, such as selection, union, or projection. Furthermore, Theseus optimizes the execution of an integration plan by querying several data sources in parallel.
9 CFR 94.9 - Pork and pork products from regions where classical swine fever exists.
Code of Federal Regulations, 2014 CFR
2014-01-01
... declared free of classical swine fever is maintained on the APHIS Web site at http://www.aphis.usda.gov... which they must be cooked in hot oil (deep-fried) at a minimum of 104 °C for an additional 150 minutes...
E-Learning for Depth in the Semantic Web
ERIC Educational Resources Information Center
Shafrir, Uri; Etkind, Masha
2006-01-01
In this paper, we describe concept parsing algorithms, a novel semantic analysis methodology at the core of a new pedagogy that focuses learners' attention on deep comprehension of the conceptual content of learned material. Two new e-learning tools are described in some detail: interactive concept discovery learning and meaning equivalence…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-12
...., Charleston, SC 29403. To submit comments please see our Web site at: http://www.sac.usace.army.mil/?action... container traffic and cargo value. In 2009, the Charleston port district was ranked ninth (out of 200 deep...
NASA Technical Reports Server (NTRS)
Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang
2009-01-01
Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.
Oxygen, ecology, and the Cambrian radiation of animals
Sperling, Erik A.; Frieder, Christina A.; Raman, Akkur V.; Girguis, Peter R.; Levin, Lisa A.; Knoll, Andrew H.
2013-01-01
The Proterozoic-Cambrian transition records the appearance of essentially all animal body plans (phyla), yet to date no single hypothesis adequately explains both the timing of the event and the evident increase in diversity and disparity. Ecological triggers focused on escalatory predator–prey “arms races” can explain the evolutionary pattern but not its timing, whereas environmental triggers, particularly ocean/atmosphere oxygenation, do the reverse. Using modern oxygen minimum zones as an analog for Proterozoic oceans, we explore the effect of low oxygen levels on the feeding ecology of polychaetes, the dominant macrofaunal animals in deep-sea sediments. Here we show that low oxygen is clearly linked to low proportions of carnivores in a community and low diversity of carnivorous taxa, whereas higher oxygen levels support more complex food webs. The recognition of a physiological control on carnivory therefore links environmental triggers and ecological drivers, providing an integrated explanation for both the pattern and timing of Cambrian animal radiation. PMID:23898193
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2005-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
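To give a feel for the subband decomposition that the bit-plane coder then works on, the sketch below performs a single-level 2D Haar transform. This is only a toy illustration: the CCSDS recommendation specifies a different (9/7) wavelet and multiple decomposition levels, so this code is not the standardized algorithm.

```python
# Toy single-level 2D Haar decomposition into LL/LH/HL/HH subbands (illustration only).
import numpy as np

def haar2d(img):
    img = img.astype(float)
    a = (img[:, 0::2] + img[:, 1::2]) / 2     # horizontal averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2     # horizontal details
    ll = (a[0::2] + a[1::2]) / 2              # vertical pass on averages/details
    lh = (a[0::2] - a[1::2]) / 2
    hl = (d[0::2] + d[1::2]) / 2
    hh = (d[0::2] - d[1::2]) / 2
    return ll, lh, hl, hh                      # LL concentrates most energy, so it is coded first

img = np.random.default_rng(0).integers(0, 256, size=(8, 8))
ll, lh, hl, hh = haar2d(img)
print(ll.shape, float(np.abs(ll).sum()) > float(np.abs(hh).sum()))   # (4, 4) True
```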
Dive and discover: Expeditions to the seafloor
NASA Astrophysics Data System (ADS)
Lawrence, Lisa Ayers
The Dive and Discover Web site is a virtual treasure chest of deep sea science and classroom resources. The goals of Dive and Discover are to engage students, teachers, and the general public in the excitement of ocean discovery through an interactive educational Web site. You can follow scientists on oceanographic research cruises by reading their daily cruise logs, viewing photos and video clips of the discoveries, and even e-mailing questions to the scientists and crew. WHOI has also included an “Educator's Companion” section with teaching strategies, activities, and assessments, making Dive and Discover an excellent resource for the classroom.
A novel web-enabled healthcare solution on health vault system.
Liao, Lingxia; Chen, Min; Rodrigues, Joel J P C; Lai, Xiaorong; Vuong, Son
2012-06-01
Complicated Electronic Medical Record (EMR) systems have created problems of implementation and interoperability for Web-enabled healthcare solutions, which are normally provided by independent healthcare givers with limited IT knowledge and interest. An EMR system with a well-designed, user-friendly interface, such as the Microsoft HealthVault system, used as the back-end platform of a Web-enabled healthcare application, is one approach to these problems. This paper analyzes patient-oriented Web-enabled healthcare service applications as the new trend in shifting healthcare delivery from hospital/clinic-centric to patient-centric, reviews current e-healthcare applications, and surveys the main back-end EMR systems. Then, we present a novel web-enabled healthcare solution based on the Microsoft HealthVault EMR system to meet customers' needs, such as low total cost, easy development and maintenance, and good interoperability. A sample system is given to show how the solution can be fulfilled, evaluated, and validated. We expect that this paper will provide a deep understanding of the available EMR systems, leading to insights for new solutions and approaches driven to next generation EMR systems.
Comparison: Mediation Solutions of WSMOLX and WebML/WebRatio
NASA Astrophysics Data System (ADS)
Zaremba, Maciej; Zaharia, Raluca; Turati, Andrea; Brambilla, Marco; Vitvar, Tomas; Ceri, Stefano
In this chapter we compare the WSMO/WSML/WSMX and WebML/WebRatio approaches to the SWS-Challenge workshop mediation scenario in terms of the underlying technologies utilized and the solutions delivered. In the mediation scenario one partner uses RosettaNet to define its B2B protocol while the other one operates on a proprietary solution. Both teams showed how these partners could be semantically integrated.
Web scraping technologies in an API world.
Glez-Peña, Daniel; Lourenço, Anália; López-Fernández, Hugo; Reboiro-Jato, Miguel; Fdez-Riverola, Florentino
2014-09-01
Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in a position to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is set on showing how straightforward it is today to set up a data scraping pipeline, with minimal programming effort, and answer a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well known in clinical microbiology and similar domains, do not offer programmatic interfaces yet. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to cope with gene set enrichment analysis. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
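A minimal scraping pipeline of the kind the review has in mind can be set up in a few lines with requests and BeautifulSoup. The URL, table id and column layout below are hypothetical, and any real scraper should also respect the target site's terms of use and robots.txt.

```python
# Hedged sketch of a tiny scraping robot against a hypothetical results page (no real endpoint).
import requests
from bs4 import BeautifulSoup

def scrape_results_table(url):
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for row in soup.select("table#results tr")[1:]:          # skip the header row
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) >= 3:
            records.append({"isolate": cells[0], "antibiotic": cells[1], "phenotype": cells[2]})
    return records

# Example (hypothetical URL): scrape_results_table("https://example.org/amr/results?year=2014")
```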
The Virtual Learning Commons (VLC): Enabling Co-Innovation Across Disciplines
NASA Astrophysics Data System (ADS)
Pennington, D. D.; Gandara, A.; Del Rio, N.
2014-12-01
A key challenge for scientists addressing grand-challenge problems is identifying, understanding, and integrating potentially relevant methods, models and tools that are rapidly evolving in the informatics community. Such tools are essential for effectively integrating data and models in complex research projects, yet it is often difficult to know what tools are available and it is not easy to understand or evaluate how they might be used in a given research context. The goal of the National Science Foundation-funded Virtual Learning Commons (VLC) is to improve awareness and understanding of emerging methodologies and technologies, facilitate individual and group evaluation of these, and trace the impact of innovations within and across teams, disciplines, and communities. The VLC is a Web-based social bookmarking site designed specifically to support knowledge exchange in research communities. It is founded on well-developed models of technology adoption, diffusion of innovation, and experiential learning. The VLC makes use of Web 2.0 (Social Web) and Web 3.0 (Semantic Web) approaches. Semantic Web approaches enable discovery of potentially relevant methods, models, and tools, while Social Web approaches enable collaborative learning about their function. The VLC is under development and the first release is expected Fall 2014.
WebGLORE: a web service for Grid LOgistic REgression.
Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian
2013-12-15
WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from sensitive, distributed datasets. It transfers only aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, Java Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of the GNU General Public License as published by the Free Software Foundation.
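The "aggregated local statistics" idea can be illustrated with a GLORE-style distributed Newton-Raphson step: each site reports only the gradient and Hessian of its local logistic log-likelihood, and the server sums them to update the global coefficients. The simulation below is a sketch of that principle, not the WebGLORE service or its protocol.

```python
# Hedged sketch of GLORE-style synthesis: only aggregate statistics leave each simulated site.
import numpy as np

def local_stats(X, y, beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                     # local gradient of the log-likelihood
    hess = -(X.T * (p * (1 - p))) @ X        # local Hessian of the log-likelihood
    return grad, hess

rng = np.random.default_rng(1)
true_beta = np.array([0.5, -1.0, 2.0])
sites = []
for _ in range(3):                           # three sites holding private data
    X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
    sites.append((X, y))

beta = np.zeros(3)
for _ in range(25):                          # server-side Newton-Raphson iterations
    stats = [local_stats(X, y, beta) for X, y in sites]
    grad = sum(g for g, _ in stats)
    hess = sum(h for _, h in stats)
    beta = beta - np.linalg.solve(hess, grad)
print(beta.round(2))                         # close to the simulated coefficients
```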
A Methodology for the Development of RESTful Semantic Web Services for Gene Expression Analysis
Guardia, Gabriela D. A.; Pires, Luís Ferreira; Vêncio, Ricardo Z. N.; Malmegrim, Kelen C. R.; de Farias, Cléver R. G.
2015-01-01
Gene expression studies are generally performed through multi-step analysis processes, which require the integrated use of a number of analysis tools. In order to facilitate tool/data integration, an increasing number of analysis tools have been developed as or adapted to semantic web services. In recent years, some approaches have been defined for the development and semantic annotation of web services created from legacy software tools, but these approaches still present many limitations. In addition, to the best of our knowledge, no suitable approach has been defined for the functional genomics domain. Therefore, this paper aims at defining an integrated methodology for the implementation of RESTful semantic web services created from gene expression analysis tools and the semantic annotation of such services. We have applied our methodology to the development of a number of services to support the analysis of different types of gene expression data, including microarray and RNASeq. All developed services are publicly available in the Gene Expression Analysis Services (GEAS) Repository at http://dcm.ffclrp.usp.br/lssb/geas. Additionally, we have used a number of the developed services to create different integrated analysis scenarios to reproduce parts of two gene expression studies documented in the literature. The first study involves the analysis of one-color microarray data obtained from multiple sclerosis patients and healthy donors. The second study comprises the analysis of RNA-Seq data obtained from melanoma cells to investigate the role of the remodeller BRG1 in the proliferation and morphology of these cells. Our methodology provides concrete guidelines and technical details in order to facilitate the systematic development of semantic web services. Moreover, it encourages the development and reuse of these services for the creation of semantically integrated solutions for gene expression analysis. PMID:26207740
32 CFR Appendix A to Part 806b - Definitions
Code of Federal Regulations, 2010 CFR
2010-07-01
... exemption for protecting the identity of confidential sources. Cookie: Data created by a Web server that is... (persistent cookie). It provides a way for the Web site to identify users and keep track of their preferences... or is sent to a Web site different from the one you are currently viewing. Defense Data Integrity...
32 CFR Appendix A to Part 806b - Definitions
Code of Federal Regulations, 2011 CFR
2011-07-01
... exemption for protecting the identity of confidential sources. Cookie: Data created by a Web server that is... (persistent cookie). It provides a way for the Web site to identify users and keep track of their preferences... or is sent to a Web site different from the one you are currently viewing. Defense Data Integrity...
ERIC Educational Resources Information Center
Yang, Shu Ching
2001-01-01
Describes the integration of Web resources as instructional and learning tools in an EFL (English as a Foreign Language) class in Taiwan. Highlights include challenges and advantages of using the Web; learners' perceptions; intentional and incidental learning; disorientation and cognitive overload; and information seeking as problem-solving. A…
Vocabulary Learning on Learner-Created Content by Using Web 2.0 Tools
ERIC Educational Resources Information Center
Eren, Omer
2015-01-01
The present research examined the use of Web 2.0 tools to improve students' vocabulary knowledge at the School of Foreign Languages, Gaziantep University. Current studies in the literature mostly deal with descriptions of students' attitudes towards the reasons for the use of web-based platforms. However, integrating the usual classroom environment with…
Integrating a Project Management Approach to E-Business Application Course
ERIC Educational Resources Information Center
Chen, Kuan C.; Chuang, Keh-Wen
2008-01-01
Teaching students project management requires a hands-on approach. Incorporating project management concepts and processes into a student team Web development project adds a dimension that exposes students to the realities of effective Web development. This paper will describe the project management approach used in a Web development course in…
Developing Higher-Order Thinking Skills through WebQuests
ERIC Educational Resources Information Center
Polly, Drew; Ausband, Leigh
2009-01-01
In this study, 32 teachers participated in a year-long professional development project related to technology integration in which they designed and implemented a WebQuest. This paper describes the extent to which higher-order thinking skills (HOTS) and levels of technology implementation (LoTI) occur in the WebQuests that participants designed.…
Web-Based Learning Environment: A Theory-Based Design Process for Development and Evaluation
ERIC Educational Resources Information Center
Nam, Chang S.; Smith-Jackson, Tonya L.
2007-01-01
Web-based courses and programs have increasingly been developed by many academic institutions, organizations, and companies worldwide due to their benefits for both learners and educators. However, many of the developmental approaches lack two important considerations needed for implementing Web-based learning applications: (1) integration of the…
Not Your Father's Web Site: Corporate Sites Emerge as New Content Innovators.
ERIC Educational Resources Information Center
O'Leary, Mick
2002-01-01
New economy corporate Web sites have pioneered exciting techniques: rich media, interactivity, personalization, community, and integration of much third-party content. Discusses business-to-business (B2B) Web commerce, with examples of several B2B corporate sites; portal and content elements of these sites; and corporate content outlooks. (AEF)
Opinion Integration and Summarization
ERIC Educational Resources Information Center
Lu, Yue
2011-01-01
As Web 2.0 applications become increasingly popular, more and more people express their opinions on the Web in various ways in real time. Such wide coverage of topics and abundance of users make the Web an extremely valuable source for mining people's opinions about all kinds of topics. However, since the opinions are usually expressed as…
MendelWeb: An Electronic Science/Math/History Resource for the WWW.
ERIC Educational Resources Information Center
Blumberg, Roger B.
This paper describes a hypermedia resource called MendelWeb that integrates elementary biology, discrete mathematics, and the history of science. MendelWeb is constructed from Gregor Mendel's 1865 paper, "Experiments in Plant Hybridization". An English translation of Mendel's paper, which is considered to mark the birth of classical and…
ERIC Educational Resources Information Center
Raulston, Cassie; Moellinger, Donna
2007-01-01
With the evolution of technology, students can now take online classes that may not be offered in their home schools. While online courses are commonly found in many high schools, WebQuests are used more commonly in elementary schools. Through the exploration of WebQuests, students are able to integrate the Internet into classroom activities. The…
Experience on Mashup Development with End User Programming Environment
ERIC Educational Resources Information Center
Yue, Kwok-Bun
2010-01-01
Mashups, Web applications integrating data and functionality from other Web sources to provide a new service, have quickly become ubiquitous. Because of their role as a focal point in three important trends (Web 2.0, situational software applications, and end user development), mashups are a crucial emerging technology for information systems…
ERIC Educational Resources Information Center
Gerjets, Peter; Kammerer, Yvonne; Werner, Benita
2011-01-01
Web searching for complex information requires appropriately evaluating diverse sources of information. Information science studies have identified different criteria applied by searchers to evaluate Web information. However, the explicit evaluation instructions used in these studies might have resulted in a distortion of spontaneous evaluation…
Eco-Webbing: A Teaching Strategy to Facilitate Critical Consciousness and Agency
ERIC Educational Resources Information Center
Williams, Joseph M.; McMahon, H. George; Goodman, Rachael D.
2015-01-01
Eco-webbing is a teaching strategy that can be used to help counselor educators integrate a social justice focus into their courses. Preliminary data indicated increased critical consciousness and social justice agency as a result of using eco-webbing with counseling students (N = 17). The authors provide implications for counselor educators and…
A Comparison of Web-Based and Face-to-Face Functional Measurement Experiments
ERIC Educational Resources Information Center
Van Acker, Frederik; Theuns, Peter
2010-01-01
Information Integration Theory (IIT) is concerned with how people combine information into an overall judgment. A method is hereby presented to perform Functional Measurement (FM) experiments, the methodological counterpart of IIT, on the Web. In a comparison of Web-based FM experiments, face-to-face experiments, and computer-based experiments in…
Experience of Integrating Web 2.0 Technologies
ERIC Educational Resources Information Center
Zdravkova, Katerina; Ivanovic, Mirjana; Putnik, Zoran
2012-01-01
Web users in the 21st century are no longer only passive consumers. On the contrary, they are active contributors willing to obtain, share and evolve information. In this paper we report our experience regarding the implementation of the Web 2.0 concept in several Computer Ethics related courses jointly conducted at two universities. These courses have…
Criteria for the Assessment of Foreign Language Instructional Software and Web Sites.
ERIC Educational Resources Information Center
Rifkin, Benjamin
2003-01-01
Presents standards for assessing language-learning software and Web sites in three different contexts: (1) teachers considering whether and how to integrate computer-mediated materials into their instruction; (2) specialists writing reviews of software or Web sites for professional journals; and (3) college administrators evaluating the quality of…
NMRPro: an integrated web component for interactive processing and visualization of NMR spectra.
Mohamed, Ahmed; Nguyen, Canh Hao; Mamitsuka, Hiroshi
2016-07-01
The popularity of using NMR spectroscopy in metabolomics and natural products has driven the development of an array of NMR spectral analysis tools and databases. In particular, web applications have recently become widely used because they are platform-independent and easy to extend through reusable web components. Currently available web applications provide the analysis of NMR spectra. However, they still lack the necessary processing and interactive visualization functionalities. To overcome these limitations, we present NMRPro, a web component that can be easily incorporated into current web applications, enabling easy-to-use online interactive processing and visualization. NMRPro integrates server-side processing with client-side interactive visualization through three parts: a python package to efficiently process large NMR datasets on the server-side, a Django App managing server-client interaction, and SpecdrawJS for client-side interactive visualization. Demo and installation instructions are available at http://mamitsukalab.org/tools/nmrpro/ mohamed@kuicr.kyoto-u.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
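The abstract above describes a server-side Python package coupled to a Django app that feeds a JavaScript visualization layer. The following is a minimal, hedged sketch of that general server-to-client pattern, not NMRPro's actual API: view name, payload fields, and the processing step (exponential line broadening plus FFT) are all illustrative assumptions.

```python
# Hedged sketch of the server-side pattern the abstract describes (NOT NMRPro's
# actual API): a Django view receives raw FID points, applies a simple
# processing step, and returns the processed spectrum as JSON for a JavaScript
# component (such as SpecdrawJS) to draw. Names and fields are hypothetical.
import json
import numpy as np
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def process_spectrum(request):
    payload = json.loads(request.body)
    fid = np.asarray(payload["fid"], dtype=complex)        # time-domain signal
    lb = float(payload.get("line_broadening", 1.0))        # Hz, exponential window
    sw = float(payload.get("sweep_width", 1.0))            # Hz
    t = np.arange(fid.size) / sw
    spectrum = np.fft.fftshift(np.fft.fft(fid * np.exp(-np.pi * lb * t)))
    return JsonResponse({"real": spectrum.real.tolist(),
                         "imag": spectrum.imag.tolist()})
```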
Web-Based Computational Chemistry Education with CHARMMing I: Lessons and Tutorial
Miller, Benjamin T.; Singh, Rishi P.; Schalk, Vinushka; Pevzner, Yuri; Sun, Jingjun; Miller, Carrie S.; Boresch, Stefan; Ichiye, Toshiko; Brooks, Bernard R.; Woodcock, H. Lee
2014-01-01
This article describes the development, implementation, and use of web-based “lessons” to introduce students and other newcomers to computer simulations of biological macromolecules. These lessons, i.e., interactive step-by-step instructions for performing common molecular simulation tasks, are integrated into the collaboratively developed CHARMM INterface and Graphics (CHARMMing) web user interface (http://www.charmming.org). Several lessons have already been developed with new ones easily added via a provided Python script. In addition to CHARMMing's new lessons functionality, web-based graphical capabilities have been overhauled and are fully compatible with modern mobile web browsers (e.g., phones and tablets), allowing easy integration of these advanced simulation techniques into coursework. Finally, one of the primary objections to web-based systems like CHARMMing has been that “point and click” simulation set-up does little to teach the user about the underlying physics, biology, and computational methods being applied. In response to this criticism, we have developed a freely available tutorial to bridge the gap between graphical simulation setup and the technical knowledge necessary to perform simulations without user interface assistance. PMID:25057988
A Ubiquitous Sensor Network Platform for Integrating Smart Devices into the Semantic Sensor Web
de Vera, David Díaz Pardo; Izquierdo, Álvaro Sigüenza; Vercher, Jesús Bernat; Gómez, Luis Alfonso Hernández
2014-01-01
Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678
Advances in deep-UV processing using cluster tools
NASA Astrophysics Data System (ADS)
Escher, Gary C.; Tepolt, Gary; Mohondro, Robert D.
1993-09-01
Deep-UV laser lithography has shown the capability of supporting the manufacture of multiple generations of integrated circuits (ICs) due to its wide process latitude and depth of focus (DOF) for 0.2 micrometers to 0.5 micrometers feature sizes. This capability has been attained through improvements in deep-UV wide field lens technology, excimer lasers, steppers and chemically amplified, positive deep-UV resists. Chemically amplified deep-UV resists are required for 248 nm lithography due to the poor absorption and sensitivity of conventional novolac resists. The acid catalyzation processes of the new resists require control of the thermal history and environmental conditions of the lithographic process. Work is currently underway at several resist vendors to reduce the need for these controls, but practical manufacturing solutions exist today. One of these solutions is the integration of steppers and resist tracks into a `cluster tool' or `Lithocell' to ensure a consistent thermal profile for the resist process and reduce the time the resist is exposed to atmospheric contamination. The work here reports processing and system integration results with a Machine Technology, Inc (MTI) post-exposure bake (PEB) track interfaced with an advanced GCA XLS 7800 deep-UV stepper [31 mm diameter, variable NA (0.35 - 0.53) and variable sigma (0.3 - 0.74)].
Vives, Ingrid; Grimalt, Joan O; Ventura, Marc; Catalan, Jordi
2005-06-01
We investigated the contents of polycyclic aromatic hydrocarbons (PAHs) in the food web organisms included in the diet of brown trout from a remote mountain lake. The preferential habitat and trophic level of the component species have been assessed from the signature of stable isotopes (delta13C and delta15N). Subsequently, the patterns of accumulation and transformation of these hydrocarbons in the food chain have been elucidated. Most of the food web organisms exhibit PAH distributions largely dominated by phenanthrene, which agrees with its predominance in atmospheric deposition, water, and suspended particles. Total PAH levels are higher in the organisms from the littoral habitat than from the deep sediments or the pelagic water column. However, organisms from deep sediments exhibit higher proportions of higher molecular weight PAH than those in other lake areas. Distinct organisms exhibit specific features in their relative PAH composition that point to different capacities for uptake and metabolic degradation. Brown trout show an elevated capacity for metabolic degradation because they have lower PAH concentrations than food and they are enriched strongly in lower molecular weight compounds. The PAH levels in trout highly depend on organisms living in the littoral areas. Fish exposure to PAH, therefore, may vary from lake to lake according to the relative contribution of littoral organisms to their diet.
Nonlinear material behaviour of spider silk yields robust webs.
Cranford, Steven W; Tarakanova, Anna; Pugno, Nicola M; Buehler, Markus J
2012-02-01
Natural materials are renowned for exquisite designs that optimize function, as illustrated by the elasticity of blood vessels, the toughness of bone and the protection offered by nacre. Particularly intriguing are spider silks, with studies having explored properties ranging from their protein sequence to the geometry of a web. This material system, highly adapted to meet a spider's many needs, has superior mechanical properties. In spite of much research into the molecular design underpinning the outstanding performance of silk fibres, and into the mechanical characteristics of web-like structures, it remains unknown how the mechanical characteristics of spider silk contribute to the integrity and performance of a spider web. Here we report web deformation experiments and simulations that identify the nonlinear response of silk threads to stress--involving softening at a yield point and substantial stiffening at large strain until failure--as being crucial to localize load-induced deformation and resulting in mechanically robust spider webs. Control simulations confirmed that a nonlinear stress response results in superior resistance to structural defects in the web compared to linear elastic or elastic-plastic (softening) material behaviour. We also show that under distributed loads, such as those exerted by wind, the stiff behaviour of silk under small deformation, before the yield point, is essential in maintaining the web's structural integrity. The superior performance of silk in webs is therefore not due merely to its exceptional ultimate strength and strain, but arises from the nonlinear response of silk threads to strain and their geometrical arrangement in a web.
Blodgett, David L.; Lucido, Jessica M.; Kreft, James M.
2016-01-01
Critical water-resources issues ranging from flood response to water scarcity make access to integrated water information, services, tools, and models essential. Since 1995 when the first water data web pages went online, the U.S. Geological Survey has been at the forefront of water data distribution and integration. Today, real-time and historical streamflow observations are available via web pages and a variety of web service interfaces. The Survey has built partnerships with Federal and State agencies to integrate hydrologic data providing continuous observations of surface and groundwater, temporally discrete water quality data, groundwater well logs, aquatic biology data, water availability and use information, and tools to help characterize the landscape for modeling. In this paper, we summarize the status and design patterns implemented for selected data systems. We describe how these systems contribute to a U.S. Federal Open Water Data Initiative and present some gaps and lessons learned that apply to global hydroinformatics data infrastructure.
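The entry above notes that real-time and historical streamflow observations are exposed through web service interfaces. As a hedged illustration of consuming such a service, the sketch below queries the USGS Instantaneous Values service for recent discharge at one gauge; the endpoint URL, parameter code 00060, and the JSON layout follow publicly documented NWIS conventions as I understand them and should be verified against the current documentation.

```python
# Hedged example: query the USGS NWIS Instantaneous Values service for recent
# streamflow (discharge) at one gauge. The endpoint, parameter code 00060
# (discharge, cubic feet per second) and response layout reflect publicly
# documented conventions, but verify them against the current service docs.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "format": "json",
    "sites": "01646500",     # example gauge: Potomac River near Washington, DC
    "parameterCd": "00060",  # discharge
    "period": "P1D",         # last 24 hours
})
with urllib.request.urlopen("https://waterservices.usgs.gov/nwis/iv/?" + params) as resp:
    data = json.load(resp)

series = data["value"]["timeSeries"][0]
for point in series["values"][0]["value"][:5]:
    print(point["dateTime"], point["value"])
```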
Schofield, E C; Carver, T; Achuthan, P; Freire-Pritchett, P; Spivakov, M; Todd, J A; Burren, O S
2016-08-15
Promoter capture Hi-C (PCHi-C) allows the genome-wide interrogation of physical interactions between distal DNA regulatory elements and gene promoters in multiple tissue contexts. Visual integration of the resultant chromosome interaction maps with other sources of genomic annotations can provide insight into underlying regulatory mechanisms. We have developed Capture HiC Plotter (CHiCP), a web-based tool that allows interactive exploration of PCHi-C interaction maps and integration with both public and user-defined genomic datasets. CHiCP is freely accessible from www.chicp.org and supports most major HTML5 compliant web browsers. Full source code and installation instructions are available from http://github.com/D-I-L/django-chicp ob219@cam.ac.uk. © The Author 2016. Published by Oxford University Press. All rights reserved.
Nash, Chelsea M; Vickerman, Katrina A; Kellogg, Elizabeth S; Zbikowski, Susan M
2015-02-04
Phone-based tobacco cessation program effectiveness has been established and randomized controlled trials have provided some support for Web-based services. Relatively little is known about who selects different treatment modalities and how they engage with treatments in a real-world setting. This paper describes the characteristics, Web utilization patterns, and return rates of tobacco users who self-selected into a Web-based (Web-Only) versus integrated phone/Web (Phone/Web) cessation program. We examined the demographics, baseline tobacco use, Web utilization patterns, and return rates of 141,429 adult tobacco users who self-selected into a Web-Only or integrated Phone/Web cessation program through 1 of 10 state quitlines from August 2012 through July 2013. For each state, registrants were only included from the timeframe in which both programs were offered to all enrollees. Utilization data were limited to site interactions occurring within 6 months after registration. Most participants selected the Phone/Web program (113,019/141,429, 79.91%). After enrollment in Web services, Web-Only were more likely to log in compared to Phone/Web (21,832/28,410, 76.85% vs 23,920/56,892, 42.04%; P<.001), but less likely to return after their initial log-in (8766/21,832, 40.15% vs 13,966/23,920, 58.39%; P<.001). In bivariate and multivariable analyses, those who chose Web-Only were younger, healthier, more highly educated, more likely to be uninsured or commercially insured, more likely to be white non-Hispanic and less likely to be black non-Hispanic, less likely to be highly nicotine-addicted, and more likely to have started their program enrollment online (all P<.001). Among both program populations, participants were more likely to return to Web services if they were women, older, more highly educated, or were sent nicotine replacement therapy (NRT) through their quitline (all P<.001). Phone/Web were also more likely to return if they had completed a coaching call, identified as white non-Hispanic or "other" race, or were commercially insured (all P<.001). Web-Only were less likely to return if they started their enrollment online versus via phone. The interactive Tobacco Tracker, Cost Savings Calculator, and Quitting Plan were the most widely used features overall. Web-Only were more likely than Phone/Web to use most key features (all P<.001), most notably the 5 Quitting Plan behaviors. Among quitlines that offered NRT to both Phone/Web and Web-Only, Web-Only were less likely to have received quitline NRT. This paper adds to our understanding of who selects different cessation treatment modalities and how they engage with the program in a real-world setting. Web-Only were younger, healthier smokers of higher socioeconomic status who interacted more intensely with services in a single session, but were less likely to re-engage or access NRT benefits. Further research should examine the efficacy of different engagement techniques and services with different subpopulations of tobacco users.
INDIGO – INtegrated Data Warehouse of MIcrobial GenOmes with Examples from the Red Sea Extremophiles
Alam, Intikhab; Antunes, André; Kamau, Allan Anthony; Ba alawi, Wail; Kalkatawi, Manal; Stingl, Ulrich; Bajic, Vladimir B.
2013-01-01
Background The next generation sequencing technologies substantially increased the throughput of microbial genome sequencing. To functionally annotate newly sequenced microbial genomes, a variety of experimental and computational methods are used. Integration of information from different sources is a powerful approach to enhance such annotation. Functional analysis of microbial genomes, necessary for downstream experiments, crucially depends on this annotation but it is hampered by the current lack of suitable information integration and exploration systems for microbial genomes. Results We developed a data warehouse system (INDIGO) that enables the integration of annotations for exploration and analysis of newly sequenced microbial genomes. INDIGO offers an opportunity to construct complex queries and combine annotations from multiple sources starting from genomic sequence to protein domain, gene ontology and pathway levels. This data warehouse is aimed at being populated with information from genomes of pure cultures and uncultured single cells of Red Sea bacteria and Archaea. Currently, INDIGO contains information from Salinisphaera shabanensis, Haloplasma contractile, and Halorhabdus tiamatea - extremophiles isolated from deep-sea anoxic brine lakes of the Red Sea. We provide examples of utilizing the system to gain new insights into specific aspects on the unique lifestyle and adaptations of these organisms to extreme environments. Conclusions We developed a data warehouse system, INDIGO, which enables comprehensive integration of information from various resources to be used for annotation, exploration and analysis of microbial genomes. It will be regularly updated and extended with new genomes. It is aimed to serve as a resource dedicated to the Red Sea microbes. In addition, through INDIGO, we provide our Automatic Annotation of Microbial Genomes (AAMG) pipeline. The INDIGO web server is freely available at http://www.cbrc.kaust.edu.sa/indigo. PMID:24324765
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
2004-03-22
ClusterControl is a web interface to simplify distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command line oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications, accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
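ClusterControl itself is PHP-based, but the underlying pattern it automates (wrapping a command-line tool in a queue submission so a web front end can launch and track it) can be sketched in a few lines. The sketch below is a hedged illustration of that pattern only, not ClusterControl's code; the PBS directives and the example BLAST command line are generic assumptions.

```python
# Hedged sketch of the pattern ClusterControl automates (not its actual code):
# wrap a command-line bioinformatics tool in an OpenPBS job script, submit it
# with qsub, and keep the job id so a web front end can poll its status.
import subprocess
import tempfile

def submit_pbs_job(command, job_name="webjob", walltime="01:00:00", ncpus=1):
    script = (
        "#!/bin/bash\n"
        f"#PBS -N {job_name}\n"
        f"#PBS -l walltime={walltime}\n"
        f"#PBS -l nodes=1:ppn={ncpus}\n"
        "cd $PBS_O_WORKDIR\n"
        f"{command}\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as fh:
        fh.write(script)
        path = fh.name
    # qsub prints the new job identifier on stdout
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Example call from a web request handler (command line is illustrative):
# job_id = submit_pbs_job("blastall -p blastp -d nr -i query.fa -o query.out")
```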
Pan, Xiaoyong; Shen, Hong-Bin
2017-02-28
RNAs play key roles in cells through the interactions with proteins known as the RNA-binding proteins (RBP) and their binding motifs enable crucial understanding of the post-transcriptional regulation of RNAs. How the RBPs correctly recognize the target RNAs and why they bind specific positions is still far from clear. Machine learning-based algorithms are widely acknowledged to be capable of speeding up this process. Although many automatic tools have been developed to predict the RNA-protein binding sites from the rapidly growing multi-resource data, e.g. sequence, structure, their domain specific features and formats have posed significant computational challenges. One of the current difficulties is that the cross-source shared common knowledge is at a higher abstraction level beyond the observed data, resulting in a low efficiency of direct integration of observed data across domains. The other difficulty is how to interpret the prediction results. Existing approaches tend to terminate after outputting the potential discrete binding sites on the sequences, but how to assemble them into meaningful binding motifs is a topic worthy of further investigation. In view of these challenges, we propose a deep learning-based framework (iDeep) by using a novel hybrid convolutional neural network and deep belief network to predict the RBP interaction sites and motifs on RNAs. This new protocol is featured by transforming the original observed data into a high-level abstraction feature space using multiple layers of learning blocks, where the shared representations across different domains are integrated. To validate our iDeep method, we performed experiments on 31 large-scale CLIP-seq datasets, and our results show that by integrating multiple sources of data, the average AUC can be improved by 8% compared to the best single-source-based predictor; and through cross-domain knowledge integration at an abstraction level, it outperforms the state-of-the-art predictors by 6%. Besides the overall enhanced prediction performance, the convolutional neural network module embedded in iDeep is also able to automatically capture the interpretable binding motifs for RBPs. Large-scale experiments demonstrate that these mined binding motifs agree well with the experimentally verified results, suggesting iDeep is a promising approach in real-world applications. The iDeep framework not only achieves more promising performance than the state-of-the-art predictors, but also easily captures interpretable binding motifs. iDeep is available at http://www.csbio.sjtu.edu.cn/bioinf/iDeep.
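The abstract above describes a multi-branch network that combines a convolutional module over RNA sequence with learned representations of other data sources. The sketch below is a hedged illustration of that architecture class in Keras, not the published iDeep model: layer sizes, the 101-nt window, and the 32-dimensional auxiliary feature vector are arbitrary assumptions for demonstration.

```python
# Hedged illustration of the multi-source idea behind iDeep (not the published
# model or its hyperparameters): a convolutional branch reads one-hot encoded
# RNA sequence while a dense branch reads auxiliary features (e.g. structure or
# region annotations); the branches are merged to predict a binding site.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, concatenate

seq_in = Input(shape=(101, 4), name="onehot_sequence")   # 101-nt window, ACGU channels
x = Conv1D(filters=64, kernel_size=8, activation="relu")(seq_in)
x = MaxPooling1D(pool_size=4)(x)
x = Flatten()(x)

aux_in = Input(shape=(32,), name="auxiliary_features")   # hypothetical extra features
y = Dense(32, activation="relu")(aux_in)

merged = concatenate([x, y])
merged = Dense(64, activation="relu")(merged)
out = Dense(1, activation="sigmoid", name="binding_site")(merged)

model = Model(inputs=[seq_in, aux_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

The learned convolution filters in such a branch are what one would inspect to recover sequence motifs, which is the interpretability route the abstract mentions.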
Planning of electroporation-based treatments using Web-based treatment-planning software.
Pavliha, Denis; Kos, Bor; Marčan, Marija; Zupanič, Anže; Serša, Gregor; Miklavčič, Damijan
2013-11-01
Electroporation-based treatment combining high-voltage electric pulses and poorly permeant cytotoxic drugs, i.e., electrochemotherapy (ECT), is currently used for treating superficial tumor nodules by following standard operating procedures. Besides ECT, another electroporation-based treatment, nonthermal irreversible electroporation (N-TIRE), is also efficient at ablating deep-seated tumors. To perform ECT or N-TIRE of deep-seated tumors, following standard operating procedures is not sufficient and patient-specific treatment planning is required for successful treatment. Treatment planning is required because of the use of individual long-needle electrodes and the diverse shape, size and location of deep-seated tumors. Many institutions that already perform ECT of superficial metastases could benefit from treatment-planning software that would enable the preparation of patient-specific treatment plans. To this end, we have developed Web-based treatment-planning software for planning electroporation-based treatments that does not require prior engineering knowledge from the user (e.g., the clinician). The software includes algorithms for automatic tissue segmentation and, after segmentation, generation of a 3D model of the tissue. The procedure allows the user to define how the electrodes will be inserted. Finally, electric field distribution is computed, the position of electrodes and the voltage to be applied are optimized using the 3D model and a downloadable treatment plan is made available to the user.
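One step in such planning is choosing the applied voltage so the computed field covers the whole tumor above an electroporation threshold. The toy sketch below is a greatly simplified, hedged illustration of that single step under the assumption of a linear conductivity model (where the field scales linearly with voltage); the threshold, grid, and candidate voltages are illustrative and not taken from the paper.

```python
# Greatly simplified, hedged illustration of one treatment-planning step (not
# the authors' algorithm): rescale a field map precomputed at a reference
# voltage and find the lowest candidate voltage that keeps every tumor voxel
# above an electroporation threshold. All numbers are illustrative.
import numpy as np

def minimal_voltage(field_at_ref, tumor_mask, ref_voltage=1000.0,
                    threshold=400.0, candidates=np.arange(500, 3001, 100)):
    """field_at_ref: field magnitude per voxel at ref_voltage; tumor_mask: boolean voxels."""
    tumor_field = field_at_ref[tumor_mask]
    for u in candidates:
        if np.all(tumor_field * (u / ref_voltage) >= threshold):
            return float(u)
    return None  # no candidate voltage covers the whole tumor

# Toy 3D field map and a cubic "tumor" region
rng = np.random.default_rng(0)
field = rng.uniform(150, 600, size=(20, 20, 20))
mask = np.zeros_like(field, dtype=bool)
mask[8:12, 8:12, 8:12] = True
print(minimal_voltage(field, mask))
```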
Organizational Alignment Through Information Technology: A Web-Based Approach to Change
NASA Technical Reports Server (NTRS)
Heinrichs, W.; Smith, J.
1999-01-01
This paper reports on the effectiveness of web-based internet tools and databases to facilitate integration of technical organizations with interfaces that minimize modification of each technical organization.
Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi
2009-01-01
The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present "Entrez Neuron", a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the 'HCLS knowledgebase' developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup.
Enriching and improving the quality of linked data with GIS
NASA Astrophysics Data System (ADS)
Iwaniak, Adam; Kaczmarek, Iwona; Strzelecki, Marek; Lukowicz, Jaromar; Jankowski, Piotr
2016-06-01
Standardization of methods for data exchange in GIS has a long history predating the creation of the World Wide Web. The advent of the World Wide Web brought the emergence of new solutions for data exchange and sharing including, more recently, standards proposed by the W3C for data exchange involving Semantic Web technologies and linked data. Despite the growing interest in integration, GIS and linked data are still two separate paradigms for describing and publishing spatial data on the Web. At the same time, both paradigms offer complementary ways of representing real world phenomena and means of analysis using different processing functions. The complementarity of linked data and GIS can be leveraged to synergize both paradigms resulting in richer data content and more powerful inferencing. The article presents an approach aimed at integrating linked data with GIS. The approach relies on the use of GIS tools for integration, verification and enrichment of linked data. The GIS tools are employed to enrich linked data by furnishing access to collections of data resources, defining relationships between data resources, and subsequently facilitating GIS data integration with linked data. The proposed approach is demonstrated with examples using data from DBpedia, OSM, and tools developed by the authors for standard GIS software.
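Since the abstract mentions DBpedia as a linked-data source for GIS enrichment, a hedged sketch of the retrieval side follows: querying the public DBpedia SPARQL endpoint for resources carrying WGS84 coordinates and shaping them into point features a GIS layer could ingest. The SPARQLWrapper package, the geo:lat/geo:long properties, and the dbo:Museum class follow common DBpedia conventions but are assumptions to verify against the live ontology; this is not the authors' tooling.

```python
# Hedged sketch of pulling linked data with coordinates so it can be loaded as
# a GIS point layer. Assumes the public DBpedia SPARQL endpoint and the
# SPARQLWrapper package; property and class names should be verified.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?place ?lat ?long WHERE {
  ?place a dbo:Museum ;
         geo:lat ?lat ;
         geo:long ?long .
} LIMIT 20
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

# Shape the bindings into simple point features for a GIS layer
features = [
    {"uri": b["place"]["value"],
     "lat": float(b["lat"]["value"]),
     "lon": float(b["long"]["value"])}
    for b in results["results"]["bindings"]
]
print(features[:3])
```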
Hollinderbäumer, Anke; Hartz, Tobias; Uckert, Frank
2013-01-01
Present-day students have grown up with considerable knowledge concerning multi-media. The communication modes they use are faster, more spontaneous, and independent of place and time. These new web-based forms of information and communication are used by students, educators, and patients in various ways. Universities which have already used these tools report many positive effects on the learning behaviour of the students. In a systematic literature review, we summarized the manner in which the integration of Social Media and Web 2.0 into education has taken place. A systematic literature search covering the last 5 years using MeSH terms was carried out via PubMed. Among the 20 chosen publications, there was only one German publication. Most of the publications are from the US and Great Britain. The latest publications report on the concrete usage of the tools in education, including social networking, podcasts, blogs, wikis, YouTube, Twitter and Skype. The integration of Web 2.0 and Social Media is the modern form of self-determined learning. It stimulates reflection and actively integrates the students in the construction of their knowledge. With these new tools, the students acquire skills which they need in both their social and professional lives.
ERIC Educational Resources Information Center
Daher, Tareq; Lazarevic, Bojan
2014-01-01
The purpose of this research is to provide insight into the several aspects of instructional use of emerging web-based technologies. The study first explores the extent of Web 2.0 technology integration into face-to-face classroom activities. In this phase, the main focus of research interests was on the types and dynamics of Web 2.0 tools used by…
2008-03-01
OC4J applications support Java Servlets, Web services, and the following J2EE-specific standards: Extensible Markup Language (XML), Lightweight Directory Access Protocol (LDAP), World Wide Web Distributed Authoring and Versioning (WebDAV), Java Specification Request 168 (JSR 168), and Web Services for Remote Portlets.
Flow Webs: Mechanism and Architecture for the Implementation of Sensor Webs
NASA Astrophysics Data System (ADS)
Gorlick, M. M.; Peng, G. S.; Gasster, S. D.; McAtee, M. D.
2006-12-01
The sensor web is a distributed, federated infrastructure much like its predecessors, the internet and the world wide web. It will be a federation of many sensor webs, large and small, under many distinct spans of control, that loosely cooperate and share information for many purposes. Realistically, it will grow piecemeal as distinct, individual systems are developed and deployed, some expressly built for a sensor web while many others were created for other purposes. Therefore, the architecture of the sensor web is of fundamental import and architectural strictures that inhibit innovation, experimentation, sharing or scaling may prove fatal. Drawing upon the architectural lessons of the world wide web, we offer a novel system architecture, the flow web, that elevates flows, sequences of messages over a domain of interest and constrained in both time and space, to a position of primacy as a dynamic, real-time, medium of information exchange for computational services. The flow web captures, in a single, uniform architectural style, the conflicting demands of the sensor web including dynamic adaptations to changing conditions, ease of experimentation, rapid recovery from the failures of sensors and models, automated command and control, incremental development and deployment, and integration at multiple levels—in many cases, at different times. Our conception of sensor webs—dynamic amalgamations of sensor webs each constructed within a flow web infrastructure—holds substantial promise for earth science missions in general, and for weather, air quality, and disaster management in particular. Flow webs are, by philosophy, design and implementation, a dynamic infrastructure that permits massive adaptation in real-time. Flows may be attached to and detached from services at will, even while information is in transit through the flow. This concept, flow mobility, permits dynamic integration of earth science products and modeling resources in response to real-time demands. Flows are the connective tissue of flow webs—massive computational engines organized as directed graphs whose nodes are semi-autonomous components and whose edges are flows. The individual components of a flow web may themselves be encapsulated flow webs. In other words, a flow web subgraph may be presented to a yet larger flow web as a single, seamless component. Flow webs, at all levels, may be edited and modified while still executing. Within a flow web individual components may be added, removed, started, paused, halted, reparameterized, or inspected. The topology of a flow web may be changed at will. Thus, flow webs exhibit an extraordinary degree of adaptivity and robustness as they are explicitly designed to be modified on the fly, an attribute well suited for dynamic model interactions in sensor webs. We describe our concept for a sensor web, implemented as a flow web, in the context of a wildfire disaster management system for the southern California region. Comprehensive wildfire management requires cooperation among multiple agencies. Flow webs allow agencies to share resources in exactly the manner they choose. We will explain how to employ flow webs and agents to integrate satellite remote sensing data, models, in-situ sensors, UAVs and other resources into a sensor web that interconnects organizations and their disaster management tools in a manner that simultaneously preserves their independence and builds upon the individual strengths of agency-specific models and data sources.
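The flow web is described above as a directed graph whose nodes are semi-autonomous components and whose edges are flows that can be attached or detached while messages are in transit. The toy sketch below illustrates only that structural idea in Python; it is a conceptual illustration under my own naming (Flow, Component, step), not the authors' implementation.

```python
# Hedged toy illustration of the flow-web idea from the abstract: components as
# graph nodes, flows as message channels that can be re-attached at run time.
# Conceptual sketch only, not the authors' system.
from collections import deque

class Flow:
    """A named channel carrying messages between components."""
    def __init__(self, name):
        self.name, self.queue = name, deque()
    def push(self, msg):
        self.queue.append(msg)
    def drain(self):
        while self.queue:
            yield self.queue.popleft()

class Component:
    """A node that reads its input flows, transforms messages, and writes outputs."""
    def __init__(self, name, transform):
        self.name, self.transform = name, transform
        self.inputs, self.outputs = [], []
    def step(self):
        for flow in self.inputs:
            for msg in flow.drain():
                out = self.transform(msg)
                for target in self.outputs:
                    target.push(out)

# Build a tiny web: raw sensor flow -> calibration component -> calibrated flow
raw, calibrated = Flow("raw"), Flow("calibrated")
calibrate = Component("calibrate", lambda v: v * 0.98 + 1.2)
calibrate.inputs.append(raw)
calibrate.outputs.append(calibrated)

raw.push(42.0)
calibrate.step()
print(list(calibrated.drain()))        # -> [42.36]

calibrate.outputs.remove(calibrated)   # detach a flow while the web keeps running
```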
Kushniruk, A W; Patel, C; Patel, V L; Cimino, J J
2001-04-01
The World Wide Web provides an unprecedented opportunity for widespread access to health-care applications by both patients and providers. The development of new methods for assessing the effectiveness and usability of these systems is becoming a critical issue. This paper describes the distance evaluation (i.e. 'televaluation') of emerging Web-based information technologies. In health informatics evaluation, there is a need for application of new ideas and methods from the fields of cognitive science and usability engineering. A framework is presented for conducting evaluations of health-care information technologies that integrates a number of methods, ranging from deployment of on-line questionnaires (and Web-based forms) to remote video-based usability testing of user interactions with clinical information systems. Examples illustrating application of these techniques are presented for the assessment of a patient clinical information system (PatCIS), as well as an evaluation of use of Web-based clinical guidelines. Issues in designing, prototyping and iteratively refining evaluation components are discussed, along with description of a 'virtual' usability laboratory.
EntrezAJAX: direct web browser access to the Entrez Programming Utilities.
Loman, Nicholas J; Pallen, Mark J
2010-06-21
Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides identical functionality as Entrez eUtils as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
CAS-viewer: web-based tool for splicing-guided integrative analysis of multi-omics cancer data.
Han, Seonggyun; Kim, Dongwook; Kim, Youngjun; Choi, Kanghoon; Miller, Jason E; Kim, Dokyoon; Lee, Younghee
2018-04-20
The Cancer Genome Atlas (TCGA) project is a public resource that provides transcriptomic, DNA sequence, methylation, and clinical data for 33 cancer types. Transforming the large size and high complexity of TCGA cancer genome data into integrated knowledge can be useful to promote cancer research. Alternative splicing (AS) is a key regulatory mechanism of genes in human cancer development and in the interaction with epigenetic factors. Therefore, AS-guided integration of existing TCGA data sets will make it easier to gain insight into the genetic architecture of cancer risk and related outcomes. There are already existing tools analyzing and visualizing alternative mRNA splicing patterns for large-scale RNA-seq experiments. However, these existing web-based tools are limited to the analysis of individual TCGA data sets at a time, such as only transcriptomic information. We implemented CAS-viewer (integrative analysis of Cancer genome data based on Alternative Splicing), a web-based tool leveraging multi-cancer omics data from TCGA. It illustrates alternative mRNA splicing patterns along with methylation, miRNAs, and SNPs, and then provides an analysis tool to link differential transcript expression ratio to methylation, miRNA, and splicing regulatory elements for 33 cancer types. Moreover, one can analyze AS patterns with clinical data to identify potential transcripts associated with different survival outcome for each cancer. CAS-viewer is a web-based application for transcript isoform-driven integration of multi-omics data in multiple cancer types and will aid in the visualization and possible discovery of biomarkers for cancer by integrating multi-omics data from TCGA.
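The quantity at the center of the analysis described above is a differential transcript expression ratio, i.e. how much of a gene's output each transcript accounts for. The sketch below is a hedged toy of computing such per-gene transcript usage ratios (isoform fractions) from transcript-level expression estimates; the column names, gene symbols, and TPM values are illustrative, not CAS-viewer's schema or data.

```python
# Hedged toy calculation of per-gene transcript usage ratios (isoform
# fractions), the splicing-level quantity the abstract relates to methylation
# and miRNA data. Column names and values are illustrative only.
import pandas as pd

expr = pd.DataFrame({
    "gene":       ["TP53", "TP53", "TP53", "BRCA1", "BRCA1"],
    "transcript": ["T1",   "T2",   "T3",   "T1",    "T2"],
    "tpm":        [120.0,  30.0,   0.0,    8.0,     2.0],
})

# Each transcript's share of its gene's total expression
expr["usage_ratio"] = expr["tpm"] / expr.groupby("gene")["tpm"].transform("sum")
print(expr)
# Differential usage between, say, tumour and normal sample groups would then
# be the difference of these ratios per transcript across the groups.
```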
WheatGenome.info: A Resource for Wheat Genomics.
Lai, Kaitao
2016-01-01
An integrated database named WheatGenome.info, hosting wheat genome and genomic data through a variety of Web-based systems, has been developed to support wheat research and crop improvement. The resource includes multiple Web-based applications: a GBrowse2-based wheat genome viewer with a BLAST search portal, TAGdb for searching wheat second-generation genome sequence data, wheat autoSNPdb, links to wheat genetic maps using CMap and CMap3D, and a wheat genome Wiki to allow interaction between diverse wheat genome sequencing activities. This portal provides links to a variety of wheat genome resources hosted at other research organizations. This integrated database aims to accelerate wheat genome research and is freely accessible via the web interface at http://www.wheatgenome.info/.
Starvation and recovery in the deep-sea methanotroph Methyloprofundus sedimenti.
Tavormina, Patricia L; Kellermann, Matthias Y; Antony, Chakkiath Paul; Tocheva, Elitza I; Dalleska, Nathan F; Jensen, Ashley J; Valentine, David L; Hinrichs, Kai-Uwe; Jensen, Grant J; Dubilier, Nicole; Orphan, Victoria J
2017-01-01
In the deep ocean, the conversion of methane into derived carbon and energy drives the establishment of diverse faunal communities. Yet specific biological mechanisms underlying the introduction of methane-derived carbon into the food web remain poorly described, due to a lack of cultured representative deep-sea methanotrophic prokaryotes. Here, the response of the deep-sea aerobic methanotroph Methyloprofundus sedimenti to methane starvation and recovery was characterized. By combining lipid analysis, RNA analysis, and electron cryotomography, it was shown that M. sedimenti undergoes discrete cellular shifts in response to methane starvation, including changes in headgroup-specific fatty acid saturation levels, and reductions in cytoplasmic storage granules. Methane starvation is associated with a significant increase in the abundance of gene transcripts pertinent to methane oxidation. Methane reintroduction to starved cells stimulates a rapid, transient extracellular accumulation of methanol, revealing a way in which methane-derived carbon may be routed to community members. This study provides new understanding of methanotrophic responses to methane starvation and recovery, and lays the initial groundwork to develop Methyloprofundus as a model chemosynthesizing bacterium from the deep sea. © 2016 John Wiley & Sons Ltd.
Using EMBL-EBI services via Web interface and programmatically via Web Services
Lopez, Rodrigo; Cowley, Andrew; Li, Weizhong; McWilliam, Hamish
2015-01-01
The European Bioinformatics Institute (EMBL-EBI) provides access to a wide range of databases and analysis tools that are of key importance in bioinformatics. As well as providing Web interfaces to these resources, Web Services are available using SOAP and REST protocols that enable programmatic access to our resources and allow their integration into other applications and analytical workflows. This unit describes the various options available to a typical researcher or bioinformatician who wishes to use our resources via Web interface or programmatically via a range of programming languages. PMID:25501941
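The unit above covers programmatic access to EMBL-EBI tools over SOAP and REST. The sketch below illustrates the submit/poll/fetch REST pattern used by many EBI job-dispatcher services in a hedged way: the base URL, tool name ("ncbiblast"), parameter names, status strings, and result type are placeholders standing in for values that should be taken from the EBI Web Services documentation for the specific tool.

```python
# Hedged sketch of the submit/poll/fetch REST pattern common to EMBL-EBI
# job-dispatcher services. Base URL, tool name, parameters, status strings and
# result type are placeholders; consult the EBI documentation for real values.
import time
import urllib.parse
import urllib.request

BASE = "https://www.ebi.ac.uk/Tools/services/rest/ncbiblast"   # placeholder tool

def submit(sequence, email):
    data = urllib.parse.urlencode({
        "email": email, "program": "blastp", "database": "uniprotkb",
        "stype": "protein", "sequence": sequence,
    }).encode()
    with urllib.request.urlopen(BASE + "/run", data=data) as resp:
        return resp.read().decode()            # job identifier

def wait_and_fetch(job_id):
    status = "RUNNING"
    while status in ("RUNNING", "QUEUED"):     # assumed status vocabulary
        time.sleep(5)
        with urllib.request.urlopen(f"{BASE}/status/{job_id}") as resp:
            status = resp.read().decode()
    if status != "FINISHED":
        raise RuntimeError(f"job ended with status {status}")
    with urllib.request.urlopen(f"{BASE}/result/{job_id}/out") as resp:
        return resp.read().decode()

# job = submit("MKTAYIAKQR...", "you@example.org"); print(wait_and_fetch(job))
```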
Understanding the Deep Earth: Slabs, Drips, Plumes and More - An On the Cutting Edge Workshop
NASA Astrophysics Data System (ADS)
Williams, M. L.; Mogk, D. W.; McDaris, J. R.
2010-12-01
Exciting new science is emerging from the study of the deep Earth using a variety of approaches: observational instrumentation (e.g. EarthScope’s USArray; IRIS), analysis of rocks (xenoliths, isotopic tracers), experimental methods (COMPRES facilities), and modeling (physical and computational, e.g. CIG program). New images and models of active faults, subducting plates, mantle drips, and rising plumes are spurring a new excitement about deep Earth processes and connections between Earth’s internal systems, the plate tectonic system, and the physiography of Earth’s surface. The integration of these lines of research presents unique opportunities and also challenges in geoscience education. How can we best teach about the architecture, composition, and processes of Earth where it is hidden from direct observation. How can we make deep Earth science relevant and meaningful to students across the geoscience curriculum? And how can we use the exciting new discoveries about Earth processes to attract new students into science? To explore the intersection of research and teaching about the deep Earth, a virtual workshop was convened in February 2010 for experts in deep Earth research and undergraduate geoscience education. The six-day workshop consisted of online plenary talks, large and small group discussions, asynchronous contributions using threaded listservs and web-based work spaces, as well as development and review of new classroom and laboratory activities. The workshop goals were to: 1) help participants stay current about data, tools, services, and research related to the deep earth, 2) address the "big science questions" related to deep earth (e.g. plumes, slabs, drips, post-perovskite, etc.) and explore exciting new scientific approaches, 3) to consider ways to effectively teach about "what can't be seen", at least not directly, and 4) develop and review classroom teaching activities for undergraduate education using these data, tools, services, and research results to facilitate teaching about the deep earth across the geoscience curriculum. Another goal of the workshop was to experiment with, and evaluate the effectiveness of, the virtual format. Although there are advantages to face-to-face workshops, the virtual format was remarkably effective. The interactive discussions during synchronous presentations were vibrant, and the virtual format allowed participants to introduce references, images and ideas in real-time. The virtual nature of the workshop allowed participation by those who are not able to attend a traditional workshop, with an added benefit that participants had direct access to all their research and teaching materials to share with the workshop. Some participants broadcast the workshop ‘live’ to their classes and many brought discussions directly from the presentation to the classroom. The workshop webpage includes the workshop program with links to recordings of all presentations, discussion summaries, a collection of recommended resources about deep Earth research, and collections of peer-reviewed instructional activities. http://serc.carleton.edu/NAGTWorkshops/deepearth/index.html
ERIC Educational Resources Information Center
McCracken, Holly
2009-01-01
The importance of the interconnectedness of academic, student, and technical support processes intrinsic to the provision of on-line instruction has been frequently depicted as a "service Web," with students at the center of the infrastructure. However, as programming to support distance learning continues to develop, such service Webs have grown…
Evaluations of Students on Facebook as an Educational Environment
ERIC Educational Resources Information Center
Coklar, Ahmet Naci
2012-01-01
Taking cognizance of the transformation experienced in education technologies, the concept that currently comes into prominence in the integration of ICTs into the education process is Web 2.0. The main philosophy of Web 2.0 technologies is their contribution to users' content creation and to high-level interaction between users. One of web 2.0 technologies…
Project MERLOT: Bringing Peer Review to Web-Based Educational Resources
ERIC Educational Resources Information Center
Cafolla, Ralph
2006-01-01
The unprecedented growth of the World Wide Web has resulted in a profusion of educational resources. The challenge for faculty is finding these resources and integrating them into their instruction. Even after the resource is found, the instructor must assess the effectiveness of the resource. As the number of educational web sites mounts into the…