NASA Astrophysics Data System (ADS)
Hashimoto, S.; Hamano, H.; Fujita, T.; Hori, H.
2008-12-01
Annex I parties to the Kyoto Protocol face even greater pressure to fulfill their GHG reduction commitments as they enter the Protocol's first commitment period, 2008-2012. In the Japanese context, one such challenge is reducing CO2 emissions from the household and business sectors: emissions from these two sectors have increased by 12% and 20%, respectively, since 1990, while the industrial sector has cut its CO2 emissions by 21%. Land use planning, which directly or indirectly controls appropriate uses of land within a jurisdiction, can play an important role in reducing CO2 emissions from the household and business sectors. In this research, aiming at effective reductions in air-conditioning energy consumption and the resulting CO2 emissions from the household and business sectors, a framework for designing and evaluating land use plans was developed. The design and evaluation processes embraced in this framework consist of a GIS database, a technology and policy inventory for planning, a one-dimensional urban canopy model that evaluates the urban climate at the neighborhood level, and an air-conditioning load calculation procedure. The GIS database provides spatial information on the target areas, such as land use, building use and road networks, which in turn supports the design of alternative land use plans. The technology and policy inventory includes planning options ranging from land cover control to building energy control, which, combined with the GIS database, serves the planning process. The urban canopy model derives vertical profiles of the local climate, such as temperature and humidity, from information on land use, building height and so on supplied by the GIS database. These vertical profiles of the urban climate are then used to derive the air-conditioning load and associated CO2 emissions for each building in the target areas.
The framework was applied to the coastal district of Kawasaki, Japan, an area of 40 square kilometers, for August 2006, to explore effective combinations of technologies and policies for land use planning. Six alternative land use policies were designed, including a business-as-usual (BaU) case in which current land use continues, and were then evaluated to identify more effective alternatives. Our findings suggest that up to about 541 MWh of power and 204 tons of CO2 emissions could be saved by greening building sites, introducing water-retentive pavement and installing energy-saving building technologies in an appropriate manner.
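The relationship between saved electricity and avoided CO2 in the study above is a simple product of energy saved and a grid emission factor. A minimal sketch, with the factor back-computed from the reported figures rather than taken from the paper:

```python
# Convert an electricity saving into avoided CO2 emissions.
# Avoided CO2 (tonnes) = energy saved (MWh) x grid emission factor (t/MWh).
# The factor below is implied by the study's reported totals, not quoted from it.
def avoided_co2_tonnes(saved_mwh: float, factor_t_per_mwh: float) -> float:
    return saved_mwh * factor_t_per_mwh

implied_factor = 204 / 541  # ~0.377 t CO2 per MWh (illustrative)
print(round(avoided_co2_tonnes(541, implied_factor), 1))  # 204.0
```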
First, Eric L; Gounaris, Chrysanthos E; Floudas, Christodoulos A
2013-05-07
With the growing number of zeolites and metal-organic frameworks (MOFs) available, computational methods are needed to screen databases of structures to identify those most suitable for applications of interest. We have developed novel methods based on mathematical optimization to predict the shape selectivity of zeolites and MOFs in three dimensions by considering the energy costs of transport through possible pathways. Our approach is applied to databases of over 1800 microporous materials including zeolites, MOFs, zeolitic imidazolate frameworks, and hypothetical MOFs. New materials are identified for applications in gas separations (CO2/N2, CO2/CH4, and CO2/H2), air separation (O2/N2), and chemicals (propane/propylene, ethane/ethylene, styrene/ethylbenzene, and xylenes).
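Scoring a pathway by "the energy costs of transport" can be illustrated as a minimax-path search: the cost of a route through the pore network is its highest barrier, and the best route minimizes that barrier. The sketch below uses a Dijkstra variant on a toy graph; the graph and energies are invented, and the paper's actual optimization formulation differs.

```python
import heapq

def min_barrier(graph, start, goal):
    """Smallest achievable 'highest barrier along the path' from start to goal.
    graph: {node: [(neighbor, barrier_energy), ...]}"""
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        barrier, node = heapq.heappop(heap)
        if node == goal:
            return barrier
        if barrier > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, energy in graph[node]:
            b = max(barrier, energy)  # path cost = worst barrier crossed
            if b < best.get(nxt, float("inf")):
                best[nxt] = b
                heapq.heappush(heap, (b, nxt))
    return float("inf")

# Toy pore network: two routes from A to D with barriers in arbitrary units.
pores = {"A": [("B", 2.0), ("C", 8.0)], "B": [("D", 5.0)], "C": [("D", 1.0)], "D": []}
print(min_barrier(pores, "A", "D"))  # 5.0 (route A-B-D beats A-C-D's 8.0)
```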
Holley, A.L.; Wilson, A.C.; Noel, M.; Palermo, T.M.
2018-01-01
Background and objective The co-occurrence of chronic pain and post-traumatic stress symptoms (PTSS) and post-traumatic stress disorder (PTSD) has gained increasing research attention. Studies on associations among pain and PTSS or PTSD in youth have largely been conducted in the context of acute injury or trauma. Less is known about the risk for co-occurrence with paediatric chronic pain. In this review, we (1) propose a conceptual framework to outline factors salient during childhood that may be associated with symptom severity, co-occurrence and mutual maintenance, (2) present relevant literature on PTSS in youth with acute and chronic pain and identify research gaps and (3) provide recommendations to guide paediatric research examining shared symptomatology. Databases and data treatment Electronic databases (PubMed and Google Scholar) were used to identify relevant articles using the search terms ‘child, adolescent, paediatric, chronic pain, acute pain, post-traumatic stress symptoms and post-traumatic stress disorder’. Studies were retrieved and reviewed based on relevance to the topic. Results Our findings revealed that existing biobehavioural and ecological models of paediatric chronic pain lack attention to traumatic events or the potential development of PTSS. Paediatric studies are also limited by lack of a conceptual framework for understanding the prevalence, risk and trajectories of PTSS in youth with chronic pain. Conclusions Our new developmentally informed framework highlights individual symptoms and shared contextual factors that are important when examining potential associations among paediatric chronic pain and PTSS. Future studies should consider bidirectional and mutually maintaining associations, which will be aided by prospective, longitudinal designs. PMID:27275585
Ezra Tsur, Elishai
2017-01-01
Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistence and structured interfaces to local and external data sources such as MalaCards, BioModels and the National Center for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistence agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysm rupture risk. The framework is available at http://nbel-lab.com.
Database for CO2 Separation Performances of MOFs Based on Computational Materials Screening.
Altintas, Cigdem; Avci, Gokay; Daglar, Hilal; Nemati Vesali Azar, Ayda; Velioglu, Sadiye; Erucar, Ilknur; Keskin, Seda
2018-05-23
Metal-organic frameworks (MOFs) are potential adsorbents for CO2 capture. Because thousands of MOFs exist, computational studies become very useful in identifying the top performing materials for target applications in a time-effective manner. In this study, molecular simulations were performed to screen the MOF database to identify the best materials for CO2 separation from flue gas (CO2/N2) and landfill gas (CO2/CH4) under realistic operating conditions. We validated the accuracy of our computational approach by comparing the simulation results for the CO2 uptakes, CO2/N2 and CO2/CH4 selectivities of various types of MOFs with the available experimental data. Binary CO2/N2 and CO2/CH4 mixture adsorption data were then calculated for the entire MOF database. These data were then used to predict selectivity, working capacity, regenerability, and separation potential of MOFs. The top performing MOF adsorbents that can separate CO2/N2 and CO2/CH4 with high performance were identified. Molecular simulations for the adsorption of a ternary CO2/N2/CH4 mixture were performed for these top materials to provide a more realistic performance assessment of MOF adsorbents. The structure-performance analysis showed that MOFs with ΔQst0 > 30 kJ/mol, 3.8 Å < pore-limiting diameter < 5 Å, 5 Å < largest cavity diameter < 7.5 Å, 0.5 < ϕ < 0.75, surface area < 1000 m2/g, and ρ > 1 g/cm3 are the best candidates for selective separation of CO2 from flue gas and landfill gas. This information will be very useful to design novel MOFs exhibiting high CO2 separation potentials. Finally, an online, freely accessible database https://cosmoserc.ku.edu.tr was established, for the first time in the literature, which reports all of the computed adsorbent metrics of 3816 MOFs for CO2/N2, CO2/CH4, and CO2/N2/CH4 separations in addition to various structural properties of MOFs.
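The adsorbent metrics named above (selectivity, working capacity, regenerability) have standard textbook definitions that can be computed directly from mixture uptakes. A minimal sketch follows; the uptake values and mole fractions are invented for illustration, not taken from the database.

```python
# Standard adsorbent evaluation metrics from mixture adsorption data.
def selectivity(n_co2, n_other, y_co2, y_other):
    """Adsorption selectivity: (n_CO2 / n_other) / (y_CO2 / y_other)."""
    return (n_co2 / n_other) / (y_co2 / y_other)

def working_capacity(n_ads, n_des):
    """Uptake difference between adsorption and desorption conditions (mol/kg)."""
    return n_ads - n_des

def regenerability(n_ads, n_des):
    """Percentage of adsorbed CO2 recovered on regeneration."""
    return 100.0 * (n_ads - n_des) / n_ads

# Flue-gas-like composition (assumed): 15% CO2, 85% N2; uptakes in mol/kg.
s = selectivity(n_co2=2.4, n_other=0.2, y_co2=0.15, y_other=0.85)
print(round(s, 1), round(working_capacity(2.4, 0.6), 2),
      round(regenerability(2.4, 0.6), 1))  # 68.0 1.8 75.0
```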
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip
This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
An approach in building a chemical compound search engine in oracle database.
Wang, H; Volarath, P; Harrison, R
2005-01-01
Searching for and identifying chemical compounds are important processes in drug design and chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation underscores the efficiency and simplicity of the framework.
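The core design idea here, hiding storage and retrieval behind method calls so callers never write raw SQL, can be sketched in a few lines. This is an assumed illustration, not the paper's code: SQLite stands in for Oracle, and an exact molecular-formula match stands in for a real structure search.

```python
import sqlite3

class CompoundStore:
    """Facade that wraps SQL behind domain-level method calls (sketch)."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE compounds (name TEXT, formula TEXT)")

    def add(self, name, formula):
        # Parameterized insert; callers never see SQL.
        self.db.execute("INSERT INTO compounds VALUES (?, ?)", (name, formula))

    def find_by_formula(self, formula):
        rows = self.db.execute(
            "SELECT name FROM compounds WHERE formula = ?", (formula,))
        return [r[0] for r in rows]

store = CompoundStore()
store.add("ethanol", "C2H6O")
store.add("dimethyl ether", "C2H6O")  # isomers share a formula
store.add("water", "H2O")
print(store.find_by_formula("C2H6O"))  # ['ethanol', 'dimethyl ether']
```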
In silico design and screening of hypothetical MOF-74 analogs and their experimental synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witman, Matthew; Ling, Sanliang; Anderson, Samantha
2016-06-21
Here, we present the in silico design of metal-organic frameworks (MOFs) exhibiting 1-dimensional rod topologies. We then introduce an algorithm for constructing this family of MOF topologies and illustrate its application by enumerating MOF-74-type analogs. Furthermore, we perform a broad search for new linkers that satisfy the topological requirements of MOF-74, drawing on the largest database of known organic chemical space, the PubChem database. Our in silico crystal assembly, combined with dispersion-corrected density functional theory (DFT) calculations, generates a hypothetical library of open-metal-site MOF-74 analogs in the 1-D rod topology from which we can simulate the adsorption behavior of CO2. We conclude that these hypothetical structures are potentially synthesizable, as demonstrated by the computational identification and experimental validation of a novel MOF-74 analog, Mg2(olsalazine).
A Framework for Mapping User-Designed Forms to Relational Databases
ERIC Educational Resources Information Center
Khare, Ritu
2011-01-01
In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…
DOT National Transportation Integrated Search
2006-01-01
An Internet-based, spatiotemporal Geotechnical Database Management System (GDBMS) Framework was designed, developed, and implemented at the Virginia Department of Transportation (VDOT) in 2002 to retrieve, manage, archive, and analyze geotechnical da...
Towards the design of novel cuprate-based superconductors
NASA Astrophysics Data System (ADS)
Yee, Chuck-Hou
The rapid maturation of materials databases combined with recent development of theories seeking to quantitatively link chemical properties to superconductivity in the cuprates provide the context to design novel superconductors. In this talk, we describe a framework designed to search for new superconductors, which combines chemical rules-of-thumb, insights of transition temperatures from dynamical mean-field theory, first-principles electronic structure tools, materials databases and structure prediction via evolutionary algorithms. We apply the framework to design a family of copper oxysulfides and evaluate the prospects of superconductivity.
Porous materials with pre-designed single-molecule traps for CO2 selective adsorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, JR; Yu, JM; Lu, WG
2013-02-26
Despite tremendous efforts, precise control in the synthesis of porous materials with pre-designed pore properties for desired applications remains challenging. Newly emerged porous metal-organic materials, such as metal-organic polyhedra and metal-organic frameworks, are amenable to design and property tuning, enabling precise control of functionality by accurate design of structures at the molecular level. Here we propose and validate, both experimentally and computationally, a precisely designed cavity, termed a 'single-molecule trap', with the desired size and properties suitable for trapping target CO2 molecules. Such a single-molecule trap can strengthen CO2-host interactions without evoking chemical bonding, thus showing potential for CO2 capture. Molecular single-molecule traps in the form of metal-organic polyhedra are designed, synthesised and tested for selective adsorption of CO2 over N2 and CH4, demonstrating the trapping effect. Building these pre-designed single-molecule traps into extended frameworks yields metal-organic frameworks with efficient mass transfer, whereas the CO2 selective adsorption nature of single-molecule traps is preserved.
A Graphics Design Framework to Visualize Multi-Dimensional Economic Datasets
ERIC Educational Resources Information Center
Chandramouli, Magesh; Narayanan, Badri; Bertoline, Gary R.
2013-01-01
This study implements a prototype graphics visualization framework to visualize multidimensional data. This graphics design framework serves as a "visual analytical database" for visualization and simulation of economic models. One of the primary goals of any kind of visualization is to extract useful information from colossal volumes of…
CoINcIDE: A framework for discovery of patient subtypes across multiple datasets.
Planey, Catherine R; Gevaert, Olivier
2016-03-09
Patient disease subtypes have the potential to transform personalized medicine. However, many patient subtypes derived from unsupervised clustering analyses on high-dimensional datasets are not replicable across multiple datasets, limiting their clinical utility. We present CoINcIDE, a novel methodological framework for the discovery of patient subtypes across multiple datasets that requires no between-dataset transformations. We also present a high-quality database collection, curatedBreastData, with over 2,500 breast cancer gene expression samples. We use CoINcIDE to discover novel breast and ovarian cancer subtypes with prognostic significance and novel hypothesized ovarian therapeutic targets across multiple datasets. CoINcIDE and curatedBreastData are available as R packages.
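The central idea of matching subtypes across datasets "with no between-dataset transformations" can be illustrated by clustering each dataset separately and then pairing cluster centroids by correlation. The sketch below uses invented centroids and a plain Pearson correlation; the actual CoINcIDE algorithm differs in its details.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def match_centroids(centroids_a, centroids_b, threshold=0.9):
    """Pair each centroid in dataset A with its best-correlated centroid in B,
    keeping only pairs above a correlation threshold."""
    pairs = []
    for i, ca in enumerate(centroids_a):
        j, r = max(((j, pearson(ca, cb)) for j, cb in enumerate(centroids_b)),
                   key=lambda t: t[1])
        if r >= threshold:
            pairs.append((i, j))
    return pairs

# Toy per-dataset cluster centroids over three genes (no rescaling applied).
a = [[1.0, 2.0, 3.0], [3.0, 1.0, 1.0]]
b = [[2.9, 1.2, 0.8], [0.9, 2.1, 3.1]]
print(match_centroids(a, b))  # [(0, 1), (1, 0)]
```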
A Framework for the Design of Service Systems
NASA Astrophysics Data System (ADS)
Tan, Yao-Hua; Hofman, Wout; Gordijn, Jaap; Hulstijn, Joris
We propose a framework for the design and implementation of service systems, especially to design controls for long-term sustainable value co-creation. The framework is based on the software support tool e3-control. To illustrate the framework we use a large-scale case study, the Beer Living Lab, for simplification of customs procedures in international trade. The BeerLL shows how value co-creation can be achieved by reduction of administrative burden in international beer export due to electronic customs. Participants in the BeerLL are Heineken, IBM and Dutch Tax & Customs.
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research, a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision-based design framework for non-deterministic optimization. To date, CO strategies have been developed for application to deterministic systems design problems. In this research, the decision-based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework, as originally proposed, provides a single-level optimization strategy that combines engineering decisions with business decisions in a single optimization. By transforming this framework for use in collaborative optimization, one can decompose the business and engineering decision-making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO), the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
2013-01-01
Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes, no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore, software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java, and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization) For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application, I describe how the framework can be used. I then benchmarked this example application to establish basic performance expectations for chemical structure searches and SD-file import and export.
Conclusions By using a simple web application, it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export into simple method calls. The framework offers good search performance on a standard laptop without any database tuning, in part because chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's page on Bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework. PMID:24325762
Kiener, Joos
2013-12-11
Chung, Yongchul G.; Gómez-Gualdrón, Diego A.; Li, Peng; Leperi, Karson T.; Deria, Pravas; Zhang, Hongda; Vermeulen, Nicolaas A.; Stoddart, J. Fraser; You, Fengqi; Hupp, Joseph T.; Farha, Omar K.; Snurr, Randall Q.
2016-01-01
Discovery of new adsorbent materials with a high CO2 working capacity could help reduce CO2 emissions from newly commissioned power plants using precombustion carbon capture. High-throughput computational screening efforts can accelerate the discovery of new adsorbents but sometimes require significant computational resources to explore the large space of possible materials. We report the in silico discovery of high-performing adsorbents for precombustion CO2 capture by applying a genetic algorithm to efficiently search a large database of metal-organic frameworks (MOFs) for top candidates. High-performing MOFs identified from the in silico search were synthesized and activated and show a high CO2 working capacity and a high CO2/H2 selectivity. One of the synthesized MOFs shows a higher CO2 working capacity than any MOF reported in the literature under the operating conditions investigated here. PMID:27757420
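A genetic-algorithm search of a candidate database can be sketched in a few lines: keep the fitter half of the population each generation and fill the rest by crossover of parents. Everything below is illustrative; the fitness function stands in for a simulated CO2 working capacity, and the study's GA operates on real MOF descriptors with its own operators.

```python
import random

def evolve(fitness, population, generations=30, seed=0):
    """Toy GA: elitist selection plus uniform crossover (no mutation)."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]  # elitism: keep the fitter half
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = rng.sample(parents, 2)
            # Uniform crossover: each gene drawn from either parent.
            children.append(tuple(rng.choice(genes) for genes in zip(a, b)))
        pop = parents + children
    return max(pop, key=fitness)

# Candidates encoded as 6 binary "linker choices"; fitness = number of ones.
pop0 = [tuple(random.Random(i).choices([0, 1], k=6)) for i in range(8)]
best = evolve(fitness=sum, population=pop0)
print(sum(best) >= max(sum(p) for p in pop0))  # True (elitism never loses the best)
```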
RUAN, XIYUN; LI, HONGYUN; LIU, BO; CHEN, JIE; ZHANG, SHIBAO; SUN, ZEQIANG; LIU, SHUANGQING; SUN, FAHAI; LIU, QINGYONG
2015-01-01
The aim of the present study was to develop a novel method for identifying pathways associated with renal cell carcinoma (RCC) based on a gene co-expression network. A framework was established in which a co-expression network was derived from a database as well as from various co-expression approaches. First, the backbone of the network, based on differentially expressed (DE) genes between RCC patients and normal controls, was constructed using the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database. The differentially co-expressed links were detected by Pearson’s correlation, the empirical Bayesian (EB) approach and Weighted Gene Co-expression Network Analysis (WGCNA). The co-expressed gene pairs were merged by a rank-based algorithm. We obtained 842; 371; 2,883; and 1,595 co-expressed gene pairs from the co-expression networks of the STRING database, Pearson’s correlation, the EB method and WGCNA, respectively. Two hundred and eighty-one differentially co-expressed (DC) gene pairs were obtained from the merged network using this novel method. Pathway enrichment analysis based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and network enrichment analysis (NEA) were performed to verify the feasibility of the merging method. Results of the KEGG and NEA pathway analyses showed that the network was associated with RCC. The suggested method is computationally efficient for identifying pathways associated with RCC and is a useful complement to traditional co-expression analysis. PMID:26058425
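A rank-based merge of networks produced by different methods can be sketched as follows: convert each method's edge scores to ranks, then order gene pairs by mean rank. The scores below are invented, and the paper's rank-based algorithm may differ in detail.

```python
def rank(scores):
    """Map each gene pair to its 1-based rank, highest score = rank 1."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {pair: i + 1 for i, pair in enumerate(order)}

def merge_by_rank(method_scores, top_n):
    """method_scores: list of {gene_pair: score} dicts, one per method.
    Returns the top_n pairs by mean rank across methods."""
    ranked = [rank(s) for s in method_scores]
    pairs = set().union(*method_scores)
    worst = max(len(s) for s in method_scores) + 1  # penalty for absent pairs
    mean_rank = {p: sum(r.get(p, worst) for r in ranked) / len(ranked)
                 for p in pairs}
    return sorted(pairs, key=mean_rank.get)[:top_n]

# Invented edge scores from two co-expression methods.
pearson_scores = {("A", "B"): 0.95, ("A", "C"): 0.40, ("B", "C"): 0.80}
wgcna_scores = {("A", "B"): 0.90, ("A", "C"): 0.85, ("B", "C"): 0.30}
print(merge_by_rank([pearson_scores, wgcna_scores], top_n=2))
```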
The STEP database through the end-users eyes--USABILITY STUDY.
Salunke, Smita; Tuleu, Catherine
2015-08-15
The user-designed database of Safety and Toxicity of Excipients for Paediatrics ("STEP") was created to address the shared need of the drug development community to access relevant information on excipients effortlessly. Usability testing was performed to determine whether the database satisfies the needs of end-users. An evaluation framework was developed to assess usability. The participants performed scenario-based tasks and provided feedback and post-session usability ratings. Failure Mode Effect Analysis (FMEA) was performed to prioritize the problems and improvements to the STEP database design and functionalities. The study revealed several design vulnerabilities. Tasks such as limiting the results, running complex queries, locating data and registering for access to the database were challenging. The three critical attributes identified as affecting the usability of the STEP database were (1) content and presentation, (2) navigation and search features and (3) potential end-users. The evaluation framework proved to be an effective method for evaluating database effectiveness and user satisfaction. This study provides strong initial support for the usability of the STEP database. Recommendations will be incorporated into the refinement of the database to improve its usability and increase user participation towards the advancement of the database. Copyright © 2015 Elsevier B.V. All rights reserved.
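FMEA prioritization reduces to a risk priority number, RPN = severity × occurrence × detection, used to order the fixes. The problems and 1-10 scores below are invented for illustration; they are not the study's actual findings.

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: product of three 1-10 ratings."""
    return severity * occurrence * detection

# Hypothetical usability problems with (severity, occurrence, detection) scores.
problems = {
    "complex queries fail": (8, 6, 4),
    "registration unclear": (5, 7, 3),
    "result limiting hidden": (4, 5, 2),
}
ranked = sorted(problems, key=lambda p: rpn(*problems[p]), reverse=True)
print(ranked)  # ['complex queries fail', 'registration unclear', 'result limiting hidden']
```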
PathCase-SB architecture and database design
2011-01-01
Background Integration of metabolic pathway resources and regulatory metabolic network models, and deploying new tools on the integrated platform, can help perform more effective and efficient systems biology research on the regulation of metabolic networks. Therefore, the tasks of (a) integrating regulatory metabolic networks and existing models under a single database environment and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) has been built and released. The PathCase-SB database provides data and an API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data from selected biological data sources on the web (currently, the BioModels database and KEGG) and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions The PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889
ERIC Educational Resources Information Center
Cardenas-Claros, Monica Stella; Gruba, Paul A.
2013-01-01
This paper proposes a theoretical framework for the conceptualization and design of help options in computer-based second language (L2) listening. Based on four empirical studies, it aims at clarifying both conceptualization and design (CoDe) components. The elements of conceptualization consist of a novel four-part classification of help options:…
In-Memory Graph Databases for Web-Scale Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Morari, Alessandro; Weaver, Jesse R.
RDF databases have emerged as one of the most relevant ways of organizing, integrating, and managing exponentially growing, often heterogeneous, and not rigidly structured data for a variety of scientific and commercial fields. In this paper we discuss the solutions integrated in GEMS (Graph database Engine for Multithreaded Systems), a software framework for implementing RDF databases on commodity, distributed-memory high-performance clusters. Unlike the majority of current RDF databases, GEMS has been designed from the ground up to primarily employ graph-based methods. This is reflected in all the layers of its stack. The GEMS framework is composed of: a SPARQL-to-C++ compiler, a library of data structures and related methods to access and modify them, and a custom runtime providing lightweight software multithreading, network message aggregation, and a partitioned global address space. We provide an overview of the framework, detailing its components and how they have been closely designed and customized to address issues of graph methods applied to large-scale datasets on clusters. We discuss in detail the principles that enable automatic translation of queries (expressed in SPARQL, the query language of choice for RDF databases) to graph methods, and identify differences with respect to other RDF databases.
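The core idea of translating a SPARQL query into graph traversals, rather than relational joins, can be illustrated with a minimal sketch (this is not the GEMS implementation; the triples, index layout, and query are invented for illustration). A two-triple basic graph pattern compiles naturally to nested adjacency lookups:

```python
# Hypothetical sketch: a SPARQL basic graph pattern matched by direct
# graph traversal over a (subject, predicate) adjacency index.
from collections import defaultdict

triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("bob", "worksAt", "acme"),
]

# adjacency index: (subject, predicate) -> list of objects
index = defaultdict(list)
for s, p, o in triples:
    index[(s, p)].append(o)

def query_friends_employers(person):
    """Pattern `?x knows ?y . ?y worksAt ?z` compiled to nested traversals."""
    results = []
    for y in index[(person, "knows")]:      # first triple pattern
        for z in index[(y, "worksAt")]:     # second pattern, joined on ?y
            results.append((y, z))
    return results

print(query_friends_employers("alice"))  # -> [('bob', 'acme')]
```

A compiler such as the one described above would emit C++ with the same traversal structure, distributed across cluster nodes; the sketch only shows the join-free access pattern.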
DockScreen: A database of in silico biomolecular interactions to support computational toxicology
We have developed DockScreen, a database of in silico biomolecular interactions designed to enable rational molecular toxicological insight within a computational toxicology framework. This database is composed of chemical/target (receptor and enzyme) binding scores calculated by...
GMODWeb: a web framework for the generic model organism database
O'Connor, Brian D; Day, Allen; Cain, Scott; Arnaiz, Olivier; Sperling, Linda; Stein, Lincoln D
2008-01-01
The Generic Model Organism Database (GMOD) initiative provides species-agnostic data models and software tools for representing curated model organism data. Here we describe GMODWeb, a GMOD project designed to speed the development of model organism database (MOD) websites. Sites created with GMODWeb provide integration with other GMOD tools and allow users to browse and search through a variety of data types. GMODWeb was built using the open source Turnkey web framework and is available from . PMID:18570664
A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.
Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel
2018-02-20
Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints, such as control theory, software engineering, and safety. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects or gives less weight to other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on timing tolerance contracts to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification, by monitoring the execution of the models on the target. The framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model interpretation, which enforces timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
Lanthanide co-ordination frameworks: Opportunities and diversity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Robert J.; Long, De-Liang; Hubberstey, Peter
2005-08-15
Significant successes have been achieved over recent years in preparing co-ordination framework polymers that show macroscopic material properties, but in the vast majority of cases this has been accomplished with d-block metal-based systems. Lanthanide co-ordination frameworks also offer attractive properties in terms of their potential applications as luminescent, non-linear optical, and porous materials. However, lanthanide-based systems have been far less studied to date than their d-block counterparts. One possible reason for this is that the co-ordination spheres of lanthanide cations are more difficult to control and, in the absence of design strategies for lanthanide co-ordination frameworks, it is significantly more difficult to target materials with specific properties. However, this article highlights some of the exciting possibilities that have emerged from the earliest investigations in this field, with new topological families of compounds being discovered from relatively simple framework components, including unusual eight-, seven-, and five-connected framework systems. Our own research, as well as that of others, is leading to a much greater appreciation of the factors that control framework formation and the resultant observed topologies of these polymers. As this understanding develops, targeting particular framework types will become more straightforward and the development of designed polyfunctional materials more accessible. Thus, lanthanide co-ordination frameworks have the potential to open up previously unexplored directions for materials chemistry. This article focuses on the underlying concepts for the construction of these enticing and potentially highly important materials.
Interactive Exploration for Continuously Expanding Neuron Databases.
Li, Zhongyu; Metaxas, Dimitris N; Lu, Aidong; Zhang, Shaoting
2017-02-15
This paper proposes a novel framework to help biologists explore and analyze neurons based on retrieval of data from neuron morphological databases. In recent years, continuously expanding neuron databases have provided a rich source of information for associating neuronal morphologies with their functional properties. We design a coarse-to-fine framework for efficient and effective data retrieval from large-scale neuron databases. At the coarse level, for efficiency at large scale, we employ a binary coding method to compress morphological features into binary codes of tens of bits. Short binary codes allow for real-time similarity searching in Hamming space. Because the neuron databases are continuously expanding, it is inefficient to re-train the binary coding model from scratch when new neurons are added. To solve this problem, we extend binary coding with online updating schemes that consider only the newly added neurons and update the model on the fly, without accessing the whole neuron database. At the fine-grained level, we bring domain experts/users into the framework, who can give relevance feedback on the binary-coding-based retrieval results. This interactive strategy can improve retrieval performance by re-ranking the coarse results, where we design a new similarity measure that takes the feedback into account. Our framework is validated on more than 17,000 neuron cells, showing promising retrieval accuracy and efficiency. Moreover, we demonstrate its use case in assisting biologists to identify and explore unknown neurons.
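The coarse retrieval step described above rests on a simple mechanism: once features are compressed to short binary codes, similarity search reduces to Hamming distance, which is a cheap XOR-and-popcount. A minimal sketch (the codes and neuron names are invented; the paper's actual coding model learns the bits from morphological features):

```python
# Illustrative sketch of Hamming-space retrieval over compact binary codes.
def hamming(a, b):
    # popcount of the XOR gives the number of differing bits
    return bin(a ^ b).count("1")

# 16-bit codes standing in for compressed morphological features
database = {
    "neuron_a": 0b1010101010101010,
    "neuron_b": 0b1010101010101011,  # 1 bit away from neuron_a
    "neuron_c": 0b0101010101010101,  # maximally different
}

def nearest(query_code, k=2):
    """Rank stored neurons by Hamming distance to the query code."""
    ranked = sorted(database, key=lambda n: hamming(query_code, database[n]))
    return ranked[:k]

print(nearest(0b1010101010101010))  # -> ['neuron_a', 'neuron_b']
```

Because each comparison touches only tens of bits, this scan stays real-time even for large databases, which is what makes the coarse level practical; the fine level then re-ranks this shortlist with user feedback.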
Designing an Integrated System of Databases: A Workstation for Information Seekers.
ERIC Educational Resources Information Center
Micco, Mary; Smith, Irma
1987-01-01
Proposes a framework for the design of a full function workstation for information retrieval based on study of information seeking behavior. A large amount of local storage of the CD-ROM jukebox variety and full networking capability to both local and external databases are identified as requirements of the prototype. (MES)
A Database of Young Star Clusters for Five Hundred Galaxies
NASA Astrophysics Data System (ADS)
Whitmore, Brad
2009-07-01
We propose to use the source lists developed as part of the Hubble Legacy Archive (HLA: Data Release 1, February 8, 2008) to obtain a large (N 50 galaxies for multi-wavelength, N 500 galaxies for ACS F814W), uniform (ACS + WFPC2 + NICMOS; DAOphot used for object detection) database of super star clusters in nearby star-forming galaxies in order to address two fundamental astronomical questions: 1) To what degree is the cluster luminosity (and mass) function of star clusters universal? 2) What fraction of super star clusters are "missing" in optical studies (i.e., are hidden by dust)? This database will also support comparisons with new Monte Carlo simulations that have been independently developed in the past few years by co-I Larsen and PI Whitmore, and will be used to test the Whitmore, Chandar & Fall (2007) framework designed to understand the demographics of star clusters in all star-forming galaxies. The catalogs will increase the number of galaxies with measured mass and luminosity functions by an order of magnitude, and will provide a powerful new tool for comparative studies, both ours and the community's.
LCGbase: A Comprehensive Database for Lineage-Based Co-regulated Genes.
Wang, Dapeng; Zhang, Yubin; Fan, Zhonghua; Liu, Guiming; Yu, Jun
2012-01-01
Animal genes of different lineages, such as vertebrates and arthropods, are well organized and blended into dynamic chromosomal structures that represent a primary regulatory mechanism for body development and cellular differentiation. The majority of genes in a genome are actually clustered; such clusters are evolutionarily stable to different extents and biologically meaningful when evaluated among genomes within and across lineages. Until now, many questions concerning gene organization, such as what the minimal number of genes in a cluster is and what driving force leads to gene co-regulation, remain to be addressed. Here, we provide a user-friendly database, LCGbase (a comprehensive database for lineage-based co-regulated genes), hosting information on the evolutionary dynamics of gene clustering and ordering within animal kingdoms in two different lineages: vertebrates and arthropods. The database is constructed on a web-based Linux-Apache-MySQL-PHP framework with an effective interactive user-inquiry service. Compared to other gene annotation databases with similar purposes, our database has three notable advantages. First, our database is inclusive, including all high-quality genome assemblies of vertebrates and representative arthropod species. Second, it is human-centric, since we map all gene clusters from other genomes in an order of lineage ranks (such as primates, mammals, warm-blooded animals, and reptiles) onto the human genome, and build the database up from well-defined gene pairs (a minimal cluster where the two adjacent genes are oriented as co-directional, convergent, or divergent pairs) to large gene clusters. Furthermore, users can search for any adjacent genes and their detailed annotations. Third, the database provides flexible parameter definitions, such as the distance between the transcription start sites of two adjacent genes, which is extendable to genes flanking the cluster across species.
We also provide useful tools for sequence alignment, gene ontology (GO) annotation, promoter identification, gene expression (co-expression), and evolutionary analysis. This database not only provides a way to define lineage-specific and species-specific gene clusters but also facilitates future studies on gene co-regulation, epigenetic control of gene expression (DNA methylation and histone marks), and chromosomal structures in a context of gene clusters and species evolution. LCGbase is freely available at http://lcgbase.big.ac.cn/LCGbase.
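The minimal gene-pair classification that LCGbase builds on can be sketched directly from the strand orientations of two adjacent genes (a hypothetical sketch; the gene tuples and function name below are invented, and the real database works from full genome annotations):

```python
# Hedged sketch of adjacent-gene-pair classification: two neighboring
# genes are co-directional (same strand), convergent (+ then -, so
# transcription runs toward the intergenic gap), or divergent (- then +).
def classify_pair(left, right):
    """left/right are (start, end, strand) tuples, ordered by position."""
    left_strand, right_strand = left[2], right[2]
    if left_strand == right_strand:
        return "co-directional"
    if left_strand == "+" and right_strand == "-":
        return "convergent"   # 3' ends face each other across the gap
    return "divergent"        # 5' ends face each other across the gap

print(classify_pair((100, 900, "+"), (1200, 2000, "+")))  # -> co-directional
print(classify_pair((100, 900, "+"), (1200, 2000, "-")))  # -> convergent
print(classify_pair((100, 900, "-"), (1200, 2000, "+")))  # -> divergent
```

Divergent pairs are of particular interest for co-regulation studies because the two genes can share a bidirectional promoter region between their transcription start sites.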
Tuning the Adsorption-Induced Phase Change in the Flexible Metal–Organic Framework Co(bdp)
Taylor, Mercedes K.; Runčevski, Tomče; Oktawiec, Julia; ...
2016-11-02
Metal–organic frameworks that flex to undergo structural phase changes upon gas adsorption are promising materials for gas storage and separations, and achieving synthetic control over the pressure at which these changes occur is crucial to the design of such materials for specific applications. To this end, a new family of materials based on the flexible metal–organic framework Co(bdp) (bdp2– = 1,4-benzenedipyrazolate) has been prepared via the introduction of fluorine, deuterium, and methyl functional groups on the bdp2– ligand, namely, Co(F-bdp), Co(p-F2-bdp), Co(o-F2-bdp), Co(D4-bdp), and Co(p-Me2-bdp). These frameworks are isoreticular to the parent framework and exhibit similar structural flexibility, transitioning from a low-porosity, collapsed phase to high-porosity, expanded phases with increasing gas pressure. Powder X-ray diffraction studies reveal that fluorination of the aryl ring disrupts edge-to-face π–π interactions, which work to stabilize the collapsed phase at low gas pressures, while deuteration preserves these interactions and methylation strengthens them. In agreement with these observations, high-pressure CH4 adsorption isotherms show that the pressure of the CH4-induced framework expansion can be systematically controlled by ligand functionalization, as materials without edge-to-face interactions in the collapsed phase expand at lower CH4 pressures, while frameworks with strengthened edge-to-face interactions expand at higher pressures. This work puts forth a general design strategy relevant to many other families of flexible metal–organic frameworks, which will be a powerful tool in optimizing these phase-change materials for industrial applications.
InverPep: A database of invertebrate antimicrobial peptides.
Gómez, Esteban A; Giraldo, Paula; Orduz, Sergio
2017-03-01
The aim of this work was to construct InverPep, a database specialised in experimentally validated antimicrobial peptides (AMPs) from invertebrates. AMP data contained in InverPep were manually curated from other databases and the scientific literature. MySQL was integrated with the development platform Laravel; this framework allows PHP programming to be integrated with HTML and was used to design the InverPep web page's interface. InverPep contains 18 separate fields, including InverPep code, phylum and species source, peptide name, sequence, peptide length, secondary structure, molar mass, charge, isoelectric point, hydrophobicity, Boman index, aliphatic index, and percentage of hydrophobic amino acids. CALCAMPI, an algorithm to calculate the physicochemical properties of multiple peptides simultaneously, was programmed in the Perl language. To date, InverPep contains 702 experimentally validated AMPs from invertebrate species. All of the peptides carry information on their source, physicochemical properties, secondary structure, biological activity, and links to external literature. Most AMPs in InverPep have a length between 10 and 50 amino acids, a positive charge, a Boman index between 0 and 2 kcal/mol, and 30-50% hydrophobic amino acids. InverPep includes 33 AMPs not reported in other databases. In addition, CALCAMPI and a statistical analysis of InverPep data are presented. The InverPep database is available in English and Spanish. InverPep is a useful database for studying invertebrate AMPs, and its information could be used for the design of new peptides. The user-friendly interface of InverPep can be freely accessed via a web-based browser at http://ciencias.medellin.unal.edu.co/gruposdeinvestigacion/prospeccionydisenobiomoleculas/InverPep/public/home_en.
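Two of the simpler descriptors stored per peptide, net charge and hydrophobic fraction, can be sketched in a few lines (a simplified illustration, not the CALCAMPI code, which is in Perl; the charge rule ignores histidine and terminal groups, and the test sequence is invented):

```python
# Hedged sketch of two peptide physicochemical descriptors.
HYDROPHOBIC = set("AILMFWVC")  # a common, simplified hydrophobic set

def net_charge(seq):
    # crude net charge near neutral pH: K/R count as +1, D/E as -1
    return sum(seq.count(aa) for aa in "KR") - sum(seq.count(aa) for aa in "DE")

def hydrophobic_fraction(seq):
    return sum(aa in HYDROPHOBIC for aa in seq) / len(seq)

seq = "KKLAD"  # invented 5-residue example
print(net_charge(seq), hydrophobic_fraction(seq))  # -> 1 0.4
```

Descriptors like these underlie the summary statistics quoted above (positive charge, 30-50% hydrophobic residues); the real pipeline additionally computes isoelectric point, Boman index, and aliphatic index from per-residue scales.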
ERIC Educational Resources Information Center
Bae, Kyoung-Il; Kim, Jung-Hyun; Huh, Soon-Young
2003-01-01
Discusses process information sharing among participating organizations in a virtual enterprise and proposes a federated process framework and system architecture that provide a conceptual design for effective implementation of process information sharing supporting the autonomy and agility of the organizations. Develops the framework using an…
Informatics approaches in the Biological Characterization of ...
Adverse Outcome Pathways (AOPs) are a conceptual framework to characterize toxicity pathways by a series of mechanistic steps from a molecular initiating event to population outcomes. This framework helps to direct risk assessment research, for example by aiding in computational prioritization of chemicals, genes, and tissues relevant to an adverse health outcome. We have designed and implemented a computational workflow to access a wealth of public data relating genes, chemicals, diseases, pathways, and species, to provide a biological context for putative AOPs. We selected three AOP case studies: ER/Aromatase Antagonism Leading to Reproductive Dysfunction, AHR1 Activation Leading to Cardiotoxicity, and AChE Inhibition Leading to Acute Mortality, and deduced a taxonomic range of applicability for each AOP. We developed computational tools to automatically access and analyze the pathway activity of AOP-relevant protein orthologs, finding broad similarity among vertebrate species for the ER/Aromatase and AHR1 AOPs, and similarity extending to invertebrate animal species for AChE inhibition. Additionally, we used public gene expression data to find groups of highly co-expressed genes, and compared those groups across organisms. To interpret these findings at a higher level of biological organization, we created the AOPdb, a relational database that mines results from sources including NCBI, KEGG, Reactome, CTD, and OMIM. This multi-source database connects genes,
NASA Technical Reports Server (NTRS)
Plitau, Denis; Prasad, Narasimha S.
2012-01-01
The Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a desired accuracy of 0.3% in carbon dioxide mixing ratio (XCO2) retrievals, requiring careful selection and optimization of the instrument parameters. NASA Langley Research Center (LaRC) is investigating the 1.57 micron carbon dioxide band as well as the 1.26-1.27 micron oxygen bands for our proposed ASCENDS mission requirements investigation. Simulation studies are underway for these bands to select optimum instrument parameters. The simulations are based on a multi-wavelength lidar modeling framework being developed at NASA LaRC to predict the performance of CO2 and O2 sensing from space and airborne platforms. The modeling framework consists of a lidar simulation module and a line-by-line calculation component with interchangeable lineshape routines to test the performance of alternative lineshape models in the simulations. As an option, the line-by-line radiative transfer model (LBLRTM) program may also be used for line-by-line calculations. The modeling framework is being used to perform error analysis, establish optimum measurement wavelengths, and identify the best lineshape models to be used in CO2 and O2 retrievals. Several additional programs for HITRAN database management and related simulations are planned for inclusion in the framework. A description of the modeling framework with selected results of the simulation studies for CO2 and O2 sensing is presented in this paper.
Designing for Peta-Scale in the LSST Database
NASA Astrophysics Data System (ADS)
Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.
2007-10-01
The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
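The "column-narrow, row-deep" tag-table idea described above can be sketched with an in-memory SQLite database (illustrative only: the LSST DMS uses a distributed database, and the table and column names here are invented). The point is that a narrow table extracted from a wide one lets positional queries scan far fewer bytes:

```python
# Hypothetical sketch of vertical partitioning via a narrow "tag" table.
import sqlite3

db = sqlite3.connect(":memory:")
# wide table: many attributes per detected source
db.execute("""CREATE TABLE source (
    id INTEGER PRIMARY KEY, ra REAL, dec REAL,
    flux REAL, psf_width REAL, notes TEXT)""")
# narrow tag table: only the columns positional queries need
db.execute("CREATE TABLE tag (id INTEGER PRIMARY KEY, ra REAL, dec REAL)")

db.executemany("INSERT INTO source VALUES (?,?,?,?,?,?)",
               [(1, 10.1, -5.2, 3.3, 1.1, "x"),
                (2, 10.2, -5.3, 4.4, 1.0, "y")])
# extract the tag table from the wide table (a covering structure:
# the query below is answered without touching `source` at all)
db.execute("INSERT INTO tag SELECT id, ra, dec FROM source")

rows = db.execute("SELECT id FROM tag WHERE ra BETWEEN 10.0 AND 10.15").fetchall()
print(rows)  # -> [(1,)]
```

Horizontal partitioning is the complementary axis: the same narrow table split across many servers by sky region, so range scans run in parallel.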
ERIC Educational Resources Information Center
Neary, Mike; Winn, Joss
2017-01-01
This report provides an interim account of a participatory action research project undertaken during 2015-16. The research brought together scholars, students and expert members of the co-operative movement to design a theoretically informed and practically grounded framework for co-operative higher education that activists, educators and the…
Data management and database framework for the MICE experiment
NASA Astrophysics Data System (ADS)
Martyniak, J.; Nebrensky, J. J.; Rajaram, D.; MICE Collaboration
2017-10-01
The international Muon Ionization Cooling Experiment (MICE) currently operating at the Rutherford Appleton Laboratory in the UK, is designed to demonstrate the principle of muon ionization cooling for application to a future Neutrino Factory or Muon Collider. We present the status of the framework for the movement and curation of both raw and reconstructed data. A raw data-mover has been designed to safely upload data files onto permanent tape storage as soon as they have been written out. The process has been automated, and checks have been built in to ensure the integrity of data at every stage of the transfer. The data processing framework has been recently redesigned in order to provide fast turnaround of reconstructed data for analysis. The automated reconstruction is performed on a dedicated machine in the MICE control room and any reprocessing is done at Tier-2 Grid sites. In conjunction with this redesign, a new reconstructed-data-mover has been designed and implemented. We also review the implementation of a robust database system that has been designed for MICE. The processing of data, whether raw or Monte Carlo, requires accurate knowledge of the experimental conditions. MICE has several complex elements ranging from beamline magnets to particle identification detectors to superconducting magnets. A Configuration Database, which contains information about the experimental conditions (magnet currents, absorber material, detector calibrations, etc.) at any given time has been developed to ensure accurate and reproducible simulation and reconstruction. A fully replicated, hot-standby database system has been implemented with a firewall-protected read-write master running in the control room, and a read-only slave running at a different location. The actual database is hidden from end users by a Web Service layer, which provides platform and programming language-independent access to the data.
Ultra-Structure database design methodology for managing systems biology data and analyses
Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C
2009-01-01
Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research.
Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
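The central Ultra-Structure move, behavior stored as rule rows interpreted by a small generic engine rather than hard-coded logic, can be sketched in miniature (a toy illustration under invented names; the methodology's actual "ruleform" tables are far richer):

```python
# Toy sketch: domain knowledge lives in data rows, not code paths.
rules = [
    # (entity_kind, attribute, value) -- a flat stand-in for a ruleform table
    ("peptide", "source", "mass_spectrum"),
    ("gene", "source", "genome_annotation"),
    ("mapping", "joins", ("peptide", "gene")),
]

def lookup(kind, attribute):
    """Generic engine: scans rule rows instead of branching on hard-coded types."""
    for k, a, v in rules:
        if k == kind and a == attribute:
            return v
    return None

print(lookup("peptide", "source"))  # -> mass_spectrum

# Extending the system to a new data type is a data edit, not a code change:
rules.append(("spectrum", "source", "instrument_file"))
print(lookup("spectrum", "source"))  # -> instrument_file
```

This is the property the Conclusion emphasizes: new data types and domain knowledge enter by inserting rows, with no change to the schema or the interpreting code.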
NASA Astrophysics Data System (ADS)
Haddam, N. A.; Michel, E.; Siani, G.; Cortese, G.; Bostock, H. C.; Duprat, J. M.; Isguder, G.
2016-06-01
We present an improved database of planktonic foraminiferal census counts from the Southern Hemisphere oceans (SHO) from 15°S to 64°S. The SHO database combines three existing databases. Using this SHO database, we investigated dissolution biases that might affect faunal census counts. We suggest a depth/ΔCO3^2- threshold of ~3800 m / ΔCO3^2- = ~ -10 to -5 µmol/kg for the Pacific and Indian Oceans, and ~4000 m / ΔCO3^2- = ~0 to 10 µmol/kg for the Atlantic Ocean, beyond which core-top assemblages can be affected by dissolution and are less reliable for paleo-sea surface temperature (SST) reconstructions. We removed all core tops beyond these thresholds from the SHO database. The resulting database has 598 core tops and is able to reconstruct past SST variations from 2° to 25.5°C, with a root mean square error of 1.00°C for annual temperatures. To inspect how dissolution affects SST reconstruction quality, we tested the database with two "leave-one-out" tests, with and without the deep core tops. We used this database to reconstruct summer SST (SSST) over the last 20 ka on the Southeast Pacific core MD07-3100, using the Modern Analog Technique method. This was compared to the SSST reconstructed using the three databases used to compile the SHO database, showing that the reconstruction using the SHO database is more reliable, as its dissimilarity values are the lowest. The most important aspect here is the importance of a bias-free, geographically rich database. We leave this data set open-ended for future additions; new core tops must be carefully selected, with their chronological frameworks and evidence of dissolution assessed.
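The Modern Analog Technique used above estimates a past SST as a weighted average over the modern core-top assemblages most similar to the fossil assemblage. A minimal sketch (the assemblages and temperatures below are invented; squared-chord distance is one common dissimilarity choice, not necessarily the exact metric the authors used):

```python
# Hedged sketch of the Modern Analog Technique (MAT).
import math

# (relative-abundance assemblage, modern SST in °C) -- invented core tops
core_tops = [
    ([0.6, 0.3, 0.1], 18.0),
    ([0.5, 0.4, 0.1], 16.5),
    ([0.1, 0.2, 0.7], 4.0),   # a cold-water assemblage
]

def squared_chord(p, q):
    """Squared-chord dissimilarity between two faunal assemblages."""
    return sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

def mat_sst(fossil, k=2):
    """Inverse-dissimilarity-weighted mean SST of the k best analogs."""
    ranked = sorted(core_tops, key=lambda ct: squared_chord(fossil, ct[0]))
    best = ranked[:k]
    weights = [1.0 / (squared_chord(fossil, asm) + 1e-9) for asm, _ in best]
    return sum(w * sst for w, (_, sst) in zip(weights, best)) / sum(weights)

print(round(mat_sst([0.55, 0.35, 0.10]), 1))  # a warm-water fossil sample
```

The dissimilarity of the best analogs also serves as the reliability diagnostic mentioned in the abstract: low dissimilarity means the fossil assemblage has close modern counterparts in the database.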
Two rare indium-based porous metal–metalloporphyrin frameworks exhibiting interesting CO2 uptake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Wen-Yang; Zhang, Zhuxiu; Cash, Lindsay
2014-01-13
Two rare indium-based porous metal–metalloporphyrin frameworks (MMPFs), MMPF-7 and MMPF-8, were constructed by self-assembly of In(III) and two custom-designed porphyrin–tetracarboxylate ligands. MMPF-7 and MMPF-8 possess the pts topology and exhibit interesting CO2 adsorption properties.
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective, and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents, and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation of FGIDB. This paper designs the quality elements, evaluation items, and properties of the Fundamental Geographic Information Database step by step, based on the quality model framework. Organically connected, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and for the quality evaluation of the Fundamental Geographic Information Database, and it is of great significance for quality assurance in the design and development stage, for the formulation of requirements in the testing and evaluation stage, and for the construction of a standard system for quality evaluation technology of the Fundamental Geographic Information Database.
ECO: A Framework for Entity Co-Occurrence Exploration with Faceted Navigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halliday, K. D.
2010-08-20
Even as highly structured databases and semantic knowledge bases become more prevalent, a substantial amount of human knowledge is reported as written prose. Typical textual reports, such as news articles, contain information about entities (people, organizations, and locations) and their relationships. Automatically extracting such relationships from large text corpora is a key component of corporate and government knowledge bases. The primary goal of the ECO project is to develop a scalable framework for extracting and presenting these relationships for exploration using an easily navigable faceted user interface. ECO uses entity co-occurrence relationships to identify related entities. The system aggregates and indexes information on each entity pair, allowing the user to rapidly discover and mine relational information.
2015-03-13
A. Lee. “A Programming Model for Time-Synchronized Distributed Real-Time Systems”. In: Proceedings of the Real-Time and Embedded Technology and Applications Symposium. 2007, pp. 259–268. ... From MetroII to Metronomy, Designing Contract-based Function-Architecture Co-simulation Framework for Timing Verification of Cyber-Physical Systems ...
Dash, Bibek
2018-04-26
The present work deals with a density functional theory (DFT) study of porous organic framework materials containing - groups for CO2 capture. In this study, first-principles calculations were performed for CO2 adsorption using N-containing covalent organic framework (COF) models. Ab initio and DFT-based methods were used to characterize the N-containing porous model systems based on their interaction energies upon complexing with CO2 and nitrogen gas. Binding energies (BEs) of CO2 and N2 molecules with the polymer framework were calculated with DFT methods. The hybrid B3LYP and second-order MP2 methods, combined with the Pople 6-31G(d,p) basis set and the correlation-consistent basis sets cc-pVDZ, cc-pVTZ, and aug-cc-pVDZ, were used to calculate BEs. The effect of linker groups in the designed covalent organic framework model system on the CO2 and N2 interactions was studied using quantum calculations.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.
Guided design of copper oxysulfide superconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Chuck-Hou; Birol, Turan; Kotliar, Gabriel
2015-07-01
We describe a framework for designing novel materials that combines modern first-principles electronic-structure tools, materials databases, and evolutionary algorithms capable of exploring large configurational spaces. Guided by the chemical principles introduced by Antipov et al. for the design and synthesis of the Hg-based high-temperature superconductors, we apply our framework to screen 333 proposed compositions and design a new layered copper oxysulfide, Hg(CaS)2CuO2. We evaluate the prospects of superconductivity in this oxysulfide using theories based on charge-transfer energies, orbital distillation, and uniaxial strain.
Designing Online Management Education Courses Using the Community of Inquiry Framework
ERIC Educational Resources Information Center
Weyant, Lee E.
2013-01-01
Online learning has grown as a program delivery option for many colleges and programs of business. The Community of Inquiry (CoI) framework, consisting of three interrelated elements--social presence, cognitive presence, and teaching presence--provides a model to guide business faculty in their online course design. The course design of an online…
A Proposed Framework for Collaborative Design in a Virtual Environment
NASA Astrophysics Data System (ADS)
Breland, Jason S.; Shiratuddin, Mohd Fairuz
This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will include, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space. This paper also discusses proposed testing to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and presents results from a pilot test.
Dziadkowiec, Oliwier; Callahan, Tiffany; Ozkaynak, Mustafa; Reeder, Blaine; Welton, John
2016-01-01
Objectives: We examine the following: (1) the appropriateness of using a data quality (DQ) framework developed for relational databases as a data-cleaning tool for a data set extracted from two EPIC databases, and (2) the differences in statistical parameter estimates between a data set cleaned with the DQ framework and a data set not cleaned with it. Background: The use of data contained within electronic health records (EHRs) has the potential to open doors for a new wave of innovative research. Without adequate preparation of such large data sets for analysis, the results might be erroneous, which might affect clinical decision-making or the results of Comparative Effectiveness Research studies. Methods: Two emergency department (ED) data sets extracted from EPIC databases (adult ED and children's ED) were used as examples for examining the five concepts of DQ based on a DQ assessment framework designed for EHR databases. The first data set contained 70,061 visits; the second contained 2,815,550 visits. SPSS Syntax examples, as well as step-by-step instructions for applying the five key DQ concepts to these EHR database extracts, are provided. Conclusions: SPSS Syntax to address each of the DQ concepts proposed by Kahn et al. (2012) was developed. The data set cleaned using Kahn's framework yielded more accurate results than the data set cleaned without it. Future plans involve creating functions in the R language for cleaning data extracted from the EHR, as well as an R package that combines DQ checks with missing-data analysis functions. PMID:27429992
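The paper's actual checks are written in SPSS Syntax; as an illustrative sketch only (the column names and thresholds below are hypothetical, not taken from the study), Kahn-style completeness, plausibility, and conformance checks on an ED extract might look like this in Python:

```python
import pandas as pd

# Hypothetical ED-visit extract; column names are illustrative, not from the paper.
visits = pd.DataFrame({
    "visit_id": [1, 2, 3, 4],
    "age":      [34, -2, 67, 130],          # -2 and 130 are implausible
    "arrival":  ["2015-01-03", "2015-02-30", "2015-03-14", "2015-04-01"],
})

# Completeness: fraction of missing values per column.
completeness = visits.isna().mean()

# Plausibility: flag ages outside a clinically sensible range.
implausible_age = visits[(visits["age"] < 0) | (visits["age"] > 120)]

# Conformance: dates must parse; invalid ones (e.g. Feb 30) become NaT.
visits["arrival_parsed"] = pd.to_datetime(visits["arrival"], errors="coerce")
bad_dates = visits[visits["arrival_parsed"].isna()]

print(len(implausible_age), len(bad_dates))  # -> 2 1
```

Flagged rows would then be reviewed or excluded before computing parameter estimates, which is the comparison the study makes between cleaned and uncleaned data sets.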
NASA Astrophysics Data System (ADS)
Murumkar, Prashant Revan; Zambre, Vishal Prakash; Yadav, Mange Ram
2010-02-01
A chemical feature-based pharmacophore model was developed for tumor necrosis factor-α converting enzyme (TACE) inhibitors. A five-point pharmacophore model was developed, having two hydrogen-bond acceptors (A), one hydrogen-bond donor (D), and two aromatic rings (R) with discrete geometries as pharmacophoric features. The generated pharmacophore model was then utilized for in silico screening of a database. It was validated using four compounds with proven TACE inhibitory activity that were grafted into the database; these compounds mapped well onto the five listed pharmacophoric features. The validated pharmacophore model was also used for alignment of molecules in CoMFA and CoMSIA analyses. The contour maps of the CoMFA/CoMSIA models were utilized to provide structural insight for improving the activity of potential novel TACE inhibitors. The pharmacophore model could be used for in silico screening of any commercial or in-house database to identify TACE-inhibiting lead compounds, and the leads so identified could be optimized using the developed CoMSIA model. The present work highlights the considerable potential of two mutually complementary ligand-based drug design techniques (pharmacophore mapping and 3D-QSAR analysis), using TACE inhibitors as prototype biologically active molecules.
Daily Migraine Prevention and Its Influence on Resource Utilization in the Military Health System
2006-08-01
Database and employed a one-group pretest-posttest design of patients exposed to prevention. Each patient was followed over 18 months (6 months prior to ...). Report contents include: Framework; Chapter IV: Research Design and Methodology (Overview of Design, Data Collection, Research Hypotheses).
Development of the Tensoral Computer Language
NASA Technical Reports Server (NTRS)
Ferziger, Joel; Dresselhaus, Eliot
1996-01-01
The research scientist or engineer wishing to perform large-scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods, and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very high-level: tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language: database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
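Tensoral itself is not publicly available, but the central idea, that a single whole-field tensor expression replaces many lines of loop code, can be sketched by analogy in Python with NumPy (the field and quantity chosen here are invented for illustration, not taken from the report):

```python
import numpy as np

# A "tensor field" on a 3-D grid: one 3-vector per grid point.
nx, ny, nz = 8, 8, 8
u = np.random.default_rng(0).standard_normal((nx, ny, nz, 3))

# A whole-database operation in the Tensoral spirit: one expression
# computes the pointwise kinetic energy 0.5 * |u|^2 over the entire grid,
# where a conventional language would need triply nested loops.
energy = 0.5 * np.sum(u * u, axis=-1)

assert energy.shape == (nx, ny, nz)
```

The compiler's job, as the abstract describes it, is to turn such high-level expressions into scheduled, optimized machine code for the target architecture.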
[Computer aided design for fixed partial denture framework based on reverse engineering technology].
Sun, Yu-chun; Lü, Pei-jun; Wang, Yong
2006-03-01
To explore a computer-aided design (CAD) route for the framework of a domestic fixed partial denture (FPD) and to identify a suitable method of 3-D CAD. The working area of a dentition model was scanned with a 3-D mechanical scanner. Using reverse engineering (RE) software, margin and border curves were extracted, and several reference curves were created to establish the dimensions and location of the pontic framework, which was taken from the standard database. The shoulder parts of the retainers were created after the axial surfaces were constructed. The connecting areas, axial line, and curved surface of the framework connector were created last. The framework of a three-unit FPD was designed with RE technology, showing smooth surfaces and continuous contours. The design route is practical. The result of this study is significant in theory and practice, and will provide a reference for establishing a computer-aided design/computer-aided manufacturing (CAD/CAM) system for domestic FPDs.
Lee, Woo Ram; Kim, Jeong Eun; Lee, Sung Jin; Kang, Minjung; Kang, Dong Won; Lee, Hwa Young; Hiremath, Vishwanath; Seo, Jeong Gil; Jin, Hailian; Moon, Dohyun; Cho, Moses; Jung, Yousung; Hong, Chang Seop
2018-05-25
For real-world postcombustion applications in the mitigation of CO2 emissions using dry sorbents, adsorption and desorption behaviors should be controlled to design and fabricate prospective materials with optimal CO2 performance. Herein, we prepared diamine-functionalized Mg2(dobpdc) (H4dobpdc = 4,4'-dihydroxy-(1,1'-biphenyl)-3,3'-dicarboxylic acid) (1-diamine) with ethylenediamine (en), primary-secondary (N-ethylethylenediamine, een, and N-isopropylethylenediamine, ipen), primary-tertiary, and secondary-secondary diamines. A slight alteration of the number of alkyl substituents on the diamines and their alkyl chain length systematically dictates the desorption temperature (Tdes) at 100% CO2, the desorption characteristics, and ΔT, resulting in tuning of the working capacity. The existence of bulky substituents on the diamines improves the framework stability upon exposure to O2, SO2, and water vapor, relevant to real flue-gas conditions. Bulky substituents are also responsible for an interesting two-step behavior observed in the ipen case, as revealed by DFT calculations. Among the diamine-appended metal-organic frameworks, 1-een, which has the required adsorption and desorption properties, is a promising material for sorbent-based CO2 capture processes. Hence, CO2 performance and framework durability can be tailored by the judicious selection of the diamine structure, which enables property design at will and facilitates the development of desirable CO2-capture materials. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
SBROME: a scalable optimization and module matching framework for automated biosystems design.
Huynh, Linh; Tsoukalas, Athanasios; Köppe, Matthias; Tagkopoulos, Ilias
2013-05-17
The development of a scalable framework for biodesign automation is a formidable challenge given the expected increase in part availability and the ever-growing complexity of synthetic circuits. To allow for (a) the use of previously constructed and characterized circuits or modules and (b) the implementation of designs that can scale up to hundreds of nodes, we here propose a divide-and-conquer Synthetic Biology Reusable Optimization Methodology (SBROME). An abstract user-defined circuit is first transformed and matched against a module database that incorporates circuits that have previously been experimentally characterized. The resulting circuit is then decomposed into subcircuits that are populated with the set of parts that best approximate the desired function. Finally, all subcircuits are characterized and deposited back into the module database for future reuse. We successfully applied SBROME to two alternative designs of a modular 3-input multiplexer that utilize pre-existing logic gates and characterized biological parts.
Knowledge translation within a population health study: how do you do it?
2013-01-01
Background: Despite the considerable and growing body of knowledge translation (KT) literature, there are few methodologies sufficiently detailed to guide an integrated KT research approach for a population health study. This paper argues for a clearly articulated collaborative KT approach to be embedded within the research design from the outset. Discussion: Population health studies are complex in their own right, and strategies to engage the local community in adopting new interventions are often fraught with considerable challenges. In order to maximise the impact of population health research, more explicit KT strategies need to be developed from the outset. We present four propositions, arising from our work in developing a KT framework for a population health study. These cover the need for an explicit theory-informed conceptual framework; formalizing collaborative approaches within the design; making explicit the roles of both the stakeholders and the researchers; and clarifying what counts as evidence. From our deliberations on these propositions, our own co-creating (co-KT) Framework emerged, in which KT is defined as both a theoretical and practical framework for actioning the intent of researchers and communities to co-create, refine, implement and evaluate the impact of new knowledge that is sensitive to the context (values, norms and tacit knowledge) where it is generated and used. The co-KT Framework has five steps: initial contact and framing the issue; refining and testing knowledge; interpreting, contextualising and adapting knowledge to the local context; implementing and evaluating; and finally, embedding and translating new knowledge into practice. Summary: Although descriptions of how to incorporate KT into research designs are increasing, current theoretical and operational frameworks do not generally span a holistic process from knowledge co-creation to knowledge application and implementation within one project.
Population health studies may have greater health impact when KT is incorporated early and explicitly into the research design. This, we argue, will require that particular attention be paid to collaborative approaches, stakeholder identification and engagement, the nature and sources of evidence used, and the role of the research team working with the local study community. PMID:23694753
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
2013-04-15
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
ERIC Educational Resources Information Center
Sutton, Jann Marie
2017-01-01
As institutions continue to expand their online learning programs, it becomes increasingly important to identify research-based strategies to support their design. Numerous professional organizations provide guidance to institutions to direct the mechanics of online delivery. The Community of Inquiry (CoI) framework (Garrison, Anderson, Archer,…
Concepts and data model for a co-operative neurovascular database.
Mansmann, U; Taylor, W; Porter, P; Bernarding, J; Jäger, H R; Lasjaunias, P; Terbrugge, K; Meisel, J
2001-08-01
The problems of clinical management of neurovascular diseases are very complex, owing to the chronic character of the diseases and a long history of symptoms and diverse treatments. If patients are to benefit from treatment, then treatment decisions have to rely on reliable and accurate knowledge of the natural history of the disease and the various treatments. Recent developments in statistical methodology and experience with electronic patient records are used to establish an information infrastructure based on a centralized register. A protocol for collecting data on neurovascular diseases is described, together with the technical and logistical aspects of implementing a neurovascular-disease database. The database is designed as a co-operative tool for audit and research, available to co-operating centres. When a database is linked to systematic patient follow-up, it can be used to study prognosis. Careful analysis of patient outcome is valuable for decision-making.
Database integration in a multimedia-modeling environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorow, Kevin E.
2002-09-02
Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that have to be transferred is kept to a minimum (only the data that fulfill a specific request are provided, as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge: the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.
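The abstract does not give the framework's actual metadata schema, so as a purely hypothetical sketch (every field name, URL, and value below is invented), the modeler/owner division of labor and the minimal-transfer idea could be represented like this:

```python
# Hypothetical metadata record linking one model input to a remote data source.
extraction_plan = {
    "model_variable": "soil_lead_concentration",   # defined by the modeler
    "source": {                                    # defined by the database owner
        "url": "https://example.org/brownfield-db",
        "table": "soil_samples",
        "column": "pb_mg_per_kg",
    },
    "filter": {"site_id": "BF-017"},               # only requested rows transfer
}

def build_query(plan):
    """Turn an extraction plan into the minimal query needed, so only
    data fulfilling the specific request (not the whole source) moves."""
    where = " AND ".join(f"{k} = '{v}'" for k, v in plan["filter"].items())
    return (f"SELECT {plan['source']['column']} "
            f"FROM {plan['source']['table']} WHERE {where}")

print(build_query(extraction_plan))
# -> SELECT pb_mg_per_kg FROM soil_samples WHERE site_id = 'BF-017'
```

The modeler owns the `model_variable` side of the record and the database owner owns the `source` mapping, mirroring the collaborative responsibilities described above.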
Network portal: a database for storage, analysis and visualization of biological networks
Turkarslan, Serdar; Wurtmann, Elisabeth J.; Wu, Wei-Ju; Jiang, Ning; Bare, J. Christopher; Foley, Karen; Reiss, David J.; Novichkov, Pavel; Baliga, Nitin S.
2014-01-01
The ease of generating high-throughput data has enabled investigations into organismal complexity at the systems level through the inference of networks of interactions among the various cellular components (genes, RNAs, proteins and metabolites). The wider scientific community, however, currently has limited access to tools for network inference, visualization and analysis because these tasks often require advanced computational knowledge and expensive computing resources. We have designed the network portal (http://networks.systemsbiology.net) to serve as a modular database for the integration of user uploaded and public data, with inference algorithms and tools for the storage, visualization and analysis of biological networks. The portal is fully integrated into the Gaggle framework to seamlessly exchange data with desktop and web applications and to allow the user to create, save and modify workspaces, and it includes social networking capabilities for collaborative projects. While the current release of the database contains networks for 13 prokaryotic organisms from diverse phylogenetic clades (4678 co-regulated gene modules, 3466 regulators and 9291 cis-regulatory motifs), it will be rapidly populated with prokaryotic and eukaryotic organisms as relevant data become available in public repositories and through user input. The modular architecture, simple data formats and open API support community development of the portal. PMID:24271392
Working Group 1: Software System Design and Implementation for Environmental Modeling
ISCMEM Working Group One Presentation, presentation with the purpose of fostering the exchange of information about environmental modeling tools, modeling frameworks, and environmental monitoring databases.
Processing SPARQL queries with regular expressions in RDF databases.
Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon
2011-03-29
As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
Wolstenholme, Daniel; Ross, Helen; Cobb, Mark; Bowen, Simon
2017-05-01
To explore, using the example of a project working with older people in an outpatient setting in a large UK NHS teaching hospital, how the constructs of Person Centred Nursing are reflected in interviews with participants in a Co-design-led service improvement project. Person Centred Care and Person Centred Nursing are recognised terms in healthcare. Co-design (sometimes called participatory design) is an approach that seeks to involve all stakeholders in a creative process to deliver the best result, be this a product, a technology, or, in this case, a service. Co-design practice shares some of the underpinning philosophy of Person Centred Nursing and potentially offers methods to aid its implementation. The research design was a qualitative secondary directed analysis. Seven interviews with nurses and older people who had participated in a Co-design-led improvement project in a large teaching hospital were transcribed and analysed. Two researchers analysed the transcripts for codes derived from McCormack and McCance's Person Centred Nursing Framework. The four most frequently expressed codes were as follows: from the prerequisites, knowing self; from care processes, engagement, working with patients' beliefs and values, and shared decision-making; and from expected outcomes, involvement in care. This study describes the Co-design theory and practice that the participants responded to in the interviews and looks at how the co-design activity facilitated elements of the Person Centred Nursing framework. It adds to the rich literature on using emancipatory and transformational approaches to Person Centred Nursing development, and is the first study explicitly exploring the potential contribution of Co-design to this area.
Methods from Co-design allow older people to contribute as equals in a practice development project; co-design methods can facilitate nursing staff in engaging meaningfully with older participants and developing shared understanding and goals. The co-produced outputs of Co-design projects embody and value the expressed beliefs and values of staff and older people. © 2016 The Authors. Journal of Clinical Nursing Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Diaz-Merced, Wanda Liz; Casado, Johanna; Garcia, Beatriz; Aarnio, Alicia; Knierman, Karen; Monkiewicz, Jacqueline
2018-01-01
"Big Data" is a subject that has taken on special relevance today, particularly in astrophysics, where continuous advances in technology are leading to ever larger data sets. A multimodal approach to the perception of astronomical data (achieved through sonification used for the processing of data) increases the detection of signals at very low signal-to-noise ratios and is of special importance for achieving greater inclusion in the field of astronomy. In the last ten years, different software tools have been developed that perform sonification of astronomical data from tables or databases; among them, the best known and in multiplatform development are Sonification Sandbox, MathTrack, and xSonify. In order to determine the accessibility of software, we propose to start by carrying out a conformity analysis against ISO (International Organization for Standardization) 9241-171:2008. This standard establishes the general guidelines that must be taken into account for accessibility in software design, and it applies to software used at work, in public places, and at home. To analyze the accessibility of web databases, we take into account the "Web Content Accessibility Guidelines (WCAG) 2.0", accepted and published by ISO in the ISO/IEC 40500:2012 standard. In this poster, we present a User Centered Design (UCD), Human Computer Interaction (HCI), and User Experience (UX) framework to address a non-segregational provision of access to bibliographic and telemetry databases in astronomy. Our framework is based on an ISO evaluation of a selection of databases such as ADS, Simbad and SDSS. The WCAG 2.0 and ISO 9241-171:2008 should not be taken as absolute accessibility standards: these guidelines are very general, are not absolute, and do not address particularities. They are not a substitute for UCD, HCI and UX design and evaluation.
Based on our results, this research presents the framework for a focus group and qualitative data analysis aimed to lay the foundations for the employment of UCD functionalities on astronomical databases.
A Conceptual Framework for Systematic Reviews of Research in Educational Leadership and Management
ERIC Educational Resources Information Center
Hallinger, Philip
2013-01-01
Purpose: The purpose of this paper is to present a framework for scholars carrying out reviews of research that meet international standards for publication. Design/methodology/approach: This is primarily a conceptual paper focusing on the methodology of conducting systematic reviews of research. However, the paper draws on a database of reviews…
BioMart: a data federation framework for large collaborative projects.
Zhang, Junjun; Haider, Syed; Baran, Joachim; Cros, Anthony; Guberman, Jonathan M; Hsu, Jack; Liang, Yong; Yao, Long; Kasprzyk, Arek
2011-01-01
BioMart is a freely available, open-source, federated database system that provides unified access to disparate, geographically distributed data sources. It is designed to be data-agnostic and platform-independent, such that existing databases can easily be incorporated into the BioMart framework. BioMart allows databases hosted on different servers to be presented seamlessly to users, facilitating collaborative projects between different research groups. BioMart contains several levels of query optimization to efficiently manage large data sets, and offers a diverse selection of graphical user interfaces and application programming interfaces to ensure that queries can be performed in whatever manner is most convenient for the user. The software has now been adopted by a large number of biological databases spanning a wide range of data types, providing a rich source of annotation available to bioinformaticians and biologists alike.
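As a rough, hypothetical sketch of the federation idea (not BioMart's actual API), a single query interface can fan one query out to several independent sources and tag each row with its origin; all class, table, and column names below are invented:

```python
import sqlite3

# Minimal sketch of federated access in the spirit of BioMart:
# one query interface over several independent data sources.
class FederatedMart:
    def __init__(self):
        self.sources = {}

    def register(self, name, conn):
        self.sources[name] = conn

    def query(self, sql):
        """Run the same SQL against every registered source and
        concatenate the rows, tagging each row with its origin."""
        rows = []
        for name, conn in self.sources.items():
            rows.extend((name, *r) for r in conn.execute(sql))
        return rows

def make_source(genes):
    # Each "source" is an independent in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE gene (symbol TEXT, chromosome TEXT)")
    conn.executemany("INSERT INTO gene VALUES (?, ?)", genes)
    return conn

mart = FederatedMart()
mart.register("lab_a", make_source([("TP53", "17")]))
mart.register("lab_b", make_source([("BRCA1", "17"), ("EGFR", "7")]))
print(mart.query("SELECT symbol FROM gene WHERE chromosome = '17'"))
```

A production federation layer would additionally push filters down to each source and optimize cross-source joins, which is where BioMart's query-optimization levels come in.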
Détienne, Françoise; Barcellini, Flore; Baker, Michael; Burkhardt, Jean-Marie; Fréard, Dominique
2012-01-01
This paper presents, illustrates and discusses a generic framework for studying knowledge co-elaboration in online epistemic communities ("OECs"). Our approach is characterised by: considering knowledge co-elaboration as a design activity; distinguishing discussion and production spaces in OECs; characterising participation via the notion of role; and fine-grained analyses of meaning, content and communicative functions in interactions. On this basis, three key issues for ergonomics research on OECs are discussed and illustrated with results from our previous studies on OSS and Wikipedia. One issue concerns the interrelation between design (task) and regulation. Whereas design task-oriented activity is distributed among participants, we illustrate that OECs function with specialised emerging roles of group regulation. However, the task-oriented activity also functions at an interpersonal level, as an interplay of knowledge-based discussion with negotiation of competencies. Another issue concerns the foci of activity on the (designed) knowledge object. Based on a generic task model, we illustrate asymmetry and distinctiveness in participants' task foci. The last issue concerns how design-use mediation is ensured by specific forms of mediation roles in OECs. Finally, we discuss the degree of generality of our framework and draw some perspectives for extending it to other OECs.
Ontology to relational database transformation for web application development and maintenance
NASA Astrophysics Data System (ADS)
Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful
2018-03-01
Ontology is used as knowledge representation while a database is used as a fact recorder in a KMS (Knowledge Management System). In most applications, data are managed in a database system, updated through the application, and then transformed to knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, the application and database can be generated from the ontology. Most existing frameworks generate the application from its database. In this research, the ontology is used for generating the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer. In many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment, and a case study was conducted to prove the concept.
Chao, Yong-lie; Lui, Chang-hong; Li, Ning; Yang, Xiao-yu
2005-02-01
To investigate a kind of Co-Cr-Mo alloy usable both for porcelain-fused-to-metal (PFM) restorations and for cast frameworks of removable partial dentures. The elementary composition of the Co-Cr-Mo alloy was designed and the alloy was produced from raw materials by means of a vacuum melting furnace. The strength, plasticity, hardness and castability of the alloy were examined with a metal tensile test, a Vickers hardness test and a grid casting test, respectively. The microstructure of the Co-Cr-Mo alloy was also inspected by scanning electron microscopy and X-ray diffraction analysis. The elementary composition of the DA9-4 alloy mainly consisted of Co 54%-67%, Cr 21%-26%, Mo 5%-8%, W 5%-8%, Si 1%-3%, Mn 0.1%-0.25% and trace elements. The yield strength of the alloy was 584 MPa, while the tensile strength was 736 MPa. The elongation was 15.0%, the Vickers hardness reached 322, and the casting ratio was 100%. The DA9-4 Co-Cr-Mo alloy for PFM and frameworks shown in this paper can meet clinical demands and achieved the objectives of the experimental plan.
NADM Conceptual Model 1.0 -- A Conceptual Model for Geologic Map Information
2004-01-01
Executive Summary -- The NADM Data Model Design Team was established in 1999 by the North American Geologic Map Data Model Steering Committee (NADMSC) with the purpose of drafting a geologic map data model for consideration as a standard for developing interoperable geologic map-centered databases by state, provincial, and federal geological surveys. The model is designed to be a technology-neutral conceptual model that can form the basis for a web-based interchange format using evolving information technology (e.g., XML, RDF, OWL), and guide implementation of geoscience databases in a common conceptual framework. The intended purpose is to allow geologic information sharing between geologic map data providers and users, independent of local information system implementation. The model emphasizes geoscience concepts and relationships related to information presented on geologic maps. Design has been guided by an informal requirements analysis, documentation of existing databases, technology developments, and other standardization efforts in the geoscience and computer-science communities. A key aspect of the model is the notion that representation of the conceptual framework (ontology) that underlies geologic map data must be part of the model, because this framework changes with time and understanding, and varies between information providers. The top level of the model distinguishes geologic concepts, geologic representation concepts, and metadata. The geologic representation part of the model provides a framework for representing the ontology that underlies geologic map data through a controlled vocabulary, and for establishing the relationships between this vocabulary and a geologic map visualization or portrayal. Top-level geologic classes in the model are Earth material (substance), geologic unit (parts of the Earth), geologic age, geologic structure, fossil, geologic process, geologic relation, and geologic event.
A Python object-oriented framework for the CMS alignment and calibration data
NASA Astrophysics Data System (ADS)
Dawes, Joshua H.; CMS Collaboration
2017-10-01
The Alignment, Calibrations and Databases group at the CMS Experiment delivers Alignment and Calibration Conditions Data to a large set of workflows which process recorded event data and produce simulated events. The current infrastructure for releasing and consuming Conditions Data was designed in the two years of the first LHC long shutdown to respond to use cases from the preceding data-taking period. During the second run of the LHC, new use cases were defined. For the consumption of Conditions Metadata, no common interface existed for the detector experts to use in Python-based custom scripts, resulting in many different querying and transaction management patterns. A new framework has been built to address such use cases: a simple object-oriented tool that detector experts can use to read and write Conditions Metadata when using Oracle and SQLite databases, that provides a homogeneous method of querying across all services. The tool provides mechanisms for segmenting large sets of conditions while releasing them to the production database, allows for uniform error reporting to the client-side from the server-side and optimizes the data transfer to the server. The architecture of the new service has been developed exploiting many of the features made available by the metadata consumption framework to implement the required improvements. This paper presents the details of the design and implementation of the new metadata consumption and data upload framework, as well as analyses of the new upload service’s performance as the server-side state varies.
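The kind of homogeneous querying layer described above can be sketched as follows (a minimal illustration over SQLite only; the class, table, and column names are invented and do not reflect the actual CMS conditions schema):

```python
import sqlite3

# Illustrative sketch (not the actual CMS conditions API): a thin
# object-oriented layer giving the same query method regardless of
# which backend the connection factory provides.
class ConditionsSession:
    def __init__(self, connection_factory):
        # The factory hides Oracle-vs-SQLite details from callers.
        self.conn = connection_factory()

    def tags(self, name_prefix=""):
        """Return conditions tags whose name starts with the prefix."""
        cur = self.conn.execute(
            "SELECT name, time_type FROM tag WHERE name LIKE ?",
            (name_prefix + "%",))
        return [{"name": n, "time_type": t} for n, t in cur]

def sqlite_backend():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tag (name TEXT, time_type TEXT)")
    conn.executemany("INSERT INTO tag VALUES (?, ?)",
                     [("BeamSpot_v1", "Run"), ("Alignment_v2", "Lumi")])
    return conn

session = ConditionsSession(sqlite_backend)
print(session.tags("Beam"))  # [{'name': 'BeamSpot_v1', 'time_type': 'Run'}]
```

Detector experts' scripts would then depend only on methods like `tags()`, which is the uniformity the new framework provides over its previous ad hoc querying patterns.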
A concept ideation framework for medical device design.
Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar
2015-06-01
Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos
2017-11-09
Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
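The L2 (ridge) penalty mentioned above can be illustrated on a one-parameter least-squares fit, where the regularized estimate has a closed form; this toy example is ours, not the paper's microkinetic formulation:

```python
# Toy illustration of an L2 (ridge) penalty: fit y ~ a*x while
# penalizing large |a|. Minimizing
#     sum_i (y_i - a*x_i)**2 + lam * a**2
# gives the closed form  a = sum(x*y) / (sum(x*x) + lam).
def ridge_slope(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]              # exactly y = 2x
print(ridge_slope(xs, ys, 0.0))   # 2.0  (no penalty: ordinary least squares)
print(ridge_slope(xs, ys, 14.0))  # 1.0  (penalty shrinks the slope toward 0)
```

In the paper's setting the parameters are rate and thermodynamic quantities constrained by the microkinetic model, but the shrinkage effect of the penalty term is the same.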
Design of a component-based integrated environmental modeling framework
Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...
Yu, Jia; Wang, Yanlei; Mou, Lihui; Fang, Daliang; Chen, Shimou; Zhang, Suojiang
2018-02-27
Traditional transition-metal oxide (TMO) anodes for lithium-ion batteries face severe volume variation and poor conductivity; herein, a bimetal oxide dual-composite strategy based on a two-dimensional (2D)-mosaic three-dimensional (3D)-gradient design is proposed to address this. Inspired by natural mosaic dominance phenomena, Zn1-xCoxO/ZnCo2O4 2D-mosaic-hybrid mesoporous ultrathin nanosheets serve as building blocks that assemble into a 3D Zn-Co hierarchical framework. Moreover, a series of highly evolved derivative frameworks is controllably synthesized, on the basis of which a facile one-pot synthesis process can be developed. From a component-composite perspective, both Zn1-xCoxO and ZnCo2O4 provide superior conductivity due to the bimetal doping effect, which is verified by density functional theory calculations. From a structure-composite perspective, the 2D-mosaic-hybrid mode gives rise to ladder-type buffering and an electrochemical synergistic effect, thus realizing mutual stabilization and activation between the mosaic pair, especially for Zn1-xCoxO with its higher capacity yet higher expansion. Moreover, the inside-out Zn-Co concentration gradient in the 3D framework and rich oxygen vacancies further greatly enhance Li storage capability and stability. As a result, a high reversible capacity (1010 mA h g-1) and areal capacity (1.48 mA h cm-2) are attained, while ultrastable cyclability is obtained during high-rate and long-term cycles, demonstrating the great potential of our 2D-mosaic 3D-gradient design together with the facile synthesis.
A framework for designing hand hygiene educational interventions in schools.
Appiah-Brempong, Emmanuel; Harris, Muriel J; Newton, Samuel; Gulis, Gabriel
2018-03-01
Hygiene education appears to be the commonest school-based intervention for preventing infectious diseases, especially in the developing world. Nevertheless, there remains a gap in the literature regarding a school-specific, theory-based framework for designing hand hygiene educational interventions in schools. We sought to suggest a framework underpinned by psychosocial theories towards bridging this knowledge gap. Furthermore, we sought to propound a more comprehensive definition of hand hygiene which could guide the conceptualisation of hand hygiene interventions in varied settings. The literature search was guided by a standardized tool, and literature was retrieved on the basis of predetermined inclusion criteria. Databases consulted include PubMed, ERIC, and EBSCOhost (Medline, CINAHL, PsycINFO, etc.). Evidence concerning a theoretical framework to aid the design of school-based hand hygiene educational interventions is summarized narratively. School-based hand hygiene educational interventions seeking to positively influence behavioural outcomes could consider enhancing psychosocial variables including behavioural capacity, attitudes and subjective norms (normative beliefs and motivation to comply). A framework underpinned by formalized psychosocial theories has relevance and could enhance the design of hand hygiene educational interventions, especially in schools.
ERIC Educational Resources Information Center
Fillery-Travis, Annette Jayne
2014-01-01
This paper critically engages with the pedagogical design of a generic professional doctorate programme as a framework for creation of actionable knowledge within the practice of both adviser and candidate. Within this exploration the relational dimensions of the adviser-candidate interaction are identified and their potential impact partially…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yubin; Shankar, Mallikarjun; Park, Byung H.
Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real-world graph queries, such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility for users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
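A much-simplified sketch of the relational-to-graph idea (our illustration with invented data, not the paper's full 3EG algorithm): each foreign-key pair becomes a graph edge, after which a "shared providers" query reduces to a neighborhood intersection rather than a join:

```python
from collections import defaultdict

# Rows of a normalized visit table: (patient_id, provider_id).
# All identifiers are invented for illustration.
visits = [
    ("pat1", "drA"), ("pat1", "drB"),
    ("pat2", "drB"), ("pat3", "drC"),
]

# Build a bipartite adjacency: each foreign-key pair becomes an edge.
provider_to_patients = defaultdict(set)
for patient, provider in visits:
    provider_to_patients[provider].add(patient)

def shared_providers(p1, p2):
    """Providers seen by both patients: a self-join in SQL, a simple
    neighborhood intersection in the graph representation."""
    return sorted(prov for prov, pats in provider_to_patients.items()
                  if p1 in pats and p2 in pats)

print(shared_providers("pat1", "pat2"))  # ['drB']
```

This is the sense in which the graph acts like a pre-computed, de-normalized table: the relationship is materialized as adjacency instead of being recomputed per query.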
A spin transition mechanism for cooperative adsorption in metal-organic frameworks
NASA Astrophysics Data System (ADS)
Reed, Douglas A.; Keitz, Benjamin K.; Oktawiec, Julia; Mason, Jarad A.; Runčevski, Tomče; Xiao, Dianne J.; Darago, Lucy E.; Crocellà, Valentina; Bordiga, Silvia; Long, Jeffrey R.
2017-10-01
Cooperative binding, whereby an initial binding event facilitates the uptake of additional substrate molecules, is common in biological systems such as haemoglobin. It was recently shown that porous solids that exhibit cooperative binding have substantial energetic benefits over traditional adsorbents, but few guidelines currently exist for the design of such materials. In principle, metal-organic frameworks that contain coordinatively unsaturated metal centres could act as both selective and cooperative adsorbents if guest binding at one site were to trigger an electronic transformation that subsequently altered the binding properties at neighbouring metal sites. Here we illustrate this concept through the selective adsorption of carbon monoxide (CO) in a series of metal-organic frameworks featuring coordinatively unsaturated iron(II) sites. Functioning via a mechanism by which neighbouring iron(II) sites undergo a spin-state transition above a threshold CO pressure, these materials exhibit large CO separation capacities with only small changes in temperature. The very low regeneration energies that result may enable more efficient Fischer-Tropsch conversions and extraction of CO from industrial waste feeds, which currently underutilize this versatile carbon synthon. The electronic basis for the cooperative adsorption demonstrated here could provide a general strategy for designing efficient and selective adsorbents suitable for various separations.
Zhang, Peng; Guan, Bu Yuan; Yu, Le; Lou, Xiong Wen David
2017-06-12
Complex metal-organic frameworks used as precursors allow the design and construction of various nanostructured functional materials which might not be accessible by other methods. Here, we develop a sequential chemical etching and sulfurization strategy to prepare well-defined double-shelled zinc-cobalt sulfide (Zn-Co-S) rhombic dodecahedral cages (RDCs). Yolk-shelled zinc/cobalt-based zeolitic imidazolate framework (Zn/Co-ZIF) RDCs are first synthesized by a controlled chemical etching process, followed by a hydrothermal sulfurization reaction to prepare double-shelled Zn-Co-S RDCs. Moreover, the strategy reported in this work enables easy control of the Zn/Co molar ratio in the obtained double-shelled Zn-Co-S RDCs. Owing to the structural and compositional benefits, the obtained double-shelled Zn-Co-S RDCs exhibit enhanced performance with high specific capacitance (1266 F g-1 at 1 A g-1), good rate capability and long-term cycling stability (91% retention over 10,000 cycles) as a battery-type electrode material for hybrid supercapacitors. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Computational materials chemistry for carbon capture using porous materials
NASA Astrophysics Data System (ADS)
Sharma, Abhishek; Huang, Runhong; Malani, Ateeque; Babarao, Ravichandar
2017-11-01
Control over carbon dioxide (CO2) release is extremely important to decrease its hazardous effects on the environment such as global warming, ocean acidification, etc. For CO2 capture and storage at industrial point sources, nanoporous materials offer an energetically viable and economically feasible approach compared to chemisorption in amines. There is a growing need to design and synthesize new nanoporous materials with enhanced capability for carbon capture. Computational materials chemistry offers tools to screen and design cost-effective materials for CO2 separation and storage, and it is less time consuming compared to trial and error experimental synthesis. It also provides a guide to synthesize new materials with better properties for real world applications. In this review, we briefly highlight the various carbon capture technologies and the need of computational materials design for carbon capture. This review discusses the commonly used computational chemistry-based simulation methods for structural characterization and prediction of thermodynamic properties of adsorbed gases in porous materials. Finally, simulation studies reported on various potential porous materials, such as zeolites, porous carbon, metal organic frameworks (MOFs) and covalent organic frameworks (COFs), for CO2 capture are discussed.
NASA Technical Reports Server (NTRS)
Davidson, Eric A.; Nepstad, Daniel C.; Trumbore, Susan E.
1995-01-01
This progress report covers the following efforts initiated during the year: year-round monthly soil CO2 flux measurements were started in both primary and secondary forests and in managed and degraded pastures; root sorting and weighing have begun, and samples from all four ecosystems at Paragominas have been analyzed; regional modeling of soil water dynamics and minimum rooting depth has been done, the RADAMBRASIL soils database has been digitized, and a 20-year record of precipitation for the region has been produced, along with a hydrological ('tipping-bucket') model that will run within a GIS framework; prototype tension lysimeters have been designed and installed in soil pits to begin assessing the importance of DOC as a source of organic matter in deep soils; and many publications, listed in this document, have resulted from this year's research. Two of the papers published are included with this annual report document.
US Army Research Laboratory Visualization Framework Design Document
2016-01-01
This section highlights each module in the ARL-VF, and subsequent sections provide details on how each module interacts. (Fig. 2: the ARL-VF modules, including ConfigAgent, MultiTouch, VizDatabase, VizController, TUIO, VizDaemon, and TestPoint.) ... received by the destination. The sequence diagram in Fig. 4 shows this interaction. Approved for public release; distribution unlimited.
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability of obtaining semantically relevant experimental data and that of performing relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high-throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. An application for hypothesis testing (Xperanto-RDF) for TMA data was designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests with the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
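The retrieve-then-test pattern described above can be sketched in a few lines (our illustration, not Xperanto-RDF itself; the triples, predicate names, and two-group comparison are invented):

```python
import statistics

# RDF-like triples: marker measurements plus group membership.
# All subjects, predicates, and values are invented.
triples = [
    ("case1", "ex:markerLevel", 5.1), ("case1", "ex:group", "tumor"),
    ("case2", "ex:markerLevel", 4.8), ("case2", "ex:group", "tumor"),
    ("case3", "ex:markerLevel", 2.0), ("case3", "ex:group", "normal"),
    ("case4", "ex:markerLevel", 2.3), ("case4", "ex:group", "normal"),
]

def values_for_group(group):
    """Step 1 (the 'SPARQL' step): select marker values for one group."""
    members = {s for (s, p, o) in triples if p == "ex:group" and o == group}
    return [o for (s, p, o) in triples
            if p == "ex:markerLevel" and s in members]

def welch_t(a, b):
    """Step 2 (the statistical step): Welch's two-sample t statistic."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (
        (va / len(a) + vb / len(b)) ** 0.5)

tumor, normal = values_for_group("tumor"), values_for_group("normal")
print(round(welch_t(tumor, normal), 2))  # 13.2
```

In Xperanto-RDF the selection step is a real SPARQL query over the TMA ontology, but the division of labor (semantic retrieval, then a statistical test on the result set) is the same.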
A comprehensive risk assessment framework for offsite transportation of inflammable hazardous waste.
Das, Arup; Gupta, A K; Mazumder, T N
2012-08-15
A framework for risk assessment due to offsite transportation of hazardous wastes is designed based on the type of event that can be triggered by an accident of a hazardous waste carrier. The objective of this study is to design a framework for computing the risk to population associated with offsite transportation of inflammable and volatile wastes. The framework is based on the traditional definition of risk and is designed for conditions where accident databases are not available. The probability-based variable in the risk assessment framework is substituted by a composite accident index proposed in this study. The framework computes the impacts due to a vapour cloud explosion based on the TNO Multi-Energy model. The methodology also estimates the vulnerable population in terms of disability-adjusted life years (DALY), which takes into consideration the demographic profile of the population and the degree of injury in terms of the mortality and morbidity sustained. The methodology is illustrated using a case study of a pharmaceutical industry in the Kolkata metropolitan area. Copyright © 2012 Elsevier B.V. All rights reserved.
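The DALY aggregation referred to above combines years of life lost (YLL) with years lived with disability (YLD); a minimal sketch with hypothetical numbers:

```python
# Hedged sketch of the standard DALY aggregation (not the paper's
# exact parameterization):  DALY = YLL + YLD, where
#   YLL = deaths * standard years of life lost per death
#   YLD = cases * disability weight * average duration (years)
def daly(deaths, years_lost_per_death, cases, disability_weight, duration):
    yll = deaths * years_lost_per_death          # years of life lost
    yld = cases * disability_weight * duration   # years lived with disability
    return yll + yld

# e.g. 2 fatalities (30 years lost each) plus 50 injuries
# (weight 0.2, lasting 5 years on average):
print(daly(2, 30.0, 50, 0.2, 5.0))  # 110.0
```

The framework's demographic profiling enters through these inputs: the age distribution of the exposed population sets the years lost per death, and injury severity sets the disability weights and durations.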
NASA Astrophysics Data System (ADS)
Feng, Xiaogeng; Bo, Xiangjie; Guo, Liping
2018-06-01
Rational synthesis and development of earth-abundant materials with efficient electrocatalytic activity and stability for water splitting is a critical but challenging step for sustainable energy applications. Herein, a family of bimetal (CoFe, CoCu, CoNi) embedded nitrogen-doped carbon frameworks is developed through a facile and simple thermal conversion strategy applied to metal-doped zeolitic imidazolate frameworks. Thanks to the combined advantages of abundant M-N-C species, the modulating action of the secondary metal, cobalt-based electroactive phases, the template effect of MOFs and the unique porous structure, the bimetal embedded nitrogen-doped carbon framework materials exhibit good oxygen and hydrogen evolution catalytic activity. In particular, after tuning the species and molar ratio of the metal sources, the optimal Co0.75Fe0.25 nitrogen-doped carbon framework catalyst requires a low overpotential of only 303 mV to achieve 10 mA cm-2 with a low Tafel slope (39.49 mV dec-1) for the oxygen evolution reaction, which even surpasses commercial RuO2. In addition, the optimal catalyst can function as an efficient bifunctional electrocatalyst for overall water splitting with satisfying activity and stability. This development offers an attractive direction for the rational design and fabrication of porous carbon materials for electrochemical energy applications.
The Mutable Nature of Risk and Acceptability: A Hybrid Risk Governance Framework.
Wong, Catherine Mei Ling
2015-11-01
This article focuses on the fluid nature of risk problems and the challenges it presents to establishing acceptability in risk governance. It introduces an actor-network theory (ANT) perspective as a way to deal with the mutable nature of risk controversies and the configuration of stakeholders. To translate this into a practicable framework, the article proposes a hybrid risk governance framework that combines ANT with integrative risk governance, deliberative democracy, and responsive regulation. This addresses a number of the limitations in existing risk governance models, including: (1) the lack of more substantive public participation throughout the lifecycle of a project; (2) hijacking of deliberative forums by particular groups; and (3) the treatment of risk problems and their associated stakeholders as immutable entities. The framework constitutes a five-stage process of co-selection, co-design, co-planning, and co-regulation to facilitate the co-production of collective interests and knowledge, build capacities, and strengthen accountability in the process. The aims of this article are twofold: conceptually, it introduces a framework of risk governance that accounts for the mutable nature of risk problems and configuration of stakeholders. In practice, this article offers risk managers and practitioners of risk governance a set of procedures with which to operationalize this conceptual approach to risk and stakeholder engagement. © 2015 Society for Risk Analysis.
ERIC Educational Resources Information Center
Funk, Mathias; van Diggelen, Migchiel
2017-01-01
In this paper, the authors describe how a study of a large database of written university teacher feedback in the department of Industrial Design led to the development of a new conceptual framework for feedback and the design of a new feedback tool. This paper focuses on the translation of related work in the area of feedback mechanisms for…
Human factors analysis and classification system-HFACS.
DOT National Transportation Integrated Search
2000-02-01
Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident : reporting systems are not designed around any theoretical framework of human error. As a result, most : accident databases are not conduci...
Chen, Bailian; Reynolds, Albert C.
2018-03-11
We report that CO2 water-alternating-gas (WAG) injection is an enhanced oil recovery method designed to improve sweep efficiency during CO2 injection, with the injected water controlling the mobility of CO2 and stabilizing the gas front. Optimization of CO2-WAG injection is widely regarded as a viable technique for controlling the CO2-oil miscible process. Poor recovery from CO2-WAG injection can be caused by inappropriately designed WAG parameters. In a previous study (Chen and Reynolds, 2016), we proposed an algorithm to optimize the well controls that maximize the life-cycle net present value (NPV). However, the effect of the injection half-cycle lengths of each injector on oil recovery or NPV has not been well investigated. In this paper, an optimization framework based on the augmented Lagrangian method and the newly developed stochastic-simplex-approximate-gradient (StoSAG) algorithm is proposed to explore the possibility of simultaneously optimizing the WAG half-cycle lengths together with the well controls. Finally, the proposed framework is demonstrated with three reservoir examples.
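A hedged sketch of a StoSAG-style gradient estimate: perturb the control vector with Gaussian samples and average the induced objective changes. The toy quadratic objective stands in for the reservoir simulator's NPV, and all names and constants are illustrative, not from the paper:

```python
import random

# Stochastic simplex-approximate-gradient (StoSAG)-style estimator, in
# simplified form: g_i ≈ mean over samples of ΔJ · δ_i / σ².
def stosag_gradient(J, u, n_samples=200, sigma=0.1, seed=0):
    rng = random.Random(seed)
    base = J(u)
    grad = [0.0] * len(u)
    for _ in range(n_samples):
        # Gaussian perturbation of the control vector
        delta = [rng.gauss(0.0, sigma) for _ in u]
        dj = J([ui + di for ui, di in zip(u, delta)]) - base
        for i, di in enumerate(delta):
            grad[i] += dj * di / (sigma ** 2)
    return [g / n_samples for g in grad]

# Toy check: J(u) = -(u0-1)^2 - (u1+2)^2 has gradient (2, -4) at the origin.
J = lambda u: -(u[0] - 1) ** 2 - (u[1] + 2) ** 2
print(stosag_gradient(J, [0.0, 0.0]))  # roughly [2, -4]
```

The estimate would feed a projected-gradient or augmented-Lagrangian update of the well controls and half-cycle lengths.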
Zhang, Jian-Wei; Hu, Man-Cheng; Li, Shu-Ni; Jiang, Yu-Cheng; Zhai, Quan-Guo
2017-01-17
The synthetic design of new porous open-framework materials with pre-designed pore properties for desired applications such as gas adsorption and separation remains challenging. We propose one such class of materials, rod metal-organic frameworks (rod MOFs), which can be tuned by using rod secondary building units (rod SBUs) with different geometrical and chemical features. Our approach takes advantage of readily accessible metal-triazolate 1-D motifs as rod SBUs, combined with dicarboxylate ligands, to prepare target rod MOFs. Herein we report three such metal-triazolate-dicarboxylate frameworks (SNNU-21, -22 and -23). During the formation of these three MOFs, Cd or Zn ions are first connected by 1,2,4-triazole through the N1,N2,N4-mode to form 1-D metal-organic ribbon-like rod SBUs; each rod SBU then joins four adjacent rod SBUs via eight BDC linkers to give 3-D microporous frameworks. Tuned by the different NH2 groups on the metal-triazolate rod SBUs, different space groups, pore sizes and shapes are observed for SNNU-21-23. All of these rod MOFs show not only remarkable CO2 uptake capacity but also high CO2-over-CH4 and C2-hydrocarbon-over-CH4 selectivity under ambient conditions. In particular, SNNU-23 exhibits a very high isosteric heat of adsorption (Qst) for C2H2 (62.2 kJ mol-1), which outperforms the values of all MOF materials reported to date, including the famous MOF-74-Co.
ERIC Educational Resources Information Center
Pellas, Nikolaos
2017-01-01
The combination of Open Sim and Scratch4OS can be a worthwhile innovation for introductory programming courses, using a Community of Inquiry (CoI) model as a theoretical instructional design framework. This empirical study had a threefold purpose to present: (a) an instructional design framework for the beneficial formalization of a virtual…
Research progress and hotspot analysis of spatial interpolation
NASA Astrophysics Data System (ADS)
Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li
2018-02-01
In this paper, literature related to spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and visualization analysis is carried out on the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation research has passed through three stages: slow development, steady development and rapid development. Eleven clustering groups interact with one another, the main lines of convergence being spatial interpolation theory, the practical application and case study of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research system framework; it is strongly interdisciplinary and is widely applied in various fields.
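The keyword co-occurrence counting behind such bibliometric maps can be sketched in a few lines; the records below are toy examples, not the Web of Science data:

```python
from collections import Counter
from itertools import combinations

# Each record contributes one edge per pair of keywords it lists;
# edge weights across records give the co-occurrence network.
records = [
    {"kriging", "IDW", "GIS"},
    {"kriging", "accuracy"},
    {"IDW", "GIS", "accuracy"},
]
edges = Counter()
for kws in records:
    for a, b in combinations(sorted(kws), 2):  # sort for a canonical pair order
        edges[(a, b)] += 1

print(edges[("GIS", "IDW")])  # → 2
```

The same counting applies to co-country, co-category and co-citation networks, with countries, categories or cited references in place of keywords.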
Real-time Author Co-citation Mapping for Online Searching.
ERIC Educational Resources Information Center
Lin, Xia; White, Howard D.; Buzydlowski, Jan
2003-01-01
Describes the design and implementation of a prototype visualization system, AuthorLink, to enhance author searching. AuthorLink is based on author co-citation analysis and visualization mapping algorithms. AuthorLink produces interactive author maps in real time from a database of 1.26 million records supplied by the Institute for Scientific…
The SBOL Stack: A Platform for Storing, Publishing, and Sharing Synthetic Biology Designs.
Madsen, Curtis; McLaughlin, James Alastair; Mısırlı, Göksel; Pocock, Matthew; Flanagan, Keith; Hallinan, Jennifer; Wipat, Anil
2016-06-17
Recently, synthetic biologists have developed the Synthetic Biology Open Language (SBOL), a data exchange standard for descriptions of genetic parts, devices, modules, and systems. The goals of this standard are to allow scientists to exchange designs of biological parts and systems, to facilitate the storage of genetic designs in repositories, and to facilitate the description of genetic designs in publications. In order to achieve these goals, the development of an infrastructure to store, retrieve, and exchange SBOL data is necessary. To address this problem, we have developed the SBOL Stack, a Resource Description Framework (RDF) database specifically designed for the storage, integration, and publication of SBOL data. This database allows users to define a library of synthetic parts and designs as a service, to share SBOL data with collaborators, and to store designs of biological systems locally. The database also allows external data sources to be integrated by mapping them to the SBOL data model. The SBOL Stack includes two Web interfaces: the SBOL Stack API and SynBioHub. While the former is designed for developers, the latter allows users to upload new SBOL biological designs, download SBOL documents, search by keyword, and visualize SBOL data. Since the SBOL Stack is based on semantic Web technology, the inherent distributed querying functionality of RDF databases can be used to allow different SBOL stack databases to be queried simultaneously, and therefore, data can be shared between different institutes, centers, or other users.
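As a loose illustration of the subject-predicate-object triple model that an RDF store such as the SBOL Stack builds on, here is a toy pattern matcher; the identifiers are invented for the example and are not real SBOL URIs:

```python
# Toy RDF-style triple store: every statement is (subject, predicate, object),
# and a query is a pattern with wildcards, as in SPARQL triple patterns.
triples = [
    ("part:pTetR", "rdf:type", "sbol:ComponentDefinition"),
    ("part:pTetR", "sbol:role", "promoter"),
    ("part:GFP", "rdf:type", "sbol:ComponentDefinition"),
    ("part:GFP", "sbol:role", "CDS"),
]

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match(p="sbol:role", o="promoter"))  # → [('part:pTetR', 'sbol:role', 'promoter')]
```

Because every store exposes the same triple model, the federated querying the abstract mentions amounts to running one pattern over several stores and merging the results.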
Huber, Lara
2011-06-01
In the neurosciences, digital databases are increasingly becoming important tools for rendering and distributing data. This development is due to the growing impact of imaging-based trial design in cognitive neuroscience, encompassing morphological as well as functional imaging technologies. As the case of the 'Laboratory of Neuro Imaging' (LONI) shows, databases are attributed a specific epistemological power: since the 1990s, databasing has been seen to foster the integration of neuroscientific data, although local regimes of data production, manipulation and interpretation are also challenging this development. Databasing in the neurosciences goes along with the introduction of new structures for integrating local data, hence establishing digital spaces of knowledge (epistemic spaces): at this stage, inherent norms of digital databases are affecting regimes of imaging-based trial design, for example clinical research into Alzheimer's disease.
NASA Astrophysics Data System (ADS)
Wang, Tao; Shi, Li; Tang, Jing; Malgras, Victor; Asahina, Shunsuke; Liu, Guigao; Zhang, Huabin; Meng, Xianguang; Chang, Kun; He, Jianping; Terasaki, Osamu; Yamauchi, Yusuke; Ye, Jinhua
2016-03-01
Metal-organic frameworks (MOFs) are attracting considerable attention for their use as both the precursor and the template to prepare metal oxides or carbon-based materials. For the first time in this paper, the core-shell ZIF-8@ZIF-67 crystals are thermally converted into porous ZnO@Co3O4 composites by combining a seed-mediated growth process with a two-step calcination. The designed porous ZnO@Co3O4 composites exhibited the highest photocatalytic activity with an excellent stability for the reduction of CO2 among the commonly reported composite photocatalysts. Their superior photocatalytic performance is demonstrated to result from the unique porous structure of ZnO@Co3O4 and the co-catalytic function of Co3O4, which can effectively suppress the photocorrosion of ZnO. Electronic supplementary information (ESI) available: additional TG and DTA curves, XRD patterns, SEM images, TEM images, N2 adsorption-desorption isotherms, X-ray photoelectron spectroscopy and GC-MS spectra of the samples. See DOI: 10.1039/c5nr08747c
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When faced with multiple complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To address these problems properly, we describe a distributed software framework and its continuous integration process for hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China, was established.
Modelling urban δ13C variations in the Greater Toronto Area
NASA Astrophysics Data System (ADS)
Pugliese, S.; Vogel, F. R.; Murphy, J. G.; Worthy, D. E. J.; Zhang, J.; Zheng, Q.; Moran, M. D.
2015-12-01
Even in urbanized regions, carbon dioxide (CO2) emissions are derived from a variety of biogenic and anthropogenic sources and are influenced by atmospheric transport across borders. As policies are introduced to reduce the emission of CO2, there is a need for independent verification of emissions reporting. In this work, we aim to use carbon isotope (13CO2 and 12CO2) simulations in combination with atmospheric measurements to distinguish between CO2 sources in the Greater Toronto Area (GTA), Canada. This is being done by developing an urban δ13C framework based on existing CO2 emission data and forward modelling using a chemistry transport model, CHIMERE. The framework is designed to use region-specific δ13C signatures of the dominant CO2 sources together with a CO2 inventory at a fine spatial and temporal resolution; the product is compared against highly accurate 13CO2 and 12CO2 ambient data. The strength of this framework is its potential to estimate both locally produced and regionally transported CO2. Locally, anthropogenic CO2 in urban areas is often derived from natural gas combustion (for heating) and gasoline/diesel combustion (for transportation); the isotopic signatures of these processes are significantly different (approximately δ13CVPDB = -40 ‰ and -26 ‰, respectively) and can be used to infer their relative contributions. Furthermore, the contribution of transported CO2 can also be estimated, as nearby regions often rely on other sources of heating (e.g. coal combustion), which has a very different signature (approximately δ13CVPDB = -23 ‰). We present an analysis of the GTA in contrast to Paris, France, where atmospheric observations are also available and 13CO2 has been studied. Utilizing our δ13C framework and differences in sectoral isotopic signatures, we quantify the relative contribution of CO2 sources to the overall measured concentration and assess the ability of this framework as a tool for tracing the evolution of sector-specific emissions.
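The apportionment idea behind this framework can be illustrated with a simple two-source isotopic mass balance; the sketch below assumes the two end-member signatures quoted in the abstract and is not the paper's actual model:

```python
# Two-source mixing: δ_mix = f_a·δ_a + (1 - f_a)·δ_b, solved for f_a.
# End members are the abstract's approximate signatures for natural gas
# (-40‰) and gasoline/diesel (-26‰); the measured value is invented.
def source_fractions(delta_mix, delta_a=-40.0, delta_b=-26.0):
    f_a = (delta_mix - delta_b) / (delta_a - delta_b)
    return f_a, 1.0 - f_a

print(source_fractions(-33.0))  # → (0.5, 0.5)
```

A measured enhancement of -33‰ would thus be read as an even split between heating and traffic CO2; adding the regional coal signature (-23‰) turns this into a three-member balance needing one more constraint.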
Identification of metal ion binding sites based on amino acid sequences.
Cao, Xiaoyong; Hu, Xiuzhen; Zhang, Xiaojin; Gao, Sujuan; Ding, Changjiang; Feng, Yonge; Bao, Weihua
2017-01-01
The identification of metal ion binding sites is important for protein function annotation and the design of new drug molecules. This study presents an effective method of analyzing and identifying the binding residues of metal ions based solely on sequence information. Ten metal ions were extracted from the BioLip database: Zn2+, Cu2+, Fe2+, Fe3+, Ca2+, Mg2+, Mn2+, Na+, K+ and Co2+. The analysis showed that Zn2+, Cu2+, Fe2+, Fe3+, and Co2+ were sensitive to the conservation of amino acids at binding sites, and promising results can be achieved using the Position Weight Scoring Matrix algorithm, with an accuracy of over 79.9% and a Matthews correlation coefficient of over 0.6. The binding sites of other metals can also be accurately identified using the Support Vector Machine algorithm with multifeature parameters as input. In addition, we found that Ca2+ was insensitive to hydrophobicity and hydrophilicity information and Mn2+ was insensitive to polarization charge information. An online server was constructed based on the framework of the proposed method and is freely available at http://60.31.198.140:8081/metal/HomePage/HomePage.html.
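A hedged sketch of the position-weight scoring step described above; the toy binding-site sequences, the four-letter alphabet and the pseudocount are invented for illustration and are not derived from BioLip:

```python
import math

# Build a position weight matrix (log-odds vs. a uniform background)
# from aligned example sites, then score candidate windows with it.
def pwm_from_sites(sites, alphabet="ACDE", pseudo=0.5):
    length = len(sites[0])
    pwm = []
    for i in range(length):
        col = [s[i] for s in sites]
        total = len(col) + pseudo * len(alphabet)
        background = 1 / len(alphabet)
        pwm.append({a: math.log((col.count(a) + pseudo) / total / background)
                    for a in alphabet})
    return pwm

def score(pwm, window):
    return sum(pwm[i][c] for i, c in enumerate(window))

pwm = pwm_from_sites(["CDC", "CDE", "CDC"])
print(score(pwm, "CDC") > score(pwm, "AAA"))  # → True
```

Windows scoring above a tuned threshold would be predicted as binding residues; for the less conservation-sensitive ions the abstract instead feeds multifeature parameters to an SVM.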
Materials Databases Infrastructure Constructed by First Principles Calculations: A Review
Lin, Lianshan
2015-10-13
The First Principles calculations, especially calculations based on high-throughput density functional theory, have been widely accepted as major tools in atomic-scale materials design. Emerging supercomputers, along with powerful First Principles calculations, have accumulated hundreds of thousands of crystal and compound records. The exponential growth of computational materials information urges the development of materials databases, which not only provide unlimited storage for the daily increasing data but also maintain efficiency in data storage, management, query, presentation and manipulation. This review covers the most cutting-edge materials databases in materials design and their hot applications, such as in fuel cells. By comparing the advantages and drawbacks of these high-throughput First Principles materials databases, the optimized computational framework can be identified to fit the needs of fuel cell applications. The further development of high-throughput DFT materials databases, which in essence accelerates materials innovation, is discussed in the summary as well.
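The kind of screening query such databases enable can be sketched in a few lines; the records, formulas and property values below are invented, not taken from any real database:

```python
# Toy high-throughput screen: filter computed records by band gap and
# thermodynamic stability (energy above the convex hull).
materials = [
    {"formula": "LaMnO3", "band_gap_eV": 1.1, "e_above_hull": 0.00},
    {"formula": "BaZrO3", "band_gap_eV": 3.2, "e_above_hull": 0.01},
    {"formula": "SrTiO3", "band_gap_eV": 3.3, "e_above_hull": 0.08},
]
stable_insulators = [m["formula"] for m in materials
                     if m["band_gap_eV"] > 3.0 and m["e_above_hull"] < 0.05]
print(stable_insulators)  # → ['BaZrO3']
```

Real infrastructures expose the same pattern through REST APIs and query languages rather than in-memory lists, but the screening logic is the same.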
Space debris mitigation - engineering strategies
NASA Astrophysics Data System (ADS)
Taylor, E.; Hammond, M.
The problem of space debris pollution is acknowledged to be of growing concern by space agencies, leading to recent activities in the field of space debris mitigation. A review of the current (and near-future) mitigation guidelines, handbooks, standards and licensing procedures has identified a number of areas where further work is required. In order for space debris mitigation to be implemented in spacecraft manufacture and operation, the authors suggest that debris-related criteria need to become design parameters (following the same process as applied to reliability and radiation). To meet these parameters, spacecraft manufacturers and operators will need processes (supported by design tools and databases and implementation standards). A particular aspect of debris mitigation, as compared with conventional requirements (e.g. radiation and reliability) is the current and near-future national and international regulatory framework and associated liability aspects. A framework for these implementation standards is presented, in addition to results of in-house research and development on design tools and databases (including collision avoidance in GTO and SSTO and evaluation of failure criteria on composite and aluminium structures).
ERIC Educational Resources Information Center
Al-Azawei, Ahmed; Serenelli, Fabio; Lundqvist, Karsten
2016-01-01
The Universal Design for Learning (UDL) framework is increasingly drawing the attention of researchers and educators as an effective solution for filling the gap between learner ability and individual differences. This study aims to analyse the content of twelve papers, where the UDL was adopted. The articles were chosen from several databases and…
The Human Factors Analysis and Classification System : HFACS : final report.
DOT National Transportation Integrated Search
2000-02-01
Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive t...
Enhanced Living by Assessing Voice Pathology Using a Co-Occurrence Matrix
Muhammad, Ghulam; Alhamid, Mohammed F.; Hossain, M. Shamim; Almogren, Ahmad S.; Vasilakos, Athanasios V.
2017-01-01
A large number of the population around the world suffers from various disabilities. Disabilities affect not only children but also adults of different professions. Smart technology can assist the disabled population and lead to a comfortable life in an enhanced living environment (ELE). In this paper, we propose an effective voice pathology assessment system that works in a smart home framework. The proposed system takes input from various sensors, and processes the acquired voice signals and electroglottography (EGG) signals. Co-occurrence matrices in different directions and neighborhoods from the spectrograms of these signals were obtained. Several features such as energy, entropy, contrast, and homogeneity from these matrices were calculated and fed into a Gaussian mixture model-based classifier. Experiments were performed with a publicly available database, namely, the Saarbrucken voice database. The results demonstrate the feasibility of the proposed system in light of its high accuracy and speed. The proposed system can be extended to assess other disabilities in an ELE.
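The co-occurrence-matrix features named above can be sketched for a single direction (one pixel to the right); a real system would compute this over quantized spectrogram images in several directions and neighborhoods:

```python
# Gray-level co-occurrence matrix (GLCM) for the "offset right by one"
# direction, plus two of the abstract's features. The tiny 2-level image
# is invented for illustration.
def glcm(img, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):  # horizontally adjacent pixel pairs
            m[a][b] += 1
    total = sum(sum(r) for r in m) or 1
    return [[v / total for v in r] for r in m]  # normalize to probabilities

def energy(m):
    return sum(v * v for r in m for v in r)

def contrast(m):
    return sum(((i - j) ** 2) * v for i, r in enumerate(m) for j, v in enumerate(r))

g = glcm([[0, 0, 1], [1, 1, 0]], levels=2)
print(round(energy(g), 3), contrast(g))  # → 0.25 0.5
```

Entropy and homogeneity follow the same pattern, summing -v·log(v) and v/(1+(i-j)²) over the matrix; the resulting feature vectors feed the Gaussian mixture model classifier.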
Pape-Haugaard, Louise; Frank, Lars
2011-01-01
A major obstacle to ensuring ubiquitous information is the use of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible architectural framework for achieving sustainability across disparate, i.e. heterogeneous, databases, concluding with a discussion. It is shown that, through a method using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for sustainable interoperability.
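One way to read "relaxed ACID properties on a service-oriented architecture" is that a cross-system update commits locally at once and is retried against unreachable services until all copies converge. The sketch below is an invented minimal illustration of that retry pattern, not the authors' actual design:

```python
# Eventual consistency via a durable retry queue: failed remote updates
# are queued and replayed, relaxing isolation but preserving convergence.
class RetryQueue:
    def __init__(self):
        self.pending = []

    def submit(self, op, target):
        try:
            target(op)                         # remote update attempted now
        except ConnectionError:
            self.pending.append((op, target))  # replay later instead of aborting

    def flush(self):
        still = []
        for op, target in self.pending:
            try:
                target(op)
            except ConnectionError:
                still.append((op, target))
        self.pending = still

ehr = []                 # stand-in for a remote eHealth database
offline = [True]
def remote(op):
    if offline[0]:
        raise ConnectionError
    ehr.append(op)

q = RetryQueue()
q.submit("add allergy: penicillin", remote)  # queued while remote is down
offline[0] = False
q.flush()                                     # replayed once reachable
print(ehr)  # → ['add allergy: penicillin']
```

Real designs must also make retried operations idempotent or compensatable, which is where the relaxed-ACID transaction models come in.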
ERIC Educational Resources Information Center
Rockinson-Szapkiw, Amanda J.; Wendt, Jillian; Wighting, Mervyn; Nisbet, Deanna
2016-01-01
The Community of Inquiry framework has been widely supported by research to provide a model of online learning that informs the design and implementation of distance learning courses. However, the relationship between elements of the CoI framework and perceived learning warrants further examination as a predictive model for online graduate student…
Co-governing decentralised water systems: an analytical framework.
Yu, C; Brown, R; Morison, P
2012-01-01
Current discourses in urban water management emphasise a diversity of water sources and scales of infrastructure for resilience and adaptability. During the last two decades, in particular, various small-scale systems have emerged and developed, so that the debate has largely moved from centralised versus decentralised water systems toward governing integrated and networked systems of provision and consumption in which small-scale technologies are embedded in large-scale centralised infrastructures. However, while centralised systems have established boundaries of ownership and management, decentralised water systems (such as stormwater harvesting technologies at the street and allotment/house scales) do not; the viability of adoption and/or continued use of decentralised water systems is therefore challenged. This paper brings together insights from the literature on public sector governance, co-production and the social practices model to develop an analytical framework for co-governing such systems. The framework provides urban water practitioners with guidance when designing co-governance arrangements for decentralised water systems so that these systems continue to exist, and become widely adopted, within the established urban water regime.
In vitro investigation of marginal accuracy of implant-supported screw-retained partial dentures.
Koke, U; Wolf, A; Lenz, P; Gilde, H
2004-05-01
Mismatch occurring during the fabrication of implant-supported dentures may induce stress in the peri-implant bone. The purpose of this study was to investigate the influence of two different alloys and of the fabrication method on the marginal accuracy of cast partial dentures. Two laboratory implants were bonded into an aluminium block so that the distance between their longitudinal axes was 21 mm. Frameworks designed for screw-retained partial dentures were cast either in pure titanium (rematitan) or in a CoCr alloy (remanium CD). Two groups of 10 frameworks were cast in a single piece: the first group of pure titanium, the second of the CoCr alloy. A third group of 10 was cast in two pieces and then laser-welded on a soldering model; this group was also made of the CoCr alloy. All frameworks were screwed to the original model with a defined torque. Using light microscopy, marginal accuracy was determined by measuring vertical gaps at eight defined points around each implant. Titanium frameworks cast in a single piece demonstrated mean vertical gaps of 40 microm (s.d. = 11 microm), compared with 72 microm (s.d. = 40 microm) for CoCr frameworks. These differences were not significant (U-test, P = 0.124) because of considerable variation in the values for the CoCr frameworks (minimum: 8 microm; maximum: 216 microm). However, frameworks cast in two pieces and joined by laser welding showed significantly better accuracy than the other experimental groups (mean: 17 microm +/- 6; P < 0.01). (i) The fit of implant-supported partial dentures cast in pure titanium in a single piece is preferable to that of dentures made with the CoCr alloy, and (ii) the highest accuracy can be achieved with a two-piece casting technique combined with laser welding: manufacturing the framework pieces separately and then welding them together provides the best marginal fit.
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-07-01
Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face diverse challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN had published 192 mammalian, plant and protein life-science databases comprising 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data covered by this database integration framework is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life-science data securely under the control of programming languages popular among bioinformaticians, such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable content such as ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
The Use of User-Centered Participatory Design in Serious Games for Anxiety and Depression.
Dekker, Maria R; Williams, Alishia D
2017-12-01
There is increasing interest in using serious games to deliver or complement healthcare interventions for mental health, particularly for the most common mental health conditions such as anxiety and depression. Initial results seem promising, yet variations exist in the effectiveness of serious games, highlighting the importance of understanding optimal design features. It has been suggested that the involvement of end-users in the design and decision-making process could influence game effectiveness. In user-centered design (UCD) or participatory design (PD), users are involved in stages of the process, including planning, designing, implementing, and testing the serious game. To the authors' knowledge, no literature review to date has assessed the use of UCD/PD in games that are designed for mental health, specifically for anxiety or depression. The aim of this review is, therefore, to document the extent to which published studies of serious games that are designed to prevent or treat anxiety and depression have adopted a PD framework. A search of keywords in the PubMed and PsycINFO databases through to December 2016 was conducted. We identified 20 serious games developed to prevent, treat or complement existing therapies for anxiety and/or depression. Half (N = 10; 50%) of these games were developed with input from the intended end-users, in either informant (N = 7; 70%) or full participatory co-design roles (N = 3; 30%). Fewer than half of the games (45%) included users only in the testing phase.
The purpose of the Interagency Steering Committee on Multimedia Environmental Modeling (ISCMEM) is to foster the exchange of information about environmental modeling tools, modeling frameworks, and environmental monitoring databases that are all in the public domain. It is compos...
Kiranyaz, Serkan; Mäkinen, Toni; Gabbouj, Moncef
2012-10-01
In this paper, we propose a novel framework based on a collective network of evolutionary binary classifiers (CNBC) to address the problems of feature and class scalability. The main goal of the proposed framework is to achieve a high classification performance over dynamic audio and video repositories. The proposed framework adopts a "Divide and Conquer" approach in which an individual network of binary classifiers (NBC) is allocated to discriminate each audio class. An evolutionary search is applied to find the best binary classifier in each NBC with respect to a given criterion. Through the incremental evolution sessions, the CNBC framework can dynamically adapt to each new incoming class or feature set without resorting to a full-scale re-training or re-configuration. Therefore, the CNBC framework is particularly designed for dynamically varying databases where no conventional static classifiers can adapt to such changes. In short, it is entirely a novel topology, an unprecedented approach for dynamic, content/data adaptive and scalable audio classification. A large set of audio features can be effectively used in the framework, where the CNBCs make appropriate selections and combinations so as to achieve the highest discrimination among individual audio classes. Experiments demonstrate a high classification accuracy (above 90%) and efficiency of the proposed framework over large and dynamic audio databases. Copyright © 2012 Elsevier Ltd. All rights reserved.
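The class-scalability principle behind the CNBC, one independent binary classifier per class, so a new class can be added without retraining the others, can be illustrated with a toy analogue. The sketch below uses simple nearest-centroid scorers rather than evolutionary networks, so it demonstrates only the incremental "divide and conquer" topology, not the CNBC itself.

```python
# Toy analogue of "one network of binary classifiers per class": each class
# owns an independent scorer, so adding a class trains only that scorer.

class BinaryCentroidClassifier:
    def fit(self, positives):
        n, dim = len(positives), len(positives[0])
        self.centroid = [sum(x[i] for x in positives) / n for i in range(dim)]
        return self

    def score(self, x):
        # Negative squared distance: higher means "more like this class".
        return -sum((a - b) ** 2 for a, b in zip(x, self.centroid))

class CollectiveClassifier:
    def __init__(self):
        self.per_class = {}

    def add_class(self, label, examples):
        # Incremental: only the new class's classifier is trained.
        self.per_class[label] = BinaryCentroidClassifier().fit(examples)

    def predict(self, x):
        return max(self.per_class, key=lambda c: self.per_class[c].score(x))
```

Adding a class never touches the existing scorers, which is the property the abstract's incremental evolution sessions exploit.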
ERIC Educational Resources Information Center
Sarirete, Akila; Chikh, Azeddine; Noble, Elizabeth
2011-01-01
Purpose: The purpose of this paper is to define a community memory for a virtual communities of practice (CoP) based on organizational learning (OL) concept and ontologies. Design/methodology/approach: The paper focuses on applying the OL concept to virtual CoP and proposes a framework for building the CoP memory by identifying several layers of…
Migration from relational to NoSQL database
NASA Astrophysics Data System (ADS)
Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar
2017-11-01
Data generated by real-time applications, social networking sites and sensor devices is huge in volume and largely unstructured, which makes it difficult for relational database management systems to handle. Data is a precious component of any application and needs to be analysed after being arranged in some structure. Relational databases can only deal with structured data, so there is a need for NoSQL database management systems that can also handle semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL grows it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow unstructured data to be stored in NoSQL databases. This paper provides a literature review of recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the coexistence of NoSQL and relational databases. The paper summarises mechanisms for mapping data stored in relational databases to NoSQL databases, together with various techniques for data transformation and middle-layer solutions.
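One common relational-to-NoSQL mapping the surveyed frameworks perform is embedding: rows from a child table joined by a foreign key become a nested array inside the parent's document. The sketch below shows that transformation on plain dicts; the table shape, column names, and the `children` field are illustrative, not taken from any particular framework in the review.

```python
# Embed one-to-many child rows into their parent's document, the classic
# denormalization step when moving from relational tables to document stores.

def embed_one_to_many(parents, children, fk):
    """parents/children: lists of row dicts; fk: child column naming the parent id."""
    docs = {p["id"]: {**p, "children": []} for p in parents}
    for c in children:
        # Drop the foreign-key column; the nesting itself now encodes the link.
        docs[c[fk]]["children"].append({k: v for k, v in c.items() if k != fk})
    return list(docs.values())
```

The inverse direction (splitting documents back into normalized tables) is what co-existence approaches in the review must also handle.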
Common hyperspectral image database design
NASA Astrophysics Data System (ADS)
Tian, Lixun; Liao, Ningfang; Chai, Ali
2009-11-01
This paper introduces the Common Hyperspectral Image Database (CHIDB), built with a demand-oriented database design method, which brings together ground-based spectra, standardized hyperspectral cubes and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining ideas and functions were integrated into CHIDB to make it better suited to agricultural, geological and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is built on the .NET Framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are all stored in SQL Server 2008 for efficient search, query and update, and advanced spectral-image processing techniques, such as parallel processing in C#, are used. Finally, an application case in agricultural disease detection is presented.
ERIC Educational Resources Information Center
Wong, Lung-Hsiang; Chai, Ching Sing; Zhang, Xujuan; King, Ronnel B.
2015-01-01
Integrating technologies into teaching and learning poses a significant challenge for many teachers who lack socio-techno-pedagogical know-how and time to design interventions. A possible solution is to design sound technology-enhanced learning (TEL) environments with relevant content and pedagogical tools to reduce teachers' design efforts.…
ARACHNID: A prototype object-oriented database tool for distributed systems
NASA Technical Reports Server (NTRS)
Younger, Herbert; Oreilly, John; Frogner, Bjorn
1994-01-01
This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, for which an order-of-magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors, with only weak data linkage required between the processors. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
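The geographical-decomposition idea, partition the catalog by sky region so a query touches only a few partitions instead of scanning everything, can be sketched generically. The cell size, field names, and coordinates below are illustrative; this is not ARACHNID's actual implementation.

```python
# Bucket catalog sources by a coarse sky cell; a box query then visits only
# the cells overlapping the query region.

from collections import defaultdict

CELL = 10.0  # degrees per cell, an arbitrary illustrative choice

def cell_of(ra, dec):
    return (int(ra // CELL), int(dec // CELL))

def build_index(sources):
    index = defaultdict(list)
    for s in sources:
        index[cell_of(s["ra"], s["dec"])].append(s)
    return index

def query_box(index, ra_min, ra_max, dec_min, dec_max):
    hits = []
    for cx in range(int(ra_min // CELL), int(ra_max // CELL) + 1):
        for cy in range(int(dec_min // CELL), int(dec_max // CELL) + 1):
            for s in index.get((cx, cy), []):  # only candidate cells are scanned
                if ra_min <= s["ra"] <= ra_max and dec_min <= s["dec"] <= dec_max:
                    hits.append(s)
    return hits
```

In the distributed setting each cell (or group of cells) would live on its own processor, which is why only weak linkage between processors is needed.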
Witman, Matthew; Ling, Sanliang; Gladysiak, Andrzej; ...
2016-12-16
Here, we present the in silico design of a MOF-74 analogue, hereafter denoted M2(DHFUMA) [M = Mg, Fe, Co, Ni, Zn], with enhanced small-molecule adsorption properties over the original M2(DOBDC) series. Constructed from 2,3-dihydroxyfumarate (DHFUMA), an aliphatic ligand which is smaller than the aromatic 2,5-dioxidobenzene-1,4-dicarboxylate (DOBDC), the M2(DHFUMA) framework has a reduced channel diameter, resulting in higher volumetric density of open metal sites and significantly improved volumetric hydrogen (H2) storage potential. Furthermore, the reduced distance between two adjacent open metal sites in the pore channel leads to a CO2 binding mode of one molecule per two adjacent metals with markedly stronger binding energetics. Through dispersion-corrected density functional theory (DFT) calculations of guest–framework interactions and classical simulation of the adsorption behavior of binary CO2:H2O mixtures, we theoretically predict the M2(DHFUMA) series as an improved alternative for carbon capture over the M2(DOBDC) series when adsorbing from wet flue gas streams. The improved CO2 uptake and humidity tolerance in our simulations is tunable based upon metal selection and adsorption temperature which, combined with the significantly reduced ligand expense, elevates this material's potential for CO2 capture and H2 storage. The dynamical and elastic stabilities of Mg2(DHFUMA) were verified by hybrid DFT calculations, demonstrating its significant potential for experimental synthesis.
2016-01-01
We present the in silico design of a MOF-74 analogue, hereafter denoted M2(DHFUMA) [M = Mg, Fe, Co, Ni, Zn], with enhanced small-molecule adsorption properties over the original M2(DOBDC) series. Constructed from 2,3-dihydroxyfumarate (DHFUMA), an aliphatic ligand which is smaller than the aromatic 2,5-dioxidobenzene-1,4-dicarboxylate (DOBDC), the M2(DHFUMA) framework has a reduced channel diameter, resulting in higher volumetric density of open metal sites and significantly improved volumetric hydrogen (H2) storage potential. Furthermore, the reduced distance between two adjacent open metal sites in the pore channel leads to a CO2 binding mode of one molecule per two adjacent metals with markedly stronger binding energetics. Through dispersion-corrected density functional theory (DFT) calculations of guest–framework interactions and classical simulation of the adsorption behavior of binary CO2:H2O mixtures, we theoretically predict the M2(DHFUMA) series as an improved alternative for carbon capture over the M2(DOBDC) series when adsorbing from wet flue gas streams. The improved CO2 uptake and humidity tolerance in our simulations is tunable based upon metal selection and adsorption temperature which, combined with the significantly reduced ligand expense, elevates this material's potential for CO2 capture and H2 storage. The dynamical and elastic stabilities of Mg2(DHFUMA) were verified by hybrid DFT calculations, demonstrating its significant potential for experimental synthesis. PMID:28127415
Sukhinin, Dmitrii I.; Engel, Andreas K.; Manger, Paul; Hilgetag, Claus C.
2016-01-01
Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. 
Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity. PMID:27242503
Sukhinin, Dmitrii I; Engel, Andreas K; Manger, Paul; Hilgetag, Claus C
2016-01-01
Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. 
Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity.
Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun
2015-01-01
B-Raf kinase is an important target in treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained, which was used in effective alignment of the data set, contains two acceptor atoms, three donor atoms and three hydrophobes. In succession, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q² = 0.621, r²pred = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the obtained best pharmacophore model was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, and their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained. PMID:26035757
Xie, Huiding; Chen, Lijun; Zhang, Jianqiang; Xie, Xiaoguang; Qiu, Kaixiong; Fu, Jijun
2015-05-29
B-Raf kinase is an important target in treatment of cancers. In order to design and find potent B-Raf inhibitors (BRIs), 3D pharmacophore models were created using the Genetic Algorithm with Linear Assignment of Hypermolecular Alignment of Database (GALAHAD). The best pharmacophore model obtained which was used in effective alignment of the data set contains two acceptor atoms, three donor atoms and three hydrophobes. In succession, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were performed on 39 imidazopyridine BRIs to build three dimensional quantitative structure-activity relationship (3D QSAR) models based on both pharmacophore and docking alignments. The CoMSIA model based on the pharmacophore alignment shows the best result (q(2) = 0.621, r(2)(pred) = 0.885). This 3D QSAR approach provides significant insights that are useful for designing potent BRIs. In addition, the obtained best pharmacophore model was used for virtual screening against the NCI2000 database. The hit compounds were further filtered with molecular docking, and their biological activities were predicted using the CoMSIA model, and three potential BRIs with new skeletons were obtained.
CICS Region Virtualization for Cost Effective Application Development
ERIC Educational Resources Information Center
Khan, Kamal Waris
2012-01-01
Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…
Polyethyleneimine Incorporated Metal-Organic Frameworks Adsorbent for Highly Selective CO2 Capture
Lin, Yichao; Yan, Qiuju; Kong, Chunlong; Chen, Liang
2013-01-01
A series of polyethyleneimine (PEI) incorporated MIL-101 adsorbents with different PEI loadings were reported for the first time in the present work. Although the surface area and pore volume of MIL-101 decreased significantly after loading PEI, all the resulting composites exhibited dramatically enhanced CO2 adsorption capacity at low pressures. At 100 wt% PEI loading, the CO2 adsorption capacity at 0.15 bar reached a very competitive value of 4.2 mmol g−1 at 25°C, and 3.4 mmol g−1 at 50°C. More importantly, the resulting adsorbents displayed rapid adsorption kinetics and ultrahigh selectivity for CO2 over N2 in the designed flue gas with 0.15 bar CO2 and 0.75 bar N2. The CO2 over N2 selectivity was up to 770 at 25°C, and 1200 at 50°C. We believe that PEI-based metal-organic frameworks are attractive adsorbents for CO2 capture. PMID:23681218
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework
2012-01-01
Background: For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results: We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion: The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
Remote visual analysis of large turbulence databases at multiple scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulido, Jesus; Livescu, Daniel; Kanov, Kalin
The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
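The wavelet-based compression and multi-resolution analysis mentioned above can be illustrated with a one-level Haar transform: the signal splits into coarse averages plus details, the coarse half alone supports low-resolution analysis, and dropping small details is the compression step. This is a generic textbook sketch, not the framework's actual codec.

```python
# One level of the (unnormalized) Haar wavelet transform on an even-length signal.

def haar_forward(signal):
    """Split a signal into pairwise averages (coarse) and differences (detail)."""
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    det = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, det

def haar_inverse(avg, det):
    """Reconstruct the original signal exactly from averages and details."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out
```

Applying `haar_forward` recursively to the averages yields the multi-scale hierarchy such remote-analysis tools expose, with coarser levels streamed first.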
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.
Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John
2012-12-05
For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
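The MapReduce shape of such a search, map tasks score every (spectrum, candidate peptide) pair, a reduce step keeps the best peptide per spectrum, can be sketched in miniature. The scoring function below is a trivial stand-in for the K-score algorithm the abstract names, and the "expected peaks" model is invented purely for illustration.

```python
# Minimal map/reduce sketch of a spectrum-vs-peptide database search.

def expected_peaks(peptide):
    # Hypothetical peak model: one "peak" per residue code, for illustration only.
    return [ord(c) for c in peptide]

def toy_score(peaks, peptide):
    # Stand-in metric: number of observed peaks shared with the expected peaks.
    return len(set(peaks) & set(expected_peaks(peptide)))

def mapper(spectrum_id, peaks, peptide_db, score):
    # Map phase: emit (spectrum_id, (score, peptide)) for every candidate.
    for peptide in peptide_db:
        yield spectrum_id, (score(peaks, peptide), peptide)

def reducer(pairs):
    # Reduce phase: keep the best-scoring candidate per spectrum.
    best = {}
    for sid, scored in pairs:
        if sid not in best or scored > best[sid]:
            best[sid] = scored
    return best  # spectrum_id -> (best_score, best_peptide)
```

Because scoring each pair is independent, the map phase parallelizes across the cluster, which is why throughput scales with the number of processors.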
Remote visual analysis of large turbulence databases at multiple scales
Pulido, Jesus; Livescu, Daniel; Kanov, Kalin; ...
2018-06-15
The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
A Framework for Cloudy Model Optimization and Database Storage
NASA Astrophysics Data System (ADS)
Calvén, Emilia; Helton, Andrew; Sankrit, Ravi
2018-01-01
We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in SQL database format for later usage. The database can be searched for the models best fitting observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines, either by directly accessing the database through the framework code or through a website made specifically for this purpose.
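The "search the database for best-fit line ratios" step can be sketched with SQLite standing in for the SQL store. The table schema, column names, and the squared-distance fit metric below are illustrative assumptions, not the framework's actual schema or statistic.

```python
# Find the stored model whose predicted line ratios lie closest to the
# observed ratios, doing the distance ranking inside the SQL query itself.

import sqlite3

def best_fit_model(models, observed):
    """models: iterable of (name, ratio_a, ratio_b); observed: (ratio_a, ratio_b)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE models (name TEXT, ratio_a REAL, ratio_b REAL)")
    con.executemany("INSERT INTO models VALUES (?, ?, ?)", models)
    row = con.execute(
        "SELECT name,"
        " (ratio_a - ?) * (ratio_a - ?) + (ratio_b - ?) * (ratio_b - ?) AS d"
        " FROM models ORDER BY d ASC LIMIT 1",
        (observed[0], observed[0], observed[1], observed[1]),
    ).fetchone()
    con.close()
    return row[0]
```

Pushing the ranking into `ORDER BY ... LIMIT 1` lets the database, rather than client code, scan the model grid.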
Lu, Songjian; Lu, Kevin N.; Cheng, Shi-Yuan; Hu, Bo; Ma, Xiaojun; Nystrom, Nicholas; Lu, Xinghua
2015-01-01
An important goal of cancer genomic research is to identify the driving pathways underlying disease mechanisms and the heterogeneity of cancers. It is well known that somatic genome alterations (SGAs) affecting the genes that encode the proteins within a common signaling pathway exhibit mutual exclusivity, in which these SGAs usually do not co-occur in a tumor. With some success, this characteristic has been utilized as an objective function to guide the search for driver mutations within a pathway. However, mutual exclusivity alone is not sufficient to indicate that genes affected by such SGAs are in common pathways. Here, we propose a novel, signal-oriented framework for identifying driver SGAs. First, we identify the perturbed cellular signals by mining the gene expression data. Next, we search for a set of SGA events that carries strong information with respect to such perturbed signals while exhibiting mutual exclusivity. Finally, we design and implement an efficient exact algorithm to solve an NP-hard problem encountered in our approach. We apply this framework to the ovarian and glioblastoma tumor data available at the TCGA database, and perform systematic evaluations. Our results indicate that the signal-oriented approach enhances the ability to find informative sets of driver SGAs that likely constitute signaling pathways. PMID:26317392
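The mutual-exclusivity signal described above can be made concrete with a small sketch: for a candidate gene set, count the tumors it covers (at least one alteration) versus the tumors where alterations co-occur. Tumor and gene labels below are invented; real methods also weight these counts against background mutation rates.

```python
# Coverage vs. co-occurrence for a candidate driver gene set.

def exclusivity_stats(alterations, gene_set):
    """alterations: {tumor_id: set of altered genes}; returns (coverage, overlaps)."""
    coverage = overlaps = 0
    for genes in alterations.values():
        hits = len(genes & gene_set)
        if hits >= 1:
            coverage += 1   # tumor explained by the candidate pathway
        if hits >= 2:
            overlaps += 1   # co-occurrence, which violates mutual exclusivity
    return coverage, overlaps
```

A good candidate pathway maximizes coverage while keeping overlaps near zero; the paper's contribution is to combine this with expression-derived signal information.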
The JANA calibrations and conditions database API
NASA Astrophysics Data System (ADS)
Lawrence, David
2010-04-01
Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, implied by the run currently being analyzed and the environment, relieving developers of the need to implement such details.
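The backend-agnostic lookup idea can be sketched as an analogue in Python (JANA itself is C++). The class and method names below are illustrative, not the actual JCalibration API: the point is only that the caller asks for a named constant for a run, and the transport, flat file, database, or web service, hides behind one interface.

```python
# Hedged analogue of a pluggable calibration-constant lookup.

class FlatFileBackend:
    """Backend parsing 'run name value' triples from in-memory text."""
    def __init__(self, text):
        self.table = {}
        for line in text.strip().splitlines():
            run, name, value = line.split()
            self.table[(int(run), name)] = float(value)

    def get(self, run, name):
        return self.table[(run, name)]

class Calibration:
    """Front end: callers never see which backend serves the constants."""
    def __init__(self, backend):
        self.backend = backend

    def get(self, run, name):
        # One-line retrieval for callers; transport is an implementation detail.
        return self.backend.get(run, name)
```

A web-service or SQL backend would expose the same `get(run, name)` contract, so analysis code is unchanged when the transport changes.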
VizieR Online Data Catalog: Spectroscopic Indicators in SeisMic Archive (SISMA) (Rainer+, 2016)
NASA Astrophysics Data System (ADS)
Rainer, M.; Poretti, E.; Misto, A.; Panzera, M. R.; Molinaro, M.; Cepparo, F.; Roth, M.; Michel, E.; Monteiro, M. J. P. F. G.
2017-02-01
We created a large database of physical parameters and variability indicators by fully reducing and analyzing the large number of spectra taken to complement the asteroseismic observations of the COnvection, ROtation and planetary Transits (CoRoT) satellite. CoRoT was launched on 2006 December 27 and it was retired on 2013 June 24. 7103 spectra of 261 stars obtained with the ESO echelle spectrograph High Accuracy Radial velocity Planet Searcher (HARPS) have been stored in the VO-compliant database Spectroscopic Indicators in a SeisMic Archive (SISMA; http://sisma.brera.inaf.it/), along with the CoRoT photometric data of the 72 CoRoT asteroseismic targets. The ground-based activities started with the Large Programme 178.D-0361 using the FEROS spectrograph at the 2.2m telescope of the ESO-La Silla Observatory, and continued with the Large Programmes LP182.D-0356 and LP185.D-0056 using the HARPS instrument at the 3.6m ESO telescope. In the framework of the awarded two HARPS Large Programmes, 15 nights were allocated each semester over nine semesters, from 2008 December to 2013 January, for a total of 135 nights. The HARPS spectrograph covers the spectral range from 3780 to 6910Å, distributed over echelle orders 89-161. We usually used it in the high-efficiency mode EGGS, with resolving power R=80000 to obtain high signal-to-noise ratio (S/N) spectroscopic time series. All of the data (reduced spectra, indicators, and photometric series) are stored as either FITS or PDF files in the SISMA archive and can be accessed at http://sisma.brera.inaf.it/. The data can also be accessed through the Seismic Plus portal (http://voparis-spaceinn.obspm.fr/seismic-plus/), developed in the framework of the SpaceInn project in order to gather and help coordinated access to several different solar and stellar seismic data sources. (1 data file).
First Toronto Conference on Database Users. Systems that Enhance User Performance.
ERIC Educational Resources Information Center
Doszkocs, Tamas E.; Toliver, David
1987-01-01
The first of two papers discusses natural language searching as a user performance enhancement tool, focusing on artificial intelligence applications for information retrieval and problems with natural language processing. The second presents a conceptual framework for further development and future design of front ends to online bibliographic…
ERIC Educational Resources Information Center
Coghlan, David; Coughlan, Paul
2006-01-01
Purpose: The purpose of this article is to provide a design and implementation framework for ALAR (action learning action research) programme which aims to address collaborative improvement in the extended manufacturing enterprise. Design/methodology/approach: This article demonstrates the design of a programme in which action learning and action…
Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo
2015-01-01
Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
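As a toy illustration of the hub role the abstract describes, the sketch below links hypothetical gene annotations across organisms through an ortholog group. The identifiers, GO terms, and dictionary layout are all invented for illustration; MBGD and OrthO actually expose this information as RDF queried through a SPARQL endpoint, not as Python dicts.

```python
# Ortholog groups act as a hub: a group ID maps to member genes in
# different organisms (hypothetical identifiers).
ortholog_groups = {
    "OG0001": {"e_coli": "geneA", "b_subtilis": "geneB"},
}

# Per-organism annotations, e.g. Gene Ontology terms (hypothetical).
annotations = {
    ("e_coli", "geneA"): ["GO:0006096"],
    ("b_subtilis", "geneB"): ["GO:0006096", "GO:0005975"],
}

def linked_annotations(group_id):
    """Collect the annotations of every member gene of an ortholog group."""
    members = ortholog_groups[group_id]
    return {(org, gene): annotations.get((org, gene), [])
            for org, gene in members.items()}

print(linked_annotations("OG0001"))
```

Because every organism's data hangs off the shared group ID, heterogeneous sources (taxonomy, Gene Ontology, expression data) can be joined through the ortholog hub without knowing about each other.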
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs (exploratory sequential, explanatory sequential, and convergent) and through four advanced frameworks (multistage, intervention, case study, and participatory). Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent to which the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.
NASA Astrophysics Data System (ADS)
Thoemel, J.; Cosson, E.; Chazot, O.
2009-01-01
In the framework of the creation of an aerothermodynamic database for the design of the Intermediate Experimental Vehicle, surface properties of heat shield materials that represent the boundary conditions are reviewed. Catalytic and radiative characteristics available in the literature are critically analyzed and summarized. It turns out that large uncertainties in the parameters exist. Finally, simple and conservative values are proposed.
FAIL-SAFE: Fault Aware IntelLigent Software for Exascale
2016-06-13
and that these programs can continue to correct solutions. To broaden the impact of this research, we also needed to be able to ameliorate errors...designing an interface between the application and an introspection framework for resilience (IFR) based on the inference engine SHINE; (4) using...the ROSE compiler to translate annotations into reasoning rules for the IFR; and (5) designing a Knowledge/Experience Database, which will store
Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.
2011-01-01
Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000 scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units, and a series of ecological and political spatial descriptors as hierarchy structures to allow users to extract or analyze information at spatial scales that they define. This database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of natural and human-induced catchment factors that are known to influence river characteristics. Our framework and database assemble all river reaches and their descriptors in one place for the first time for the conterminous United States. This framework and database provide users with the capability of adding data, conducting analyses, developing management scenarios and regulations, and tracking management progress at a variety of spatial scales. This database provides the essential data needed for achieving the objectives of NFHAP and other management programs. The downloadable beta version database is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-01-01
As global cloud frameworks for bioinformatics research databases become huge and heterogeneous, solutions face diametric challenges of cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely, under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
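To make the idea of a lightweight JSON interface to linked data concrete, the sketch below parses a hypothetical payload of semantic links with the standard `json` module and follows one kind of relationship. The payload shape, subjects and predicates are invented; the real Semantic-JSON wire format of SciNetS.org is not reproduced here.

```python
import json

# Hypothetical payload in the spirit of a Semantic-JSON response: each
# record carries the subject, predicate and object of one semantic link.
payload = '''
[
  {"subject": "GeneX", "predicate": "has_phenotype", "object": "dwarfism"},
  {"subject": "GeneX", "predicate": "part_of", "object": "PathwayY"}
]
'''

triples = json.loads(payload)

def objects_of(subject, predicate):
    """Follow one kind of semantic link from a subject."""
    return [t["object"] for t in triples
            if t["subject"] == subject and t["predicate"] == predicate]

print(objects_of("GeneX", "has_phenotype"))  # ['dwarfism']
```

The appeal over a full SPARQL stack is exactly what the abstract claims: once the fragment arrives as JSON, any scripting language (Perl, Ruby, Python) can traverse it with ordinary data structures.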
Barriers to Co-Designing Mobile Technology with Persons with Dementia and Their Carers.
O'Connor, Siobhan; Bouamrane, Matt-Mouley; O'Donnell, Catherine A; Mair, Frances S
2016-01-01
Mobile applications can be used to help manage different aspects of long-term illnesses but many are not designed to meet the specific needs of people with dementia or their carers. This case study explores the barriers experienced when co-producing a memory and reminiscence app. A focus group and interviews were conducted with patient/carer dyads, an occupational therapist, project manager and software engineer involved in the design of the app. Data was analysed thematically using the framework approach. Several limitations such as poor technical knowledge and skills, negative attitudes and inaccurate perceptions of people with dementia slowed down or changed how the mobile app was developed. Compromises also had to be made over the final design of the app. More research to explore how mobile apps are co-designed with patients is needed.
Malfait, Simon; Eeckloo, Kristof; Lust, Elisa; Van Biesen, Wim; Van Hecke, Ann
2017-02-01
To evaluate the feasibility, appropriateness, meaningfulness and effectiveness of bedside shift reporting in a minimum of five interventions and five control wards. Hospitals continually improve their quality of care. Next to improvements in clinical performance, more patient participation is stimulated through different methods. Methods to enhance patient participation such as bedside shift reporting lack rigorously performed research to determine their feasibility, appropriateness, meaningfulness and effectiveness. Small-scale research and a previous pilot study indicate that bedside shift reporting improves patient participation, nurse-nurse communication and nurse-patient communication. The development, implementation and evaluation of bedside shift report are based on the Medical Research Council framework for complex interventions in health care. A matched, controlled, mixed-method, longitudinal study design will be used. The Feasibility-Appropriateness-Meaningfulness-Effectiveness framework will be applied for the quantitative and qualitative evaluation of bedside shift report. A tailored intervention and implementation process for bedside shift report will be developed using diagnostic interviews, co-design and acceptability testing. The intervention will be evaluated before implementation and three times after implementation. Individual and focus group interviews will be performed. Questionnaires, observations and analysis of the medical records and administrative databases will be completed. This study was funded in October 2015. Research Ethics Committee approval was granted in March 2016. There is a pressing need for rigorous research into the effects of interventions for improving patient participation. This study addresses the significance of bedside shift report as an intervention to improve quality of care, communication and patient participation within a large-scale, matched, controlled research design. © 2016 John Wiley & Sons Ltd.
Gupta, Amarnath; Bug, William; Marenco, Luis; Qian, Xufei; Condit, Christopher; Rangarajan, Arun; Müller, Hans Michael; Miller, Perry L.; Sanders, Brian; Grethe, Jeffrey S.; Astakhov, Vadim; Shepherd, Gordon; Sternberg, Paul W.; Martone, Maryann E.
2009-01-01
The overarching goal of the NIF (Neuroscience Information Framework) project is to be a one-stop-shop for Neuroscience. This paper provides a technical overview of how the system is designed. The technical goal of the first version of the NIF system was to develop an information system that a neuroscientist can use to locate relevant information from a wide variety of information sources by simple keyword queries. Although the user would provide only keywords to retrieve information, the NIF system is designed to treat them as concepts whose meanings are interpreted by the system. Thus, a search for a term should find a record containing synonyms of the term. The system is targeted to find information from web pages, publications, databases, web sites built upon databases, XML documents and any other modality in which such information may be published. We have designed a system to achieve this functionality. A central element in the system is an ontology called NIFSTD (for NIF Standard) constructed by amalgamating a number of known and newly developed ontologies. NIFSTD is used by our ontology management module, called OntoQuest, to perform ontology-based search over data sources. The NIF architecture currently provides three different mechanisms for searching heterogeneous data sources including relational databases, web sites, XML documents and full text of publications. Version 1.0 of the NIF system is currently in beta test and may be accessed through http://nif.nih.gov. PMID:18958629
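The concept-based search the abstract describes, where a keyword is interpreted against an ontology so that records containing synonyms are still found, can be sketched in a few lines. The synonym table and records below are invented toy data; NIFSTD and OntoQuest operate over full ontologies and heterogeneous sources, not an in-memory list.

```python
# Toy ontology fragment: a query term maps to its known synonyms
# (hypothetical entries, standing in for NIFSTD).
synonyms = {
    "neuron": {"neuron", "nerve cell", "neurone"},
}

# Hypothetical records drawn from heterogeneous sources.
records = [
    "Morphology of the nerve cell in cortex",
    "Glial density measurements",
]

def concept_search(term):
    """Return records matching the term or any of its synonyms."""
    terms = synonyms.get(term, {term})
    return [r for r in records if any(t in r.lower() for t in terms)]

print(concept_search("neuron"))
```

Note that a plain substring search for "neuron" would miss the first record; expanding the query through the ontology is what recovers it.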
Gupta, Amarnath; Bug, William; Marenco, Luis; Qian, Xufei; Condit, Christopher; Rangarajan, Arun; Müller, Hans Michael; Miller, Perry L; Sanders, Brian; Grethe, Jeffrey S; Astakhov, Vadim; Shepherd, Gordon; Sternberg, Paul W; Martone, Maryann E
2008-09-01
The overarching goal of the NIF (Neuroscience Information Framework) project is to be a one-stop-shop for Neuroscience. This paper provides a technical overview of how the system is designed. The technical goal of the first version of the NIF system was to develop an information system that a neuroscientist can use to locate relevant information from a wide variety of information sources by simple keyword queries. Although the user would provide only keywords to retrieve information, the NIF system is designed to treat them as concepts whose meanings are interpreted by the system. Thus, a search for a term should find a record containing synonyms of the term. The system is targeted to find information from web pages, publications, databases, web sites built upon databases, XML documents and any other modality in which such information may be published. We have designed a system to achieve this functionality. A central element in the system is an ontology called NIFSTD (for NIF Standard) constructed by amalgamating a number of known and newly developed ontologies. NIFSTD is used by our ontology management module, called OntoQuest, to perform ontology-based search over data sources. The NIF architecture currently provides three different mechanisms for searching heterogeneous data sources including relational databases, web sites, XML documents and full text of publications. Version 1.0 of the NIF system is currently in beta test and may be accessed through http://nif.nih.gov.
Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System
Beruvides, Gerardo
2017-01-01
Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported on the interaction between SCANeR software and Matlab/Simulink. To the best of the authors’ knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organization map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organized map in snowy conditions. PMID:28906450
Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System.
Castaño, Fernando; Beruvides, Gerardo; Haber, Rodolfo E; Artuñedo, Antonio
2017-09-14
Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported on the interaction between SCANeR software and Matlab/Simulink. To the best of the authors' knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organization map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organized map in snowy conditions.
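The paper's practical takeaway, that no single classifier wins under every weather condition, suggests a simple dispatch: pick the detector that scored best for the current condition. The sketch below does exactly that; the accuracy figures are illustrative placeholders, not the values the authors report from the SCANeR database.

```python
# Hypothetical per-condition validation accuracies for the three
# detectors compared in the paper (MLP, SVM, self-organizing map).
accuracy = {
    "sunny": {"mlp": 0.95, "svm": 0.91, "som": 0.88},
    "rainy": {"mlp": 0.84, "svm": 0.90, "som": 0.86},
    "snowy": {"mlp": 0.80, "svm": 0.82, "som": 0.89},
}

def best_detector(condition):
    """Select the obstacle detector with the highest accuracy for a condition."""
    scores = accuracy[condition]
    return max(scores, key=scores.get)

print({c: best_detector(c) for c in accuracy})
```

With the placeholder table above the dispatch reproduces the paper's pattern: MLP for sunny, SVM for rainy, SOM for snowy conditions.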
Study on Full Supply Chain Quality and Safety Traceability Systems for Cereal and Oil Products
NASA Astrophysics Data System (ADS)
Liu, Shihong; Zheng, Huoguo; Meng, Hong; Hu, Haiyan; Wu, Jiangshou; Li, Chunhua
The global food industry and governments in many countries are putting increasing emphasis on the establishment of food traceability systems. Food traceability has become an effective tool in food safety management. Addressing the major quality problems of cereal and oil products in the production, processing, warehousing, distribution and other links of the supply chain, this paper first proposes a new traceability framework that combines the information flow with critical control points and quality indicators. It then introduces the traceability database design and the data access mode used to realize the framework. In practice, designing codes for traced goods is challenging, so this paper puts forward a code system based on the UCC/EAN-128 standard. Middleware and electronic terminal design are also briefly introduced to complete the traceability system for cereal and oil products.
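To illustrate the kind of code system the abstract mentions, the sketch below composes a traceability string from GS1/UCC/EAN-128-style Application Identifiers. The AI semantics follow the GS1 convention (01 = GTIN, 11 = production date, 10 = batch/lot), but the field values are made up, and a real barcode additionally needs FNC1 separators and check-digit handling by the printing library; none of this is the paper's actual code design.

```python
def build_code(gtin, production_date, batch):
    """Compose a human-readable GS1-128-style element string.

    gtin: 14-digit item number (AI 01)
    production_date: YYMMDD (AI 11)
    batch: batch/lot identifier (AI 10)
    """
    return "(01){}(11){}(10){}".format(gtin, production_date, batch)

# Hypothetical values for a lot of cereal product.
code = build_code("09501101530003", "241005", "LOT42")
print(code)  # (01)09501101530003(11)241005(10)LOT42
```

Encoding lot and date alongside the item number is what lets a scan at any link of the supply chain be joined back to the quality records stored in the traceability database.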
Practice-Based Knowledge Discovery for Comparative Effectiveness Research: An Organizing Framework
Lucero, Robert J.; Bakken, Suzanne
2014-01-01
Electronic health information systems can increase the ability of health-care organizations to investigate the effects of clinical interventions. The authors present an organizing framework that integrates outcomes and informatics research paradigms to guide knowledge discovery in electronic clinical databases. They illustrate its application using the example of hospital acquired pressure ulcers (HAPU). The Knowledge Discovery through Informatics for Comparative Effectiveness Research (KDI-CER) framework was conceived as a heuristic to conceptualize study designs and address potential methodological limitations imposed by using a single research perspective. Advances in informatics research can play a complementary role in advancing the field of outcomes research including CER. The KDI-CER framework can be used to facilitate knowledge discovery from routinely collected electronic clinical data. PMID:25278645
Fossil-Fuel CO2 Emissions Database and Exploration System
NASA Astrophysics Data System (ADS)
Krassovski, M.; Boden, T.
2012-04-01
The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) quantifies the release of carbon from fossil-fuel use and cement production each year at global, regional, and national spatial scales. These estimates are vital to climate change research given the strong evidence suggesting fossil-fuel emissions are responsible for unprecedented levels of carbon dioxide (CO2) in the atmosphere. The CDIAC fossil-fuel emissions time series are based largely on annual energy statistics published for all nations by the United Nations (UN). Publications containing historical energy statistics make it possible to estimate fossil-fuel CO2 emissions back to 1751, before the Industrial Revolution. From these core fossil-fuel CO2 emission time series, CDIAC has developed a number of additional data products to satisfy modeling needs and to address other questions aimed at improving our understanding of the global carbon cycle budget. For example, CDIAC also produces a time series of gridded fossil-fuel CO2 emission estimates and isotopic (e.g., 13C) emission estimates. The gridded data are generated using the methodology described in Andres et al. (2011) and provide monthly and annual estimates for 1751-2008 at 1° latitude by 1° longitude resolution. These gridded emission estimates are being used in the latest IPCC Scientific Assessment (AR4). Isotopic estimates are possible thanks to detailed information for individual nations regarding the carbon content of select fuels (e.g., the carbon signature of natural gas from Russia). CDIAC has recently developed a relational database to house these baseline emission estimates and associated derived products, and a web-based interface to help users worldwide query these data holdings.
Users can identify, explore and download desired CDIAC fossil-fuel CO2 emissions data. This presentation introduces the architecture and design of the new relational database and web interface, summarizes the present state and functionality of the Fossil-Fuel CO2 Emissions Database and Exploration System, and highlights future plans for expansion of the relational database and interface.
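A minimal relational sketch of such a national emissions time series, using the standard `sqlite3` module, is shown below. The table name, columns, and figures are illustrative placeholders, not CDIAC's actual schema or data.

```python
import sqlite3

# In-memory database standing in for the relational store.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE emissions (
        nation TEXT,
        year   INTEGER,
        co2_kt REAL,          -- emissions in thousand metric tons (placeholder unit)
        PRIMARY KEY (nation, year)
    )
""")
con.executemany(
    "INSERT INTO emissions VALUES (?, ?, ?)",
    [("Exampleland", 2007, 1000.0), ("Exampleland", 2008, 1050.0)],
)

# Typical exploration query: one nation's emission trajectory over time.
rows = con.execute(
    "SELECT year, co2_kt FROM emissions WHERE nation = ? ORDER BY year",
    ("Exampleland",),
).fetchall()
print(rows)  # [(2007, 1000.0), (2008, 1050.0)]
```

Keying each record on (nation, year) is what makes the web interface's identify/explore/download workflow a matter of simple parameterized queries.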
GIS Application System Design Applied to Information Monitoring
NASA Astrophysics Data System (ADS)
Qun, Zhou; Yujin, Yuan; Yuena, Kang
A natural environment information management system involves online instrument monitoring, data communications, database establishment, information management software development and so on. Its core lies in collecting effective and reliable environmental information, increasing the utilization and sharing of that information through advanced information technology, and providing a timely and scientific foundation for environmental monitoring and management. This thesis adopts C# plug-in application development and uses a complete set of embedded GIS component and tool libraries provided by GIS Engine to build the core of a plug-in GIS application framework, namely the design and implementation of the framework host program and each functional plug-in, as well as the design and implementation of the plug-in GIS application framework platform. Exploiting the advantages of dynamically loaded plug-in configuration, the thesis quickly establishes GIS applications through visualized component collaborative modeling and realizes GIS application integration. The developed platform is applicable to any integration task involving GIS applications (on the ESRI platform) and can serve as a base development platform for GIS application development.
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata field). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
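The ranking step, combining per-attribute degrees of match into one relevance score per reference document, can be sketched with a simple weighted average. This stands in for the paper's Bayesian-network and Dezert-Smarandache fusion; the documents, attributes, weights and scores below are all hypothetical, with the weights playing the role of confidence in each source of information.

```python
# Confidence in each attribute (source of information), hypothetical.
weights = {"image_feature": 0.7, "metadata": 0.3}

# Degree of match per attribute, for each reference document (hypothetical).
matches = {
    "doc1": {"image_feature": 0.9, "metadata": 0.4},
    "doc2": {"image_feature": 0.5, "metadata": 0.8},
}

def fused_score(doc):
    """Fuse per-attribute degrees of match into one relevance score."""
    m = matches[doc]
    return sum(weights[a] * m[a] for a in weights)

# Rank reference documents by decreasing relevance for the query.
ranking = sorted(matches, key=fused_score, reverse=True)
print(ranking)  # ['doc1', 'doc2']  (0.75 vs 0.59)
```

Weighting the image-derived evidence more heavily than the metadata is one concrete way of "modelling confidence in each source", which is exactly the leverage the Dezert-Smarandache variant adds over naive fusion.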
Management system for the SND experiments
NASA Astrophysics Data System (ADS)
Pugachev, K.; Korol, A.
2017-09-01
A new management system for the SND detector experiments (at the VEPP-2000 collider in Novosibirsk) has been developed. We describe here the interaction between a user and the SND databases, which contain experiment configuration, conditions and metadata. The new system is designed in a client-server architecture and has several logical layers corresponding to the users' roles. A new template engine was created, and a web application was implemented using the Node.js framework. At present, the application provides: showing and editing configuration; showing experiment metadata and the experiment conditions data index; and showing the SND log (prototype).
NASA Technical Reports Server (NTRS)
Arnold, Steven M. (Editor); Wong, Terry T. (Editor)
2011-01-01
Topics covered include: An Annotative Review of Multiscale Modeling and its Application to Scales Inherent in the Field of ICME; and A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures.
A data management infrastructure for bridge monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Byun, Jaewook; Kim, Daeyoung; Sohn, Hoon; Bae, In Hwan; Law, Kincho H.
2015-04-01
This paper discusses a data management infrastructure framework for bridge monitoring applications. As sensor technologies mature and become economically affordable, their deployment for bridge monitoring will continue to grow. Data management becomes a critical issue not only for storing the sensor data but also for integrating with the bridge model to support other functions, such as management, maintenance and inspection. The focus of this study is on the effective data management of bridge information and sensor data, which is crucial to structural health monitoring and life cycle management of bridge structures. We review the state of the art of bridge information modeling and sensor data management, and propose a data management framework for bridge monitoring based on NoSQL database technologies that have been shown useful in handling high-volume, time-series data and in flexibly dealing with unstructured data schema. Specifically, Apache Cassandra and MongoDB are deployed for the prototype implementation of the framework. This paper describes the database design for an XML-based Bridge Information Modeling (BrIM) schema, and the representation of sensor data using Sensor Model Language (SensorML). The proposed prototype data management framework is validated using data collected from the Yeongjong Bridge in Incheon, Korea.
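One pattern commonly used for sensor streams in NoSQL stores like Cassandra or MongoDB is time bucketing: grouping readings into one document per sensor per hour. The sketch below shows the idea with plain dicts instead of a live database; the sensor IDs, readings and document shape are hypothetical and not the paper's BrIM/SensorML schema.

```python
import json
from collections import defaultdict

# Buckets keyed by (sensor, hour): one "document" per sensor per hour.
buckets = defaultdict(list)

def store(sensor_id, timestamp, value):
    """Append a reading to its sensor's hourly bucket."""
    hour = timestamp[:13]            # e.g. "2015-04-01T09" from an ISO timestamp
    buckets[(sensor_id, hour)].append({"t": timestamp, "v": value})

store("accel-01", "2015-04-01T09:00:00", 0.12)
store("accel-01", "2015-04-01T09:00:01", 0.15)

# The document a NoSQL store would hold for that sensor-hour.
doc = {"sensor": "accel-01", "hour": "2015-04-01T09",
       "readings": buckets[("accel-01", "2015-04-01T09")]}
print(json.dumps(doc))
```

Bucketing keeps each write append-only and bounds document size, which is why it suits the high-volume time-series workloads the paper targets.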
Lin, Chun-Yu; Zhang, Lipeng; Zhao, Zhenghang; Xia, Zhenhai
2017-05-01
Covalent organic frameworks (COFs), an emerging class of framework materials linked by covalent bonds, hold potential for various applications such as efficient electrocatalysts, photovoltaics, and sensors. To rationally design COF-based electrocatalysts for oxygen reduction and evolution reactions in fuel cells and metal-air batteries, activity descriptors, derived from orbital energy and bonding structures, are identified with first-principles calculations for the COFs, which correlate COF structures with their catalytic activities. The calculations also predict that alkaline-earth metal-porphyrin COFs could catalyze the direct production of H2O2, a green oxidizer and an energy carrier. These predictions are supported by experimental data, and the design principles derived from the descriptors provide an approach for rational design of new electrocatalysts for both clean energy conversion and green oxidizer production. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Chanut, Nicolas; Bourrelly, Sandrine; Kuchta, Bogdan; Serre, Christian; Chang, Jong-San; Wright, Paul A; Llewellyn, Philip L
2017-04-10
A simple laboratory-scale protocol that enables the evaluation of the effect of adsorbed water on CO 2 uptake is proposed. 45 metal-organic frameworks (MOFs) were compared against reference zeolites and active carbons. It is possible to classify materials with different trends in CO 2 uptake with varying amounts of pre-adsorbed water, including cases in which an increase in CO 2 uptake is observed for samples with a given amount of pre-adsorbed water. Comparing loss in CO 2 uptake between "wet" and "dry" samples with the Henry constant calculated from the water adsorption isotherm results in a semi-logarithmic trend for the majority of samples allowing predictions to be made. Outliers from this trend may be of particular interest and an explanation for the behaviour for each of the outliers is proposed. This thus leads to propositions for designing or choosing MOFs for CO 2 capture in applications where humidity is present. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Kumar, Swapna; Ritzhaupt, Albert D.
2014-01-01
A cohort-based online professional doctorate program that consisted of both online coursework and research activities was designed using Garrison et al's community of inquiry (CoI) framework. The evaluation of the program proved a challenge because all existing CoI assessment methods in the past have dealt with online courses, not with online…
ERIC Educational Resources Information Center
Pellas, Nikolaos; Boumpa, Anna
2017-01-01
This study seeks to investigate the effect of pre-service foreign language teachers' interactions on their continuing professional development (CPD), using a theoretical instructional design framework consisted of the three presence indicators of a Community of Inquiry (CoI) model and the Jigsaw teaching technique. The investigation was performed…
Suchard, Marc A; Zorych, Ivan; Simpson, Shawn E; Schuemie, Martijn J; Ryan, Patrick B; Madigan, David
2013-10-01
The self-controlled case series (SCCS) offers potential as a statistical method for risk identification involving medical products from large-scale observational healthcare data. However, analytic design choices remain in encoding the longitudinal health records into the SCCS framework, and its risk identification performance across real-world databases is unknown. To evaluate the performance of SCCS and its design choices as a tool for risk identification in observational healthcare data, we examined the risk identification performance of SCCS across five design choices using 399 drug-health outcome pairs in five real observational databases (four administrative claims and one electronic health records). In these databases, the pairs involve 165 positive controls and 234 negative controls. We also consider several synthetic databases with known relative risks between drug-outcome pairs. We evaluate risk identification performance by estimating the area under the receiver-operator characteristics curve (AUC), and bias and coverage probability in the synthetic examples. The SCCS achieves strong predictive performance: twelve of the twenty health outcome-database scenarios return AUCs >0.75 across all drugs. Including all adverse events instead of just the first per patient, and applying a multivariate adjustment for concomitant drug use, are the most important design choices. However, the SCCS as applied here returns relative risk point-estimates biased towards the null value of 1 with low coverage probability. The SCCS, recently extended to apply a multivariate adjustment for concomitant drug use, offers promise as a statistical tool for risk identification in large-scale observational healthcare databases. Poor estimator calibration dampens enthusiasm, but ongoing work should correct this shortcoming.
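The AUC used above to score risk identification is equivalent to the Mann-Whitney probability that a positive control outranks a negative control. A minimal sketch (the scores below are invented, not SCCS estimates from the study):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive control scores higher
    than a randomly chosen negative control (ties count as 1/2)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores (illustrative only): e.g. relative-risk estimates for
# known positive and negative drug-outcome control pairs.
pos = [2.1, 1.8, 1.4, 3.0]
neg = [0.9, 1.1, 1.5, 0.8]
```

A method whose estimates perfectly separate positive from negative controls would score 1.0; random guessing scores 0.5, so the >0.75 thresholds reported above sit well above chance.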
NASA Astrophysics Data System (ADS)
Ceriotti, G.; Porta, G. M.; Geloni, C.; Dalla Rosa, M.; Guadagnini, A.
2017-09-01
We develop a methodological framework and mathematical formulation which yields estimates of the uncertainty associated with the amounts of CO2 generated by Carbonate-Clays Reactions (CCR) in large-scale subsurface systems, to assist characterization of the main features of this geochemical process. Our approach couples a one-dimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from mineral and pore-water interaction. The modeling framework we propose allows (i) estimating the depth at which the source of gases is located and (ii) quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and software. We also analyze the impact of the relative abundance of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in real sedimentary formations. Our results are conducive to the probabilistic assessment of (i) the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, and (ii) the order of magnitude of the CO2 generation rate that can be associated with CCR processes.
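The propagation of equilibrium-constant uncertainty to a CO2 output can be illustrated with a Monte Carlo sketch. The forward model, parameter values, and distribution below are toy assumptions for illustration, not the paper's coupled compaction-chemistry model:

```python
import random
import statistics

random.seed(0)

# Toy forward model (an assumption): CO2 partial pressure from a single
# equilibrium, pCO2 = K * Q, where the equilibrium constant is
# uncertain: log10 K ~ Normal(mu, sigma).
def pco2(log10_k, activity_quotient=0.01):
    return 10.0 ** log10_k * activity_quotient

mu, sigma = 1.0, 0.3   # assumed mean and spread of log10 K
samples = [pco2(random.gauss(mu, sigma)) for _ in range(10000)]

mean_p = statistics.mean(samples)                 # expected output
p95 = sorted(samples)[int(0.95 * len(samples))]   # 95th percentile
```

Because the uncertain parameter enters through an exponent, the output distribution is strongly skewed: the mean and upper percentiles sit well above the median, which is exactly the kind of behavior a probabilistic assessment must capture.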
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Bo; Jiang, Nan; Sheng, Meili
2015-11-05
The design of active, robust, and nonprecious electrocatalysts with both H2 and O2 evolution reaction (HER and OER) activities for overall water splitting is highly desirable but remains a grand challenge. Here we report a facile two-step method to synthesize porous Co-P/NC nanopolyhedrons composed of CoPx (a mixture of CoP and Co2P) nanoparticles embedded in N-doped carbon matrices as electrocatalysts for overall water splitting. The Co-P/NC catalysts were prepared by direct carbonization of a Co-based zeolitic imidazolate framework (ZIF-67) followed by phosphidation. Benefiting from the large specific surface area, controllable pore texture, and high nitrogen content of ZIFs (a subclass of metal-organic frameworks), the optimal Co-P/NC showed a high specific surface area of 183 m² g⁻¹ and large mesopores, and exhibited remarkable catalytic performance for both HER and OER in 1.0 M KOH, affording a current density of 10 mA cm⁻² at low overpotentials of −154 mV for HER and 319 mV for OER, respectively. Furthermore, a Co-P/NC-based alkaline electrolyzer reached 165 mA cm⁻² at 2.0 V, superior to a Pt/IrO2 couple, along with strong stability. Various characterization techniques, including X-ray absorption spectroscopy (XAS), revealed that the superior activity and strong stability of Co-P/NC originated from its 3D interconnected mesoporosity with high specific surface area, high conductivity, and the synergistic effect of CoPx encapsulated within N-doped carbon matrices.
ERIC Educational Resources Information Center
Rice, Michael; Gladstone, William; Weir, Michael
2004-01-01
We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a…
Multidisciplinary analysis and design of printed wiring boards
NASA Astrophysics Data System (ADS)
Fulton, Robert E.; Hughes, Joseph L.; Scott, Waymond R., Jr.; Umeagukwu, Charles; Yeh, Chao-Pin
1991-04-01
Modern printed wiring board design depends on electronic prototyping using computer-based simulation and design tools. Existing electrical computer-aided design (ECAD) tools emphasize circuit connectivity with only rudimentary analysis capabilities. This paper describes a prototype integrated PWB design environment, denoted Thermal Structural Electromagnetic Testability (TSET), being developed at Georgia Tech in collaboration with companies in the electronics industry. TSET provides design guidance based on enhanced electrical and mechanical CAD capabilities, including electromagnetic modeling, testability analysis, thermal management, and solid mechanics analysis. TSET development is based on a strong analytical and theoretical science base and incorporates an integrated information framework and a common database design built on a systematic, structured methodology.
Exploring molecular networks using MONET ontology.
Silva, João Paulo Müller da; Lemke, Ney; Mombach, José Carlos; Souza, José Guilherme Camargo de; Sinigaglia, Marialva; Vieira, Renata
2006-03-31
The description of the complex molecular network responsible for cell behavior requires new tools to integrate large quantities of experimental data in the design of biological information systems. These tools could be used in the characterization of these networks and in the formulation of relevant biological hypotheses. The building of an ontology is a crucial step because it integrates into a coherent framework the concepts necessary to accomplish such a task. We present MONET (molecular network), an extensible ontology and an architecture designed to facilitate the integration of data originating from different public databases in a single, well-documented relational database that is compatible with the MONET formal definition. We also present an example of an application that can easily be implemented using these tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allan, Benjamin A.
We report on the use and design of a portable, extensible performance data collection tool motivated by the modeling needs of the high performance computing systems co-design community. The lightweight performance data collectors with Eiger support are intended as a tailorable tool, not a shrink-wrapped library product, as profiling needs vary widely. A single code markup scheme is reported which, based on compilation flags, can send performance data from parallel applications to CSV files, to an Eiger mysql database, or (in a non-database environment) to flat files for later merging and loading on a host with mysql available. The tool supports C, C++, and Fortran applications.
ExaSAT: An exascale co-design tool for performance modeling
Unat, Didem; Chan, Cy; Zhang, Weiqun; ...
2015-02-09
One of the emerging challenges in designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework, which automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework's ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.
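A data-movement-centric analytic model of the kind such frameworks automate can be sketched as a roofline-style bound. The hardware parameters below are placeholders for illustration, not ExaSAT's actual inputs or outputs:

```python
# Minimal roofline-style estimate: predicted kernel time is bounded by
# compute or by data movement, whichever is slower. Peak FLOP rate and
# memory bandwidth are assumed placeholder values.
def kernel_time(flops, bytes_moved, peak_flops=1e12, mem_bw=1e11):
    """Seconds, assuming perfect overlap of compute and transfers."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

def bottleneck(flops, bytes_moved, peak_flops=1e12, mem_bw=1e11):
    """Classify a kernel by its arithmetic intensity (flops per byte)
    relative to the machine's ridge point."""
    ai = flops / bytes_moved
    ridge = peak_flops / mem_bw
    return "compute-bound" if ai >= ridge else "memory-bound"
```

Sweeping `peak_flops` and `mem_bw` across candidate designs gives the fast what-if feedback the abstract describes, without cycle-accurate simulation.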
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jingtian; Luo, Deliang; Yang, Chengju
2013-07-15
Three copper(II) imidazolate frameworks were synthesized by a hydrothermal (or precipitation) reaction. The catalysts were characterized by X-ray diffraction (XRD), nitrogen adsorption, transmission electron microscopy (TEM), ultraviolet-visible spectroscopy (UV-vis), Fourier transform infrared spectroscopy (FTIR), and thermogravimetry (TG). The photocatalytic activities of the samples for the reduction of CO2 into methanol and the degradation of methylene blue (MB) under visible light irradiation were also investigated. The results show that the as-prepared samples exhibit good photocatalytic activities for the reduction of carbon dioxide into methanol with water and the degradation of MB under visible light irradiation. The orthorhombic copper(II) imidazolate framework, with a band gap of 2.49 eV and green (G) color, has the best photocatalytic activity for reduction of CO2 into methanol, 1712.7 μmol/g over 5 h, about three times that of the monoclinic copper(II) imidazolate framework with a band gap of 2.70 eV and blue (J) color. The degradation kinetics of MB over the three photocatalysts fitted well to the apparent first-order rate equation, and the apparent rate constants for the degradation of MB over G, J, and P (with pink color) are 0.0038, 0.0013, and 0.0016 min⁻¹, respectively. The synergistic effects of the smallest band gap and the orthorhombic crystal phase structure are the critical factors for the better photocatalytic activity of G. Moreover, the three frameworks are stable up to 250 °C. The investigation of Cu-based zeolitic imidazolate frameworks may provide a design strategy for a new class of photocatalysts for the degradation of contaminants, the reduction of CO2, and even water splitting into hydrogen and oxygen under visible light. - Graphical abstract: Carbon dioxide was reduced into methanol with water over copper(II) imidazolate frameworks under visible light irradiation.
- Highlights: • Three copper(II) imidazolate frameworks were first applied in the photo-reduction of CO2. • The photocatalytic activities of the frameworks depend on their band gaps and phase structures. • The photocatalytic activity of the orthorhombic framework is 3 times that of the monoclinic framework. • The degradation kinetics of MB over the three photocatalysts followed the first-order rate equation. • The largest yield for reduction of CO2 into methanol, on the green framework, was 1712.7 μmol/g over 5 h.
NASA Astrophysics Data System (ADS)
Zhao, Siqi; Zhang, Guanglong; Xia, Shuwei; Yu, Liangmin
2018-06-01
As a group of diversified frameworks, quinazoline derivatives display a broad range of biological functions, especially as anticancer agents. To investigate the quantitative structure-activity relationship, 3D-QSAR models were generated with 24 quinazoline scaffold molecules. The experimental and predicted pIC50 values for both training and test set compounds showed good correlation, which demonstrated the robustness and reliability of the generated QSAR models. The most effective CoMFA and CoMSIA models were obtained with non-cross-validated correlation coefficients (r²ncv) of 1.00 (both) and leave-one-out coefficients (q²) of 0.61 and 0.59, respectively. The predictive abilities of CoMFA and CoMSIA were quite good, with predictive correlation coefficients (r²pred) of 0.97 and 0.91. In addition, the statistical results of CoMFA and CoMSIA were used to design new quinazoline molecules.
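The predictive correlation coefficient r²pred reported above is conventionally computed as 1 − PRESS/SD over the external test set. A minimal sketch, with invented pIC50 values rather than data from the study:

```python
def r2_pred(y_test_obs, y_test_pred, y_train_obs):
    """Predictive r^2 = 1 - PRESS/SD, where PRESS is the sum of squared
    prediction errors on the test set and SD is the sum of squared
    deviations of test-set observations from the training-set mean."""
    mean_train = sum(y_train_obs) / len(y_train_obs)
    press = sum((o - p) ** 2 for o, p in zip(y_test_obs, y_test_pred))
    sd = sum((o - mean_train) ** 2 for o in y_test_obs)
    return 1.0 - press / sd

# Hypothetical pIC50 values (illustration only):
obs = [6.2, 7.1, 5.8]     # experimental test-set activities
pred = [6.0, 7.0, 6.0]    # model predictions for the test set
train = [5.0, 6.0, 7.0, 8.0]  # training-set activities
```

A value near 1 means the model predicts test-set activities far better than simply guessing the training-set mean.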
Sequence modelling and an extensible data model for genomic database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Peter Wei-Der
1992-01-01
The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP, and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that can incorporate the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.
Analysing and Rationalising Molecular and Materials Databases Using Machine-Learning
NASA Astrophysics Data System (ADS)
de, Sandip; Ceriotti, Michele
Computational materials design promises to greatly accelerate the process of discovering new or more performant materials. Several collaborative efforts are contributing to this goal by building databases of structures, containing between thousands and millions of distinct hypothetical compounds, whose properties are computed by high-throughput electronic-structure calculations. The complexity and sheer amount of information have made manual exploration, interpretation and maintenance of these databases a formidable challenge, making it necessary to resort to automatic analysis tools. Here we will demonstrate how, starting from a measure of (dis)similarity between database items built from a combination of local environment descriptors, it is possible to apply hierarchical clustering algorithms, as well as dimensionality reduction methods such as sketchmap, to analyse, classify and interpret trends in molecular and materials databases, as well as to detect inconsistencies and errors. Thanks to the agnostic and flexible nature of the underlying metric, we will show how our framework can be applied transparently to different kinds of systems, ranging from organic molecules and oligopeptides to inorganic crystal structures as well as molecular crystals. Funded by the National Center for Computational Design and Discovery of Novel Materials (MARVEL) and the Swiss National Science Foundation.
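Hierarchical clustering over a precomputed (dis)similarity matrix, as described above, can be sketched in a few lines. This is a naive single-linkage toy with an invented 4-item distance matrix; real analyses of large databases would use an optimized library such as scipy.cluster.hierarchy:

```python
# Naive single-linkage agglomerative clustering over a precomputed
# distance (dissimilarity) matrix. Repeatedly merges the two clusters
# whose closest members are nearest, until n_clusters remain.
def single_linkage(dist, n_clusters):
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    return [sorted(c) for c in clusters]

# Toy distance matrix for 4 "structures": items 0 and 1 are similar,
# items 2 and 3 are similar, and the two pairs are far apart.
D = [[0.0, 0.1, 0.9, 0.8],
     [0.1, 0.0, 0.85, 0.9],
     [0.9, 0.85, 0.0, 0.2],
     [0.8, 0.9, 0.2, 0.0]]
```

Because the algorithm only consumes pairwise distances, it is agnostic to how the similarity measure was built, which is exactly the flexibility the abstract emphasizes.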
Toyao, Takashi; Fujiwaki, Mika; Miyahara, Kenta; Kim, Tae-Ho; Horiuchi, Yu; Matsuoka, Masaya
2015-11-01
Various N-doped nanoporous carbons containing metal species were prepared by direct thermal conversion of zeolitic imidazolate frameworks (ZIFs; ZIF-7, -8, -9, and -67) at different temperatures (600, 800, and 1000 °C). These materials were utilized as bifunctional acid-base catalysts to promote the reaction of CO2 with epoxides to form cyclic carbonates under 0.6 MPa of CO2 at 80 °C. The catalyst generated by thermal conversion of ZIF-9 at 600 °C (C600-ZIF-9) was found to exhibit a higher catalytic activity than the other ZIFs, other conventional catalysts, and other metal-organic framework catalysts. The results of various characterization techniques including elemental analysis, X-ray diffraction, X-ray photoelectron spectroscopy, X-ray absorption spectroscopy, and transmission electron microscopy show that C600-ZIF-9 contains partly oxidized Co nanoparticles and N species. Temperature-programmed desorption measurements by using CO2 and NH3 as probe molecules revealed that C600-ZIF-9 has both Lewis acid and Lewis base catalytic sites. Finally, the substrate scope was extended to seven other kinds of epoxides. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Lee, Hiang Kwee; Lee, Yih Hong; Morabito, Joseph V; Liu, Yejing; Koh, Charlynn Sher Lin; Phang, In Yee; Pedireddy, Srikanth; Han, Xuemei; Chou, Lien-Yang; Tsung, Chia-Kuang; Ling, Xing Yi
2017-08-23
We demonstrate a molecular-level observation of driving CO2 molecules into a quasi-condensed phase on the solid surface of metal nanoparticles (NPs) under ambient conditions of 1 bar and 298 K. This is achieved via CO2 accumulation at the interface between a metal-organic framework (MOF) and a metal NP surface, formed by coating the NPs with a MOF. Using real-time surface-enhanced Raman scattering spectroscopy, a >18-fold enhancement of the surface coverage of CO2 is observed at the interface. The high surface concentration brings CO2 molecules into close proximity with the probe molecules on the metal surface (4-methylbenzenethiol) and transforms the CO2 molecules into a bent conformation without the formation of chemical bonds. Such a linear-to-bent transition of CO2 is unprecedented at ambient conditions in the absence of chemical bond formation, and is commonly observed only in pressurized systems (>10⁵ bar). The molecular-level observation of a quasi-condensed phase induced by MOF coating could impact the future design of hybrid materials in diverse applications, including catalytic CO2 conversion and ambient solid-gas operation.
Shale gas development: a smart regulation framework.
Konschnik, Katherine E; Boling, Mark K
2014-01-01
Advances in directional drilling and hydraulic fracturing have sparked a natural gas boom from shale formations in the United States. Regulators face a rapidly changing industry comprised of hundreds of players, operating tens of thousands of wells across 30 states. They are often challenged to respond by budget cuts, a brain drain to industry, regulations designed for conventional gas developments, insufficient information, and deeply polarized debates about hydraulic fracturing and its regulation. As a result, shale gas governance remains a halting patchwork of rules, undermining opportunities to effectively characterize and mitigate development risk. The situation is dynamic, with research and incremental regulatory advances underway. Into this mix, we offer the CO/RE framework--characterization of risk, optimization of mitigation strategies, regulation, and enforcement--to design tailored governance strategies. We then apply CO/RE to three types of shale gas risks, to illustrate its potential utility to regulators.
2012-06-01
The Hibernate framework, a technology originally developed on the Java platform, supports rapid development of a data access layer without requiring a… [fragmented table-of-contents entries omitted] …protection from security threats; easy aggregate management operations via file tags. We recommend using Hibernate technology for object…
An Approach for Selecting a Theoretical Framework for the Evaluation of Training Programs
ERIC Educational Resources Information Center
Tasca, Jorge Eduardo; Ensslin, Leonardo; Ensslin, Sandra Rolim; Alves, Maria Bernardete Martins
2010-01-01
Purpose: This research paper proposes a method for selecting references related to a research topic, and seeks to exemplify it for the case of a study evaluating training programs. The method is designed to identify references with high academic relevance in databases accessed via the internet, using a bibliometric analysis to sift the selected…
XRootD popularity on hadoop clusters
NASA Astrophysics Data System (ADS)
Meoni, Marco; Boccali, Tommaso; Magini, Nicolò; Menichetti, Luca; Giordano, Domenico;
2017-10-01
Performance data and metadata of the computing operations at the CMS experiment are collected through a distributed monitoring infrastructure, currently relying on a traditional Oracle database system. This paper shows how to harness Big Data architectures in order to improve the throughput and efficiency of such monitoring. A large set of operational data - user activities, job submissions, resources, file transfers, site efficiencies, software releases, network traffic, machine logs - is injected into a readily available Hadoop cluster via several data streamers. The collected metadata is further organized by running fast arbitrary queries; this offers the ability to test several MapReduce-based frameworks and measure the system speed-up when compared to the original database infrastructure. By leveraging a quality Hadoop data store and enabling an analytics framework on top, it is possible to design a mining platform to predict dataset popularity and discover patterns and correlations.
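The popularity aggregation described above follows the classic map/reduce shape: map each monitoring record to a (dataset, 1) pair, then reduce by summing per key. A minimal sketch with a hypothetical record schema, not the CMS one:

```python
from collections import Counter

# Hypothetical monitoring records (field names are assumptions for
# illustration, not the CMS monitoring schema).
records = [
    {"dataset": "/A", "user": "u1", "bytes": 10},
    {"dataset": "/B", "user": "u2", "bytes": 5},
    {"dataset": "/A", "user": "u3", "bytes": 7},
]

def map_phase(recs):
    """Map: emit one (dataset, 1) pair per access record."""
    for r in recs:
        yield r["dataset"], 1

def reduce_phase(pairs):
    """Reduce: sum the counts per dataset key."""
    popularity = Counter()
    for key, count in pairs:
        popularity[key] += count
    return popularity

pop = reduce_phase(map_phase(records))
```

On a Hadoop cluster the same map and reduce functions would run in parallel over sharded input, which is where the speed-up over a single relational database comes from.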
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Foster, I.; Gawor, J.
In this paper we report on the features of the Java Commodity Grid Kit. The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to support the design of a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit to communicate also with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Linking science and decision making to promote an ecology for the city: practices and opportunities
Morgan Grove; Daniel L. Childers; Michael Galvin; Sarah J. Hines; Tischa Munoz-Erickson; Erika S. Svendsen
2016-01-01
To promote urban sustainability and resilience, there is an increasing demand for actionable science that links science and decision making based on social-ecological knowledge. Approaches, frameworks, and practices for such actionable science are needed and have only begun to emerge. We propose that approaches based on the co-design and co-production of knowledge...
Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K
1999-01-01
A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application poses new and essential problems for software. In particular, protection tools that are sufficient separately become deficient during integration, due to additional links and relationships not considered before. For example, it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools if the object is stored as a record in DB tables. The solution should be sought within the more general application framework, but appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications that use databases. An appropriate formal model, as well as tools for mapping it to a DBMS, are suggested. Remote users connected via global networks are considered as well.
Improved Structural Design and CO2 Capture of Porous Hydroxy-Rich Polymeric Organic Frameworks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidder, Michelle K.; Earl, Lyndsey D.; de Almeida, Valmor F.
2016-04-16
Polymeric organic frameworks (POFs) are tunable and robust porous materials with potential applications in gas capture, catalysis, and separations technologies. A series of new porous POFs have been synthesized from the reaction of phloroglucinol or resorcinol derivatives with aryl aldehyde precursors. The monomers have various molecular shapes, including linear, bent, trigonal, and tetrahedral geometries. Depending on the size and geometric matching of the monomers, the polymers are dominantly microporous with some mesoporous character, or they are non-porous. In addition to standard spectroscopic and surface characterization, the materials were screened as adsorbents for carbon dioxide capture at low pressure (0-1 bar). The best performing material (POF 1D) has a CO2 capture capacity of 9.0 wt % (2.04 mmol g⁻¹) at 298 K and 1 bar, which is comparable to other polymeric organic frameworks. Isosteric heats of adsorption for POF 1A, POF 2A, and POF 2B were found to be dependent on the weight percent of CO2 adsorbed: this suggests there are both chemisorptive and physisorptive components of CO2 capture by the POFs.
Mining of high utility-probability sequential patterns from uncertain databases
Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting
2017-01-01
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations, such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data are collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
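The core measures behind such a framework can be sketched on a toy uncertain sequence database: a pattern's probability is accumulated over the sequences that contain it, alongside its utility. The data, the simplified utility definition (summing the utilities of pattern items per matching sequence, ignoring multiple occurrences), and the threshold semantics below are assumptions for illustration, not the paper's exact formulation:

```python
# Toy uncertain sequence database: each entry is a sequence of
# (item, utility) events plus an existence probability for the sequence.
db = [
    ([("a", 5), ("b", 3), ("c", 2)], 0.9),
    ([("a", 4), ("c", 1)],           0.6),
    ([("b", 2), ("c", 3)],           0.4),
]

def contains(seq, pattern):
    """True if pattern occurs as a subsequence of seq's items."""
    it = iter(item for item, _ in seq)
    return all(p in it for p in pattern)

def utility_and_probability(database, pattern):
    """Accumulate a simplified utility and total probability of the
    pattern over all sequences that contain it."""
    util, prob = 0, 0.0
    for seq, p in database:
        if contains(seq, pattern):
            util += sum(u for item, u in seq if item in pattern)
            prob += p
    return util, prob

u, p = utility_and_probability(db, ("a", "c"))
```

A mined pattern would be kept only if both values clear the user-chosen minimum utility and minimum probability thresholds; pruning strategies discard candidates whose upper bounds already fail these tests.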
The Brain Database: A Multimedia Neuroscience Database for Research and Teaching
Wertheim, Steven L.
1989-01-01
The Brain Database is an information tool designed to aid in the integration of clinical and research results in neuroanatomy and regional biochemistry. It can handle a wide range of data types, including natural images, 2- and 3-dimensional graphics, video, numeric data, and text. It is organized around three main entities: structures, substances, and processes. The database will support a wide variety of graphical interfaces; two sample interfaces have been made. This tool is intended to serve as one component of a system that would allow neuroscientists and clinicians 1) to represent clinical and experimental data within a common framework, 2) to compare results precisely between experiments and among laboratories, 3) to use computing tools as an aid in collaborative work, and 4) to contribute to a shared and accessible body of knowledge about the nervous system.
Yin, Yichao; Liu, Xiaofang; Wei, Xiaojun; Yu, Ronghai; Shui, Jianglan
2016-12-21
A porous carbon nanotube/cobalt nanoparticle (CNTs/Co) composite with dodecahedron morphology was synthesized by in situ pyrolysis of a Co-based zeolitic imidazolate framework in a reducing atmosphere. The morphology and microstructure of the composite can be well tuned by controlling the pyrolysis conditions. At lower pyrolysis temperatures, the CNTs/Co composite is composed of well-dispersed Co nanoparticles and short CNT clusters with a low degree of graphitization. Increasing the pyrolysis temperature or time promotes the growth and graphitization of CNTs and leads to the aggregation of Co nanoparticles. The optimized CNTs/Co composite exhibits strong dielectric and magnetic losses as well as good impedance matching. Interestingly, the CNTs/Co composite displays extremely strong electromagnetic wave absorption, with a maximum reflection loss of -60.4 dB. More importantly, the matching thickness of the absorber is as thin as 1.81 mm, and the filler loading of the composite in the matrix is only 20 wt %. The highly efficient absorption is closely related to the well-designed structure and the synergistic effect between CNTs and Co nanoparticles. The excellent absorbing performance, together with light weight and ultrathin thickness, endows the CNTs/Co composite with potential for application in the electromagnetic wave absorbing field.
Proton conduction in metal-organic frameworks and related modularly built porous solids.
Yoon, Minyoung; Suh, Kyungwon; Natarajan, Srinivasan; Kim, Kimoon
2013-03-04
Proton-conducting materials are an important component of fuel cells. Development of new types of proton-conducting materials is one of the most important issues in fuel-cell technology. Herein, we present newly developed proton-conducting materials, modularly built porous solids, including coordination polymers (CPs) or metal-organic frameworks (MOFs). The designable and tunable nature of the porous materials allows for fast development in this research field. Design and synthesis of the new types of proton-conducting materials and their unique proton-conduction properties are discussed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dhakshinamoorthy, Amarajothi; Asiri, Abdullah M; García, Hermenegildo
2016-04-25
Metal-organic frameworks (MOFs) are crystalline porous materials formed from bi- or multipodal organic linkers and transition-metal nodes. Some MOFs have high structural stability, combined with large flexibility in design and post-synthetic modification. MOFs can be photoresponsive through light absorption by the organic linker or the metal oxide nodes. Photoexcitation of the light absorbing units in MOFs often generates a ligand-to-metal charge-separation state that can result in photocatalytic activity. In this Review we discuss the advantages and uniqueness that MOFs offer in photocatalysis. We present the best practices to determine photocatalytic activity in MOFs and for the deposition of co-catalysts. In particular we give examples showing the photocatalytic activity of MOFs in H2 evolution, CO2 reduction, photooxygenation, and photoreduction. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Watershed-based survey designs
Detenbeck, N.E.; Cincotta, D.; Denver, J.M.; Greenlee, S.K.; Olsen, A.R.; Pitchford, A.M.
2005-01-01
Watershed-based sampling design and assessment tools help serve the multiple goals for water quality monitoring required under the Clean Water Act, including assessment of regional conditions to meet Section 305(b), identification of impaired water bodies or watersheds to meet Section 303(d), and development of empirical relationships between causes or sources of impairment and biological responses. Creation of GIS databases for hydrography, hydrologically corrected digital elevation models, and hydrologic derivatives such as watershed boundaries and upstream–downstream topology of subcatchments would provide a consistent seamless nationwide framework for these designs. The elements of a watershed-based sample framework can be represented either as a continuous infinite set defined by points along a linear stream network, or as a discrete set of watershed polygons. Watershed-based designs can be developed with existing probabilistic survey methods, including the use of unequal probability weighting, stratification, and two-stage frames for sampling. Case studies for monitoring of Atlantic Coastal Plain streams, West Virginia wadeable streams, and coastal Oregon streams illustrate three different approaches for selecting sites for watershed-based survey designs.
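The unequal probability weighting mentioned above can be illustrated with a minimal sketch, assuming invented watershed names and weights: systematic probability-proportional-to-size (PPS) sampling is one standard way to realize unequal inclusion probabilities, though it is not the specific survey design used in the case studies.

```python
# Minimal sketch of unequal-probability site selection via systematic
# probability-proportional-to-size sampling: watersheds with larger
# weights (e.g. area or stream length) are more likely to be sampled.
# Unit names and weights are invented for illustration.
import random

def systematic_pps(units, weights, n, seed=0):
    """Draw n units with inclusion probability proportional to weight."""
    rng = random.Random(seed)
    total = sum(weights)
    step = total / n
    start = rng.uniform(0, step)       # random start, then fixed stride
    picks, cum, j = [], 0.0, 0
    for k in range(n):
        target = start + k * step
        while cum + weights[j] <= target:
            cum += weights[j]
            j += 1
        picks.append(units[j])
    return picks

watersheds = ["W1", "W2", "W3", "W4", "W5"]
areas = [50, 10, 25, 5, 10]            # hypothetical areas (km^2)
print(systematic_pps(watersheds, areas, n=2))
```

A production design would also handle stratification and two-stage frames; the sketch shows only the weighting idea.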
The Neuroscience Information Framework: A Data and Knowledge Environment for Neuroscience
Akil, Huda; Ascoli, Giorgio A.; Bowden, Douglas M.; Bug, William; Donohue, Duncan E.; Goldberg, David H.; Grafstein, Bernice; Grethe, Jeffrey S.; Gupta, Amarnath; Halavi, Maryam; Kennedy, David N.; Marenco, Luis; Martone, Maryann E.; Miller, Perry L.; Müller, Hans-Michael; Robert, Adrian; Shepherd, Gordon M.; Sternberg, Paul W.; Van Essen, David C.; Williams, Robert W.
2009-01-01
With support from the Institutes and Centers forming the NIH Blueprint for Neuroscience Research, we have designed and implemented a new initiative for integrating access to and use of Web-based neuroscience resources: the Neuroscience Information Framework. The Framework arises from the expressed need of the neuroscience community for neuroinformatic tools and resources to aid scientific inquiry, builds upon prior development of neuroinformatics by the Human Brain Project and others, and directly derives from the Society for Neuroscience’s Neuroscience Database Gateway. Partnered with the Society, its Neuroinformatics Committee, and volunteer consultant-collaborators, our multi-site consortium has developed: (1) a comprehensive, dynamic, inventory of Web-accessible neuroscience resources, (2) an extended and integrated terminology describing resources and contents, and (3) a framework accepting and aiding concept-based queries. Evolving instantiations of the Framework may be viewed at http://nif.nih.gov, http://neurogateway.org, and other sites as they come on line. PMID:18946742
Design document for the Surface Currents Data Base (SCDB) Management System (SCDBMS), version 1.0
NASA Technical Reports Server (NTRS)
Krisnnamagaru, Ramesh; Cesario, Cheryl; Foster, M. S.; Das, Vishnumohan
1994-01-01
The Surface Currents Database Management System (SCDBMS) provides access to the Surface Currents Data Base (SCDB), which is maintained by the Naval Oceanographic Office (NAVOCEANO). The SCDBMS incorporates database technology to provide seamless access to surface current data. The SCDBMS is an interactive software application with a graphical user interface (GUI) that supports user control of SCDBMS functional capabilities. The purpose of this document is to define and describe the structural framework and logical design of the software components/units which are integrated into the major computer software configuration item (CSCI) identified as the SCDBMS, Version 1.0. The preliminary design is based on functional specifications and requirements identified in the governing Statement of Work prepared by the Naval Oceanographic Office (NAVOCEANO) and distributed as a request for proposal by the National Aeronautics and Space Administration (NASA).
Global carbon dioxide emissions from inland waters
Raymond, Peter A.; Hartmann, Jens; Lauerwald, Ronny; Sobek, Sebastian; McDonald, Cory P.; Hoover, Mark; Butman, David; Striegl, Robert G.; Mayorga, Emilio; Humborg, Christoph; Kortelainen, Pirkko; Durr, Hans H.; Meybeck, Michel; Ciais, Philippe; Guth, Peter
2013-01-01
Carbon dioxide (CO2) transfer from inland waters to the atmosphere, known as CO2 evasion, is a component of the global carbon cycle. Global estimates of CO2 evasion have been hampered, however, by the lack of a framework for estimating the inland water surface area and gas transfer velocity and by the absence of a global CO2 database. Here we report regional variations in global inland water surface area, dissolved CO2 and gas transfer velocity. We obtain global CO2 evasion rates of 1.8 petagrams of carbon (Pg C) per year from streams and rivers and 0.32 Pg C yr−1 from lakes and reservoirs, where the upper and lower limits are respectively the 5th and 95th confidence interval percentiles. The resulting global evasion rate of 2.1 Pg C yr−1 is higher than previous estimates owing to a larger stream and river evasion rate. Our analysis predicts global hotspots in stream and river evasion, with about 70 per cent of the flux occurring over just 20 per cent of the land surface. The source of inland water CO2 is still not known with certainty, and new studies are needed to investigate the mechanisms controlling CO2 evasion globally.
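The evasion estimate combines the three quantities the framework assembles: gas transfer velocity, the water-air CO2 gradient, and water surface area. A purely illustrative back-of-the-envelope calculation with invented numbers (not the paper's values):

```python
# Back-of-the-envelope sketch of the flux calculation: evasion equals
# gas transfer velocity x CO2 concentration gradient x surface area.
# All numbers are illustrative, not values from the study.
k = 4.0          # gas transfer velocity (m/day)
dco2 = 0.5       # water-air CO2 concentration gradient (g C per m^3)
area = 1e6       # water surface area (m^2)

flux_g_per_day = k * dco2 * area          # 2e6 g C/day
flux_tC_per_yr = flux_g_per_day * 365 / 1e6
print(round(flux_tC_per_yr, 1))           # 730.0 tonnes C per year
```

Scaling such per-water-body estimates to the globe is what requires the regionalized surface area, dissolved CO2 and velocity data the paper reports.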
Yilmaz, Gamze; Yam, Kah Meng; Zhang, Chun; Fan, Hong Jin; Ho, Ghim Wei
2017-07-01
Direct adoption of metal-organic frameworks (MOFs) as electrode materials gives poor electrochemical performance owing to low electrical conductivity and poor chemical stability. In this study, we demonstrate self-templated pseudomorphic transformation of a MOF into a surface-chemistry-rich hollow framework that delivers highly reactive, durable, and universal electrochemically active energy conversion and storage functionalities. In situ pseudomorphic transformation of the MOF-derived hollow rhombic dodecahedron template and sulfurization of nickel cobalt layered double hydroxides (NiCo-LDHs) lead to the construction of an interlayered metal sulfide (NiCo-LDH/Co9S8) system. The embedment of metal sulfide species (Co9S8) at the LDH intergalleries offers optimal interfacing of the hybrid constituent elements and materials stability. The hybrid NiCo-LDH/Co9S8 system collectively presents an ideal porous structure, rich redox chemistry, and a highly electrically conductive matrix. This leads to a significant enhancement in its complementary electrocatalytic hydrogen evolution and supercapacitive energy storage properties. This work establishes the potential of MOF-derived scaffolds for designing a novel class of hybrid inorganic-organic functional materials for electrochemical applications and beyond. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Gawor, J.; Lane, P.
In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Rosen, Brian A; Hod, Idan
2018-04-25
Electrochemical CO2 reduction provides a clean and viable alternative for mitigating the environmental impact of global greenhouse gas emissions. To date, the simultaneous goals of CO2 reduction at high selectivity and high activity have yet to be achieved. Here, the importance of engineering both sides of the electrode-electrolyte interface as a rational strategy for achieving this milestone is highlighted. An emphasis is placed on research contributing to the design of solid electrodes based on metal-organic frameworks (MOFs) and electrolytes based on room-temperature ionic liquids (RTILs). Future research geared toward optimizing the electrode-electrolyte interface for efficient and selective CO2 reduction can build on understanding the structure of newly designed RTILs at the electrified interface, as well as structure-activity relationships in highly tunable MOF platforms. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Marshall, James; Hauze, Sean; Denman, Phil; Frazee, James; Laumakis, Mark
2017-01-01
San Diego State University's Course Design Institute (CDI) provides a semester-long opportunity for faculty to design and prepare to teach their first online courses. Guided by the Community of Inquiry (CoI) model and the California State University Quality Online Learning and Teaching (QOLT) principles, participants work together to produce, and…
Wang, Ziyun; Wang, Hai-Feng; Hu, P
2015-10-01
The current theory of catalyst activity in heterogeneous catalysis is mainly derived from the study of mono-phase catalysts, while most catalysts in real systems consist of multiple phases, the understanding of which falls far short of chemists' expectations. Density functional theory (DFT) and micro-kinetics simulations are used to investigate the activities of six mono-phase and nine bi-phase catalysts for CO hydrogenation, arguably the most typical reaction in heterogeneous catalysis. Excellent activities beyond the activity peak of traditional mono-phase volcano curves are found on some bi-phase surfaces. By analyzing these results, a new framework to understand the unexpected activities of bi-phase surfaces is proposed. Based on this framework, several principles for the design of multi-phase catalysts are suggested. The theoretical framework extends traditional catalysis theory to more complex systems.
Advanced Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Technical Exchange Meeting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis
2013-09-01
During FY13, the INL developed an advanced SMR PRA framework, described in the report Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Technical Framework Specification, INL/EXT-13-28974 (April 2013). In this framework, the following areas are considered: probabilistic models to provide information specific to advanced SMRs; representation of specific SMR design issues, such as co-located modules and passive safety features; use of modern, open-source and readily available analysis methods; internal and external events resulting in impacts to safety; all-hazards considerations; methods to support the identification of design vulnerabilities; and mechanistic and probabilistic data needs to support modeling and tools. In order to describe this framework more fully and obtain feedback on the proposed approaches, the INL hosted a technical exchange meeting during August 2013. This report describes the outcomes of that meeting.
A distributed cloud-based cyberinfrastructure framework for integrated bridge monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.
2017-04-01
This paper describes a cloud-based cyberinfrastructure framework for the management of the diverse data involved in bridge monitoring. Bridge monitoring involves various hardware systems, software tools and laborious activities, including, for example, structural health monitoring (SHM) sensor networks, engineering analysis programs and visual inspection. Very often, these monitoring systems, tools and activities are not coordinated, and the collected information is not shared. A well-designed integrated data management framework can support the effective use of the data and thereby enhance bridge management and maintenance operations. The cloud-based cyberinfrastructure framework presented herein is designed to manage not only sensor measurement data acquired from the SHM system, but also other relevant information, such as the bridge engineering model and traffic videos, in an integrated manner. For scalability and flexibility, cloud computing services and distributed database systems are employed. The information stored can be accessed through standard web interfaces. For demonstration, the cyberinfrastructure system is implemented for the monitoring of the bridges located along the I-275 Corridor in the state of Michigan.
NASA Astrophysics Data System (ADS)
Shanafield, Harold; Shamblin, Stephanie; Devarakonda, Ranjeet; McMurry, Ben; Walker Beaty, Tammy; Wilson, Bruce; Cook, Robert B.
2011-02-01
The FLUXNET global network of regional flux tower networks serves to coordinate the regional and global analysis of eddy covariance based CO2, water vapor and energy flux measurements taken at more than 500 sites in continuous long-term operation. The FLUXNET database presently contains information about the location, characteristics, and data availability of each of these sites. To facilitate the coordination and distribution of this information, we redesigned the underlying database and associated web site. We chose the PostgreSQL database as a platform based on its performance, stability and GIS extensions. PostgreSQL allows us to enhance our search and presentation capabilities, which will in turn provide increased functionality for users seeking to understand the FLUXNET data. The redesigned database will also significantly decrease the burden of managing such highly varied data. The website is being developed using the Drupal content management system, which provides many community-developed modules and a robust framework for custom feature development. In parallel, we are working with the regional networks to ensure that the information in the FLUXNET database is identical to that in the regional networks. Going forward, we also plan to develop an automated way to synchronize information with the regional networks.
Miyoshi, Newton Shydeo Brandão; Pinheiro, Daniel Guariz; Silva, Wilson Araújo; Felipe, Joaquim Cezar
2013-06-06
The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make it feasible, we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information. We have implemented an extension of Chado - the Clinical Module - to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: data level, to store the data; semantic level, to integrate and standardize the data by the use of ontologies; application level, to manage clinical databases, ontologies and the data integration process; and web interface level, to allow interaction between the user and the system. The clinical module was built based on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data as well as biomaterial data were obtained from patients with tumors of the head and neck.
We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications. Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different "omics" technologies with patients' clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments accomplished from a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
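The Entity-Attribute-Value layout on which the Clinical Module is based can be sketched in a few lines. The table and attribute names below are invented for illustration; Chado's actual schema differs.

```python
# Minimal sketch of an Entity-Attribute-Value (EAV) layout: one narrow
# table holds arbitrary clinical attributes per patient, so adding a new
# attribute requires no schema change. Names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE clinical_eav (
    entity_id INTEGER,      -- the patient
    attribute TEXT,         -- ideally an ontology term identifier
    value     TEXT)""")
rows = [
    (1, "diagnosis", "head and neck tumor"),
    (1, "age", "57"),
    (2, "diagnosis", "head and neck tumor"),
]
con.executemany("INSERT INTO clinical_eav VALUES (?, ?, ?)", rows)

# Query: all attributes recorded for patient 1.
for attr, val in con.execute(
        "SELECT attribute, value FROM clinical_eav WHERE entity_id = 1"):
    print(attr, "=", val)
```

The trade-off is typical of EAV: flexible storage at the cost of pivot-style queries when a conventional row-per-patient view is needed.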
NASA Technical Reports Server (NTRS)
Brenton, J. C.; Barbre, R. E.; Decker, R. K.; Orcutt, J. M.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) provides atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located with the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large datasets is ensuring that erroneous data are removed from databases and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC databases with inconsistencies in variables, development methodologies, and periods of record. The goal of this activity is to build on these previous efforts to develop a standardized set of QC procedures for building meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community on ways to improve, adapt and grow the QC database. Details of the QC procedures will be described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
NASA Astrophysics Data System (ADS)
Fan, Xin; Chen, Weiliang; Pang, Shuhua; Lu, Wei; Zhao, Yu; Liu, Zheng; Fang, Dong
2017-12-01
In the present work, asymmetric supercapacitors (ASCs) are assembled using a highly conductive N-doped nanocarbon (NDC) material derived from a polyaniline hydrogel as a cathode, and Ni foam covered with flower-like Co3O4 nanosheets (Co3O4-Ni) prepared from a zeolitic imidazolate metal-organic framework as a single precursor serves as a high gravimetric capacitance anode. At a current of 0.2 A g-1, the Co3O4-Ni electrode provides a gravimetric capacitance of 637.7 F g-1, and the NDC electrode provides a gravimetric capacitance of 359.6 F g-1. The ASC assembled with an optimal active material loading operates within a wide potential window of 0-1.1 V, and provides a high areal capacitance of 25.7 mF cm-2. The proposed ASC represents a promising strategy for designing high-performance supercapacitors.
Decision Manifold Approximation for Physics-Based Simulations
NASA Technical Reports Server (NTRS)
Wong, Jay Ming; Samareh, Jamshid A.
2016-01-01
With the recent surge of success in big-data-driven deep learning, many frameworks focus on architecture design and on utilizing massive databases. However, in some scenarios massive sets of data may be difficult, and in some cases infeasible, to acquire. In this paper we discuss a trajectory-based framework that quickly learns the underlying decision manifold of binary simulation classifications while judiciously selecting exploratory target states to minimize the number of required simulations. Furthermore, we draw particular attention to the simulation prediction application, idealized to the case where failures in simulations can be predicted and avoided, providing machine intelligence to novice analysts. We demonstrate this framework on various forms of simulations and discuss its efficacy.
A Proposed Collaborative Framework for Prefabricated Housing Construction Using RFID Technology
NASA Astrophysics Data System (ADS)
Charnwasununth, Phatsaphan; Yabuki, Nobuyoshi; Tongthong, Tanit
Despite the popularity of prefabricated housing construction in Thailand and many other countries, the lack of collaboration in current practice results in undesirably low productivity and a number of mistakes. This research proposes a framework to raise the level of collaboration, improving productivity and reducing the occurrence of mistakes at sites. In this framework, an RFID system bridges the gap between the real situation and the design, and the proposed system can cope with unexpected construction conditions by generating proper alternatives. The system is composed of PDAs, RFID readers, laptop PCs, and a desktop PC. Six main modules and a database system are implemented on the laptop PCs for recording actual site conditions, generating working alternatives, providing related information, and evaluating the work.
NASA Technical Reports Server (NTRS)
Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.
2018-01-01
This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft across a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of the grid points in the original model database, and the ASE model at any flight condition can then be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select as the next sample point the one carrying the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database, constructed directly from the physics-based tool, with a worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need for and utility of adaptive space sampling techniques for ASE model database compaction. The present framework is directly extensible to high-dimensional flight parameter spaces and can be used to guide ASE model development, model order reduction, robust control synthesis and novel vehicle design for flexible aircraft.
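The greedy sampling loop can be sketched with a stand-in surrogate. The toy below substitutes inverse-distance weighting for Kriging and a scalar function for the ASE model, purely to illustrate the select-worst-point, refit, repeat iteration; it is not the paper's implementation.

```python
# Simplified, stdlib-only sketch of greedy surrogate sampling: fit a
# cheap surrogate on the points sampled so far, find the grid point
# where it disagrees most with the benchmark model, sample there, and
# repeat until the worst error meets a tolerance.
def benchmark(x):               # stand-in for an expensive ASE model
    return (x - 0.3) ** 2

def idw(samples, x, p=2):       # inverse-distance-weighted surrogate
    num = den = 0.0
    for xs, ys in samples:
        if xs == x:
            return ys           # exact at sampled points
        w = 1.0 / abs(x - xs) ** p
        num += w * ys
        den += w
    return num / den

grid = [i / 20 for i in range(21)]
samples = [(0.0, benchmark(0.0)), (1.0, benchmark(1.0))]  # seed corners
tol = 1e-3
while True:
    errs = {x: abs(idw(samples, x) - benchmark(x)) for x in grid}
    worst = max(errs, key=errs.get)
    if errs[worst] < tol:
        break
    samples.append((worst, benchmark(worst)))   # run the "model" there
print(f"{len(samples)} of {len(grid)} grid points sampled")
```

The loop always terminates because a sampled grid point has zero surrogate error; the non-uniform spread of the selected points mirrors the adaptive-sampling behavior the paper reports.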
Producing approximate answers to database queries
NASA Technical Reports Server (NTRS)
Vrbsky, Susan V.; Liu, Jane W. S.
1993-01-01
We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
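The monotone-approximation idea can be illustrated with a toy set-valued query. The sketch below is not the APPROXIMATE system itself; it only shows how a certain lower bound and a possible upper bound tighten monotonically as more of a partitioned relation becomes available.

```python
# Toy monotone approximate query processing: a set-valued selection over
# a partitioned relation yields (certain, possible) bounds after each
# partition is read; the bounds converge to the exact answer when all
# data has been processed. Data and predicate are invented.
def approximate_select(partitions, predicate):
    """Yield (certain, possible) answer bounds as partitions arrive."""
    certain = set()
    unread = set().union(*partitions)    # rows not yet examined
    for part in partitions:
        for row in part:
            unread.discard(row)
            if predicate(row):
                certain.add(row)
        # certain <= exact answer <= certain | unread; both bounds
        # tighten monotonically as more of the relation is read
        yield set(certain), certain | set(unread)

parts = [[1, 4, 7], [2, 8], [3, 9]]
for lower, upper in approximate_select(parts, lambda r: r > 5):
    print(sorted(lower), "<= answer <=", sorted(upper))
```

This mirrors the monotonicity property described above: retrieving more data can only grow the certain set and shrink the possible set, and the exact answer emerges when all partitions are available.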
VISTILES: Coordinating and Combining Co-located Mobile Devices for Visual Data Exploration.
Langner, Ricardo; Horak, Tom; Dachselt, Raimund
2017-08-29
We present VISTILES, a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data. In contrast to desktop-based interfaces for information visualization, mobile devices offer the potential to provide a dynamic and user-defined interface supporting co-located collaborative data exploration with different individual workflows. As part of our framework, we contribute concepts that enable users to interact with coordinated & multiple views (CMV) that are distributed across several mobile devices. The major components of the framework are: (i) dynamic and flexible layouts for CMV focusing on the distribution of views and (ii) an interaction concept for smart adaptations and combinations of visualizations utilizing explicit side-by-side arrangements of devices. As a result, users can benefit from the possibility to combine devices and organize them in meaningful spatial layouts. Furthermore, we present a web-based prototype implementation as a specific instance of our concepts. This implementation provides a practical application case enabling users to explore a multivariate data collection. We also illustrate the design process including feedback from a preliminary user study, which informed the design of both the concepts and the final prototype.
The CoFactor database: organic cofactors in enzyme catalysis.
Fischer, Julia D; Holliday, Gemma L; Thornton, Janet M
2010-10-01
Organic enzyme cofactors are involved in many enzyme reactions. Therefore, the analysis of cofactors is crucial to gain a better understanding of enzyme catalysis. To aid this, we have created the CoFactor database. CoFactor provides a web interface to access hand-curated data extracted from the literature on organic enzyme cofactors in biocatalysis, as well as automatically collected information. CoFactor includes information on the conformational and solvent accessibility variation of the enzyme-bound cofactors, as well as mechanistic and structural information about the hosting enzymes. The database is publicly available and can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/CoFactor.
Engineering an Escherichia coli platform to synthesize designer biodiesels.
Wierzbicki, Michael; Niraula, Narayan; Yarrabothula, Akshitha; Layton, Donovan S; Trinh, Cong T
2016-04-20
Biodiesels, fatty acid esters (FAEs), can be synthesized by condensation of fatty acid acyl-CoAs and alcohols via a wax ester synthase in living cells. Biodiesels have advantageous characteristics over petrodiesels such as biodegradability, a higher flash point, and lower emissions. Controlling the fatty acid and alcohol moieties is critical to producing designer biodiesels with desirable physicochemical properties (e.g., high cetane number, low kinematic viscosity, high oxidative stability, and low cloud point). Here, we developed a flexible framework to engineer Escherichia coli cell factories to synthesize designer biodiesels directly from fermentable sugars. In this framework, we designed each FAE pathway as an exchangeable biodiesel production module consisting of acyl-CoA, alcohol, and wax ester synthase submodules. By inserting the FAE modules into an engineered E. coli modular chassis cell, we generated E. coli cell factories that produce targeted biodiesels (e.g., fatty acid ethyl (FAEE) and isobutyl (FAIbE) esters) with tunable and controllable short-chain alcohol moieties. The engineered E. coli chassis carrying the FAIbE production module produced 54 mg/L FAIbEs with high specificity, accounting for >90% of the total synthesized FAEs and a ∼4.7-fold increase in FAIbE production compared to the wild type. Fed-batch cultures further improved FAIbE production up to 165 mg/L. By mixing ethanol and isobutanol submodules, we demonstrated controllable production of mixed FAEEs and FAIbEs. We envision that the developed framework offers a flexible, alternative route to engineering designer biodiesels with tunable and controllable properties using biomass-derived fermentable sugars. Copyright © 2016 Elsevier B.V. All rights reserved.
Operational and environmental determinants of in-vehicle CO and PM2.5 exposure.
Alameddine, I; Abi Esber, L; Bou Zeid, E; Hatzopoulou, M; El-Fadel, M
2016-05-01
This study presents a modeling framework to quantify the complex roles that traffic, seasonality, vehicle characteristics, ventilation, meteorology, and ambient air quality play in dictating in-vehicle commuter exposure to CO and PM2.5. For this purpose, a comprehensive one-year monitoring program of 25 different variables was coupled with a multivariate regression analysis to develop models that predict in-vehicle CO and PM2.5 exposure using a database of 119 mobile tests and 120 fume leakage tests. The study aims to improve the understanding of in-cabin exposure, as well as interior-exterior pollutant exchange. Model results highlighted the strong correlation between out-vehicle and in-vehicle concentrations, with the effect of ventilation type only discerned for PM2.5 levels. Car type, road conditions, and meteorological conditions all played a significant role in modulating in-vehicle exposure. The CO and PM2.5 exposure models were able to explain 72% and 92% of the variability in measured concentrations, respectively. Both models exhibited robustness and no evidence of overfitting. Copyright © 2016 Elsevier B.V. All rights reserved.
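The general shape of such an exposure model can be sketched as an ordinary least-squares fit on synthetic data (all variable names, coefficients, and data below are invented for illustration; the paper's 25 monitored variables and fitted models are not reproduced):

```python
import numpy as np

# Hedged sketch of a multivariate linear exposure model of the kind the
# study describes: in-cabin concentration regressed on ambient levels plus
# ventilation and vehicle covariates. Data and coefficients are synthetic.
rng = np.random.default_rng(0)
n = 200
out_conc = rng.uniform(5, 50, n)    # out-vehicle concentration
recirc = rng.integers(0, 2, n)      # ventilation: 1 = recirculation on
old_car = rng.integers(0, 2, n)     # vehicle age class indicator

# Synthetic "truth": strong dependence on ambient levels, as the study found.
y = 2.0 + 0.8 * out_conc - 3.0 * recirc + 1.5 * old_car + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), out_conc, recirc, old_car])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # fitted coefficients

resid = y - X @ beta
r2 = 1 - resid.var() / y.var()                 # explained variability
```

With enough data the fit recovers the dominant out-vehicle coefficient and an R² analogous to the 72%/92% figures reported for the CO and PM2.5 models.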
Seoane, Beatriz; Coronas, Joaquin; Gascon, Ignacio; Benavides, Miren Etxeberria; Karvan, Oğuz; Caro, Jürgen; Kapteijn, Freek
2015-01-01
The field of metal–organic framework based mixed matrix membranes (M4s) is critically reviewed, with special emphasis on their application in CO2 capture during energy generation. After introducing the most relevant parameters affecting membrane performance, we define targets in terms of selectivity and productivity based on existing literature on process design for pre- and post-combustion CO2 capture. Subsequently, the state of the art in M4s is reviewed against these targets. Because final application of these membranes will only be possible if thin separation layers can be produced, the latest advances in the manufacture of M4 hollow fibers are discussed. Finally, the recent efforts in understanding the separation performance of these complex composite materials and future research directions are outlined. PMID:25692487
Yin, Dongming; Huang, Gang; Zhang, Feifei; Qin, Yuling; Na, Zhaolin; Wu, Yaoming; Wang, Limin
2016-01-22
Rational composite materials made from transition metal sulfides and reduced graphene oxide (rGO) are highly desirable for designing high-performance lithium-ion batteries (LIBs). Here, rGO-coated or -sandwiched CoSx composites are fabricated through facile thermal sulfurization of metal-organic framework/GO precursors. By scrupulously changing the proportion of Co²⁺ and organic ligands and the solvent of the reaction system, we can tune the form of GO as either a coating or a supporting layer. Upon testing as anode materials for LIBs, the as-prepared CoSx-rGO-CoSx and rGO@CoSx composites demonstrate excellent electrochemical performance, such as high initial specific capacities of 1248 and 1320 mA h g⁻¹, respectively, at a current density of 100 mA g⁻¹, stable cycling capacities of 670 and 613 mA h g⁻¹, respectively, after 100 charge/discharge cycles, and superior rate capabilities. The excellent electrical conductivity and porous structure of the CoSx/rGO composites promote Li⁺ transfer and mitigate internal stress during the charge/discharge process, thus significantly improving the electrochemical performance of the electrode materials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Tian, Dan; Chen, Qiang; Li, Yue; Zhang, Ying-Hui; Chang, Ze; Bu, Xian-He
2014-01-13
A mixed molecular building block (MBB) strategy for the synthesis of double-walled cage-based porous metal-organic frameworks (MOFs) is presented. By means of this method, two isostructural porous MOFs built from unprecedented double-walled metal-organic octahedra were obtained by introducing two size-matching C3-symmetric molecular building blocks with different rigidities. With their unique framework structures, these MOFs provide, to the best of our knowledge, the first examples of double-walled octahedron-based MOFs. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Static-dynamic hybrid communication scheduling and control co-design for networked control systems.
Wen, Shixi; Guo, Ge
2017-11-01
In this paper, a static-dynamic hybrid communication scheduling and control co-design is proposed for networked control systems (NCSs) to address the capacity limitation of the wireless communication network. Analytical most regular binary sequences (MRBSs) are used as the communication scheduling function for the NCSs. When communication conflicts arise in the MRBSs, a dynamic scheduling strategy reallocates the medium-access status of each plant online. Under such a static-dynamic hybrid scheduling policy, the plants in the NCSs are described as non-uniformly sampled control systems whose controllers have a group of gains and switch among them according to the sampling interval yielded by the binary sequence. A communication scheduling and control co-design framework is proposed for the NCSs to simultaneously decide the controller gains and the parameters used to generate the MRBSs. A numerical example and a realistic example are given to demonstrate the effectiveness of the proposed co-design method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
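A common floor-based construction of a rate-p/q most regular binary sequence, assumed here for illustration (the paper's analytical sequences may differ in detail), makes the conflict-detection step concrete:

```python
from math import floor

def mrbs(p, q, length):
    """Most regular binary sequence of rate p/q, using the floor-based
    construction s[k] = floor((k+1)p/q) - floor(kp/q). This is a common
    textbook definition assumed for illustration, not taken from the paper.
    """
    return [floor((k + 1) * p / q) - floor(k * p / q) for k in range(length)]

# Two plants sharing one channel with access rates 1/2 and 1/3. A conflict
# occurs whenever both static sequences request the medium in the same
# slot; these are the slots where a dynamic reallocation step would kick in.
s1 = mrbs(1, 2, 12)
s2 = mrbs(1, 3, 12)
conflicts = [k for k in range(12) if s1[k] == 1 and s2[k] == 1]
```

Each plant's ones are spread as evenly as possible over the schedule period, which is what makes the resulting sampled-data systems tractable with a finite group of controller gains.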
Persuasive Technology in Mobile Applications Promoting Physical Activity: a Systematic Review.
Matthews, John; Win, Khin Than; Oinas-Kukkonen, Harri; Freeman, Mark
2016-03-01
Persuasive technology in mobile applications can be used to influence the behaviour of users. A framework known as the Persuasive Systems Design model has been developed for designing and evaluating systems that influence the attitudes or behaviours of users. This paper reviews the current state of mobile applications for health behavioural change with an emphasis on applications that promote physical activity. The inbuilt persuasive features of mobile applications were evaluated using the Persuasive Systems Design model. A database search was conducted to identify relevant articles. Articles were then reviewed using the Persuasive Systems Design model as a framework for analysis. Primary task support, dialogue support, and social support were found to be moderately represented in the selected articles. However, system credibility support was found to have only low levels of representation as a persuasive systems design feature in mobile applications for supporting physical activity. To ensure that available mobile technology resources are best used to improve the wellbeing of people, it is important that the design principles that influence the effectiveness of persuasive technology be understood.
Gibon, Thomas; Wood, Richard; Arvesen, Anders; Bergesen, Joseph D; Suh, Sangwon; Hertwich, Edgar G
2015-09-15
Climate change mitigation demands large-scale technological change on a global level and, if successfully implemented, will significantly affect how products and services are produced and consumed. In order to anticipate the life cycle environmental impacts of products under climate mitigation scenarios, we present the modeling framework of an integrated hybrid life cycle assessment model covering nine world regions. Life cycle assessment databases and multiregional input-output tables are adapted using forecasted changes in technology and resources up to 2050 under a 2 °C scenario. We call the result of this modeling "technology hybridized environmental-economic model with integrated scenarios" (THEMIS). As a case study, we apply THEMIS in an integrated environmental assessment of concentrating solar power. Life-cycle greenhouse gas emissions for this plant range from 33 to 95 g CO2 eq./kWh across different world regions in 2010, falling to 30-87 g CO2 eq./kWh in 2050. Using regional life cycle data yields insightful results. More generally, these results also highlight the need for systematic life cycle frameworks that capture the actual consequences and feedback effects of large-scale policies in the long term.
NASA Astrophysics Data System (ADS)
Dervos, D. A.; Skourlas, C.; Laiho, M.
2015-02-01
"DBTech VET Teachers" is a Leonardo da Vinci Multilateral Transfer of Innovation project co-financed by the European Commission's Lifelong Learning Programme. The aim of the project is to renew the teaching of database technologies in VET (Vocational Education and Training) institutes on the basis of the current, real needs of the ICT industry in Europe. VET teachers are trained on the systems used in working life and taught to guide students toward learning by verifying. In this framework, a course module on SQL transactions was prepared and offered. In this paper we present and briefly discuss qualitative and quantitative data collected from its first pilot offering to an international audience in Greece during May-June 2013. The questionnaire and evaluation results, and the types of participants who attended the course offering, are presented. Conclusions are also drawn.
Kireeva, N; Baskin, I I; Gaspar, H A; Horvath, D; Marcou, G; Varnek, A
2012-04-01
Here, the utility of Generative Topographic Maps (GTM) for data visualization, structure-activity modeling, and database comparison is evaluated using subsets of the Database of Useful Decoys (DUD). Unlike other popular dimensionality reduction approaches such as Principal Component Analysis, Sammon Mapping, or Self-Organizing Maps, the great advantage of GTMs is that they provide data probability distribution functions (PDFs), both in the high-dimensional space defined by molecular descriptors and in the 2D latent space. PDFs for the molecules of different activity classes were successfully used to build classification models in the framework of the Bayesian approach. Because PDFs are represented by a mixture of Gaussian functions, the Bhattacharyya kernel has been proposed as a measure of the overlap of datasets, which leads to an elegant method for the global comparison of chemical libraries. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
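For distributions represented over the discrete GTM latent grid, the Bhattacharyya coefficient reduces to a one-line sum. The sketch below (with invented occupancy vectors, not DUD data) shows how it scores the overlap of two chemical libraries:

```python
import numpy as np

# Illustrative sketch: on a GTM, each library maps to a probability
# distribution over the 2D latent-grid nodes. The Bhattacharyya
# coefficient sum_i sqrt(p_i * q_i) measures their overlap: it equals 1
# for identical distributions and 0 for disjoint ones.

def bhattacharyya(p, q):
    p = np.asarray(p, float) / np.sum(p)   # normalize to probabilities
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sum(np.sqrt(p * q)))

lib_a = [0.1, 0.4, 0.4, 0.1]   # hypothetical latent-grid occupancies
lib_b = [0.1, 0.4, 0.4, 0.1]
lib_c = [0.0, 0.0, 0.2, 0.8]

same = bhattacharyya(lib_a, lib_b)   # identical libraries -> 1.0
diff = bhattacharyya(lib_a, lib_c)   # partial overlap -> below 1.0
```

This is what makes the kernel attractive for global library comparison: two chemical libraries are compared through their latent-space distributions in a single inner-product-like operation.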
Comparison of two drug safety signals in a pharmacovigilance data mining framework.
Tubert-Bitter, Pascale; Bégaud, Bernard; Ahmed, Ismaïl
2016-04-01
Since adverse drug reactions are a major public health concern, early detection of drug safety signals has become a top priority for regulatory agencies and the pharmaceutical industry. Quantitative methods for analyzing the spontaneous reporting material recorded in pharmacovigilance databases through data mining have been proposed in recent decades and are increasingly used to flag potential safety problems. While automated data mining is motivated by the usually huge size of pharmacovigilance databases, it does not systematically produce relevant alerts. Moreover, each detected signal requires appropriate assessment that may involve investigation of the whole therapeutic class. The goal of this article is to provide a methodology for comparing two detected signals. It is nested within the automated surveillance framework, as (1) no extra information is required and (2) no simple inference on the actual risks can be extrapolated from spontaneous reporting data. We designed our methodology on the basis of two classical methods used for automated signal detection: the Bayesian Gamma Poisson Shrinker and the frequentist Proportional Reporting Ratio. A simulation study was conducted to assess the performance of both proposed methods. The latter were used to compare cardiovascular signals for two HIV treatments from the French pharmacovigilance database. © The Author(s) 2012.
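Of the two classical methods, the frequentist Proportional Reporting Ratio has a simple closed form over the 2x2 table of spontaneous reports; a minimal sketch with hypothetical counts:

```python
# Sketch of the Proportional Reporting Ratio (PRR) used in automated
# signal detection. For a drug/event 2x2 table of spontaneous reports:
#   a: reports with the drug and the event    b: the drug, other events
#   c: other drugs with the event             d: other drugs, other events
# PRR = [a / (a + b)] / [c / (c + d)], with PRR >> 1 flagging a signal.
# All counts below are hypothetical, not from any pharmacovigilance database.

def prr(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

# Two drugs of the same therapeutic class, one adverse event:
signal_drug = prr(30, 970, 100, 98900)   # event far over-represented
other_drug = prr(5, 4995, 125, 94875)    # event at the background rate
```

Comparing such signals across a therapeutic class, as the article proposes, then amounts to contrasting these disproportionality statistics (or their Bayesian shrinkage counterparts) rather than unobservable true risks.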
Clinical research in a hospital--from the lone rider to teamwork.
Hannisdal, E
1996-01-01
Clinical research of a high international standard is very demanding and requires clinical data of high quality, software, hardware, and competence in research design and statistical treatment of data. Most busy clinicians have little time allocated for clinical research, which increases the need for a potent infrastructure. This paper describes how the Norwegian Radium Hospital, a specialized cancer hospital, has reorganized the clinical research process. This includes a new department, the Clinical Research Office, which provides the formal framework, a central Diagnosis Registry, clinical databases, and multicentre studies. The department assists about 120 users, mainly clinicians. Installation of a network software package with over 10 programs has strongly promoted internal standardization, reduced costs, and saved clinicians a great deal of time. The hospital is building up about 40 diagnosis-specific clinical databases with up to 200 variables registered. These databases are shared by the treatment groups and appear to be important tools for quality assurance. We conclude that the clinical research process benefits from a firm infrastructure facilitating teamwork through extensive use of modern information technology. We are now ready for the next phase, which is to work for a better external technical framework for cooperation with other institutions throughout the world.
Dynamically Reconfigurable Systolic Array Accelerators
NASA Technical Reports Server (NTRS)
Dasu, Aravind (Inventor); Barnes, Robert C. (Inventor)
2014-01-01
A polymorphic systolic array framework that works in conjunction with an embedded microprocessor on an FPGA and allows dynamic, complementary scaling of the acceleration levels of two algorithms active concurrently on the FPGA. Use is made of systolic arrays and hardware-software co-design to obtain an efficient multi-application acceleration system. The flexible and simple framework allows hosting of a broader range of algorithms and is extendable to more complex applications in the area of aerospace embedded systems.
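The core systolic idea can be illustrated with a toy cycle-by-cycle simulation of an output-stationary matrix-multiply array (a textbook illustration of the general technique, not the patented FPGA framework):

```python
import numpy as np

# Toy model of an output-stationary systolic array multiplying two n x n
# matrices. A values stream rightward, B values stream downward, and each
# processing element (PE) accumulates exactly one output entry.
def systolic_matmul(A, B):
    n = A.shape[0]
    acc = np.zeros((n, n))     # per-PE accumulator (the stationary output)
    a_reg = np.zeros((n, n))   # value held in each PE's horizontal register
    b_reg = np.zeros((n, n))   # value held in each PE's vertical register
    for cycle in range(3 * n - 2):
        # Shift: values move one PE right / one PE down each cycle.
        a_reg[:, 1:] = a_reg[:, :-1].copy()
        b_reg[1:, :] = b_reg[:-1, :].copy()
        # Inject skewed inputs at the left and top edges: row i of A and
        # column i of B are delayed by i cycles so operands meet on time.
        for i in range(n):
            k = cycle - i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
            b_reg[0, i] = B[k, i] if 0 <= k < n else 0.0
        acc += a_reg * b_reg   # every PE multiplies and accumulates
    return acc

A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0, 8.0).reshape(2, 2)
```

After 3n - 2 cycles every PE holds one entry of the product; scaling the array's size up or down is what trades area for throughput when two such accelerators share one FPGA.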
Li, Qing; Xu, Ping; Gao, Wei; Ma, Shuguo; Zhang, Guoqi; Cao, Ruiguo; Cho, Jaephil; Wang, Hsing-Lin; Wu, Gang
2014-03-05
Nitrogen-doped graphene/graphene-tube nanocomposites are prepared by a high-temperature approach using a newly designed cage-containing metal-organic framework (MOF) to template nitrogen/carbon (dicyandiamide) and iron precursors. The resulting N-Fe-MOF catalysts universally exhibit high oxygen-reduction activity in acidic, alkaline, and non-aqueous electrolytes and superior cathode performance in Li-O2 batteries. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing
NASA Technical Reports Server (NTRS)
Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.
2010-01-01
The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure the software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations.
A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
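The data-driven approach described above can be sketched as a configuration table consumed by a tiny sequencer. All mode names, triggers, and parameters below are invented stand-ins, not Orion data artifacts; the point is only that the sequence lives in data, so it can change without recompiling the software:

```python
# Hypothetical configuration table standing in for the database artifacts:
# each mode lists its successor, the trigger event for the automated
# transition, and the parameter reconfiguration applied in that mode.
sequence_config = {
    "COAST":        {"next": "DEORBIT_BURN", "trigger": "burn_time_reached",
                     "params": {"attitude_mode": "inertial"}},
    "DEORBIT_BURN": {"next": "ENTRY", "trigger": "delta_v_achieved",
                     "params": {"attitude_mode": "thrust_vector"}},
    "ENTRY":        {"next": None, "trigger": None,
                     "params": {"attitude_mode": "bank_angle"}},
}

def step(mode, event):
    """Advance the sequence only if `event` matches the configured trigger."""
    entry = sequence_config[mode]
    if entry["trigger"] is not None and event == entry["trigger"]:
        return entry["next"]
    return mode

mode = "COAST"
mode = step(mode, "burn_time_reached")   # automated transition
mode = step(mode, "delta_v_achieved")    # next automated transition
active_params = sequence_config[mode]["params"]
```

Editing the table (the role the desktop database tool plays for Orion) changes the sequencing behavior; the sequencer code itself never needs to be touched or recompiled.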
The Iranian National Geodata Revision Strategy and Realization Based on Geodatabase
NASA Astrophysics Data System (ADS)
Haeri, M.; Fasihi, A.; Ayazi, S. M.
2012-07-01
In recent years, the use of spatial databases for storing and managing spatial data has become a hot topic in the field of GIS. Accordingly, the National Cartographic Center of Iran (NCC) produces, from time to time, spatial data that is usually included in databases. One of NCC's major projects was designing the National Topographic Database (NTDB). NCC decided to create a National Topographic Database of the entire country based on 1:25000 coverage maps. The standard for the NTDB was published in 1994 and its database was created at the same time. In the NTDB, geometric data was stored in MicroStation design format (DGN), in which each feature has a link to its attribute data (stored in a Microsoft Access file). The NTDB files were produced in a sheet-wise mode and stored in a file-based style. Besides map compilation, revision of the existing maps has already started. The key problems for NCC are the revision strategy, the NTDB's file-based storage style, and operator challenges (NCC operators mostly prefer to edit and revise geometric data in CAD environments). A GeoDatabase solution for the national geodata, based on the NTDB map files and the operators' revision preferences, is introduced and released herein. The proposed solution extends the traditional methods to a seamless spatial database that can be revised in CAD and GIS environments simultaneously. The proposed system is the common data framework for creating a central data repository for spatial data storage and management.
Design of Stratified Functional Nanoporous Materials for CO2 Capture and Conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J. Karl; Ye, Jingyun
The objective of this project is to develop novel nanoporous materials for CO2 capture and conversion. The motivation for this work is that capture of CO2 from flue gas or the atmosphere, coupled with catalytic hydrogenation of CO2 into valuable chemicals and fuels, can reduce the net amount of CO2 in the atmosphere while providing liquid transportation fuels and other commodity chemicals. One approach to increasing the economic viability of carbon capture and conversion is to design a single material that can be used for both the capture and catalytic conversion of CO2, because such a material could increase efficiency through process intensification. We have used density functional theory (DFT) methods to design catalytic moieties that can be incorporated into various metal-organic framework (MOF) materials. We chose to work with MOFs because they are highly tailorable, can be functionalized, and have been shown to selectively adsorb CO2 over N2, which is a requirement for CO2 capture from flue gas. Moreover, the incorporation of molecular catalytic moieties into MOFs through covalent bonding produces a heterogeneous catalytic material with activities and selectivities close to those of homogeneous catalysts, but without the drawbacks associated with homogeneous catalysis.
Pang, Yujia; Li, Wenliang; Zhang, Jingping
2017-09-15
A novel type of porous organic framework based on Mg-porphyrin, with diamond-like topology, named POF-Mgs, is computationally designed, and the uptakes of CO2, H2, N2, and H2O in POF-Mgs are investigated by Grand canonical Monte Carlo simulations based on first-principles-derived force fields (FFs). The FFs, which describe the interactions between POF-Mgs and the gases, are fitted with the dispersion-corrected double-hybrid density functional B2PLYP-D3. The good agreement between the obtained FFs and the first-principles energy data confirms the reliability of the FFs. Furthermore, our simulations show that the presence of a small amount of H2O (≤ 0.01 kPa) does not much affect the adsorption of CO2, but a higher partial pressure of H2O (≥ 0.1 kPa) causes CO2 adsorption to decrease significantly. The good performance of POF-Mgs in the simulations motivates the experimental design of novel porous materials for gas adsorption and purification. © 2017 Wiley Periodicals, Inc.
A generic biogeochemical module for earth system models
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.-Y.; Leung, L. R.
2013-06-01
Physical and biogeochemical processes regulate soil carbon dynamics and CO2 flux to and from the atmosphere, influencing global climate change. Integration of these processes into earth system models (e.g., community land models, CLM), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and rewrite computer programs to incorporate new or updated processes as new knowledge is generated, (2) the computational cost of simulating biogeochemical processes in land models is prohibitive owing to large variations in the rates of those processes, and (3) various mathematical representations of biogeochemical processes exist to capture different aspects of the fundamental mechanisms, but systematic evaluation of the different representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module with a generic algorithm and a reaction database, so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database describes the flow of nutrients through the plants, litter, and soil of terrestrial ecosystems. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into the CLM model.
To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems.
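The "generic algorithm plus reaction database" idea can be sketched with a toy two-reaction carbon network (pool names, rate constants, and stoichiometries below are invented, not CLM values): the right-hand side of the ODE system is assembled automatically from database entries rather than hand-coded.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy "reaction database": each entry lists a first-order substrate pool,
# a rate constant, and a stoichiometry vector over all pools. Adding a new
# process means adding a row, not rewriting the model's ODEs.
pools = ["litter", "soil_C", "CO2"]
reactions = [
    # (substrate index, rate constant, stoichiometry per pool)
    (0, 0.10, [-1.0, 0.6, 0.4]),   # litter -> soil C + respired CO2
    (1, 0.01, [0.0, -1.0, 1.0]),   # soil C -> CO2
]

def rhs(t, y):
    """Generic right-hand side assembled from the reaction database."""
    dydt = np.zeros_like(y)
    for sub, k, stoich in reactions:
        rate = k * y[sub]                 # first-order kinetics assumed
        dydt += rate * np.array(stoich)
    return dydt

sol = solve_ivp(rhs, (0.0, 100.0), [100.0, 0.0, 0.0], rtol=1e-8)
total_final = sol.y[:, -1].sum()   # stoichiometries sum to zero, so
                                   # total carbon is conserved across pools
```

Because each stoichiometry vector sums to zero, mass conservation holds by construction, and swapping in an alternative mathematical representation of a process only changes database rows, which is exactly what enables the comparison studies the framework targets.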
A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review.
Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha
2017-04-01
Our objective was to systematically review the current literature on simulation in healthcare, including the structured steps applied in the emergency healthcare sector, and to propose a framework for simulation in the emergency department. For data collection, the PubMed and ACM databases were searched for the years 2003 to 2013. The inclusion criteria were English-language articles, available in full text, with the closest objectives, from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. The studies focused on the reduction of waiting time and patient stay, optimization of resource allocation, creation of crisis and maximum-demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of system operations and functions. Ten simulation steps were then derived from the relevant studies after expert evaluation. The 10-step approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as the software implementation of simulation problems.
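A minimal discrete-event model of an emergency department patient queue, in the spirit of the reviewed studies (a generic multi-server sketch with invented rates, not any specific reviewed model), shows how capacity scenarios can be compared:

```python
import heapq
import random

# Minimal discrete-event sketch of an ED: patients arrive at random, wait
# for one of `c` doctors (first come, first served), are treated, and
# leave. We track the mean waiting time, a typical target of ED simulation.
def simulate_ed(n_patients=2000, arrival_rate=1.0, service_rate=0.4,
                c=3, seed=1):
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)   # exponential interarrivals
        arrivals.append(t)
    free_at = [0.0] * c                      # time each doctor is next free
    heapq.heapify(free_at)
    total_wait = 0.0
    for arr in arrivals:
        doctor_free = heapq.heappop(free_at)   # earliest-available doctor
        start = max(arr, doctor_free)
        total_wait += start - arr
        heapq.heappush(free_at, start + rng.expovariate(service_rate))
    return total_wait / n_patients

wait_3 = simulate_ed(c=3)
wait_5 = simulate_ed(c=5)   # adding capacity should cut mean waiting time
```

Varying parameters such as `c` or `arrival_rate` reproduces the kinds of questions the reviewed studies pose: resource allocation, maximum-demand scenarios, and the location of overcrowding bottlenecks.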
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Lawrence
Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files as the backend. A web-service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to address the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to address the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
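The notion of keyword search spanning both "context" and "content" can be sketched over a schema-less XML document (a toy illustration of the concept, not NETMARK's Oracle-based implementation): a hit in an element's tag name is a context match, while a hit in its text is a content match.

```python
import xml.etree.ElementTree as ET

# Toy keyword search over a schema-less document. Each result records the
# node's path and whether the match was in the structure (context) or in
# the data (content).
def keyword_search(xml_text, keyword):
    root = ET.fromstring(xml_text)
    hits = []

    def walk(node, path):
        here = f"{path}/{node.tag}"
        if keyword.lower() in node.tag.lower():
            hits.append((here, "context"))        # match in the markup
        if node.text and keyword.lower() in node.text.lower():
            hits.append((here, "content"))        # match in the data
        for child in node:
            walk(child, here)

    walk(root, "")
    return hits

doc = """<report><title>Wind tunnel test</title>
<author>J. Smith</author><summary>Tunnel section B results.</summary></report>"""
hits = keyword_search(doc, "tunnel")
# Matches the text of <title> and <summary>: two content hits.
```

No schema is needed: arbitrary hierarchies are searched uniformly, which is the property the abstract highlights for XML and HTML documents.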
Remote sensing and GIS technology in the Global Land Ice Measurements from Space (GLIMS) Project
Raup, B.; Kääb, Andreas; Kargel, J.S.; Bishop, M.P.; Hamilton, G.; Lee, E.; Paul, F.; Rau, F.; Soltesz, D.; Khalsa, S.J.S.; Beedle, M.; Helm, C.
2007-01-01
Global Land Ice Measurements from Space (GLIMS) is an international consortium established to acquire satellite images of the world's glaciers, analyze them for glacier extent and changes, and assess these change data in terms of forcings. The consortium is organized into a system of Regional Centers, each of which is responsible for the glaciers in its region of expertise. Specialized needs for mapping glaciers in a distributed analysis environment require considerable work developing software tools: terrain classification emphasizing snow, ice, water, and admixtures of ice with rock debris; change detection and analysis; visualization of images and derived data; interpretation and archival of derived data; and analysis to ensure consistency of results from different Regional Centers. A global glacier database has been designed and implemented at the National Snow and Ice Data Center (Boulder, CO); its parameters have been expanded from those of the World Glacier Inventory (WGI), and the database has been structured to be compatible with (and to incorporate) WGI data. The project as a whole was originated, and has been coordinated by, the US Geological Survey (Flagstaff, AZ), which has also led the development of an interactive tool for automated analysis and manual editing of glacier images and derived data (GLIMSView). This article addresses remote sensing and Geographic Information Science techniques developed within the framework of GLIMS in order to fulfill the goals of this distributed project. Sample applications illustrating the developed techniques are also shown. © 2006 Elsevier Ltd. All rights reserved.
Farmer, Jane; Carlisle, Karen; Dickson-Swift, Virginia; Teasdale, Simon; Kenny, Amanda; Taylor, Judy; Croker, Felicity; Marini, Karen; Gussy, Mark
2018-01-31
Citizen participation in health service co-production is increasingly enacted. A reason for engaging community members is to co-design services that are locally appropriate and harness local assets. To date, much literature examines processes of involving participants, with little consideration of how innovative services are designed, how innovations emerge and develop, and whether they sustain or diffuse. This paper addresses this gap by examining co-designed initiatives through the lens of social innovation - a conceptualisation more attuned to analysing grassroots innovation than common health services research approaches, which consider top-down, technical innovations. This paper considers whether social innovation is a useful frame for examining co-designed services. Eighty-eight volunteer community-based participants from six rural Australian communities were engaged using the same, tested co-design framework for a 12-month design phase and then a 12-month implementation phase, in 24 workshops (2014-16). Mixed, qualitative data were collected and used to formulate five case studies of community co-designed innovations. A social innovation theory, derived from the literature, was applied as an analytical frame to examine the co-design cases at three stages: innovation growth, development, and sustainability/diffusion. Social innovation theory was found relevant in examining and understanding what occurred at each stage of innovation development. The innovations themselves were all adaptations of existing ideas. They emerged from local participants combining knowledge of the local context, their own experiences, and exemplars. External facilitation brought resources together. The project provided a protective niche in which pilot innovations developed, but they needed support from managers and/or policymakers to be implemented, and had to be compatible with existing health system practices. For innovations to move to sustainability/diffusion required political relationships.
Challenging existing practice without these was problematic. Social innovation provides a useful lens for understanding the grassroots innovation process implied in community participation in service co-design. It helps to show problems in co-design processes and highlights the need for strong partnerships and advocacy beyond the immediate community for new ideas to thrive. Regional commissioning organisations are intended to diffuse useful, co-designed service innovations. Efforts are required to develop an innovation system to realise the potential of community involvement in co-design.
ERIC Educational Resources Information Center
Lloyd-Strovas, Jenny D.; Arsuffi, Thomas L.
2016-01-01
We examined the diversity of environmental education (EE) in Texas, USA, by developing a framework to assess EE organizations and programs at a large scale: the Environmental Education Database of Organizations and Programs (EEDOP). This framework consisted of the following characteristics: organization/visitor demographics, pedagogy/curriculum,…
In silico design and screening of hypothetical MOF-74 analogs and their experimental synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witman, Matthew; Ling, Sanliang; Anderson, Samantha
2016-01-01
We present the in silico design of MOFs exhibiting 1-dimensional rod topologies by enumerating MOF-74-type analogs based on the PubChem Compounds database. We simulate the adsorption behavior of CO2 in the generated analogs and experimentally validate a novel MOF-74 analog, Mg2(olsalazine).
The Ruby UCSC API: accessing the UCSC genome database using Ruby.
Mishima, Hiroyuki; Aerts, Jan; Katayama, Toshiaki; Bonnal, Raoul J P; Yoshiura, Koh-ichiro
2012-09-21
The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. However, a simple application programming interface (API) in a scripting language aimed at the biologist was not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database, including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help are provided via the website at http://rubyucscapi.userecho.com/.
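The bin index mentioned above follows the standard UCSC binning scheme, which assigns each genomic interval to the smallest bin that fully contains it, so an interval query only needs to scan a handful of bins rather than a whole table. A minimal Python sketch of the standard five-level scheme (smallest bins span 128 kb) might look like this:

```python
def ucsc_bin(start, end):
    """Smallest standard UCSC bin fully containing [start, end) (0-based, half-open)."""
    bin_offsets = [512 + 64 + 8 + 1, 64 + 8 + 1, 8 + 1, 1, 0]
    start_bin = start >> 17          # smallest bins span 2**17 = 128 kb
    end_bin = (end - 1) >> 17
    for offset in bin_offsets:
        if start_bin == end_bin:
            return offset + start_bin
        start_bin >>= 3              # each level up is 8x coarser
        end_bin >>= 3
    raise ValueError("interval out of range for the standard binning scheme")

def overlapping_bins(start, end):
    """All bins whose features can overlap [start, end); usable in a SQL IN (...) clause."""
    bins = []
    start_bin, end_bin = start >> 17, (end - 1) >> 17
    for offset in [512 + 64 + 8 + 1, 64 + 8 + 1, 8 + 1, 1, 0]:
        bins.extend(range(offset + start_bin, offset + end_bin + 1))
        start_bin >>= 3
        end_bin >>= 3
    return bins
```

A range query then restricts rows with something like `WHERE bin IN (...) AND chromStart < end AND chromEnd > start`, which is what makes interval lookups on large annotation tables fast.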
The Ruby UCSC API: accessing the UCSC genome database using Ruby
2012-01-01
Background The University of California, Santa Cruz (UCSC) genome database is among the most used sources of genomic annotation in human and other organisms. The database offers an excellent web-based graphical user interface (the UCSC genome browser) and several means for programmatic queries. However, a simple application programming interface (API) in a scripting language aimed at the biologist was not yet available. Here, we present the Ruby UCSC API, a library to access the UCSC genome database using Ruby. Results The API is designed as a BioRuby plug-in and built on the ActiveRecord 3 framework for object-relational mapping, making writing SQL statements unnecessary. The current version of the API supports databases of all organisms in the UCSC genome database, including human, mammals, vertebrates, deuterostomes, insects, nematodes, and yeast. The API uses the bin index (if available) when querying for genomic intervals. The API also supports genomic sequence queries using locally downloaded *.2bit files that are not stored in the official MySQL database. The API is implemented in pure Ruby and is therefore available in different environments and with different Ruby interpreters (including JRuby). Conclusions Assisted by the straightforward object-oriented design of Ruby and ActiveRecord, the Ruby UCSC API will help biologists query the UCSC genome database programmatically. The API is available through the RubyGem system. Source code and documentation are available at https://github.com/misshie/bioruby-ucsc-api/ under the Ruby license. Feedback and help are provided via the website at http://rubyucscapi.userecho.com/. PMID:22994508
NASA Technical Reports Server (NTRS)
Brenton, James C.; Barbre, Robert E., Jr.; Decker, Ryan K.; Orcutt, John M.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large sets of data is ensuring that erroneous data are removed from databases and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC'd databases that have inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC procedures will be described. As the launch rate increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
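The EV44 procedures themselves are not given here, but a standardized QC pass of the kind described typically chains simple per-variable checks such as climatological range limits and step (spike) tests. A minimal sketch, with purely illustrative thresholds for a temperature series, might look like:

```python
def qc_flags(values, lo=-40.0, hi=55.0, max_step=5.0):
    """Flag each observation: 0 = pass, 1 = out of climatological range, 2 = spike.

    The limits here are hypothetical placeholders; a real QC suite would use
    per-site, per-variable thresholds and additional tests (persistence, etc.).
    """
    flags = []
    for i, v in enumerate(values):
        if not (lo <= v <= hi):
            flags.append(1)                              # range check failed
        elif i > 0 and abs(v - values[i - 1]) > max_step:
            flags.append(2)                              # implausible jump
        else:
            flags.append(0)
    return flags
```

Running every instrument's record through one shared set of such checks is what gives a database consistent variables, methodologies, and periods of record.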
Das, Narendra; Stampoulis, Dimitrios; Ines, Amor; Fisher, Joshua B.; Granger, Stephanie; Kawata, Jessie; Han, Eunjin; Behrangi, Ali
2017-01-01
The Regional Hydrologic Extremes Assessment System (RHEAS) is a prototype software framework for hydrologic modeling and data assimilation that automates the deployment of water resources nowcasting and forecasting applications. A spatially-enabled database is a key component of the software that can ingest a suite of satellite and model datasets while facilitating the interfacing with Geographic Information System (GIS) applications. The datasets ingested are obtained from numerous space-borne sensors and represent multiple components of the water cycle. The object-oriented design of the software allows for modularity and extensibility, showcased here with the coupling of the core hydrologic model with a crop growth model. RHEAS can exploit multi-threading to scale with increasing number of processors, while the database allows delivery of data products and associated uncertainty through a variety of GIS platforms. A set of three example implementations of RHEAS in the United States and Kenya are described to demonstrate the different features of the system in real-world applications. PMID:28545077
Andreadis, Konstantinos M; Das, Narendra; Stampoulis, Dimitrios; Ines, Amor; Fisher, Joshua B; Granger, Stephanie; Kawata, Jessie; Han, Eunjin; Behrangi, Ali
2017-01-01
The Regional Hydrologic Extremes Assessment System (RHEAS) is a prototype software framework for hydrologic modeling and data assimilation that automates the deployment of water resources nowcasting and forecasting applications. A spatially-enabled database is a key component of the software that can ingest a suite of satellite and model datasets while facilitating the interfacing with Geographic Information System (GIS) applications. The datasets ingested are obtained from numerous space-borne sensors and represent multiple components of the water cycle. The object-oriented design of the software allows for modularity and extensibility, showcased here with the coupling of the core hydrologic model with a crop growth model. RHEAS can exploit multi-threading to scale with increasing number of processors, while the database allows delivery of data products and associated uncertainty through a variety of GIS platforms. A set of three example implementations of RHEAS in the United States and Kenya are described to demonstrate the different features of the system in real-world applications.
PGSB/MIPS PlantsDB Database Framework for the Integration and Analysis of Plant Genome Data.
Spannagl, Manuel; Nussbaumer, Thomas; Bader, Kai; Gundlach, Heidrun; Mayer, Klaus F X
2017-01-01
Plant Genome and Systems Biology (PGSB), formerly Munich Institute for Protein Sequences (MIPS) PlantsDB, is a database framework for the integration and analysis of plant genome data, developed and maintained for more than a decade now. Major components of that framework are genome databases and analysis resources focusing on individual (reference) genomes providing flexible and intuitive access to data. Another main focus is the integration of genomes from both model and crop plants to form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer to explore conserved gene order (synteny). Data exchange and integrated search functionality with/over many plant genome databases is provided within the transPLANT project.
Uses of the Drupal CMS Collaborative Framework in the Woods Hole Scientific Community (Invited)
NASA Astrophysics Data System (ADS)
Maffei, A. R.; Chandler, C. L.; Work, T. T.; Shorthouse, D.; Furfey, J.; Miller, H.
2010-12-01
Organizations that comprise the Woods Hole scientific community (Woods Hole Oceanographic Institution, Marine Biological Laboratory, USGS Woods Hole Coastal and Marine Science Center, Woods Hole Research Center, NOAA NMFS Northeast Fisheries Science Center, SEA Education Association) have a long history of collaborative activity regarding computing, computer network and information technologies that support common, inter-disciplinary science needs. Over the past several years there has been growing interest in the use of the Drupal Content Management System (CMS), which plays a variety of roles in support of research projects resident at several of these organizations. Many of these projects are part of science programs that are national and international in scope. Here we survey the current uses of Drupal within the Woods Hole scientific community and examine the reasons it has been adopted. The promise of emerging semantic features in the Drupal framework is examined, and projections of how pre-existing Drupal-based websites might benefit are made. Closer examination of Drupal software design exposes it as more than simply a content management system. The flexibility of its architecture; the power of its taxonomy module; the care taken in nurturing the open-source developer community that surrounds it (including organized and often well-attended code sprints); the ability to bind emerging software technologies as Drupal modules; the careful selection process used in adopting core functionality; multi-site hosting and cross-site deployment of updates; and a recent trend towards development of use-case inspired Drupal distributions cast Drupal as a general-purpose application deployment framework. Recent work in the semantic arena casts Drupal as an emerging RDF framework as well.
Examples of roles played by Drupal-based websites within the Woods Hole scientific community that will be discussed include: science data metadata database, organization main website, biological taxonomy development, bibliographic database, physical media data archive inventory manager, disaster-response website development framework, science project task management, science conference planning, and spreadsheet-to-database converter.
2010-10-01
Requirements. Application Server: BEA WebLogic Express 9.2 or higher; Java v5; Apache Struts v2; Hibernate v2; C3PO; SQL*Net client / JDBC. Database Server: ...designed for the desktop; an HTML and JavaScript browser-based front end designed for mobile smartphones; a Java-based framework utilizing Apache... Technology Requirements: the recommended technologies are as follows: Java Application (provides the backend application).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu Lingguang; Gu Lina; Hu Gang
2009-03-15
A modular design method for designing and synthesizing microporous metal-organic frameworks (MOFs) with selective catalytic activity is described. MOFs with both nano-sized channels and potential catalytic activity can be obtained through self-assembly of a framework unit and a catalyst unit. By selecting hexaaquo metal complexes and the ligand BTC (BTC = 1,3,5-benzenetricarboxylate) as framework-building blocks and using the metal complex [M(phen)₂(H₂O)₂]²⁺ (phen = 1,10-phenanthroline) as a catalyst unit, a series of supramolecular MOFs 1-7 with three-dimensional nano-sized channels, i.e. [M¹(H₂O)₆]·[M²(phen)₂(H₂O)₂]₂·2(BTC)·xH₂O (M¹, M² = Co(II), Ni(II), Cu(II), Zn(II), or Mn(II); x = 22-24), were synthesized through self-assembly, and their structures were characterized by IR, elemental analysis, and single-crystal X-ray diffraction. These supramolecular microporous MOFs showed significant size and shape selectivity in the catalyzed oxidation of phenols, owing to the catalytic reactions taking place in the channels of the framework. The design strategy, synthesis, and self-assembly mechanism for the construction of these porous MOFs are discussed. Graphical abstract: A modular design strategy has been developed to synthesize microporous metal-organic frameworks with potential catalytic activity by self-assembly of the framework-building blocks and the catalyst unit.
Liu, Yixin; Zhou, Kai; Lei, Yu
2015-01-01
High temperature gas sensors have been highly demanded for combustion process optimization and toxic emissions control, which usually suffer from poor selectivity. In order to solve this selectivity issue and identify unknown reducing gas species (CO, CH 4 , and CH 8 ) and concentrations, a high temperature resistive sensor array data set was built in this study based on 5 reported sensors. As each sensor showed specific responses towards different types of reducing gas with certain concentrations, based on which calibration curves were fitted, providing benchmark sensor array response database, then Bayesian inference framework was utilized to process themore » sensor array data and build a sample selection program to simultaneously identify gas species and concentration, by formulating proper likelihood between input measured sensor array response pattern of an unknown gas and each sampled sensor array response pattern in benchmark database. This algorithm shows good robustness which can accurately identify gas species and predict gas concentration with a small error of less than 10% based on limited amount of experiment data. These features indicate that Bayesian probabilistic approach is a simple and efficient way to process sensor array data, which can significantly reduce the required computational overhead and training data.« less
MAGA, a new database of gas natural emissions: a collaborative web environment for collecting data.
NASA Astrophysics Data System (ADS)
Cardellini, Carlo; Chiodini, Giovanni; Frigeri, Alessandro; Bagnato, Emanuela; Frondini, Francesco; Aiuppa, Alessandro
2014-05-01
The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data, in order to characterize and quantify the phenomena at various scales. A new and detailed web database (MAGA: MApping GAs emissions) has been developed, and recently improved, to collect data on carbon degassing from volcanic and non-volcanic environments. The MAGA database allows researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and with the ingestion into the database of data from: i) a literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing, and fumaroles, both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nysiros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores, and ii) the revision and update of the Googas database on non-volcanic emissions of the Italian territory (Chiodini et al., 2008), in the framework of the Deep Earth Carbon Degassing (DECADE) research initiative of the Deep Carbon Observatory (DCO). For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature, and gas flux magnitudes. Gas sampling, analysis, and flux measurement methods are also reported, together with references and contacts to researchers expert on each site. In this phase, data can be accessed over the network from a web interface, and data-driven web services, where software clients can request data directly from the database, are planned to be implemented shortly.
This way, Geographical Information Systems (GIS) and virtual globes (e.g., Google Earth) could easily access the database, and data could be exchanged with other databases. At the moment the database includes: i) more than 1000 flux data points on volcanic plume degassing from the Etna and Stromboli volcanoes; ii) data from ~30 sites of diffuse soil degassing from the Neapolitan volcanoes, the Azores, the Canaries, Etna, Stromboli, and Vulcano Island, plus several data on fumarolic emissions (~7 sites) with CO2 fluxes; and iii) data from ~270 non-volcanic gas emission sites in Italy. We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility to archive locations and qualitative information for gas emissions/sites not yet investigated could stimulate the scientific community toward future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.
SM-TF: A structural database of small molecule-transcription factor complexes.
Xu, Xianjin; Ma, Zhiwei; Sun, Hongmin; Zou, Xiaoqin
2016-06-30
Transcription factors (TFs) are the proteins involved in the transcription process, ensuring the correct expression of specific genes. Numerous diseases arise from the dysfunction of specific TFs. In fact, over 30 TFs have been identified as therapeutic targets of about 9% of the approved drugs. In this study, we created a structural database of small molecule-transcription factor (SM-TF) complexes, available online at http://zoulab.dalton.missouri.edu/SM-TF. The 3D structures of the co-bound small molecule and the corresponding binding sites on TFs are provided in the database, serving as a valuable resource to assist structure-based drug design related to TFs. Currently, the SM-TF database contains 934 entries covering 176 TFs from a variety of species. The database is further classified into several subsets by species and organisms. The entries in the SM-TF database are linked to the UniProt database and other sequence-based TF databases. Furthermore, the druggable TFs from human and the corresponding approved drugs are linked to the DrugBank. © 2016 Wiley Periodicals, Inc.
The Material Supply Adjustment Process in RAMF-SM, Step 2
2016-06-01
The Risk Assessment and Mitigation Framework for Strategic Materials (RAMF-SM) is a suite of mathematical models and databases used to support the... and computes material shortfalls. Several mathematical models and dozens of databases, encompassing thousands of data items, support the...
NASA Astrophysics Data System (ADS)
Sorce, Salvatore; Malizia, Alessio; Jiang, Pingfei; Atherton, Mark; Harrison, David
2018-04-01
One of the main time- and money-consuming tasks in the design of industrial devices and parts is checking for possible patent infringements. Indeed, the great number of documents to be mined and the wide variety of technical language used to describe inventions are reasons why considerable amounts of time may be needed. On the other hand, the early detection of a possible patent conflict, in addition to reducing the risk of legal disputes, could stimulate designers' creativity to overcome similarities with overlapping patents. For this reason, many patent analysis systems exist, each with its own features and access modes. We have designed a visual interface providing intuitive access to such systems, freeing designers from needing specific knowledge of querying languages and providing them with visual clues. We tested the interface on a framework aimed at representing mechanical engineering patents; the framework is based on a semantic database and provides patent conflict analysis for early-stage designs. The interface supports visual query composition to obtain a list of potentially overlapping designs.
NASA Astrophysics Data System (ADS)
Kim, Jungrack; Kim, Younghwi; Park, Minseong
2016-10-01
At the present time, arguments continue regarding the migration speeds of Martian dune fields and their correlation with atmospheric circulation. However, precisely measuring the spatial translation of Martian dunes has succeeded only a very few times, for example in the Nili Patera study (Bridges et al. 2012) using change-detection algorithms and orbital imagery. Therefore, in this study, we developed a generic procedure to precisely measure the migration of dune fields with the recently introduced 25-cm resolution orbital imagery, specifically using a high-accuracy photogrammetric processor. The processor was designed to trace estimated dune migration, albeit slight, over the Martian surface by 1) the introduction of very high resolution ortho images and stereo analysis based on hierarchical geodetic control for better initial point settings; 2) positioning error removal throughout the sensor model refinement with a non-rigorous bundle block adjustment, which makes possible the co-alignment of all images in a time series; and 3) improved sub-pixel co-registration algorithms using optical flow with a refinement stage conducted on a pyramidal grid processor and a blunder classifier. Moreover, volumetric changes of Martian dunes were additionally traced by means of stereo analysis and photoclinometry. The established algorithms have been tested using high-resolution HiRISE time-series images over several Martian dune fields. Dune migrations were iteratively processed both spatially and volumetrically, and the results were integrated for comparison with the Martian climate model. Migrations over well-known crater dune fields appeared to be almost static over considerable temporal periods and were weakly correlated with wind directions estimated by the Mars Climate Database (Millour et al. 2015). As a result, a number of measurements over dune fields in the Mars Global Dune Database (Hayward et al. 2014), covering polar areas and mid-latitudes, will be demonstrated.
Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement Nr. 607379.
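The sub-pixel co-registration stage rests on estimating the translation between repeat images. As an illustration of the underlying idea only (the authors' pipeline adds optical flow, pyramidal refinement, and blunder classification), a basic phase-correlation shift estimator in Python/NumPy is:

```python
import numpy as np

def shift_estimate(ref, moved):
    """Integer-pixel translation of `moved` relative to `ref` via phase correlation.

    Real pipelines refine this peak to sub-pixel precision, e.g. by fitting
    a surface around the correlation maximum.
    """
    f_ref, f_mov = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = np.conj(f_ref) * f_mov
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross).real       # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Applied patch by patch to co-aligned ortho image pairs, such displacement estimates build the migration field that is then compared against climate-model wind directions.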
A framework for integration of scientific applications into the OpenTopography workflow
NASA Astrophysics Data System (ADS)
Nandigam, V.; Crosby, C.; Baru, C.
2012-12-01
The NSF-funded OpenTopography facility provides online access to Earth science-oriented high-resolution LIDAR topography data, online processing tools, and derivative products. The underlying cyberinfrastructure employs a multi-tier service-oriented architecture comprised of an infrastructure tier, a processing services tier, and an application tier. The infrastructure tier consists of storage and compute resources as well as supporting databases. The services tier consists of the set of processing routines, each deployed as a Web service. The applications tier provides client interfaces to the system (e.g., a portal). We propose a "pluggable" infrastructure design that will allow new scientific algorithms and processing routines, developed and maintained by the community, to be integrated into the OpenTopography system so that the wider earth science community can benefit from their availability. All core components in OpenTopography are available as Web services using a customized open-source Opal toolkit. The Opal toolkit provides mechanisms to manage and track job submissions with the help of a back-end database, and it allows monitoring of job and system status by providing charting tools. All core components in OpenTopography have been developed, maintained, and wrapped as Web services using Opal by OpenTopography developers. However, as the scientific community develops new processing and analysis approaches, this integration approach does not scale efficiently. Most new scientific applications have their own active development teams performing regular updates, maintenance, and other improvements. It would be optimal to have each application co-located where its developers can continue to actively work on it while still making it accessible within the OpenTopography workflow for processing capabilities. We will utilize a software framework for remote integration of these scientific applications into the OpenTopography system.
This will be accomplished by virtually extending the OpenTopography service over the various infrastructures running these scientific applications and processing routines. It involves packaging and distributing a customized instance of the Opal toolkit that will wrap the software application as an Opal-based web service and integrate it into the OpenTopography framework. We plan to make this as automated as possible. A structured specification of service inputs and outputs, along with metadata annotations encoded in XML, can be utilized to automate the generation of user interfaces, with appropriate tool tips and user help features, and the generation of other internal software. The OpenTopography Opal toolkit will also include the customizations that enable security authentication and authorization and the ability to write application usage and job statistics back to the OpenTopography databases. This usage information could then be reported to the original service providers and used for auditing and performance improvements. This pluggable framework will enable application developers to continue enhancing their applications while making the latest iteration available in a timely manner to the earth sciences community. It will also help us establish an overall framework that other scientific application providers will be able to use going forward.
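As a sketch of how a structured input specification can drive automatic UI generation, consider a hypothetical XML service description (not the actual Opal schema) parsed into form-field descriptors:

```python
import xml.etree.ElementTree as ET

# Hypothetical service-input specification; element and attribute names
# are illustrative, not taken from the Opal toolkit.
SPEC = """
<service name="dem_generator">
  <input id="resolution" type="float" label="Grid resolution (m)" default="1.0"/>
  <input id="algorithm" type="enum" label="Gridding algorithm" default="mean">
    <option>mean</option><option>idw</option>
  </input>
</service>
"""

def form_fields(spec_xml):
    """Derive UI form-field descriptors from a service input specification."""
    root = ET.fromstring(spec_xml)
    fields = []
    for inp in root.findall("input"):
        fields.append({
            "id": inp.get("id"),
            "type": inp.get("type"),
            "label": inp.get("label"),
            "default": inp.get("default"),
            "options": [o.text for o in inp.findall("option")],
        })
    return fields
```

A front end can render each descriptor as the matching widget (number box, drop-down, etc.), so adding a new wrapped application requires only shipping its specification, not hand-writing a form.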
Kamal, Noreen; Fels, Sidney
2013-01-01
Positive health behaviour is critical to preventing illness and managing chronic conditions. A user-centred methodology was employed to design an online social network to motivate health behaviour change. The methodology was augmented by utilizing the Appeal, Belonging, Commitment (ABC) Framework, which is based on theoretical models for health behaviour change and use of online social networks. The user-centred methodology included four phases: 1) an initial user inquiry on health behaviour and use of online social networks; 2) interview feedback on paper prototypes; 3) a laboratory study on a medium-fidelity prototype; and 4) a field study on the high-fidelity prototype. The points of inquiry through these phases were based on the ABC Framework. This yielded an online social network system that linked to external third-party databases and was deployed to users via an interactive website.
NASA Astrophysics Data System (ADS)
Hume, Anne; Berry, Amanda
2013-10-01
This paper reports findings from an ongoing study exploring how the Content Representation (CoRe) design can be used as a tool to help chemistry student teachers begin acquiring the professional knowledge required to become expert chemistry teachers. Phase 2 of the study, reported in this paper, investigated how collaboration with school-based mentors (associate teachers) on teaching practice (practicum) might impact on this process and student teachers' development of their pedagogical content knowledge (PCK). The collaboration involved identifying and discussing pedagogical issues related to a practicum-teaching topic using a student teacher's draft CoRe as a starting point and ongoing focus for the professional dialogue. Practicum offered an opportunity for aspects of student teachers' PCK, as embodied in their draft CoRes, to be explored and expanded upon in classroom programmes with the support and input of associate teachers. The findings were influenced by different contextual factors; however, the student teachers found their CoRes to be very useful frameworks for engaging in focussed professional dialogue with their teaching mentors. They valued the expertise, currency of knowledge and mentoring of their associates and reported positively about the contribution this support made to their PCK development via the CoRe design process and the transformation of the CoRe into classroom teaching.
Automated Database Mediation Using Ontological Metadata Mappings
Marenco, Luis; Wang, Rixin; Nadkarni, Prakash
2009-01-01
Objective To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies. PMID:19567801
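The granularity-mediation idea described above can be illustrated with a toy sketch (this is not the OntoMediator implementation; the concept names and rule table are invented for illustration): a query phrased with postcoordinated global-vocabulary terms is rewritten for a database that stores a single precoordinated concept.

```python
# Toy sketch of ontological granularity mediation (assumed behaviour, not the
# published OntoMediator code). All concept names below are hypothetical.

MAPPING_RULES = {
    # postcoordinated term pair (global vocabulary) -> local precoordinated concept
    frozenset({"inflammation", "appendix"}): "appendicitis",
}

def mediate(query_terms, target_is_precoordinated):
    """Rewrite a query for the granularity level of the target database."""
    terms = frozenset(query_terms)
    if target_is_precoordinated and terms in MAPPING_RULES:
        return {MAPPING_RULES[terms]}
    return set(terms)

print(mediate({"inflammation", "appendix"}, True))   # {'appendicitis'}
print(mediate({"inflammation", "appendix"}, False))  # terms pass through unchanged
```

In a real federation the rule table would be derived from the databases' extended metadata rather than hand-written.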
Evaluation of Conceptual Frameworks Applicable to the Study of Isolation Precautions Effectiveness
Crawford, Catherine; Shang, Jingjing
2015-01-01
Aims A discussion of conceptual frameworks applicable to the study of isolation precautions effectiveness, according to Fawcett and DeSanto-Madeya’s (2013) evaluation technique, and of their relative merits and drawbacks for this purpose. Background Isolation precautions are recommended to control infectious diseases with high morbidity and mortality, but their effectiveness is not established owing to numerous methodological challenges. These challenges, such as identifying empirical indicators and refining operational definitions, could be alleviated through use of an appropriate conceptual framework. Design Discussion paper. Data Sources In mid-April 2014, the primary author searched five electronic scientific literature databases for conceptual frameworks applicable to the study of isolation precautions, without limiting searches by publication date. Implications for Nursing By reviewing promising conceptual frameworks to support isolation precautions effectiveness research, this paper exemplifies the process of choosing an appropriate conceptual framework for empirical research. Hence, researchers may build on these analyses to improve the design of empirical research in multiple disciplines, which may lead to improved research and practice. Conclusion Three frameworks were reviewed: the epidemiologic triad of disease, Donabedian’s healthcare quality framework and the Quality Health Outcomes model. Each has been used in nursing research to evaluate health outcomes and contains concepts relevant to nursing domains. Which framework is most useful likely depends on whether the study question necessitates testing multiple interventions, concerns pathogen-specific characteristics and yields cross-sectional or longitudinal data. The Quality Health Outcomes model may be slightly preferred as it assumes reciprocal relationships, supports multi-level analysis and is sensitive to cultural inputs. PMID:26179813
NASA Astrophysics Data System (ADS)
Broderick, Scott R.; Santhanam, Ganesh Ram; Rajan, Krishna
2016-08-01
As the size of databases has significantly increased, whether through high throughput computation or through informatics-based modeling, the challenge of selecting the optimal material for specific design requirements has also arisen. Given the multiple, and often conflicting, design requirements, this selection process is not as trivial as sorting the database for a given property value. We suggest that the materials selection process should minimize selector bias, as well as take data uncertainty into account. For this reason, we discuss and apply decision theory for identifying chemical additions to Ni-base alloys. We demonstrate and compare results for both a computational array of chemistries and standard commercial superalloys. We demonstrate how we can use decision theory to select the best chemical additions for enhancing both property and processing, which would not otherwise be easily identifiable. This work is one of the first examples of introducing the mathematical framework of set theory and decision analysis into the domain of the materials selection process.
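The core point that selection under multiple conflicting requirements is not a simple sort can be sketched with a Pareto-front filter (a minimal illustration of multi-objective screening, not the authors' decision-theoretic method; the alloy names and property values are fabricated):

```python
# Minimal multi-objective screening sketch: keep only candidates that are
# Pareto-optimal instead of sorting the database on a single property.
# Candidate names and (property1, property2) scores are invented.

def dominates(a, b):
    """True if a is at least as good as b on every objective (higher is
    better here) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the names of candidates not dominated by any other."""
    return sorted(
        name for name, scores in candidates.items()
        if not any(dominates(other, scores)
                   for o_name, other in candidates.items() if o_name != name)
    )

alloys = {  # (creep resistance, oxidation resistance) -- fabricated numbers
    "A": (0.9, 0.2),
    "B": (0.5, 0.8),
    "C": (0.4, 0.4),  # dominated by both B and D
    "D": (0.7, 0.7),
}
print(pareto_front(alloys))  # ['A', 'B', 'D']
```

A full decision-theoretic treatment would additionally weight preferences and propagate data uncertainty, which this sketch omits.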
Xu, Bin; Yang, Daipeng; Shi, Zhongke; Pan, Yongping; Chen, Badong; Sun, Fuchun
2017-09-25
This paper investigates online recorded-data-based composite neural control of uncertain strict-feedback systems within the backstepping framework. In each step of the virtual control design, a neural network (NN) is employed for uncertainty approximation. In previous works, most designs aim directly at system stability, ignoring how the NN actually functions as an approximator. In this paper, to enhance the learning ability, a novel prediction error signal is constructed to provide additional correction information for the NN weight update using online recorded data. In this way, the neural approximation precision is greatly improved and convergence is faster. Furthermore, a sliding mode differentiator is employed to approximate the derivative of the virtual control signal, so the complex analysis of the backstepping design can be avoided. Closed-loop stability is rigorously established, and the boundedness of the tracking error is guaranteed. In simulation of hypersonic flight dynamics, the proposed approach exhibits better tracking performance.
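The composite idea of correcting approximator weights with both an instantaneous error and a prediction error over recorded data can be sketched on a toy regression problem (this is an illustration of the principle only, not the paper's control law; the gains, basis functions and target weights are invented):

```python
# Toy composite-learning sketch: update approximator weights with the usual
# instantaneous error plus a prediction error computed from online-recorded
# data. Gains and the "true" uncertainty are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # unknown "uncertainty" to approximate

def phi(x):
    """Feature vector of simple basis functions."""
    return np.array([x, x ** 2])

w = np.zeros(2)
memory = []  # online-recorded (features, observed output) pairs
for step in range(200):
    x = rng.uniform(-1.0, 1.0)
    y = true_w @ phi(x)              # "measured" uncertainty output
    e_now = y - w @ phi(x)           # instantaneous approximation error
    memory.append((phi(x), y))
    # prediction error over recorded data supplies extra correction information
    e_pred = sum((yk - w @ pk) * pk for pk, yk in memory[-20:])
    w = w + 0.05 * e_now * phi(x) + 0.01 * e_pred

print(np.round(w, 2))  # approaches the true weights [2.0, -1.0]
```

The recorded-data term plays the role of the paper's prediction error signal: it pulls the weights toward values consistent with all recent observations, not just the current sample, which is what speeds up convergence.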
Clarke, David; Jones, Fiona; Harris, Ruth; Robert, Glenn
2017-01-01
Background Co-production is defined as the voluntary or involuntary involvement of users in the design, management, delivery and/or evaluation of services. Interest in co-production as an intervention for improving healthcare quality is increasing. In the acute healthcare context, co-production is promoted as harnessing the knowledge of patients, carers and staff to make changes about which they care most. However, little is known regarding the impact of co-production on patient, staff or organisational outcomes in these settings. Aims To identify and appraise reported outcomes of co-production as an intervention to improve quality of services in acute healthcare settings. Design Rapid evidence synthesis. Data sources Medline, Cinahl, Web of Science, Embase, HMIC, Cochrane Database of Systematic Reviews, SCIE, Proquest Dissertation and Theses, EThOS, OpenGrey; CoDesign; The Design Journal; Design Issues. Study selection Studies reporting patient, staff or organisational outcomes associated with using co-production in an acute healthcare setting. Findings 712 titles and abstracts were screened; 24 papers underwent full-text review, and 11 papers were included in the evidence synthesis. One study was a feasibility randomised controlled trial, three were process evaluations and seven used descriptive qualitative approaches. Reported outcomes related to (a) the value of patient and staff involvement in co-production processes; (b) the generation of ideas for changes to processes, practices and clinical environments; and (c) tangible service changes and impacts on patient experiences. Only one study included cost analysis; none reported an economic evaluation. No studies assessed the sustainability of any changes made. Conclusions Despite increasing interest in and advocacy for co-production, there is a lack of rigorous evaluation in acute healthcare settings. 
Future studies should evaluate clinical and service outcomes as well as the cost-effectiveness of co-production relative to other forms of quality improvement. Potentially broader impacts on the values and behaviours of participants should also be considered. PMID:28701409
Monge, Aurélien; Arrault, Alban; Marot, Christophe; Morin-Allory, Luc
2006-08-01
Data for 3.8 million compounds from the structural databases of 32 providers were gathered and stored in a single chemical database. Duplicates were removed using the IUPAC International Chemical Identifier (InChI), after which 2.6 million compounds remain. Each database, and the final merged one, was studied in terms of uniqueness, diversity, frameworks, and 'drug-like' and 'lead-like' properties. This study also shows that there are more than 87 000 frameworks in the database. It contains 2.1 million 'drug-like' molecules, among which more than one million are 'lead-like'. This study was carried out using 'ScreeningAssistant', a software package dedicated to chemical database management and screening set generation. Compounds are stored in a MySQL database, and all operations on this database are carried out by Java code. Druglikeness and leadlikeness are estimated with in-house scores using functions that rate how well properties fall within desired ranges; uniqueness is assessed using the InChI code, and diversity using molecular frameworks and fingerprints. The software has been designed to facilitate updating of the database. 'ScreeningAssistant' is freely available under the GPL license.
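The InChI-based duplicate removal described above amounts to keying each structure on its identifier string. A minimal sketch (in practice the InChI would be computed from each structure with a cheminformatics toolkit; here the strings are supplied directly, and the vendor names are invented):

```python
# Sketch of duplicate removal keyed on the IUPAC InChI, keeping the first
# record seen for each identifier. Vendor names are hypothetical.

def deduplicate(records):
    """Return records with only the first occurrence of each InChI kept."""
    seen = set()
    unique = []
    for provider, inchi in records:
        if inchi not in seen:
            seen.add(inchi)
            unique.append((provider, inchi))
    return unique

catalog = [
    ("vendor_a", "InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H"),  # benzene
    ("vendor_b", "InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H"),  # duplicate of above
    ("vendor_b", "InChI=1S/CH4/h1H4"),                    # methane
]
print(len(deduplicate(catalog)))  # 2 unique structures remain
```

Because the InChI is canonical, the same structure drawn differently by two providers still collapses to one record, which is what makes it suitable for cross-vendor merging.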
DOT National Transportation Integrated Search
2007-01-01
An Internet-based, spatiotemporal Geotechnical Database Management System (GDBMS) Framework was implemented at the Virginia Department of Transportation (VDOT) in 2002 to manage geotechnical data using a distributed Geographical Information System (G...
Henderson, Jette; Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C
2018-05-04
Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. The objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET's phenotype representation with PheKnow-Cloud's by using PheKnow-Cloud's experimental setup. In PIVET's framework, we also introduce a statistical model trained on domain expert-verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. 
PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET's analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes. Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy. ©Jette Henderson, Junyuan Ke, Joyce C Ho, Joydeep Ghosh, Byron C Wallace. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.05.2018.
Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C
2018-01-01
Background Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. Objective The objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. Methods PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET’s phenotype representation with PheKnow-Cloud’s by using PheKnow-Cloud’s experimental setup. In PIVET’s framework, we also introduce a statistical model trained on domain expert–verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. 
Results PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET’s analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes. Conclusions Our study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy. PMID:29728351
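The co-occurrence analysis at the heart of this kind of evidence-set building can be illustrated with a toy sketch (this is not the PIVET pipeline; the corpus, terms and use of a lift score are illustrative assumptions):

```python
# Illustrative co-occurrence scoring over a toy corpus: each "article" is
# reduced to the set of phenotype terms it mentions, and pairs of terms are
# scored by lift, i.e. how much more often they co-occur than chance.

from itertools import combinations

corpus = [  # toy data; real systems index millions of articles
    {"hypertension", "diabetes", "obesity"},
    {"hypertension", "diabetes"},
    {"asthma", "obesity"},
    {"hypertension", "asthma"},
]

def lift(term_a, term_b, articles):
    """P(a, b) / (P(a) * P(b)) over the article collection."""
    n = len(articles)
    p_a = sum(term_a in art for art in articles) / n
    p_b = sum(term_b in art for art in articles) / n
    p_ab = sum(term_a in art and term_b in art for art in articles) / n
    return p_ab / (p_a * p_b) if p_a and p_b else 0.0

phenotype = {"hypertension", "diabetes"}
scores = {pair: lift(*pair, corpus)
          for pair in combinations(sorted(phenotype), 2)}
print(round(scores[("diabetes", "hypertension")], 2))  # 1.33: above-chance co-occurrence
```

A lift above 1 suggests the phenotype's items hang together in the literature, which is the kind of signal an evidence set aggregates; the Aho-Corasick refinement in PIVET concerns matching terms efficiently in raw text, which this set-based toy skips.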
HBLAST: Parallelised sequence similarity--A Hadoop MapReducable basic local alignment search tool.
O'Driscoll, Aisling; Belogrudov, Vladislav; Carroll, John; Kropp, Kai; Walsh, Paul; Ghazal, Peter; Sleator, Roy D
2015-04-01
The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost prohibitive High Performance Computing (HPC) to function. As such, parallelised solutions have been proposed but many exhibit scalability limitations and are incapable of effectively processing "Big Data" - the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of "divide and conquer" for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using "virtual partitioning". HBlast presents improved scalability over existing solutions and well balanced computational work load while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap memory constrained hardware has significant implications for in field clinical diagnostic testing; enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples. Copyright © 2015 Elsevier Inc. All rights reserved.
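The "virtual partitioning" of both databases and query sequences can be sketched as a task-planning step (assumed behaviour for illustration, not the published HBlast code; sequence and segment names are invented):

```python
# Sketch of two-dimensional partitioning for a MapReduce alignment job:
# cross every chunk of query sequences with every database segment, so
# neither the database nor the query set must fit in one worker's memory.

def chunk(items, size):
    """Split a list into fixed-size chunks (last chunk may be shorter)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def plan_map_tasks(queries, db_segments, queries_per_task):
    """One map task per (query chunk, database segment) pair."""
    return [(q_chunk, seg)
            for q_chunk in chunk(queries, queries_per_task)
            for seg in db_segments]

queries = [f"seq_{i}" for i in range(5)]      # hypothetical query IDs
segments = ["nt_part0", "nt_part1"]           # hypothetical database segments
tasks = plan_map_tasks(queries, segments, queries_per_task=2)
print(len(tasks))  # 3 query chunks x 2 segments = 6 map tasks
```

Tuning `queries_per_task` and the number of segments trades memory per worker against scheduling overhead, which is the balance the abstract describes between scalability and the cost of over-segmenting the database.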
Health-Terrain: Visualizing Large Scale Health Data
2014-12-01
systems can only be realized if the quality of emerging large medical databases can be characterized and the meaning of the data understood. For this... Designed and tested an evaluation procedure for health data visualization system. This visualization framework offers a real time and web-based solution... rule is shown in the table, with the quality measures of each rule including the support, confidence, Laplace, Gain, p-s, lift and Conviction. We
An HL7/CDA Framework for the Design and Deployment of Telemedicine Services
2001-10-25
schemes and prescription databases. Furthermore, interoperability with the Electronic Health Record (EHR) facilitates automatic retrieval of relevant... local EHR system or the integrated electronic health record (I-EHR) [9], which indexes all medical contacts of a patient in the regional network... suspected medical problem. Interoperability with middleware services of the HII and other data sources such as the local EHR system affects
Data Structures for Extreme Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahan, Simon
As computing problems of national importance grow, the government meets the increased demand by funding the development of ever larger systems. The overarching goal of the work supported in part by this grant is to increase the efficiency of programming and performing computations on these large computing systems. In past work, we have demonstrated that some of these computations, once thought to require expensive hardware designs and/or complex, special-purpose programming, may be executed efficiently on low-cost commodity cluster computing systems using a general-purpose "latency-tolerant" programming framework. One important application of the ideas underlying this framework is graph database technology supporting social network pattern matching, used by US intelligence agencies to more quickly identify potential terrorist threats. This database application has been spun out by the Pacific Northwest National Laboratory, a Department of Energy laboratory, into a commercial start-up, Trovares Inc. We explore an alternative application of the same underlying ideas to a well-studied challenge arising in engineering: solving unstructured sparse linear equations. Solving these equations is key to predicting the behavior of large electronic circuits before they are fabricated. Predicting that behavior ahead of fabrication means that designs can be optimized and errors corrected ahead of the expense of manufacture.
ERIC Educational Resources Information Center
Alkaher, Iris; Avissar, Ilana
2018-01-01
This study focuses on the impact of a sustainability leadership development program (SLDP) designed to develop staff members as leaders who encourage sustainability practices within institutions of higher education (IHE). Using the framework of community of practice (CoP), we explored the program's contribution by interviewing 16 staff members who…
ERIC Educational Resources Information Center
Chia, Noel Kok Hwee; Kee, Norman Kiak Nam
2014-01-01
In Singapore, the Special Education for Autism (SEA) calls for a more focused, systematically structured framework to cater to the needs of children with autism in schools. As autism is a syndrome with co-morbid subtypes and different degrees of severity, a universal design for both learning and living becomes necessary to meet all the various…
Food Composition Database Format and Structure: A User Focused Approach
Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine
2015-01-01
This study aimed to investigate the needs of Australian food composition database users regarding database format, and to relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. 24 dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources. Further exploration of data-sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered. PMID:26554836
NASA Astrophysics Data System (ADS)
Cardellini, C.; Chiodini, G.; Frigeri, A.; Bagnato, E.; Aiuppa, A.; McCormick, B.
2013-12-01
The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data, in order to characterize and quantify the phenomena at various spatial and temporal scales. Building on the Googas experience we are now extending its capability, particularly on the user side, by developing a new web environment for collecting and publishing data. We have started to create a new and detailed web database (MAGA: MApping GAs emissions) for deep carbon degassing in the Mediterranean area. This project is part of the Deep Earth Carbon Degassing (DECADE) research initiative, launched in 2012 by the Deep Carbon Observatory (DCO) to improve the global budget of endogenous carbon from volcanoes. The MAGA database is planned to complement and integrate the work in progress within DECADE on developing the CARD (Carbon Degassing) database. The MAGA database will allow researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and a complete survey of the literature on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles, both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores. For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitude. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers with expertise on the site. 
Data can be accessed over the network from a web interface or as a data-driven web service, where software clients can request data directly from the database. This way, Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) can easily access the database, and data can be exchanged with other databases. In detail, the database now includes: i) more than 1000 flux data points on volcanic plume degassing from Etna (4 summit craters and bulk degassing) and Stromboli, with time-averaged CO2 fluxes of ~18000 and 766 t/d, respectively; ii) data from ~30 sites of diffuse soil degassing from the Neapolitan volcanoes, Azores, Canary Islands, Etna, Stromboli and Vulcano Island, with a wide range of CO2 fluxes (from less than 1 to 1500 t/d); and iii) several data on fumarolic emissions (~7 sites) with CO2 fluxes up to 1340 t/day (i.e., Stromboli). When available, time series of compositional data have been archived in the database (e.g., for Campi Flegrei fumaroles). We believe the MAGA database is an important starting point for developing a large-scale, expandable database that aims to excite, inspire and encourage participation among researchers. In addition, the possibility of archiving location and qualitative information for gas emission sites not yet investigated could stimulate the scientific community toward future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.
Chong, Xinyuan; Kim, Ki-joong; Zhang, Yujing; ...
2017-06-06
In this letter, we present a nanophotonic device consisting of a plasmonic nanopatch array (NPA) with an integrated metal–organic framework (MOF) for enhanced infrared absorption gas sensing. By designing a gold NPA on a sapphire substrate, we are able to achieve an enhanced optical field that spatially overlaps with the MOF layer, which can adsorb carbon dioxide (CO2) with high capacity. Additionally, experimental results show that this hybrid plasmonic–MOF device can effectively increase the infrared absorption path of on-chip gas sensors by more than 1100-fold. Lastly, the demonstration of infrared absorption spectroscopy of CO2 using the hybrid plasmonic–MOF device proves a promising strategy for future on-chip gas sensing with ultra-compact size.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chong, Xinyuan; Kim, Ki-joong; Zhang, Yujing
In this letter, we present a nanophotonic device consisting of a plasmonic nanopatch array (NPA) with an integrated metal–organic framework (MOF) for enhanced infrared absorption gas sensing. By designing a gold NPA on a sapphire substrate, we are able to achieve an enhanced optical field that spatially overlaps with the MOF layer, which can adsorb carbon dioxide (CO2) with high capacity. Additionally, experimental results show that this hybrid plasmonic–MOF device can effectively increase the infrared absorption path of on-chip gas sensors by more than 1100-fold. Lastly, the demonstration of infrared absorption spectroscopy of CO2 using the hybrid plasmonic–MOF device proves a promising strategy for future on-chip gas sensing with ultra-compact size.
Alaerts, Luc; Séguin, Etienne; Poelman, Hilde; Thibault-Starzyk, Frédéric; Jacobs, Pierre A; De Vos, Dirk E
2006-09-25
An optimized procedure was designed for the preparation of the microporous metal-organic framework (MOF) [Cu3(btc)2] (BTC=benzene-1,3,5-tricarboxylate). The crystalline material was characterized by X-ray diffraction, optical microscopy, SEM, X-ray photoelectron spectroscopy, N2 sorption, thermogravimetry, and IR spectroscopy of adsorbed CO. CO adsorbs on a small number of Cu2O impurities, and particularly on the free CuII coordination sites in the framework. [Cu3(btc)2] is a highly selective Lewis acid catalyst for the isomerization of terpene derivatives, such as the rearrangement of alpha-pinene oxide to campholenic aldehyde and the cyclization of citronellal to isopulegol. By using the ethylene ketal of 2-bromopropiophenone as a test substrate, it was demonstrated that the active sites in [Cu3(btc)2] are hard Lewis acids. Catalyst stability, re-usability, and heterogeneity are critically assessed.
Experiment Management System for the SND Detector
NASA Astrophysics Data System (ADS)
Pugachev, K.
2017-10-01
We present a new experiment management system for the SND detector at the VEPP-2000 collider (Novosibirsk). An important part to report on is access to the experimental databases (configuration, conditions and metadata). The system is designed in a client-server architecture, with user interaction through a web interface. The server side includes several logical layers: user interface templates; template variable description and initialization; and implementation details. The templates are meant to require as little IT knowledge as possible. Experiment configuration, conditions and metadata are stored in a database. To implement the server side, Node.js, a modern JavaScript framework, has been chosen, and a new template engine has been designed. Part of the system has been put into production, including templates for viewing and editing the first-level trigger and equipment configurations, and for viewing experiment metadata and the experiment conditions data index.
Development of user-friendly and interactive data collection system for cerebral palsy.
Raharjo, I; Burns, T G; Venugopalan, J; Wang, M D
2016-02-01
Cerebral palsy (CP) is a permanent motor disorder that appears in early age and it requires multiple tests to assess the physical and mental capabilities of the patients. Current medical record data collection systems, e.g., EPIC, employed for CP are very general, difficult to navigate, and prone to errors. The data cannot easily be extracted which limits data analysis on this rich source of information. To overcome these limitations, we designed and prototyped a database with a graphical user interface geared towards clinical research specifically in CP. The platform with MySQL and Java framework is reliable, secure, and can be easily integrated with other programming languages for data analysis such as MATLAB. This database with GUI design is a promising tool for data collection and can be applied in many different fields aside from CP to infer useful information out of the vast amount of data being collected.
Development of user-friendly and interactive data collection system for cerebral palsy
Raharjo, I.; Burns, T. G.; Venugopalan, J.; Wang, M. D.
2016-01-01
Cerebral palsy (CP) is a permanent motor disorder that appears in early age and it requires multiple tests to assess the physical and mental capabilities of the patients. Current medical record data collection systems, e.g., EPIC, employed for CP are very general, difficult to navigate, and prone to errors. The data cannot easily be extracted which limits data analysis on this rich source of information. To overcome these limitations, we designed and prototyped a database with a graphical user interface geared towards clinical research specifically in CP. The platform with MySQL and Java framework is reliable, secure, and can be easily integrated with other programming languages for data analysis such as MATLAB. This database with GUI design is a promising tool for data collection and can be applied in many different fields aside from CP to infer useful information out of the vast amount of data being collected. PMID:28133638
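The advantage the abstract claims over free-text records, that structured storage makes extraction for analysis trivial, can be sketched with a typed assessment table (sqlite3 is used here as a stand-in for the MySQL backend the abstract mentions; the table and column names are invented for illustration):

```python
# Minimal sketch of a structured patient-assessment table; sqlite3 stands in
# for MySQL, and the schema is hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE assessment (
        patient_id  INTEGER NOT NULL,
        test_name   TEXT    NOT NULL,
        score       REAL,
        assessed_on TEXT
    )
""")
conn.execute("INSERT INTO assessment VALUES (1, 'motor_scale', 72.5, '2015-06-01')")
conn.execute("INSERT INTO assessment VALUES (1, 'motor_scale', 75.0, '2016-01-15')")
conn.commit()

# With a typed schema, pulling a patient's score history for analysis
# (e.g. export to MATLAB or pandas) is a one-line query.
rows = conn.execute(
    "SELECT score FROM assessment WHERE patient_id = 1 ORDER BY assessed_on"
).fetchall()
print([r[0] for r in rows])  # [72.5, 75.0]
```

Constraints such as `NOT NULL` are one way a schema reduces the entry errors the abstract attributes to general-purpose record systems.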
Design and construction of porous metal-organic frameworks based on flexible BPH pillars
NASA Astrophysics Data System (ADS)
Hao, Xiang-Rong; Yang, Guang-sheng; Shao, Kui-Zhan; Su, Zhong-Min; Yuan, Gang; Wang, Xin-Long
2013-02-01
Three metal-organic frameworks (MOFs), [Co2(BPDC)2(4-BPH)·3DMF]n (1), [Cd2(BPDC)2(4-BPH)2·2DMF]n (2) and [Ni2(BDC)2(3-BPH)2(H2O)·4DMF]n (3) (H2BPDC = biphenyl-4,4'-dicarboxylic acid, H2BDC = terephthalic acid, BPH = bis(pyridinylethylidene)hydrazine and DMF = N,N'-dimethylformamide), have been solvothermally synthesized based on the insertion of heterogeneous BPH pillars. Framework 1 has a "single-pillared" MOF-5-like motif with inner cage diameters of up to 18.6 Å. Framework 2 has a "double-pillared" MOF-5-like motif with cage diameters of 19.2 Å, while 3 has a "double-pillared" 8-connected framework with channel diameters of 11.0 Å. Powder X-ray diffraction (PXRD) shows that 3 is a dynamic porous framework.
Gonzalez, Miguel I; Mason, Jarad A; Bloch, Eric D; Teat, Simon J; Gagnon, Kevin J; Morrison, Gregory Y; Queen, Wendy L; Long, Jeffrey R
2017-06-01
The crystallographic characterization of framework-guest interactions in metal-organic frameworks allows the location of guest binding sites and provides meaningful information on the nature of these interactions, enabling the correlation of structure with adsorption behavior. Here, techniques developed for in situ single-crystal X-ray diffraction experiments on porous crystals have enabled the direct observation of CO, CH4, N2, O2, Ar, and P4 adsorption in Co2(dobdc) (dobdc4− = 2,5-dioxido-1,4-benzenedicarboxylate), a metal-organic framework bearing coordinatively unsaturated cobalt(II) sites. All these molecules exhibit such weak interactions with the high-spin cobalt(II) sites in the framework that no analogous molecular structures exist, demonstrating the utility of metal-organic frameworks as crystalline matrices for the isolation and structural determination of unstable species. Notably, the Co-CH4 and Co-Ar interactions observed in Co2(dobdc) represent, to the best of our knowledge, the first single-crystal structure determination of a metal-CH4 interaction and the first crystallographically characterized metal-Ar interaction. Analysis of low-pressure gas adsorption isotherms confirms that these gases exhibit mainly physisorptive interactions with the cobalt(II) sites in Co2(dobdc), with differential enthalpies of adsorption as weak as −17(1) kJ mol−1 (for Ar). Moreover, the structures of Co2(dobdc)·3.8N2, Co2(dobdc)·5.9O2, and Co2(dobdc)·2.0Ar reveal the location of secondary (N2, O2, and Ar) and tertiary (O2) binding sites in Co2(dobdc), while high-pressure CO2, CO, CH4, N2, and Ar adsorption isotherms show that these binding sites become more relevant at elevated pressures.
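Differential enthalpies of adsorption like the −17(1) kJ/mol quoted for Ar are typically extracted from isotherms measured at two or more temperatures via the Clausius-Clapeyron relation: at fixed loading, ΔH = −R·ln(p2/p1)/(1/T1 − 1/T2). The sketch below uses synthetic pressures and temperatures, not data from this paper, purely to show the arithmetic.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def isosteric_heat(p1, t1, p2, t2):
    """Clausius-Clapeyron estimate of the isosteric heat of adsorption (J/mol)
    from the pressures p1, p2 needed to reach the SAME loading at t1, t2 (K)."""
    return R * math.log(p2 / p1) / (1.0 / t1 - 1.0 / t2)

# Synthetic illustration: reaching a fixed loading takes 10 kPa at 273 K
# and 30 kPa at 298 K (weak, physisorptive binding).
qst = isosteric_heat(10.0, 273.0, 30.0, 298.0)
dH = -qst / 1000.0  # differential enthalpy of adsorption, kJ/mol (exothermic)
print(round(dH, 1))  # -29.7
```

A weaker interaction (e.g. Ar on an open metal site) shows a smaller pressure ratio between the two temperatures and hence a less negative ΔH.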
Tabassum, Hassina; Mahmood, Asif; Wang, Qingfei; Xia, Wei; Liang, Zibin; Qiu, Bin; Zhao, Ruo; Zou, Ruqiang
2017-01-01
To cater to the demands of electrochemical energy storage systems, the development of cost-effective, durable, and highly efficient electrode materials is desired. Here, a novel electrode material based on redox-active β-Co(OH)2 and a B, N co-doped graphene nanohybrid is presented for electrochemical supercapacitors, prepared by a facile metal-organic framework (MOF) route through pyrolysis and hydrothermal treatment. The Co(OH)2 is firmly stabilized by the dual protection of N-doped carbon polyhedra (CP) and B/N co-doped graphene (BCN) nanosheets. Interestingly, the porous carbon and BCN nanosheets greatly improve the charge storage, wettability, and redox activity of the electrodes. Thus the hybrid delivers a specific capacitance of 1263 F g−1 at a current density of 1 A g−1 with 90% capacitance retention over 5000 cycles. Furthermore, a new aqueous asymmetric supercapacitor (ASC) was also designed using the Co(OH)2@CP@BCN nanohybrid and BCN nanosheets as positive and negative electrodes, respectively, which leads to a high energy density of 20.25 Wh kg−1. This device also exhibits excellent rate capability, with an energy density of 15.55 Wh kg−1 at a power density of 9331 W kg−1, coupled with long-term stability up to 6000 cycles. PMID:28240224
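Device-level energy densities like the 20.25 Wh/kg above follow from E = ½CV² with C in F/g and the 3600 J-per-Wh conversion. The cell capacitance and voltage window below are hypothetical values chosen to reproduce the reported figure; the abstract does not state them.

```python
def energy_density_wh_per_kg(c_cell_f_per_g, v_max):
    """E = 1/2 C V^2, with C in F/g and V in volts; 1 Wh = 3600 J,
    so dividing by 3.6 converts J/g to Wh/kg."""
    return 0.5 * c_cell_f_per_g * v_max ** 2 / 3.6

# Hypothetical cell values (not stated in the abstract) that reproduce
# the reported 20.25 Wh/kg: 36.45 F/g at a 2.0 V window.
e = energy_density_wh_per_kg(36.45, 2.0)
print(round(e, 2))  # 20.25
```

The corresponding power density is simply energy density divided by discharge time, which is why high-rate (short-discharge) measurements trade energy for power.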
Assessing the impact of healthcare research: A systematic review of methodological frameworks.
Cruz Rivera, Samantha; Kyte, Derek G; Aiyegbusi, Olalekan Lee; Keeley, Thomas J; Calvert, Melanie J
2017-08-01
Increasingly, researchers need to demonstrate the impact of their research to their sponsors, funders, and fellow academics. However, the most appropriate way of measuring the impact of healthcare research is subject to debate. We aimed to identify the existing methodological frameworks used to measure healthcare research impact and to summarise the common themes and metrics in an impact matrix. Two independent investigators systematically searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), the Excerpta Medica Database (EMBASE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL+), the Health Management Information Consortium, and the Journal of Research Evaluation from inception until May 2017 for publications that presented a methodological framework for research impact. We then summarised the common concepts and themes across methodological frameworks and identified the metrics used to evaluate differing forms of impact. Twenty-four unique methodological frameworks were identified, addressing 5 broad categories of impact: (1) 'primary research-related impact', (2) 'influence on policy making', (3) 'health and health systems impact', (4) 'health-related and societal impact', and (5) 'broader economic impact'. These categories were subdivided into 16 common impact subgroups. Authors of the included publications proposed 80 different metrics aimed at measuring impact in these areas. The main limitation of the study was the potential exclusion of relevant articles, as a consequence of the poor indexing of the databases searched. The measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise research benefit, and to help minimise research waste. 
This review provides a collective summary of existing methodological frameworks for research impact, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research.
Elliot, Joshua; Sharma, Bhavna; Best, Neil; Glotter, Michael; Dunn, Jennifer B.; Foster, Ian; Miguez, Fernando; Mueller, Steffen; Wang, Michael
2014-01-01
We present a novel bottom-up approach to estimate biofuel-induced land-use change (LUC) and resulting CO2 emissions in the U.S. from 2010 to 2022, based on a consistent methodology across four essential components: land availability, land suitability, LUC decision-making, and induced CO2 emissions. Using high-resolution geospatial data and modeling, we construct probabilistic assessments of county-, state-, and national-level LUC and emissions for macroeconomic scenarios. We use the Cropland Data Layer and the Protected Areas Database to characterize the availability of land for biofuel crop cultivation, and the CERES-Maize and BioCro biophysical crop growth models to estimate the suitability (yield potential) of available lands for biofuel crops. For LUC decision-making, we use a county-level stochastic partial-equilibrium modeling framework and consider five scenarios involving annual ethanol production scaling to 15, 22, and 29 billion gallons (BG) in 2022, with corn providing feedstock for the first 15 BG and the remainder coming from one of two dedicated energy crops. Finally, we derive high-resolution above-ground carbon factors from the National Biomass and Carbon Dataset to estimate emissions from each LUC pathway. Based on these inputs, we obtain estimates for average total LUC emissions of 6.1, 2.2, 1.0, 2.2, and 2.4 gCO2e/MJ for the Corn-15 BG, Miscanthus × giganteus (MxG)-7 BG, switchgrass (SG)-7 BG, MxG-14 BG, and SG-14 BG scenarios, respectively.
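The probabilistic, county-level structure of such an assessment can be sketched as a small Monte Carlo loop: each county contributes available area, a per-hectare carbon stock, and a conversion probability, and each draw yields one realization of total LUC emissions. All numbers below are toy values, not the paper's data, and the model is far simpler than a stochastic partial-equilibrium framework.

```python
import random

random.seed(1)

# Toy county records: available area, above-ground carbon stock, and the
# probability that an available hectare converts under the scenario.
counties = [
    {"area_ha": 1000, "carbon_mg_per_ha": 5.0,  "p_convert": 0.10},  # cropland-like
    {"area_ha":  400, "carbon_mg_per_ha": 60.0, "p_convert": 0.02},  # forested
]

def sample_emissions(counties):
    """One Monte Carlo draw of total LUC emissions (Mg CO2e).
    The factor 44/12 converts mass of carbon to mass of CO2."""
    total = 0.0
    for c in counties:
        converted = sum(random.random() < c["p_convert"]
                        for _ in range(c["area_ha"]))
        total += converted * c["carbon_mg_per_ha"] * 44.0 / 12.0
    return total

draws = [sample_emissions(counties) for _ in range(200)]
mean = sum(draws) / len(draws)
print(mean > 0)  # True
```

Repeating the draw yields the emission distribution from which scenario averages (like the gCO2e/MJ figures above, after dividing by fuel energy) are reported.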
Reducing Risk in CO2 Sequestration: A Framework for Integrated Monitoring of Basin Scale Injection
NASA Astrophysics Data System (ADS)
Seto, C. J.; Haidari, A. S.; McRae, G. J.
2009-12-01
Geological sequestration of CO2 is an option for stabilization of atmospheric CO2 concentrations. The technical ability to safely store CO2 in the subsurface has been demonstrated through pilot projects and a long history of enhanced oil recovery and acid gas disposal operations. To address climate change, current injection operations must be scaled up by a factor of 100, raising issues of safety and security. Monitoring and verification is an essential component in ensuring safe operations and managing risk. Monitoring provides assurance that CO2 is securely stored in the subsurface and that the mechanisms governing transport and storage are well understood. It also provides an early warning mechanism for identifying anomalies in performance, and a means for intervention and remediation through the ability to locate the CO2. Through theoretical studies, bench-scale experiments, and pilot tests, a number of technologies have demonstrated their ability to monitor CO2 at the surface and in the subsurface. Because the focus of these studies has been to demonstrate feasibility, individual techniques have not been integrated to provide a more robust method for monitoring. Considering the large volumes required for injection, the size of the potential footprint, the length of time a project must be monitored, and uncertainty, operational considerations of cost and risk must balance safety and security. Integration of multiple monitoring techniques will reduce uncertainty in monitoring injected CO2, thereby reducing risk. We present a framework for risk management of large-scale injection through model-based monitoring network design. This framework is applied to monitoring CO2 in a synthetic reservoir where there is uncertainty in the underlying permeability field controlling fluid migration. Deformation and seismic data are used to track plume migration.
A modified Ensemble Kalman filter approach is used to estimate flow properties by jointly assimilating flow and geomechanical observations. Issues of risk, cost and uncertainty are considered.
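The analysis step at the heart of any Ensemble Kalman filter can be shown in a few lines. This is a minimal generic scalar sketch, not the authors' modified joint flow-geomechanics scheme: each ensemble member is nudged toward a perturbed observation by the gain K = P/(P + R), where P is the ensemble variance and R the observation error variance. All numbers are synthetic.

```python
import random

random.seed(0)

def enkf_update(ensemble, y_obs, obs_var):
    """Minimal scalar EnKF analysis step with an identity observation operator:
    K = P / (P + R); each member assimilates a perturbed observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # ensemble variance
    k = p / (p + obs_var)                                  # Kalman gain
    return [x + k * (y_obs + random.gauss(0, obs_var ** 0.5) - x)
            for x in ensemble]

# Toy example: the prior ensemble is far from the "true" value (10.0,
# e.g. a log-permeability); one accurate observation pulls it closer.
prior = [random.gauss(5.0, 1.0) for _ in range(100)]
post = enkf_update(prior, 10.0, 0.1)
post_mean = sum(post) / len(post)
print(abs(post_mean - 10.0) < abs(sum(prior) / len(prior) - 10.0))  # True
```

Jointly assimilating several data types (here, flow and deformation) amounts to stacking them into the observation vector and using the corresponding cross-covariances in the gain.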
A database de-identification framework to enable direct queries on medical data for secondary use.
Erdal, B S; Liu, J; Ding, J; Chen, J; Marsh, C B; Kamal, J; Clymer, B D
2012-01-01
To qualify the use of patient clinical records as non-human-subject data for research purposes, electronic medical record data must be de-identified so there is minimal risk of protected health information exposure. This study demonstrated a robust framework for structured data de-identification that can be applied to any relational data source that needs to be de-identified. Using a real-world clinical data warehouse, a pilot implementation covering a limited set of subject areas was used to demonstrate and evaluate this new de-identification process. Query results and performance were compared between the source and target systems to validate data accuracy and usability. The combination of hashing, pseudonyms, and a session-dependent randomizer provides a rigorous de-identification framework that guards against (1) source identifier exposure; (2) internal data analysts manually linking to source identifiers; and (3) identifier cross-linking among different researchers or across multiple query sessions by the same researcher. In addition, a query rejection option is provided to refuse queries resulting in fewer than preset numbers of subjects and total records, to prevent users from accidentally identifying subjects due to low volumes of data. This framework does not prevent subject re-identification based on prior knowledge and sequences of events. Also, it does not deal with de-identification of medical free text, although text de-identification using natural language processing can be included thanks to its modular design. We demonstrated a framework resulting in HIPAA-compliant databases that can be directly queried by researchers. This technique can be augmented to facilitate inter-institutional research data sharing through existing middleware such as caGrid.
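The layered approach described (keyed hashing for stable pseudonyms, a session-dependent randomizer to block cross-session linkage, and low-count query rejection) can be sketched as follows. This is an illustrative simplification: key management, the pseudonym tables, and the real schema are omitted, and all names are hypothetical.

```python
import hashlib
import hmac
import os

SITE_KEY = b"long-term-secret"   # guards source identifiers (keyed hash)
SESSION_SALT = os.urandom(16)    # fresh salt per query session

def stable_pseudonym(mrn: str) -> str:
    """Keyed hash: the same MRN always maps to the same pseudonym inside
    the warehouse, but reversing it requires the site key."""
    return hmac.new(SITE_KEY, mrn.encode(), hashlib.sha256).hexdigest()[:16]

def session_id(mrn: str) -> str:
    """Session-dependent randomizer: the same patient gets a DIFFERENT
    identifier in each session, blocking cross-session linkage."""
    return hashlib.sha256(SESSION_SALT
                          + stable_pseudonym(mrn).encode()).hexdigest()[:16]

def run_query(rows, min_subjects=5):
    """Query-rejection option: refuse result sets below a preset subject count."""
    subjects = {session_id(r["mrn"]) for r in rows}
    if len(subjects) < min_subjects:
        raise ValueError("query rejected: too few subjects")
    return [{"sid": session_id(r["mrn"]), "dx": r["dx"]} for r in rows]

rows = [{"mrn": str(i), "dx": "I10"} for i in range(6)]
out = run_query(rows)
print(len(out), all("mrn" not in r for r in out))  # 6 True
```

Note that, as the abstract itself warns, none of this defeats re-identification from prior knowledge; it only removes direct identifiers and linkage paths.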
Bibliography on CO2 Effects on Vegetation and Ecosystems: 1990-1999 Literature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Michael H.; Curtis, Peter S.; Institute for Scientific Information
This database provides complete bibliographic citations (plus abstracts and keywords, when available) for more than 2700 references published between 1990 and 1999 on the direct effects of elevated atmospheric concentrations of carbon dioxide (CO2) on vegetation, ecosystems, their components and interactions. This bibliography is an update to Direct Effects of Atmospheric CO2 Enrichment on Plants and Ecosystems: An Updated Bibliographic Data Base (ORNL/CDIAC-70), edited by Boyd R. Strain and Jennifer D. Cure, which covered literature from 1980 to 1994. This bibliography was developed to support the Carbon Dioxide Meta-Analysis Project (CO2MAP) at The Ohio State University, but was designed to be useful for a wide variety of purposes related to the effects of elevated CO2 on vegetation and ecosystems.
A Framework for Preliminary Design of Aircraft Structures Based on Process Information. Part 1
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
1998-01-01
This report discusses the general framework and development of a computational tool for preliminary design of aircraft structures based on process information. The described methodology is suitable for multidisciplinary design optimization (MDO) activities associated with integrated product and process development (IPPD). The framework consists of three parts: (1) product and process definitions; (2) engineering synthesis; and (3) optimization. The product and process definitions are part of the input information provided by the design team. The backbone of the system is its ability to analyze a given structural design for performance as well as manufacturability and cost assessment. The system uses a database on material systems and manufacturing processes. Based on the identified set of design variables and an objective function, the system is capable of performing optimization subject to manufacturability, cost, and performance constraints. The accuracy of the manufacturability measures and cost models discussed here depends largely on the available data on specific methods of manufacture and assembly and associated labor requirements. As such, our focus in this research has been on the methodology itself and not so much on its accurate implementation in an industrial setting. A three-tier approach is presented for an IPPD-MDO based design of aircraft structures. The variable-complexity cost estimation methodology and an approach for integrating manufacturing cost assessment into the design process are also discussed. This report is presented in two parts. In the first part, the design methodology is presented, and the computational design tool is described. In the second part, a prototype model of the preliminary design Tool for Aircraft Structures based on Process Information (TASPI) is described. Part two also contains an example problem that applies the methodology described here for evaluation of six different design concepts for a wing spar.
Fine-grained Database Field Search Using Attribute-Based Encryption for E-Healthcare Clouds.
Guo, Cheng; Zhuang, Ruhan; Jie, Yingmo; Ren, Yizhi; Wu, Ting; Choo, Kim-Kwang Raymond
2016-11-01
An effectively designed e-healthcare system can significantly enhance the quality of access and experience of healthcare users, including facilitating medical and healthcare providers in ensuring a smooth delivery of services. Ensuring the security of patients' electronic health records (EHRs) in the e-healthcare system is an active research area. EHRs may be outsourced to a third party, such as a community healthcare cloud service provider, for storage due to cost-saving measures. Generally, encrypting the EHRs when they are stored in the system (i.e. data-at-rest) or prior to outsourcing the data is used to ensure data confidentiality. Searchable encryption (SE) is a promising technique that can ensure the protection of private information without compromising on performance. In this paper, we propose a novel framework for controlling access to EHRs stored in semi-trusted cloud servers (e.g. a private cloud or a community cloud). To achieve fine-grained access control for EHRs, we leverage the ciphertext-policy attribute-based encryption (CP-ABE) technique to encrypt tables published by hospitals, including patients' EHRs, and the table is stored in the database with the primary key being the patient's unique identity. Our framework can enable different users with different privileges to search on different database fields. Differing from previous attempts to secure the outsourcing of data, we emphasize control of the searches of the fields within the database. We demonstrate the utility of the scheme by evaluating it using datasets from the University of California, Irvine.
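CP-ABE itself requires pairing-based cryptography, but the access-policy side of "different privileges search different fields" can be illustrated without it: each field carries an attribute policy, and a user's attribute set either satisfies it or not. The sketch below is a non-cryptographic illustration only; all attribute and field names are hypothetical, and a real CP-ABE deployment enforces the policy inside the ciphertext rather than in application code.

```python
def satisfies(policy, attrs):
    """Evaluate a tiny AND/OR policy tree against a user's attribute set.
    A policy is (op, clauses), where each clause is an attribute string
    or a nested policy."""
    op, clauses = policy
    hits = [(c in attrs) if isinstance(c, str) else satisfies(c, attrs)
            for c in clauses]
    return all(hits) if op == "AND" else any(hits)

# Hypothetical per-field access policies, mimicking CP-ABE ciphertext policies.
FIELD_POLICIES = {
    "diagnosis": ("OR", ["physician", ("AND", ["nurse", "ward-3"])]),
    "billing":   ("AND", ["admin"]),
}

def searchable_fields(attrs):
    """Fields this attribute set is allowed to search."""
    return [f for f, pol in FIELD_POLICIES.items() if satisfies(pol, attrs)]

print(searchable_fields({"nurse", "ward-3"}))  # ['diagnosis']
```

In the paper's setting, a user whose attributes fail a field's policy simply cannot decrypt, and therefore cannot search, that field.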
NASA Technical Reports Server (NTRS)
Brenton, James C.; Barbre, Robert E.; Orcutt, John M.; Decker, Ryan K.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located with the United States Air Force's Eastern Range (ER) at Cape Canaveral Air Force Station. The ER is one of the most heavily instrumented sites in the United States, measuring various atmospheric parameters on a continuous basis. An inherent challenge with the large databases that EV44 receives from the ER is ensuring that erroneous data are removed, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC databases that have inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build flags within the meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC checks are described. The flagged data points will be plotted in a graphical user interface (GUI) as part of a manual confirmation that the flagged data do indeed need to be removed from the archive. As the rate of launches increases with additional launch vehicle programs, more emphasis is being placed on continually updating and checking weather databases for data quality before use in launch vehicle design and certification analyses.
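Two of the most common automated QC checks for meteorological time series, a physical-range check and a step (spike) check, can be sketched as a flagging pass. The thresholds and flag codes below are illustrative assumptions, not EV44's actual limits or flag scheme.

```python
# Illustrative flag codes; a real QC database would document these formally.
FLAGS = {"ok": 0, "range": 1, "spike": 2}

def qc_flag(series, lo, hi, max_step):
    """Flag each observation: first against physical limits [lo, hi],
    then against an unphysical jump relative to the last ACCEPTED value.
    Rejected values do not update the comparison baseline."""
    flags, prev = [], None
    for v in series:
        if not (lo <= v <= hi):
            flags.append(FLAGS["range"])   # impossible value; keep prev
        elif prev is not None and abs(v - prev) > max_step:
            flags.append(FLAGS["spike"])   # isolated jump; keep prev
        else:
            flags.append(FLAGS["ok"])
            prev = v
    return flags

# Toy surface-temperature trace (deg C): one impossible value, one spike.
temps = [25.1, 25.3, 99.0, 25.4, 31.0, 25.5]
flags = qc_flag(temps, lo=-10.0, hi=50.0, max_step=3.0)
print(flags)  # [0, 0, 1, 0, 2, 0]
```

Flagged points would then be presented in the GUI for the manual confirmation step the abstract describes, rather than being deleted automatically.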
Discovering Knowledge from Noisy Databases Using Genetic Programming.
ERIC Educational Resources Information Center
Wong, Man Leung; Leung, Kwong Sak; Cheng, Jack C. Y.
2000-01-01
Presents a framework that combines Genetic Programming and Inductive Logic Programming, two approaches in data mining, to induce knowledge from noisy databases. The framework is based on a formalism of logic grammars and is implemented as a data mining system called LOGENPRO (Logic Grammar-based Genetic Programming System). (Contains 34…
Renehan, Emma; Goeman, Dianne; Koch, Susan
2017-07-20
In Australia, dementia is a national health priority. With the rising number of people living with dementia and shortage of formal and informal carers predicted in the near future, developing approaches to coordinating services in quality-focused ways is considered an urgent priority. Key worker support models are one approach that have been used to assist people living with dementia and their caring unit coordinate services and navigate service systems; however, there is limited literature outlining comprehensive frameworks for the implementation of community dementia key worker roles in practice. In this paper an optimised key worker framework for people with dementia, their family and caring unit living in the community is developed and presented. A number of processes were undertaken to inform the development of a co-designed optimised key worker framework: an expert working and reference group; a systematic review of the literature; and a qualitative evaluation of 14 dementia key worker models operating in Australia involving 14 interviews with organisation managers, 19 with key workers and 15 with people living with dementia and/or their caring unit. Data from the systematic review and evaluation of dementia key worker models were analysed by the researchers and the expert working and reference group using a constant comparative approach to define the essential components of the optimised framework. The developed framework consisted of four main components: overarching philosophies; organisational context; role definition; and key worker competencies. A number of more clearly defined sub-themes sat under each component. Reflected in the framework is the complexity of the dementia journey and the difficulty in trying to develop a 'one size fits all' approach. 
This co-designed study led to the development of an evidence based framework which outlines a comprehensive synthesis of components viewed as being essential to the implementation of a dementia key worker model of care in the community. The framework was informed and endorsed by people living with dementia and their caring unit, key workers, managers, Australian industry experts, policy makers and researchers. An evaluation of its effectiveness and relevance for practice within the dementia care space is required.
A Step-by-Step Framework on Discrete Events Simulation in Emergency Department; A Systematic Review
Dehghani, Mahsa; Moftian, Nazila; Rezaei-Hachesu, Peyman; Samad-Soltani, Taha
2017-01-01
Objective: To systematically review the current literature of simulation in healthcare including the structured steps in the emergency healthcare sector by proposing a framework for simulation in the emergency department. Methods: For the purpose of collecting the data, PubMed and ACM databases were used between the years 2003 and 2013. The inclusion criteria were to select English-written articles available in full text with the closest objectives from among a total of 54 articles retrieved from the databases. Subsequently, 11 articles were selected for further analysis. Results: The studies focused on the reduction of waiting time and patient stay, optimization of resources allocation, creation of crisis and maximum demand scenarios, identification of overcrowding bottlenecks, investigation of the impact of other systems on the existing system, and improvement of the system operations and functions. Subsequently, 10 simulation steps were derived from the relevant studies after an expert’s evaluation. Conclusion: The 10-steps approach proposed on the basis of the selected studies provides simulation and planning specialists with a structured method for both analyzing problems and choosing best-case scenarios. Moreover, following this framework systematically enables the development of design processes as well as software implementation of simulation problems. PMID:28507994
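The model-building step of such a framework often reduces to a queueing simulation: patients arrive, wait for a free server (doctor), and are treated. Below is a minimal discrete-event sketch using a heap of doctor-free times; arrival and service parameters are illustrative, not drawn from the reviewed studies.

```python
import heapq
import random

def mean_wait(arrivals, service_time, n_doctors):
    """Minimal discrete-event ED queue: each patient takes the earliest-free
    doctor; a heap holds the times at which doctors become free."""
    free_at = [0.0] * n_doctors
    heapq.heapify(free_at)
    waits = []
    for t in sorted(arrivals):
        doc_free = heapq.heappop(free_at)
        start = max(t, doc_free)          # queueing delay if all doctors busy
        waits.append(start - t)
        heapq.heappush(free_at, start + service_time)
    return sum(waits) / len(waits)

# Illustrative scenario: ~1 arrival per 5 min, 12-min deterministic treatment.
random.seed(7)
arrivals = [i * 5.0 + random.uniform(0, 5.0) for i in range(200)]
w2 = mean_wait(arrivals, 12.0, n_doctors=2)
w3 = mean_wait(arrivals, 12.0, n_doctors=3)
print(w3 < w2)  # True: the extra doctor removes the overload (12 > 2 * 5)
```

Running such a model under crisis and maximum-demand scenarios, as the reviewed studies did, is then a matter of changing the arrival stream and resource counts and comparing mean waits across scenarios.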
High-Performance Data Analytics Beyond the Relational and Graph Data Models with GEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Minutoli, Marco; Bhatt, Shreyansh
Graphs represent an increasingly popular data model for data analytics, since they can naturally represent relationships and interactions between entities. Relational databases and their pure table-based data model are not well suited to storing and processing sparse data. Consequently, graph databases have gained interest in the last few years, and the Resource Description Framework (RDF) became the standard data model for graph data. Nevertheless, while RDF is well suited to analyzing the relationships between entities, it is not efficient in representing their attributes and properties. In this work we propose the adoption of a new hybrid data model, based on attributed graphs, that aims at overcoming the limitations of the pure relational and graph data models. We present how we have re-designed the GEMS data-analytics framework to fully take advantage of the proposed hybrid data model. To improve analysts' productivity, in addition to a C++ API for application development, we adopt GraQL as the input query language. We validate our approach by implementing a set of queries on net-flow data, and we compare our framework's performance against Neo4j. Experimental results show significant performance improvement over Neo4j, up to several orders of magnitude when increasing the size of the input data.
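The attributed-graph idea is that edges carry property maps directly, so a traversal can filter on attributes in one pass instead of chasing many RDF triples per property. The sketch below uses plain Python structures with net-flow-like records; the field names and data are illustrative, not the GEMS/GraQL API.

```python
# Edges as (src, dst, attribute-map): an attributed graph in miniature.
edges = [
    ("10.0.0.1", "10.0.0.2", {"proto": "tcp", "bytes": 1200}),
    ("10.0.0.2", "10.0.0.3", {"proto": "udp", "bytes": 90}),
    ("10.0.0.1", "10.0.0.3", {"proto": "tcp", "bytes": 5000}),
]

def neighbors(node, min_bytes=0):
    """Traverse outgoing edges while filtering on an edge attribute; in a
    pure triple store each attribute would be a separate triple to join."""
    return [dst for src, dst, attrs in edges
            if src == node and attrs["bytes"] >= min_bytes]

print(neighbors("10.0.0.1", min_bytes=1000))  # ['10.0.0.2', '10.0.0.3']
```

At scale, the hybrid model's win is exactly this: the attribute filter rides along with the traversal rather than multiplying the number of joins.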
Wranik, W Dominika; Hayden, Jill A; Price, Sheri; Parker, Robin M N; Haydt, Susan M; Edwards, Jeanette M; Suter, Esther; Katz, Alan; Gambold, Liesl L; Levy, Adrian R
2016-10-04
Western publicly funded health care systems increasingly rely on interdisciplinary teams to support primary care delivery and management of chronic conditions. This knowledge synthesis focuses on what is known in the academic and grey literature about optimal structural characteristics of teams. Its goal is to assess which factors contribute to the effective functioning of interdisciplinary primary care teams and improved health system outcomes, with specific focus on (i) team structure contribution to team process, (ii) team process contribution to primary care goals, and (iii) team structure contribution to primary care goals. The systematic search of academic literature focuses on four chronic conditions and co-morbidities. Within this scope, qualitative and quantitative studies that assess the effects of team characteristics (funding, governance, organization) on care process and patient outcomes will be searched. Electronic databases (Ovid MEDLINE, Embase, CINAHL, PAIS, Web of Science) will be searched systematically. Online web-based searches will be supported by the Grey Matters Tool. Studies will be included, if they report on interdisciplinary primary care in publicly funded Western health systems, and address the relationships between team structure, process, and/or patient outcomes. Studies will be selected in a three-stage screening process (title/abstract/full text) by two independent reviewers in each stage. Study quality will be assessed using the Mixed Methods Assessment Tool. An a priori framework will be applied to data extraction, and a narrative framework approach is used for the synthesis. Using an integrated knowledge translation approach, an electronic decision support tool will be developed for decision makers. It will be searchable along two axes of inquiry: (i) what primary care goals are supported by specific team characteristics and (ii) how should teams be structured to support specific primary care goals? 
The results of this evidence review will contribute directly to the design of interdisciplinary primary care teams. The optimized design will support the goals of primary care, contributing to the improved health of populations. PROSPERO CRD42016041884.
Low-carbon building assessment and multi-scale input-output analysis
NASA Astrophysics Data System (ADS)
Chen, G. Q.; Chen, H.; Chen, Z. M.; Zhang, Bo; Shao, L.; Guo, S.; Zhou, S. Y.; Jiang, M. M.
2011-01-01
Presented in this paper is a low-carbon building evaluation framework: detailed carbon emission accounting procedures for the building life cycle in terms of nine stages (building construction, fitment, outdoor facility construction, transportation, operation, waste treatment, property management, demolition, and disposal), supported by integrated carbon intensity databases based on multi-scale input-output analysis. Such databases are essential for low-carbon planning, procurement and supply-chain design, and logistics management.
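In input-output analysis, the embodied (direct plus indirect) carbon intensity of each sector comes from the Leontief inverse: ε = d(I − A)⁻¹, where d holds direct emission intensities and A the technical-coefficient matrix. The two-sector matrix below is a toy illustration, not a database from the paper.

```python
def leontief_intensities(a, d):
    """Solve epsilon = d (I - A)^(-1) for a 2x2 technical-coefficient
    matrix A by explicit 2x2 inversion."""
    m = [[1 - a[0][0], -a[0][1]],
         [-a[1][0], 1 - a[1][1]]]            # I - A
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    # Row vector d times the inverse matrix.
    return [d[0] * inv[0][0] + d[1] * inv[1][0],
            d[0] * inv[0][1] + d[1] * inv[1][1]]

a = [[0.2, 0.3],   # toy inter-sector input coefficients
     [0.1, 0.4]]
d = [1.0, 0.5]     # direct emissions per unit output (e.g. tCO2/unit)

eps = leontief_intensities(a, d)
print(all(e > di for e, di in zip(eps, d)))  # True: embodied > direct
```

Multiplying each life-cycle stage's purchases by the appropriate sector's embodied intensity is what turns such a table into the stage-by-stage carbon account the framework describes.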
Managing Objects in a Relational Framework
1989-01-01
Advanced CO2 Removal Technology Development
NASA Technical Reports Server (NTRS)
Finn, John E.; Verma, Sunita; Forrest, Kindall; LeVan, M. Douglas
2001-01-01
The Advanced CO2 Removal Technical Task Agreement covers three active areas of research and development: a study of the economic viability of a hybrid membrane/adsorption CO2 removal system, sorbent materials development, and construction of a database of adsorption properties of important fixed gases on several adsorbent materials that may be used in CO2 removal systems. The membrane/adsorption CO2 removal system was proposed as a possible way to reduce the energy consumption of the four-bed molecular sieve (4BMS) system now in use. Much of the energy used by the 4BMS is used to desorb water removed in the device's desiccant beds. These beds might be replaced by a desiccating membrane that moves the water from the incoming stream directly into the outlet stream. The approach may allow the CO2 removal beds to operate at a lower temperature. A comparison between models of the 4BMS and hybrid systems is underway at Vanderbilt University. NASA Ames Research Center has been investigating Ag-exchanged zeolites as a possible improvement over the currently used Ca and Na zeolites for CO2 removal. Silver ions will complex with π-bonds in hydrocarbons such as ethylene, giving remarkably improved selectivity for adsorption of those materials. Bonds with π character are also present in carbon oxides. NASA Ames is also continuing to build a database of adsorption isotherms of CO2, N2, O2, CH4, and Ar on a variety of sorbents. This information is useful for analysis of existing hardware and design of new processes.
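Isotherm databases like the one described are commonly stored as fitted model parameters rather than raw points; the simplest fit is the Langmuir form q = qm·b·p/(1 + b·p). The parameters below are illustrative placeholders, not Ames data.

```python
def langmuir(p_kpa, qm, b):
    """Langmuir isotherm: loading (mol/kg) at pressure p (kPa) for
    saturation capacity qm (mol/kg) and affinity b (1/kPa)."""
    return qm * b * p_kpa / (1.0 + b * p_kpa)

# Toy CO2-on-zeolite parameters (hypothetical).
qm, b = 4.0, 0.05

# A useful sanity check of the form: at p = 1/b the loading is exactly qm/2.
half = langmuir(20.0, qm, b)   # 20.0 kPa = 1/b here
print(round(half, 6))  # 2.0
```

Given tabulated (qm, b) pairs per gas and sorbent, process models such as the 4BMS/hybrid comparison can evaluate working capacities at any operating pressure without re-interpolating raw isotherm points.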
Survey of Analysis of Crime Detection Techniques Using Data Mining and Machine Learning
NASA Astrophysics Data System (ADS)
Prabakaran, S.; Mitra, Shilpa
2018-04-01
Data mining is the field comprising procedures for finding designs or patterns in huge datasets; it includes strategies at the convergence of machine learning and database systems. It can be applied to various fields such as healthcare, market basket analysis, education, manufacturing engineering, and crime investigation. Among these, crime investigation is an interesting application: processing crime characteristics can help society toward better living. This paper surveys various data mining techniques used in this domain. The study may be helpful in designing new strategies for crime prediction and analysis.
Zhang, Cheng; Zang, Yaping; Zhang, Fengjiao; Diao, Ying; McNeill, Christopher R; Di, Chong-An; Zhu, Xiaozhang; Zhu, Daoben
2016-10-01
"Molecule-framework" and "side-chain" engineering are powerful approaches for the design of high-performance organic semiconductors. Based on 2DQTTs, the relationship between molecular structure, film microstructure, and charge-transport properties in organic thin-film transistors (OTFTs) is studied. 2DQTT-o-B exhibits an outstanding electron mobility of 5.2 cm² V⁻¹ s⁻¹, a record for air-stable, solution-processable n-channel small-molecule OTFTs to date. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhu, Liangliang; Fu Tan, Chuan; Gao, Minmin; Ho, Ghim Wei
2015-12-16
A macroporous carbon network combined with mesoporous catalyst immobilization by a template method gives a metal-oxide-organic framework (MoOF) foam microreactor that readily soaks up pollutants and localizes solar energy in itself, leading to effective degradation of water pollutants (e.g., methyl orange (MO)) and also hydrogen generation. The cleaned-up water can be removed from the microreactor simply by compression, and the microreactor can be used repeatedly. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Molecular Framework for Understanding DCIS
2016-10-01
well. Pathologic and Clinical Annotation Database A clinical annotation database titled the Breast Oncology Database has been established to...complement the procured SPORE sample characteristics and annotated pathology data. This Breast Oncology Database is an offsite clinical annotation...database adheres to CSMC Enterprise Information Services (EIS) research database security standards. The Breast Oncology Database consists of: 9 Baseline
Using greenhouse gas fluxes to define soil functional types
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrakis, Sandra; Barba, Josep; Bond-Lamberty, Ben
Soils provide key ecosystem services and directly control ecosystem functions; thus, there is a need to define the reference state of soil functionality. Most common functional classifications of ecosystems are vegetation-centered and neglect soil characteristics and processes. We propose Soil Functional Types (SFTs) as a conceptual approach to represent and describe the functionality of soils based on characteristics of their greenhouse gas (GHG) flux dynamics. We used automated measurements of CO2, CH4 and N2O in a forested area to define SFTs following a simple statistical framework. This study supports the hypothesis that SFTs provide additional insights into the spatial variability of soil functionality beyond the information represented by commonly measured soil parameters (e.g., soil moisture, soil temperature, litter biomass). We discuss the implications of this framework at the plot scale and the potential of this approach at larger scales. This approach is a first step toward a framework for defining SFTs, but a community effort is necessary to harmonize any global classification of soil functionality. A global application of the proposed SFT framework will only be possible if there is a community-wide effort to share data and create a global database of GHG emissions from soils.
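The grouping step described in this abstract can be illustrated with a toy clustering of per-plot flux vectors. The flux values, the cluster count, and the use of a plain k-means are all illustrative assumptions here, not the authors' actual statistical framework:

```python
# Illustrative sketch: group plots into "Soil Functional Types" by
# clustering their mean greenhouse-gas fluxes. Values are invented;
# a simple k-means stands in for the paper's statistical framework.

def kmeans(points, k, iters=20):
    centroids = points[:k]  # deterministic init: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each plot to its nearest centroid (squared distance)
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                    for a, b in zip(p, centroids[c])))
            groups[i].append(p)
        # recompute centroids as the mean of each group
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g
                     else centroids[i] for i, g in enumerate(groups)]
    return centroids, groups

# (CO2 flux, CH4 flux) per plot, hypothetical units
fluxes = [(2.1, 0.01), (2.3, 0.02), (8.5, 0.40), (8.9, 0.35)]
centroids, groups = kmeans(fluxes, k=2)
print(len(groups[0]), len(groups[1]))  # → 2 2 (two SFTs of two plots each)
```

The real study would cluster on more gases (N2O included) and validate the grouping against measured soil parameters.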
LHCb experience with LFC replication
NASA Astrophysics Data System (ADS)
Bonifazi, F.; Carbone, A.; Perez, E. D.; D'Apice, A.; dell'Agnello, L.; Duellmann, D.; Girone, M.; Re, G. L.; Martelli, B.; Peco, G.; Ricci, P. P.; Sapunenko, V.; Vagnoni, V.; Vitlacil, D.
2008-07-01
Database replication is a key topic in the framework of the LHC Computing Grid to allow processing of data in a distributed environment. In particular, the LHCb computing model relies on the LHC File Catalog (LFC), i.e. a database which stores information about files spread across the Grid, their logical names and the physical locations of all the replicas. The LHCb computing model requires the LFC to be replicated at Tier-1s. The LCG 3D project deals with the database replication issue and provides a replication service based on Oracle Streams technology. This paper describes the deployment of the LHC File Catalog replication to the INFN National Center for Telematics and Informatics (CNAF) and to other LHCb Tier-1 sites. We performed stress tests designed to evaluate any delay in the propagation of the streams and the scalability of the system. The tests show the robustness of the replica implementation, with performance going well beyond the LHCb requirements.
Institutional Factors for Supporting Electronic Learning Communities
ERIC Educational Resources Information Center
Linton, Jayme N.
2017-01-01
This study was designed to explore how the electronic learning community (eLC) process at an established state virtual high school (SVHS) supported new and veteran online high school teachers through the communities of practice (CoP) framework. Specifically, this study focused on the institutionally-driven nature of the eLC process, using Wenger's…
DOT National Transportation Integrated Search
2017-09-01
A number of Connected and/or Automated Vehicle (CAV) applications have recently been designed to improve the performance of our transportation system. Safety, mobility and environmental sustainability are three cornerstone performance metrics when ev...
Sustainable urban systems: Co-design and framing for transformation.
Webb, Robert; Bai, Xuemei; Smith, Mark Stafford; Costanza, Robert; Griggs, David; Moglia, Magnus; Neuman, Michael; Newman, Peter; Newton, Peter; Norman, Barbara; Ryan, Chris; Schandl, Heinz; Steffen, Will; Tapper, Nigel; Thomson, Giles
2018-02-01
Rapid urbanisation generates risks and opportunities for sustainable development. Urban policy and decision makers are challenged by the complexity of cities as social-ecological-technical systems. Consequently there is an increasing need for collaborative knowledge development that supports a whole-of-system view, and transformational change at multiple scales. Such holistic urban approaches are rare in practice. A co-design process involving researchers, practitioners and other stakeholders, has progressed such an approach in the Australian context, aiming to also contribute to international knowledge development and sharing. This process has generated three outputs: (1) a shared framework to support more systematic knowledge development and use, (2) identification of barriers that create a gap between stated urban goals and actual practice, and (3) identification of strategic focal areas to address this gap. Developing integrated strategies at broader urban scales is seen as the most pressing need. The knowledge framework adopts a systems perspective that incorporates the many urban trade-offs and synergies revealed by a systems view. Broader implications are drawn for policy and decision makers, for researchers and for a shared forward agenda.
NASA Technical Reports Server (NTRS)
Bebout, Leslie; Keller, R.; Miller, S.; Jahnke, L.; DeVincenzi, D. (Technical Monitor)
2002-01-01
The Ames Exobiology Culture Collection Database (AECC-DB) has been developed as a collaboration between microbial ecologists and information technology specialists. It allows for extensive web-based archiving of information regarding field samples to document microbial co-habitation of specific ecosystem micro-environments. Documentation and archiving continue as pure cultures are isolated, metabolic properties determined, and DNA extracted and sequenced. In this way, metabolic properties and molecular sequences are clearly linked back to specific isolates and the location of those microbes in the ecosystem of origin. Use of this database system presents a significant advancement over traditional bookkeeping, in which there is typically little or no information regarding the environments from which microorganisms were isolated; usually only a broad ecosystem designation (e.g., hot spring) is recorded. Within each such ecosystem, however, there are myriad microenvironments with very different properties, and determining exactly which microenvironment a given microbe comes from is critical in designing appropriate isolation media and interpreting physiological properties. We are currently using the database to aid in the isolation of a large number of cyanobacterial species and will present results by PIs and students demonstrating the utility of this new approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, JA; McDonald, TM; Bae, TH
Despite the large number of metal-organic frameworks that have been studied in the context of post-combustion carbon capture, adsorption equilibria of gas mixtures including CO2, N2, and H2O, which are the three biggest components of the flue gas emanating from a coal- or natural gas-fired power plant, have never been reported. Here, we disclose the design and validation of a high-throughput multicomponent adsorption instrument that can measure equilibrium adsorption isotherms for mixtures of gases at conditions that are representative of an actual flue gas from a power plant. This instrument is used to study 15 different metal-organic frameworks, zeolites, mesoporous silicas, and activated carbons representative of the broad range of solid adsorbents that have received attention for CO2 capture. While the multicomponent results presented in this work provide many interesting fundamental insights, only adsorbents functionalized with alkylamines are shown to have any significant CO2 capacity in the presence of N2 and H2O at equilibrium partial pressures similar to those expected in a carbon capture process. Most significantly, the amine-appended metal-organic framework mmen-Mg2(dobpdc) (mmen = N,N'-dimethylethylenediamine, dobpdc(4-) = 4,4'-dioxido-3,3'-biphenyldicarboxylate) exhibits a record CO2 capacity of 4.2 ± 0.2 mmol/g (16 wt %) at 0.1 bar and 40 °C in the presence of a high partial pressure of H2O.
Mason, Jarad A; McDonald, Thomas M; Bae, Tae-Hyun; Bachman, Jonathan E; Sumida, Kenji; Dutton, Justin J; Kaye, Steven S; Long, Jeffrey R
2015-04-15
Despite the large number of metal-organic frameworks that have been studied in the context of post-combustion carbon capture, adsorption equilibria of gas mixtures including CO2, N2, and H2O, which are the three biggest components of the flue gas emanating from a coal- or natural gas-fired power plant, have never been reported. Here, we disclose the design and validation of a high-throughput multicomponent adsorption instrument that can measure equilibrium adsorption isotherms for mixtures of gases at conditions that are representative of an actual flue gas from a power plant. This instrument is used to study 15 different metal-organic frameworks, zeolites, mesoporous silicas, and activated carbons representative of the broad range of solid adsorbents that have received attention for CO2 capture. While the multicomponent results presented in this work provide many interesting fundamental insights, only adsorbents functionalized with alkylamines are shown to have any significant CO2 capacity in the presence of N2 and H2O at equilibrium partial pressures similar to those expected in a carbon capture process. Most significantly, the amine-appended metal organic framework mmen-Mg2(dobpdc) (mmen = N,N'-dimethylethylenediamine, dobpdc (4-) = 4,4'-dioxido-3,3'-biphenyldicarboxylate) exhibits a record CO2 capacity of 4.2 ± 0.2 mmol/g (16 wt %) at 0.1 bar and 40 °C in the presence of a high partial pressure of H2O.
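The quoted capacity and weight percent in this abstract are mutually consistent, as a quick unit conversion shows (using 44.01 g/mol for CO2 and a total-mass basis, i.e. adsorbed CO2 over sorbent plus CO2):

```python
# Check that 4.2 mmol CO2 per g sorbent corresponds to ~16 wt %.
M_CO2 = 44.01             # g/mol, molar mass of CO2
uptake = 4.2e-3 * M_CO2   # g CO2 adsorbed per g sorbent ≈ 0.185
wt_pct = 100 * uptake / (1 + uptake)  # CO2 mass fraction of loaded sorbent
print(round(wt_pct))  # → 16
```

Note that on a per-sorbent-mass basis the same uptake would read ~18.5 wt %; the 16 wt % figure matches the total-mass convention.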
Harmonizing the interpretation of genetic variants across the world: the Malaysian experience.
Hassan, Nik Norliza Nik; Plazzer, John-Paul; Smith, Timothy D; Halim-Fikri, Hashim; Macrae, Finlay; Zubaidi, A A L; Zilfalil, Bin Alwi
2016-02-26
Databases of gene variants are very useful for sharing genetic data and for facilitating understanding of the genetic basis of diseases. This report summarises the issues surrounding the development of the Malaysian Human Variome Project Country Node. The focus is on human germline variants. Somatic variants, mitochondrial variants and other types of genetic variation have corresponding databases which are not covered here, as they have specific issues that do not necessarily apply to germline variation. The ethical, legal and social issues, intellectual property, ownership of the data, information technology implementation, and efforts to improve the standards and systems used in data sharing are discussed. An overarching framework such as that provided by the Human Variome Project to co-ordinate activities is invaluable. Country Nodes, such as MyHVP, enable human gene variation associated with human diseases to be collected, stored and shared by all disciplines (clinicians, molecular biologists, pathologists, bioinformaticians) for a consistent interpretation of genetic variants locally and across the world.
SQLGEN: a framework for rapid client-server database application development.
Nadkarni, P M; Cheung, K H
1995-12-01
SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
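The dictionary-driven SQL generation that this abstract describes can be sketched as follows. The table, columns, and function below are illustrative inventions, not SQLGEN's actual metadata schema or API:

```python
# Minimal sketch of dictionary-driven dynamic SQL in the spirit of
# SQLGEN: a client-side data dictionary stores table metadata, and
# common operations are generated from it rather than hand-written.
# (Table/column names are hypothetical.)
metadata = {
    "patient": {"columns": ["id", "name", "dob"], "key": "id"},
}

def gen_select(table, by_key=False):
    cols = ", ".join(metadata[table]["columns"])
    sql = f"SELECT {cols} FROM {table}"
    if by_key:
        # parameterized lookup on the table's primary key
        sql += f" WHERE {metadata[table]['key']} = ?"
    return sql

print(gen_select("patient", by_key=True))
# → SELECT id, name, dob FROM patient WHERE id = ?
```

Access rights loaded at log-in time could gate which of these generators the client exposes, mirroring the self-configuration behavior described above.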
Protocols for the Design of Kinase-focused Compound Libraries.
Jacoby, Edgar; Wroblowski, Berthold; Buyck, Christophe; Neefs, Jean-Marc; Meyer, Christophe; Cummings, Maxwell D; van Vlijmen, Herman
2018-05-01
Protocols for the design of kinase-focused compound libraries are presented. Kinase-focused compound libraries can be differentiated based on the design goal. Depending on whether the library should be a discovery library specific for one particular kinase, a general discovery library for multiple distinct kinase projects, or even for phenotypic screening, there exists today a variety of in silico methods to design candidate compound libraries. We address the following scenarios: 1) Data mining of SAR databases and kinase-focused vendor catalogues; 2) Predictions and virtual screening; 3) Structure-based design of combinatorial kinase inhibitors; 4) Design of covalent kinase inhibitors; 5) Design of macrocyclic kinase inhibitors; and 6) Design of allosteric kinase inhibitors and activators. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree
ERIC Educational Resources Information Center
Chen, Wei-Bang
2012-01-01
The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…
Effects of Early Writing Intervention Delivered within a Data-Based Instruction Framework
ERIC Educational Resources Information Center
Jung, Pyung-Gang; McMaster, Kristen L.; delMas, Robert C.
2017-01-01
We examined effects of research-based early writing intervention delivered within a data-based instruction (DBI) framework for children with intensive needs. We randomly assigned 46 students with and without disabilities in Grades 1 to 3 within classrooms to either treatment or control. Treatment students received research-based early writing…
WEB-GIS Decision Support System for CO2 storage
NASA Astrophysics Data System (ADS)
Gaitanaru, Dragos; Leonard, Anghel; Radu Gogu, Constantin; Le Guen, Yvi; Scradeanu, Daniel; Pagnejer, Mihaela
2013-04-01
Environmental decision support system (DSS) paradigms evolve and change as more knowledge and technology become available to the environmental community. Geographic Information Systems (GIS) can be used to extract, assess and disseminate types of information that are otherwise difficult to access by traditional methods. At the same time, with the help of the Internet and accompanying tools, creating and publishing online interactive maps has become easier and richer with options. The Decision Support System (MDSS) developed for the MUSTANG (A MUltiple Space and Time scale Approach for the quaNtification of deep saline formations for CO2 storaGe) project is a user-friendly web-based application that uses GIS capabilities. The MDSS can be exploited by experts in CO2 injection and storage in deep saline aquifers. Its main objective is to help experts make decisions based on large volumes of structured data and information. To achieve this objective, the MDSS has a geospatial object-oriented database structure for a wide variety of data and information. The entire application is based on several principles leading to a series of capabilities and specific characteristics: (i) Open source - the entire platform is based on open-source technologies: (1) database engine, (2) application server, (3) geospatial server, (4) user interfaces, (5) add-ons, etc. (ii) Multiple database connections - the MDSS can connect to different databases located on different server machines. (iii) Desktop user experience - the MDSS architecture and design follow the structure of desktop software. (iv) Communication - the server side and the desktop are bound together by a series of functions that allow the user to upload, use, modify and download data within the application.
The architecture of the system involves one database and a modular application composed of: (1) a visualization module, (2) an analysis module, (3) a guidelines module, and (4) a risk assessment module. The database component is built using the open-source PostgreSQL and PostGIS technologies. The visualization module allows the user to view data from CO2 injection sites in different ways: (1) geospatial visualization, (2) table view, and (3) 3D visualization. The analysis module allows the user to perform analyses such as injectivity, containment and capacity analysis. The risk assessment module focuses on the site risk matrix approach. The guidelines module contains guidelines on the methodologies for CO2 injection and storage in deep saline aquifers.
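The modular layout described above can be sketched as a registry that dispatches requests to independent modules. The module names follow the abstract, but their bodies are placeholders; the real MDSS is a web-GIS application backed by PostgreSQL/PostGIS:

```python
# Sketch of the four-module DSS architecture: modules register
# themselves with a central dispatcher, so new modules can be added
# without touching existing ones. (Module bodies are illustrative.)
modules = {}

def register(name):
    def deco(fn):
        modules[name] = fn
        return fn
    return deco

@register("analysis")
def analysis(site):
    # injectivity / containment / capacity analyses would run here
    return f"analysis report for {site}"

@register("risk")
def risk(site):
    # site risk matrix evaluation would run here
    return f"risk matrix for {site}"

def dispatch(module, site):
    return modules[module](site)

print(dispatch("risk", "site-A"))  # → risk matrix for site-A
```

The same pattern accommodates the visualization and guidelines modules, and keeps the database layer behind whichever module needs it.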
Tomer, Mark D; James, David E; Sandoval-Green, Claudette M J
2017-05-01
Conservation planning information is important for identifying options for watershed water quality improvement and can be developed for use at field, farm, and watershed scales. Translation across scales is a key issue impeding progress at watershed scales because watershed improvement goals must be connected with implementation of farm- and field-level conservation practices to demonstrate success. This is particularly true when examining alternatives for "trap and treat" practices implemented at agricultural-field edges to control (or influence) water flows through fields, landscapes, and riparian corridors within agricultural watersheds. We propose that database structures used in developing conservation planning information can achieve translation across conservation-planning scales, and we developed the Agricultural Conservation Planning Framework (ACPF) to enable practical planning applications. The ACPF comprises a planning concept, a database to facilitate field-level and watershed-scale analyses, and an ArcGIS toolbox with Python scripts to identify specific options for placement of conservation practices. This paper appends two prior publications and describes the structure of the ACPF database, which contains land use, crop history, and soils information and is available for download for 6091 HUC12 watersheds located across Iowa, Illinois, Minnesota, and parts of Kansas, Missouri, Nebraska, and Wisconsin and comprises information on 2.74 × 10 agricultural fields (available through /). Sample results examining land use trends across Iowa and Illinois are presented here to demonstrate potential uses of the database. While designed for use with the ACPF toolbox, users are welcome to use the ACPF watershed data in a variety of planning and modeling approaches. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Lazem, Shaimaa; Webster, Mary; Holmes, Wayne; Wolf, Motje
2015-09-02
Here we review 18 articles that describe the design and evaluation of 1 or more games for diabetes from technical, methodological, and theoretical perspectives. We undertook searches covering the period 2010 to May 2015 in the ACM, IEEE, Journal of Medical Internet Research, Studies in Health Technology and Informatics, and Google Scholar online databases using the keywords "children," "computer games," "diabetes," "games," "type 1," and "type 2" in various Boolean combinations. The review sets out to establish, for future research, an understanding of the current landscape of digital games designed for children with diabetes. We briefly explored the use and impact of well-established learning theories in such games. The most frequently mentioned theoretical frameworks were social cognitive theory and social constructivism. Due to the limitations of the reported evaluation methodologies, little evidence was found to support the strong promise of games for diabetes. Furthermore, we could not establish a relation between design features and the game outcomes. We argue that an in-depth discussion about the extent to which learning theories could and should be manifested in the design decisions is required. © 2015 Diabetes Technology Society.
NASA Astrophysics Data System (ADS)
Verlinde, Christophe L. M. J.; Rudenko, Gabrielle; Hol, Wim G. J.
1992-04-01
A modular method for pursuing structure-based inhibitor design in the framework of a design cycle is presented. The approach entails four stages: (1) a design pathway is defined in the three-dimensional structure of a target protein; (2) this pathway is divided into subregions; (3) complementary building blocks, also called fragments, are designed in each subregion; complementarity is defined in terms of shape, hydrophobicity, hydrogen bond properties and electrostatics; and (4) fragments from different subregions are linked into potential lead compounds. Stages (3) and (4) are qualitatively guided by force-field calculations. In addition, the designed fragments serve as entries for retrieving existing compounds from chemical databases. This linked-fragment approach has been applied in the design of potentially selective inhibitors of triosephosphate isomerase from Trypanosoma brucei, the causative agent of sleeping sickness.
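Stages (3) and (4) of the linked-fragment approach amount to a combinatorial enumeration: one complementary fragment is chosen per subregion of the design pathway, and the choices are linked into candidate compounds. A toy sketch (fragment names are invented, and real linking would involve chemistry, not string joins):

```python
# Toy enumeration for the linked-fragment approach: pick one candidate
# fragment per subregion of the design pathway and link the picks into
# candidate lead compounds.
from itertools import product

subregions = {
    "S1": ["benzyl", "phenol"],
    "S2": ["amide", "sulfonamide"],
    "S3": ["methyl"],
}

# one fragment per subregion, all combinations
candidates = ["-".join(combo) for combo in product(*subregions.values())]
print(len(candidates))  # → 4 (2 * 2 * 1 linked candidates)
```

In practice each combination would then be scored by the force-field calculations mentioned above, and fragments would also seed database searches for existing compounds.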
Thirion, Damien; Lee, Joo S; Özdemir, Ercan
2016-01-01
Effective carbon dioxide (CO2) capture requires solid, porous sorbents with chemically and thermally stable frameworks. Herein, we report two new carbon–carbon-bonded porous networks synthesized through metal-free Knoevenagel nitrile–aldol condensation: the covalent organic polymers COP-156 and COP-157. COP-156, owing to its high specific surface area (650 m²/g) and easily interchangeable nitrile groups, was modified post-synthetically into free amine- or amidoxime-containing networks. The modified COP-156-amine showed fast and increased CO2 uptake under simulated moist flue gas conditions compared to the starting network and usual industrial CO2 solvents, reaching up to 7.8 wt % uptake at 40 °C. PMID:28144294
NASA Astrophysics Data System (ADS)
Yim, Keun Soo
This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. 
In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of program states that included dynamically allocated memory (to be spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors that are mainly due to the lack of fine-grained protections and the massive use of fault-insensitive data. This dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built using only commercial off-the-shelf hardware components. This dissertation shows that by developing understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that can quickly detect and recover from hardware faults with low performance and hardware overheads.
Cachat, Jonathan; Bandrowski, Anita; Grethe, Jeffery S; Gupta, Amarnath; Astakhov, Vadim; Imam, Fahim; Larson, Stephen D; Martone, Maryann E
2012-01-01
The number of neuroscience resources (databases, tools, materials, and networks) available via the Web continues to expand, particularly in light of newly implemented data sharing policies required by funding agencies and journals. However, the nature of dense, multifaceted neuroscience data and the design of classic search engine systems make efficient, reliable, and relevant discovery of such resources a significant challenge. This challenge is especially pertinent for online databases, whose dynamic content is largely opaque to contemporary search engines. The Neuroscience Information Framework (NIF) was initiated to address this problem of finding and utilizing neuroscience-relevant resources. Since its first production release in 2008, NIF has been surveying the resource landscape for the neurosciences, identifying relevant resources and working to make them easily discoverable by the neuroscience community. In this chapter, we provide a survey of the resource landscape for neuroscience: what types of resources are available, how many there are, what they contain, and most importantly, ways in which these resources can be utilized by the research community to advance neuroscience research. Copyright © 2012 Elsevier Inc. All rights reserved.
A data colocation grid framework for big data medical image processing: backend design
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-03-01
When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores.
Results from three empirical experiments are presented and discussed: (1) the load balancer yields a 1.5-fold wall-time improvement compared with a framework using a built-in data allocation strategy, (2) the summary statistic model is empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), reducing wall-clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
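The dataset summary statistic implemented "by the MapReduce paradigm" in this abstract can be sketched with a map/reduce pair: each map task emits a partial (sum, count) for its shard of values, and the reduce step combines partials into a dataset-wide mean. The shard values below are invented; the real system runs on Hadoop/HBase over image data:

```python
# Sketch of a MapReduce-style summary statistic: per-shard partial
# aggregates are combined associatively, so the reduce can run in any
# order across a cluster. (Shard contents are illustrative.)
from functools import reduce

shards = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]  # per-node value lists

def map_stage(shard):
    # emit a partial aggregate for one shard
    return (sum(shard), len(shard))

def reduce_stage(a, b):
    # combine two partial aggregates
    return (a[0] + b[0], a[1] + b[1])

total, count = reduce(reduce_stage, map(map_stage, shards))
print(total / count)  # → 3.5 (mean of all six values)
```

Keeping the combine step associative and commutative is what lets the framework parallelize it freely across heterogeneous nodes.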
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A
2018-03-01
When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores.
Results from three empirical experiments are presented and discussed: (1) the load balancer improves wall time 1.5-fold compared with a framework using the built-in data allocation strategy; (2) the summary statistic model is empirically verified on the grid framework and compared with a cluster deployed with a standard Sun Grid Engine (SGE), yielding an 8-fold reduction in wall-clock time and a 14-fold reduction in resource time; and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
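As a rough sketch of the five backend operations listed above (Upload, Retrieve, Remove, Load balancer, MapReduce template), the following Python toy stands in for the actual Hadoop/HBase implementation; all class and node names are invented, and the load balancer simply favors the node with the lowest load relative to its CPU capacity, one plausible heuristic for a heterogeneous cluster:

```python
# Hypothetical sketch of a HadoopBase-MIP-style backend API surface
# (Upload, Retrieve, Remove, Load balancer, MapReduce template).
# All names are illustrative; the real framework sits on Hadoop/HBase.

class ImageStore:
    def __init__(self, node_capacities):
        # node_capacities: relative CPU power of each heterogeneous node
        self.capacities = dict(node_capacities)
        self.load = {n: 0 for n in node_capacities}   # images per node
        self.location = {}                            # image_id -> node

    def balance(self):
        # Least relative load first: load divided by node capacity,
        # so faster nodes receive proportionally more data.
        return min(self.load, key=lambda n: self.load[n] / self.capacities[n])

    def upload(self, image_id, data):
        # Payload handling is omitted in this sketch.
        node = self.balance()
        self.location[image_id] = node
        self.load[node] += 1
        return node

    def retrieve(self, image_id):
        return self.location[image_id]

    def remove(self, image_id):
        node = self.location.pop(image_id)
        self.load[node] -= 1

    def map_reduce(self, map_fn, reduce_fn, initial):
        # Template: map over colocated images, then fold the results.
        out = initial
        for image_id in self.location:
            out = reduce_fn(out, map_fn(image_id))
        return out

store = ImageStore({"fast-node": 4.0, "slow-node": 1.0})
for i in range(10):
    store.upload(f"T1_{i:04d}", b"...")
# The 4x-faster node ends up holding roughly 4x the images.
```

The real framework must additionally handle data colocation and fault tolerance; this sketch only illustrates the shape of the API and the capacity-aware allocation heuristic.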
Karim, Quarraisha Abdool; Kharsany, Ayesha B M; Naidoo, Kasavan; Yende, Nonhlanhla; Gengiah, Tanuja; Omar, Zaheen; Arulappan, Natasha; Mlisana, Koleka P; Luthuli, Londiwe R; Karim, Salim S Abdool
2011-05-01
In settings where multiple HIV prevention trials are conducted in close proximity, trial participants may attempt to enroll in more than one trial simultaneously. Co-enrollment compromises participants' safety and the validity of trial results. We describe our experience, the remedial action taken, inter-organizational collaboration, and lessons learnt following the identification of co-enrolled participants. Between February and April 2008, we identified 185 of the 398 enrolled participants as ineligible. In violation of the study protocol's exclusion criteria, these included women simultaneously enrolled in another HIV prevention trial (ineligible co-enrolled, n=135) and women who had participated in a microbicide trial within the past 12 months (ineligible not co-enrolled, n=50). Following a complete audit of all enrolled participants, ineligible participants were discontinued from trial follow-up via study exit visits. A custom-designed education program on the impact of co-enrollment on participants' safety and the validity of trial results was implemented. A shared electronic database between research units was established to enable verification of each volunteer's trial participation and to prevent future co-enrollments. Interviews with ineligible enrolled women revealed that high-quality care, financial incentives, altruistic motives, preference for sex with gel, wanting to increase their likelihood of receiving active gel, perceived low risk of discovery, and peer pressure were the reasons for their enrollment in the CAPRISA 004 trial. Instituting education programs based on the reasons women reported for seeking enrollment in more than one trial and using a shared central database system to identify co-enrollments have effectively prevented further co-enrollments. Copyright © 2011 Elsevier Inc. All rights reserved.
Li, Min; Dong, Xiang-yu; Liang, Hao; Leng, Li; Zhang, Hui; Wang, Shou-zhi; Li, Hui; Du, Zhi-Qiang
2017-05-20
Effective management and analysis of precisely recorded phenotypic traits are important components of the selection and breeding of superior livestock. Over two decades, we divergently selected chicken lines for abdominal fat content at Northeast Agricultural University (Northeast Agricultural University High and Low Fat, NEAUHLF), and collected a large volume of phenotypic data related to the investigation of the molecular genetic basis of adipose tissue deposition in broilers. To effectively and systematically store, manage and analyze these phenotypic data, we built the NEAUHLF Phenome Database (NEAUHLFPD). NEAUHLFPD includes the following phenotypic records: pedigree (generations 1-19) and 29 phenotypes, such as body sizes and weights, carcass traits, and their corresponding rates. The design and construction strategy of NEAUHLFPD was executed as follows: (1) Framework design. We used Apache as our web server, MySQL and Navicat as database management tools, and PHP as the HTML-embedded language to create a dynamic, interactive website. (2) Structural components. The main interface provides a detailed introduction to the composition, function, and index buttons of the basic structure of the database. The functional modules of NEAUHLFPD have two main components: the first module is the physical storage space for phenotypic data, in which data can be indexed, filtered, range-restricted, and searched; the second module handles the calculation of basic descriptive statistics, in which data filtered from the database can be used to compute basic statistical parameters with simultaneous conditional sorting. 
NEAUHLFPD can be used to effectively store and manage not only phenotypic but also genotypic and genomic data, which can facilitate further investigation of the molecular genetic basis of chicken adipose tissue growth and development, and expedite the selection and breeding of broilers with low fat content.
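The second functional module described above can be illustrated with a small Python sketch (the actual system uses PHP and MySQL; the records, trait names, and values below are invented): filter phenotype records by a trait range, compute basic descriptive statistics, and apply a conditional sort.

```python
# Illustrative sketch (not the actual PHP/MySQL implementation) of a
# phenome-database workflow: filter records by range, then compute
# basic descriptive statistics and a conditional sort.
import statistics

# Invented toy records standing in for NEAUHLF phenotype rows.
records = [
    {"line": "high-fat", "generation": 10, "abdominal_fat_pct": 4.2},
    {"line": "high-fat", "generation": 10, "abdominal_fat_pct": 3.8},
    {"line": "low-fat",  "generation": 10, "abdominal_fat_pct": 1.1},
    {"line": "low-fat",  "generation": 10, "abdominal_fat_pct": 1.5},
]

def filter_range(rows, trait, lo, hi):
    """Range-setting: keep rows whose trait value lies in [lo, hi]."""
    return [r for r in rows if lo <= r[trait] <= hi]

def describe(rows, trait):
    """Basic descriptive statistics for one trait."""
    vals = [r[trait] for r in rows]
    return {
        "n": len(vals),
        "mean": statistics.mean(vals),
        "stdev": statistics.stdev(vals) if len(vals) > 1 else 0.0,
        "min": min(vals),
        "max": max(vals),
    }

high = filter_range(records, "abdominal_fat_pct", 3.0, 5.0)
stats = describe(high, "abdominal_fat_pct")
# Conditional sort: by line, then by fat content descending within line.
ranked = sorted(records, key=lambda r: (r["line"], -r["abdominal_fat_pct"]))
```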
Liu, Xikun
2016-01-01
ABSTRACT Epoxyalkane:coenzyme M transferase (EaCoMT) plays a critical role in the aerobic biodegradation and assimilation of alkenes, including ethene, propene, and the toxic chloroethene vinyl chloride (VC). To improve our understanding of the diversity and distribution of EaCoMT genes in the environment, novel EaCoMT-specific terminal-restriction fragment length polymorphism (T-RFLP) and nested-PCR methods were developed and applied to groundwater samples from six different contaminated sites. T-RFLP analysis revealed 192 different EaCoMT T-RFs. Using clone libraries, we retrieved 139 EaCoMT gene sequences from these samples. Phylogenetic analysis revealed that a majority of the sequences (78.4%) grouped with EaCoMT genes found in VC- and ethene-assimilating Mycobacterium strains and Nocardioides sp. strain JS614. The four most-abundant T-RFs were also matched with EaCoMT clone sequences related to Mycobacterium and Nocardioides strains. The remaining EaCoMT sequences clustered within two emergent EaCoMT gene subgroups represented by sequences found in propene-assimilating Gordonia rubripertincta strain B-276 and Xanthobacter autotrophicus strain Py2. EaCoMT gene abundance was positively correlated with VC and ethene concentrations at the sites studied. IMPORTANCE The EaCoMT gene plays a critical role in assimilation of short-chain alkenes, such as ethene, VC, and propene. An improved understanding of EaCoMT gene diversity and distribution is significant to the field of bioremediation in several ways. The expansion of the EaCoMT gene database and identification of incorrectly annotated EaCoMT genes currently in the database will facilitate improved design of environmental molecular diagnostic tools and high-throughput sequencing approaches for future bioremediation studies. Our results further suggest that potentially significant aerobic VC degraders in the environment are not well represented in pure culture. 
Future research should aim to isolate and characterize aerobic VC-degrading bacteria from these underrepresented groups. PMID:27016563
International Space Station Payload Operations Integration Center (POIC) Overview
NASA Technical Reports Server (NTRS)
Ijames, Gayleen N.
2012-01-01
Objectives and Goals: Maintain and operate the POIC and support integrated Space Station command and control functions. Provide software and hardware systems to support ISS payloads and Shuttle for the POIF cadre, Payload Developers and International Partners. Provide design, development, independent verification & validation, configuration, operational product/system deliveries and maintenance of those systems for telemetry, commanding, database and planning. Provide a Backup Control Center for MCC-H in case of shutdown. Provide certified personnel and systems to support 24x7 facility operations per the ISS Program Payloads CoFR Implementation Plan (SSP 52054) and MSFC Payload Operations CoFR Implementation Plan (POIF-1006).
Cooperative CO2 Absorption Isotherms from a Bifunctional Guanidine and Bifunctional Alcohol.
Steinhardt, Rachel; Hiew, Stanley C; Mohapatra, Hemakesh; Nguyen, Du; Oh, Zachary; Truong, Richard; Esser-Kahn, Aaron
2017-12-27
Designing new liquids for CO2 absorption is a challenge in CO2 removal; achieving low regeneration energy while maintaining high selectivity and large capacity remains difficult. Recent cooperative metal-organic frameworks have shown the potential to address many of these challenges. However, many absorbent systems and designs rely on liquid capture agents. We present herein a liquid absorption system which exhibits cooperative CO2 absorption isotherms. Upon introduction, CO2 uptake is initially suppressed, followed by an abrupt increase in absorption. The liquid consists of a bifunctional guanidine and bifunctional alcohol, which, when dissolved in bis(2-methoxyethyl) ether, forms a secondary viscous phase within seconds in response to increases in CO2. The precipitation of this second viscous phase drives CO2 absorption from the gas phase. The isotherm of the bifunctional system differs starkly from that of the analogous monofunctional system, which exhibits limited CO2 uptake across the same pressure range. In our system, CO2 absorption is strongly solvent dependent. In DMSO, both systems exhibit hyperbolic isotherms and no precipitation occurs. Subsequent 1H NMR experiments confirmed the formation of distinct alkylcarbonate species having either one or two molecules of CO2 bound. The solvent and structure relationships derived from these results can be used to tailor new liquid absorption systems to the conditions of a given CO2 separation process.
Han, Xue; Tao, Kai; Wang, Ding; Han, Lei
2018-02-08
Porous nanosheet-structured electrode materials are very attractive for high-efficiency electrochemical energy storage. Herein, a porous cobalt sulfide nanosheet array on Ni foam (Co9S8-NSA/NF) is successfully fabricated by a facile method, which involves the uniform growth of 2D Co-based leaf-like zeolitic imidazole frameworks (Co-ZIF-L) on Ni foam followed by sulfurization with thioacetamide (TAA). Benefiting from the unique porous nanosheet array architecture and conductive substrate, the Co9S8-NSA/NF exhibits excellent electrochemical performance with a high capacitance (1098.8 F g-1 at 0.5 A g-1), good rate capability (54.6% retention at 10 A g-1) and long-term stability (87.4% retention over 1000 cycles) when used as a binder-free electrode for supercapacitors. Furthermore, an asymmetric supercapacitor device assembled using the as-fabricated Co9S8-NSA as the positive electrode and activated carbon (AC) as the negative electrode also exhibits a high energy density of 20.0 W h kg-1 at a high power density of 828.5 W kg-1. The method developed here can be extended to the construction of other structured metal (mono or mixed) sulfide electrode materials for more efficient energy storage.
Uncertainty Quantification for CO2-Enhanced Oil Recovery
NASA Astrophysics Data System (ADS)
Dai, Z.; Middleton, R.; Bauman, J.; Viswanathan, H.; Fessenden-Rahn, J.; Pawar, R.; Lee, S.
2013-12-01
CO2-Enhanced Oil Recovery (EOR) is currently an option for permanently sequestering CO2 in oil reservoirs while economically increasing oil/gas production. In this study we have developed a framework for understanding CO2 storage potential within an EOR-sequestration environment at the Farnsworth Unit of the Anadarko Basin in northern Texas. By coupling an EOR tool, SENSOR (CEI, 2011), with an uncertainty quantification tool, PSUADE (Tong, 2011), we conduct an integrated Monte Carlo simulation of water, oil/gas components and CO2 flow and reactive transport in the heterogeneous Morrow formation to identify the key controlling processes and optimal parameters for CO2 sequestration and EOR. A global sensitivity and response surface analysis is conducted with PSUADE to build numerically the relationship among CO2 injectivity, oil/gas production, reservoir parameters and the distance between injection and production wells. The results indicate that reservoir permeability and porosity are the key parameters controlling the CO2 injection, oil, and gas (CH4) recovery rates. The distance between the injection and production wells has a large impact on oil and gas recovery and net CO2 injection rates. CO2 injectivity increases with increasing reservoir permeability and porosity. The distance between injection and production wells is the key parameter for designing an EOR pattern (such as a five- or nine-spot pattern). The optimal distance for a five-spot-pattern EOR at this site is estimated from the response surface analysis to be around 400 meters. Next, we are building the machinery into our risk assessment framework CO2-PENS to utilize these response surfaces and evaluate the operational risk for CO2 sequestration and EOR at this site.
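The Monte Carlo sensitivity idea can be sketched as follows; this toy uses an invented analytic proxy in place of SENSOR reservoir simulations and a simple correlation measure in place of PSUADE's analyses, so every coefficient below is purely illustrative:

```python
# Toy Monte Carlo sensitivity sketch in the spirit of the PSUADE workflow.
# The proxy model and all coefficients are invented for illustration.
import random, math

random.seed(42)

def proxy_recovery(perm, poro, dist):
    # Invented proxy: recovery rises with permeability and porosity and
    # peaks at an intermediate well spacing (here centered at 400 m).
    return perm * poro * math.exp(-((dist - 400.0) / 250.0) ** 2)

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    nx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    ny = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (nx * ny)

samples = [(random.uniform(1, 100),      # permeability (mD)
            random.uniform(0.05, 0.25),  # porosity (-)
            random.uniform(100, 900))    # well spacing (m)
           for _ in range(2000)]
recovery = [proxy_recovery(*s) for s in samples]

# Crude global sensitivity: |correlation| of each input with the output.
sens = {name: abs(pearson([s[i] for s in samples], recovery))
        for i, name in enumerate(["perm", "poro", "dist"])}

# Crude surrogate for a response-surface optimum: the well spacing of the
# best-performing sample.
best_dist = samples[max(range(len(samples)), key=recovery.__getitem__)][2]
```

A real study would fit an actual response surface (e.g., polynomial regression) rather than taking the best sample, but the structure of the workflow is the same: sample inputs, run the forward model, rank sensitivities, locate the optimum.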
Ge, Xiaoli; Li, Zhaoqiang; Wang, Chengxiang; Yin, Longwei
2015-12-09
Metal-organic frameworks (MOFs) derived porous core/shell ZnO/ZnCo2O4/C hybrids with ZnO as a core and ZnCo2O4 as a shell are for the first time fabricated by using core/shell ZnCo-MOF precursors as reactant templates. The unique MOFs-derived core/shell structured ZnO/ZnCo2O4/C hybrids are assembled from nanoparticles of ZnO and ZnCo2O4, with homogeneous carbon layers coated on the surface of the ZnCo2O4 shell. When acting as anode materials for lithium-ion batteries (LIBs), the MOFs-derived porous ZnO/ZnCo2O4/C anodes exhibit outstanding cycling stability, high Coulombic efficiency, and remarkable rate capability. The excellent electrochemical performance of the ZnO/ZnCo2O4/C LIB anodes can be attributed to the synergistic effect of the porous structure of the MOFs-derived core/shell ZnO/ZnCo2O4/C and homogeneous carbon layer coating on the surface of the ZnCo2O4 shells. The hierarchically porous core/shell structure offers abundant active sites, enhances the electrode/electrolyte contact area, provides abundant channels for electrolyte penetration, and also alleviates the structure decomposition induced by Li(+) insertion/extraction. The carbon layers effectively improve the conductivity of the hybrids and thus enhance the electron transfer rate, efficiently prevent ZnCo2O4 from aggregation and disintegration, and partially buffer the stress induced by the volume change during cycles. This strategy may shed light on designing new MOF-based hybrid electrodes for energy storage and conversion devices.
Computational designing and screening of solid materials for CO2 capture
NASA Astrophysics Data System (ADS)
Duan, Yuhua
In this presentation, we will update our progress on the computational design and screening of solid materials for CO2 capture. By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated at NETL. The advantage of this method is that it identifies the thermodynamic properties of the CO2 capture reaction as a function of temperature and pressure without any experimental input beyond crystallographic structural information of the solid phases involved. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure were further used to evaluate the equilibrium properties of the CO2 adsorption/desorption cycles. According to the requirements imposed by pre- and post-combustion technologies, and based on our calculated thermodynamic properties for the CO2 capture reactions of the solids of interest, we were able to identify only those solid materials for which lower capture energy costs are expected at the desired working conditions. In addition, we present a simulation scheme to raise or lower the turnover temperature (Tt) of a solid's CO2 capture reaction by mixing in other solids. Our results also show that some solid sorbents can serve as bi-functional materials: CO2 sorbent and CO oxidation catalyst. Such dual functionality could be used for removing both CO and CO2 after the water-gas shift to obtain pure H2.
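The turnover-temperature criterion can be made concrete. For a capture reaction MO(s) + CO2(g) <-> MCO3(s), and assuming approximately temperature-independent reaction enthalpy dH and entropy dS, the free energy at CO2 pressure P is dG(T, P) = dH - T*dS - R*T*ln(P/P0), and Tt solves dG = 0. A minimal Python sketch, using approximate textbook values for CaO + CO2 -> CaCO3 (not values from the NETL screening itself):

```python
# Turnover temperature Tt for a solid CO2 capture reaction:
#   MO(s) + CO2(g) <-> MCO3(s)
#   dG(T, P) = dH - T*dS - R*T*ln(P/P0) = 0  at  T = Tt
# Thermochemical inputs below are approximate textbook numbers.
import math

R = 8.314  # gas constant, J/(mol K)

def turnover_temperature(dH, dS, p_co2=1.0, p0=1.0):
    """Tt in kelvin for capture enthalpy dH (J/mol) and entropy dS (J/(mol K))."""
    return dH / (dS + R * math.log(p_co2 / p0))

# CaO + CO2 -> CaCO3 (approximate standard values; both negative,
# since a gas molecule is consumed in an exothermic reaction).
dH = -178.0e3   # J/mol
dS = -160.0     # J/(mol K)

Tt_1bar = turnover_temperature(dH, dS, p_co2=1.0)
# Raising the CO2 partial pressure shifts Tt upward (capture remains
# favorable to higher temperature); lowering it shifts Tt down.
Tt_10bar = turnover_temperature(dH, dS, p_co2=10.0)
```

With these inputs Tt at 1 bar comes out near 1100 K, consistent with the well-known high decomposition temperature of CaCO3; mixing in a second solid, as the abstract describes, effectively changes dH and dS and thereby moves Tt toward the desired working window.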
NASA Astrophysics Data System (ADS)
Prendergast, Mark; O'Donoghue, John
2014-11-01
This research investigates the influence that gender, single-sex and co-educational schooling can have on students' mathematics education in second-level Irish classrooms. Although gender differences in mathematics education have been the subject of research for many years, recent results from PISA (Programme for International Student Assessment) show that there are still marked differences between the achievement and attitude of male and female students in Irish mathematics classrooms. This paper examines the influence of gender in more detail and also investigates the impact of single-sex or co-educational schooling. This is a follow-on study that further analyses data collected by the authors when they designed a pedagogical framework and used it to develop, implement and evaluate a teaching intervention in four second-level Irish schools. The aim of this pedagogical framework was to promote student interest in the topic of algebra through effective teaching of the domain. This paper further analyses the quantitative data collected and investigates whether there were differences in students' enjoyment and achievement scores based on their gender and whether they attended single-sex or co-educational schools.
NASA Astrophysics Data System (ADS)
Boulicaut, Jean-Francois; Jeudy, Baptiste
Knowledge Discovery in Databases (KDD) is a complex interactive process. The promising theoretical framework of inductive databases considers it to be essentially a querying process, enabled by a query language which can deal either with raw data or with the patterns which hold in the data. Mining patterns then becomes the so-called inductive query evaluation process, for which constraint-based Data Mining techniques have to be designed. An inductive query declaratively specifies the desired constraints, and algorithms are used to compute the patterns satisfying those constraints in the data. We survey important results of this active research domain. This chapter emphasizes a real breakthrough for hard problems concerning local pattern mining under various constraints, and it points out current directions of research as well.
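A minimal sketch of constraint-based pattern mining in this inductive-query spirit: a levelwise (Apriori-style) search that pushes the anti-monotone minimum-frequency constraint into candidate generation and applies a user-supplied constraint on the pattern itself. Transactions and items are invented toy data.

```python
# Levelwise frequent-itemset mining with constraint pushing: the
# minimum-frequency constraint is anti-monotone (a superset of an
# infrequent set is infrequent), so each level prunes the next.
from itertools import combinations

transactions = [
    {"beer", "chips", "salsa"},
    {"beer", "chips"},
    {"bread", "butter"},
    {"beer", "bread", "chips"},
]

def freq(itemset):
    """Number of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions)

def mine(min_freq, item_constraint=lambda s: True):
    items = sorted(set().union(*transactions))
    # Level 1: frequent singletons (anti-monotone pruning starts here).
    level = [frozenset([i]) for i in items if freq(frozenset([i])) >= min_freq]
    answers = [s for s in level if item_constraint(s)]
    while level:
        # Candidate generation: join frequent k-sets into (k+1)-sets.
        candidates = {a | b for a, b in combinations(level, 2)
                      if len(a | b) == len(a) + 1}
        level = [c for c in candidates if freq(c) >= min_freq]
        answers += [s for s in level if item_constraint(s)]
    return answers

# Inductive query: itemsets with frequency >= 2 that contain "beer".
result = mine(2, item_constraint=lambda s: "beer" in s)
```

Note the division of labor the chapter discusses: the frequency constraint is pushed into the search (it prunes), while the syntactic "contains beer" constraint is here only checked afterwards; more sophisticated solvers push both kinds of constraints.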
High-Throughput Characterization of Porous Materials Using Graphics Processing Units
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jihan; Martin, Richard L.; Rübel, Oliver
We have developed a high-throughput graphics processing unit (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single-core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than those considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
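The flood-fill/Widom combination can be sketched on a CPU with a toy 2D energy grid (the real code works on 3D grids on the GPU; the grid values and threshold below are invented):

```python
# CPU sketch of the pipeline: flood-fill an energy grid to mark accessible
# pore regions, then average Boltzmann factors over the accessible space
# only (Widom-style).  The 2D grid and threshold are invented toy values.
from collections import deque
import math

E_WALL = 1e3  # kJ/mol: high energy marks framework atoms/walls
grid = [
    [E_WALL, E_WALL, E_WALL, E_WALL, E_WALL],
    [E_WALL, -8.0,   -6.0,   E_WALL, E_WALL],
    [E_WALL, -7.0,   E_WALL, -9.0,   E_WALL],  # the -9.0 pocket is walled off
    [E_WALL, -5.0,   E_WALL, E_WALL, E_WALL],
    [E_WALL, E_WALL, E_WALL, E_WALL, E_WALL],
]
THRESHOLD = 15.0  # kJ/mol: cells above this are impassable

def flood_accessible(grid, seed):
    """BFS from a known accessible seed cell; returns reachable cells."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and grid[nr][nc] <= THRESHOLD):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

accessible = flood_accessible(grid, (1, 1))

# Widom-style average restricted to accessible space: blocked pockets
# such as the -9.0 cell would otherwise inflate the estimate.
RT = 8.314e-3 * 298.0  # kJ/mol at 298 K
boltz = [math.exp(-grid[r][c] / RT) for r, c in accessible]
henry_like = sum(boltz) / len(boltz)
```

Blocking the isolated low-energy pocket is exactly the point of the flood fill: a guest molecule can never reach it, so including it would bias the Henry coefficient upward.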
Modeling Geomagnetic Variations using a Machine Learning Framework
NASA Astrophysics Data System (ADS)
Cheung, C. M. M.; Handmer, C.; Kosar, B.; Gerules, G.; Poduval, B.; Mackintosh, G.; Munoz-Jaramillo, A.; Bobra, M.; Hernandez, T.; McGranaghan, R. M.
2017-12-01
We present a framework for data-driven modeling of Heliophysics time series data. The Solar Terrestrial Interaction Neural net Generator (STING) is an open source python module built on top of state-of-the-art statistical learning frameworks (traditional machine learning methods as well as deep learning). To showcase the capability of STING, we deploy it for the problem of predicting the temporal variation of geomagnetic fields. The data used include solar wind measurements from the OMNI database and geomagnetic field data taken by magnetometers at US Geological Survey observatories. We examine the predictive capability of different machine learning techniques (recurrent neural networks, support vector machines) for a range of forecasting times (minutes to 12 hours). STING is designed to be extensible to other types of data. We show how STING can be used on large sets of data from different sensors/observatories and adapted to tackle other problems in Heliophysics.
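A minimal stand-in for this kind of forecasting workflow: build lagged features from a time series and fit an autoregressive model. Here a hand-rolled AR(2) fit by ordinary least squares on a synthetic signal replaces STING's actual scikit-learn and deep-learning backends, and the signal itself is invented rather than OMNI/magnetometer data.

```python
# Tiny time-series forecasting sketch: lagged features + AR(2) model fit
# by ordinary least squares, compared against a persistence baseline.
import math

# Synthetic quasi-periodic signal standing in for geomagnetic data.
series = [math.sin(0.1 * t) + 0.3 * math.sin(0.37 * t) for t in range(300)]

def lagged(series, n_lags):
    """Feature matrix of the last n_lags values, target = next value."""
    X = [series[t - n_lags:t] for t in range(n_lags, len(series))]
    y = series[n_lags:]
    return X, y

def fit_ar2(X, y):
    # Normal equations for two coefficients, no intercept:
    # solve (X^T X) w = X^T y for w = (w1, w2).
    s11 = sum(x[0] * x[0] for x in X)
    s12 = sum(x[0] * x[1] for x in X)
    s22 = sum(x[1] * x[1] for x in X)
    b1 = sum(x[0] * yi for x, yi in zip(X, y))
    b2 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

def rmse(pred, truth):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

X, y = lagged(series, 2)
w1, w2 = fit_ar2(X, y)

ar_pred = [w1 * x[0] + w2 * x[1] for x in X]
persistence = [x[1] for x in X]          # naive forecast: repeat last value
ar_err, naive_err = rmse(ar_pred, y), rmse(persistence, y)
```

Persistence corresponds to weights (0, 1), which lie inside the AR(2) model class, so the least-squares fit can never do worse on the training data; beating persistence on held-out data is the actual benchmark a framework like STING must clear.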
Argobots: A Lightweight Low-Level Threading and Tasking Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan
In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, are either too specific to applications or architectures, or are not sufficiently powerful or flexible. In this paper, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with a rich set of controls to allow specialization by the user or high-level programming model. We describe the design, implementation, and optimization of Argobots and present integrations with three example high-level models: OpenMP, MPI, and a co-located I/O service. Evaluations show that (1) Argobots outperforms existing generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency-hiding capabilities; and (4) the I/O service with Argobots reduces interference with co-located applications, achieving performance competitive with that of the Pthreads version.
Wu, Shao-Min; Liu, Hsuan; Huang, Po-Jung; Chang, Ian Yi-Feng; Lee, Chi-Ching; Yang, Chia-Yu; Tsai, Wen-Sy; Tan, Bertrand Chin-Ming
2018-01-01
Despite their lack of protein-coding potential, long noncoding RNAs (lncRNAs) and circular RNAs (circRNAs) have emerged as key determinants in gene regulation, acting to fine-tune transcriptional and signaling output. These noncoding RNA transcripts are known to affect expression of messenger RNAs (mRNAs) via epigenetic and post-transcriptional regulation. Given their widespread target spectrum, as well as extensive modes of action, a complete understanding of their biological relevance will depend on integrative analyses of systems data at various levels. While a handful of publicly available databases have been reported, existing tools do not fully capture, from a network perspective, the functional implications of lncRNAs or circRNAs of interest. Through an integrated and streamlined design, circlncRNAnet aims to broaden the understanding of ncRNA candidates by testing in silico several hypotheses of ncRNA-based functions, on the basis of large-scale RNA-seq data. This web server is implemented with several features that represent advances in the bioinformatics of ncRNAs: (1) a flexible framework that accepts and processes user-defined next-generation sequencing-based expression data; (2) multiple analytic modules that assign and productively assess the regulatory networks of user-selected ncRNAs by cross-referencing extensively curated databases; (3) an all-purpose, information-rich workflow design that is tailored to all types of ncRNAs. Outputs on expression profiles, co-expression networks and pathways, and molecular interactomes, are dynamically and interactively displayed according to user-defined criteria. In short, users may apply circlncRNAnet to obtain, in real time, multiple lines of functionally relevant information on circRNAs/lncRNAs of their interest. In summary, circlncRNAnet provides a "one-stop" resource for in-depth analyses of ncRNA biology. circlncRNAnet is freely available at http://app.cgu.edu.tw/circlnc/. © The Authors 2017. 
Published by Oxford University Press.
The WRKY transcription factor family in Brachypodium distachyon.
Tripathi, Prateek; Rabara, Roel C; Langum, Tanner J; Boken, Ashley K; Rushton, Deena L; Boomsma, Darius D; Rinerson, Charles I; Rabara, Jennifer; Reese, R Neil; Chen, Xianfeng; Rohila, Jai S; Rushton, Paul J
2012-06-22
A complete assembled genome sequence of wheat is not yet available. Therefore, model plant systems for wheat are very valuable. Brachypodium distachyon (Brachypodium) is such a system. The WRKY family of transcription factors is one of the most important families of plant transcriptional regulators with members regulating important agronomic traits. Studies of WRKY transcription factors in Brachypodium and wheat therefore promise to lead to new strategies for wheat improvement. We have identified and manually curated the WRKY transcription factor family from Brachypodium using a pipeline designed to identify all potential WRKY genes. 86 WRKY transcription factors were found, a total higher than in any other current database. We therefore propose that our numbering system (BdWRKY1-BdWRKY86) becomes the standard nomenclature. In the JGI v1.0 assembly of Brachypodium with the MIPS/JGI v1.0 annotation, nine of the transcription factors have no gene model and eleven gene models are probably incorrectly predicted. In total, twenty WRKY transcription factors (23.3%) do not appear to have accurate gene models. To facilitate use of our data, we have produced The Database of Brachypodium distachyon WRKY Transcription Factors. Each WRKY transcription factor has a gene page that includes predicted protein domains from MEME analyses. These conserved protein domains reflect possible input and output domains in signaling. The database also contains a BLAST search function where a large dataset of WRKY transcription factors, published genes, and an extensive set of wheat ESTs can be searched. We also produced a phylogram containing the WRKY transcription factor families from Brachypodium, rice, Arabidopsis, soybean, and Physcomitrella patens, together with published WRKY transcription factors from wheat. This phylogenetic tree provides evidence for orthologues, co-orthologues, and paralogues of Brachypodium WRKY transcription factors. 
The description of the WRKY transcription factor family in Brachypodium that we report here provides a framework for functional genomics studies in an important model system. Our database is a resource for both Brachypodium and wheat studies and ultimately projects aimed at improving wheat through manipulation of WRKY transcription factors.
The WRKY transcription factor family in Brachypodium distachyon
2012-01-01
PMID:22726208
Martin, Richard L; Simon, Cory M; Smit, Berend; Haranczyk, Maciej
2014-04-02
Porous polymer networks (PPNs) are a class of advanced porous materials that combine the advantages of cheap and stable polymers with the high surface areas and tunable chemistry of metal-organic frameworks. They are of particular interest for gas separation or storage applications, for instance, as methane adsorbents for a vehicular natural gas tank or other portable applications. PPNs are self-assembled from distinct building units; here, we utilize commercially available chemical fragments and two experimentally known synthetic routes to design in silico a large database of synthetically realistic PPN materials. All structures from our database of 18,000 materials have been relaxed with semiempirical electronic structure methods and characterized with Grand-canonical Monte Carlo simulations for methane uptake and deliverable (working) capacity. A number of novel structure-property relationships that govern methane storage performance were identified. The relationships are translated into experimental guidelines to realize the ideal PPN structure. We found that cooperative methane-methane attractions were present in all of the best-performing materials, highlighting the importance of guest interaction in the design of optimal materials for methane storage.
DESHARKY: automatic design of metabolic pathways for optimal cell growth.
Rodrigo, Guillermo; Carrera, Javier; Prather, Kristala Jones; Jaramillo, Alfonso
2008-11-01
The biological route to synthesis or remediation of organic compounds using living organisms, particularly bacteria and yeast, has been promoted because it reduces costs relative to the non-biological chemical approach. Computational frameworks can profit from the prior knowledge stored in large databases of compounds, enzymes and reactions, and cell behavior can be studied by modeling the cellular context. We have implemented a Monte Carlo algorithm (DESHARKY) that finds a metabolic pathway from a target compound by exploring a database of enzymatic reactions. DESHARKY outputs a biochemical route to the host metabolism together with its impact on the cellular context, using mathematical models of the cell's resources and metabolism. Furthermore, we provide the amino acid sequences for the enzymes involved in the route that are phylogenetically closest to the considered organism. We provide examples of designed metabolic pathways with their genetic load characterizations. Here, we have used Escherichia coli as the host organism. Our bioinformatic tool can be applied to biodegradation or biosynthesis, and its performance scales with the database size. Software, a tutorial and examples are freely available and open source at http://soft.synth-bio.org/desharky.html
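The pathway search can be caricatured as a Monte Carlo walk over a reaction database: start at the target compound, repeatedly pick random enzymatic steps until a host metabolite is reached, and keep the shortest route found. The compounds, enzymes, and reactions below are invented placeholders, and the real DESHARKY additionally scores routes by their burden on the cellular context.

```python
# Toy Monte Carlo pathway search: random walks from a target compound
# back to host (E. coli) metabolites over an invented reaction database.
import random

random.seed(7)

# Reaction database: product -> list of (substrate, enzyme) single steps.
reactions = {
    "target": [("intermediate_a", "enzA"), ("intermediate_b", "enzB")],
    "intermediate_a": [("pyruvate", "enzC")],
    "intermediate_b": [("intermediate_c", "enzD")],
    "intermediate_c": [("acetyl-CoA", "enzE")],
}
host_metabolites = {"pyruvate", "acetyl-CoA"}

def sample_route(target, max_steps=10):
    """One random walk; returns the enzyme list, or None on failure."""
    route, compound = [], target
    for _ in range(max_steps):
        if compound in host_metabolites:
            return route
        if compound not in reactions:
            return None                        # dead end
        substrate, enzyme = random.choice(reactions[compound])
        route.append(enzyme)
        compound = substrate
    return None                                # too long

def monte_carlo_search(target, n_trials=200):
    """Keep the shortest successful route over many random walks."""
    best = None
    for _ in range(n_trials):
        route = sample_route(target)
        if route is not None and (best is None or len(route) < len(best)):
            best = route
    return best

best_route = monte_carlo_search("target")
```

On this toy database the two-step route through intermediate_a wins over the three-step route through intermediate_b; DESHARKY's objective is richer (genetic load, not just length), but the stochastic exploration of a reaction database is the same idea.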
Toward a public analysis database for LHC new physics searches using MadAnalysis 5
NASA Astrophysics Data System (ADS)
Dumont, B.; Fuks, B.; Kraml, S.; Bein, S.; Chalons, G.; Conte, E.; Kulkarni, S.; Sengupta, D.; Wymant, C.
2015-02-01
We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.
OWLing Clinical Data Repositories With the Ontology Web Language
Pastor, Xavier; Lozano, Esther
2014-01-01
Background The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. Objective The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. Methods We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Results Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. Conclusions OntoCRF is a complete framework to build data repositories with a solid relational storage. 
Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems and does not require highly skilled technical personnel, thus facilitating the engineering of clinical software systems. PMID:25599697
OWLing Clinical Data Repositories With the Ontology Web Language.
Lozano-Rubí, Raimundo; Pastor, Xavier; Lozano, Esther
2014-08-01
The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. OntoCRF is a complete framework to build data repositories with a solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems and does not require highly skilled technical personnel, thus facilitating the engineering of clinical software systems.
Rice, Michael; Gladstone, William; Weir, Michael
2004-01-01
We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a custom algorithm using Drosophila cDNA transcripts and genomic DNA and supports a set of procedures for analyzing splice-site sequence space. A generic Web interface permits the execution of the procedures with a variety of parameter settings and also supports custom structured query language queries. Moreover, new analytical procedures can be added by updating special metatables in the database without altering the Web interface. The database provides a powerful setting for students to develop informatic thinking skills.
PMID:15592597
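The metatable mechanism described above, where analytical procedures live as rows of data in the database rather than as application code, can be sketched with SQLite. Table and column names here are invented for illustration and are not the Wesleyan schema:

```python
import sqlite3

# Toy schema: splice sites plus a metatable of named analysis procedures.
# The web layer can list and execute metatable rows, so new analyses can be
# added by INSERTing a row, without changing the interface code.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE splice_site (id INTEGER PRIMARY KEY, kind TEXT, sequence TEXT);
CREATE TABLE meta_procedure (name TEXT, sql TEXT);
INSERT INTO splice_site (kind, sequence) VALUES
  ('donor', 'GTAAGT'), ('donor', 'GTGAGT'), ('acceptor', 'TTTCAG');
INSERT INTO meta_procedure VALUES
  ('count_by_kind', 'SELECT kind, COUNT(*) FROM splice_site GROUP BY kind');
""")

# Look up a stored procedure definition by name and run it:
name, sql = db.execute("SELECT name, sql FROM meta_procedure").fetchone()
rows = db.execute(sql).fetchall()
```

Parameterized procedures would store placeholder markers in the stored SQL and bind user-supplied values at execution time, which is also what keeps the generic interface safe against injection.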
Chen, Di-Ming; Tian, Jia-Yue; Chen, Min; Liu, Chun-Sen; Du, Miao
2016-07-20
A moisture-stable three-dimensional (3D) metal-organic framework (MOF), {(Me2NH2)[Zn2(bpydb)2(ATZ)](DMA)(NMF)2}n (1, where bpydb = 4,4'-(4,4'-bipyridine-2,6-diyl)dibenzoate, ATZ = deprotonated 5-aminotetrazole, DMA = N,N-dimethylacetamide, and NMF = N-methylformamide), with uncoordinated N-donor sites and a charged framework skeleton was fabricated. This MOF exhibits interesting structural dynamics upon CO2 sorption at 195 K and high CO2/N2 (127) and CO2/CH4 (131) sorption selectivity at 298 K and 1 bar. In particular, its CO2/CH4 selectivity is among the highest reported for MOFs for selective CO2 separation. The results of Grand Canonical Monte Carlo (GCMC) simulation indicate that the polar framework contributes to the strong framework-CO2 binding at zero loading, and the tetrazole pillar contributes to the high CO2 uptake capacity at high loading. Furthermore, the solvent-responsive luminescent properties of 1 indicate that it could be utilized as a fluorescent sensor to detect trace amounts of nitrobenzene in both solvent and vapor systems.
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.; Leung, L. R.
2013-11-01
Physical and biogeochemical processes regulate soil carbon dynamics and CO2 flux to and from the atmosphere, influencing global climate changes. Integration of these processes into Earth system models (e.g., community land models (CLMs)), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and to rewrite computer programs to incorporate new or updated processes as new knowledge is being generated, (2) computational cost is prohibitively expensive to simulate biogeochemical processes in land models due to large variations in the rates of biogeochemical processes, and (3) various mathematical representations of biogeochemical processes exist to incorporate different aspects of fundamental mechanisms, but systematic evaluation of the different mathematical representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module, Next Generation BioGeoChemical Module (NGBGC), version 1.0, with a generic algorithm and reaction database so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database consists of processes of nutrient flow through the terrestrial ecosystems in plants, litter, and soil. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into CLM. 
To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems. The method presented here could in theory be applied to simulate biogeochemical cycles in other Earth system models.
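The generic-algorithm-plus-reaction-database idea, assembling the ODE right-hand side mechanically from reaction records so that a new process needs no hand-written equations, can be sketched as follows. Pool names, rates and the first-order kinetics are invented for illustration, not NGBGC's actual reaction database:

```python
# Hypothetical reaction database: each entry moves carbon between pools
# at a first-order rate. Adding a process means adding a record, not code.
REACTIONS = [
    {"from": "litter", "to": "soil", "rate": 0.10},
    {"from": "soil",   "to": "co2",  "rate": 0.02},
]

def rhs(pools):
    """d(pool)/dt assembled generically from the reaction database."""
    d = {p: 0.0 for p in pools}
    for r in REACTIONS:
        flux = r["rate"] * pools[r["from"]]
        d[r["from"]] -= flux
        d[r["to"]] += flux
    return d

# Usage sketch: a few forward-Euler steps (a real model would use a stiff
# ODE solver, since biogeochemical rates span orders of magnitude).
pools = {"litter": 100.0, "soil": 50.0, "co2": 0.0}
for _ in range(10):
    d = rhs(pools)
    pools = {p: v + d[p] for p, v in pools.items()}
```

Because every flux leaves one pool and enters another, total carbon is conserved by construction, a property that falls out of the generic assembly rather than having to be checked equation by equation.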
Co-Creating Quality in Health Care Through Learning and Dissemination.
Holmboe, Eric S; Foster, Tina C; Ogrinc, Greg
2016-01-01
For most of the 20th century the predominant focus of medical education across the professional continuum was the dissemination and acquisition of medical knowledge and procedural skills. Today it is now clear that new areas of focus, such as interprofessional teamwork, care coordination, quality improvement, system science, health information technology, patient safety, assessment of clinical practice, and effective use of clinical decision supports are essential to 21st century medical practice. These areas of need helped to spawn an intense interest in competency-based models of professional education at the turn of this century. However, many of today's practicing health professionals were never educated in these newer competencies during their own training. Co-production and co-creation of learning among interprofessional health care professionals across the continuum can help close the gap in acquiring needed competencies for health care today and tomorrow. Co-learning may be a particularly effective strategy to help organizations achieve the triple aim of better population health, better health care, and lower costs. Structured frameworks, such as the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines, provide guidance in the design, planning, and dissemination of interventions designed to improve care through co-production and co-learning strategies.
SISYPHUS: A high performance seismic inversion factory
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas
2016-04-01
In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring.
The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; the work is in progress for interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).
Carolus, Marshall; Biglarbigi, Khosrow; Warwick, Peter D.; Attanasi, Emil D.; Freeman, Philip A.; Lohr, Celeste D.
2017-10-24
A database called the “Comprehensive Resource Database” (CRD) was prepared to support U.S. Geological Survey (USGS) assessments of technically recoverable hydrocarbons that might result from the injection of miscible or immiscible carbon dioxide (CO2) for enhanced oil recovery (EOR). The CRD was designed by INTEK Inc., a consulting company under contract to the USGS. The CRD contains data on the location, key petrophysical properties, production, and well counts (number of wells) for the major oil and gas reservoirs in onshore areas and State waters of the conterminous United States and Alaska. The CRD includes proprietary data on petrophysical properties of fields and reservoirs from the “Significant Oil and Gas Fields of the United States Database,” prepared by Nehring Associates in 2012, and proprietary production and drilling data from the “Petroleum Information Data Model Relational U.S. Well Data,” prepared by IHS Inc. in 2012. This report describes the CRD and the computer algorithms used to (1) estimate missing reservoir property values in the Nehring Associates (2012) database, and to (2) generate values of additional properties used to characterize reservoirs suitable for miscible or immiscible CO2 flooding for EOR. Because of the proprietary nature of the data and contractual obligations, the CRD and actual data from Nehring Associates (2012) and IHS Inc. (2012) cannot be presented in this report.
Towards Citizen Co-Created Public Service Apps.
Emaldi, Mikel; Aguilera, Unai; López-de-Ipiña, Diego; Pérez-Velasco, Jorge
2017-06-02
The WeLive project's main objective is to transform the current e-government approach through a new open model oriented towards the design, production and deployment of public services and mobile apps based on the collaboration of different stakeholders. These stakeholders form the quadruple helix, i.e., citizens, private companies, research institutes and public administrations. Through the application of the open innovation, open data and open services paradigms, the framework developed within the WeLive project enables the co-creation of urban apps. In this paper, we extend the previously presented description of the WeLive platform and report the preliminary results of the first pilot phase. The two-phase evaluation methodology and the evaluation results of the first pilot sub-phase are also presented.
ERIC Educational Resources Information Center
Ingram, Julie; Maye, Damian; Kirwan, James; Curry, Nigel; Kubinakova, Katarina
2014-01-01
Purpose: This article utilizes the Communities of Practice (CoP) framework to examine learning processes among a group of permaculture practitioners in England, specifically examining the balance between core practices and boundary processes. Design/methodology/approach: The empirical basis of the article derives from three participatory workshops…
Sethi, Kalyan K; Verma, Saurabh M
2014-08-01
Drug design involves the design of small molecules that are complementary in shape and charge to the biomolecular target with which they interact and will therefore bind to it. Three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were performed for a series of carbonic anhydrase IX inhibitors using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) techniques with the help of SYBYL 7.1 software. A large set of 36 different aromatic/heterocyclic sulfamate inhibitors of carbonic anhydrase (CA, EC 4.2.1.1), targeting hCA IX, was chosen for this study. The conventional ligand-based 3D-QSAR studies were performed on the low-energy conformations employing a database alignment rule. The ligand-based model gave q(2) values of 0.802 and 0.829 and r(2) values of 1.000 and 0.994 for CoMFA and CoMSIA, respectively, and the predictive ability of the model was validated. The predicted r(2) values are 0.999 and 0.502 for CoMFA and CoMSIA, respectively. The steric, electrostatic and hydrogen-bond acceptor (SEA) fields of CoMSIA contributed most significantly to model development. The docking of inhibitors into the hCA IX active site using Glide XP (Schrödinger) software revealed the vital interactions and binding conformations of the inhibitors. The CoMFA and CoMSIA field contour maps agree well with the structural characteristics of the binding pocket of the hCA IX active site, which suggests that the information rendered by the 3D-QSAR models and the docking interactions can provide guidelines for the development of improved hCA IX inhibitors as leads for various types of metastatic cancers, including those of cervical, renal, breast and head and neck origin.
Gini, Rosa; Schuemie, Martijn; Brown, Jeffrey; Ryan, Patrick; Vacchi, Edoardo; Coppola, Massimo; Cazzola, Walter; Coloma, Preciosa; Berni, Roberto; Diallo, Gayo; Oliveira, José Luis; Avillach, Paul; Trifirò, Gianluca; Rijnbeek, Peter; Bellentani, Mariadonata; van Der Lei, Johan; Klazinga, Niek; Sturkenboom, Miriam
2016-01-01
Introduction: We see increased use of existing observational data in order to achieve fast and transparent production of empirical evidence in health care research. Multiple databases are often used to increase power, to assess rare exposures or outcomes, or to study diverse populations. For privacy and sociological reasons, original data on individual subjects can’t be shared, requiring a distributed network approach where data processing is performed prior to data sharing. Case Descriptions and Variation Among Sites: We created a conceptual framework distinguishing three steps in local data processing: (1) data reorganization into a data structure common across the network; (2) derivation of study variables not present in original data; and (3) application of study design to transform longitudinal data into aggregated data sets for statistical analysis. We applied this framework to four case studies to identify similarities and differences in the United States and Europe: Exploring and Understanding Adverse Drug Reactions by Integrative Mining of Clinical Records and Biomedical Knowledge (EU-ADR), Observational Medical Outcomes Partnership (OMOP), the Food and Drug Administration’s (FDA’s) Mini-Sentinel, and the Italian network—the Integration of Content Management Information on the Territory of Patients with Complex Diseases or with Chronic Conditions (MATRICE). Findings: National networks (OMOP, Mini-Sentinel, MATRICE) all adopted shared procedures for local data reorganization. The multinational EU-ADR network needed locally defined procedures to reorganize its heterogeneous data into a common structure. Derivation of new data elements was centrally defined in all networks but the procedure was not shared in EU-ADR. Application of study design was a common and shared procedure in all the case studies. Computer procedures were embodied in different programming languages, including SAS, R, SQL, Java, and C++. 
Conclusion: Using our conceptual framework we found several areas that would benefit from research to identify optimal standards for production of empirical knowledge from existing databases. PMID:27014709
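The three local processing steps that the framework distinguishes, (1) reorganization into a common structure, (2) derivation of study variables, and (3) aggregation so that only summary data leave the site, can be sketched as follows. The record fields and the study question are invented for illustration and do not correspond to any of the four networks' actual data models:

```python
# Toy local dataset at one site (only aggregates may be shared externally).
raw = [
    {"pid": 1, "birth_year": 1950, "drug": "A"},
    {"pid": 2, "birth_year": 1980, "drug": "A"},
]

def reorganize(rec):
    """Step 1: map the site's native fields onto the common data model."""
    return {"person_id": rec["pid"], "birth_year": rec["birth_year"],
            "exposure": rec["drug"]}

def derive(rec, study_year=2016):
    """Step 2: derive study variables not present in the original data."""
    rec["age"] = study_year - rec["birth_year"]
    return rec

def aggregate(records):
    """Step 3: apply the study design; only aggregated counts leave the site."""
    exposed = [r for r in records if r["exposure"] == "A"]
    return {"n_exposed": len(exposed),
            "mean_age": sum(r["age"] for r in exposed) / len(exposed)}

summary = aggregate([derive(reorganize(r)) for r in raw])
```

The privacy property follows from the pipeline shape: individual-level rows exist only inside steps 1 and 2, and the coordinating centre ever sees only the output of step 3.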
Assessment on EXPERT Descent and Landing System Aerodynamics
NASA Astrophysics Data System (ADS)
Wong, H.; Muylaert, J.; Northey, D.; Riley, D.
2009-01-01
EXPERT is a re-entry vehicle designed for validation of aero-thermodynamic models, numerical schemes in Computational Fluid Dynamics codes, and test facilities for measuring flight data under an Earth re-entry environment. This paper addresses the design of the descent and landing sequence for EXPERT. It includes the descent sequence, the choice of drogue and main parachutes, and the parachute deployment condition, which can be supersonic or subsonic. The analysis is based mainly on an engineering tool, PASDA, together with some hand calculations for parachute sizing and design. The tool consists of a detailed 6-DoF simulation performed with the aerodynamics database of the vehicle, an empirical wake model and the International Standard Atmosphere database. The aerodynamics database for the vehicle is generated from DNW experimental data and CFD codes within the framework of an ESA contract to CIRA. The analysis is presented in terms of altitude, velocity, accelerations, angle-of-attack, pitch angle and rigging line angle. The advantages and disadvantages of each parachute deployment condition are discussed, together with a comparison with the available data, based on a Monte Carlo method, from a Russian company, FSUE NIIPS. The performance of EXPERT is shown to be strongly sensitive to wind speed. Supersonic deployment of the drogue shows better stability at the expense of a larger G-load than subsonic deployment. Further optimization of the parachute design is necessary in order to fulfill all the EXPERT specifications.
OpenElectrophy: An Electrophysiological Data- and Analysis-Sharing Framework
Garcia, Samuel; Fourcaud-Trocmé, Nicolas
2008-01-01
Progress in experimental tools and design is allowing the acquisition of increasingly large datasets. Storage, manipulation and efficient analyses of such large amounts of data is now a primary issue. We present OpenElectrophy, an electrophysiological data- and analysis-sharing framework developed to fill this niche. It stores all experiment data and meta-data in a single central MySQL database, and provides a graphic user interface to visualize and explore the data, and a library of functions for user analysis scripting in Python. It implements multiple spike-sorting methods, and oscillation detection based on the ridge extraction methods due to Roux et al. (2007). OpenElectrophy is open source and is freely available for download at http://neuralensemble.org/trac/OpenElectrophy. PMID:19521545
Ruffier, Magali; Kähäri, Andreas; Komorowska, Monika; Keenan, Stephen; Laird, Matthew; Longden, Ian; Proctor, Glenn; Searle, Steve; Staines, Daniel; Taylor, Kieron; Vullo, Alessandro; Yates, Andrew; Zerbino, Daniel; Flicek, Paul
2017-01-01
The Ensembl software resources are a stable infrastructure to store, access and manipulate genome assemblies and their functional annotations. The Ensembl 'Core' database and Application Programming Interface (API) was our first major piece of software infrastructure and remains at the centre of all of our genome resources. Since its initial design more than fifteen years ago, the number of publicly available genomic, transcriptomic and proteomic datasets has grown enormously, accelerated by continuous advances in DNA-sequencing technology. Initially intended to provide annotation for the reference human genome, we have extended our framework to support the genomes of all species as well as richer assembly models. Cross-referenced links to other informatics resources facilitate searching our database with a variety of popular identifiers such as UniProt and RefSeq. Our comprehensive and robust framework storing a large diversity of genome annotations in one location serves as a platform for other groups to generate and maintain their own tailored annotation. We welcome reuse and contributions: our databases and APIs are publicly available, all of our source code is released with a permissive Apache v2.0 licence at http://github.com/Ensembl and we have an active developer mailing list ( http://www.ensembl.org/info/about/contact/index.html ). http://www.ensembl.org. © The Author(s) 2017. Published by Oxford University Press.
NIST Photoionization of CO2 (ARPES) Database
National Institute of Standards and Technology Data Gateway
SRD 119 NIST Photoionization of CO2 (ARPES) Database (Web, free access) CO2 is studied using dispersed synchrotron radiation in the 650 Å to 850 Å spectral region. The vibrationally resolved photoelectron spectra are analyzed to generate relative vibrational transition amplitudes and the angular asymmetry parameters describing the various transitions observed.
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Spinhime, James D.; Berkoff, Timothy A.; Holben, Brent; Tsay, Si-Chee; Bucholtz, Anthony
2004-01-01
Backscatter lidar signals are a function of both backscatter and extinction. Hence, these lidar observations alone cannot separate the two quantities. The aerosol extinction-to-backscatter ratio, S, is the key parameter required to accurately retrieve extinction and optical depth from backscatter lidar observations of aerosol layers. S is commonly defined as 4*pi divided by the product of the single scatter albedo and the phase function at 180-degree scattering angle. Values of S for different aerosol types are not well known, and are even more difficult to determine when aerosols become mixed. Here we present a new lidar-sunphotometer S database derived from observations of the NASA Micro-Pulse Lidar Network (MPLNET). MPLNET is a growing worldwide network of eye-safe backscatter lidars co-located with sunphotometers in the NASA Aerosol Robotic Network (AERONET). Values of S for different aerosol species and geographic regions will be presented. A framework for constructing an S look-up table will be shown. Look-up tables of S are needed to calculate aerosol extinction and optical depth from space-based lidar observations in the absence of co-located AOD data. Applications for using the new S look-up table to reprocess aerosol products from NASA's Geoscience Laser Altimeter System (GLAS) will be discussed.
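The definition of S quoted above translates directly into a one-line computation. The numerical inputs below are illustrative only, not values from the MPLNET database:

```python
import math

def lidar_ratio(single_scatter_albedo, phase_180):
    """Extinction-to-backscatter ratio S [sr], per the definition above:
    S = 4*pi / (single-scatter albedo * phase function at 180 degrees)."""
    return 4.0 * math.pi / (single_scatter_albedo * phase_180)

# Illustrative aerosol values (dimensionless albedo, phase function in sr^-1):
S = lidar_ratio(0.95, 0.25)  # approx. 52.9 sr
```

Since both the single-scatter albedo and the 180-degree phase function depend on particle size, shape and composition, S varies by aerosol type, which is exactly why a type- and region-resolved look-up table is needed for spaceborne retrievals.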
Advanced SPARQL querying in small molecule databases.
Galgonek, Jakub; Hurt, Tomáš; Michlíková, Vendula; Onderka, Petr; Schwarz, Jan; Vondrášek, Jiří
2016-01-01
In recent years, the Resource Description Framework (RDF) and the SPARQL query language have become more widely used in the area of cheminformatics and bioinformatics databases. These technologies allow better interoperability of various data sources and powerful searching facilities. However, we identified several deficiencies that make usage of such RDF databases restrictive or challenging for common users. We extended a SPARQL engine to be able to use special procedures inside SPARQL queries. This allows the user to work with data that cannot be simply precomputed and thus cannot be directly stored in the database. We designed an algorithm that checks a query against data ontology to identify possible user errors. This greatly improves query debugging. We also introduced an approach to visualize retrieved data in a user-friendly way, based on templates describing visualizations of resource classes. To integrate all of our approaches, we developed a simple web application. Our system was implemented successfully, and we demonstrated its usability on the ChEBI database transformed into RDF form. To demonstrate procedure call functions, we employed compound similarity searching based on OrChem. The application is publicly available at https://bioinfo.uochb.cas.cz/projects/chemRDF.
The cost of getting CCS wrong: Uncertainty, infrastructure design, and stranded CO2
Middleton, Richard Stephen; Yaw, Sean Patrick
2018-01-11
Carbon capture and storage (CCS) infrastructure will require industry—such as fossil-fuel power, ethanol production, and oil and gas extraction—to make massive investments in infrastructure. The cost of getting these investments wrong will be substantial and will impact the success of CCS technology. Multiple factors can and will impact the success of commercial-scale CCS, including significant uncertainties regarding capture, transport, and injection-storage decisions. Uncertainties throughout the CCS supply chain include policy, technology, engineering performance, economics, and market forces. In particular, large uncertainties exist for the injection and storage of CO2. Even taking into account upfront investment in site characterization, the final performance of the storage phase is largely unknown until commercial-scale injection has started. We explore and quantify the impact of getting CCS infrastructure decisions wrong based on uncertain injection rates and uncertain CO2 storage capacities, using a case study managing CO2 emissions from the Canadian oil sands industry in Alberta. We use SimCCS, a widely used CCS infrastructure design framework, to develop multiple CCS infrastructure scenarios. Each scenario consists of a CCS infrastructure network that connects CO2 sources (oil sands extraction and processing) with CO2 storage reservoirs (acid gas storage reservoirs) using a dedicated CO2 pipeline network. Each scenario is analyzed under a range of uncertain storage estimates, and infrastructure performance is assessed and quantified in terms of the cost to build additional infrastructure to store all CO2. We also include the role of stranded CO2: CO2 that a source was expecting to capture but cannot, due to substandard performance in the transport and storage infrastructure.
Results show that the cost of getting the original infrastructure design wrong are significant and that comprehensive planning will be required to ensure that CCS becomes a successful climate mitigation technology. Here, we show that the concept of stranded CO 2 can transform a seemingly high-performing infrastructure design into the worst case scenario.« less
Pristine Metal-Organic Frameworks and their Composites for Energy Storage and Conversion.
Liang, Zibin; Qu, Chong; Guo, Wenhan; Zou, Ruqiang; Xu, Qiang
2017-11-22
Metal-organic frameworks (MOFs), a new class of crystalline porous organic-inorganic hybrid materials, have recently attracted increasing interest in the field of energy storage and conversion. Herein, recent progress of MOFs and MOF composites for energy storage and conversion applications, including photochemical and electrochemical fuel production (hydrogen production and CO2 reduction), water oxidation, supercapacitors, and Li-based batteries (Li-ion, Li-S, and Li-O2 batteries), is summarized. Typical development strategies (e.g., incorporation of active components, design of smart morphologies, and judicious selection of organic linkers and metal nodes) of MOFs and MOF composites for particular energy storage and conversion applications are highlighted. A broad overview of recent progress is provided, which will hopefully promote the future development of MOFs and MOF composites for advanced energy storage and conversion applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Huo, Yajiao; Peng, Xianyun; Liu, Xijun; Li, Huaiyu; Luo, Jun
2018-04-18
Converting carbon dioxide to useful C2 chemicals in a selective and efficient manner remains a major challenge in renewable and sustainable energy research. Herein, we adopt butterfly wings to assist the preparation of an electrocatalyst containing monodispersed Cu particles supported by nitrogen-doped carbon frameworks for an efficient reduction of CO2. Benefiting from structural advantages and the synergistic effect between nitrogen dopants and stepped surface-rich Cu particles, the resulting catalyst exhibited a high faradaic efficiency of 63.7 ± 1.4% for ethylene production (corresponding to an ethylene/methane product ratio of 57.9 ± 5.4) and excellent durability (∼100% retention after 24 h). This work presents some guidelines for the rational design and accurate modulation of metal heterocatalysts for high selectivity toward ethylene from CO2 electroreduction.
Towards a framework for geospatial tangible user interfaces in collaborative urban planning
NASA Astrophysics Data System (ADS)
Maquil, Valérie; Leopold, Ulrich; De Sousa, Luís Moreira; Schwartz, Lou; Tobias, Eric
2018-04-01
The increasing complexity of urban planning projects today requires new approaches to better integrate stakeholders with different professional backgrounds throughout a city. Traditional tools used in urban planning are designed for experts and offer little opportunity for participation and collaborative design. This paper introduces the concept of geospatial tangible user interfaces (GTUI) and reports on the design and implementation, as well as the usability, of such a GTUI to support stakeholder participation in collaborative urban planning. The proposed system uses physical objects to interact with large digital maps and geospatial data projected onto a tabletop. It is implemented using a PostGIS database, a web map server providing OGC web services, the computer vision framework reacTIVision, a Java-based TUIO client, and GeoTools. We describe how a GTUI has been instantiated and evaluated within the scope of two case studies related to real-world collaborative urban planning scenarios. Our results confirm the feasibility of our proposed GTUI solutions to (a) instantiate different urban planning scenarios, (b) support collaboration, and (c) ensure acceptable usability.
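The core of such a tabletop system is mapping a tangible object's tracked position onto the geographic extent of the projected map. A minimal sketch (assumed, not taken from the paper) using TUIO-style normalized coordinates in [0, 1], with a hypothetical map extent:

```python
# Hypothetical sketch: convert a tangible object's normalized tabletop
# position into geographic coordinates of the projected map extent.
def table_to_geo(x_norm, y_norm, extent):
    """extent = (min_lon, min_lat, max_lon, max_lat) of the projected map."""
    min_lon, min_lat, max_lon, max_lat = extent
    lon = min_lon + x_norm * (max_lon - min_lon)
    # TUIO y grows downward, while latitude grows upward, hence the flip.
    lat = max_lat - y_norm * (max_lat - min_lat)
    return lon, lat

# An object at the table centre lands at the centre of the map extent.
print(table_to_geo(0.5, 0.5, (6.0, 49.0, 7.0, 50.0)))  # -> (6.5, 49.5)
```

In a full system this coordinate pair would then be passed to the GIS back end (e.g. a PostGIS query or a WMS request) to resolve what the object is pointing at.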
Prototype software model for designing intruder detection systems with simulation
NASA Astrophysics Data System (ADS)
Smith, Jeffrey S.; Peters, Brett A.; Curry, James C.; Gupta, Dinesh
1998-08-01
This article explores using discrete-event simulation for the design and control of a defence-oriented, fixed-sensor-based detection system in a facility housing items of significant interest to enemy forces. The key issues discussed include software development, simulation-based optimization within a modeling framework, and the expansion of the framework to create real-time control tools and training simulations. The software discussed in this article is a flexible simulation environment in which the data for the simulation are stored in an external database and the simulation logic is implemented using a commercial simulation package. The simulation assesses the overall security level of a building against various intruder scenarios. A series of simulation runs with different inputs can determine the change in security level with changes in the sensor configuration, building layout, and intruder/guard strategies. In addition, the simulation model developed for the design stage of the project can be modified to produce a control tool for the testing, training, and real-time control of systems with humans and sensor hardware in the loop.
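The intruder-scenario idea can be sketched as a tiny discrete-event loop (illustrative only, not the authors' software): an intruder traverses a sequence of zones, each with a fixed sensor detection probability, and a seeded RNG keeps the run reproducible. Zone layout and probabilities are invented.

```python
# Hypothetical sketch of a discrete-event intruder-detection run.
import heapq
import random

def simulate(zones, seed=0):
    """zones: list of (traversal_time, detection_probability) per zone.
    Returns (detected?, simulation clock at detection or escape)."""
    rng = random.Random(seed)
    events = [(0.0, 0)]              # (time intruder enters zone i, zone index)
    while events:
        t, i = heapq.heappop(events)
        if i == len(zones):          # intruder reached the target unseen
            return False, t
        dt, p = zones[i]
        if rng.random() < p:         # sensor in zone i fires during traversal
            return True, t + dt
        heapq.heappush(events, (t + dt, i + 1))
    return False, 0.0

detected, t = simulate([(5.0, 0.3), (3.0, 0.8), (4.0, 0.9)], seed=1)
print(detected, t)
```

Repeating such runs over many seeds and sensor configurations is what lets the simulation estimate an overall security level per layout.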
Advanced Query Formulation in Deductive Databases.
ERIC Educational Resources Information Center
Niemi, Timo; Jarvelin, Kalervo
1992-01-01
Discusses deductive databases and database management systems (DBMS) and introduces a framework for advanced query formulation for end users. Recursive processing is described, a sample extensional database is presented, query types are explained, and criteria for advanced query formulation from the end user's viewpoint are examined. (31…
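The recursive processing central to deductive databases can be sketched with the classic ancestor example: a rule is applied to an extensional database (the stored facts) until a fixpoint is reached. The relation and constant names below are illustrative, not from the article.

```python
# Naive fixpoint evaluation of the transitive-closure (ancestor) rule.
def transitive_closure(edb):
    """edb: set of (parent, child) facts; returns the derived ancestor relation."""
    idb = set(edb)
    while True:
        derived = {(a, c) for (a, b) in idb for (b2, c) in edb if b == b2}
        if derived <= idb:           # fixpoint: nothing new was derived
            return idb
        idb |= derived

facts = {("ann", "bob"), ("bob", "cal")}
print(sorted(transitive_closure(facts)))
# derives ("ann", "cal") in addition to the base facts
```

Production deductive DBMSs use semi-naive evaluation and magic-set rewriting instead of this naive loop, but the fixpoint semantics is the same.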
NASA Astrophysics Data System (ADS)
Gao, Wenyang
The anthropogenic carbon dioxide (CO2) emission into the atmosphere, mainly through the combustion of fossil fuels, has disturbed the balance of the carbon cycle. Overwhelming scientific evidence shows that the escalating level of atmospheric CO2 is the main culprit for global warming and climate change. It is thus imperative to develop viable CO2 capture and sequestration (CCS) technologies to reduce CO2 emissions, which is also essential to avoid potentially devastating effects in the future. The drawbacks of the amine-based wet-scrubbing systems currently used in industry, namely their energy cost, corrosion, and inefficiency, have prompted the exploration of alternative approaches for CCS. Extensive efforts have been dedicated to the development of functional porous materials, such as activated carbons, zeolites, porous organic polymers, and metal-organic frameworks (MOFs), to capture CO2. However, these adsorbents are limited by either poor selectivity for CO2 separation from gas mixtures or low CO2 adsorption capacity. Therefore, there remains a strong demand for next-generation adsorbent materials that fulfill the requirements of high CO2 selectivity and sufficient CO2 capacity, as well as high water/moisture stability under practical conditions. Metal-organic frameworks (MOFs) have been positioned at the forefront of this area as a promising class of candidates among various porous materials, owing to the modularity and functionality of the pore size, pore walls, and inner surface of MOFs achievable through crystal engineering approaches. In this work, several strategies, such as incorporating 1,2,3-triazole groups as moderate Lewis base centers into MOFs and employing flexible azamacrocycle-based ligands to build MOFs, are demonstrated to be promising ways to enhance the CO2 uptake capacity and CO2 separation ability of porous MOFs. 
In-depth studies of counter-intuitive experimental observations reveal that the local electric field contributes more to the interactions between MOFs and CO2 molecules than the richness of exposed nitrogen atoms does, which provides a new perspective for the future design of new MOFs and other types of porous materials for CO2 capture. Meanwhile, to address the water/moisture stability issue of MOFs, remote stabilization of copper paddlewheel clusters is achieved by strengthening the bonding between organic ligands and triangular inorganic copper trimers, which in turn enhances the stability of the whole MOF network and provides a better understanding of the mechanism, guiding prospective MOFs with enhanced water stability. In contrast with CO2 capture by sorbent materials, the chemical transformation of captured CO2 into value-added products represents an attractive and sustainable alternative that has been of escalating interest. The nanospace within MOFs not only provides the inner porosity for CO2 capture but also offers accessible room for substrate molecules for catalytic purposes. It is demonstrated that high catalytic efficiency for the chemical fixation of CO2 into cyclic carbonates under ambient conditions is achieved on MOF-based nanoreactors featuring a high density of well-oriented Lewis active sites. Furthermore, it is described for the first time that CO2 can be successfully inserted into aryl C-H bonds of a MOF to generate carboxylate groups. This proof-of-concept study contributes a different perspective to the current landscape of CO2 capture and transformation. In closing, the overarching goal of this work is not only to seek efficient MOF adsorbents for CO2 capture, but also to present a new yet attractive scenario of CO2 utilization on MOF platforms.
Husen, Peter; Tarasov, Kirill; Katafiasz, Maciej; Sokol, Elena; Vogt, Johannes; Baumgart, Jan; Nitsch, Robert; Ekroos, Kim; Ejsing, Christer S
2013-01-01
Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.
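The "database table format" idea above can be sketched with an in-memory SQLite table (an assumed simplification, not the ALEX implementation): intensities are kept as one long table, so any feature of the dataset can be filtered or aggregated with plain SQL. The sample and lipid names are made up.

```python
# Hypothetical sketch of long-format lipidomics storage and navigation.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lipids (sample TEXT, tissue TEXT, species TEXT, intensity REAL)")
con.executemany("INSERT INTO lipids VALUES (?, ?, ?, ?)", [
    ("wt_1", "cerebellum",  "PC 34:1", 1200.0),
    ("wt_1", "hippocampus", "PC 34:1",  950.0),
    ("ko_1", "cerebellum",  "PC 34:1",  400.0),
])
# Rapid "lipidome navigation": mean intensity per tissue for one species.
rows = con.execute(
    "SELECT tissue, AVG(intensity) FROM lipids "
    "WHERE species = 'PC 34:1' GROUP BY tissue ORDER BY tissue"
).fetchall()
print(rows)  # one (tissue, mean) pair per tissue
```

The flexibility claimed in the abstract follows directly from this layout: adding a new annotation is a new column or a join, not a new file format.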
Overview of EVE - the event visualization environment of ROOT
NASA Astrophysics Data System (ADS)
Tadel, Matevž
2010-04-01
EVE is a high-level visualization library using ROOT's data-processing, GUI, and OpenGL interfaces. It is designed as a framework for object management, offering hierarchical data organization, object interaction, and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, and of simulated and reconstructed data such as hits, clusters, tracks, and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The database can be retrieved from a file, edited during framework operation, and stored to a file. The EVE prototype was developed within the ALICE collaboration and was included in ROOT in December 2007. Since then, all EVE components have reached maturity. EVE is used as the base of the AliEve visualization framework in ALICE, of the Fireworks physics-oriented event display in CMS, and as the visualization engine of FairRoot in FAIR.
Expanding the term "Design Space" in high performance liquid chromatography (I).
Monks, K E; Rieger, H-J; Molnár, I
2011-12-15
The current article presents a novel approach to applying Quality by Design (QbD) principles to the development of high-performance reversed-phase liquid chromatography (HPLC) methods. Four common critical parameters in HPLC--gradient time, temperature, pH of the aqueous eluent, and stationary phase--are evaluated within the Quality by Design framework by means of computer modeling software and a column database, to a satisfactory degree. This work proposes the establishment of two mutually complementary Design Spaces to fully depict a chromatographic method: a Column Design Space (CDS) and an Eluent Design Space (EDS), describing the influence of the stationary phase and of the mobile phase on the separation selectivity, respectively. The merging of both Design Spaces into one is founded on the continuous nature of the mobile phase's influence on retention and the great variety of stationary phases available. Copyright © 2011 Elsevier B.V. All rights reserved.
Design of Instant Messaging System of Multi-language E-commerce Platform
NASA Astrophysics Data System (ADS)
Yang, Heng; Chen, Xinyi; Li, Jiajia; Cao, Yaru
2017-09-01
This paper investigates the message subsystem of an instant messaging system based on a multi-language e-commerce platform, with the aims of designing instant messaging for a multi-language environment, presenting information with national characteristics, and applying national languages to e-commerce. To develop an attractive and friendly interface for the front end of the message system while reducing development cost, the mature jQuery framework is adopted. The high-performance Tomcat server is adopted at the back end to process user requests, the MySQL database is used for persistent storage of user data, and the Oracle database serves as a message buffer for system optimization. Moreover, AJAX technology is adopted so that the client actively pulls the newest data from the server at specified intervals. In practical application, the system has strong reliability, good extensibility, short response time, high throughput, and high user concurrency.
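The buffer-plus-polling design described above reduces to a simple contract: the client periodically asks the buffer for everything newer than the last timestamp it has seen. A minimal sketch (assumed shape, not the paper's code):

```python
# Hypothetical sketch of a message buffer with timestamp-based polling,
# the role played by the Oracle buffer in the architecture above.
class MessageBuffer:
    def __init__(self):
        self._messages = []          # (timestamp, sender, text), append-only

    def push(self, ts, sender, text):
        self._messages.append((ts, sender, text))

    def pull_since(self, last_ts):
        """What one AJAX poll would return: everything newer than last_ts."""
        return [m for m in self._messages if m[0] > last_ts]

buf = MessageBuffer()
buf.push(1, "alice", "ni hao")
buf.push(2, "bob", "hello")
print(buf.pull_since(1))  # only bob's message is new to this client
```

Keeping the buffer separate from the persistent store (MySQL in the paper) means polls never touch the slow durable database, which is the stated optimization.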
NASA Technical Reports Server (NTRS)
Rauch, T.; Rudkowski, A.; Kampka, D.; Werner, K.; Kruk, J. W.; Moehler, S.
2014-01-01
Context. In the framework of the Virtual Observatory (VO), the German Astrophysical VO (GAVO) developed the registered service TheoSSA (Theoretical Stellar Spectra Access). It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code, generally for all effective temperatures, surface gravities, and elemental compositions. We will establish a database of SEDs of flux standards that are easily accessible via TheoSSA's web interface. Aims. The OB-type subdwarf Feige 110 is a standard star for flux calibration. State-of-the-art non-local thermodynamic equilibrium stellar-atmosphere models that consider opacities of species up to trans-iron elements are used to provide a reliable synthetic spectrum to compare with observations. Methods. In the case of Feige 110, we demonstrate that the model reproduces not only its overall continuum shape from the far-ultraviolet (FUV) to the optical wavelength range but also the numerous metal lines exhibited in its FUV spectrum. Results. We present a state-of-the-art spectral analysis of Feige 110. We determined Teff = 47,250 ± 2000 K, log g = 6.00 ± 0.20, and the abundances of He, N, P, S, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, and Ge. Ti, V, Mn, Co, Zn, and Ge were identified for the first time in this star. Upper abundance limits were derived for C, O, Si, Ca, and Sc. Conclusions. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of astronomical data and cross-calibration between different instruments can be based on models and SEDs calculated with state-of-the-art model-atmosphere codes.
Stepfamilies Doing Family: A Meta-Ethnography.
Pylyser, Charlotte; Buysse, Ann; Loeys, Tom
2017-04-27
The present review examines how stepfamily members without a shared history co-construct a shared family identity and what family processes are relevant in this stepfamily formation. Three databases (Web of Science, PsycInfo, and ProQuest) were systematically searched, resulting in 20 included qualitative studies. The meta-ethnography approach of Noblit and Hare allowed synthesizing these qualitative studies and constructing a comprehensive framework of stepfamilies doing family. Three interdependent family tasks were identified: (a) honoring the past, (b) marking the present, and (c) investing in the future. Stepfamily members' experiences of these family tasks are strongly affected by the dominant societal perspectives and characterized by an underlying dialectical tension between wanting to be like a first-time family and feeling the differences in their family structure at the same time. These findings clearly demonstrate the family work that all stepfamily members undertake and provide a broader context for interpreting stepfamilies' co-construction of a new family identity. © 2017 Family Process Institute.
A Functional Framework for Database Management Systems.
1980-02-01
Functional Approach; 7.2. Objects in a DBMS; 7.2.1. External Objects; 7.2.2. Conceptual Objects; 7.2.3. Internal Objects; 7.2.4. External ... standpoint of their definitional and conceptual goals. 2. To make it possible to define and specify the needs as the first phase of the design process ... methods. This aim is analogous to the one in which programming language technology has been captured and supported through the conceptual language
Arguel, Amaël; Perez-Concha, Oscar; Li, Simon Y W; Lau, Annie Y S
2018-02-01
The aim of this review was to identify general theoretical frameworks used in online social network interventions for behavioral change. To address this research question, a PRISMA-compliant systematic review (PROSPERO registration number CRD42014007555) was conducted using three electronic databases (PsycINFO, PubMed, and Embase). Four reviewers screened 1788 abstracts, and 15 studies were selected according to the eligibility criteria. Randomized controlled trials and controlled studies were assessed using the Cochrane Collaboration's risk-of-bias tool and narrative synthesis. Five eligible articles used social cognitive theory as a framework to develop interventions targeting behavioral change. Other theoretical frameworks were related to the dynamics of social networks, intention models, and community engagement theories. Only one of the studies selected in the review mentioned a well-known theory from the field of health psychology. The conclusions were that guidelines are lacking in the design of online social network interventions for behavioral change. Existing theories and models from health psychology that are traditionally used for in situ behavioral change should be considered when designing online social network interventions in a health care setting. © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Aitken, M.; Yelverton, W. H.; Dodder, R. S.; Loughlin, D. H.
2014-12-01
Among the diverse menu of technologies for reducing greenhouse gas (GHG) emissions, one option involves pairing carbon capture and storage (CCS) with the generation of synthetic fuels and electricity from co-processed coal and biomass. In this scheme, the feedstocks are first converted to syngas, from which a Fischer-Tropsch (FT) process reactor and combined cycle turbine produce liquid fuels and electricity, respectively. With low concentrations of sulfur and other contaminants, the synthetic fuels are expected to be cleaner than conventional crude oil products. And with CO2 as an inherent byproduct of the FT process, most of the GHG emissions can be eliminated by simply compressing the CO2 output stream for pipeline transport. In fact, the incorporation of CCS at such facilities can result in very low—or perhaps even negative—net GHG emissions, depending on the fraction of biomass as input and its CO2 signature. To examine the potential market penetration and environmental impact of coal and biomass to liquids and electricity (CBtLE), which encompasses various possible combinations of input and output parameters within the overall energy landscape, a system-wide analysis is performed using the MARKet ALlocation (MARKAL) model. With resource supplies, energy conversion technologies, end-use demands, costs, and pollutant emissions as user-defined inputs, MARKAL calculates—using linear programming techniques—the least-cost set of technologies that satisfy the specified demands subject to environmental and policy constraints. In this framework, the U.S. Environmental Protection Agency (EPA) has developed both national and regional databases to characterize assorted technologies in the industrial, commercial, residential, transportation, and generation sectors of the U.S. energy system. Here, the EPA MARKAL database is updated to include the costs and emission characteristics of CBtLE using figures from the literature. 
Nested sensitivity analysis is then carried out to investigate the impact of various assumptions and scenarios, such as the plant capacity factor, capital costs, CO2 mitigation targets, oil prices, and CO2 storage costs.
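MARKAL's core computation, selecting the least-cost technology mix that meets demand under environmental constraints, can be illustrated with a deliberately tiny sketch (brute force rather than linear programming; the technologies, costs, and emission factors are invented for illustration, not MARKAL data):

```python
# Toy least-cost technology selection under a CO2 cap (illustrative only).
from itertools import product

TECHS = {                 # name: (cost per unit output, CO2 per unit output)
    "coal":      (1.0, 1.0),
    "cbtle_ccs": (1.6, 0.1),   # hypothetical CBtLE plant with CCS
    "biomass":   (2.0, 0.0),
}

def least_cost(demand, co2_cap):
    """Pick integer unit allocations meeting demand under the CO2 cap."""
    best = None
    names = list(TECHS)
    for alloc in product(range(demand + 1), repeat=len(names)):
        if sum(alloc) != demand:
            continue
        cost = sum(a * TECHS[n][0] for a, n in zip(alloc, names))
        co2  = sum(a * TECHS[n][1] for a, n in zip(alloc, names))
        if co2 <= co2_cap and (best is None or cost < best[0]):
            best = (cost, dict(zip(names, alloc)))
    return best

print(least_cost(demand=4, co2_cap=1.5))
```

Tightening `co2_cap` shifts the optimum toward the low-emission options, which is exactly the mechanism by which CO2 mitigation targets drive technology penetration in the real model.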
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saar, Martin O.; Seyfried, Jr., William E.; Longmire, Ellen K.
2016-06-24
A total of 12 publications and 23 abstracts were produced as a result of this study. In particular, the compilation of a thermodynamic database utilizing consistent, current thermodynamic data is a major step toward accurately modeling multi-phase fluid interactions with solids. Existing databases designed for aqueous fluids did not mesh well with existing solid-phase databases, and the addition of a second liquid phase (CO2) magnifies the inconsistencies between aqueous and solid thermodynamic databases. Overall, the combination of high-temperature and high-pressure lab studies (task 1), using a purpose-built apparatus, and solid characterization (task 2), using XRCT and more developed technologies, allowed observation of dissolution and precipitation processes under CO2 reservoir conditions. These observations were combined with results from PIV experiments on multi-phase fluids (task 3) in typical flow-path geometries. The results of tasks 1, 2, and 3 were compiled and integrated into numerical models utilizing Lattice-Boltzmann simulations (task 4) to realistically model the physical processes and were ultimately folded into the TOUGH2 code for reservoir-scale modeling (task 5). Compilation of the thermodynamic database assisted comparisons to the PIV experiments (task 3) and greatly improved the Lattice-Boltzmann (task 4) and TOUGH2 (task 5) simulations. PIV (task 3) and the experimental apparatus (task 1) identified problem areas in the TOUGHREACT code. Additional lab experiments and coding work have been integrated into an improved numerical modeling code.
Bao, Lin; Li, Tao; Chen, Shu; Peng, Chang; Li, Ling; Xu, Qian; Chen, Yashao; Ou, Encai; Xu, Weijian
2017-02-01
3D graphene frameworks/Co3O4 composites are produced by the thermal explosion method, in which the generation of Co3O4 nanoparticles, reduction of graphene oxide, and creation of 3D frameworks are completed simultaneously. The process effectively prevents the agglomeration of Co3O4 particles, resulting in monodispersed Co3O4 nanoparticles scattered evenly on the 3D graphene frameworks. The prepared 3D graphene frameworks/Co3O4 composites used as supercapacitor electrodes display a definite improvement in electrochemical performance, with high specific capacitance (≈1765 F g−1 at a current density of 1 A g−1), good rate performance (≈1266 F g−1 at a current density of 20 A g−1), and excellent stability (≈93% retention of specific capacitance at a constant current density of 10 A g−1 after 5000 cycles). In addition, the composites are also employed as nonenzymatic sensors for the electrochemical detection of glucose, exhibiting high sensitivity (122.16 µA mM−1 cm−2) and a notably low detection limit (157 × 10−9 M, S/N = 3). Therefore, the authors expect that the 3D graphene frameworks/Co3O4 composites described here would have potential applications as electrode materials in supercapacitors and in the nonenzymatic detection of glucose. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yu, Huanan; Xu, Dongdong; Xu, Qun
2015-08-28
A hierarchical meso- and microporous metal-organic framework (MOF) was facilely fabricated in an ionic liquid (IL)/supercritical CO2 (SC CO2)/surfactant emulsion system. Notably, CO2 exerts a dual effect during the synthesis; that is, CO2 droplets act as a template for the cores of nanospheres while CO2-swollen micelles induce mesopores on nanospheres.
The Mouse Heart Attack Research Tool (mHART) 1.0 Database.
DeLeon-Pennell, Kristine Y; Iyer, Rugmani Padmanabhan; Ma, Yonggang; Yabluchanskiy, Andriy; Zamilpa, Rogelio; Chiao, Ying Ann; Cannon, Presley; Cates, Courtney; Flynn, Elizabeth R; Halade, Ganesh V; de Castro Bras, Lisandra E; Lindsey, Merry L
2018-05-18
The generation of Big Data has enabled systems-level dissections into the mechanisms of cardiovascular pathology. Integration of genetic, proteomic, and pathophysiological variables across platforms and laboratories fosters discoveries through multidisciplinary investigations and minimizes unnecessary redundancy in research efforts. The Mouse Heart Attack Research Tool (mHART) consolidates a large dataset of over 10 years of experiments from a single laboratory for cardiovascular investigators to generate novel hypotheses and identify new predictive markers of progressive left ventricular remodeling following myocardial infarction (MI) in mice. We designed the mHART REDCap database using our own data to integrate cardiovascular community participation. We generated physiological, biochemical, cellular, and proteomic outputs from plasma and left ventricles obtained from post-MI and no MI (naïve) control groups. We included both male and female mice ranging in age from 3 to 36 months old. After variable collection, data underwent quality assessment for data curation (e.g. eliminate technical errors, check for completeness, remove duplicates, and define terms). Currently, mHART 1.0 contains >888,000 data points and includes results from >2,100 unique mice. Database performance was tested and an example provided to illustrate database utility. This report explains how the first version of the mHART database was established and provides researchers with a standard framework to aid in the integration of their data into our database or in the development of a similar database.
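The quality-assessment steps named above (eliminate duplicates, check completeness, route incomplete records for review) can be sketched in a few lines. The field names below are illustrative, not the actual mHART schema.

```python
# Hypothetical sketch of record curation: deduplicate, then flag
# records missing required fields for manual review.
def curate(records, required=("mouse_id", "group", "age_months")):
    seen, clean, flagged = set(), [], []
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:              # drop exact duplicates
            continue
        seen.add(key)
        if all(r.get(f) is not None for f in required):
            clean.append(r)
        else:                        # incomplete: route to manual review
            flagged.append(r)
    return clean, flagged

data = [
    {"mouse_id": 1, "group": "MI", "age_months": 6},
    {"mouse_id": 1, "group": "MI", "age_months": 6},    # duplicate entry
    {"mouse_id": 2, "group": "naive", "age_months": None},
]
clean, flagged = curate(data)
print(len(clean), len(flagged))
```

A shared framework like REDCap adds the remaining curation steps (term definitions, unit harmonization) on top of this mechanical pass.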
Zhu, Jie; Usov, Pavel M; Xu, Wenqian; Celis-Salazar, Paula J; Lin, Shaoyang; Kessinger, Matthew C; Landaverde-Alvarado, Carlos; Cai, Meng; May, Ann M; Slebodnick, Carla; Zhu, Dunru; Senanayake, Sanjaya D; Morris, Amanda J
2018-01-24
Metal-organic frameworks (MOFs) have shown great promise in catalysis, mainly due to their high content of active centers, large internal surface areas, tunable pore size, and versatile chemical functionalities. However, it is a challenge to rationally design and construct MOFs that can serve as highly stable and reusable heterogeneous catalysts. Here two new robust 3D porous metal-cyclam-based zirconium MOFs, denoted VPI-100 (Cu) and VPI-100 (Ni), have been prepared by a modulated synthetic strategy. The frameworks are assembled from eight-connected Zr6 clusters and metallocyclams as organic linkers. Importantly, the cyclam core has accessible axial coordination sites for guest interactions and maintains the electronic properties exhibited by the parent cyclam ring. The VPI-100 MOFs exhibit excellent chemical stability in various organic and aqueous solvents over a wide pH range and show high CO2 uptake capacity (up to ∼9.83 wt% adsorption at 273 K under 1 atm). Moreover, VPI-100 MOFs demonstrate some of the highest reported catalytic activity values (turnover frequency and conversion efficiency) among Zr-based MOFs for the chemical fixation of CO2 with epoxides, including sterically hindered epoxides. The MOFs, which bear dual catalytic sites (Zr and Cu/Ni), enable chemistry not possible with the cyclam ligand under the same conditions and can be used as recoverable, stable heterogeneous catalysts without losing performance.
Concept-oriented indexing of video databases: toward semantic sensitive retrieval and browsing.
Fan, Jianping; Luo, Hangzai; Elmagarmid, Ahmed K
2004-07-01
Digital video now plays an important role in medical education, health care, telemedicine and other medical applications. Several content-based video retrieval (CBVR) systems have been proposed in the past, but they still suffer from the following challenging problems: the semantic gap, semantic video concept modeling, semantic video classification, and concept-oriented video database indexing and access. In this paper, we propose a novel framework that makes advances toward solving these problems. Specifically, the framework includes: 1) a semantic-sensitive video content representation framework that uses principal video shots to enhance the quality of features; 2) semantic video concept interpretation using a flexible mixture model to bridge the semantic gap; 3) a novel semantic video-classifier training framework that integrates feature selection, parameter estimation, and model selection seamlessly in a single algorithm; and 4) a concept-oriented video database organization technique based on a domain-dependent concept hierarchy to enable semantic-sensitive video retrieval and browsing.
A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis
Rahman, M. M.; Antani, S. K.; Thoma, G. R.
2011-01-01
We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
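The global-analysis step above can be illustrated with a small sketch: build a concept-concept similarity thesaurus from co-occurrence counts over the encoded collection, then expand a query with the most strongly correlated concepts. This is a minimal illustration of the idea only, not the authors' implementation; the toy collection, vocabulary size, and row normalization are hypothetical choices.

```python
import numpy as np

def cooccurrence_thesaurus(encoded_images, n_concepts):
    """Row-normalized concept co-occurrence over the whole collection."""
    counts = np.zeros((n_concepts, n_concepts))
    for concepts in encoded_images:
        for i in concepts:
            for j in concepts:
                if i != j:
                    counts[i, j] += 1
    sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, sums, out=np.zeros_like(counts), where=sums > 0)

def expand_query(query_concepts, thesaurus, top_k=2):
    """Add up to top_k concepts most similar to any concept in the query."""
    scores = thesaurus[list(query_concepts)].max(axis=0)
    for c in query_concepts:
        scores[c] = -1.0                      # never re-add a query concept
    extra = np.argsort(scores)[::-1][:top_k]
    return sorted(set(query_concepts) | {int(c) for c in extra if scores[c] > 0})

# Toy collection: 5 "bag of concepts" images over a 6-concept vocabulary.
images = [{0, 1, 2}, {0, 1}, {1, 2}, {3, 4}, {3, 4, 5}]
thes = cooccurrence_thesaurus(images, 6)
print(expand_query({0}, thes))                # → [0, 1, 2]
```

In a real system the expanded concept set would be re-weighted and matched against the indexed collection; here the expansion step alone is shown.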
The role of zinc on the chemistry of complex intermetallic compounds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Weiwei
2014-01-01
Combining experiments and electronic structure theory provides the framework to design and discover new families of complex intermetallic phases and to understand factors that stabilize both new and known phases. Using solid-state synthesis and multiple structural determinations, the ferromagnetic β-Mn-type phase Co8+xZn12−x was analyzed for its crystal and electronic structure.
ERIC Educational Resources Information Center
Watson, Sunnie Lee; Watson, William R.; Richardson, Jennifer; Loizzo, Jamie
2016-01-01
This study examines a MOOC instructor's use of social presence, teaching presence, and dissonance for attitudinal change in a MOOC on Human Trafficking, designed to promote attitudinal change. Researchers explored the MOOC instructor's use of social presence and teaching presence, using the Community of Inquiry (CoI) framework as a lens, and…
ERIC Educational Resources Information Center
Cerf, M.; Bail, Le; Lusson, J. M.; Omon, B.
2017-01-01
Purpose: To highlight the way a public policy aiming to achieve a 50% decrease of pesticides use in France reframed advice provision in public and private networks. Design/methodology/approach: We developed a framework to analyze intermediation in a public funded network, a farmers' association, and a network of co-operatives. The framework…
Assessing the impact of healthcare research: A systematic review of methodological frameworks
Keeley, Thomas J.; Calvert, Melanie J.
2017-01-01
Background Increasingly, researchers need to demonstrate the impact of their research to their sponsors, funders, and fellow academics. However, the most appropriate way of measuring the impact of healthcare research is subject to debate. We aimed to identify the existing methodological frameworks used to measure healthcare research impact and to summarise the common themes and metrics in an impact matrix. Methods and findings Two independent investigators systematically searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), the Excerpta Medica Database (EMBASE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL+), the Health Management Information Consortium, and the Journal of Research Evaluation from inception until May 2017 for publications that presented a methodological framework for research impact. We then summarised the common concepts and themes across methodological frameworks and identified the metrics used to evaluate differing forms of impact. Twenty-four unique methodological frameworks were identified, addressing 5 broad categories of impact: (1) ‘primary research-related impact’, (2) ‘influence on policy making’, (3) ‘health and health systems impact’, (4) ‘health-related and societal impact’, and (5) ‘broader economic impact’. These categories were subdivided into 16 common impact subgroups. Authors of the included publications proposed 80 different metrics aimed at measuring impact in these areas. The main limitation of the study was the potential exclusion of relevant articles, as a consequence of the poor indexing of the databases searched. Conclusions The measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise research benefit, and to help minimise research waste. 
This review provides a collective summary of existing methodological frameworks for research impact, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research. PMID:28792957
Open-source framework for power system transmission and distribution dynamics co-simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Fan, Rui; Daily, Jeff
The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the “Framework for Network Co-Simulation” (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
Marenco, Luis; Li, Yuli; Martone, Maryann E; Sternberg, Paul W; Shepherd, Gordon M; Miller, Perry L
2008-09-01
This paper describes a pilot query interface that has been constructed to help us explore a "concept-based" approach for searching the Neuroscience Information Framework (NIF). The query interface is concept-based in the sense that the search terms submitted through the interface are selected from a standardized vocabulary of terms (concepts) that are structured in the form of an ontology. The NIF contains three primary resources: the NIF Resource Registry, the NIF Document Archive, and the NIF Database Mediator. These NIF resources are very different in their nature and therefore pose challenges when designing a single interface from which searches can be automatically launched against all three resources simultaneously. The paper first discusses briefly several background issues involving the use of standardized biomedical vocabularies in biomedical information retrieval, and then presents a detailed example that illustrates how the pilot concept-based query interface operates. The paper concludes by discussing certain lessons learned in the development of the current version of the interface.
Borzacchiello, Maria Teresa; Torrieri, Vincenzo; Nijkamp, Peter
2009-11-01
This paper describes an integrated information system framework for assessing transportation planning and management. After an introductory exposition, the first part of the paper gives a broad overview of international experiences with transportation information systems, focusing in particular on the relationship between transportation system performance monitoring and the decision-making process, and on the importance of this connection for evaluation and planning in Italian and European cases. Next, the paper presents the methodological design of an information system that supports efficient and sustainable transportation planning and management by integrating inputs from several different data sources. The resulting framework deploys modular, integrated databases that combine data stemming from different national and regional data banks and that integrate information belonging to different transportation fields. It thus allows public administrations to account for the many strategic elements, both systemic and infrastructural, that influence their decisions regarding transportation.
A World Wide Web Human Dimensions Framework and Database for Wildlife and Forest Planning
Michael A. Tarrant; Alan D. Bright; H. Ken Cordell
1999-01-01
The paper describes a human dimensions framework (HDF) for application in wildlife and forest planning. The HDF is delivered via the World Wide Web and retrieves data on-line from the Social, Economic, Environmental, Leisure, and Attitudes (SEELA) database. The proposed HDF is guided by ten fundamental HD principles, and is applied to wildlife and forest planning using...
NASA Astrophysics Data System (ADS)
Kim, J.; Park, M.; Baik, H. S.; Choi, Y.
2016-12-01
At the present time, arguments continue regarding the migration speeds of Martian dune fields and their correlation with atmospheric circulation. However, the spatial translation of Martian dunes has been precisely measured only a very few times. We therefore developed a generic procedure to precisely measure the migration of dune fields in recently introduced 25-cm-resolution High Resolution Imaging Science Experiment (HiRISE) imagery, employing a high-accuracy photogrammetric processor and a sub-pixel image correlator. The processor was designed to trace estimated dune migration, albeit slight, over the Martian surface by 1) the introduction of very-high-resolution ortho images and stereo analysis based on hierarchical geodetic control for better initial point settings; 2) positioning-error removal throughout the sensor-model refinement with a non-rigorous bundle block adjustment, which makes possible the co-alignment of all images in a time series; and 3) improved sub-pixel co-registration algorithms using optical flow, with a refinement stage conducted on a pyramidal grid processor and a blunder classifier. Moreover, volumetric changes of Martian dunes were additionally traced by means of stereo analysis and photoclinometry. The established algorithms have been tested using high-resolution HiRISE images over a large number of Martian dune fields covering the whole Mars Global Dune Database. Migration over well-known crater dune fields appeared to be almost static over considerable time periods and was weakly correlated with wind directions estimated from the Mars Climate Database (Millour et al. 2015). Only over a few Martian dune fields, such as Kaiser crater, have meaningful migration speeds (>1 m/year) relative to the photogrammetric error residual been measured.
Currently, a technically improved processor that compensates for the error residual using time-series observations is under development; it is expected to produce long-term migration speeds over Martian dune fields where regular HiRISE image acquisitions are available. ACKNOWLEDGEMENTS: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement Nr. 607379.
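The core of a sub-pixel image correlator of the kind described above is typically a phase-correlation step between co-registered image patches. The sketch below shows only the integer-pixel version of that step on synthetic data; the optical-flow refinement, pyramidal processing, and blunder classification mentioned in the abstract are beyond this illustration.

```python
import numpy as np

def phase_correlate(ref, moved):
    """Estimate the integer-pixel translation taking `ref` onto `moved`."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the half-way point correspond to negative shifts (FFT wrap-around)
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
moved = np.roll(ref, (3, 5), axis=(0, 1))   # synthetic "migrated" scene
print(phase_correlate(ref, moved))          # → (3, 5)
```

Sub-pixel accuracy would be obtained by interpolating around the correlation peak (e.g., fitting a paraboloid), which is omitted here.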
CoP Sensing Framework on Web-Based Environment
NASA Astrophysics Data System (ADS)
Mustapha, S. M. F. D. Syed
The Web technologies and Web applications have shown similarly high growth rates in terms of daily usage and user acceptance. Web applications have not only penetrated traditional domains such as education and business but have also encroached into areas such as politics, social life, lifestyle, and culture. The emergence of Web technologies has enabled Web access even for a person on the move, through PDAs or mobile phones connected using Wi-Fi, HSDPA, or other communication protocols. These two phenomena are inducement factors toward the need for building Web-based systems as supporting tools for many mundane activities. In doing this, one of the many focuses of research has been to look at the implementation challenges of building Web-based support systems in different types of environment. This chapter describes the implementation issues in building a community learning framework that can be supported on a Web-based platform. The Community of Practice (CoP) has been chosen as the community learning theory for the case study and analysis, as it challenges the creativity of the architectural design of the Web system in capturing the presence of learning activities. The details of this chapter describe the characteristics of the CoP, to understand the inherent intricacies of modeling it in the Web-based environment; the evidence of CoP that needs to be traced automatically in a manner such that the evidence-capturing process is unobtrusive; and the technologies needed to embrace full adoption of a Web-based support system for the community learning framework.
Multidisciplinary Design Optimization of A Highly Flexible Aeroservoelastic Wing
NASA Astrophysics Data System (ADS)
Haghighat, Sohrab
A multidisciplinary design optimization framework is developed that integrates control system design with aerostructural design for a highly deformable wing. The objective of this framework is to surpass the existing aircraft endurance limits through the use of an active load alleviation system designed concurrently with the rest of the aircraft. The novelty of this work is twofold. First, a unified dynamics framework is developed to represent the full six-degree-of-freedom rigid-body motion along with the structural dynamics. It allows for an integrated control design that accounts for both manoeuvrability (flying quality) and aeroelasticity criteria simultaneously. Secondly, by synthesizing the aircraft control system along with the structural sizing and aerodynamic shape design, the final design has the potential to exploit synergies among the three disciplines and yield a higher-performing aircraft. A co-rotational structural framework featuring Euler–Bernoulli beam elements is developed to capture the wing's nonlinear deformations under the effect of aerodynamic and inertial loadings. In this work, a three-dimensional aerodynamic panel code, capable of calculating both steady and unsteady loadings, is used. Two different control methods, a model predictive controller (MPC) and a 2-DOF mixed-norm robust controller, are considered in this work to control a highly flexible aircraft. Both control techniques offer unique advantages that make them promising for controlling a highly flexible aircraft. The control system works towards executing time-dependent manoeuvres along with performing gust/manoeuvre load alleviation. The developed framework is investigated for demonstration in two design cases: one in which the control system simply works towards achieving or maintaining a target altitude, and another in which the control system also performs load alleviation.
The use of the active load alleviation system results in a significant improvement in the aircraft performance relative to the optimum result without load alleviation. The results show that the inclusion of control system discipline along with other disciplines at early stages of aircraft design improves aircraft performance. It is also shown that structural stresses due to gust excitations can be better controlled by the use of active structural control systems which can improve the fatigue life of the structure.
A framework for engaging stakeholders on the management of alien species.
Novoa, Ana; Shackleton, Ross; Canavan, Susan; Cybèle, Cathleen; Davies, Sarah J; Dehnen-Schmutz, Katharina; Fried, Jana; Gaertner, Mirijam; Geerts, Sjirk; Griffiths, Charles L; Kaplan, Haylee; Kumschick, Sabrina; Le Maitre, David C; Measey, G John; Nunes, Ana L; Richardson, David M; Robinson, Tamara B; Touza, Julia; Wilson, John R U
2018-01-01
Alien species can have major ecological and socioeconomic impacts in their novel ranges and so effective management actions are needed. However, management can be contentious and create conflicts, especially when stakeholders who benefit from alien species are different from those who incur costs. Such conflicts of interests mean that management strategies can often not be implemented. There is, therefore, increasing interest in engaging stakeholders affected by alien species or by their management. Through a facilitated workshop and consultation process including academics and managers working on a variety of organisms and in different areas (urban and rural) and ecosystems (terrestrial and aquatic), we developed a framework for engaging stakeholders in the management of alien species. The proposed framework for stakeholder engagement consists of 12 steps: (1) identify stakeholders; (2) select key stakeholders for engagement; (3) explore key stakeholders' perceptions and develop initial aims for management; (4) engage key stakeholders in the development of a draft management strategy; (5) re-explore key stakeholders' perceptions and revise the aims of the strategy; (6) co-design general aims, management objectives and time frames with key stakeholders; (7) co-design a management strategy; (8) facilitate stakeholders' ownership of the strategy and adapt as required; and (9) implement the strategy and monitor management actions to evaluate the need for additional or future actions. In case additional management is needed after these actions take place, some extra steps should be taken: (10) identify any new stakeholders, benefits, and costs; (11) monitor engagement; and (12) revise management strategy. Overall, we believe that our framework provides an effective approach to minimize the impact of conflicts created by alien species management. Copyright © 2017 Elsevier Ltd. All rights reserved.
Edwards, Rufus D; Smith, Kirk R; Zhang, Junfeng; Ma, Yuqing
2003-01-01
Residential energy use in developing countries has traditionally been associated with combustion devices of poor energy efficiency, which have been shown to produce substantial health-damaging pollution, contributing significantly to the global burden of disease, and greenhouse gas (GHG) emissions. Precision of these estimates in China has been hampered by limited data on stove use and fuel consumption in residences. In addition, limited information is available on the variability of pollutant emissions from different stove/fuel combinations in typical use, as measurement of emission factors requires measurement of multiple chemical species in complex burn-cycle tests. Such measurements are too costly and time consuming for application in conjunction with national surveys. Emissions of most of the major health-damaging pollutants (HDP) and many of the gases that contribute to GHG emissions from cooking stoves are the result of the significant portion of fuel carbon that is diverted to products of incomplete combustion (PIC) as a result of poor combustion efficiencies. The approximately linear increase in emissions of PIC with decreasing combustion efficiency allows the development of linear models to predict emissions of GHG and HDP intrinsically linked to CO2 and PIC production, and ultimately allows the prediction of global warming contributions from residential stove emissions. A comprehensive emissions database of three burn cycles for 23 typical fuel/stove combinations tested in a simulated village house in China has been used to develop models that predict emissions of HDP and global warming commitment (GWC) from cooking stoves in China, relying on simple survey information on stove and fuel use that may be incorporated into national surveys. Stepwise regression models predicted 66% of the variance in global warming commitment (CO2, CO, CH4, NOx, TNMHC) per 1 MJ delivered energy due to emissions from these stoves if survey information on fuel type was available.
If stove type is also known, stepwise regression models predicted 73% of the variance. Integrated assessment of policies to change stove or fuel type requires that implications for environmental impacts, energy efficiency, global warming, and human exposure to HDP emissions can be evaluated. Frequently, this involves measurement of TSP or CO as the major HDPs. Incorporating this information into models to predict GWC explained 79% and 78% of the variance, respectively. Clearly, however, the complexity of making multiple measurements in conjunction with a national survey would be both expensive and time consuming. Thus, models have been derived that predict HDP using simple survey information, together with measurement of either CO/CO2 or TSP/CO2 to predict emission factors for the other HDP. Stepwise regression models predicted 65% of the variance in emissions of total suspended particulate as grams of carbon (TSPC) per 1 MJ delivered if survey information on fuel and stove type was available, and 74% if the CO/CO2 ratio was also measured. Similarly, stepwise regression models predicted 76% of the variance in COC emissions per MJ delivered with survey information on stove and fuel type, and 85% if the TSPC/CO2 ratio was also measured. Ultimately, with international agreements on emissions-trading frameworks, similar models based on extensive databases of the fate of fuel carbon during combustion in representative household stoves would provide a mechanism for computing greenhouse credits in the residential sector as part of clean development mechanism frameworks and for monitoring compliance with control regimes.
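The kind of linear emissions model described above can be sketched with ordinary least squares: predict GWC per MJ delivered from survey indicators (fuel type) plus a measured CO/CO2 ratio. The records and coefficients below are entirely hypothetical stand-ins for the study's 23 stove/fuel-combination database, and plain OLS stands in for the stepwise procedure used in the paper.

```python
import numpy as np

# Hypothetical records: [is_coal, is_biomass, CO/CO2 ratio] -> GWC per MJ delivered.
# (A clean-fuel stove is encoded as is_coal = is_biomass = 0.)
X = np.array([[1, 0, 0.08], [1, 0, 0.12],
              [0, 1, 0.20], [0, 1, 0.25],
              [0, 0, 0.03], [0, 0, 0.05]], float)
y = np.array([310.0, 345.0, 520.0, 575.0, 120.0, 140.0])

# Ordinary least squares with an intercept column, standing in for the
# stepwise-regression fits described in the abstract.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_gwc(is_coal, is_biomass, co_ratio):
    return float(coef @ [1.0, is_coal, is_biomass, co_ratio])

r2 = 1 - ((A @ coef - y) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")
```

The study's reported variance-explained figures (65-85%) correspond to this R² quantity, computed on its real measurement database rather than toy data.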
Draft secure medical database standard.
Pangalos, George
2002-01-01
Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support: high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and protection of the privacy of the patient. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security. It addresses the problems of medical database security policies, secure design methodologies, and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detail. The current national and international efforts in the area are studied. It also gives an overview of the research work in the area. The document also presents in detail the most complete set of security guidelines, to our knowledge, for the development and operation of medical database systems.
A general temporal data model and the structured population event history register
Clark, Samuel J.
2010-01-01
At this time there are 37 demographic surveillance system sites active in sub-Saharan Africa, Asia and Central America, and this number is growing continuously. These sites and other longitudinal population and health research projects generate large quantities of complex temporal data in order to describe, explain and investigate the event histories of individuals and the populations they constitute. This article presents possible solutions to some of the key data management challenges associated with those data. The fundamental components of a temporal system are identified and both they and their relationships to each other are given simple, standardized definitions. Further, a metadata framework is proposed to endow this abstract generalization with specific meaning and to bind the definitions of the data to the data themselves. The result is a temporal data model that is generalized, conceptually tractable, and inherently contains a full description of the primary data it organizes. Individual databases utilizing this temporal data model can be customized to suit the needs of their operators without modifying the underlying design of the database or sacrificing the potential to transparently share compatible subsets of their data with other similar databases. A practical working relational database design based on this general temporal data model is presented and demonstrated. This work has arisen out of experience with demographic surveillance in the developing world, and although the challenges and their solutions are more general, the discussion is organized around applications in demographic surveillance. An appendix contains detailed examples and working prototype databases that implement the examples discussed in the text. PMID:20396614
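A minimal sketch of such an event-history register in a relational database might look like the following. The tables and column names are illustrative stand-ins, not the article's actual data model or its metadata framework; the point is that a single generic event table, typed by a code and ordered by date, can reconstruct an individual's history by query.

```python
import sqlite3

# Illustrative schema: individuals plus one generic, timestamped event table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE individual (
    id   INTEGER PRIMARY KEY,
    sex  TEXT NOT NULL
);
CREATE TABLE event (
    id          INTEGER PRIMARY KEY,
    individual  INTEGER NOT NULL REFERENCES individual(id),
    event_type  TEXT NOT NULL,   -- e.g. 'birth', 'in-migration', 'death'
    event_date  TEXT NOT NULL    -- ISO-8601 keeps dates sortable as text
);
""")
conn.executemany("INSERT INTO individual VALUES (?, ?)", [(1, "F"), (2, "M")])
conn.executemany("INSERT INTO event VALUES (?, ?, ?, ?)", [
    (1, 1, "birth",        "1990-04-02"),
    (2, 1, "in-migration", "2005-07-19"),
    (3, 1, "death",        "2010-01-30"),
    (4, 2, "birth",        "1988-11-11"),
])

# Reconstruct individual 1's ordered event history.
history = conn.execute(
    "SELECT event_type, event_date FROM event "
    "WHERE individual = ? ORDER BY event_date", (1,)).fetchall()
print(history)
```

A production register would add the metadata tables binding event-type definitions to the data, which is the article's central point and is omitted here.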
Estimation of Regional Net CO2 Exchange over the Southern Great Plains
NASA Astrophysics Data System (ADS)
Biraud, S. C.; Riley, W. J.; Fischer, M. L.; Torn, M. S.; Cooley, H. S.
2004-12-01
Estimating spatially distributed ecosystem CO2 exchange is an important component of the North American Carbon Program. We describe here a methodology to estimate Net Ecosystem Exchange (NEE) over the Southern Great Plains, using: (1) data from the Department of Energy's Atmospheric Radiation Measurement (ARM) sites in Oklahoma and Kansas; (2) meteorological forcing data from the Mesonet facilities; (3) soil and vegetation types from 1 km resolution USGS databases; (4) vegetation status (e.g., LAI) from 1 km satellite measurements of surface reflectance (MODIS); (5) a tested land-surface model; and (6) a coupled land-surface and meteorological model (MM5/ISOLSM). This framework allows us to simulate regional surface fluxes, in addition to ABL and free-troposphere concentrations of CO2, at a continental scale with fine-scale nested grids centered on the ARM central facility. We use the offline land-surface and coupled models to estimate regional NEE, and compare predictions to measurements from the 9 Extended Facility sites with eddy correlation measurements. Site-level comparisons to portable ECOR measurements in several crop types are also presented. Our approach also allows us to extend bottom-up estimates to periods and areas where meteorological forcing data are unavailable.
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
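The idea of keyword search spanning both context and content of semi-structured documents can be illustrated in a few lines: flatten an XML tree into (element-path, text) records, then match a keyword against either field. This is a toy illustration of the concept only; NETMARK's actual Oracle 8i object-relational implementation is not reproduced here, and the sample document is invented.

```python
import xml.etree.ElementTree as ET

def flatten(elem, path=""):
    """Flatten an XML tree into (context path, content text) records."""
    here = f"{path}/{elem.tag}"
    if elem.text and elem.text.strip():
        yield here, elem.text.strip()
    for child in elem:
        yield from flatten(child, here)

def search(records, keyword):
    """Match a keyword against both context (path) and content (text)."""
    kw = keyword.lower()
    return [(p, t) for p, t in records if kw in p.lower() or kw in t.lower()]

doc = ET.fromstring(
    "<report><title>Shuttle telemetry</title>"
    "<section><heading>Engines</heading><p>Nominal thrust observed.</p>"
    "</section></report>")
records = list(flatten(doc))
print(search(records, "engines"))   # matches on content
print(search(records, "title"))    # matches on context (the element path)
```

Storing such records in indexed database columns, rather than scanning them in memory, is what makes the approach scale.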
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design
Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-01-01
When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters), and MapReduce templates. A dataset summary statistic model is discussed and implemented via the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a secure, shared university web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores.
Three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with a cluster deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improving MapReduce computation with a 7-fold reduction of wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668
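The API surface and table design described above can be sketched in miniature. The row-key format and function names below are illustrative assumptions, not the actual HadoopBase-MIP interface: a composite row key keeps rows for one query in a contiguous scan range, and a map/reduce pair computes a dataset summary statistic from per-shard partial sums.

```python
from functools import reduce

# Hypothetical row-key scheme for an HBase-style table: prefixing the key
# with project and analysis level groups related rows into one scan range.
def make_row_key(project, level, subject_id):
    return f"{project}|{level}|{subject_id:06d}"

# MapReduce-style summary statistic: each mapper emits (count, sum, sum_sq)
# for its shard; the reducer combines them into overall mean and variance.
def mapper(values):
    return (len(values), sum(values), sum(v * v for v in values))

def reducer(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def summarize(shards):
    n, s, ss = reduce(reducer, (mapper(sh) for sh in shards))
    mean = s / n
    var = ss / n - mean * mean       # population variance
    return n, mean, var

shards = [[1.0, 2.0], [3.0], [4.0, 5.0]]
n, mean, var = summarize(shards)
print(make_row_key("NGE", "T1", 42))   # NGE|T1|000042
print(n, mean, var)                    # 5 3.0 2.0
```

Because the partial sums combine associatively, the reducer can merge shard results in any order, which is what makes the statistic MapReduce-friendly.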
Towards Citizen Co-Created Public Service Apps †
Emaldi, Mikel; Aguilera, Unai; López-de-Ipiña, Diego; Pérez-Velasco, Jorge
2017-01-01
The WeLive project's main objective is to transform the current e-government approach by providing a new open model oriented towards the design, production and deployment of public services and mobile apps based on the collaboration of different stakeholders. These stakeholders form the quadruple helix, i.e., citizens, private companies, research institutes and public administrations. Through the application of the open innovation, open data and open services paradigms, the framework developed within the WeLive project enables the co-creation of urban apps. In this paper, we extend the description of the WeLive platform presented at , and add the preliminary results of the first pilot phase. The two-phase evaluation methodology designed and the evaluation results of the first pilot sub-phase are also presented. PMID:28574460
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
Uniaxial negative thermal expansion and metallophilicity in Cu3[Co(CN)6]
NASA Astrophysics Data System (ADS)
Sapnik, A. F.; Liu, X.; Boström, H. L. B.; Coates, C. S.; Overy, A. R.; Reynolds, E. M.; Tkatchenko, A.; Goodwin, A. L.
2018-02-01
We report the synthesis and structural characterisation of the molecular framework copper(I) hexacyanocobaltate(III), Cu3[Co(CN)6], which we find to be isostructural to H3[Co(CN)6] and the colossal negative thermal expansion material Ag3[Co(CN)6]. Using synchrotron X-ray powder diffraction measurements, we find strong positive and negative thermal expansion behaviour respectively perpendicular and parallel to the trigonal crystal axis: αa = 25.4(5) MK⁻¹ and αc = −43.5(8) MK⁻¹. These opposing effects collectively result in a volume expansivity αV = 7.4(11) MK⁻¹ that is remarkably small for an anisotropic molecular framework. This thermal response is discussed in the context of the behaviour of the analogous H- and Ag-containing systems. We make use of density-functional theory with many-body dispersion interactions (DFT + MBD) to demonstrate that Cu+…Cu+ metallophilic ('cuprophilic') interactions are significantly weaker in Cu3[Co(CN)6] than Ag+…Ag+ interactions in Ag3[Co(CN)6], but that this lowering of energy scale counterintuitively translates to a more moderate, rather than enhanced, degree of structural flexibility. The same conclusion is drawn from consideration of a simple GULP model, which we also present here. Our results demonstrate that strong interactions can actually be exploited in the design of ultra-responsive materials if those interactions are set up to act in tension.
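For a trigonal crystal the volume expansivity follows from the axial coefficients as αV ≈ 2αa + αc, so the three reported values can be checked against each other in two lines (coefficients taken from the abstract, in MK⁻¹):

```python
# Consistency check for a trigonal (uniaxial) crystal: to first order in
# strain, alpha_V = 2 * alpha_a + alpha_c.
alpha_a, alpha_c = 25.4, -43.5          # axial expansivities, MK^-1
alpha_V = 2 * alpha_a + alpha_c
print(round(alpha_V, 1))                # 7.3, matching the reported 7.4(11)
```

The near-cancellation of the strong positive and negative axial terms is exactly why the net volume expansivity is so small.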
Report on Legal Protection for Databases. A Report of the Register of Copyrights. August, 1997.
ERIC Educational Resources Information Center
Library of Congress, Washington, DC. Copyright Office.
This report gives an overview of the past and present domestic and international legal framework for database protection. It describes database industry practices in securing protection against unauthorized use and Copyright Office registration practices relating to databases. Finally, it discusses issues raised and concerns expressed in a series…
Extension of the COG and arCOG databases by amino acid and nucleotide sequences
Meereis, Florian; Kaufmann, Michael
2008-01-01
Background The current versions of the COG and arCOG databases, both excellent frameworks for studies in comparative and functional genomics, do not contain the nucleotide sequences corresponding to their protein or protein domain entries. Results Using sequence information obtained from GenBank flat files covering the completely sequenced genomes of the COG and arCOG databases, we constructed NUCOCOG (nucleotide-sequence-containing COG databases) as an extended version including all nucleotide sequences and, in addition, the amino acid sequences originally utilized to construct the current COG and arCOG databases. We make available three comprehensive single XML files containing the complete databases including all sequence information. In addition, we provide a web interface as a utility suitable for browsing the NUCOCOG database for sequence retrieval. The database is accessible at . Conclusion NUCOCOG offers the possibility to analyze any sequence-related property in the context of the COG and arCOG framework simply by using scripting languages such as Perl applied to a large but single XML document. PMID:19014535
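Processing a large single-file XML database with a scripting language, as the authors suggest, might look like the following sketch. The element and attribute names here are hypothetical stand-ins, not the actual NUCOCOG schema; the point is that an event-driven parser keeps memory flat even for a very large file.

```python
import xml.etree.ElementTree as ET
from io import StringIO

# A tiny stand-in for a large XML sequence database (invented schema).
xml_doc = StringIO("""<cogs>
  <entry cog="COG0001" org="Ecoli"><aa>MKT</aa><nt>ATGAAAACT</nt></entry>
  <entry cog="COG0002" org="Bsub"><aa>MSV</aa><nt>ATGTCTGTT</nt></entry>
</cogs>""")

# iterparse streams the document; clearing each entry after use means the
# whole file never has to fit in memory at once.
sequences = {}
for _, elem in ET.iterparse(xml_doc, events=("end",)):
    if elem.tag == "entry":
        sequences[elem.get("cog")] = elem.findtext("nt")
        elem.clear()                 # free the subtree we just consumed

print(sequences["COG0001"])          # ATGAAAACT
```

The same loop shape works for any per-entry analysis (GC content, codon usage, length statistics) over a single large XML document.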
Integration of a neuroimaging processing pipeline into a pan-canadian computing grid
NASA Astrophysics Data System (ADS)
Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.
2012-02-01
The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerke, Brian F; McNeil, Michael A; Tu, Thomas
A major barrier to effective appliance efficiency program design and evaluation is a lack of data for determination of market baselines and cost-effective energy savings potential. The data gap is particularly acute in developing countries, which may have the greatest savings potential per unit GDP. To address this need, we are developing the International Database of Efficient Appliances (IDEA), which automatically compiles data from a wide variety of online sources to create a unified repository of information on efficiency, price, and features for a wide range of energy-consuming products across global markets. This paper summarizes the database framework and demonstrates the power of IDEA as a resource for appliance efficiency research and policy development. Using IDEA data for refrigerators in China and India, we develop robust cost-effectiveness indicators that allow rapid determination of savings potential within each market, as well as comparison of that potential across markets and appliance types. We discuss implications for future energy efficiency policy development.
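The abstract does not give the indicator formulas, so the sketch below uses a standard construction from efficiency economics as a stand-in: the cost of conserved energy (CCE), the annualized incremental price per kWh saved. The refrigerator numbers are invented for illustration.

```python
# Cost of conserved energy: a common cost-effectiveness indicator built
# from exactly the quantities IDEA compiles (price premium, energy savings).
def crf(rate, years):
    """Capital recovery factor: converts a one-time cost to an annual cost."""
    return rate / (1 - (1 + rate) ** -years)

def cce(price_premium, annual_kwh_saved, rate=0.05, years=10):
    """Annualized extra cost per kWh saved ($/kWh)."""
    return price_premium * crf(rate, years) / annual_kwh_saved

# Hypothetical: an efficient refrigerator costs $50 more, saves 100 kWh/yr,
# over a 10-year life at a 5% discount rate.
print(round(cce(50.0, 100.0), 4))      # 0.0648  ($/kWh saved)
```

Comparing this figure to the local electricity tariff gives an immediate cost-effectiveness test: if CCE is below the tariff, the efficient model pays for itself.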
Metal-Organic Framework-Stabilized CO2/Water Interfacial Route for Photocatalytic CO2 Conversion.
Luo, Tian; Zhang, Jianling; Li, Wei; He, Zhenhong; Sun, Xiaofu; Shi, Jinbiao; Shao, Dan; Zhang, Bingxing; Tan, Xiuniang; Han, Buxing
2017-11-29
Here, we propose a CO2/water interfacial route for photocatalytic CO2 conversion by utilizing a metal-organic framework (MOF) as both an emulsifier and a catalyst. The CO2 reduction occurring at the CO2/water interface produces formate with remarkably enhanced efficiency as compared with that in conventional solvent. The route is efficient, facile, adjustable, and environmentally benign, and is applicable to CO2 transformation photocatalyzed by different kinds of MOFs.
A hydroeconomic modeling framework for optimal integrated management of forest and water
NASA Astrophysics Data System (ADS)
Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel
2016-10-01
Forests play a determinant role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. In order to illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.
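The structure of the optimization can be illustrated with a deliberately tiny sketch: choose, for each period, whether to thin, maximizing water benefit minus silvicultural cost. The per-period benefits and the cost below are invented numbers, standing in for the growth equations, local cost databases, and HYDRUS-simulated recharge of the real model.

```python
from itertools import product

# Brute-force schedule search over binary thin/no-thin decisions.
# benefit_if_thinned[i]: water-recharge benefit gained by thinning in
# period i; cost_per_thinning: cost of one silvicultural intervention.
def best_schedule(periods, benefit_if_thinned, cost_per_thinning):
    def net_benefit(plan):
        gain = sum(b * x for b, x in zip(benefit_if_thinned, plan))
        return gain - cost_per_thinning * sum(plan)
    return max(product([0, 1], repeat=periods), key=net_benefit)

# Three periods with benefits 10, 3, 6; each thinning costs 4:
print(best_schedule(3, [10, 3, 6], 4))   # (1, 0, 1)
```

The toy version thins exactly where the marginal water benefit exceeds the intervention cost; the real model couples the decisions through stand growth, which is why it needs simulation rather than this independent-period logic.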
Co-PylotDB - A Python-Based Single-Window User Interface for Transmitting Information to a Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnette, Daniel W.
2012-01-05
Co-PylotDB, written completely in Python, provides a user interface (UI) with which to select user and data file(s), directories, and file content, and provide or capture various other information for sending data collected from running any computer program to a pre-formatted database table for persistent storage. The interface allows the user to select input, output, make, source, executable, and qsub files. It also provides fields for specifying the machine name on which the software was run, capturing compile and execution lines, and listing relevant user comments. Data automatically captured by Co-PylotDB and sent to the database are user, current directory, local hostname, current date, and time of send. The UI provides fields for logging into a local or remote database server, specifying a database and a table, and sending the information to the selected database table. If a server is not available, the UI provides for saving the command that would have saved the information to a database table for either later submission or for sending via email to a collaborator who has access to the desired database.
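The capture-and-store step described above can be sketched in a few lines. This assumes nothing about Co-PylotDB's real schema or database backend; the table layout is invented, and sqlite3 stands in for the remote server.

```python
import getpass
import os
import socket
import sqlite3
from datetime import datetime

# Automatically capture the metadata Co-PylotDB records (user, current
# directory, hostname, timestamp) alongside a user-supplied comment.
def capture(comment):
    return (getpass.getuser(), os.getcwd(), socket.gethostname(),
            datetime.now().isoformat(timespec="seconds"), comment)

# Hypothetical table layout; the real pre-formatted table differs.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE runs
                (user TEXT, cwd TEXT, host TEXT, sent_at TEXT, comment TEXT)""")
conn.execute("INSERT INTO runs VALUES (?, ?, ?, ?, ?)", capture("baseline run"))
print(conn.execute("SELECT comment FROM runs").fetchone()[0])   # baseline run
```

Falling back to saving the parameterized INSERT statement and its values to a file would mirror the tool's offline mode when no server is reachable.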
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sargazi, Ghasem, E-mail: g.sargazi@gmail.com; Young Researchers Society, Shahid Bahonar University of Kerman, Kerman, Iran; Afzali, Daryoush, E-mail: daryoush_afzali@yahoo.com
2017-06-15
This work presents a fast route for the preparation of a new Ta(V) metal-organic framework nanostructure with high surface area, significant porosity, and narrow size distribution. X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy dispersive spectrometry (EDS), thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), CHNS/O elemental analysis, and Brunauer-Emmett-Teller (BET) surface area analysis were applied to characterize the synthesized product. Moreover, the influences of ultrasonic irradiation parameters, including temperature, time, and power, on different features of the final products were systematically studied using 2^(k-1) fractional factorial design experiments, and response surface optimization was used to determine the best parameter combination. The results obtained from analyses of variance showed that the ultrasonic parameters affected the size distribution, thermal behaviour, and surface area of the Ta-MOF samples. Based on response surface methodology, Ta-MOF could be obtained with a mean diameter of 55 nm, thermal stability up to 228 °C, and a high surface area of 2100 m²/g. The results revealed that the synthesized products could be utilized in various applications, for example as a novel candidate for CO2 adsorption. - Graphical abstract: A facile route was used for fabrication of a new metal-organic framework based on tantalum nanostructures that have high surface area, considerable porosity, homogeneous morphology, and narrow size distribution.
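The screening design mentioned above works like this in miniature: a 2^(3-1) half-fraction (generator C = AB) covers three two-level factors in four runs, and each main effect is the difference between the mean response at the factor's high and low settings. The response values below are invented for illustration; the real study measured surface area, thermal stability, and size distribution.

```python
# 2^(3-1) fractional factorial in coded (-1/+1) units, generator C = AB:
# only four of the eight full-factorial runs are needed.
runs = [(-1, -1, +1), (+1, -1, -1), (-1, +1, -1), (+1, +1, +1)]
response = [55, 70, 60, 90]          # hypothetical measured responses

def main_effect(factor):
    hi = [y for r, y in zip(runs, response) if r[factor] == +1]
    lo = [y for r, y in zip(runs, response) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Effects of factors A, B, C (e.g. temperature, time, power):
print([main_effect(i) for i in range(3)])   # [22.5, 12.5, 7.5]
```

In a half-fraction each main effect is aliased with a two-factor interaction (here A with BC, and so on), which is the price paid for halving the number of runs; the follow-up response-surface step then refines the promising region.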
NASA Astrophysics Data System (ADS)
Charsley-Groffman, L.; Killeffer, T.; Wullschleger, S. D.; Wilson, C. J.
2016-12-01
The Next Generation Ecosystem Experiment (NGEE Arctic) project aims to improve the representation of arctic terrestrial processes and properties in Earth System Models (ESMs) through coordinated multi-disciplinary field-based observations and experiments. NGEE involves nearly one hundred research staff, postdocs and students from multiple DOE laboratories and universities who deploy a wide range of in-situ and remote field observation techniques to quantify and understand interactions between the climate system and surface and subsurface coupled thermal-hydrologic, biogeochemical and vegetation processes. Careful attention was given to the design and management of co-located long-term and one-off data collection efforts, as well as their data streams. Field research sites at the Barrow Environmental Observatory near Barrow, AK and on the Seward Peninsula were designed around the concept of "ecotypes", which co-evolved with readily identified and classified hydro-geomorphic features characteristic of arctic landscapes. NGEE sub-teams focused on 5 unique science questions collaborated to design field sites and develop naming conventions for locations and data types, yielding coherent data sets to parameterize, initialize and test a range of models, from site-specific process-resolving models to ESMs. Multi-layer mapping products were a critical means of developing a coordinated and coherent observation design, and a centralized data portal and data reporting framework was critical to ensuring meaningful data products for NGEE modelers and the Arctic scientific community at large. We present examples of what works and lessons learned for a large multi-disciplinary terrestrial observational research project in the Arctic.
Framework for Assessing Biogenic CO2 Emissions from Stationary Sources
This revision of the 2011 report, Accounting Framework for Biogenic CO2 Emissions from Stationary Sources, evaluates biogenic CO2 emissions from stationary sources, including a detailed study of the scientific and technical issues associated with assessing biogenic carbon dioxide...
The IAGOS Information System: From the aircraft measurements to the users.
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Thouret, Valérie; Cammas, Jean-Pierre; Petzold, Andreas; Volz-Thomas, Andreas; Gerbig, Christoph; Brenninkmeijer, Carl A. M.
2013-04-01
IAGOS (In-service Aircraft for a Global Observing System, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in-situ observations of atmospheric chemical composition throughout the troposphere and in the UTLS. It builds on almost 20 years of scientific and technological expertise gained in the research projects MOZAIC (Measurement of Ozone and Water Vapour on Airbus In-service Aircraft) and CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container). The European consortium includes research centres, universities, national weather services, airline operators and aviation industry. IAGOS consists of two complementary building blocks providing a unique global observation system: IAGOS-CORE deploys newly developed instrumentation for regular in-situ measurements of atmospheric chemical species, both reactive and greenhouse gases (O3, CO, NOx, NOy, H2O, CO2, CH4), aerosols and cloud particles. In IAGOS-CARIBIC a cargo container is deployed monthly as a flying laboratory aboard one aircraft. Involved airlines ensure global operation of the network. Today, 5 aircraft are flying with the MOZAIC (3) or IAGOS-CORE (2) instrumentation, namely 3 aircraft from Lufthansa, 1 from Air Namibia, and 1 from China Airlines Taiwan. A main improvement and new aspect of the IAGOS-CORE instrumentation compared to MOZAIC is to deliver the raw data in near real time (i.e. as soon as the aircraft lands data are transmitted). After a first and quick validation of the O3 and CO measurements, preliminary data are made available in the central database for both the MACC project (Monitoring Atmospheric Composition and Climate) and scientific research groups. In addition to recorded measurements, the database also contains added-value products such as meteorological information (tropopause height, air mass backtrajectories) and lagrangian model outputs (FLEXPART).
Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The MOZAIC-IAGOS database today contains more than 35000 flights, covering mostly the northern hemisphere mid-latitudes but with reduced representation of the Pacific region. The recently equipped China Airlines Taiwan aircraft started in July 2012 to fill this gap. Aircraft from Air France, Cathay Pacific and Iberia, scheduled for equipping in 2013, will cover the Asia-Oceania sector and Europe-South America transects. The database, as well as the research infrastructure itself, is in continuous development and improvement. In the framework of the newly started IGAS project (IAGOS for GMES Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC data integration within the central database, and real-time data transmission.
Chang, Liang; Stacchiola, Dario J.; Hu, Yun Hang
2017-10-11
CO2 conversion to useful materials is the most attractive approach to control its content in the atmosphere. An ideal electrode material for supercapacitors should possess suitable meso/macro-pores as electrolyte reservoirs and rich micro-pores as sites for the adsorption of electrolyte ions. In this paper, we designed and synthesized such an ideal material, meso/macro-porous frameworks of surface-microporous graphene (MFSMG), from CO2 via its one-step exothermic reaction with potassium. Furthermore, the MFSMG electrode exhibited a high gravimetric capacitance of 178 F g⁻¹ at 0.2 A g⁻¹ in 2 M KOH and retained 85% capacitance after increasing the current density by 50 times. The combination of the MFSMG electrode and an activated carbon (AC) electrode constructed an asymmetrical AC//MFSMG capacitor, leading to a high capacitance of 242.4 F g⁻¹ for MFSMG and 97.4 F g⁻¹ for AC. With the extended potential window, the asymmetrical capacitor achieved an improved energy density of 9.43 W h kg⁻¹ and a power density of 3504 W kg⁻¹. Finally, this work provides a novel solution to the CO2 issue and creates an efficient electrode material for supercapacitors.
NASA Astrophysics Data System (ADS)
Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros
SPARQL is today the standard access language for Semantic Web data. In recent years, XML databases have also acquired industrial importance due to the widespread applicability of XML on the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries, which are used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.
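A toy version of mapping-driven translation (vastly simpler than the SPARQL2XQuery algorithms) shows the idea: an ontology-property-to-XPath mapping turns one SPARQL triple pattern into an XQuery FLWOR expression. The mapping entries and document name below are invented for illustration.

```python
# Hypothetical mapping from ontology properties to XML Schema paths,
# of the kind SPARQL2XQuery assumes has been derived or specified.
MAPPINGS = {
    "ex:title":  "book/title",
    "ex:author": "book/author",
}

# Translate one SPARQL triple pattern (?s predicate ?o) into a FLWOR
# expression over the mapped XML structure.
def triple_to_xquery(subj_var, predicate, obj_var):
    root, leaf = MAPPINGS[predicate].split("/")
    return (f"for ${subj_var} in doc('db.xml')//{root}\n"
            f"let ${obj_var} := ${subj_var}/{leaf}\n"
            f"return ${obj_var}")

print(triple_to_xquery("b", "ex:title", "t"))
```

The real framework must additionally handle basic graph patterns with shared variables, FILTERs, OPTIONALs, and result-form construction, but each stage reduces to the same mapping-guided rewriting shown here for a single pattern.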
Michie, Susan; van Stralen, Maartje M; West, Robert
2011-04-23
Background Improving the design and implementation of evidence-based practice depends on successful behaviour change interventions. This requires an appropriate method for characterising interventions and linking them to an analysis of the targeted behaviour. There exists a plethora of frameworks of behaviour change interventions, but it is not clear how well they serve this purpose. This paper evaluates these frameworks, and develops and evaluates a new framework aimed at overcoming their limitations. Methods A systematic search of electronic databases and consultation with behaviour change experts were used to identify frameworks of behaviour change interventions. These were evaluated according to three criteria: comprehensiveness, coherence, and a clear link to an overarching model of behaviour. A new framework was developed to meet these criteria. The reliability with which it could be applied was examined in two domains of behaviour change: tobacco control and obesity. Results Nineteen frameworks were identified covering nine intervention functions and seven policy categories that could enable those interventions. None of the frameworks reviewed covered the full range of intervention functions or policies, and only a minority met the criteria of coherence or linkage to a model of behaviour. At the centre of a proposed new framework is a 'behaviour system' involving three essential conditions: capability, opportunity, and motivation (what we term the 'COM-B system'). This forms the hub of a 'behaviour change wheel' (BCW) around which are positioned the nine intervention functions aimed at addressing deficits in one or more of these conditions; around this are placed seven categories of policy that could enable those interventions to occur. The BCW was used reliably to characterise interventions within the English Department of Health's 2010 tobacco control strategy and the National Institute of Health and Clinical Excellence's guidance on reducing obesity. 
Conclusions Interventions and policies to change behaviour can be usefully characterised by means of a BCW comprising: a 'behaviour system' at the hub, encircled by intervention functions and then by policy categories. Research is needed to establish how far the BCW can lead to more efficient design of effective interventions. PMID:21513547
KP-CoT-23 (CCDC83) is a novel immunogenic cancer/testis antigen in colon cancer.
Song, Myung-Ha; Ha, Jin-Mok; Shin, Dong-Hoon; Lee, Chang-Hun; Old, Lloyd; Lee, Sang-Yull
2012-11-01
Cancer/testis (CT) antigens are considered target molecules for cancer immunotherapy. To identify novel CT antigens, immunoscreening of a testicular cDNA library was performed using serum obtained from a colon cancer patient who was immunized with a new dendritic cell vaccine. We isolated 64 positive cDNA clones comprised of 40 different genes, designated KP-CoT-1 through KP-CoT-40. Three of these putative antigens, including KP-CoT-23 (CCDC83), had testis-specific expression profiles in the Unigene database. RT-PCR analysis showed that the expression of 2 KP-CoT-23 variants was restricted to the testis in normal adult tissues. In addition, KP-CoT-23 variants were frequently expressed in a variety of tumors and cancer cell lines, including colon cancer. A serological western blot assay showed IgG antibodies to the KP-CoT-23 protein in 26 of 37 colon cancer patients and in 4 of 21 healthy patients. These data suggest that KP-CoT-23 is a novel CT antigen that may be useful for the diagnosis and immunotherapy of cancer.
Blunt-Body Aerothermodynamic Database from High-Enthalpy CO2 Testing in an Expansion Tunnel
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Prabhu, Dinesh K.; Maclean, Matthew; Dufrene, Aaron
2016-01-01
An extensive database of heating, pressure, and flow field measurements on a 70-deg sphere-cone blunt body geometry in high-enthalpy, CO2 flow has been generated through testing in an expansion tunnel. This database is intended to support development and validation of computational tools and methods to be employed in the design of future Mars missions. The test was conducted in an expansion tunnel in order to avoid uncertainties in the definition of free stream conditions noted in previous studies performed in reflected shock tunnels. Data were obtained across a wide range of test velocity/density conditions that produced various physical phenomena of interest, including laminar and transitional/turbulent boundary layers, non-reacting to completely dissociated post-shock gas composition and shock-layer radiation. Flow field computations were performed at the test conditions and comparisons were made with the experimental data. Based on these comparisons, it is recommended that computational uncertainties on surface heating and pressure, for laminar, reacting-gas environments can be reduced to +/-10% and +/-5%, respectively. However, for flows with turbulence and shock-layer radiation, there were not sufficient validation-quality data obtained in this study to make any conclusions with respect to uncertainties, which highlights the need for further research in these areas.
Enabling parallel simulation of large-scale HPC network systems
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...
2016-04-07
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
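The flit-level, optimistic machinery of CODES/ROSS is far beyond a few lines, but the basic discrete-event loop that both conservative and optimistic engines build on can be sketched. Hop counts and latencies below are arbitrary illustrative values.

```python
import heapq

# Minimal sequential discrete-event simulation: packets traverse a fixed
# number of network links; each hop is an event scheduled into the future.
def simulate(hops, link_latency, injection_times):
    # Event tuples (time, hop_index, packet_id) keep the heap time-ordered.
    events = [(start, 0, pid) for pid, start in enumerate(injection_times)]
    heapq.heapify(events)
    arrival = {}
    while events:
        t, hop, pid = heapq.heappop(events)
        if hop == hops:
            arrival[pid] = t                     # packet exits the network
        else:
            heapq.heappush(events, (t + link_latency, hop + 1, pid))
    return arrival

# Two packets injected at t=0 and t=2 crossing 3 links of latency 5:
print(simulate(3, 5, [0, 2]))                    # {0: 15, 1: 17}
```

A parallel optimistic engine like ROSS distributes this event queue across processes and speculatively executes events out of global time order, rolling back when a straggler event arrives, which is what lets it scale where a serial or conservatively synchronized loop cannot.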
Enabling parallel simulation of large-scale HPC network systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.
Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks usedmore » in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations« less
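The optimistic parallel approach described above builds on the basic event-scheduling loop of sequential discrete-event simulation. As an illustrative sketch only (not taken from CODES or ROSS; the model, handler names, and parameters are invented), a minimal sequential event loop for a network-like model could look like:

```python
import heapq

def run_simulation(initial_events, handlers, end_time):
    """Minimal sequential discrete-event loop: pop the earliest event,
    invoke its handler, and schedule any events the handler returns."""
    queue = list(initial_events)          # entries: (time, seq, kind, data)
    heapq.heapify(queue)
    seq = len(queue)                      # tie-breaker for equal timestamps
    processed = []
    while queue:
        time, _, kind, data = heapq.heappop(queue)
        if time > end_time:
            break
        processed.append((time, kind))
        for delay, new_kind, new_data in handlers[kind](time, data):
            seq += 1
            heapq.heappush(queue, (time + delay, seq, new_kind, new_data))
    return processed

# Toy model: a packet hops across three switches, 1.5 time units per hop.
def hop(now, hops_left):
    return [(1.5, "hop", hops_left - 1)] if hops_left > 0 else []

events = [(0.0, 0, "hop", 3)]
trace = run_simulation(events, {"hop": hop}, end_time=10.0)
# trace records each hop event in timestamp order
```

An optimistic engine such as ROSS differs by letting parallel processes execute events speculatively and roll back on out-of-order arrivals, but the core schedule-and-dispatch loop is the same.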
Argobots: A Lightweight Low-Level Threading and Tasking Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan; ...
2017-10-24
In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, are either too specific to applications or architectures or are not as powerful or flexible. In this article, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with providing a rich set of controls to allow specialization by the user or high-level programming model. Here, we describe the design, implementation, and optimization of Argobots and present integrations with three example high-level models: OpenMP, MPI, and co-located I/O service. Evaluations show that (1) Argobots outperforms existing generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency hiding capabilities; and (4) I/O service with Argobots reduces interference with co-located applications, achieving performance competitive with that of the Pthreads version.
Flywheel-Based Fast Charging Station - FFCS for Electric Vehicles and Public Transportation
NASA Astrophysics Data System (ADS)
Gabbar, Hossam A.; Othman, Ahmed M.
2017-08-01
This paper presents a novel flywheel-based fast charging station (FFCS) for high-performance, profitable charging infrastructure for public electric buses. Design criteria for fast charging stations are provided, and the station supports both private and public charging frameworks. A flywheel energy storage system provides advanced energy storage for the charging station, enabling clean public transportation with electric buses and reducing GHG emissions, including CO2. Integrated modelling and management in the station is performed by a decision-based control platform that coordinates the power flows between the fast chargers, the flywheel storage system, photovoltaic cells, and the grid connection. There is a design trade-off between the storage capacity rating of the flywheel system and the power rating of the grid connection.
Dynamically Reconfigurable Systolic Array Accelerator
NASA Technical Reports Server (NTRS)
Dasu, Aravind; Barnes, Robert
2012-01-01
A polymorphic systolic array framework has been developed that works in conjunction with an embedded microprocessor on a field-programmable gate array (FPGA), allowing dynamic and complementary scaling of acceleration levels for two algorithms active concurrently on the FPGA. Use is made of systolic arrays and hardware-software co-design to obtain an efficient multi-application acceleration system. The flexible and simple framework allows hosting of a broader range of algorithms and is extendable to more complex applications in the area of aerospace embedded systems. FPGA chips can be responsive to real-time demands for changing application needs, but only if the electronic fabric can respond fast enough. This systolic array framework allows rapid partial and dynamic reconfiguration of the chip in response to real-time needs for scalability and adaptability of executables.
Implicit measures: A normative analysis and review.
De Houwer, Jan; Teige-Mocigemba, Sarah; Spruyt, Adriaan; Moors, Agnes
2009-05-01
Implicit measures can be defined as outcomes of measurement procedures that are caused in an automatic manner by psychological attributes. To establish that a measurement outcome is an implicit measure, one should examine (a) whether the outcome is causally produced by the psychological attribute it was designed to measure, (b) the nature of the processes by which the attribute causes the outcome, and (c) whether these processes operate automatically. This normative analysis provides a heuristic framework for organizing past and future research on implicit measures. The authors illustrate the heuristic function of their framework by using it to review past research on the 2 implicit measures that are currently most popular: effects in implicit association tests and affective priming tasks.
NASA Astrophysics Data System (ADS)
Rieker, G. B.; Jeffries, J. B.; Hanson, R. K.
2009-01-01
A tunable diode laser (TDL) is used to measure the absorption spectra of the R46 through R54 transitions of the 20012←00001 band of CO2 near 2.0 μm (5000 cm-1) at room temperature and pressures to 10 atm (densities to 9.2 amagat). Spectra are recorded using direct absorption spectroscopy and wavelength modulation spectroscopy with second-harmonic detection (WMS-2f) in a mixture containing 11% CO2 in air. The direct absorption spectra are influenced by non-Lorentzian effects including finite-duration collisions which perturb far-wing absorption, and an empirical χ-function correction to the Voigt line shape is shown to greatly reduce error in the spectral model. WMS-2f spectra are shown to be at least a factor of four less influenced by non-Lorentzian effects in this region, making this approach more resistant to errors in the far-wing line shape model and allowing a comparison between the spectral parameters of HITRAN and a new database which includes pressure-induced shift coefficients. The implications of these measurements for practical, high-pressure CO2 sensor design are discussed.
FunCoup 3.0: database of genome-wide functional coupling networks
Schmitt, Thomas; Ogris, Christoph; Sonnhammer, Erik L. L.
2014-01-01
We present an update of the FunCoup database (http://FunCoup.sbc.su.se) of functional couplings, or functional associations, between genes and gene products. Identifying these functional couplings is an important step in the understanding of higher level mechanisms performed by complex cellular processes. FunCoup distinguishes between four classes of couplings: participation in the same signaling cascade, participation in the same metabolic process, co-membership in a protein complex and physical interaction. For each of these four classes, several types of experimental and statistical evidence are combined by Bayesian integration to predict genome-wide functional coupling networks. The FunCoup framework has been completely re-implemented to allow for more frequent future updates. It contains many improvements, such as a regularization procedure to automatically downweight redundant evidences and a novel method to incorporate phylogenetic profile similarity. Several datasets have been updated and new data have been added in FunCoup 3.0. Furthermore, we have developed a new Web site, which provides powerful tools to explore the predicted networks and to retrieve detailed information about the data underlying each prediction. PMID:24185702
Yeung, Daniel; Boes, Peter; Ho, Meng Wei; Li, Zuofeng
2015-05-08
Image-guided radiotherapy (IGRT), based on radiopaque markers placed in the prostate gland, was used for proton therapy of prostate patients. Orthogonal X-rays and the IBA Digital Image Positioning System (DIPS) were used for setup correction prior to treatment and were repeated after treatment delivery. Following a rationale for margin estimates similar to that of van Herk (1), the daily post-treatment DIPS data were analyzed to determine if an adaptive radiotherapy plan was necessary. A Web application using ASP.NET MVC5, Entity Framework, and an SQL database was designed to automate this process. The designed features included state-of-the-art Web technologies, a domain model closely matching the workflow, a database supporting concurrency and data mining, access to the DIPS database, secured user access and roles management, and graphing and analysis tools. The Model-View-Controller (MVC) paradigm allowed clean domain logic, unit testing, and extensibility. Client-side technologies, such as jQuery, jQuery plug-ins, and Ajax, were adopted to achieve a rich user environment and fast response. Data models included patients, staff, treatment fields and records, correction vectors, DIPS images, and association logics. Data entry, analysis, workflow logics, and notifications were implemented. The system effectively modeled the clinical workflow and IGRT process.
Effect of coenzyme Q10 supplementation on heart failure: a meta-analysis
Thompson-Paul, Angela M; Bazzano, Lydia A
2013-01-01
Background: Coenzyme Q10 (CoQ10; also called ubiquinone) is an antioxidant that has been postulated to improve functional status in congestive heart failure (CHF). Several randomized controlled trials have examined the effects of CoQ10 on CHF with inconclusive results. Objective: The objective of this meta-analysis was to evaluate the impact of CoQ10 supplementation on the ejection fraction (EF) and New York Heart Association (NYHA) functional classification in patients with CHF. Design: A systematic review of the literature was conducted by using databases including MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, and manual examination of references from selected studies. Studies included were randomized controlled trials of CoQ10 supplementation that reported the EF or NYHA functional class as a primary outcome. Information on participant characteristics, trial design and duration, treatment, dose, control, EF, and NYHA classification was extracted by using a standardized protocol. Results: Supplementation with CoQ10 resulted in a pooled mean net change of 3.67% (95% CI: 1.60%, 5.74%) in the EF and −0.30 (95% CI: −0.66, 0.06) in the NYHA functional class. Subgroup analyses showed significant improvement in EF for crossover trials, trials with treatment duration ≤12 wk, studies published before 1994, and studies with a dose ≤100 mg CoQ10/d and in patients with less severe CHF. These subgroup analyses should be interpreted cautiously because of the small number of studies and patients included in each subgroup. Conclusions: Pooled analyses of available randomized controlled trials suggest that CoQ10 may improve the EF in patients with CHF. Additional well-designed studies that include more diverse populations are needed. PMID:23221577
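The pooled mean net change reported above comes from standard meta-analytic pooling across trials. As a hedged illustration (the per-trial effects and standard errors below are invented, not taken from the paper, and the paper's exact pooling model may differ), a fixed-effect inverse-variance pooled estimate with a 95% CI can be computed as:

```python
import math

def pooled_fixed_effect(effects, std_errors):
    """Fixed-effect inverse-variance meta-analysis: weight each study's
    effect by 1/SE^2 and return the pooled effect with its 95% CI."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-trial net changes in EF (%) and their standard errors.
effects = [4.2, 2.8, 5.1]
ses = [1.5, 1.2, 2.0]
est, (lo, hi) = pooled_fixed_effect(effects, ses)
# Precisely estimated trials (small SE) dominate the pooled result.
```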
Greenhalgh, Trisha; Fahy, Nick
2015-09-21
The 2014 UK Research Excellence Framework (REF2014) generated a unique database of impact case studies, each describing a body of research and impact beyond academia. We sought to explore the nature and mechanism of impact in a sample of these. The study design was manual content analysis of a large sample of impact case studies (producing mainly quantitative data), plus in-depth interpretive analysis of a smaller sub-sample (for qualitative detail), thereby generating both breadth and depth. For all 162 impact case studies submitted to sub-panel A2 in REF2014, we extracted data on study design(s), stated impacts and audiences, mechanisms of impact, and efforts to achieve impact. We analysed four case studies (selected as exemplars of the range of approaches to impact) in depth, including contacting the authors for their narratives of impact efforts. Most impact case studies described quantitative research (most commonly, trials) and depicted a direct, linear link between research and impact. Research was said to have influenced a guideline in 122 case studies, changed policy in 88, changed practice in 84, improved morbidity in 44 and reduced mortality in 25. Qualitative and participatory research designs were rare, and only one case study described a co-production model of impact. Eighty-two case studies described strong and ongoing linkages with policymakers, but only 38 described targeted knowledge translation activities. In 40 case studies, no active efforts to achieve impact were described. Models of good implementation practice were characterised by an ethical commitment by researchers, strong institutional support and a proactive, interdisciplinary approach to impact activities. REF2014 both inspired and documented significant efforts by UK researchers to achieve impact. But in contrast with the published evidence on research impact (which depicts much as occurring indirectly through non-linear mechanisms), this sub-panel seems to have captured mainly direct and relatively short-term impacts one step removed from patient outcomes. Limited impacts on morbidity and mortality, and researchers' relatively low emphasis on the processes and interactions through which indirect impacts may occur, are concerns. These findings have implications for multi-stakeholder research collaborations such as UK National Institute for Health Research Collaborations for Leadership in Applied Health Research and Care, which are built on non-linear models of impact.
Kim, Ki-Joong; Lu, Ping; Culp, Jeffrey T; Ohodnicki, Paul R
2018-02-23
Integration of optical fiber with sensitive thin films offers great potential for the realization of novel chemical sensing platforms. In this study, we present a simple design strategy and high performance of nanoporous metal-organic framework (MOF) based optical gas sensors, which enables detection of a wide range of concentrations of small molecules based upon extremely small differences in refractive indices as a function of analyte adsorption within the MOF framework. Thin and compact MOF films can be uniformly formed and tightly bound on the surface of etched optical fiber through a simple solution method which is critical for manufacturability of MOF-based sensor devices. The resulting sensors show high sensitivity/selectivity to CO2 gas relative to other small gases (H2, N2, O2, and CO) with rapid (
Designing, Describing and Disseminating New Materials by using the Network Topology Approach.
Öhrström, Lars
2016-09-19
This Concept article describes how network topology analysis is applied to different fields of solid-state chemistry. Its usefulness is demonstrated by examples from metal-organic frameworks, group 14 allotropes and related compounds, ice polymorphs, zeolites, supramolecular (organic) solid-state chemistry, Zintl phases, and cathode materials for Li-ion batteries.
A framework for regional primary health care to organise actions to address health inequities.
Freeman, Toby; Javanparast, Sara; Baum, Fran; Ziersch, Anna; Mackean, Tamara
2018-06-01
Regional primary health-care organisations plan, co-ordinate, and fund some primary health-care services in a designated region. This article presents a framework for examining the equity performance of regional primary health-care organisations, and applies it to Australian Medicare Locals (funded from 2011 to 2015). The framework was developed based on theory, literature, and researcher deliberation. Data were drawn from Medicare Local documents, an online survey of 210 senior Medicare Local staff, and interviews with 50 survey respondents. The framework encompassed equity in planning, collection of equity data, community engagement, and strategies to address equity in access, health outcomes, and social determinants of health. When the framework was applied to Medicare Locals, their inclusion of equity as a goal, collection of equity data, community engagement, and actions improving equity of access were strong, but there were gaps in broader advocacy, and strategies to address social determinants of health, and equity in quality of care. The equity framework allows a platform for advancing knowledge and international comparison of the health equity efforts of regional primary health-care organisations.
Demonstrating the Open Data Repository's Data Publisher: The CheMin Database
NASA Astrophysics Data System (ADS)
Stone, N.; Lafuente, B.; Bristow, T.; Pires, A.; Keller, R. M.; Downs, R. T.; Blake, D.; Dateo, C. E.; Fonda, M.
2018-04-01
The Open Data Repository's Data Publisher aims to provide an easy-to-use software tool that will allow researchers to create and publish database templates and related data. The CheMin Database developed using this framework is shown as an example.
Prediction of novel synthetic pathways for the production of desired chemicals.
Cho, Ayoun; Yun, Hongseok; Park, Jin Hwan; Lee, Sang Yup; Park, Sunwon
2010-03-28
Several methods have been developed for the prediction of synthetic metabolic pathways leading to the production of desired chemicals. In these approaches, novel pathways were predicted based on chemical structure changes, enzymatic information, and/or reaction mechanisms, but approaches that generate a huge number of predicted results are difficult to apply to real experiments. Also, some of these methods focus on specific pathways and thus cannot be extended to the whole metabolism. In the present study, we propose a system framework employing a retrosynthesis model with a prioritization scoring algorithm. This new strategy allows deducing novel promising pathways for the synthesis of a desired chemical, together with information on the enzymes involved, based on structural changes and reaction mechanisms present in the system database. The prioritization scoring algorithm, employing the Tanimoto coefficient and the group contribution method, allows examination of structurally qualified pathways to recognize which pathway is more appropriate. In addition, new concepts of binding site covalence, estimation of pathway distance and organism specificity were taken into account to identify the best synthetic pathway. Parameters of these factors can be evolutionarily optimized when a newly proven synthetic pathway is registered. As proofs of concept, novel synthetic pathways for the production of isobutanol, 3-hydroxypropionate, and butyryl-CoA were predicted. The predictions show high reliability: experimentally verified synthetic pathways were listed within the top 0.089% of the identified pathway candidates. It is expected that the system framework developed in this study will be useful for the in silico design of novel metabolic pathways to be employed for the efficient production of chemicals, fuels and materials.
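The prioritization described above relies in part on the Tanimoto coefficient between molecular fingerprints. As a sketch under the usual bit-set definition (the fingerprint indices and candidate names below are made up for illustration, not taken from the paper):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets:
    |A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(fp_a), set(fp_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical on-bit indices for a target molecule and two candidates.
target = {1, 4, 7, 9, 12}
cand1 = {1, 4, 7, 12}        # shares 4 of 5 target bits
cand2 = {2, 4, 8, 13}        # shares only bit 4

# Rank candidates by structural similarity to the target, best first.
ranked = sorted([("cand1", cand1), ("cand2", cand2)],
                key=lambda kv: tanimoto(target, kv[1]), reverse=True)
```

In a retrosynthesis setting, scoring intermediates this way steers the search toward pathways whose products stay structurally close to the desired chemical.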
Svanborg, Per; Eliasson, Alf; Stenport, Victoria
The purpose of this study was to evaluate the fit of additively manufactured cobalt-chromium and titanium and CNC-milled titanium frameworks before and after ceramic veneering. Ten stone casts simulating an edentulous maxilla provided with six abutment analogs were produced. For each stone cast, one additively manufactured cobalt-chromium framework (AM CoCr) and one titanium framework (AM Ti) were fabricated. The fit was analyzed with a coordinate measuring machine in three dimensions (x, y, and z axes) using best-fit virtual matching of center point coordinates, before and after ceramic veneering. CNC-milled titanium frameworks (CNC Ti) and earlier results from CNC-milled cobalt-chromium frameworks (CNC CoCr) were used for comparison. All frameworks presented minor misfit before and after veneering in the horizontal plane (x- and y-axes) between 2.9 and 13.5 μm and in the vertical plane (z-axis) between 1.6 and 5.4 μm. Ceramic veneering affected the fit of all groups of frameworks. Both AM Ti and AM CoCr presented significantly smaller distortion in the vertical plane compared with the CNC-milled frameworks. Implant-supported frameworks can be produced in either Ti or CoCr using either CNC milling or additive manufacturing with a fit well within the range of 20 μm in the horizontal plane and 10 μm in the vertical plane. The fit of frameworks of both materials and production techniques is affected by the ceramic veneering procedure to a small extent.
Rao, Anand B; Rubin, Edward S
2002-10-15
Capture and sequestration of CO2 from fossil fuel power plants is gaining widespread interest as a potential method of controlling greenhouse gas emissions. Performance and cost models of an amine (MEA)-based CO2 absorption system for postcombustion flue gas applications have been developed and integrated with an existing power plant modeling framework that includes multipollutant control technologies for other regulated emissions. The integrated model has been applied to study the feasibility and cost of carbon capture and sequestration at both new and existing coal-burning power plants. The cost of carbon avoidance was shown to depend strongly on assumptions about the reference plant design, details of the CO2 capture system design, interactions with other pollution control systems, and method of CO2 storage. The CO2 avoidance cost for retrofit systems was found to be generally higher than for new plants, mainly because of the higher energy penalty resulting from less efficient heat integration as well as site-specific difficulties typically encountered in retrofit applications. For all cases, a small reduction in CO2 capture cost was afforded by the SO2 emission trading credits generated by amine-based capture systems. Efforts are underway to model a broader suite of carbon capture and sequestration technologies for more comprehensive assessments in the context of multipollutant environmental management.
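The cost of CO2 avoidance discussed above is conventionally defined from the cost of electricity (COE) and emission rates of the capture plant versus a reference plant without capture. A hedged sketch with invented numbers (not values from the paper):

```python
def cost_of_co2_avoided(coe_capture, coe_ref, co2_ref, co2_capture):
    """Cost of CO2 avoided ($/tonne): the extra cost of electricity per
    MWh divided by the reduction in CO2 emitted per MWh."""
    return (coe_capture - coe_ref) / (co2_ref - co2_capture)

# Hypothetical values: COE in $/MWh, emission rates in tonnes CO2/MWh.
cost = cost_of_co2_avoided(coe_capture=62.0, coe_ref=46.0,
                           co2_ref=0.80, co2_capture=0.11)
```

Because the denominator uses emissions actually avoided rather than captured, the energy penalty of the capture system (which raises emissions per net MWh) is reflected in the result, which is why retrofit cases with poorer heat integration score worse.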
de Jong, Maarten; Chen, Wei; Notestine, Randy; Persson, Kristin; Ceder, Gerbrand; Jain, Anubhav; Asta, Mark; Gamst, Anthony
2016-10-03
Materials scientists increasingly employ machine or statistical learning (SL) techniques to accelerate materials discovery and design. Such pursuits benefit from pooling training data across, and thus being able to generalize predictions over, k-nary compounds of diverse chemistries and structures. This work presents a SL framework that addresses challenges in materials science applications, where datasets are diverse but of modest size, and extreme values are often of interest. Our advances include the application of power or Hölder means to construct descriptors that generalize over chemistry and crystal structure, and the incorporation of multivariate local regression within a gradient boosting framework. The approach is demonstrated by developing SL models to predict bulk and shear moduli (K and G, respectively) for polycrystalline inorganic compounds, using 1,940 compounds from a growing database of calculated elastic moduli for metals, semiconductors and insulators. The usefulness of the models is illustrated by screening for superhard materials. PMID:27694824
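The descriptors above generalize per-element properties across compounds via power (Hölder) means. As an illustrative sketch (the property values below are invented, and this is not the authors' implementation), the Hölder mean of order p recovers the harmonic, geometric, arithmetic, and quadratic means as special cases:

```python
import math

def holder_mean(values, p):
    """Power (Hölder) mean of order p over positive values: p=-1 is the
    harmonic mean, p=1 arithmetic, p=2 quadratic; the p -> 0 limit is
    the geometric mean."""
    n = len(values)
    if p == 0:
        return math.exp(sum(math.log(v) for v in values) / n)
    return (sum(v ** p for v in values) / n) ** (1.0 / p)

# Hypothetical per-element melting points (K) for a ternary compound;
# sweeping p yields a family of candidate descriptors for the compound.
vals = [933.0, 1687.0, 505.0]
descriptors = {p: holder_mean(vals, p) for p in (-1, 0, 1, 2)}
# By the power-mean inequality, the means increase monotonically with p.
```

Letting the learning algorithm choose among orders of p gives descriptors that emphasize either the smallest or the largest constituent property, which is useful when extreme values drive the target quantity.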
Assessment of the SFC database for analysis and modeling
NASA Technical Reports Server (NTRS)
Centeno, Martha A.
1994-01-01
SFC is one of the four clusters that make up the Integrated Work Control System (IWCS), which will integrate the shuttle processing databases at Kennedy Space Center (KSC). The IWCS framework will enable communication among the four clusters and add new data collection protocols. The Shop Floor Control (SFC) module has been operational for two and a half years; however, at this stage, automatic links to the other three modules have not yet been implemented, except for a partial link to IOS (CASPR). SFC revolves around a DB/2 database with PFORMS acting as the database management system (DBMS). PFORMS is an off-the-shelf DB/2 application that provides a set of data entry screens and query forms. The main dynamic entity in the SFC and IOS database is a task; thus, the physical storage location and update privileges are driven by the status of the WAD. As we explored the SFC values, we realized that there was much to do before actually engaging in continuous analysis of the SFC data. Halfway into this effort, we realized that full-scale analysis would have to be a future third phase of this effort. So, we concentrated on getting to know the contents of the database and on establishing an initial set of tools to start the continuous analysis process. Specifically, we set out to: (1) provide specific procedures for statistical models, so as to enhance the TP-OAO office analysis and modeling capabilities; (2) design a data exchange interface; (3) prototype the interface to provide inputs to SCRAM; and (4) design a modeling database. These objectives were set with the expectation that, if met, they would provide former TP-OAO engineers with tools that would help them demonstrate the importance of process-based analyses. The latter, in turn, will help them obtain the cooperation of various organizations in charting out their individual processes.
JBioWH: an open-source Java framework for bioinformatics data integration
Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor
2013-01-01
The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database schema and includes Java API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU-intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh PMID:23846595
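The "streamlined task-specific data sets" described above can be pictured with a toy warehouse query. The schema, tables, and rows below are hypothetical stand-ins (JBioWH itself uses MySQL and a far richer schema); the point is only that once heterogeneous sources are parsed into one relational store, cross-source questions become single local joins. SQLite is used here purely so the sketch is self-contained.

```python
import sqlite3

# Hypothetical, simplified warehouse: records pulled from two sources
# (labeled "NCBI" and "KEGG" for illustration) keyed by a shared gene
# symbol, then queried with one local join.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE gene    (symbol TEXT PRIMARY KEY, description TEXT, source TEXT);
CREATE TABLE pathway (symbol TEXT, pathway_name TEXT, source TEXT);
INSERT INTO gene    VALUES ('TP53', 'tumor protein p53', 'NCBI');
INSERT INTO pathway VALUES ('TP53', 'p53 signaling', 'KEGG');
""")
rows = con.execute("""
    SELECT g.symbol, g.description, p.pathway_name
    FROM gene g JOIN pathway p ON p.symbol = g.symbol
""").fetchall()
# rows -> [('TP53', 'tumor protein p53', 'p53 signaling')]
```

In the real framework the equivalent of the `INSERT` step is performed by the per-database parser functions, and queries run against the MySQL server.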
Dilger, Mathias Georg; Jovanović, Tanja; Voigt, Kai-Ingo
2017-08-01
Practice and theory have proven the relevance of energy co-operatives for civic participation in the energy turnaround. However, due to still-low awareness and changing regulation, there seems to be unexploited potential in utilizing the legal form 'co-operative' in this context. The aim of this study is therefore to investigate the implementation of crowdfunding in the business model of energy co-operatives in order to cope with the mentioned challenges. Based on a theoretical framework, we derive a Business Model Innovation (BMI) through crowdfunding, including synergies and differences. A qualitative study design, particularly a multiple-case study of energy co-operatives, was chosen to validate the BMI and to reveal barriers. The results show that although most co-operatives are not familiar with crowdfunding, there is strong potential in opening up predominantly local structures to a broader group of members. Building on this, equity-based crowdfunding is revealed to be suitable for energy co-operatives as a BMI and to address other challenges in the same way. Copyright © 2017 Elsevier Ltd. All rights reserved.
LSD: Large Survey Database framework
NASA Astrophysics Data System (ADS)
Juric, Mario
2012-09-01
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures.
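The positional cross-matching that LSD accelerates can be stated compactly. The sketch below is a deliberately naive O(n·m) matcher over hypothetical coordinates; LSD's contribution is doing the same operation with spatial/temporal indexing and parallel sweeps so it scales to billions of rows, which this toy version does not attempt.

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions
    (haversine form, numerically stable at small separations)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    s = math.sin((dec2 - dec1) / 2) ** 2 + \
        math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2
    return math.degrees(2 * math.asin(math.sqrt(s)))

def crossmatch(cat_a, cat_b, radius_deg=1.0 / 3600):
    """Naive all-pairs positional match within radius_deg (default 1")."""
    return [(i, j) for i, (ra_a, dec_a) in enumerate(cat_a)
                   for j, (ra_b, dec_b) in enumerate(cat_b)
                   if ang_sep_deg(ra_a, dec_a, ra_b, dec_b) <= radius_deg]

a = [(10.0, 20.0), (180.0, -45.0)]        # hypothetical (RA, Dec) in deg
b = [(10.0001, 20.0001), (90.0, 0.0)]
matches = crossmatch(a, b)                # only the first pair is within 1"
```

A production matcher would first bucket sources into sky cells (e.g. HEALPix) so only neighboring cells are compared.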
da Silva, Kátia Regina; Costa, Roberto; Crevelari, Elizabeth Sartori; Lacerda, Marianna Sobral; de Moraes Albertini, Caio Marcos; Filho, Martino Martinelli; Santana, José Eduardo; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo; Barros, Jacson V
2013-01-01
The ability to apply standard and interoperable solutions for implementing and managing medical registries, as well as to aggregate, reproduce, and access data sets from legacy formats and platforms to advanced standard formats and operating systems, is crucial for both clinical healthcare and biomedical research settings. Our study describes a reproducible, highly scalable, standard framework for a device registry implementation addressing both local data quality components and global linking problems. We developed a device registry framework involving the following steps: (1) Data standards definition and representation of the research workflow, (2) Development of electronic case report forms using REDCap (Research Electronic Data Capture), (3) Data collection according to the clinical research workflow, (4) Data augmentation by enriching the registry database with local electronic health records, governmental databases and linked open data collections, (5) Data quality control and (6) Data dissemination through the registry Web site. Our registry adopted all applicable standardized data elements proposed by the American College of Cardiology/American Heart Association Clinical Data Standards, as well as variables derived from cardiac device randomized trials and the Clinical Data Interchange Standards Consortium. Local interoperability was established between REDCap and data derived from the Electronic Health Record system. The original data set was also augmented by incorporating the reimbursed values paid by the Brazilian government during a hospitalization for pacemaker implantation. By linking our registry to the open data collection repository Linked Clinical Trials (LinkedCT) we found 130 clinical trials that are potentially correlated with our pacemaker registry. This study demonstrates how standard and reproducible solutions can be applied in the implementation of medical registries to constitute a re-usable framework. Such an approach has the potential to facilitate data integration between healthcare and research settings, and is also a useful framework for other biomedical registries.
Matsuda, Fumio; Nakabayashi, Ryo; Sawada, Yuji; Suzuki, Makoto; Hirai, Masami Y.; Kanaya, Shigehiko; Saito, Kazuki
2011-01-01
A novel framework for automated elucidation of metabolite structures in liquid chromatography–mass spectrometry metabolome data was constructed by integrating databases. High-resolution tandem mass spectra automatically acquired from each metabolite signal were used for database searches. Three distinct databases, KNApSAcK, ReSpect, and the PRIMe standard compound database, were employed for the structural elucidation. The outputs were retrieved using the CAS metabolite identifier for identification and putative annotation. A simple metabolite ontology system was also introduced to attain putative characterization of the metabolite signals. The automated method was applied to the metabolome data sets obtained from the rosette leaves of 20 Arabidopsis accessions. Phenotypic variations in novel Arabidopsis metabolites among these accessions could be investigated using this method. PMID:22645535
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polyakova, I. N.; Poznyak, A. L.; Sergienko, V. S.
2006-07-15
The synthesis and X-ray diffraction study of three Ca[Co(Nta)X] . nH{sub 2}O complexes [X{sup -} = Cl, n = 2.3 (I); X{sup -} = Br, n = 2 (II); and X{sup -} = NCS, n = 2 (III)] are performed. The main structural units of crystals I-III are the [CoX(Nta)]{sup 2-} anionic complexes and hydrated Ca{sup 2+} cations. The anionic complexes have similar structures. The coordination of the Co{sup 2+} atom in the shape of a trigonal bipyramid is formed by N + 3O atoms of the Nta{sup 3-} ligand and the X{sup -} anion in the trans position with respect to N. In structures I-III, the Co-O and Co-N bond lengths lie in the ranges 1.998-2.032 and 2.186-2.201 A, respectively. The Co-X bond lengths are 2.294 (I), 2.436 and 2.445 (II), and 1.982 A (III). The environments of the Ca{sup 2+} cations include oxygen atoms of one or two water molecules and six or seven O(Nta) atoms with the coordination number of 9 in I or 8 in II and III. The Ca-O(Nta) bonds form a three-dimensional framework in I or layers in II and III. Water molecules are involved in the hydrogen bonds O(w)-H...O(Nta), O(w)-H...X, and O(w)-H...O(w). Structural data for crystals I-III are deposited with the Cambridge Structural Database (CCDC nos. 287 814-287 816).
McEachan, Rosemary R C; Giles, Sally J; Sirriyeh, Reema; Watt, Ian S; Wright, John
2012-01-01
Objective The aim of this systematic review was to develop a ‘contributory factors framework’ from a synthesis of empirical work which summarises factors contributing to patient safety incidents in hospital settings. Design A mixed-methods systematic review of the literature was conducted. Data sources Electronic databases (Medline, PsycInfo, ISI Web of Knowledge, CINAHL and EMBASE), article reference lists, patient safety websites, registered study databases and author contacts. Eligibility criteria Studies were included that reported data from primary research in secondary care aiming to identify the contributory factors to error or threats to patient safety. Results 1502 potential articles were identified. 95 papers (representing 83 studies) which met the inclusion criteria were included, and 1676 contributory factors extracted. Initial coding of contributory factors by two independent reviewers resulted in 20 domains (eg, team factors, supervision and leadership). Each contributory factor was then coded by two reviewers to one of these 20 domains. The majority of studies identified active failures (errors and violations) as factors contributing to patient safety incidents. Individual factors, communication, and equipment and supplies were the other most frequently reported factors within the existing evidence base. Conclusions This review has culminated in an empirically based framework of the factors contributing to patient safety incidents. This framework has the potential to be applied across hospital settings to improve the identification and prevention of factors that cause harm to patients. PMID:22421911
Thermodynamic database for the Co-Pr system.
Zhou, S H; Kramer, M J; Meng, F Q; McCallum, R W; Ott, R T
2016-03-01
In this article, we describe data on: (1) compositions of both as-cast and heat-treated specimens, summarized in Table 1; (2) the determined enthalpy of mixing of the liquid phase, listed in Table 2; and (3) a thermodynamic database of the Co-Pr system in TDB format for the research article entitled 'Chemical partitioning for the Co-Pr system: First-principles, experiments and energetic calculations to investigate the hard magnetic phase W.'
NASA Astrophysics Data System (ADS)
Gould, Jamie A.; Athwal, Harprit Singh; Blake, Alexander J.; Lewis, William; Hubberstey, Peter; Champness, Neil R.; Schröder, Martin
2017-01-01
A family of Cu(II)-based metal-organic frameworks (MOFs) has been synthesized using three pyridyl-isophthalate ligands, H2L1 (4'-(pyridin-4-yl)biphenyl-3,5-dicarboxylic acid), H2L2 (4''-(pyridin-4-yl)-1,1':4',1''-terphenyl-3,5-dicarboxylic acid) and H2L3 (5-[4-(pyridin-4-yl)naphthalen-1-yl]benzene-1,3-dicarboxylic acid). Although in each case the pyridyl-isophthalate ligands adopt the same pseudo-octahedral [Cu2(O2CR)4N2] paddlewheel coordination modes, the resulting frameworks are structurally diverse, particularly in the case of the complex of Cu(II) with H2L3, which leads to three distinct supramolecular isomers, each derived from Kagomé and square nets. In contrast to [Cu(L2)] and the isomers of [Cu(L3)], [Cu(L1)] exhibits permanent porosity. Thus, the gas adsorption properties of [Cu(L1)] were investigated with N2, CO2 and H2, and the material exhibits an isosteric heat of adsorption competitive with leading MOF sorbents for CO2. [Cu(L1)] displays high H2 adsorption, with the density in the pores approaching that of liquid H2. This article is part of the themed issue 'Coordination polymers and metal-organic frameworks: materials by design'.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
... the Panel's draft report on EPA's draft Accounting Framework for Biogenic CO2 Emissions from...'s draft report on EPA's draft Accounting Framework for Biogenic CO2 Emissions from Stationary... Radiation requested SAB review of EPA's draft accounting framework. As noticed in 76 FR 61100-61101, the SAB...
ERIC Educational Resources Information Center
Patrizi, Larry A.
2010-01-01
This study examined the relationship between student satisfaction with faculty characteristics via the community of inquiry (CoI) framework. The CoI framework includes cognitive presence, social presence, and teaching presence. Subsets of teaching presence were explored more deeply by including subsets facilitation of discourse and direct…
Sandra, Fabien; Depardieu, Martin; Mouline, Zineb; Vignoles, Gérard L; Iwamoto, Yuji; Miele, Philippe; Backov, Rénal; Bernard, Samuel
2016-06-06
A template-assisted polymer-derived ceramic route is investigated for preparing a series of silicoboron carbonitride (Si/B/C/N) foams with a hierarchical pore size distribution and tailorable interconnected porosity. A boron-modified polycarbosilazane was selected to impregnate monolithic silica and carbonaceous templates and, after pyrolysis and template removal, form Si/B/C/N foams. By changing the nature of the hard template and controlling the quantity of polymer impregnated, controlled micropore/macropore distributions with mesoscopic cell windows are generated. Specific surface areas from 29 to 239 m(2) g(-1) and porosities from 51 to 77 % are achieved. These foams combine low density with thermal insulation and relatively good thermostructural stability. Their particular structure allowed the in situ growth of metal-organic frameworks (MOFs) directly within the open-cell structure. The MOFs added microporosity to the resulting Si/B/C/N@MOF composite foams, increasing the specific surface area and providing a CO2 uptake of 2.2 %. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Jiménez-Solomon, Oscar G; Méndez-Bustos, Pablo; Swarbrick, Margaret; Díaz, Samantha; Silva, Sissy; Kelley, Maura; Duke, Steve; Lewis-Fernández, Roberto
2016-09-01
People with psychiatric disabilities experience substantial economic exclusion, which hinders their ability to achieve recovery and wellness. The purpose of this article is to describe a framework for a peer-supported economic empowerment intervention grounded in empirical literature and designed to enhance financial wellness. The authors followed a 3-step process, including (a) an environmental scan of scientific literature, (b) a critical review of relevant conceptual frameworks, and (c) the design of an intervention logic framework based on (a) and (b), the programmatic experience of the authors, and input from peer providers. We identified 6 peer provider functions to support individuals with psychiatric disabilities to overcome economic inclusion barriers, achieve financial wellness goals, and lessen the psychosocial impact of poverty and dependency. These include (a) engaging individuals in culturally meaningful conversations about life dreams and financial goals, (b) inspiring individuals to reframe self-defeating narratives by sharing personal stories, (c) facilitating a financial wellness action plan, (d) coaching to develop essential financial skills, (e) supporting navigation and utilization of financial and asset-building services, and (f) fostering mutual emotional and social support to achieve financial wellness goals. Financial wellness requires capabilities that depend on gaining access to financial and asset-building supports, and not merely developing financial skills. The proposed framework outlines new roles and competencies for peer providers to help individuals build essential financial capabilities, and address social determinants of mental health and disability. Research is currently underway to pilot-test and refine peer-supported economic empowerment strategies. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
2010-01-01
Background Addressing deficiencies in the dissemination and transfer of research-based knowledge into routine clinical practice is high on the policy agenda both in the UK and internationally. However, there is a lack of clarity between funding agencies as to what represents dissemination. Moreover, the expectations and guidance provided to researchers vary from one agency to another. Against this background, we performed a systematic scoping review to identify and describe any conceptual/organising frameworks that could be used by researchers to guide their dissemination activity. Methods We searched twelve electronic databases (including MEDLINE, EMBASE, CINAHL, and PsycINFO), the reference lists of included studies and of individual funding agency websites to identify potential studies for inclusion. To be included, papers had to present an explicit framework or plan either designed for use by researchers or that could be used to guide dissemination activity. Papers which mentioned dissemination (but did not provide any detail) in the context of a wider knowledge translation framework were excluded. References were screened independently by at least two reviewers; disagreements were resolved by discussion. For each included paper, the source, the date of publication, a description of the main elements of the framework, and whether there was any implicit/explicit reference to theory were extracted. A narrative synthesis was undertaken. Results Thirty-three frameworks met our inclusion criteria, 20 of which were designed to be used by researchers to guide their dissemination activities. Twenty-eight included frameworks were underpinned at least in part by one or more of three different theoretical approaches, namely persuasive communication, diffusion of innovations theory, and social marketing. Conclusions There are currently a number of theoretically-informed frameworks available to researchers that can be used to help guide their dissemination planning and activity. 
Given the current emphasis on enhancing the uptake of knowledge about the effects of interventions into routine practice, funders could consider encouraging researchers to adopt a theoretically-informed approach to their research dissemination. PMID:21092164
A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials
NASA Astrophysics Data System (ADS)
Matouš, Karel; Geers, Marc G. D.; Kouznetsova, Varvara G.; Gillman, Andrew
2017-02-01
Since the beginning of the industrial age, material performance and design have been in the midst of innovation of many disruptive technologies. Today's electronics, space, medical, transportation, and other industries are enriched by development, design and deployment of composite, heterogeneous and multifunctional materials. As a result, materials innovation is now considerably outpaced by other aspects from component design to product cycle. In this article, we review predictive nonlinear theories for multiscale modeling of heterogeneous materials. Deeper attention is given to multiscale modeling in space and to computational homogenization in addressing challenging materials science questions. Moreover, we discuss a state-of-the-art platform in predictive image-based, multiscale modeling with co-designed simulations and experiments that executes on the world's largest supercomputers. Such a modeling framework consists of experimental tools, computational methods, and digital data strategies. Once fully completed, this collaborative and interdisciplinary framework can be the basis of Virtual Materials Testing standards and aids in the development of new material formulations. Moreover, it will decrease the time to market of innovative products.
Privacy Preserving Facial and Fingerprint Multi-biometric Authentication
NASA Astrophysics Data System (ADS)
Anzaku, Esla Timothy; Sohn, Hosik; Ro, Yong Man
Cases of identity theft can be mitigated by the adoption of secure authentication methods. Biohashing and its variants, which utilize secret keys and biometrics, are promising methods for secure authentication; however, their shortcoming is degraded performance under the assumption that the secret keys are compromised. In this paper, we extend the concept of Biohashing to multi-biometrics: facial and fingerprint traits. We chose these traits because they are widely used, yet little research attention has been given to designing privacy-preserving multi-biometric systems with them. Instead of using a single modality (facial or fingerprint), we present a framework for using both modalities. The improved performance of the proposed method, using face and fingerprint together rather than either trait in isolation, is evaluated using two chimerical bimodal databases formed from publicly available facial and fingerprint databases.
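A toy version of the Biohashing step conveys the core idea: project the biometric feature vector onto pseudorandom directions derived from the user's secret key, then binarize. Everything below (feature values, key string, bit length) is hypothetical, and real Biohashing orthonormalizes the projection basis, which this sketch omits.

```python
import random

def biohash(features, secret_key, n_bits=16):
    """Toy BioHash: key-seeded random projections, binarized at zero."""
    rng = random.Random(secret_key)      # the secret key drives the basis
    bits = []
    for _ in range(n_bits):
        direction = [rng.uniform(-1, 1) for _ in features]
        dot = sum(f * d for f, d in zip(features, direction))
        bits.append(1 if dot >= 0 else 0)
    return bits

# Same trait + same key -> same code, while a different key yields a
# different code, so a compromised code can be revoked by reissuing
# the key rather than the biometric itself.
face_vec = [0.2, -1.3, 0.7, 2.1]         # hypothetical feature vector
code1 = biohash(face_vec, "user-secret-key")
code2 = biohash(face_vec, "user-secret-key")
assert code1 == code2
```

In the multi-biometric extension discussed above, a face code and a fingerprint code would each be computed this way and then fused (e.g. concatenated) before matching.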
A Conceptual Model of the Information Requirements of Nursing Organizations
Miller, Emmy
1989-01-01
Three related issues play a role in the identification of the information requirements of nursing organizations. These issues are the current state of computer systems in health care organizations, the lack of a well-defined data set for nursing, and the absence of models representing data and information relevant to clinical and administrative nursing practice. This paper will examine current methods of data collection, processing, and storage in clinical and administrative nursing practice for the purpose of identifying the information requirements of nursing organizations. To satisfy these information requirements, database technology can be used; however, a model for database design is needed that reflects the conceptual framework of nursing and the professional concerns of nurses. A conceptual model of the types of data necessary to produce the desired information will be presented and the relationships among data will be delineated.
NASA Astrophysics Data System (ADS)
McCarty, M.
2009-09-01
The renaissance of the web has driven the development of many new technologies that have forever changed the way we write software. The resulting tools have been applied both to solve problems and to create new ones in a wide range of domains, from monitor-and-control user interfaces to information distribution. This discussion covers which of these technologies are being used in the astronomical computing community, and how. Topics include JavaScript, Cascading Style Sheets, HTML, XML, JSON, RSS, iCalendar, Java, PHP, Python, Ruby on Rails, database technologies, and web frameworks/design patterns.
Partnerships - Working Together to Build The National Map
2004-01-01
Through The National Map, the U.S. Geological Survey (USGS) is working with partners to ensure that current, accurate, and complete base geographic information is available for the Nation. Designed as a network of online digital databases, it provides a consistent geographic data framework for the country and serves as a foundation for integrating, sharing, and using data easily and reliably. It provides public access to high quality geospatial data and information from multiple partners to help inform decisionmaking by resource managers and the public, and to support intergovernmental homeland security and emergency management requirements.
Michel-Sendis, F.; Gauld, I.; Martinez, J. S.; ...
2017-08-02
SFCOMPO-2.0 is the new release of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) database of experimental assay measurements. These measurements are isotopic concentrations from destructive radiochemical analyses of spent nuclear fuel (SNF) samples. We supplement the measurements with design information for the fuel assembly and fuel rod from which each sample was taken, as well as with relevant information on operating conditions and characteristics of the host reactors. These data are necessary for modeling and simulation of the isotopic evolution of the fuel during irradiation. SFCOMPO-2.0 has been developed and is maintained by the OECD NEA under the guidance of the Expert Group on Assay Data of Spent Nuclear Fuel (EGADSNF), which is part of the NEA Working Party on Nuclear Criticality Safety (WPNCS). Significant efforts aimed at establishing a thorough, reliable, publicly available resource for code validation and safety applications have led to the capture and standardization of experimental data from 750 SNF samples from more than 40 reactors. These efforts have resulted in the creation of the SFCOMPO-2.0 database, which is publicly available from the NEA Data Bank. Our paper describes the new database and briefly illustrates applications of SFCOMPO-2.0 for computer code validation, integral nuclear data benchmarking, and uncertainty analysis in nuclear waste package analysis.
Chen, Er-Xia; Fu, Hong-Ru; Lin, Rui; Tan, Yan-Xi; Zhang, Jian
2014-12-24
A cobalt imidazolate (im) framework material [Co(im)2]n was employed as a trimethylamine (TMA) gas sensor, and the [Co(im)2]n sensor can be easily fabricated by using Ag-Pd interdigitated electrodes. Gas sensing measurements indicated that the [Co(im)2]n sensor shows excellent selectivity, high gas response and a low detection limit of 2 ppm to TMA at 75 °C. The good selectivity and high response to TMA of the sensor based on [Co(im)2]n may be attributed to the weak interaction between the TMA molecules and the [Co(im)2]n framework. This makes it a promising candidate for detecting the freshness of fish and seafood.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jie; Usov, Pavel M.; Xu, Wenqian
Metal–organic frameworks (MOFs) have shown great promise in catalysis, mainly due to their high content of active centers, large internal surface areas, tunable pore size, and versatile chemical functionalities. However, it is a challenge to rationally design and construct MOFs that can serve as highly stable and reusable heterogeneous catalysts. Here two new robust 3D porous metal-cyclam-based zirconium MOFs, denoted VPI-100 (Cu) and VPI-100 (Ni), have been prepared by a modulated synthetic strategy. The frameworks are assembled by eight-connected Zr 6 clusters and metallocyclams as organic linkers. Importantly, the cyclam core has accessible axial coordination sites for guest interactions andmore » maintains the electronic properties exhibited by the parent cyclam ring. The VPI-100 MOFs exhibit excellent chemical stability in various organic and aqueous solvents over a wide pH range and show high CO 2 uptake capacity (up to ~9.83 wt% adsorption at 273 K under 1 atm). Moreover, VPI-100 MOFs demonstrate some of the highest reported catalytic activity values (turnover frequency and conversion efficiency) among Zr-based MOFs for the chemical fixation of CO 2 with epoxides, including sterically hindered epoxides. Thus, the MOFs, which bear dual catalytic sites (Zr and Cu/Ni), enable chemistry not possible with the cyclam ligand under the same conditions and can be used as recoverable stable heterogeneous catalysts without losing performance.« less
2017-12-22
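The gravimetric CO2 uptake quoted for the VPI-100 MOFs (~9.83 wt% at 273 K and 1 atm) can be converted into the molar uptake commonly reported for sorbents. A minimal sketch of that conversion, assuming the wt% figure means grams of CO2 per 100 g of sorbent (the function name is illustrative):

```python
CO2_MOLAR_MASS = 44.01  # g/mol

def wt_percent_to_mmol_per_g(wt_percent: float) -> float:
    """Convert a gravimetric uptake (g CO2 per 100 g sorbent) to mmol CO2 per g."""
    grams_co2_per_gram_sorbent = wt_percent / 100.0
    return grams_co2_per_gram_sorbent / CO2_MOLAR_MASS * 1000.0

# The abstract's ~9.83 wt% figure corresponds to roughly 2.23 mmol of CO2 per gram of MOF.
print(round(wt_percent_to_mmol_per_g(9.83), 2))
```

Note that some papers instead report wt% relative to the total (sorbent + gas) mass; that convention would give a slightly higher molar value.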
Fundamental Studies of Crystal Growth of Microporous Materials
NASA Technical Reports Server (NTRS)
Singh, Ramsharan; Doolittle, John, Jr.; Payra, Pramatha; Dutta, Prabir K.; George, Michael A.; Ramachandran, Narayanan; Schoeman, Brian J.
2003-01-01
Microporous materials are framework structures with well-defined porosity, often of molecular dimensions. Zeolites contain aluminum and silicon atoms in their framework and are the most extensively studied of all microporous materials. Framework structures incorporating P, Ga, Fe, Co, Zn, B, Ti, and a host of other elements have also been made. The typical synthesis of microporous materials involves mixing the framework elements (or compounds thereof) in a basic solution, followed by aging in some cases and then heating at elevated temperatures. This process is termed hydrothermal synthesis and involves complex chemical and physical changes. Because of a limited understanding of this process, most synthesis advances happen by trial and error. There is considerable interest in understanding the synthesis process at a molecular level, with the expectation that eventually new framework structures will be built by design. The basic issues in the crystallization of microporous materials include: (a) the nature of the molecular units responsible for crystal nuclei formation; (b) the nature of the nuclei and the nucleation process; (c) the growth of the nuclei into crystals; (d) morphological control and the size of the resulting crystals; (e) the surface structure of the resulting crystals; and (f) the transformation of frameworks into other frameworks or condensed structures.
A development framework for semantically interoperable health information systems.
Lopez, Diego M; Blobel, Bernd G M E
2009-02-01
Semantic interoperability is a basic challenge to be met by new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and their intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system, to evaluate and harmonize state-of-the-art architecture development approaches and standards for health information systems, and to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and the HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes in health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.
Ecohydrology frameworks for green infrastructure design and ecosystem service provision
NASA Astrophysics Data System (ADS)
Pavao-Zuckerman, M.; Knerl, A.; Barron-Gafford, G.
2014-12-01
Urbanization is a dominant form of landscape change that affects the structure and function of ecosystems and alters control points in biogeochemical and hydrologic cycles. Green infrastructure (GI) has been proposed as a solution to many urban environmental challenges and may be a way to manage biogeochemical control points. Despite this promise, there has been relatively limited empirical work evaluating the efficacy of GI, the relationships between design and function, and the ability of GI to provide ecosystem services in cities. This work has been driven by the goals of adapting GI approaches to dryland cities and harvesting rain and storm water to provide ecosystem services related to storm water management and urban heat island mitigation, as well as other co-benefits. We will present a modification of ecohydrologic theory for guiding the design and function of green infrastructure in dryland systems, highlighting how GI functions in the context of the Trigger-Transfer-Reserve-Pulse (TTRP) dynamic framework. Here we also apply this TTRP framework to observations of established streetscape green infrastructure in Tucson, AZ, and to an experimental installation of green infrastructure basins on the campus of Biosphere 2 (Oracle, AZ), where we have been measuring plant performance and soil biogeochemical functions. We found variable sensitivity of microbial activity, soil respiration, N-mineralization, photosynthesis and respiration, mediated both by elements of basin design (soil texture and composition, choice of surface mulches) and by antecedent precipitation inputs and soil moisture conditions. The adapted TTRP framework and field studies suggest strong connections between design and function that have implications for stormwater management and ecosystem service provision in dryland cities.
PGMS: A Case Study of Collecting PDA-Based Geo-Tagged Malaria-Related Survey Data
Zhou, Ying; Lobo, Neil F.; Wolkon, Adam; Gimnig, John E.; Malishee, Alpha; Stevenson, Jennifer; Sulistyawati; Collins, Frank H.; Madey, Greg
2014-01-01
Using mobile devices, such as personal digital assistants (PDAs), smartphones, tablet computers, etc., to electronically collect malaria-related field data is the future of field questionnaires. This case study seeks to design PGMS, a generic PDA-based tool for collecting geo-tagged malaria-related survey data, that can be used not only for large-scale community-level geo-tagged electronic malaria surveys but also for a wide variety of electronic data collection efforts for other infectious diseases. The framework includes two parts: a database designed for subsequent cross-sectional data analysis and customized programs for the six study sites (two in Kenya, three in Indonesia, and one in Tanzania). In addition to the framework development, we also present the methods we used when configuring and deploying the PDAs to 1) reduce data entry errors, 2) conserve battery power, 3) field-install the programs onto dozens of handheld devices, 4) translate electronic questionnaires into local languages, 5) prevent data loss, and 6) transfer data from PDAs to computers for analysis and storage. Since 2008, PGMS has successfully supported surveys that recorded 10,871 compounds and households, 52,126 persons, and 17,100 bed nets across the six sites, and these numbers are still growing. PMID:25048377
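The kind of geo-tagged household record described above can be sketched as a simple data structure. This is an illustrative assumption of what such a record might contain (household, GPS coordinates, residents, bed nets), not the actual PGMS database schema:

```python
# Hypothetical sketch of a geo-tagged malaria survey record; field names are
# illustrative, not taken from the real PGMS database design.
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class BedNet:
    net_id: str
    in_use: bool

@dataclass
class HouseholdRecord:
    household_id: str
    site: str            # e.g. one of the six study sites
    latitude: float      # geo-tag captured by the PDA's GPS unit
    longitude: float
    residents: int
    bed_nets: List[BedNet] = field(default_factory=list)

# One record as it might be entered on a handheld device in the field.
rec = HouseholdRecord(
    household_id="HH-0001", site="Tanzania-1",
    latitude=-6.8, longitude=39.3, residents=5,
    bed_nets=[BedNet("N1", True), BedNet("N2", False)],
)
# asdict() flattens the record for transfer from the PDA to a computer.
print(asdict(rec)["residents"])
```

Storing records as plain serializable structures like this makes the two halves of the framework (field collection and cross-sectional analysis) easy to connect.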