Sample records for "database development final"

  1. Freshwater Biological Traits Database (Final Report)

    EPA Science Inventory

    EPA announced the release of the final report, Freshwater Biological Traits Database. This report discusses the development of a database of freshwater biological traits. The database combines several existing traits databases into an online format. The database is also...

  2. Freshwater Biological Traits Database (Data Sources)

    EPA Science Inventory

    When EPA released the final report, Freshwater Biological Traits Database, it referenced numerous data sources that are included below. The Traits Database report covers the development of a database of freshwater biological traits with additional traits that are relevan...

  3. Unified Database Development Program. Final Report.

    ERIC Educational Resources Information Center

    Thomas, Everett L., Jr.; Deem, Robert N.

    The objective of the unified database (UDB) program was to develop an automated information system that would be useful in the design, development, testing, and support of new Air Force aircraft weapon systems. Primary emphasis was on the development of: (1) a historical logistics data repository system to provide convenient and timely access to…

  4. SPECIATE 4.0: SPECIATION DATABASE DEVELOPMENT DOCUMENTATION--FINAL REPORT

    EPA Science Inventory

    SPECIATE is the U.S. EPA's repository of total organic compounds (TOC) and particulate matter (PM) speciation profiles of air pollution sources. This report documents how EPA developed the SPECIATE 4.0 database that replaces the prior version, SPECIATE 3.2. SPECIATE 4.0 includes ...

  5. Progress in development of an integrated dietary supplement ingredient database at the NIH Office of Dietary Supplements

    PubMed Central

    Dwyer, Johanna T.; Picciano, Mary Frances; Betz, Joseph M.; Fisher, Kenneth D.; Saldanha, Leila G.; Yetley, Elizabeth A.; Coates, Paul M.; Radimer, Kathy; Bindewald, Bernadette; Sharpless, Katherine E.; Holden, Joanne; Andrews, Karen; Zhao, Cuiwei; Harnly, James; Wolf, Wayne R.; Perry, Charles R.

    2013-01-01

    Several activities of the Office of Dietary Supplements (ODS) at the National Institutes of Health involve enhancement of dietary supplement databases. These include an initiative with the US Department of Agriculture to develop an analytically substantiated dietary supplement ingredient database (DSID) and collaboration with the National Center for Health Statistics to enhance the dietary supplement label database in the National Health and Nutrition Examination Survey (NHANES). The many challenges that must be dealt with in developing an analytically supported DSID include categorizing product types in the database, identifying nutrients and other components of public health interest in these products, and prioritizing which will be entered into the database first. Additional tasks include developing methods and reference materials for quantifying the constituents, finding qualified laboratories to measure the constituents, developing appropriate sample-handling procedures, and finally developing representative sampling plans. Developing the NHANES dietary supplement label database poses other challenges, such as collecting information on dietary supplement use from NHANES respondents, constantly updating and refining the information obtained, developing default values that can be used if the respondent cannot supply the exact supplement or strength that was consumed, and developing a publicly available label database. Federal partners and the research community are assisting in making an analytically supported dietary supplement database a reality. PMID:25309034

  6. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information in a structured format to users ranging from bench chemists and biologists to medical practitioners and pharmaceutical scientists. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching, and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference-centric or compound-centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  7. Development of a Life History Database for Upper Mississippi River Fishes

    DTIC Science & Technology

    2007-05-01

    ...prevailing ecological and river theories with existing empirical data, investigating anthropogenic controls on functional attributes of ecosystems... (2001; 2005a). ...attributes in the database closely reflect the ecological attributes of UMRS fish species. Finally, the life history database will allow the... ..."Functional Feeding Guilds" attribute class... ...provide information on reproductive capacity, timing, and mode for UMRS fish species. Our first example used the...

  8. Computer Science and Technology: Modeling and Measurement Techniques for Evaluation of Design Alternatives in the Implementation of Database Management Software. Final Report.

    ERIC Educational Resources Information Center

    Deutsch, Donald R.

    This report describes a research effort that was carried out over a period of several years to develop and demonstrate a methodology for evaluating proposed Database Management System designs. The major proposition addressed by this study is embodied in the thesis statement: Proposed database management system designs can be evaluated best through…

  9. Databases for Microbiologists

    DOE PAGES

    Zhulin, Igor B.

    2015-05-26

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. Finally, the purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  10. Databases for Microbiologists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhulin, Igor B.

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. Finally, the purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  11. A VBA Desktop Database for Proposal Processing at National Optical Astronomy Observatories

    NASA Astrophysics Data System (ADS)

    Brown, Christa L.

    National Optical Astronomy Observatories (NOAO) has developed a relational Microsoft Windows desktop database using Microsoft Access and the Microsoft Office programming language, Visual Basic for Applications (VBA). The database is used to track data relating to observing proposals from original receipt through the review process, scheduling, observing, and final statistical reporting. The database has automated proposal processing and distribution of information. It allows NOAO to collect and archive data so as to query and analyze information about our science programs in new ways.

  12. Development and application of basis database for materials life cycle assessment in china

    NASA Astrophysics Data System (ADS)

    Li, Xiaoqing; Gong, Xianzheng; Liu, Yu

    2017-03-01

    Because materials life cycle assessment (MLCA) is a data-intensive method, high-quality environmental burden data are an essential prerequisite for carrying it out, and the reliability of the data directly influences the reliability of the assessment results and their usefulness in applications. Building a Chinese MLCA database therefore provides the basic data and technical support needed to carry out and improve LCA practice. First, recent progress on databases related to materials life cycle assessment research and development is introduced. Second, in accordance with the requirements of the ISO 14040 series of standards, the database framework and the main datasets of the materials life cycle assessment are described. Third, an MLCA data platform based on big data is developed. Finally, future research work is proposed and discussed.

  13. New Directions in Library and Information Science Education. Final Report. Volume 2.5: Database Producer Professional Competencies.

    ERIC Educational Resources Information Center

    Griffiths, Jose-Marie; And Others

    This document contains validated activities and competencies needed by librarians working in a database producer organization. The activities and competencies are organized according to the functions which these librarians perform: acquisitions; thesaurus development and control; indexing/abstracting; and publications and product management.…

  14. C3I system modification and EMC (electromagnetic compatibility) methodology, volume 1

    NASA Astrophysics Data System (ADS)

    Wilson, J. L.; Jolly, M. B.

    1984-01-01

    A methodology (i.e., consistent set of procedures) for assessing the electromagnetic compatibility (EMC) of RF subsystem modifications on C3I aircraft was generated during this study (Volume 1). An IEMCAP (Intrasystem Electromagnetic Compatibility Analysis Program) database for the E-3A (AWACS) C3I aircraft RF subsystem was extracted to support the design of the EMC assessment methodology (Volume 2). Mock modifications were performed on the E-3A database to assess, using a preliminary form of the methodology, the resulting EMC impact. Application of the preliminary assessment methodology to modifications in the E-3A database served to fine tune the form of a final assessment methodology. The resulting final assessment methodology is documented in this report in conjunction with the overall study goals, procedures, and database. It is recommended that a similar EMC assessment methodology be developed for the power subsystem within C3I aircraft. It is further recommended that future EMC assessment methodologies be developed around expert systems (i.e., computer intelligent agents) to control both the excruciating detail and user requirement for transparency.

  15. Developing a Non-Formal Education and Literacy Database in the Asia-Pacific Region. Final Report of the Expert Group Consultation Meeting (Dhaka, Bangladesh, December 15-18, 1997).

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.

    The objectives of the Expert Group Consultation Meeting for Developing a Non-Formal Education and Literacy Database in the Asia-Pacific Region were: to exchange information and review the state-of-the-art in the field of data collection, analysis and indicators of non-formal education and literacy programs; to examine and review the set of…

  16. A generic method for improving the spatial interoperability of medical and ecological databases.

    PubMed

    Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M

    2017-10-03

    The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods, while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined. The latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, and containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical database and the ecological database in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than for most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to tackle the change of support problem.
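
    The two-step merging method described above lends itself to a small illustration. The following Python sketch is an assumption-laden toy, not the authors' implementation: the spatial-unit identifiers, unit mappings, and birth counts are all invented. It collapses the original spatial units of each database into final units via a mapping table and then runs the covariate-comparison validation step.

      # Minimal sketch: merge two databases with different spatial units via a
      # mapping table. Unit names and counts are hypothetical illustrations.

      # Transition "matrix" as a dict: original unit -> final (merged) unit.
      medical_to_final = {"M1": "F1", "M2": "F1", "M3": "F2"}
      ecological_to_final = {"E1": "F1", "E2": "F2", "E3": "F2"}

      births_medical = {"M1": 120, "M2": 80, "M3": 150}    # medical covariate
      births_ecological = {"E1": 205, "E2": 90, "E3": 55}  # ecological covariate

      def aggregate(values, mapping):
          """Sum a covariate over original units within each final spatial unit."""
          totals = {}
          for unit, value in values.items():
              final_unit = mapping[unit]
              totals[final_unit] = totals.get(final_unit, 0) + value
          return totals

      med = aggregate(births_medical, medical_to_final)
      eco = aggregate(births_ecological, ecological_to_final)

      # Validation step: relative difference of the covariate in each final unit.
      for f in sorted(med):
          rel_diff = abs(med[f] - eco[f]) / max(med[f], eco[f])
          print(f, med[f], eco[f], f"{100 * rel_diff:.1f}%")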

  17. Michigan urban trunkline segments safety performance functions (SPFs) : final report.

    DOT National Transportation Integrated Search

    2016-07-01

    This study involves the development of safety performance functions (SPFs) for urban and suburban trunkline segments in the state of Michigan. Extensive databases were developed through the integration of traffic crash information, traffic volumes,...

  18. Enhanced digital mapping project : final report

    DOT National Transportation Integrated Search

    2004-11-19

    The Enhanced Digital Map Project (EDMap) was a three-year effort launched in April 2001 to develop a range of digital map database enhancements that enable or improve the performance of driver assistance systems currently under development or conside...

  19. Advanced telemedicine development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; George, J.E.; Gavrilov, E.M.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop a Java-based, electronic, medical-record system that can handle multimedia data and work over a wide-area network based on open standards, and that can utilize an existing database back end. The physician is to be totally unaware that there is a database behind the scenes and is only aware that he/she can access and manage the relevant information to treat the patient.

  20. Software Classifications: Trends in Literacy Software Publication and Marketing.

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    First in a continuing series of reports on trends in marketing and publication of software for literacy education, a study explored the development of a database to track the trends and reported on trends seen in 1995. The final version of the 1995 database consisted of 1011 software titles, 165 of which had been published in 1995 and 846…

  1. Final report for DOE Award # DE- SC0010039*: Carbon dynamics of forest recovery under a changing climate: Forcings, feedbacks, and implications for earth system modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Teixeira, Kristina J.; DeLucia, Evan H.; Duval, Benjamin D.

    2015-10-29

    To advance understanding of C dynamics of forests globally, we compiled a new database, the Forest C database (ForC-db), which contains data on ground-based measurements of ecosystem-level C stocks and annual fluxes along with disturbance history. This database currently contains 18,791 records from 2009 sites, making it the largest and most comprehensive database of C stocks and flows in forest ecosystems globally. The tropical component of the database will be published in conjunction with a manuscript that is currently under review (Anderson-Teixeira et al., in review). Database development continues, and we hope to maintain a dynamic instance of the entire (global) database.

  2. Rapid development of entity-based data models for bioinformatics with persistence object-oriented design and structured interfaces.

    PubMed

    Ezra Tsur, Elishai

    2017-01-01

    Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency, and structured interfaces to local and external data sources such as MalaCards, Biomodels and the National Centre for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysms' risk of rupture. The framework is available at: http://nbel-lab.com.

  3. The Space Systems Environmental Test Facility Database (SSETFD), Website Development Status

    NASA Technical Reports Server (NTRS)

    Snyder, James M.

    2008-01-01

    The Aerospace Corporation has been developing a database of U.S. environmental test laboratory capabilities utilized by the space systems hardware development community. To date, 19 sites have been visited by The Aerospace Corporation and verbal agreements reached to include their capability descriptions in the database. A website is being developed to make this database accessible by all interested government, civil, university and industry personnel. The website will be accessible by all interested in learning more about the extensive collective capability that the US based space industry has to offer. The Environments, Test & Assessment Department within The Aerospace Corporation will be responsible for overall coordination and maintenance of the database. Several US government agencies are interested in utilizing this database to assist in the source selection process for future spacecraft programs. This paper introduces the website by providing an overview of its development, location and search capabilities. It will show how the aerospace community can apply this new tool as a way to increase the utilization of existing lab facilities, and as a starting point for capital expenditure/upgrade trade studies. The long term result is expected to be increased utilization of existing laboratory capability and reduced overall development cost of space systems hardware. Finally, the paper will present the process for adding new participants, and how the database will be maintained.

  4. FERN Ethnomedicinal Plant Database: Exploring Fern Ethnomedicinal Plants Knowledge for Computational Drug Discovery.

    PubMed

    Thakar, Sambhaji B; Ghorpade, Pradnya N; Kale, Manisha V; Sonawane, Kailas D

    2015-01-01

    Fern plants are known for their ethnomedicinal applications, but the large amount of information on medicinal ferns is scattered through the literature in text form. Hence, database development is an appropriate endeavor to cope with the situation. Given the importance of medicinally useful fern plants, we developed a web-based database which contains information about several groups of ferns, their medicinal uses, chemical constituents, and protein/enzyme sequences isolated from different fern plants. The Fern Ethnomedicinal Plant Database is an all-embracing, content-managed, web-based database system used to retrieve a collection of factual knowledge related to ethnomedicinal fern species. Most of the protein/enzyme sequences have been extracted from the NCBI protein sequence database. The fern species, family name, identification, NCBI taxonomy ID, geographical occurrence, conditions trialled for, plant parts used, ethnomedicinal importance, and morphological characteristics were collected from various scientific journals and other literature available in text form. NCBI's BLAST, InterPro, phylogeny, and Clustal W web resources have also been linked for future comparative studies, so users can get information on fern plants and their medicinal applications in one place. The Fern Ethnomedicinal Plant Database includes information on 100 medicinal fern species. This web-based database should be advantageous for computational drug discovery and useful to botanists and those interested in botany, pharmacologists, researchers, biochemists, plant biotechnologists, ayurvedic practitioners, doctors and pharmacists, traditional medicine users, farmers, agricultural students and teachers at universities and colleges, and finally fern plant lovers. This effort provides essential knowledge for users about applications for drug discovery and the conservation of fern species around the world, and finally helps to create social awareness.

  5. X-48B Phase 1 Flight Maneuver Database and ICP Airspace Constraint Analysis

    NASA Technical Reports Server (NTRS)

    Fast, Peter Alan

    2010-01-01

    This report summarizes the work performed during the summer of 2010 by Peter Fast. The main tasks assigned were to update and improve the X-48 Flight Maneuver Database and to conduct an airspace constraint analysis for the Remotely Operated Aircraft area used to flight test Unmanned Aerial Vehicles. The final task was to develop and demonstrate a working knowledge of flight control theory.

  6. The methodology of database design in organization management systems

    NASA Astrophysics Data System (ADS)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to designing the conceptual information model, the main principles of developing relational databases are provided and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes the process of applying the results of analyzing users' information needs and the rationale for the use of classifiers.

  7. Algorithms for database-dependent search of MS/MS data.

    PubMed

    Matthiesen, Rune

    2013-01-01

    The frequently used bottom-up strategy for identification of proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take MS/MS spectra and a protein sequence database as input are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent search of MS/MS spectra, and most of them have excellent user documentation. The aim here is therefore to outline the algorithmic strategies behind different search engines rather than to provide software user manuals. The process of database-dependent search can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have gone into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementations, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow an even comparison. In other words, an algorithmic idea can still be worth considering even if a particular software implementation has been demonstrated to be suboptimal. The aim in this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps mentioned above, whereas the final step of protein inference is much less developed for most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses a stand-alone program, SIR, for protein inference that can import a Mascot search result.
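
    To make the "search strategy, peptide scoring" part of this pipeline concrete, here is a minimal Python sketch of a database-dependent search: in-silico tryptic digestion of a toy protein database, precursor-mass filtering, and a naive matched-b-ion count score. The sequences, tolerances, and scoring rule are illustrative simplifications, not the scoring model of any particular search engine.

      # Toy database-dependent search: digest proteins, match a query spectrum's
      # precursor mass, and score candidates by counted b-ion matches.
      # All masses and sequences here are illustrative, not engine-accurate.

      MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
              "V": 99.06841, "T": 101.04768, "L": 113.08406, "K": 128.09496,
              "R": 156.10111, "E": 129.04259, "D": 115.02694}
      WATER, PROTON = 18.01056, 1.00728

      def tryptic_peptides(protein):
          """Cleave after K/R (no missed cleavages): simplified trypsin rule."""
          peptides, start = [], 0
          for i, aa in enumerate(protein):
              if aa in "KR":
                  peptides.append(protein[start:i + 1])
                  start = i + 1
          if start < len(protein):
              peptides.append(protein[start:])
          return peptides

      def peptide_mass(p):
          return sum(MONO[aa] for aa in p) + WATER

      def b_ions(p):
          """Singly charged b-ion m/z values for a peptide."""
          mass, out = PROTON, []
          for aa in p[:-1]:
              mass += MONO[aa]
              out.append(mass)
          return out

      def search(spectrum_mz, precursor_mass, proteins, tol=0.5):
          """Score peptides whose mass matches the precursor: count matched b-ions."""
          scores = {}
          for prot in proteins:
              for pep in tryptic_peptides(prot):
                  if abs(peptide_mass(pep) - precursor_mass) > tol:
                      continue
                  hits = sum(any(abs(mz - ion) < tol for mz in spectrum_mz)
                             for ion in b_ions(pep))
                  scores[pep] = hits
          return sorted(scores.items(), key=lambda kv: -kv[1])

      proteins = ["GASPVKTLDER", "AVKGSPLER"]
      pep = "TLDER"  # pretend this peptide generated the query spectrum
      print(search(b_ions(pep), peptide_mass(pep), proteins))

    A real engine would add y-ions and other ion series, missed cleavages, modifications, and a statistical significance model on top of this raw count, followed by the protein scoring and inference steps discussed in the chapter.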

  8. The CSB Incident Screening Database: description, summary statistics and uses.

    PubMed

    Gomez, Manuel R; Casper, Susan; Smith, E Allen

    2008-11-15

    This paper briefly describes the Chemical Incident Screening Database currently used by the CSB to identify and evaluate chemical incidents for possible investigations, and summarizes descriptive statistics from this database that can potentially help to estimate the number, character, and consequences of chemical incidents in the US. The report compares some of the information in the CSB database to roughly similar information available from databases operated by EPA and the Agency for Toxic Substances and Disease Registry (ATSDR), and explores the possible implications of these comparisons with regard to the dimension of the chemical incident problem. Finally, the report explores in a preliminary way whether a system modeled after the existing CSB screening database could be developed to serve as a national surveillance tool for chemical incidents.

  9. DE-NE0000735 - FINAL REPORT ON THORIUM FUEL CYCLE NEUP PROJECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krahn, Steven; Ault, Timothy; Worrall, Andrew

    The report is broken into six chapters, including this executive summary chapter. Following an introduction, this report discusses each of the project’s three major components (Fuel Cycle Data Package (FCDP) Development, Thorium Fuel Cycle Literature Analysis and Database Development, and the Thorium Fuel Cycle Technical Track and Proceedings). A final chapter is devoted to summarization. Various outcomes, publications, etc. originating from this project can be found in the Appendices at the end of the document.

  10. The integration of quantitative information with an intelligent decision support system for residential energy retrofits

    NASA Astrophysics Data System (ADS)

    Mo, Yunjeong

    The purpose of this research is to support the development of an intelligent Decision Support System (DSS) by integrating quantitative information with expert knowledge in order to facilitate effective retrofit decision-making. To achieve this goal, the Energy Retrofit Decision Process Framework is analyzed. Expert system shell software, a retrofit measure cost database, and energy simulation software are needed for developing the DSS; Exsys Corvid, the NREM database and BEopt were chosen for implementing an integration model. This integration model demonstrates the holistic function of a residential energy retrofit system for existing homes, by providing a prioritized list of retrofit measures with cost information, energy simulation and expert advice. The users, such as homeowners and energy auditors, can acquire all of the necessary retrofit information from this unified system without having to explore several separate systems. The integration model plays the role of a prototype for the finalized intelligent decision support system. It implements all of the necessary functions for the finalized DSS, including integration of the database, energy simulation and expert knowledge.

  11. Optic disk localization by a robust fusion method

    NASA Astrophysics Data System (ADS)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    Optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macula degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first method is the maximum vessel crossing method, which finds the region with the greatest number of blood vessel crossing points. The second one is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal regions-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary result on the STARE database reaches 81.5%, while a higher result of 99% is obtained for the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. The result demonstrates a high potential for this method to be used in retinal CAD systems.
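
    As a hedged illustration of decision-level fusion of this kind, the Python sketch below assumes each detector returns a candidate (x, y) location and a confidence in [0, 1]; the weights and the confidence-weighted-average rule are hypothetical, not the paper's exact fusion logic.

      # Fuse optic-disk candidates from three detectors by confidence-weighted
      # averaging, rejecting candidates far from the most confident one.
      # Detector outputs and weights below are illustrative placeholders.
      import math

      candidates = [
          ((412, 305), 0.80),  # maximum vessel-crossing method
          ((418, 300), 0.65),  # multichannel thresholding
          ((515, 410), 0.30),  # entropy / vessel-structure ROI search
      ]

      def fuse(cands, agree_px=60):
          # Keep only candidates that lie near the most confident one.
          best_xy, _ = max(cands, key=lambda c: c[1])
          kept = [(xy, w) for xy, w in cands
                  if math.dist(xy, best_xy) <= agree_px]
          wsum = sum(w for _, w in kept)
          x = sum(xy[0] * w for xy, w in kept) / wsum
          y = sum(xy[1] * w for xy, w in kept) / wsum
          return (round(x), round(y))

      print(fuse(candidates))  # outlier third candidate is rejected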

  12. A Comparative Analysis of Transitions from Education to Work in Europe (CATEWE). Final Report [and] Annex to the Final Report.

    ERIC Educational Resources Information Center

    Smyth, Emer; Gangl, Markus; Raffe, David; Hannan, Damian F.; McCoy, Selina

    This project aimed to develop a more comprehensive conceptual framework of school-to-work transitions in different national contexts and apply this framework to the empirical analysis of transition processes across European countries. It drew on these two data sources: European Community Labor Force Survey and integrated databases on national…

  13. Development of comprehensive guidance on obtaining service consumed data for NTD : final report, January 2009.

    DOT National Transportation Integrated Search

    2009-01-01

    This document proposes The National Transit Database Sampling Manual. It is developed for the Federal Transit Administration (FTA) to replace its current guidance (circulars 2710.1A and 2710.2A) to transit agencies on how they may estimate servic...

  14. IDD Info: a software to manage surveillance data of Iodine Deficiency Disorders.

    PubMed

    Liu, Peng; Teng, Bai-Jun; Zhang, Shu-Bin; Su, Xiao-Hui; Yu, Jun; Liu, Shou-Jun

    2011-08-01

    IDD Info, a new software package for managing survey data on Iodine Deficiency Disorders (IDD), is presented in this paper. IDD Info aims to create IDD project databases, to process and analyze various national or regional surveillance data, and to produce a final report. It provides facilities for choosing a database from existing ones, revising it, choosing indicators from a pool to establish a database, and adding indicators to the pool. It also provides simple tools to scan one database and compare two databases, to set IDD standard parameters, to analyze data by single or multiple indicators, and finally to produce a typeset report with customized content. IDD Info was developed using Chinese national IDD surveillance data from 2005. Its validity was evaluated by comparison with the survey report produced by China CDC. IDD Info is a professional analysis tool that speeds up IDD data analysis by about 14.28% with respect to standard reference routines, and consequently enhances analysis performance and user compliance. IDD Info is a practical and accurate means of managing the multifarious IDD surveillance data that can be widely used by non-statisticians in national and regional IDD surveillance.

  15. Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database

    NASA Technical Reports Server (NTRS)

    Hanke, Jeremy L.

    2011-01-01

    The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.
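
    The summary names three uncertainty sources but not the combination rule. A common convention for combining independent uncertainty sources, shown here purely as a hedged illustration (whether the A106 model uses exactly this quadrature rule is not stated in this summary), is root-sum-square addition:

      % Illustrative convention only: independent sources combined in quadrature.
      U_{\mathrm{total}} = \sqrt{\,U_{\mathrm{exp}}^{2} + U_{\mathrm{model}}^{2} + U_{\mathrm{interp}}^{2}\,}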

  16. Structural significance of mechanical damage.

    DOT National Transportation Integrated Search

    2012-05-01

    The letter transmits the Final Report for work completed under US DOT PHMSA Other Transaction Agreement (OTA) DTPH56-08-T-000011, Structural Significance of Mechanical Damage. The project was implemented to develop a detailed experimental database on...

  17. The Development and Validation of a Special Education Intelligent Administration Support Program. Final Report.

    ERIC Educational Resources Information Center

    Utah State Univ., Logan. Center for Persons with Disabilities.

    This project studied the effects of implementing a computerized management information system developed for special education administrators. The Intelligent Administration Support Program (IASP), an expert system and database program, assisted in information acquisition and analysis pertaining to the district's quality of decisions and procedures…

  18. Final Report - Enhanced LAW Glass Property - Composition Models - Phase 1 VSL-13R2940-1, Rev. 0, dated 9/27/2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruger, Albert A.; Muller, I.; Gilbo, K.

    2013-11-13

    The objectives of this work are aimed at the development of enhanced LAW property-composition models that expand the composition region covered by the models. The models of interest include PCT, VHT, viscosity, and electrical conductivity. This is planned as a multi-year effort that will be performed in phases, with the following objectives for the current phase: incorporate property-composition data from the new glasses into the database; assess the database and identify composition spaces in the database that need augmentation; develop statistically designed composition matrices to cover the composition regions identified in that assessment; prepare crucible melts of glass compositions from the statistically designed composition matrix and measure the properties of interest; incorporate the resulting property-composition data into the database; and assess existing models against the complete dataset and, as necessary, start development of new models.
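
    Glass property-composition models are commonly linear in component fractions; whether the enhanced LAW models take that form is not stated here. Purely as a hedged illustration of the model-development step, the Python sketch below fits such a linear model by least squares, with all compositions and property values invented:

      # Hedged sketch: fit a linear property-composition model
      # property ~ sum_i b_i * x_i over glass component fractions x_i.
      # Compositions and measured values below are invented placeholders.
      import numpy as np

      # rows: glasses; columns: component mass fractions (e.g. SiO2, B2O3, Na2O)
      X = np.array([
          [0.45, 0.10, 0.20],
          [0.50, 0.08, 0.18],
          [0.40, 0.12, 0.22],
          [0.48, 0.09, 0.17],
      ])
      y = np.array([1.8, 2.1, 1.5, 2.0])  # e.g. a log-scale property (made up)

      b, *_ = np.linalg.lstsq(X, y, rcond=None)  # component coefficients
      print("coefficients:", b)
      print("prediction for the first glass:", X[0] @ b)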

  19. Repeatability and uncertainty analyses of NASA/MSFC light gas gun test data

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.; Cooper, David

    1993-01-01

    This Final Report presents an overview of the impact tests performed at NASA/MSFC in the time period 1985 to 1991 and the results of phenomena repeatability and data uncertainty studies performed using the information obtained from those tests. An analysis of the data from over 400 tests conducted between 1989 and 1991 was performed to generate a database to supplement the Hypervelocity Impact Damage Database developed under a previous effort.

  20. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    DOE PAGES

    Zerkin, V. V.; Pritychenko, B.

    2018-02-04

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally, it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and the Russian Federation.

  1. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zerkin, V. V.; Pritychenko, B.

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally, it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and the Russian Federation.

  2. HLLV avionics requirements study and electronic filing system database development

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This final report provides a summary of achievements and activities performed under Contract NAS8-39215. The contract's objective was to explore a new way of delivering, storing, accessing, and archiving study products and information and to define top level system requirements for Heavy Lift Launch Vehicle (HLLV) avionics that incorporate Vehicle Health Management (VHM). This report includes technical objectives, methods, assumptions, recommendations, sample data, and issues as specified by DPD No. 772, DR-3. The report is organized into two major subsections, one specific to each of the two tasks defined in the Statement of Work: the Index Database Task and the HLLV Avionics Requirements Task. The Index Database Task resulted in the selection and modification of a commercial database software tool to contain the data developed during the HLLV Avionics Requirements Task. All summary information is addressed within each task's section.

  3. Goods Movement: Regional Analysis and Database Final Report

    DOT National Transportation Integrated Search

    1993-03-26

    The project reported here was undertaken to create and test methods for synthesizing truck flow patterns in urban areas from partial and fragmentary observations. More specifically, the project sought to develop a way to estimate origin-destination (...

  4. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the time the year 2000 has ended. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include materials thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, a virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position of becoming a clearing house for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.

  5. Overlap and diversity in antimicrobial peptide databases: compiling a non-redundant set of sequences.

    PubMed

    Aguilera-Mendoza, Longendri; Marrero-Ponce, Yovani; Tellez-Ibarra, Roberto; Llorente-Quesada, Monica T; Salgado, Jesús; Barigye, Stephen J; Liu, Jun

    2015-08-01

    The large variety of antimicrobial peptide (AMP) databases developed to date are characterized by a substantial overlap of data and similarity of sequences. Our goals are to analyze the levels of redundancy for all available AMP databases and use this information to build a new non-redundant sequence database. For this purpose, a new software tool is introduced. A comparative study of 25 AMP databases reveals the overlap and diversity among them and the internal diversity within each database. The overlap analysis shows that only one database (Peptaibol) contains exclusive data, not present in any other, whereas all sequences in the LAMP_Patent database are included in CAMP_Patent. However, the majority of databases have their own set of unique sequences, as well as some overlap with other databases. The complete set of non-duplicate sequences comprises 16 990 cases, which is almost half of the total number of reported peptides. On the other hand, the diversity analysis identifies the most and least diverse databases and proves that all databases exhibit some level of redundancy. Finally, we present a new parallel-free software, named Dover Analyzer, developed to compute the overlap and diversity between any number of databases and compile a set of non-redundant sequences. These results are useful for selecting or building a suitable representative set of AMPs, according to specific needs.
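
    The core overlap/non-redundancy computation the authors describe can be illustrated in a few lines of Python. The databases and sequences below are toy examples, and the actual Dover Analyzer matching rules (e.g. case handling or modified residues) are not reproduced here:

      # Compute pairwise overlap between peptide databases and the pooled
      # non-redundant sequence set. Databases and sequences are toy examples.

      databases = {
          "DB_A": {"GLFDIVKK", "KWKLFKKI", "GIGKFLHS"},
          "DB_B": {"KWKLFKKI", "GIGKFLHS", "FLPIIAKL"},
          "DB_C": {"ILPWKWPW"},  # exclusive content, like Peptaibol in the study
      }

      # Pairwise overlap: shared sequences relative to the smaller database.
      names = sorted(databases)
      for i, a in enumerate(names):
          for b in names[i + 1:]:
              shared = databases[a] & databases[b]
              denom = min(len(databases[a]), len(databases[b]))
              print(a, b, f"overlap = {len(shared)}/{denom}")

      non_redundant = set().union(*databases.values())
      total_records = sum(len(s) for s in databases.values())
      print("non-redundant:", len(non_redundant), "of", total_records, "records")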

  6. Patient safety and systematic reviews: finding papers indexed in MEDLINE, EMBASE and CINAHL.

    PubMed

    Tanon, A A; Champagne, F; Contandriopoulos, A-P; Pomey, M-P; Vadeboncoeur, A; Nguyen, H

    2010-10-01

    To develop search strategies for identifying papers on patient safety in MEDLINE, EMBASE and CINAHL. Six journals were electronically searched for papers on patient safety published between 2000 and 2006. Identified papers were divided into two gold standards: one to build and the other to validate the search strategies. Candidate terms for strategy construction were identified using a word frequency analysis of titles, abstracts and keywords used to index the papers in the databases. Searches were run for each one of the selected terms independently in every database. Sensitivity, precision and specificity were calculated for each candidate term. Terms with sensitivity greater than 10% were combined to form the final strategies. The search strategies developed were run against the validation gold standard to assess their performance. A final step in the validation process was to compare the performance of each strategy to those of other strategies found in the literature. We developed strategies for all three databases that were highly sensitive (range 95%-100%), precise (range 40%-60%) and balanced (the product of sensitivity and precision being in the range of 30%-40%). The strategies were very specific and outperformed those found in the literature. The strategies we developed can meet the needs of users aiming to maximise either sensitivity or precision, or seeking a reasonable compromise between sensitivity and precision, when searching for papers on patient safety in MEDLINE, EMBASE or CINAHL.
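
    For reference, the three metrics reported above are the standard retrieval measures. Writing TP for relevant records retrieved, FP for irrelevant records retrieved, FN for relevant records missed, and TN for irrelevant records correctly excluded:

      \text{sensitivity} = \frac{TP}{TP + FN}, \qquad
      \text{precision}   = \frac{TP}{TP + FP}, \qquad
      \text{specificity} = \frac{TN}{TN + FP}

    The "balanced" figure the authors quote is the product of sensitivity and precision.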

  7. IRIS Toxicological Review of Benzo[a]pyrene (Public Comment Draft)

    EPA Science Inventory

    EPA is developing an Integrated Risk Information System (IRIS) assessment of benzo[a]pyrene and has released the draft assessment for public comment and external peer review. When final, the assessment will appear on the IRIS database.

  8. Design of Knowledge Bases for Plant Gene Regulatory Networks.

    PubMed

    Mukundi, Eric; Gomez-Cano, Fabio; Ouma, Wilberforce Zachary; Grotewold, Erich

    2017-01-01

    Developing a knowledge base that contains all the information necessary for the researcher studying gene regulation in a particular organism can be accomplished in four stages. This begins with defining the data scope. We describe here the necessary information and resources, and outline the methods for obtaining data. The second stage consists of designing the schema, which involves defining the entire arrangement of the database in a systematic plan. The third stage is the implementation, defined by actualization of the database by using software according to a predefined schema. The final stage is development, where the database is made available to users in a web-accessible system. The result is a knowledge base that integrates all the information pertaining to gene regulation, and which is easily expandable and transferable.
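
    The schema and implementation stages can be illustrated in miniature with Python's built-in sqlite3 module. The table layout, column names, and gene symbols below are hypothetical examples, not a schema prescribed by the chapter:

      # Stage 2 (schema) and stage 3 (implementation) in miniature: a toy
      # relational layout for regulator -> target interactions.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE gene (
          gene_id   TEXT PRIMARY KEY,   -- e.g. a locus identifier
          symbol    TEXT NOT NULL
      );
      CREATE TABLE regulation (
          regulator TEXT REFERENCES gene(gene_id),
          target    TEXT REFERENCES gene(gene_id),
          evidence  TEXT,               -- e.g. 'ChIP-seq', 'Y1H'
          PRIMARY KEY (regulator, target, evidence)
      );
      """)
      conn.executemany("INSERT INTO gene VALUES (?, ?)",
                       [("G1", "MYB75"), ("G2", "TT8")])
      conn.execute("INSERT INTO regulation VALUES ('G1', 'G2', 'Y1H')")

      # Stage 4 would expose queries like this one through a web front end.
      for row in conn.execute("""
              SELECT r.symbol, t.symbol, reg.evidence
              FROM regulation reg
              JOIN gene r ON r.gene_id = reg.regulator
              JOIN gene t ON t.gene_id = reg.target"""):
          print(row)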

  9. A Project to Computerize Performance Objectives and Criterion-Referenced Measures in Occupational Education for Research and Determination of Applicability to Handicapped Learners. Final Report.

    ERIC Educational Resources Information Center

    Lee, Connie W.; Hinson, Tony M.

    This publication is the final report of a 21-month project designed to (1) expand and refine the computer capabilities of the Vocational-Technical Education Consortium of States (V-TECS) to ensure rapid data access for generating routine and special occupational data-based reports; (2) develop and implement a computer storage and retrieval system…

  10. Certifiable database generation for SVS

    NASA Astrophysics Data System (ADS)

    Schiefele, Jens; Damjanovic, Dejan; Kubbat, Wolfgang

    2000-06-01

    In future aircraft cockpits, SVS will be used to display 3D physical and virtual information to pilots. Prototype and production Synthetic Vision Displays (SVDs) from Euro Telematic, UPS Advanced Technologies, Universal Avionics, VDO-Luftfahrtgeratewerk, and NASA are reviewed. Terrain, obstacle, navigation, and airport data are needed as data sources; Jeppesen-Sanderson, Inc. and Darmstadt Univ. of Technology are currently developing certifiable methods for the acquisition, validation, and processing of terrain, obstacle, and airport databases. The acquired data will be integrated into a High-Quality Database (HQ-DB). This database is the master repository; it contains all information relevant to all types of aviation applications. From the HQ-DB, SVS-relevant data are retrieved, converted, decimated, and adapted into an SVS Real-Time Onboard Database (RTO-DB). The process of data acquisition, verification, and processing will be defined in a way that allows certification within DO-200A and new RTCA/EUROCAE standards for airport and terrain data. The open formats proposed will be established and evaluated for industrial usability. Finally, a NASA-industry cooperation to develop industrial SVS products under the umbrella of the NASA Aviation Safety Program (ASP) is introduced. A key element of the SVS NASA-ASP work is the Jeppesen-led task to develop methods for worldwide database generation and certification. Jeppesen will build three airport databases that will be used in flight trials with NASA aircraft.
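
    The HQ-DB to RTO-DB step described above is, at heart, a controlled reduction of a master dataset into an onboard one. The Python sketch below shows one such reduction (plain block decimation of a terrain elevation tile, with invented values); the actual DO-200A-conformant toolchain is of course far more involved:

      # Toy HQ-DB -> RTO-DB step: decimate a dense terrain elevation grid by an
      # integer factor, keeping the maximum elevation per block so terrain is
      # never under-reported to the display. Grid values are made up.

      def decimate_max(grid, factor):
          """Reduce an elevation grid by `factor`, keeping block-wise maxima."""
          rows, cols = len(grid), len(grid[0])
          return [
              [max(grid[r + dr][c + dc]
                   for dr in range(factor) for dc in range(factor))
               for c in range(0, cols, factor)]
              for r in range(0, rows, factor)
          ]

      hq_tile = [
          [120, 125, 130, 128],
          [118, 140, 135, 126],
          [110, 112, 150, 149],
          [108, 109, 148, 152],
      ]
      print(decimate_max(hq_tile, 2))  # [[140, 135], [112, 152]]

    Taking the block maximum rather than the mean is a conservative design choice for a safety display: decimation may coarsen terrain but never lower it.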

  11. Development of an NTD tool for vanpool services : final report, November 2008.

    DOT National Transportation Integrated Search

    2009-11-01

    The National Transit Database has requirements on how providers of vanpool services collect and report their data on service consumed and service provided. Current practices, however, often deviate from these requirements. Such deviations lead to poo...

  12. Computational Modeling as a Design Tool in Microelectronics Manufacturing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    Plans to introduce pilot lines or fabs for 300 mm processing are in progress. IC technology is simultaneously moving towards 0.25/0.18 micron. The convergence of these two trends places unprecedentedly stringent demands on processes and equipment. More than ever, computational modeling is called upon to play a complementary role in equipment and process design. The pace of hardware/process development needs a matching pace in software development: an aggressive move towards developing "virtual reactors" is desirable and essential to reduce design cycles and costs. This goal has three elements: a reactor-scale model, a feature-level model, and a database of physical/chemical properties. With these elements coupled, the complete model should function as a design aid in a CAD environment. This talk aims to describe the various elements. At the reactor level, continuum, DSMC (or particle), and hybrid models will be discussed and compared using examples of plasma and thermal process simulations. In microtopography evolution, approaches such as level set methods compete with conventional geometric models. Regardless of the approach, the reliance on empiricism is to be eliminated through coupling to the reactor model and computational surface science. This coupling poses challenging issues of orders-of-magnitude variation in length and time scales. Finally, database development has fallen behind; the current situation is rapidly aggravated by the ever newer chemistries emerging to meet process metrics. The virtual reactor would be a useless concept without an accompanying reliable database that consists of: thermal reaction pathways and rate constants, electron-molecule cross sections, thermochemical properties, transport properties, and finally, surface data on the interaction of radicals, atoms and ions with various surfaces. Large-scale computational chemistry efforts are critical, as experiments alone cannot meet database needs due to the difficulties associated with such controlled experiments and costs.

  13. IRIS Toxicological Review of Hexahydro-1,3,5-Trinitro-1,3,5 ...

    EPA Pesticide Factsheets

    EPA is developing an Integrated Risk Information System (IRIS) assessment of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) and has released the draft assessment for public comment. When final, the assessment will appear on the IRIS database. EPA is undertaking an update of the Integrated Risk Information System (IRIS) health assessment for RDX. The outcome of this project is an updated Toxicological Review and IRIS Summary for RDX that will be entered into the IRIS database.

  14. IRIS Toxicological Review of Benzo[a]pyrene (Public ...

    EPA Pesticide Factsheets

    EPA is developing an Integrated Risk Information System (IRIS) assessment of benzo[a]pyrene and has released the draft assessment for public comment and external peer review. When final, the assessment will appear on the IRIS database. EPA is undertaking an update of the Integrated Risk Information System (IRIS) health assessment for benzo[a]pyrene (BaP). The outcome of this project is an updated Toxicological Review and IRIS Summary for BaP that will be entered into the IRIS database.

  15. Integrating the Allen Brain Institute Cell Types Database into Automated Neuroscience Workflow.

    PubMed

    Stockton, David B; Santamaria, Fidel

    2017-10-01

    We developed software tools to download, extract features, and organize the Cell Types Database from the Allen Brain Institute (ABI) in order to integrate its whole cell patch clamp characterization data into the automated modeling/data analysis cycle. To expand the potential user base we employed both Python and MATLAB. The basic set of tools downloads selected raw data and extracts cell, sweep, and spike features, using ABI's feature extraction code. To facilitate data manipulation we added a tool to build a local specialized database of raw data plus extracted features. Finally, to maximize automation, we extended our NeuroManager workflow automation suite to include these tools plus a separate investigation database. The extended suite allows the user to integrate ABI experimental and modeling data into an automated workflow deployed on heterogeneous computer infrastructures, from local servers, to high performance computing environments, to the cloud. Since our approach is focused on workflow procedures our tools can be modified to interact with the increasing number of neuroscience databases being developed to cover all scales and properties of the nervous system.
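
    The kind of download/extract step these tools automate can be sketched with the publicly documented AllenSDK Python package (the ABI's own client library). The snippet below is an illustration under that assumption, not the authors' NeuroManager code:

      # Download cell metadata and precomputed electrophysiology features from
      # the Allen Cell Types Database, then report what is available locally.
      # Requires `pip install allensdk`.
      from allensdk.core.cell_types_cache import CellTypesCache

      ctc = CellTypesCache(manifest_file="cell_types/manifest.json")

      cells = ctc.get_cells()              # list of cell metadata records
      features = ctc.get_ephys_features()  # per-cell electrophysiology features

      print(f"{len(cells)} cells, {len(features)} feature records")
      # A local "specialized database" could then be built by joining these
      # two result sets on the cell id and writing them to e.g. sqlite.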

  16. Tests of methods for evaluating bibliographic databases: an analysis of the National Library of Medicine's handling of literatures in the medical behavioral sciences.

    PubMed

    Griffith, B C; White, H D; Drott, M C; Saye, J D

    1986-07-01

    This article reports on five separate studies designed for the National Library of Medicine (NLM) to develop and test methodologies for evaluating the products of large databases. The methodologies were tested on literatures of the medical behavioral sciences (MBS). One of these studies examined how well NLM covered MBS monographic literature using CATLINE and OCLC. Another examined MBS journal and serial literature coverage in MEDLINE and other MBS-related databases available through DIALOG. These two studies used 1010 items derived from the reference lists of sixty-one journals, and tested for gaps and overlaps in coverage in the various databases. A third study examined the quality of the indexing NLM provides to MBS literatures and developed a measure of indexing as a system component. The final two studies explored how well MEDLINE retrieved documents on topics submitted by MBS professionals and how online searchers viewed MEDLINE (and other systems and databases) in handling MBS topics. The five studies yielded both broad research outcomes and specific recommendations to NLM.
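
    The gap-and-overlap testing described here reduces to simple set arithmetic over a reference-list sample; the sketch below illustrates the kind of measure involved, with invented database names and item identifiers.

    ```python
    # Coverage and overlap of a reference-list sample across databases,
    # in the spirit of the CATLINE/OCLC and MEDLINE/DIALOG comparisons.
    sample = {f"item{i}" for i in range(1, 11)}            # cited items to look up
    holdings = {
        "db_a": {"item1", "item2", "item3", "item5", "item8"},
        "db_b": {"item2", "item3", "item4", "item9"},
    }

    for name, found in holdings.items():
        print(f"{name} coverage: {len(found & sample) / len(sample):.0%}")

    both = holdings["db_a"] & holdings["db_b"]             # items in both databases
    either = holdings["db_a"] | holdings["db_b"]           # items in at least one
    print(f"overlap: {len(both)} items; "
          f"union coverage: {len(either & sample) / len(sample):.0%}")
    ```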

  17. Estimation of Solvation Quantities from Experimental Thermodynamic Data: Development of the Comprehensive CompSol Databank for Pure and Mixed Solutes

    NASA Astrophysics Data System (ADS)

    Moine, Edouard; Privat, Romain; Sirjean, Baptiste; Jaubert, Jean-Noël

    2017-09-01

    The Gibbs energy of solvation measures the affinity of a solute for its solvent and is thus a key property for the selection of an appropriate solvent for a chemical synthesis or a separation process. More fundamentally, Gibbs energies of solvation are reference data of choice for developing and benchmarking molecular models that predict solvation effects. The Comprehensive Solvation—CompSol—database was developed with the ambition of providing very large sets of new experimental solvation chemical-potential, solvation-entropy, and solvation-enthalpy data for pure and mixed components, covering extended temperature ranges. For mixed compounds, the solvation quantities were generated at infinite dilution by combining experimental values of pure-component and binary-mixture thermodynamic properties. Three types of binary-mixture properties were considered: partition coefficients, activity coefficients at infinite dilution, and Henry's-law constants. A rigorous methodology was implemented to select data at appropriate conditions of temperature, pressure, and concentration for the estimation of solvation data. Finally, our comprehensive CompSol database contains 21 671 data associated with 1969 pure species and 70 062 data associated with 14 102 binary mixtures (including 760 solvation data related to the ionic-liquid class of solvents). On the basis of the very large amount of experimental data contained in the CompSol database, we finally discuss how solvation energies are influenced by hydrogen-bonding association effects.
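
    As one illustration of how infinite-dilution solvation quantities can be derived from binary-mixture properties: in the commonly used Ben-Naim convention (whether CompSol adopts exactly this convention should be checked against the paper), the solvation Gibbs energy of solute i infinitely diluted in solvent j follows from the Henry's-law constant as

    ```latex
    % Solvation Gibbs energy from the Henry's-law constant H_{i,j} (in Pa),
    % with v_j^{liq} the molar volume of the liquid solvent (Ben-Naim convention).
    \Delta_{\mathrm{solv}} g_{i,j}^{\infty} = R T \,
    \ln\!\left( \frac{H_{i,j}\, v_j^{\mathrm{liq}}}{R T} \right)
    ```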

  18. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.

    PubMed

    Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2017-09-13

    Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
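
    A toy illustration of the common paradigm the review describes — in-silico digestion of a protein database followed by matching of observed fragment masses against theoretical spectra — under heavy simplifications (rounded monoisotopic masses, b-ions only, no modifications, shared-peak-count scoring); all sequences and peaks are invented.

    ```python
    # Toy sequence-database search: tryptic digestion + shared-peak-count scoring.
    AA = {"G": 57.02, "A": 71.04, "S": 87.03, "P": 97.05, "V": 99.07, "K": 128.09,
          "L": 113.08, "R": 156.10, "F": 147.07, "E": 129.04}  # residue masses, Da

    def tryptic_peptides(protein):
        """Cleave after K/R (ignoring proline rules) - the usual in-silico digest."""
        pep, out = "", []
        for aa in protein:
            pep += aa
            if aa in "KR":
                out.append(pep)
                pep = ""
        if pep:
            out.append(pep)
        return out

    def b_ions(peptide):
        """Prefix (b-ion) fragment masses for a naive theoretical spectrum."""
        m, out = 0.0, []
        for aa in peptide[:-1]:
            m += AA[aa]
            out.append(round(m + 1.01, 2))       # +1.01 ~ proton
        return out

    def score(observed, peptide, tol=0.02):
        theo = b_ions(peptide)
        return sum(any(abs(o - t) <= tol for t in theo) for o in observed)

    db = {"P1": "GASPKVLRFE", "P2": "AVKGSPR"}   # fake protein database
    observed = [129.07, 200.11, 287.14]          # fake MS2 peak list
    for prot, seq in db.items():
        for pep in tryptic_peptides(seq):
            print(prot, pep, score(observed, pep))
    ```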

  19. ETHNOS: A versatile electronic tool for the development and curation of national genetic databases

    PubMed Central

    2010-01-01

    National and ethnic mutation databases (NEMDBs) are emerging online repositories that record extensive information about the genetic heterogeneity of an ethnic group or population. These resources facilitate the provision of genetic services and provide a comprehensive list of genomic variations among different populations; as such, they enhance awareness of the various genetic disorders. Here, we describe the features of the ETHNOS software, a simple but versatile tool based on a flat-file database that is specifically designed for the development and curation of NEMDBs. ETHNOS is freely available software that runs more than half of the NEMDBs currently available. Given the emerging need for NEMDBs in genetic testing services, and the fact that ETHNOS is the only off-the-shelf software available for NEMDB development and curation, its adoption in subsequent NEMDB development would contribute towards data content uniformity, in contrast to the diverse contents and quality of the available gene (locus)-specific databases. Finally, we allude to the potential applications of NEMDBs, not only as worldwide central allele-frequency repositories but also, and most importantly, as data warehouses of individual-level genomic data, allowing for a comprehensive ethnicity-specific documentation of genomic variation. PMID:20650823

  20. ETHNOS: A versatile electronic tool for the development and curation of national genetic databases.

    PubMed

    van Baal, Sjozef; Zlotogora, Joël; Lagoumintzis, George; Gkantouna, Vassiliki; Tzimas, Ioannis; Poulas, Konstantinos; Tsakalidis, Athanassios; Romeo, Giovanni; Patrinos, George P

    2010-06-01

    National and ethnic mutation databases (NEMDBs) are emerging online repositories that record extensive information about the genetic heterogeneity of an ethnic group or population. These resources facilitate the provision of genetic services and provide a comprehensive list of genomic variations among different populations; as such, they enhance awareness of the various genetic disorders. Here, we describe the features of the ETHNOS software, a simple but versatile tool based on a flat-file database that is specifically designed for the development and curation of NEMDBs. ETHNOS is freely available software that runs more than half of the NEMDBs currently available. Given the emerging need for NEMDBs in genetic testing services, and the fact that ETHNOS is the only off-the-shelf software available for NEMDB development and curation, its adoption in subsequent NEMDB development would contribute towards data content uniformity, in contrast to the diverse contents and quality of the available gene (locus)-specific databases. Finally, we allude to the potential applications of NEMDBs, not only as worldwide central allele-frequency repositories but also, and most importantly, as data warehouses of individual-level genomic data, allowing for a comprehensive ethnicity-specific documentation of genomic variation.

  1. Astronomical Software Directory Service

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Payne, H.; Hayes, J.

    1998-01-01

    This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching.

  2. Core Goals and Objectives of the University of Connecticut School of Medicine: The Product and the Process.

    ERIC Educational Resources Information Center

    Gjerde, Craig L.; Sheehan, T. Joseph

    The final report of the University of Connecticut Health Center curriculum project entitled "A Data-Based Approach to Developing a Curriculum" is presented. The aims of the project were these: (1) to develop procedures for judging and cross-judging the goals and objectives of undergraduate medical education; (2) to implement these…

  3. Evaluation and Development of Pavement Scores, Performance Models and Needs Estimates for the TXDOT Pavement Management Information System : Final Report

    DOT National Transportation Integrated Search

    2012-10-01

    This project conducted a thorough review of the existing Pavement Management Information System (PMIS) database, : performance models, needs estimates, utility curves, and scores calculations, as well as a review of District practices : concerning th...

  4. Scientists at Work. Final Report.

    ERIC Educational Resources Information Center

    Education Turnkey Systems, Inc., Falls Church, VA.

    This report summarizes activities related to the development, field testing, evaluation, and marketing of the "Scientists at Work" program which combines computer assisted instruction with database tools to aid cognitively impaired middle and early high school children in learning and applying thinking skills to science. The brief report reviews…

  5. VitisExpDB: a database resource for grape functional genomics.

    PubMed

    Doddapaneni, Harshavardhan; Lin, Hong; Walker, M Andrew; Yao, Jiqiang; Civerolo, Edwin L

    2008-02-28

    The family Vitaceae consists of many different grape species that grow in a range of climatic conditions. In the past few years, several studies have generated functional genomic information on different Vitis species and cultivars, including the European grape vine, Vitis vinifera. Our goal is to develop a comprehensive web data source for Vitaceae. VitisExpDB is an online MySQL-PHP driven relational database that houses annotated EST and gene expression data for V. vinifera and non-vinifera grape species and varieties. Currently, the database stores approximately 320,000 EST sequences derived from 8 species/hybrids, their annotation details (BLAST top match), and Gene Ontology-based structured vocabulary. Putative homologs for each EST in other species and varieties, along with information on their percent nucleotide identities, phylogenetic relationships and common primers, can be retrieved. The database also includes information on probe sequence and annotation features of the high-density 60-mer gene expression chip consisting of a non-redundant set of approximately 20,000 ESTs. Finally, the database includes 14 processed global microarray expression profile sets; data from 12 of these expression profile sets have been mapped onto metabolic pathways. A user-friendly web interface with multiple search indices and extensively hyperlinked result features that permit efficient data retrieval has been developed. Several online bioinformatics tools that interact with the database, along with other sequence analysis tools, have been added. In addition, users can submit their ESTs to the database. The database provides a genomic resource to the grape community for functional analysis of genes in the collection and for grape genome annotation and gene-function identification. The VitisExpDB database is available through our website http://cropdisease.ars.usda.gov/vitis_at/main-page.htm.
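
    A hypothetical miniature of the kind of relational layout such an EST resource implies (these tables, columns and rows are illustrative, not the actual VitisExpDB schema), with a homolog-retrieval query of the sort the web interface exposes:

    ```python
    import sqlite3

    # Hypothetical miniature of an EST/annotation store with cross-species homologs.
    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE est     (est_id TEXT PRIMARY KEY, species TEXT, blast_top_match TEXT);
    CREATE TABLE homolog (est_id TEXT, other_est TEXT, pct_identity REAL);
    """)
    db.executemany("INSERT INTO est VALUES (?, ?, ?)", [
        ("VV0001", "V. vinifera",  "chitinase"),
        ("VR0100", "V. rupestris", "chitinase"),
    ])
    db.execute("INSERT INTO homolog VALUES ('VV0001', 'VR0100', 92.5)")

    # Retrieve putative homologs of one EST together with percent identity.
    rows = db.execute("""
        SELECT h.other_est, e.species, h.pct_identity
        FROM homolog h JOIN est e ON e.est_id = h.other_est
        WHERE h.est_id = 'VV0001'""").fetchall()
    print(rows)   # [('VR0100', 'V. rupestris', 92.5)]
    ```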

  6. VitisExpDB: A database resource for grape functional genomics

    PubMed Central

    Doddapaneni, Harshavardhan; Lin, Hong; Walker, M Andrew; Yao, Jiqiang; Civerolo, Edwin L

    2008-01-01

    Background: The family Vitaceae consists of many different grape species that grow in a range of climatic conditions. In the past few years, several studies have generated functional genomic information on different Vitis species and cultivars, including the European grape vine, Vitis vinifera. Our goal is to develop a comprehensive web data source for Vitaceae. Description: VitisExpDB is an online MySQL-PHP driven relational database that houses annotated EST and gene expression data for V. vinifera and non-vinifera grape species and varieties. Currently, the database stores ~320,000 EST sequences derived from 8 species/hybrids, their annotation details (BLAST top match), and Gene Ontology-based structured vocabulary. Putative homologs for each EST in other species and varieties, along with information on their percent nucleotide identities, phylogenetic relationships and common primers, can be retrieved. The database also includes information on probe sequence and annotation features of the high-density 60-mer gene expression chip consisting of a non-redundant set of ~20,000 ESTs. Finally, the database includes 14 processed global microarray expression profile sets; data from 12 of these expression profile sets have been mapped onto metabolic pathways. A user-friendly web interface with multiple search indices and extensively hyperlinked result features that permit efficient data retrieval has been developed. Several online bioinformatics tools that interact with the database, along with other sequence analysis tools, have been added. In addition, users can submit their ESTs to the database. Conclusion: The database provides a genomic resource to the grape community for functional analysis of genes in the collection and for grape genome annotation and gene-function identification. The VitisExpDB database is available through our website http://cropdisease.ars.usda.gov/vitis_at/main-page.htm. PMID:18307813

  7. On patterns and re-use in bioinformatics databases.

    PubMed

    Bell, Michael J; Lord, Phillip

    2017-09-01

    As the quantity of data being deposited into biological databases continues to increase, it becomes ever more vital to develop methods that enable us to understand this data and ensure that the knowledge is correct. It is widely held that data percolates between different databases, which causes particular concerns for data correctness; if this percolation occurs, incorrect data in one database may eventually affect many others while, conversely, corrections in one database may fail to percolate to others. In this paper, we test this widely held belief by directly looking for sentence reuse both within and between databases. Further, we investigate patterns of how sentences are reused over time. Finally, we consider the limitations of this form of analysis and the implications that this may have for bioinformatics database design. We show that reuse of annotation is common within many different databases, and that there is also a detectable level of reuse between databases. In addition, we show that there are patterns of reuse that have previously been shown to be associated with percolation errors. Analytical software is available on request. phillip.lord@newcastle.ac.uk. © The Author(s) 2017. Published by Oxford University Press.
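
    Exact sentence reuse of the kind searched for here can be found with a simple normalize-and-hash pass; the sketch below (toy annotation strings, invented database names) illustrates the idea rather than the authors' actual pipeline.

    ```python
    import re
    from collections import defaultdict

    def sentences(text):
        """Crude sentence split + normalization for exact-reuse matching."""
        for s in re.split(r"(?<=[.!?])\s+", text):
            s = re.sub(r"\s+", " ", s).strip().lower()
            if s:
                yield s

    annotations = {  # hypothetical database -> annotation text
        "db_x": "Binds ATP. May play a role in cell division.",
        "db_y": "May play a role in cell division. Catalyzes phosphorylation.",
    }

    seen = defaultdict(set)                      # sentence -> databases using it
    for db, text in annotations.items():
        for s in sentences(text):
            seen[s].add(db)

    reused = {s: dbs for s, dbs in seen.items() if len(dbs) > 1}
    print(reused)   # {'may play a role in cell division.': {'db_x', 'db_y'}}
    ```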

  8. On patterns and re-use in bioinformatics databases

    PubMed Central

    Bell, Michael J.; Lord, Phillip

    2017-01-01

    Motivation: As the quantity of data being deposited into biological databases continues to increase, it becomes ever more vital to develop methods that enable us to understand this data and ensure that the knowledge is correct. It is widely held that data percolates between different databases, which causes particular concerns for data correctness; if this percolation occurs, incorrect data in one database may eventually affect many others while, conversely, corrections in one database may fail to percolate to others. In this paper, we test this widely held belief by directly looking for sentence reuse both within and between databases. Further, we investigate patterns of how sentences are reused over time. Finally, we consider the limitations of this form of analysis and the implications that this may have for bioinformatics database design. Results: We show that reuse of annotation is common within many different databases, and that there is also a detectable level of reuse between databases. In addition, we show that there are patterns of reuse that have previously been shown to be associated with percolation errors. Availability and implementation: Analytical software is available on request. Contact: phillip.lord@newcastle.ac.uk PMID:28525546

  9. Partial Updating of TSCA Inventory DataBase; Production and Site Reports; Final Rule

    EPA Pesticide Factsheets

    This final rule provides for a partial updating of the TSCA Inventory database. It requires manufacturers and importers of certain chemical substances included on the TSCA Chemical Substances Inventory to report current data on production volume, plant site, and related information.

  10. Disbiome database: linking the microbiome to disease.

    PubMed

    Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart

    2018-06-04

    Recent research has provided fascinating indications and evidence that host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database that collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomies. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information from the studies published. The strength of this database lies in the combination of cross-references to other databases, which enable both specific and diverse search strategies within Disbiome, and human annotation, which ensures a simple and structured presentation of the available data.

  11. Fifteen hundred guidelines and growing: the UK database of clinical guidelines.

    PubMed

    van Loo, John; Leonard, Niamh

    2006-06-01

    The National Library for Health offers a comprehensive searchable database of nationally approved clinical guidelines, called the Guidelines Finder. This resource, commissioned in 2002, is managed and developed by the University of Sheffield Health Sciences Library. The authors introduce the historical and political dimension of guidelines and the nature of guidelines as a mechanism to ensure clinical effectiveness in practice. The article then outlines the maintenance and organisation of the Guidelines Finder database itself, the criteria for selection, who publishes guidelines and guideline formats, usage of the Guidelines Finder service and finally looks at some lessons learnt from a local library offering a national service. Clinical guidelines are central to effective clinical practice at the national, organisational and individual level. The Guidelines Finder is one of the most visited resources within the National Library for Health and is successful in answering information needs related to specific patient care, clinical research, guideline development and education.

  12. [Algorithms for the identification of hospital stays due to osteoporotic femoral neck fractures in European medical administrative databases using ICD-10 codes: A non-systematic review of the literature].

    PubMed

    Caillet, P; Oberlin, P; Monnet, E; Guillon-Grammatico, L; Métral, P; Belhassen, M; Denier, P; Banaei-Bouchareb, L; Viprey, M; Biau, D; Schott, A-M

    2017-10-01

    Osteoporotic hip fractures (OHF) are associated with significant morbidity and mortality. The French medico-administrative database (SNIIRAM) offers an interesting opportunity to improve the management of OHF. However, the validity of studies conducted with this database relies heavily on the quality of the algorithm used to detect OHF. The aim of the REDSIAM network is to facilitate the use of the SNIIRAM database. The main objective of this study was to present and discuss several OHF-detection algorithms that could be used with this database. A non-systematic literature search was performed. The Medline database was explored for the period January 2005-August 2016. A snowball search was then carried out from the articles included, and field experts were contacted. The extraction was conducted using the chart developed by the REDSIAM network's "Methodology" task force. The ICD-10 codes used to detect OHF are mainly S72.0, S72.1, and S72.2. The performance of these algorithms is at best partially validated. Complementary use of medical and surgical procedure codes would affect their performance. Finally, few studies described how they dealt with fractures of non-osteoporotic origin, re-hospitalizations, and potential contralateral fracture cases. Authors in the literature encourage the use of ICD-10 codes S72.0 to S72.2 in algorithms for OHF detection; these are the codes most frequently used for OHF in France. Depending on the study objectives, other ICD-10 codes and medical and surgical procedure codes could usefully be considered for inclusion in the algorithm. Detection and management of duplicates and non-osteoporotic fractures should be considered in the process. Finally, when a study is based on such an algorithm, all these points should be described precisely in the publication. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
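
    A schematic of such a detection algorithm — keep stays whose ICD-10 code starts with S72.0–S72.2, then drop probable re-hospitalizations for the same fracture within a fixed window — with hypothetical records and an arbitrary 90-day window; the published algorithms differ in exactly these choices.

    ```python
    from datetime import date

    stays = [  # hypothetical hospital-stay records: (patient, admission, ICD-10 code)
        ("p1", date(2016, 1, 10), "S72.0"),
        ("p1", date(2016, 2, 1),  "S72.1"),   # likely re-hospitalization, same episode
        ("p1", date(2016, 9, 5),  "S72.2"),   # plausibly a new (contralateral?) fracture
        ("p2", date(2016, 3, 3),  "S82.1"),   # not a hip fracture code
    ]

    OHF_PREFIXES, WINDOW_DAYS = ("S72.0", "S72.1", "S72.2"), 90

    def detect_ohf(stays):
        last_event = {}                        # patient -> date of last counted OHF
        events = []
        for patient, adm, code in sorted(stays, key=lambda s: (s[0], s[1])):
            if not code.startswith(OHF_PREFIXES):
                continue
            prev = last_event.get(patient)
            if prev is None or (adm - prev).days > WINDOW_DAYS:
                events.append((patient, adm, code))    # count as an incident OHF
            last_event[patient] = adm
        return events

    print(detect_ohf(stays))   # p1: 2016-01-10 and 2016-09-05 kept; 2016-02-01 dropped
    ```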

  13. Web application for detailed real-time database transaction monitoring for CMS condition data

    NASA Astrophysics Data System (ADS)

    de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2012-12-01

    In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases hosted on several servers, both inside and outside the CERN network. In this scenario, monitoring the different databases is a crucial database-administration issue, since different information may be required depending on users' tasks such as data transfer, inspection, planning, and security. We present here a web application based on a Python web framework and Python modules for data mining. To customize the GUI, we record traces of user interactions that are used to build use-case models. In addition, the application detects errors in database transactions (for example, identifying a mistake made by a user, an application failure, an unexpected network shutdown, or a Structured Query Language (SQL) statement error) and provides warning messages from the different users' perspectives. Finally, to fulfill the requirements of the CMS experiment community and to keep up with new developments in Web client tools, the application was further developed and new features were deployed.
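
    A minimal sketch of the transaction-monitoring idea — execute each SQL statement, capture failures, and surface a per-user warning instead of crashing — using sqlite3 as a stand-in for the ORACLE back end; the statements, users and warning format are hypothetical.

    ```python
    import sqlite3

    def monitored_execute(conn, user, sql, params=()):
        """Execute a statement; on failure, record a warning instead of crashing."""
        try:
            cur = conn.execute(sql, params)
            conn.commit()
            return cur, None
        except sqlite3.Error as exc:
            conn.rollback()
            return None, f"[{user}] transaction failed: {exc}"

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE conditions (tag TEXT PRIMARY KEY, payload TEXT)")

    for user, sql, params in [
        ("alice", "INSERT INTO conditions VALUES (?, ?)", ("run1", "calib")),
        ("bob",   "INSERT INTO conditions VALUES (?, ?)", ("run1", "dup")),  # PK clash
        ("carol", "SELEC * FROM conditions", ()),                            # SQL typo
    ]:
        _, warn = monitored_execute(conn, user, sql, params)
        if warn:
            print(warn)
    ```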

  14. Development of a Relative Potency Factor (Rpf) Approach for Polycyclic Aromatic Hydrocarbon (PAH) Mixtures (External Review Draft)

    EPA Science Inventory

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of polycyclic aromatic hydrocarbon (PAH) mixtures that when finalized will appear on the Integrated Risk Information System (IRIS) database. ...

  15. IRIS Toxicological Review of Hexahydro-1,3,5-Trinitro-1,3,5-Triazine (RDX) (Public Comment Draft)

    EPA Science Inventory

    EPA is developing an Integrated Risk Information System (IRIS) assessment of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) and has released the draft assessment for public comment. When final, the assessment will appear on the IRIS database.

  16. A comprehensive view of the web-resources related to sericulture

    PubMed Central

    Singh, Deepika; Chetia, Hasnahana; Kabiraj, Debajyoti; Sharma, Swagata; Kumar, Anil; Sharma, Pragya; Deka, Manab; Bora, Utpal

    2016-01-01

    Recent progress in the field of sequencing and analysis has led to a tremendous spike in data and the development of data-science tools. One outcome of this scientific progress is the development of numerous databases, which are gaining popularity in all disciplines of biology, including sericulture. As economically important organisms, silkworms are studied extensively for their numerous applications in textiles, biomaterials, biomimetics, etc. Similarly, host plants, pests, pathogens, etc. are also being probed to understand seri-resources more efficiently. These studies have led to the generation of numerous sericulture-related databases that are extremely helpful for the scientific community. In this article, we review all the available online resources on the silkworm and its related organisms, including databases as well as informative websites. We examine their basic features and their impact on research through citation-count analysis, and finally discuss the role of emerging sequencing and analysis technologies in the field of seri-data science. As an outcome of this review, a web portal named SeriPort has been created, which will act as an index for the various sericulture-related databases and web resources available in cyberspace. Database URL: http://www.seriport.in/ PMID:27307138

  17. Kin-Driver: a database of driver mutations in protein kinases.

    PubMed

    Simonetti, Franco L; Tornador, Cristian; Nabau-Moretó, Nuria; Molina-Vila, Miguel A; Marino-Buslje, Cristina

    2014-01-01

    Somatic mutations in protein kinases (PKs) are frequent driver events in many human tumors, while germ-line mutations are associated with hereditary diseases. Here we present Kin-driver, the first database that compiles driver mutations in PKs with experimental evidence demonstrating their functional role. Kin-driver is a manual expert-curated database that pays special attention to activating mutations (AMs) and can serve as a validation set for developing new-generation tools focused on the prediction of gain-of-function driver mutations. It also offers an easy and intuitive environment to facilitate the visualization and analysis of mutations in PKs. Because all mutations are mapped onto a multiple sequence alignment, analogue positions between kinases can be identified and tentative new mutations can be proposed for study by transferring annotation. Finally, our database can also be of use to clinical and translational laboratories, helping them identify uncommon AMs that may correlate with response to new antitumor drugs. The website was developed using PHP and JavaScript, which are supported by all major browsers; the database was built using MySQL server. Kin-driver is available at: http://kin-driver.leloir.org.ar/ © The Author(s) 2014. Published by Oxford University Press.
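
    The alignment-based transfer described above boils down to mapping a residue index through gapped sequences; a small illustration with made-up kinase fragments (not actual Kin-driver data):

    ```python
    # Map a mutated residue position from one kinase to the analogue position in
    # another, via their rows in a multiple sequence alignment ('-' = gap).
    aln = {
        "KIN_A": "MKV-LDTGA",
        "KIN_B": "MRVQLD-GA",
    }

    def seq_to_col(row, pos):
        """Ungapped residue index (0-based) -> alignment column."""
        count = -1
        for col, ch in enumerate(row):
            if ch != "-":
                count += 1
                if count == pos:
                    return col
        raise IndexError(pos)

    def col_to_seq(row, col):
        """Alignment column -> ungapped residue index, or None if a gap."""
        if row[col] == "-":
            return None
        return sum(ch != "-" for ch in row[:col])

    col = seq_to_col(aln["KIN_A"], 4)            # residue 5 of KIN_A ('D')
    print(col, col_to_seq(aln["KIN_B"], col))    # analogue position in KIN_B
    ```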

  18. Translation from the collaborative OSM database to cartography

    NASA Astrophysics Data System (ADS)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items that are very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is under way to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The research project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector-tile web map and a mapping method for producing paper maps on a regional scale. The vector-tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly drawn automatically; the drawing automation and data management are part of the map-creation process, as is the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.
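
    At its core, the OSM-to-cartography translation is a mapping from tag combinations to drawing classes; the sketch below shows the idea with an ordered rule table (the tag keys follow real OSM conventions, but the style names and rules are invented, not Michelin's actual guidelines):

    ```python
    # Translate OSM tags into cartographic style classes via an ordered rule table.
    RULES = [  # (required tag key/value pairs, style class) - first match wins
        ({"highway": "motorway"},    "road_major"),
        ({"highway": "residential"}, "road_minor"),
        ({"amenity": "hospital"},    "poi_health"),
        ({"leisure": "park"},        "area_green"),
    ]

    def style_for(tags):
        for required, style in RULES:
            if all(tags.get(k) == v for k, v in required.items()):
                return style
        return None                              # unmapped feature: omit or log

    features = [
        {"highway": "motorway", "ref": "A31"},
        {"leisure": "park", "name": "Parc de la Tête d'Or"},
        {"barrier": "hedge"},                    # no rule -> dropped from the map
    ]
    print([style_for(f) for f in features])      # ['road_major', 'area_green', None]
    ```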

  19. Tomato Expression Database (TED): a suite of data presentation and analysis tools

    PubMed Central

    Fei, Zhangjun; Tang, Xuemei; Alba, Rob; Giovannoni, James

    2006-01-01

    The Tomato Expression Database (TED) includes three integrated components. The Tomato Microarray Data Warehouse serves as a central repository for raw gene expression data derived from the public tomato cDNA microarray. In addition to expression data, TED stores experimental design and array information in compliance with the MIAME guidelines and provides web interfaces for researchers to retrieve data for their own analysis and use. The Tomato Microarray Expression Database contains normalized and processed microarray data for ten time points, with nine pair-wise comparisons, during fruit development and ripening in a normal tomato variety and in nearly isogenic single-gene mutants impacting fruit development and ripening. Finally, the Tomato Digital Expression Database contains raw and normalized digital expression (EST abundance) data derived from analysis of the complete public tomato EST collection, containing >150,000 ESTs derived from 27 different non-normalized EST libraries. This last component also includes tools for the comparison of tomato and Arabidopsis digital expression data. A set of query interfaces and analysis and visualization tools has been developed and incorporated into TED; these aid users in identifying and deciphering biologically important information from our datasets. TED can be accessed at http://ted.bti.cornell.edu. PMID:16381976

  20. Tomato Expression Database (TED): a suite of data presentation and analysis tools.

    PubMed

    Fei, Zhangjun; Tang, Xuemei; Alba, Rob; Giovannoni, James

    2006-01-01

    The Tomato Expression Database (TED) includes three integrated components. The Tomato Microarray Data Warehouse serves as a central repository for raw gene expression data derived from the public tomato cDNA microarray. In addition to expression data, TED stores experimental design and array information in compliance with the MIAME guidelines and provides web interfaces for researchers to retrieve data for their own analysis and use. The Tomato Microarray Expression Database contains normalized and processed microarray data for ten time points, with nine pair-wise comparisons, during fruit development and ripening in a normal tomato variety and in nearly isogenic single-gene mutants impacting fruit development and ripening. Finally, the Tomato Digital Expression Database contains raw and normalized digital expression (EST abundance) data derived from analysis of the complete public tomato EST collection, containing >150,000 ESTs derived from 27 different non-normalized EST libraries. This last component also includes tools for the comparison of tomato and Arabidopsis digital expression data. A set of query interfaces and analysis and visualization tools has been developed and incorporated into TED; these aid users in identifying and deciphering biologically important information from our datasets. TED can be accessed at http://ted.bti.cornell.edu.

  1. Monolithic Cu-Cr-Nb Alloys for High Temperature, High Heat Flux Applications

    NASA Technical Reports Server (NTRS)

    Ellis, David L.; Locci, Ivan E.; Michal, Gary M.; Humphrey, Derek M.

    1999-01-01

    Work during the prior four years of this grant has resulted in significant advances in the development of Cu-8Cr-4Nb and related Cu-Cr-Nb alloys. The alloys are nearing commercial use in the Reusable Launch Vehicle (RLV), where they are candidate materials for the thrust cell liners of the aerospike engines being developed by Rocketdyne. During the fifth and final year of the grant, it is proposed to complete development of the design-level database of mechanical and thermophysical properties and transfer it to NASA Glenn Research Center and Rocketdyne. The database development work will be divided into three main areas: Thermophysical Database Augmentation; Mechanical Testing; and Metallography and Fractography. In addition to the database development, work will continue that is focused on the production of alternatives to the powder-metallurgy alloys currently used. Exploration of alternative alloys will be aimed at the development of both lower-cost materials and higher-performance materials. A key element of this effort will be the use of Thermo-Calc software to survey the solubility behavior of a wide range of alloying elements in a copper matrix. The ultimate goals are to define suitable alloy compositions and processing routes to produce thin sheets of the material at either a lower cost or with improved mechanical and thermal properties compared to the current Cu-Cr-Nb powder-metallurgy alloys.

  2. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    PubMed

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  3. Final Report: Development of a Chemical Model to Predict the Interactions between Supercritical CO2, Fluid and Rock in EGS Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McPherson, Brian J.; Pan, Feng

    2014-09-24

    This report summarizes development of a coupled-process reservoir model for simulating enhanced geothermal systems (EGS) that utilize supercritical carbon dioxide as a working fluid. Specifically, the project team developed an advanced chemical kinetic model for evaluating important processes in EGS reservoirs, such as mineral precipitation and dissolution at elevated temperature and pressure, and for evaluating potential impacts on EGS surface facilities by related chemical processes. We assembled a new database for better-calibrated simulation of water/brine/rock/CO2 interactions in EGS reservoirs. This database utilizes existing kinetic and other chemical data, and we updated those data to reflect corrections for elevated temperature and pressure conditions of EGS reservoirs.

  4. MGIS: managing banana (Musa spp.) genetic resources information and high-throughput genotyping data

    PubMed Central

    Guignon, V.; Sempere, G.; Sardos, J.; Hueber, Y.; Duvergey, H.; Andrieu, A.; Chase, R.; Jenny, C.; Hazekamp, T.; Irish, B.; Jelali, K.; Adeka, J.; Ayala-Silva, T.; Chao, C.P.; Daniells, J.; Dowiya, B.; Effa effa, B.; Gueco, L.; Herradura, L.; Ibobondji, L.; Kempenaers, E.; Kilangi, J.; Muhangi, S.; Ngo Xuan, P.; Paofa, J.; Pavis, C.; Thiemele, D.; Tossou, C.; Sandoval, J.; Sutanto, A.; Vangu Paka, G.; Yi, G.; Van den houwe, I.; Roux, N.

    2017-01-01

    Unraveling the genetic diversity held in genebanks on a large scale is underway, due to advances in next-generation sequencing (NGS)-based technologies that produce high-density genetic markers for a large number of samples at low cost. Genebank users should be in a position to identify and select germplasm from the global genepool based on a combination of passport, genotypic and phenotypic data. To facilitate this, a new generation of information systems is being designed to efficiently handle data and link it with other external resources such as genome or breeding databases. The Musa Germplasm Information System (MGIS), the database for global ex situ-held banana genetic resources, has been developed to address those needs in a user-friendly way. In developing MGIS, we selected a generic database schema (Chado), the robust content management system Drupal for the user interface, and Tripal, a set of Drupal modules that links the Chado schema to Drupal. MGIS allows germplasm collection examination, accession browsing, advanced search functions, and germplasm orders. Additionally, we developed unique graphical interfaces to compare accessions and to explore them based on their taxonomic information. Accession-based data have been enriched with publications, genotyping studies and associated genotyping datasets reporting on germplasm use. Finally, an interoperability layer has been implemented to facilitate the link with complementary databases like the Banana Genome Hub and the MusaBase breeding database. Database URL: https://www.crop-diversity.org/mgis/ PMID:29220435

  5. A Smarter Pathway for Delivering Cue Exposure Therapy? The Design and Development of a Smartphone App Targeting Alcohol Use Disorder.

    PubMed

    Mellentin, Angelina Isabella; Stenager, Elsebeth; Nielsen, Bent; Nielsen, Anette Søgaard; Yu, Fei

    2017-01-30

    Although the number of alcohol-related treatments in app stores is proliferating, none of them are based on a psychological framework and supported by empirical evidence. Cue exposure treatment (CET) with urge-specific coping skills (USCS) is often used in Danish treatment settings. It is an evidence-based psychological approach that focuses on promoting "confrontation with alcohol cues" as a means of reducing urges and the likelihood of relapse. The objective of this study was to describe the design and development of a CET-based smartphone app, an innovative delivery pathway for treating alcohol use disorder (AUD). The treatment is based on Monty and coworkers' manual for CET with USCS (2002). It was created by a multidisciplinary team of psychiatrists, psychologists, programmers, and graphic designers, as well as patients with AUD. A database was developed for the purpose of registering and monitoring training activities. A final version of the CET app and database was developed after several user tests. The final version of the CET app includes an introduction, 4 sessions featuring USCS, 8 alcohol exposure videos promoting the use of one of the USCS, and a results component providing an overview of training activities and potential progress. Real-time urges are measured before, during, and after exposure to alcohol cues and are registered in the app together with other training activity variables. Data packages are continuously sent in encrypted form to an external database and will be merged with other data (in an internal database) in the future. The CET smartphone app is currently being tested in a large-scale randomized controlled trial with the aim of clarifying whether it can be classified as an evidence-based treatment solution. The app has the potential to augment the reach of psychological treatment for AUD. ©Angelina Isabella Mellentin, Elsebeth Stenager, Bent Nielsen, Anette Søgaard Nielsen, Fei Yu. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 30.01.2017.

  6. A Smarter Pathway for Delivering Cue Exposure Therapy? The Design and Development of a Smartphone App Targeting Alcohol Use Disorder

    PubMed Central

    Stenager, Elsebeth; Nielsen, Bent; Nielsen, Anette Søgaard; Yu, Fei

    2017-01-01

    Background: Although the number of alcohol-related treatments in app stores is proliferating, none of them are based on a psychological framework and supported by empirical evidence. Cue exposure treatment (CET) with urge-specific coping skills (USCS) is often used in Danish treatment settings. It is an evidence-based psychological approach that focuses on promoting "confrontation with alcohol cues" as a means of reducing urges and the likelihood of relapse. Objective: The objective of this study was to describe the design and development of a CET-based smartphone app, an innovative delivery pathway for treating alcohol use disorder (AUD). Methods: The treatment is based on Monty and coworkers' manual for CET with USCS (2002). It was created by a multidisciplinary team of psychiatrists, psychologists, programmers, and graphic designers, as well as patients with AUD. A database was developed for the purpose of registering and monitoring training activities. A final version of the CET app and database was developed after several user tests. Results: The final version of the CET app includes an introduction, 4 sessions featuring USCS, 8 alcohol exposure videos promoting the use of one of the USCS, and a results component providing an overview of training activities and potential progress. Real-time urges are measured before, during, and after exposure to alcohol cues and are registered in the app together with other training activity variables. Data packages are continuously sent in encrypted form to an external database and will be merged with other data (in an internal database) in the future. Conclusions: The CET smartphone app is currently being tested in a large-scale randomized controlled trial with the aim of clarifying whether it can be classified as an evidence-based treatment solution. The app has the potential to augment the reach of psychological treatment for AUD. PMID:28137701

  7. RECENT DEVELOPMENTS IN HYDROWEB DATABASE Water level time series on lakes and reservoirs (Invited)

    NASA Astrophysics Data System (ADS)

    Cretaux, J.; Arsen, A.; Calmant, S.

    2013-12-01

    We present the current state of the Hydroweb database as well as developments in progress. It provides offline water-level time series for rivers, reservoirs and lakes based on altimetry data from several satellites (Topex/Poseidon, ERS, Jason-1&2, GFO and ENVISAT). The major developments in Hydroweb concern an operational data centre with automatic acquisition and processing of IGDR data for updating time series in near real time (for both lakes and rivers), and the use of additional remote-sensing data, such as satellite imagery, allowing the calculation of lake surfaces. A lake data centre is under development at LEGOS in coordination with the Hydrolare Project led by SHI (State Hydrological Institute of the Russian Academy of Sciences). It will provide the level-surface-volume variations of about 230 lakes and reservoirs, calculated through a combination of various satellite images (MODIS, ASAR, Landsat, CBERS) and radar altimetry (Topex/Poseidon, Jason-1&2, GFO, Envisat, ERS-2, AltiKa). The final objective is to propose a data centre fully based on remote-sensing techniques and controlled by in situ infrastructure for the Global Terrestrial Network for Lakes (GTN-L) under the supervision of WMO and GCOS. In a longer perspective, the Hydroweb database will integrate data from future missions (Jason-3, Jason-CS, Sentinel-3A/B) and will finally serve in the design of the SWOT mission. The products of Hydroweb will be used as input data for simulation of the SWOT products (water height and surface variations of lakes and rivers). In the future, the SWOT mission will allow monitoring, on a sub-monthly basis, of lakes and reservoirs worldwide larger than 250 m × 250 m, and Hydroweb will host water level and extent products from this mission.

  8. Image superresolution of cytology images using wavelet based patch search

    NASA Astrophysics Data System (ADS)

    Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo

    2015-01-01

    Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high- and low-frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by decomposing the high-resolution images into wavelets and transmitting only the lower-frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet-reconstructed image with equivalent high-resolution patches from a previously acquired image database. Finally, the original transmitted low-frequency coefficients are used to correct the final image. Results show a higher signal-to-noise ratio for the proposed method than for simply discarding high-frequency wavelet coefficients or directly replacing down-sampled patches from the image database.
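
    A minimal sketch of the transmit-only-low-frequencies step, assuming the PyWavelets (pywt) package and a synthetic stand-in image; the patch-search stage of the paper is indicated only as a comment:

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                 # stand-in for a cytology image

    # Sender: 2-level wavelet decomposition; transmit only the approximation band.
    coeffs = pywt.wavedec2(image, "db2", level=2)
    approx, details = coeffs[0], coeffs[1:]

    # Receiver: rebuild with zeroed detail bands, then (per the paper) add detail
    # back by patch search against a local high-resolution image database.
    zeroed = [approx] + [tuple(np.zeros_like(d) for d in level) for level in details]
    reconstructed = pywt.waverec2(zeroed, "db2")[:64, :64]

    ratio = approx.size / image.size             # rough bandwidth saving
    print(f"transmitted fraction of coefficients: {ratio:.2f}")
    ```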

  9. A Self-paced Course in Pharmaceutical Mathematics Using Web-based Databases

    PubMed Central

    Bourne, David W.A.; Davison, A. Machelle

    2006-01-01

    Objective: To transform a pharmaceutical mathematics course to a self-paced instructional format using Web-accessed databases for student practice and examination preparation. Design: The existing pharmaceutical mathematics course was modified from a lecture style with midsemester and final examinations to a self-paced format in which students had multiple opportunities to complete online, nongraded self-assessments as well as in-class module examinations. Assessment: Grades and course evaluations were compared between students taking the class in lecture format with midsemester and final examinations and students taking the class in the self-paced instructional format. The number of times it took students to pass examinations was also analyzed. Conclusions: Based on instructor assessment and student feedback, the course succeeded in giving students who were proficient in pharmaceutical mathematics a chance to progress quickly and students who were less skillful the opportunity to receive instruction at their own pace and develop mathematical competence. PMID:17149445

  10. A self-paced course in pharmaceutical mathematics using web-based databases.

    PubMed

    Bourne, David W A; Davison, A Machelle

    2006-10-15

    To transform a pharmaceutical mathematics course to a self-paced instructional format using Web-accessed databases for student practice and examination preparation. The existing pharmaceutical mathematics course was modified from a lecture style with midsemester and final examinations to a self-paced format in which students had multiple opportunities to complete online, nongraded self-assessments as well as in-class module examinations. Grades and course evaluations were compared between students taking the class in lecture format with midsemester and final examinations and students taking the class in the self-paced instructional format. The number of times it took students to pass examinations was also analyzed. Based on instructor assessment and student feedback, the course succeeded in giving students who were proficient in pharmaceutical mathematics a chance to progress quickly and students who were less skillful the opportunity to receive instruction at their own pace and develop mathematical competence.

  11. Applying cognitive load theory to the redesign of a conventional database systems course

    NASA Astrophysics Data System (ADS)

    Mason, Raina; Seton, Carolyn; Cooper, Graham

    2016-01-01

    Cognitive load theory (CLT) was used to redesign a Database Systems course for Information Technology students. The redesign was intended to address poor student performance and low satisfaction, and to provide a more relevant foundation in database design and use for subsequent studies and industry. The original course followed the conventional structure for a database course, covering database design first, then database development. Analysis showed that the conventional course content was appropriate but that the instructional materials used were too complex, especially for novice students. The redesign of the instructional materials applied CLT to remove split-attention and redundancy effects, to provide suitable worked examples and sub-goals, and included an extensive re-sequencing of content. The approach was primarily directed towards mid- to lower-performing students, and results showed a significant improvement for this cohort, with the exam failure rate falling by 34% after the redesign on identical final exams. Student satisfaction also increased, and feedback from subsequent study was very positive. The application of CLT to the design of instructional materials is discussed for the delivery of technical courses.

  12. Commercial Supersonics Technology Project - Status of Airport Noise

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2016-01-01

    The Commercial Supersonic Technology Project has been developing databases, computational tools, and system models in preparation for a Level 1 milestone, the Low Noise Propulsion Tech Challenge, to be delivered in September 2016. Steps taken to prepare for the final validation test are described, including system analysis, code validation, and risk-reduction testing.

  13. Integrated and Applied Curricula Discussion Group and Data Base Project. Final Report.

    ERIC Educational Resources Information Center

    Wisconsin Univ. - Stout, Menomonie. Center for Vocational, Technical and Adult Education.

    A project was conducted to compile integrated and applied curriculum resources, develop databases on the World Wide Web, and encourage networking for high school and technical college educators through an Internet discussion group. Activities conducted during the project include the creation of a web page to guide users to resource banks…

  14. The Matching of Educational and Occupational Structures in Finland and Sweden. Final Report. CEDEFOP Dossier.

    ERIC Educational Resources Information Center

    Ahola, Sakari

    This report studies the matching of educational and occupational structures in Sweden and Finland by using classifications that include all educational and occupational groups. By using comprehensive databases available in Finland and Sweden, it aims to develop the methodological and theoretical perspectives of the research on education and…

  15. A Structural Equation Model for Predicting Business Student Performance

    ERIC Educational Resources Information Center

    Pomykalski, James J.; Dion, Paul; Brock, James L.

    2008-01-01

    In this study, the authors developed a structural equation model that accounted for 79% of the variability of a student's final grade point average by using a sample size of 147 students. The model is based on student grades in 4 foundational business courses: introduction to business, macroeconomics, statistics, and using databases. Educators and…

  16. The design of moral education website for college students based on ASP.NET

    NASA Astrophysics Data System (ADS)

    Sui, Chunling; Du, Ruiqing

    2012-01-01

    A moral education website offers a solution to the low transmission speed and limited reach of traditional moral education. The aim of this paper is to illustrate the design of one moral education website and the advantages of using it to support moral teaching. The rationale for a moral education website is discussed at the beginning of the paper, and the development tools are introduced. The system design is illustrated through its module design and database design, and how to access data in the SQL Server database is discussed in detail. Finally, a conclusion is drawn from the discussions in the paper.

  17. Research Update: The materials genome initiative: Data sharing and the impact of collaborative ab initio databases

    DOE PAGES

    Jain, Anubhav; Persson, Kristin A.; Ceder, Gerbrand

    2016-03-24

    Materials innovations enable new technological capabilities and drive major societal advancements but have historically required long and costly development cycles. The Materials Genome Initiative (MGI) aims to greatly reduce this time and cost. Here, we focus on data reuse in the MGI and, in particular, discuss the impact on the research community of three different computational databases based on density-functional-theory methods. Finally, we discuss and provide recommendations on technical aspects of data reuse, outline remaining fundamental challenges, and present an outlook on the future of the MGI's vision of data sharing.

  18. The test chemical selection procedure of the European Centre for the Validation of Alternative Methods for the EU Project ReProTect.

    PubMed

    Pazos, Patricia; Pellizzer, Cristian; Stummann, Tina C; Hareng, Lars; Bremer, Susanne

    2010-08-01

    The selection of reference compounds is crucial for successful in vitro test development, in order to prove the relevance of the test system. This publication describes the criteria and the selection strategy leading to a list of more than 130 chemicals suitable for test development within the ReProTect project. The chemical inventory presented here aimed to support the development and optimization of in vitro tests that seek to fulfill ECVAM's criteria for entering prevalidation. In order to select appropriate substances, a primary database was established compiling information from existing databases. In a second step, predefined selection criteria were applied to obtain a comprehensive list ready to undergo a peer-review process by independent experts with industrial, academic and regulatory backgrounds. The final peer-reviewed chemical list contains 13 substances for challenging endocrine-disrupter tests, a further 50 substances serving as reference chemicals for various tests evaluating effects on male and female fertility, and 61 substances known to provoke effects on the early development of mammalian offspring. The list aims to cover relevant and specific modes/sites of action known to be relevant for various substance classes. However, the recommended list should not be interpreted as a list of reproductive toxicants, because such a description requires proven associations with adverse effects on mammalian reproduction, which are the subject of regulatory decisions by the competent authorities involved. Copyright 2010 Elsevier Inc. All rights reserved.

  19. Integrated Electronic Health Record Database Management System: A Proposal.

    PubMed

    Schiza, Eirini C; Panos, George; David, Christiana; Petkov, Nicolai; Schizas, Christos N

    2015-01-01

    eHealth has attained significant importance as a new mechanism for health management and medical practice. However, the technological growth of eHealth is still limited by the technical expertise needed to develop appropriate products. Researchers are constantly developing and testing new software for building and handling clinical medical records, now renamed Electronic Health Record (EHR) systems; EHRs take full advantage of technological developments and at the same time provide increased diagnostic and treatment capabilities to doctors. A step to be considered in facilitating this aim is to involve the doctor more actively in building the fundamental steps for creating the EHR system and database. A global clinical patient-record database management system can be created electronically by simulating real-life medical record taking and by utilizing and analyzing the recorded parameters. This proposed approach demonstrates the effective implementation of a universal classic medical record in electronic form, a procedure by which clinicians are led to utilize algorithms and intelligent systems for their differential diagnosis, final diagnosis, and treatment strategies.

  20. Challenges to the Standardization of Burn Data Collection: A Call for Common Data Elements for Burn Care.

    PubMed

    Schneider, Jeffrey C; Chen, Liang; Simko, Laura C; Warren, Katherine N; Nguyen, Brian Phu; Thorpe, Catherine R; Jeng, James C; Hickerson, William L; Kazis, Lewis E; Ryan, Colleen M

    2018-02-20

    The use of common data elements (CDEs) is growing in medical research; CDEs have demonstrated benefit in maximizing the impact of existing research infrastructure and funding. However, the field of burn care does not have a standard set of CDEs. The objective of this study is to examine the extent of common data collected in current burn databases. This study examines the data dictionaries of six U.S. burn databases to ascertain the extent of common data, assessed from both a quantitative and a qualitative perspective. Thirty-two demographic and clinical data elements were examined. The number of databases that collect each data element was calculated. The data values for each data element were compared across the six databases for common terminology. Finally, the data prompts of the data elements were examined for common language and structure. Five (16%) of the 32 data elements are collected by all six burn databases; additionally, five data elements (16%) are present in only one database. Furthermore, there are considerable variations in the data values and prompts used among the burn databases. Only one of the 32 data elements (age) contains the same data values across all databases. The burn databases examined show minimal evidence of common data. There is a need to develop CDEs and standardized coding to enhance the interoperability of burn databases.
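
    As a minimal illustration of the coverage calculation described above (counting how many databases collect each data element), the following Python sketch uses hypothetical element and database names, not the actual six burn data dictionaries:

    ```python
    # Sketch of the element-coverage count; rows are data elements,
    # columns are databases, True = the database collects that element.
    # (Hypothetical names and values, not the study's data dictionaries.)
    import pandas as pd

    coverage = pd.DataFrame(
        {"db1": [True, True, False],
         "db2": [True, False, False],
         "db3": [True, True, True]},
        index=["age", "TBSA", "inhalation injury"],
    )
    counts = coverage.sum(axis=1)  # databases collecting each element
    print(counts)
    print("collected by all:", (counts == coverage.shape[1]).sum())
    ```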

  1. A Solution on Identification and Rearing Files Insmallhold Pig Farming

    NASA Astrophysics Data System (ADS)

    Xiong, Benhai; Fu, Runting; Lin, Zhaohui; Luo, Qingyao; Yang, Liang

    In order to meet government supervision of pork production safety as well as consumers' right to know what they buy, this study adopts animal identification, mobile PDA readers, GPRS and other information technologies; puts forward a data collection method for setting up rearing files for pigs in smallholder pig farming; designs the related metadata structures and a mobile database; develops an embedded mobile PDA system to collect individual pig information and upload it to a remote central database; and finally realizes mobile links to a specific website. The embedded PDA reader can identify both the special pig barcode ear tag appointed by the Ministry of Agriculture and a general Data Matrix barcode ear tag designed in this study, and can record all kinds of input data, including bacterins, feed additives, animal drugs and even some forbidden medicines, and submit them to the central database through GPRS. At the same time, the remote central database can be maintained by mobile PDA over GPRS, finally achieving pork tracking from origin to consumption and tracing back along the turnover chain. This study suggests a feasible technical solution for setting up networked electronic pig rearing files in farmer-based smallholder pig farming, and the solution has proved practical through its application in the construction of Tianjin's pork quality traceability system. Although some individual techniques, such as current GPRS transmission speeds, have some adverse effects on system performance, these will be resolved with the development of communication technology. The full implementation of the solution around China will provide technical support for supervising the quality and safety of pork production and for meeting consumer demand.

  2. Evaluation of SHEEO's State Policy Resource Connections (SPRC) Initiative. Final Report

    ERIC Educational Resources Information Center

    Ryherd, Ann Daley

    2011-01-01

    With the assistance of the Lumina Foundation, the State Higher Education Executive Officers (SHEEO) staff has been working to develop a broad, up-to-date database of policy relevant information for the states and to create analytical studies to help state leaders identify priorities and practices for improving policies and performance across the…

  3. Development of the framework for a water quality monitoring system : controlling MoDOT's contribution to 303(d) listed streams in the state of Missouri, final report, February 2010.

    DOT National Transportation Integrated Search

    2010-02-01

    By utilizing ArcGIS to quickly visualize the location of any impaired waterbody in relation to its projects/activities, MoDOT will : be able to allocate resources optimally. Additionally, the Water Quality Impact Database (WQID) will allow easy trans...

  4. Report on Legal Protection for Databases. A Report of the Register of Copyrights. August, 1997.

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. Copyright Office.

    This report gives an overview of the past and present domestic and international legal framework for database protection. It describes database industry practices in securing protection against unauthorized use and Copyright Office registration practices relating to databases. Finally, it discusses issues raised and concerns expressed in a series…

  5. New Directions in Library and Information Science Education. Final Report. Volume 2.6: Database Distributor/Service Professional Competencies.

    ERIC Educational Resources Information Center

    Griffiths, Jose-Marie; And Others

    This document contains validated activities and competencies needed by librarians working in a database distributor/service organization. The activities of professionals working in database distributor/service organizations are listed by function: Database Processing; Customer Support; System Administration; and Planning. The competencies are…

  6. Development and implementation of a custom integrated database with dashboards to assist with hematopathology specimen triage and traffic

    PubMed Central

    Azzato, Elizabeth M.; Morrissette, Jennifer J. D.; Halbiger, Regina D.; Bagg, Adam; Daber, Robert D.

    2014-01-01

    Background: At some institutions, including ours, bone marrow aspirate specimen triage is complex, with hematopathology triage decisions that need to be communicated to downstream ancillary testing laboratories and many specimen aliquot transfers that are handled outside of the laboratory information system (LIS). We developed a custom integrated database with dashboards to facilitate and streamline this workflow. Methods: We developed user-specific dashboards that allow entry of specimen information by technologists in the hematology laboratory, have custom scripting to present relevant information for the hematopathology service and ancillary laboratories and allow communication of triage decisions from the hematopathology service to other laboratories. These dashboards are web-accessible on the local intranet and accessible from behind the hospital firewall on a computer or tablet. Secure user access and group rights ensure that relevant users can edit or access appropriate records. Results: After database and dashboard design, two-stage beta-testing and user education were performed, with the first stage focusing on technologist specimen entry and the second on downstream users. Commonly encountered issues and user functionality requests were resolved with database and dashboard redesign. Final implementation occurred within 6 months of initial design; users report improved triage efficiency and a reduced need for interlaboratory communications. Conclusions: We successfully developed and implemented a custom database with dashboards that facilitates and streamlines our hematopathology bone marrow aspirate triage. This provides an example of a possible solution to specimen communications and traffic that are outside the purview of a standard LIS. PMID:25250187

  7. Proteomics data exchange and storage: the need for common standards and public repositories.

    PubMed

    Jiménez, Rafael C; Vizcaíno, Juan Antonio

    2013-01-01

    The existence of both data standards and public databases or repositories has been a key factor behind the development of the existing "omics" approaches. In this book chapter we first review the main existing mass spectrometry (MS)-based proteomics resources: PRIDE, PeptideAtlas, GPMDB, and Tranche. Second, we report on the current status of the different proteomics data standards developed by the Proteomics Standards Initiative (PSI); the formats mzML, mzIdentML, mzQuantML, TraML, and PSI-MI XML are reviewed. Finally, we present an easy way to query and access MS proteomics data in the PRIDE database, as a representative of the existing repositories, using the workflow management system (WMS) tool Taverna. Two different publicly available workflows are explained and described.

  8. Reviving a medical wearable computer for teaching purposes.

    PubMed

    Frenger, Paul

    2014-01-01

    In 1978 the author constructed a medical wearable computer using an early CMOS microprocessor and support chips. This device was targeted for use by health-conscious consumers and other early adopters. Its expandable functions included weight management, blood pressure control, diabetes care, medication reminders, smoking cessation, pediatric growth and development, a simple medical database, digital communication with a doctor's office and an emergency alert system. Various physiological sensors could be plugged into the calculator-sized chassis. The device was shown to investor groups but funding was not obtained; by 1992 the author had ceased pursuing it. The Computing and Mathematics Chair at a local university, a NASA acquaintance, approached the author to mentor a CS capstone course for Summer 2012. With the author's guidance, five students proceeded to convert this medical wearable computer design to an iPhone-based implementation using the Apple Xcode developer kit and other utilities. The final student device contained a body mass index (BMI) calculator, an emergency alert for 911 or other first responders, a medication reminder, a doctor's appointment feature, a medical database, medical Internet links, and a pediatric growth and development guide. The students' final implementation was successfully demonstrated on an actual iPhone 4 at the CS capstone meeting in mid-Summer.

  9. Query by forms: User-oriented relational database retrieving system and its application in analysis of experiment data

    NASA Astrophysics Data System (ADS)

    Skotniczny, Zbigniew

    1989-12-01

    The Query by Forms (QbF) system is a user-oriented interactive tool for querying large relational databases with minimal query-definition cost. The system was worked out under the assumption that the user's time and effort in defining needed queries is the most severe bottleneck. The system may be applied to any Rdb/VMS database system and is recommended for the specific information systems of any project where end-user queries cannot be foreseen. The tool is dedicated to specialists in an application domain who have to analyze data maintained in a database from any needed point of view and who do not need to know commercial database languages. The paper presents the system, developed as a compromise between functionality and usability. User-system communication via a menu-driven, tree-like structure of screen forms, which produces a query definition and execution, is discussed in detail. Output of query results (printed reports and graphics) is also discussed. Finally, the paper shows one application of QbF to the HERA project.

  10. The Latin American Social Medicine database

    PubMed Central

    Eldredge, Jonathan D; Waitzkin, Howard; Buchanan, Holly S; Teal, Janis; Iriart, Celia; Wiley, Kevin; Tregear, Jonathan

    2004-01-01

    Background Public health practitioners and researchers for many years have been attempting to understand more clearly the links between social conditions and the health of populations. Until recently, most public health professionals in English-speaking countries were unaware that their colleagues in Latin America had developed an entire field of inquiry and practice devoted to making these links more clearly understood. The Latin American Social Medicine (LASM) database finally bridges this previous gap. Description This public health informatics case study describes the key features of a unique information resource intended to improve access to LASM literature and to augment understanding about the social determinants of health. This case study includes both quantitative and qualitative evaluation data. Currently the LASM database at The University of New Mexico brings important information, originally known mostly within professional networks located in Latin American countries to public health professionals worldwide via the Internet. The LASM database uses Spanish, Portuguese, and English language trilingual, structured abstracts to summarize classic and contemporary works. Conclusion This database provides helpful information for public health professionals on the social determinants of health and expands access to LASM. PMID:15627401

  11. Food Composition Database Format and Structure: A User Focused Approach

    PubMed Central

    Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine

    2015-01-01

    This study aimed to investigate the needs of Australian food composition database users regarding database format, and to relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. Twenty-four dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources. Further exploration of data-sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered. PMID:26554836

  12. Space transfer vehicle concepts and requirements study, phase 2

    NASA Technical Reports Server (NTRS)

    Cannon, Jeffrey H.; Vinopal, Tim; Andrews, Dana; Richards, Bill; Weber, Gary; Paddock, Greg; Maricich, Peter; Bouton, Bruce; Hagen, Jim; Kolesar, Richard

    1992-01-01

    This final report is a compilation of the Phase 1 and Phase 2 study findings and is intended as a Space Transfer Vehicle (STV) 'users guide' rather than an exhaustive explanation of STV design details. It provides a database for design choices in the general areas of basing, reusability, propulsion, and staging; with selection criteria based on cost, performance, available infrastructure, risk, and technology. The report is organized into the following three parts: (1) design guide; (2) STV Phase 1 Concepts and Requirements Study Summary; and (3) STV Phase 2 Concepts and Requirements Study Summary. The overall objectives of the STV study were to: (1) define preferred STV concepts capable of accommodating future exploration missions in a cost-effective manner; (2) determine the level of technology development required to perform these missions in the most cost effective manner; and (3) develop a decision database of programmatic approaches for the development of an STV concept.

  13. Creating a literature database of low-calorie sweeteners and health studies: evidence mapping.

    PubMed

    Wang, Ding Ding; Shams-White, Marissa; Bright, Oliver John M; Parrott, J Scott; Chung, Mei

    2016-01-05

    Evidence mapping is an emerging tool used to systematically identify, organize and summarize the quantity and focus of scientific evidence on a broad topic, but there are currently no methodological standards. Using the topic of low-calorie sweeteners (LCS) and selected health outcomes, we describe the process of creating an evidence-map database and demonstrate several example descriptive analyses using this database. The process of creating an evidence-map database is described in detail. The steps include: developing a comprehensive literature search strategy, establishing study eligibility criteria and a systematic study selection process, extracting data, developing outcome groups with input from expert stakeholders and tabulating data using descriptive analyses. The database was uploaded onto SRDR™ (Systematic Review Data Repository), an open public data repository. Our final LCS evidence-map database included 225 studies, of which 208 were interventional studies and 17 were cohort studies. An example bubble plot was produced to display the evidence-map data and visualize research gaps according to four parameters: comparison types, population baseline health status, outcome groups, and study sample size. This plot indicated a lack of studies assessing appetite and dietary intake related outcomes using LCS with a sugar intake comparison in people with diabetes. Evidence mapping is an important tool for the contextualization of in-depth systematic reviews within broader literature and identifies gaps in the evidence base, which can be used to inform future research. An open evidence-map database has the potential to promote knowledge translation from nutrition science to policy.
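
    The bubble plot described above can be reproduced in outline with matplotlib; the sketch below uses hypothetical column names and made-up counts, not the actual SRDR evidence-map data:

    ```python
    # Sketch of an evidence-map bubble plot: one bubble per
    # comparison x outcome cell, area proportional to pooled sample
    # size, labeled with the study count so sparse cells (research
    # gaps) stand out. (Illustrative data only.)
    import matplotlib.pyplot as plt
    import pandas as pd

    df = pd.DataFrame({
        "comparison": ["LCS vs sugar", "LCS vs sugar",
                       "LCS vs placebo", "LCS vs placebo"],
        "outcome":    ["appetite", "body weight",
                       "appetite", "body weight"],
        "n_studies":  [2, 14, 6, 20],
        "total_n":    [150, 3200, 480, 5100],
    })

    x = df["comparison"].astype("category").cat.codes
    y = df["outcome"].astype("category").cat.codes

    fig, ax = plt.subplots()
    ax.scatter(x, y, s=df["total_n"] / 10, alpha=0.5)
    for xi, yi, n in zip(x, y, df["n_studies"]):
        ax.annotate(str(n), (xi, yi), ha="center", va="center")
    # cat.codes follow sorted category order, so sorted labels align
    ax.set_xticks(sorted(set(x)), sorted(df["comparison"].unique()))
    ax.set_yticks(sorted(set(y)), sorted(df["outcome"].unique()))
    plt.show()
    ```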

  14. Hydroacoustic propagation grids for the CTBT knowledge database: BBN technical memorandum W1303

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Angell

    1998-05-01

    The Hydroacoustic Coverage Assessment Model (HydroCAM) has been used to develop components of the hydroacoustic knowledge database required by operational monitoring systems, particularly the US National Data Center (NDC). The database, which consists of travel time, amplitude correction and travel time standard deviation grids, is planned to support source location, discrimination and estimation functions of the monitoring network. The grids will also be used under the current BBN subcontract to support an analysis of the performance of the International Monitoring System (IMS) and national sensor systems. This report describes the format and contents of the hydroacoustic knowledge base grids, and the procedures and model parameters used to generate these grids. Comparisons between the knowledge grids, measured data and other modeled results are presented to illustrate the strengths and weaknesses of the current approach. A recommended approach for augmenting the knowledge database with a database of expected spectral/waveform characteristics is provided in the final section of the report.

  15. Chemical Space: Big Data Challenge for Molecular Diversity.

    PubMed

    Awale, Mahendra; Visini, Ricardo; Probst, Daniel; Arús-Pous, Josep; Reymond, Jean-Louis

    2017-10-25

    Chemical space describes all possible molecules as well as multi-dimensional conceptual spaces representing the structural diversity of these molecules. Part of this chemical space is available in public databases ranging from thousands to billions of compounds. Exploiting these databases for drug discovery represents a typical big data problem limited by computational power, data storage and data access capacity. Here we review recent developments of our laboratory, including progress in the chemical universe databases (GDB) and the fragment subset FDB-17, tools for ligand-based virtual screening by nearest neighbor searches, such as our multi-fingerprint browser for the ZINC database to select purchasable screening compounds, and their application to discover potent and selective inhibitors for calcium channel TRPV6 and Aurora A kinase, the polypharmacology browser (PPB) for predicting off-target effects, and finally interactive 3D-chemical space visualization using our online tools WebDrugCS and WebMolCS. All resources described in this paper are available for public use at www.gdb.unibe.ch.
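
    The nearest-neighbor ligand searches mentioned above are typically built on molecular fingerprints compared by Tanimoto similarity. A minimal sketch of that general technique with RDKit follows (the SMILES strings are arbitrary examples, not GDB or ZINC entries):

    ```python
    # Fingerprint-based nearest-neighbor search: rank a small library
    # by Tanimoto similarity to a query molecule. (Illustrative SMILES.)
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    library = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
    query = "CC(=O)Nc1ccccc1"

    def fp(smiles):
        mol = Chem.MolFromSmiles(smiles)
        # 2048-bit Morgan (ECFP4-like) fingerprint, radius 2
        return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

    qfp = fp(query)
    ranked = sorted(
        ((DataStructs.TanimotoSimilarity(qfp, fp(s)), s) for s in library),
        reverse=True,
    )
    for sim, smi in ranked:
        print(f"{sim:.3f}  {smi}")
    ```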

  16. Assessment of CFD-based Response Surface Model for Ares I Supersonic Ascent Aerodynamics

    NASA Technical Reports Server (NTRS)

    Hanke, Jeremy L.

    2011-01-01

    The Ascent Force and Moment Aerodynamic (AFMA) Databases (DBs) for the Ares I Crew Launch Vehicle (CLV) were typically based on wind tunnel (WT) data, with increments provided by computational fluid dynamics (CFD) simulations for aspects of the vehicle that could not be tested in the WT tests. During the Design Analysis Cycle 3 analysis for the outer mold line (OML) geometry designated A106, a major tunnel mishap delayed the WT test for supersonic Mach numbers (M) greater than 1.6 in the Unitary Plan Wind Tunnel at NASA Langley Research Center, and the test delay pushed the final delivery of the A106 AFMA DB back by several months. The aero team developed an interim database based entirely on the already completed CFD simulations to mitigate the impact of the delay. This CFD-based database used a response surface methodology based on radial basis functions to predict the aerodynamic coefficients for M > 1.6 based on only the CFD data from both WT and flight Reynolds number conditions. The aero team used extensive knowledge of the previous AFMA DB for the A103 OML to guide the development of the CFD-based A106 AFMA DB. This report details the development of the CFD-based A106 Supersonic AFMA DB, constructs a prediction of the database uncertainty using data available at the time of development, and assesses the overall quality of the CFD-based DB both qualitatively and quantitatively. This assessment confirms that a reasonable aerodynamic database can be constructed for launch vehicles at supersonic conditions using only CFD data if sufficient knowledge of the physics and expected behavior is available. This report also demonstrates the applicability of non-parametric response surface modeling using radial basis functions for development of aerodynamic databases that exhibit both linear and non-linear behavior throughout a large data space.
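
    A minimal sketch of non-parametric response-surface modeling with radial basis functions, as implemented in SciPy; the sampled function below is a stand-in for CFD results, not the Ares I data:

    ```python
    # Fit an RBF response surface to scattered samples in a 2-D
    # parameter space (e.g., Mach, alpha), then evaluate it on a grid
    # as a database would be populated. (Stand-in function, not CFD.)
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    X = rng.uniform([1.6, -8.0], [4.5, 8.0], size=(200, 2))
    # Stand-in "aerodynamic coefficient" with linear + nonlinear parts
    y = 0.3 * X[:, 0] + 0.05 * X[:, 1] ** 2 + np.sin(X[:, 0] * X[:, 1] / 4)

    surface = RBFInterpolator(X, y, kernel="thin_plate_spline",
                              smoothing=1e-6)

    # Predict the coefficient on a regular grid of conditions
    grid = np.mgrid[1.6:4.5:50j, -8.0:8.0:50j].reshape(2, -1).T
    pred = surface(grid)
    print(pred.shape)  # (2500,)
    ```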

  17. The Impact of Environment and Occupation on the Health and Safety of Active Duty Air Force Members: Database Development and De-Identification.

    PubMed

    Erich, Roger; Eaton, Melinda; Mayes, Ryan; Pierce, Lamar; Knight, Andrew; Genovesi, Paul; Escobar, James; Mychalczuk, George; Selent, Monica

    2016-08-01

    Preparing data for medical research can be challenging, detail oriented, and time consuming. Transcription errors, missing or nonsensical data, and records not applicable to the study population may hamper progress and, if unaddressed, can lead to erroneous conclusions. In addition, study data may be housed in multiple disparate databases and complex formats, and merging methods may be insufficient to obtain temporally synchronized data elements. We created a comprehensive database to explore the general hypothesis that environmental and occupational factors influence health outcomes and risk-taking behavior among active duty Air Force personnel. Several databases containing demographics, medical records, health survey responses, and safety incident reports were cleaned, validated, and linked to form a comprehensive, relational database. The final step involved removing and transforming personally identifiable information to form a Health Insurance Portability and Accountability Act-compliant limited database. The initial data consisted of over 62.8 million records containing 221 variables. When completed, approximately 23.9 million clean and valid records with 214 variables remained. With a clean, robust database, future analysis aims to identify high-risk career fields for targeted interventions or uncover potential protective factors in low-risk career fields. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
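
    One common pattern for the de-identification step described above is to drop direct identifiers and replace them with a keyed one-way hash so records remain linkable across tables. The sketch below assumes hypothetical column names, not the actual Air Force database schema:

    ```python
    # De-identification sketch: pseudonymize the linkage key, coarsen
    # dates, and drop direct identifiers. (Hypothetical schema/values.)
    import hashlib
    import pandas as pd

    SECRET_SALT = b"study-specific-secret"  # stored apart from the data

    def pseudonymize(ssn: str) -> str:
        # Keyed one-way hash: stable for linkage, not reversible in practice
        return hashlib.sha256(SECRET_SALT + ssn.encode()).hexdigest()[:16]

    records = pd.DataFrame({
        "ssn":       ["123-45-6789", "987-65-4321"],
        "name":      ["A. Airman", "B. Airman"],
        "dob":       pd.to_datetime(["1985-03-07", "1990-11-21"]),
        "diagnosis": ["J06.9", "S93.4"],
    })

    records["subject_id"] = records["ssn"].map(pseudonymize)
    records["birth_year"] = records["dob"].dt.year  # coarsen the date
    limited = records.drop(columns=["ssn", "name", "dob"])
    print(limited)
    ```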

  18. Development and Validation of a Social Capital Questionnaire for Adolescent Students (SCQ-AS)

    PubMed Central

    Paiva, Paula Cristina Pelli; de Paiva, Haroldo Neves; de Oliveira Filho, Paulo Messias; Lamounier, Joel Alves; Ferreira, Efigênia Ferreira e; Ferreira, Raquel Conceição; Kawachi, Ichiro; Zarzar, Patrícia Maria

    2014-01-01

    Objectives Social capital has been studied due to its contextual influence on health. However, no specific assessment tool has been developed and validated for the measurement of social capital among 12-year-old adolescent students. The aim of the present study was to develop and validate a quick, simple assessment tool to measure social capital among adolescent students. Methods A questionnaire was developed based on a review of relevant literature. For such, searches were made of the Scientific Electronic Library Online, Latin American and Caribbean Health Sciences, The Cochrane Library, ISI Web of Knowledge, International Database for Medical Literature and PubMed Central bibliographical databases from September 2011 to January 2014 for papers addressing assessment tools for the evaluation of social capital. Focus groups were also formed by adolescent students as well as health, educational and social professionals. The final assessment tool was administered to a convenience sample from two public schools (79 students) and one private school (22 students), comprising a final sample of 101 students. Reliability and internal consistency were evaluated using the Kappa coefficient and Cronbach's alpha coefficient, respectively. Content validity was determined by expert consensus as well as exploratory and confirmatory factor analysis. Results The final version of the questionnaire was made up of 12 items. The total scale demonstrated very good internal consistency (Cronbach's alpha: 0.71). Reproducibility was also very good, as the Kappa coefficient was higher than 0.72 for the majority of items (range: 0.63 to 0.97). Factor analysis grouped the 12 items into four subscales: School Social Cohesion, School Friendships, Neighborhood Social Cohesion and Trust (school and neighborhood). Conclusions The present findings indicate the validity and reliability of the Social Capital Questionnaire for Adolescent Students. PMID:25093409
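
    Cronbach's alpha, the internal-consistency statistic reported above, is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal computation on made-up responses (not the SCQ-AS data):

    ```python
    # Cronbach's alpha from a respondents x items score matrix.
    # (Simulated responses with a shared latent factor, for illustration.)
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(101, 1))       # shared "social capital" factor
    responses = latent + rng.normal(scale=1.0, size=(101, 12))
    print(round(cronbach_alpha(responses), 2))
    ```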

  19. Analysing inter-relationships among water, governance, human development variables in developing countries: WatSan4Dev database coherency analysis

    NASA Astrophysics Data System (ADS)

    Dondeynaz, C.; Carmona Moreno, C.; Céspedes Lorente, J. J.

    2012-01-01

    The "Integrated Water Resources Management" principle was formally laid down at the International Conference on Water and Sustainable development in Dublin 1992. One of the main results of this conference is that improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, passes through collaboration and coordination of different sectors (environment, health, economic activities, governance, and international cooperation). These sectors influence or are influenced by the access to WSS. The understanding of these interrelations appears as crucial for decision makers in the water sector. In this framework, the Joint Research Centre (JRC) of the European Commission (EC) has developed a new database (WatSan4Dev database) containing 45 indicators (called variables in this paper) from environmental, socio-economic, governance and financial aid flows data in developing countries. This paper describes the development of the WatSan4Dev dataset, the statistical processes needed to improve the data quality; and, finally, the analysis to verify the database coherence is presented. At the light of the first analysis, WatSan4Dev Dataset shows the coherency among the different variables that are confirmed by the direct field experience and/or the scientific literature in the domain. Preliminary analysis of the relationships indicates that the informal urbanisation development is an important factor influencing negatively the percentage of the population having access to WSS. Health, and in particular children health, benefits from the improvement of WSS. Efficient environmental governance is also an important factor for providing improved water supply services. The database would be at the base of posterior analyses to better understand the interrelationships between the different indicators associated in the water sector in developing countries. A data model using the different indicators will be realised on the next phase of this research work.

  20. Defense and Development in Sub-Saharan Africa: Codebook.

    DTIC Science & Technology

    1988-03-01

    The codebook describes the database by presenting the different data sources and explaining how they were compiled. The statistics in the database cover 41 African countries. Finally, in addition to the economic and military data, some statistics have been compiled that monitor social and political conditions; sources and notes on the collection of the data are also provided.

  1. Final Report from the External Peer Review of the IRIS ...

    EPA Pesticide Factsheets

    This document is the final report for the 2004 external peer review of the EPA IRIS Reassessment of the Inhalation Carcinogenicity of Naphthalene, prepared by the Office of Research and Development's National Center for Environmental Assessment (NCEA) for the Integrated Risk Information System (IRIS) database. A panel of external peer reviewers met to discuss the IRIS report and their responses to the charge questions on July 30, 2004. This document contains the final written comments of the external peer reviewers.

  2. Thermal and Chemical Characterization of Composite Materials. MSFC Center Director's Discretionary Fund Final Report, Project No. ED36-18

    NASA Technical Reports Server (NTRS)

    Stanley, D. C.; Huff, T. L.

    2003-01-01

    The purpose of this research effort was to: (1) provide a concise and well-defined property profile of current and developing composite materials using thermal and chemical characterization techniques and (2) optimize the analytical testing requirements of materials. This effort applied a diverse array of methodologies to ascertain composite material properties. Often, a single method or technique will provide useful, but nonetheless incomplete, information on material composition and/or behavior. To more completely understand and predict material properties, a broad-based analytical approach is required. By developing a database of information comprising both thermal and chemical properties, material behavior under varying conditions may be better understood. This is even more important in the aerospace community, where new composite materials and those in the development stage have little reference data. For example, Fourier transform infrared (FTIR) spectroscopy spectral databases available for identification of vapor phase spectra, such as those generated during experiments, generally refer to well-defined chemical compounds. Because this method renders a unique thermal decomposition spectral pattern, even larger, more diverse databases, such as those found in solid and liquid phase FTIR spectroscopy libraries, cannot be used. By combining this and other available methodologies, a database specifically for new materials and materials being developed at Marshall Space Flight Center can be generated. In addition, characterizing materials using this approach will be extremely useful in the verification of materials and the identification of anomalies in NASA-wide investigations.

  3. The FREGAT biobank: a clinico-biological database dedicated to esophageal and gastric cancers.

    PubMed

    Mariette, Christophe; Renaud, Florence; Piessen, Guillaume; Gele, Patrick; Copin, Marie-Christine; Leteurtre, Emmanuelle; Delaeter, Christine; Dib, Malek; Clisant, Stéphanie; Harter, Valentin; Bonnetain, Franck; Duhamel, Alain; Christophe, Véronique; Adenis, Antoine

    2018-02-06

    While the incidence of esophageal and gastric cancers is increasing, the prognosis of these cancers remains bleak. Endoscopy and surgery are the standard treatments for localized tumors, but multimodal treatments combining chemotherapy, targeted therapies, immunotherapy, radiotherapy, and surgery are needed for the vast majority of patients, who present with locally advanced or metastatic disease at diagnosis. Although survival has improved, most patients still exhibit a poor or incomplete response to treatment, experience early recurrence and have an impaired quality of life. Compared with several other cancers, the therapeutic approach is not personalized, and research is much less developed. It is, therefore, urgent to hasten the development of research protocols and, consequently, to develop a large, ambitious and innovative tool through which future scientific questions may be answered. This research must be patient-related so that rapid feedback to the bedside is achieved, and it should aim to identify clinical, biological and tumor-related factors that are associated with treatment resistance. Finally, this research should also seek to explain epidemiological and social facets of disease behavior. The prospective FREGAT database, established by the French National Cancer Institute, is focused on adult patients with carcinomas of the esophagus and stomach, whatever the tumor stage or therapeutic strategy. The database includes epidemiological, clinical, and tumor characteristics data as well as follow-up, human and social sciences, and quality of life data, along with a tumor and serum bank. This innovative method of research will allow for the banking of millions of data points for the development of excellent basic, translational and clinical research programs for esophageal and gastric cancer. This will ultimately improve general knowledge of these diseases, therapeutic strategies and patient survival. The database was initially developed in France on a nationwide basis, but currently it is available for worldwide contributions with respect to the input of patient data or requests for data for scientific projects. The FREGAT database has a dedicated website (www.fregat-database.org) and has been registered on the ClinicalTrials.gov site, number NCT02526095, since August 8, 2015.

  4. Investigations into mirror fabrication metrology analysis

    NASA Technical Reports Server (NTRS)

    Dimmock, John O.

    1994-01-01

    This final report describes the work performed under this delivery order from June 1993 through August 1994. The scope of work included three distinct tasks in support of the AXAF-I program. The objective of the first task was to investigate the grinding and polishing characteristics of the Zerodur material by fabricating several samples. The second task was to continue the development of the integrated optical performance modeling software for AXAF-I. The purpose of the third and final task was to develop and update a database of AXAF technical documents for easy and rapid access. The MSFC optical and metrology shops were relocated from the B-wing of Building 4487 to Room BC 144 of Building 4466 at the beginning of this contract. This included dismantling, packing, and moving the equipment from its old location, and then reassembling it at the new location. A total of 65 Zerodur samples measuring 1 inch x 2 inches x 6 inches were ground and polished to a surface figure of lambda/10 p-v and a surface finish of 5 angstroms rms for coating tests. A number of special-purpose tools and metal mirrors were also fabricated to support various AXAF-I development activities. In the metrology area, the ZYGO Mark 4 interferometer was relocated and also upgraded with a faster and more powerful processor. Surface metrology work was also performed on the coating samples and other optics using the ZYGO interferometer and WYKO profilometer. A number of new features have been added to the GRAZTRACE program to enhance its analysis and modeling capabilities. New commands have been added to the command-mode GRAZTRACE program to give the user better control over program execution and data manipulation. Some commands and parameter entries have been reorganized into a uniform format. The command-mode version of the convolution program CONVOLVE has been developed. An on-line help system and a user's manual have also been developed for the benefit of the users. The database of AXAF technical documents continues to progress. The titles, company names, dates, and locations of over 390 documents have been entered in this database. The database provides both a data search and retrieval function and a data-adding function. These functions allow a user to quickly search the data files for documents or add new information. A detailed user's guide has also been prepared; it includes a document classification guide, a list of abbreviations, and a list of acronyms used in compiling this database of AXAF-I technical documents.

  5. "Utstein style" spreadsheet and database programs based on Microsoft Excel and Microsoft Access software for CPR data management of in-hospital resuscitation.

    PubMed

    Adams, Bruce D; Whitlock, Warren L

    2004-04-01

    In 1997, the American Heart Association, in association with representatives of the International Liaison Committee on Resuscitation (ILCOR), published recommended guidelines for reviewing, reporting and conducting in-hospital cardiopulmonary resuscitation (CPR) outcome studies using the "Utstein style". Using these guidelines, we developed two Microsoft Office-based database management programs that may be useful to the resuscitation community. We developed a user-friendly spreadsheet based on MS Office Excel. The user enters patient variables such as name, age, and diagnosis. Then, resuscitation event variables such as the time of collapse and CPR team arrival are entered from a "code flow sheet". Finally, outcome variables such as the patient's condition at different time points are recorded. The program then makes automatic calculations of average response times, survival rates and other important outcome measurements. Also using the Utstein style, we developed a database program based on MS Office Access. To promote free public access to these programs, we established a website. These programs will help hospitals track, analyze, and present their CPR outcomes data. Clinical CPR researchers might also find the programs useful because they are easily modified and have statistical functions.
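
    The automatic calculations described (response intervals, survival rates) reduce to simple date arithmetic and averages; a minimal pandas sketch with hypothetical field names and example events, not the actual Utstein templates:

    ```python
    # Compute the collapse-to-arrival interval and a survival rate
    # from event records. (Illustrative fields and values only.)
    import pandas as pd

    events = pd.DataFrame({
        "collapse_time": pd.to_datetime(["2004-01-05 10:02",
                                         "2004-01-09 22:15"]),
        "team_arrival":  pd.to_datetime(["2004-01-05 10:05",
                                         "2004-01-09 22:21"]),
        "survived_to_discharge": [True, False],
    })

    events["response_min"] = (
        (events["team_arrival"] - events["collapse_time"])
        .dt.total_seconds() / 60
    )
    print("mean response (min):", events["response_min"].mean())
    print("survival rate:", events["survived_to_discharge"].mean())
    ```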

  6. Final Report: Efficient Databases for MPC Microdata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael A. Bender; Martin Farach-Colton; Bradley C. Kuszmaul

    2011-08-31

    The purpose of this grant was to develop the theory and practice of high-performance databases for massive streamed datasets. Over the last three years, we have developed fast indexing technology, that is, technology for rapidly ingesting data and storing that data so that it can be efficiently queried and analyzed. During this project we developed the technology so that high-bandwidth data streams can be indexed and queried efficiently. Our technology has been proven to work on data sets composed of tens of billions of rows when the data stream arrives at over 40,000 rows per second. We achieved these numbers even on a single disk driven by two cores. Our work comprised (1) new write-optimized data structures with better asymptotic complexity than traditional structures, (2) implementation, and (3) benchmarking. We furthermore developed a prototype of TokuFS, a middleware layer that can handle microdata I/O packaged up in an MPI-IO abstraction.
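
    The core idea behind write-optimized indexing is to absorb inserts in memory and write sorted runs out sequentially, rather than updating index pages in place for every row. The toy sketch below illustrates only that buffering pattern; it is not the grant's actual fractal-tree or TokuFS implementation:

    ```python
    # LSM-style toy: inserts land in a sorted in-memory buffer; when
    # the buffer fills, it is flushed as a sorted run (a sequential
    # write); lookups check the buffer, then runs newest-first.
    import bisect

    class ToyWriteOptimizedIndex:
        def __init__(self, flush_threshold=4):
            self.buffer = []            # sorted (key, value) pairs in memory
            self.runs = []              # sorted runs "on disk"
            self.flush_threshold = flush_threshold

        def insert(self, key, value):
            bisect.insort(self.buffer, (key, value))
            if len(self.buffer) >= self.flush_threshold:
                self.runs.append(self.buffer)   # sequential run write
                self.buffer = []

        def get(self, key):
            # Newest data wins: buffer first, then runs newest-first
            for run in [self.buffer] + self.runs[::-1]:
                i = bisect.bisect_left(run, (key,))
                if i < len(run) and run[i][0] == key:
                    return run[i][1]
            return None

    idx = ToyWriteOptimizedIndex()
    for i in range(10):
        idx.insert(i % 5, f"row-{i}")
    print(idx.get(3))  # -> "row-8", the most recent value for key 3
    ```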

  7. Macromolecular Structure Database. Final Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilliland, Gary L.

    2003-09-23

    The central activity of the PDB continues to be the collection, archiving and distribution of high-quality structural data to the scientific community on a timely basis. In support of these activities, NIST has continued its roles in developing the physical archive, developing data uniformity, dealing with NMR issues and distributing PDB data through CD-ROMs. The physical archive holdings have been organized and inventoried, and a database has been created to facilitate their use. Data from individual PDB entries have been annotated to produce uniform values, tremendously improving the accuracy of query results. Working with the NMR community, we have established data items specific to NMR that will be included in new entries and facilitate data deposition. PDB CD-ROM production has continued on a quarterly basis, and new products are being distributed.

  8. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    PubMed

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
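
    The kind of query compared in the study is a multi-hop relationship that requires several joins in SQL but a single pattern match in Cypher. The sketch below uses a hypothetical drug-gene-disease schema and placeholder connection details with the official Neo4j Python driver, not the study's actual database:

    ```python
    # Contrast of the two query styles over the same relationships.
    # (Hypothetical schema; host/credentials are placeholders.)
    from neo4j import GraphDatabase

    # Relational layout: the same question needs a chain of joins
    SQL = """
    SELECT DISTINCT dr.name
    FROM drug dr
    JOIN drug_target dt  ON dt.drug_id = dr.id
    JOIN gene g          ON g.id = dt.gene_id
    JOIN gene_disease gd ON gd.gene_id = g.id
    JOIN disease d       ON d.id = gd.disease_id
    WHERE d.name = 'melanoma';
    """

    # Graph layout: one pattern match expresses the whole path
    CYPHER = """
    MATCH (dr:Drug)-[:TARGETS]->(:Gene)
          -[:ASSOCIATED_WITH]->(d:Disease {name: $disease})
    RETURN DISTINCT dr.name AS drug
    """

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    with driver.session() as session:
        for record in session.run(CYPHER, disease="melanoma"):
            print(record["drug"])
    driver.close()
    ```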

  9. Use of Graph Database for the Integration of Heterogeneous Biological Data

    PubMed Central

    Yoon, Byoung-Ha; Kim, Seon-Kyu

    2017-01-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data. PMID:28416946

  10. Results from prototype die-to-database reticle inspection system

    NASA Astrophysics Data System (ADS)

    Mu, Bo; Dayal, Aditya; Broadbent, Bill; Lim, Phillip; Goonesekera, Arosha; Chen, Chunlin; Yeung, Kevin; Pinto, Becky

    2009-03-01

    A prototype die-to-database high-resolution reticle defect inspection system has been developed for 32nm and below logic reticles, and 4X Half Pitch (HP) production and 3X HP development memory reticles. These nodes will use predominantly 193nm immersion lithography (with some layers double patterned), although EUV may also be used. Many different reticle types may be used for these generations, including binary (COG, EAPSM), simple tritone, complex tritone, high transmission, dark field alternating (APSM), mask enhancer, CPL, and EUV. Finally, aggressive model-based OPC is typically used, which includes many small structures such as jogs, serifs, and SRAF (sub-resolution assist features), accompanied by very small gaps between adjacent structures. The architecture and performance of the prototype inspection system are described. This system is designed to inspect the aforementioned reticle types in die-to-database mode. Die-to-database inspection results are shown on standard programmed defect test reticles, as well as advanced 32nm logic, and 4X HP and 3X HP memory reticles from industry sources. Direct comparisons with current-generation inspection systems show measurable sensitivity improvement and a reduction in false detections.

  11. US Army Research Laboratory Joint Interagency Field Experimentation 15-2 Final Report

    DTIC Science & Technology

    2015-12-01

    February 2015, at Alameda Island, California. Advanced text analytics capabilities were demonstrated in a logically coherent workflow pipeline that... text-processing capabilities allowed the targeted use of a persistent imagery sensor for rapid detection of mission-critical events. The creation of... a very large text database from open-source data provides a relevant and unclassified foundation for continued development of text-processing

  12. Datacomputer and SIP Operations

    DTIC Science & Technology

    1979-03-30

    developed in 1977 under ARPA Contract No. N00014-76-C-0991, as an application package which would utilize the Datacomputer [Dorin & Sattley] for... Datacomputer, Cambridge, Massachusetts. [DORIN & EASTLAKE] R. H. Dorin and Donald E. Eastlake, III, "Use of the Datacomputer in the Vela Seismological... [DORIN & SATTLEY] Dorin, R. H. and Sattley, J. Z., Databases: Final Technical Report, Report No. CCA-77-10, Computer Corporation of America, 575

  13. Common Battlefield Training for Airmen

    DTIC Science & Technology

    2007-01-01

    Independent Evaluation Analysis: We developed three model courses that satisfied the training requirements for CBAT, based primarily on training materials... individual subject-matter experts identified in their sorting, or that the material from the Lessons Learned Database suggested, refer to and apply the... shared experience that might or might not materialize in future operations. Finally, there have been questions regarding the best location for CBAT or

  14. Model Study for an Economic Data Program on the Conditions of Arts and Cultural Institutions. Final Report.

    ERIC Educational Resources Information Center

    Deane, Robert T.; And Others

    The development of econometric models and a database to predict the responsiveness of arts institutions to changes in the economy is reported. The study focused on models for museums, theaters (profit and non-profit), symphony, ballet, opera, and dance. The report details four objectives of the project: to identify useful databases and studies on…

  15. EXPOSURES AND INTERNAL DOSES OF ...

    EPA Pesticide Factsheets

    The National Center for Environmental Assessment (NCEA) has released a final report that presents and applies a method to estimate distributions of internal concentrations of trihalomethanes (THMs) in humans resulting from residential drinking water exposure. The report presents simulations of oral, dermal and inhalation exposures and demonstrates the feasibility of linking the US EPA's Information Collection Rule database with other databases on external exposure factors and physiologically based pharmacokinetic modeling to refine population-based estimates of exposure. Review draft: by 2010, develop scientifically sound data and approaches to assess and manage risks to human health posed by exposure to specific regulated waterborne pathogens and chemicals, including those addressed by the Arsenic, M/DBP and Six-Year Review Rules.

  16. Final Report of the Mid-Atlantic Marine Wildlife Surveys, Modeling, and Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saracino-Brown, Jocelyn; Smith, Courtney; Gilman, Patrick

    The Wind Program hosted a two-day workshop on July 24-25, 2012 with scientists and regulators engaged in marine ecological survey, modeling, and database efforts pertaining to the waters of the Mid-Atlantic region. The workshop was planned by Federal agency, academic, and private partners to promote collaboration between ongoing offshore ecological survey efforts, and to promote the collaborative development of complementary predictive models and compatible databases. The meeting primarily focused on efforts to establish and predict marine mammal, seabird, and sea turtle abundance, density, and distributions extending from the shoreline to the edge of the Exclusive Economic Zone between Nantucket Sound, Massachusetts and Cape Hatteras, North Carolina.

  17. The Research of Spatial-Temporal Analysis and Decision-Making Assistant System for Disabled Person Affairs Based on Mapworld

    NASA Astrophysics Data System (ADS)

    Zhang, J. H.; Yang, J.; Sun, Y. S.

    2015-06-01

    This system combines the Mapworld platform with the informationization of disabled persons' affairs, using the basic information of disabled persons as its central frame. Based on the disabled persons' population database, the affairs management system and the statistical account system, the data were effectively integrated and a unified information resource database was built. Through data analysis and mining, the system provides powerful data support for decision making, affairs management and public services. It finally realizes the rationalization, normalization and scientization of disabled persons' affairs management. It also makes significant contributions to the great-leap-forward development of the informationization of the China Disabled Persons' Federation.

  18. CSE database: extended annotations and new recommendations for ECG software testing.

    PubMed

    Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie

    2017-08-01

    Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on the ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, extending its original annotations. Second, new recommendations for estimating the quality of diagnostic software were established. The ECG recordings were diagnosed independently by five new cardiologists, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even among standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. The 4R consensus represents the correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the accuracy recommendations: sensitivity of 79.20-86.81%, positive predictive value of 79.10-87.11%, and Jaccard coefficient of 72.21-81.14%. Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. This quantification of correct-classification accuracy is unique. Diagnostic software developers can objectively evaluate the success of their algorithms and promote further development. The annotations and recommendations proposed in this work will allow for faster development and testing of classification software. As a result, this might facilitate cardiologists' work and lead to faster diagnoses and earlier treatment.
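
    The agreement measures used above (sensitivity, positive predictive value, Jaccard coefficient) can be computed directly over sets of diagnoses; a minimal sketch with example diagnosis sets, not the CSE annotations:

    ```python
    # Agreement between a predicted diagnosis set and the consensus set.
    # (Example diagnoses for illustration only.)
    def agreement(predicted: set, consensus: set):
        tp = len(predicted & consensus)            # diagnoses found in both
        sensitivity = tp / len(consensus)          # fraction of consensus recovered
        ppv = tp / len(predicted)                  # fraction of predictions correct
        jaccard = tp / len(predicted | consensus)  # overlap of the two sets
        return sensitivity, ppv, jaccard

    consensus = {"sinus rhythm", "LVH", "anterior MI"}
    software  = {"sinus rhythm", "LVH", "LBBB"}
    se, ppv, j = agreement(software, consensus)
    print(f"Se={se:.2f}  PPV={ppv:.2f}  Jaccard={j:.2f}")
    ```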

  19. A search map for organic additives and solvents applicable in high-voltage rechargeable batteries.

    PubMed

    Park, Min Sik; Park, Insun; Kang, Yoon-Sok; Im, Dongmin; Doo, Seok-Gwang

    2016-09-29

    Chemical databases store information such as molecular formulas, chemical structures, and the physical and chemical properties of compounds. Although massive databases of organic compounds exist, the search for target materials is constrained by a lack of the physical and chemical properties necessary for specific applications. With increasing interest in the development of energy storage systems such as high-voltage rechargeable batteries, it is critical to find new electrolytes efficiently. Here we build a search map to screen organic additives and solvents with novel core and functional groups, and thus establish a database of electrolytes to identify the most promising electrolyte for high-voltage rechargeable batteries. This search map is generated from the MAssive Molecular Map BUilder (MAMMBU) by combining a high-throughput quantum chemical simulation with an artificial neural network algorithm. MAMMBU is designed to predict the oxidation and reduction potentials of organic compounds in the massive organic compound database PubChem. We develop a search map composed of ∼1,000,000 redox potentials and elucidate the quantitative relationship between the redox potentials and functional groups. Finally, we screen a quinoxaline compound as an anode additive, apply it to electrolytes, and improve the capacity retention from 64.3% to 80.8% at around 200 cycles for a lithium-ion battery in experiments.
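
    The surrogate-modeling pattern described, training a neural network on quantum-chemistry results so that further compounds can be screened without new simulations, can be sketched as follows; the descriptors and target values are synthetic stand-ins, not the MAMMBU descriptors or data:

    ```python
    # Train a small neural network to map molecular descriptors to a
    # redox-potential-like target, then score held-out "compounds".
    # (Synthetic data standing in for quantum-chemistry results.)
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    X = rng.normal(size=(1000, 8))                # stand-in descriptors
    y = X @ rng.normal(size=8) + 0.3 * np.sin(X[:, 0])  # stand-in potential

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
    model.fit(X_tr, y_tr)
    print("R^2 on held-out compounds:", round(model.score(X_te, y_te), 3))
    ```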

  20. In silico identification of anti-cancer compounds and plants from traditional Chinese medicine database

    NASA Astrophysics Data System (ADS)

    Dai, Shao-Xing; Li, Wen-Xing; Han, Fei-Fei; Guo, Yi-Cheng; Zheng, Jun-Juan; Liu, Jia-Qian; Wang, Qian; Gao, Yue-Dong; Li, Gong-Hua; Huang, Jing-Fei

    2016-05-01

    There is a constant demand to develop new, effective, and affordable anti-cancer drugs. Traditional Chinese medicine (TCM) is a valuable alternative resource for identifying novel anti-cancer agents. In this study, we aim to identify anti-cancer compounds and plants from the TCM database by using cheminformatics. We first predicted 5278 anti-cancer compounds from the TCM database. The top 346 compounds were highly potent in the 60-cell-line test. Similarity analysis revealed that 75% of the 5278 compounds are highly similar to approved anti-cancer drugs. Based on the predicted anti-cancer compounds, we identified 57 anti-cancer plants by activity enrichment. The identified plants are widely distributed in 46 genera and 28 families, which broadens the scope of anti-cancer drug screening. Finally, we constructed a network of predicted anti-cancer plants and approved drugs based on the above results. The network highlighted the supportive role of the predicted plants in anti-cancer drug development and suggested different molecular anti-cancer mechanisms of the plants. Our study suggests that the predicted compounds and plants from the TCM database offer an attractive starting point and a broader scope to mine for potential anti-cancer agents.
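    The similarity step could look like the following hedged sketch, which uses RDKit Morgan fingerprints and Tanimoto similarity; the SMILES strings are generic placeholders rather than compounds from the study.

    ```python
    # A hedged sketch of the similarity analysis: Morgan fingerprints plus Tanimoto
    # similarity against a set of approved anti-cancer drugs. Requires RDKit.
    from rdkit import Chem
    from rdkit import DataStructs
    from rdkit.Chem import AllChem

    def fingerprint(smiles):
        mol = Chem.MolFromSmiles(smiles)
        return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

    approved = [fingerprint(s) for s in ["CC(=O)Oc1ccccc1C(=O)O"]]   # placeholder drug set
    candidate = fingerprint("c1ccc2c(c1)ccc1ccccc12")                # placeholder TCM compound

    best = max(DataStructs.TanimotoSimilarity(candidate, fp) for fp in approved)
    print(f"highest similarity to an approved drug: {best:.2f}")
    ```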

  1. In silico identification of anti-cancer compounds and plants from traditional Chinese medicine database.

    PubMed

    Dai, Shao-Xing; Li, Wen-Xing; Han, Fei-Fei; Guo, Yi-Cheng; Zheng, Jun-Juan; Liu, Jia-Qian; Wang, Qian; Gao, Yue-Dong; Li, Gong-Hua; Huang, Jing-Fei

    2016-05-05

    There is a constant demand to develop new, effective, and affordable anti-cancer drugs. Traditional Chinese medicine (TCM) is a valuable alternative resource for identifying novel anti-cancer agents. In this study, we aim to identify anti-cancer compounds and plants from the TCM database by using cheminformatics. We first predicted 5278 anti-cancer compounds from the TCM database. The top 346 compounds were highly potent in the 60-cell-line test. Similarity analysis revealed that 75% of the 5278 compounds are highly similar to approved anti-cancer drugs. Based on the predicted anti-cancer compounds, we identified 57 anti-cancer plants by activity enrichment. The identified plants are widely distributed in 46 genera and 28 families, which broadens the scope of anti-cancer drug screening. Finally, we constructed a network of predicted anti-cancer plants and approved drugs based on the above results. The network highlighted the supportive role of the predicted plants in anti-cancer drug development and suggested different molecular anti-cancer mechanisms of the plants. Our study suggests that the predicted compounds and plants from the TCM database offer an attractive starting point and a broader scope to mine for potential anti-cancer agents.

  2. In silico identification of anti-cancer compounds and plants from traditional Chinese medicine database

    PubMed Central

    Dai, Shao-Xing; Li, Wen-Xing; Han, Fei-Fei; Guo, Yi-Cheng; Zheng, Jun-Juan; Liu, Jia-Qian; Wang, Qian; Gao, Yue-Dong; Li, Gong-Hua; Huang, Jing-Fei

    2016-01-01

    There is a constant demand to develop new, effective, and affordable anti-cancer drugs. Traditional Chinese medicine (TCM) is a valuable alternative resource for identifying novel anti-cancer agents. In this study, we aim to identify anti-cancer compounds and plants from the TCM database by using cheminformatics. We first predicted 5278 anti-cancer compounds from the TCM database. The top 346 compounds were highly potent in the 60-cell-line test. Similarity analysis revealed that 75% of the 5278 compounds are highly similar to approved anti-cancer drugs. Based on the predicted anti-cancer compounds, we identified 57 anti-cancer plants by activity enrichment. The identified plants are widely distributed in 46 genera and 28 families, which broadens the scope of anti-cancer drug screening. Finally, we constructed a network of predicted anti-cancer plants and approved drugs based on the above results. The network highlighted the supportive role of the predicted plants in anti-cancer drug development and suggested different molecular anti-cancer mechanisms of the plants. Our study suggests that the predicted compounds and plants from the TCM database offer an attractive starting point and a broader scope to mine for potential anti-cancer agents. PMID:27145869

  3. Development of associations and kinetic models for microbiological data to be used in comprehensive food safety prediction software.

    PubMed

    Halder, Amit; Black, D Glenn; Davidson, P Michael; Datta, Ashim

    2010-08-01

    The objective of this study was to use an existing database of food products and their associated processes, link it with a list of the foodborne pathogenic microorganisms associated with those products, and finally identify growth and inactivation kinetic parameters associated with those pathogens. The database was to be used as part of the development of comprehensive software which could predict food safety and quality for any food product. The main issues in building such a predictive system included selection of predictive models, associations of different food types with pathogens (as determined from outbreak histories), and variability in data from different experiments. More than 1000 data sets from published literature were analyzed and grouped according to microorganisms and food types. The final grouping of data consisted of the 8 most prevalent pathogens for 14 different food groups, covering all of the foods (>7000) listed in the USDA Natl. Nutrient Database. Data for each group were analyzed in terms of 1st-order inactivation, 1st-order growth, and sigmoidal growth models, and their kinetic responses for growth and inactivation as a function of temperature were reported. Means and 95% confidence intervals were calculated for the prediction equations. The primary advantage of obtaining group-specific kinetic data is the ability to extend microbiological growth and death simulation to a large array of product and process possibilities while still being reasonably accurate. Such simulation capability could provide vital "what if" scenarios for industry, Extension, and academia in food safety.
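    A minimal sketch of fitting the first of these models, first-order (log-linear) inactivation, to recover a D-value; the survivor counts are synthetic.

    ```python
    # Minimal sketch of fitting a first-order (log-linear) thermal inactivation model,
    # log10 N(t) = log10 N0 - t/D, to recover the D-value. Counts are synthetic.
    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # minutes at constant temperature
    log_n = np.array([6.0, 5.1, 4.0, 3.1, 2.0])      # log10 CFU/g survivor counts

    slope, intercept = np.polyfit(t, log_n, 1)
    d_value = -1.0 / slope                           # time for a 1-log10 reduction
    print(f"D-value ≈ {d_value:.2f} min, log10 N0 ≈ {intercept:.2f}")
    ```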

  4. Report on Approaches to Database Translation. Final Report.

    ERIC Educational Resources Information Center

    Gallagher, Leonard; Salazar, Sandra

    This report describes approaches to database translation (i.e., transferring data and data definitions from a source, either a database management system (DBMS) or a batch file, to a target DBMS), and recommends a method for representing the data structures of newly-proposed network and relational data models in a form suitable for database…

  5. Korean Ministry of Environment's web-based visual consumer product exposure and risk assessment system (COPER).

    PubMed

    Lee, Hunjoo; Lee, Kiyoung; Park, Ji Young; Min, Sung-Gi

    2017-05-01

    With support from the Korean Ministry of the Environment (ME), our interdisciplinary research staff developed the COnsumer Product Exposure and Risk assessment system (COPER). This system includes various databases and features that enable the calculation of exposure and the determination of risk caused by consumer product use. COPER is divided into three tiers: the integrated database layer (IDL), the domain specific service layer (DSSL), and the exposure and risk assessment layer (ERAL). The IDL is organized by the form of the raw data (mostly non-aggregated data) and includes four sub-databases: a toxicity profile, an inventory of Korean consumer products, the weight fractions of chemical substances in the consumer products determined by chemical analysis, and national representative exposure factors. The DSSL provides web-based information services corresponding to each database within the IDL. Finally, the ERAL enables risk assessors to perform various exposure and risk assessments, including the design of exposure scenarios for inhalation or dermal contact, by using and organizing each database in an intuitive manner. This paper outlines the overall architecture of the system and highlights some of COPER's unique features, which are based on a visual and dynamic rendering engine for web-based exposure assessment models.
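    The kind of calculation ERAL performs might resemble the sketch below, which uses the standard inhalation average-daily-dose form (concentration × inhalation rate × exposure time / body weight); the parameter values and function name are hypothetical, not COPER's actual implementation.

    ```python
    # Generic exposure-assessment sketch: an average daily dose for inhalation
    # exposure to a product-borne chemical. All parameter values are hypothetical.
    def inhalation_add(conc_mg_m3, inh_rate_m3_h, hours_per_day, body_weight_kg):
        """Average daily dose in mg/kg-bw/day."""
        return conc_mg_m3 * inh_rate_m3_h * hours_per_day / body_weight_kg

    dose = inhalation_add(conc_mg_m3=0.05, inh_rate_m3_h=0.83,
                          hours_per_day=2, body_weight_kg=60)
    print(f"ADD ≈ {dose:.5f} mg/kg-bw/day")
    ```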

  6. Improving retrospective characterization of the food environment for a large region in the United States during a historic time period.

    PubMed

    Auchincloss, Amy H; Moore, Kari A B; Moore, Latetia V; Diez Roux, Ana V

    2012-11-01

    Access to healthy foods has received increasing attention due to the growing prevalence of obesity and diet-related health conditions, yet there are major obstacles to characterizing the local food environment. This study developed a method to retrospectively characterize supermarkets for a single historic year, 2005, in 19 counties in 6 states in the USA using a supermarket chain-name list and two business databases. Data preparation, merging, overlaps, the added value of the various approaches, and differences by census-tract socio-demographic characteristics are described. Agreement between the two food store databases was modest: 63%. Only 55% of the final list of supermarkets was identified by a single business database and selection criteria that included industry classification codes and sales revenue ≥$2 million. The added value of using a supermarket chain-name list and a second business database was the identification of an additional 14% and 30% of supermarkets, respectively. These methods are particularly useful for retrospectively characterizing access to supermarkets during a historic period, when field observations are not feasible and business databases must be used.
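    A toy sketch of the merge-and-select logic described above, assuming hypothetical field names and records: combine two business databases, deduplicate on a normalized name-address key, and keep stores passing the chain-name or sales-revenue criteria.

    ```python
    # Sketch of the merge logic: combine two business databases and a supermarket
    # chain-name list, deduplicating on a normalized name+address key.
    def key(store):
        return (store["name"].lower().strip(), store["address"].lower().strip())

    db_a = [{"name": "FreshMart", "address": "1 Main St", "sales_musd": 3.2}]
    db_b = [{"name": "Freshmart", "address": "1 Main St", "sales_musd": 3.1},
            {"name": "Corner Grocer", "address": "9 Elm Ave", "sales_musd": 0.4}]
    chain_names = {"freshmart"}

    merged = {}
    for store in db_a + db_b:
        merged.setdefault(key(store), store)          # first occurrence wins

    supermarkets = [s for s in merged.values()
                    if s["sales_musd"] >= 2.0 or s["name"].lower() in chain_names]
    print(supermarkets)
    ```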

  7. 48 CFR 32.1110 - Solicitation provision and contract clauses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... System for Award Management (SAM) database and maintain registration until final payment, unless— (i..., or a similar agency clause that requires the contractor to be registered in the SAM database. (ii)(A...

  8. 48 CFR 32.1110 - Solicitation provision and contract clauses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... System for Award Management (SAM) database and maintain registration until final payment, unless— (i..., or a similar agency clause that requires the contractor to be registered in the SAM database. (ii)(A...

  9. New tools for discovery from old databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.P.

    1990-05-01

    Very large quantities of information have been accumulated as a result of petroleum exploration and the practice of petroleum geology. New and more powerful methods to build and analyze databases have been developed. The new tools must be tested, and, as quickly as possible, combined with traditional methods to the full advantage of currently limited funds in the search for new and extended hydrocarbon reserves. A recommended combined sequence is (1) database validating, (2) category separating, (3) machine learning, (4) graphic modeling, (5) database filtering, and (6) regression for predicting. To illustrate this procedure, a database from the Railroad Commission of Texas has been analyzed. Clusters of information have been identified to prevent apples and oranges problems from obscuring the conclusions. Artificial intelligence has checked the database for potentially invalid entries and has identified rules governing the relationship between factors, which can be numeric or nonnumeric (words), or both. Graphic 3-Dimensional modeling has clarified relationships. Database filtering has physically separated the integral parts of the database, which can then be run through the sequence again, increasing the precision. Finally, regressions have been run on separated clusters giving equations, which can be used with confidence in making predictions. Advances in computer systems encourage the learning of much more from past records, and reduce the danger of prejudiced decisions. Soon there will be giant strides beyond current capabilities to the advantage of those who are ready for them.
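    The cluster-then-regress core of this sequence can be sketched as follows, with synthetic data standing in for the well records: separate clusters first so that "apples and oranges" are not mixed, then fit a regression per cluster.

    ```python
    # Sketch of steps (2)-(6) in miniature: cluster the records, then run a
    # per-cluster regression. Data are synthetic stand-ins for the well records.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
    y = np.concatenate([X[:50] @ [1.0, 2.0, 0.5], X[50:] @ [-0.5, 1.5, 3.0]])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    for c in np.unique(labels):
        reg = LinearRegression().fit(X[labels == c], y[labels == c])
        print(f"cluster {c}: coefficients {np.round(reg.coef_, 2)}")
    ```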

  10. ReprDB and panDB: minimalist databases with maximal microbial representation.

    PubMed

    Zhou, Wei; Gay, Nicole; Oh, Julia

    2018-01-18

    Profiling of shotgun metagenomic samples is hindered by a lack of unified microbial reference genome databases that (i) assemble genomic information from all open access microbial genomes, (ii) have relatively small sizes, and (iii) are compatible to various metagenomic read mapping tools. Moreover, computational tools to rapidly compile and update such databases to accommodate the rapid increase in new reference genomes do not exist. As a result, database-guided analyses often fail to profile a substantial fraction of metagenomic shotgun sequencing reads from complex microbiomes. We report pipelines that efficiently traverse all open access microbial genomes and assemble non-redundant genomic information. The pipelines result in two species-resolution microbial reference databases of relatively small sizes: reprDB, which assembles microbial representative or reference genomes, and panDB, for which we developed a novel iterative alignment algorithm to identify and assemble non-redundant genomic regions in multiple sequenced strains. With the databases, we managed to assign taxonomic labels and genome positions to the majority of metagenomic reads from human skin and gut microbiomes, demonstrating a significant improvement over a previous database-guided analysis on the same datasets. reprDB and panDB leverage the rapid increases in the number of open access microbial genomes to more fully profile metagenomic samples. Additionally, the databases exclude redundant sequence information to avoid inflated storage or memory space and indexing or analyzing time. Finally, the novel iterative alignment algorithm significantly increases efficiency in pan-genome identification and can be useful in comparative genomic analyses.
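    A drastically simplified sketch of the panDB idea, using exact k-mer matching on toy strings in place of the real iterative alignment: traverse strain genomes in turn and append only regions not yet represented.

    ```python
    # Toy sketch of building a non-redundant pan-sequence. Real pan-genome
    # construction uses alignment, not exact k-mer matching; sequences are toys.
    K = 4

    def novel_regions(genome, seen):
        out, start = [], None
        for i in range(len(genome) - K + 1):
            kmer = genome[i:i + K]
            if kmer not in seen:
                start = i if start is None else start
                seen.add(kmer)
            elif start is not None:
                out.append(genome[start:i + K - 1]); start = None
        if start is not None:
            out.append(genome[start:])
        return out

    seen, pan = set(), []
    for strain in ["ACGTACGGTT", "ACGTACGCAA", "TTTTACGGTT"]:
        pan.extend(novel_regions(strain, seen))   # only unseen regions are added
    print(pan)
    ```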

  11. SMART operational field test evaluation : operations database report : final report

    DOT National Transportation Integrated Search

    1997-09-01

    Based on the Suburban Mobility Authority for Regional Transportation's (SMART) weekly operating reports from its Macomb, Wayne, Troy, and Pontiac terminals, this Operations Database Report explores productivity measures over time, and examines how ...

  12. Design of a diagnostic encyclopaedia using AIDA.

    PubMed

    van Ginneken, A M; Smeulders, A W; Jansen, W

    1987-01-01

    Diagnostic Encyclopaedia Workstation (DEW) is the name of a digital encyclopaedia constructed to contain reference knowledge on the pathology of the ovary. Compared with the common sources of reference knowledge (i.e., books), DEW offers the following advantages: it contains more verbal knowledge, pictures, and case histories, and it offers information adjusted to the needs of the user. Based on an analysis of the structure of this reference knowledge, we chose AIDA to develop a relational database, and we use a video-disc player to hold the pictorial part of the database. The system consists of a database input version and a read-only run version. The design of the database input version is discussed. Reference knowledge for ovary pathology requires 1-3 Mbytes of memory; at present, 15% of this amount is available. The design of the run version is based on an analysis of which information the user must specify to the system to access a desired item of information. Finally, the use of AIDA in constructing DEW is evaluated.

  13. Database for Parkinson Disease Mutations and Rare Variants

    DTIC Science & Technology

    2016-09-01

    Award Number: W81XWH-14-1-0097. Title: "Database for Parkinson Disease Mutations and Rare Variants." Principal Investigator: Jeffery M. Vance. Report Date: September 2016. Report Type: Final. Dates Covered: 1 Jul 2014 - 30 Jun 2016. For Parkinson Disease (PD) specifically, the variant databases currently available are incomplete, don't assess impact and/or are not equipped to...

  14. Bioinformatics and molecular modeling in glycobiology

    PubMed Central

    Schloissnig, Siegfried

    2010-01-01

    The field of glycobiology is concerned with the study of the structure, properties, and biological functions of the family of biomolecules called carbohydrates. Bioinformatics for glycobiology is a particularly challenging field, because carbohydrates exhibit a high structural diversity and their chains are often branched. Significant improvements in experimental analytical methods over recent years have led to a tremendous increase in the amount of carbohydrate structure data generated. Consequently, the availability of databases and tools to store, retrieve and analyze these data in an efficient way is of fundamental importance to progress in glycobiology. In this review, the various graphical representations and sequence formats of carbohydrates are introduced, and an overview of newly developed databases, the latest developments in sequence alignment and data mining, and tools to support experimental glycan analysis are presented. Finally, the field of structural glycoinformatics and molecular modeling of carbohydrates, glycoproteins, and protein–carbohydrate interaction are reviewed. PMID:20364395

  15. Allocation of surgical procedures to operating rooms.

    PubMed

    Ozkarahan, I

    1995-08-01

    Reduction of health care costs is of paramount importance in our time. This paper is part of research which proposes an expert hospital decision support system for resource scheduling. The proposed system combines mathematical programming, knowledge base, and database technologies, and its friendly interface is suitable for any novice user. Operating rooms in hospitals represent big investments and must be utilized efficiently. In this paper, first a mathematical model similar to job shop scheduling models is developed. The model assigns surgical cases to operating rooms by maximizing room utilization and minimizing overtime in a multiple operating room setting. Then a prototype expert system is developed which replaces the expertise of the operations research analyst for the model, drives the model base and database, and manages the user dialog. Finally, an overview of the sequencing procedures for operations within an operating room is also presented.
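    A greedy stand-in for the paper's mathematical programming model (not the authors' formulation): assign the longest cases first to the room with the most remaining capacity, then report utilization and overtime; durations and capacities are hypothetical.

    ```python
    # Greedy sketch of loading surgical cases into operating rooms while trading
    # off utilization against overtime. Durations and shift lengths are invented.
    def load_rooms(case_minutes, n_rooms, shift_minutes=480):
        rooms = [[] for _ in range(n_rooms)]
        used = [0] * n_rooms
        for dur in sorted(case_minutes, reverse=True):   # longest processing time first
            r = min(range(n_rooms), key=lambda i: used[i])   # most remaining capacity
            rooms[r].append(dur)
            used[r] += dur
        overtime = sum(max(0, u - shift_minutes) for u in used)
        utilization = sum(min(u, shift_minutes) for u in used) / (n_rooms * shift_minutes)
        return rooms, utilization, overtime

    rooms, util, ot = load_rooms([240, 180, 150, 120, 90, 60, 45], n_rooms=2)
    print(rooms, f"utilization={util:.0%}", f"overtime={ot} min")
    ```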

  16. Evolving Strategies for the Incorporation of Bioinformatics Within the Undergraduate Cell Biology Curriculum

    PubMed Central

    Honts, Jerry E.

    2003-01-01

    Recent advances in genomics and structural biology have resulted in an unprecedented increase in biological data available from Internet-accessible databases. In order to help students effectively use this vast repository of information, undergraduate biology students at Drake University were introduced to bioinformatics software and databases in three courses, beginning with an introductory course in cell biology. The exercises and projects that were used to help students develop literacy in bioinformatics are described. In a recently offered course in bioinformatics, students developed their own simple sequence analysis tool using the Perl programming language. These experiences are described from the point of view of the instructor as well as the students. A preliminary assessment has been made of the degree to which students had developed a working knowledge of bioinformatics concepts and methods. Finally, some conclusions have been drawn from these courses that may be helpful to instructors wishing to introduce bioinformatics within the undergraduate biology curriculum. PMID:14673489

  17. Design and development of a multimedia database for emergency telemedicine.

    PubMed

    Pavlopoulos, S; Berler, A; Kyriacou, E; Koutsouris, D

    1998-09-01

    Recent studies conclude that early and specialised pre-hospital patient management contributes to survival in emergency cases. Recent developments in telecommunications and medical informatics, by means of telemedicine, can be extremely useful for accomplishing such tasks in a cost-effective manner. Along that direction, we have designed a portable device for emergency telemedicine. This device is able to telematically "bring" the expert doctor to the emergency site, have him perform an accurate diagnosis, and subsequently direct the Emergency Medical Technicians on how to treat the patient until arrival at the hospital. The need for storing and archiving all data interchanged during the telemedicine sessions is crucial for clinical, legal and administrative purposes. For this, we have developed a multimedia database able to store and manage the data collected by the AMBULANCE system. The database was equipped with a user-friendly graphical interface to enable use by computer-naive users. Furthermore, the database can display, in a standard way, ECGs and X-ray, CT and MRI images. The application is password protected with a three-level access hierarchy for users with different privileges. The scope of this application is to enhance the capabilities of the doctor on duty for a more precise and prompt diagnosis. The application has the ability to store audio files related to each emergency case and still images of the scene. Finally, this database can become a useful multimedia tool which will work together with the AMBULANCE portable device, the HIS and the PACS of the hospital. The system has been validated in selected non-critical cases and proved to be functional and successful in enhancing the ability of the doctor on duty to provide a prompt and accurate diagnosis and specialised pre-hospital treatment.
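    Such a case database could be sketched as below with SQLite; the table and column names are hypothetical, not the AMBULANCE system's actual schema.

    ```python
    # Hedged sketch of a multimedia case database with a three-level access check.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, access_level INTEGER); -- 1=admin,2=doctor,3=viewer
    CREATE TABLE cases (id INTEGER PRIMARY KEY, opened_at TEXT, diagnosis TEXT);
    CREATE TABLE media (id INTEGER PRIMARY KEY, case_id INTEGER REFERENCES cases(id),
                        kind TEXT CHECK (kind IN ('ecg','xray','ct','mri','audio','image')),
                        path TEXT);
    """)
    con.execute("INSERT INTO cases (opened_at, diagnosis) VALUES ('2024-01-01T10:02', 'suspected MI')")
    con.execute("INSERT INTO media (case_id, kind, path) VALUES (1, 'ecg', '/archive/case1/ecg.dat')")

    def can_edit(access_level):
        return access_level <= 2          # admins and doctors may edit, viewers may not

    for row in con.execute("SELECT kind, path FROM media WHERE case_id = 1"):
        print(row)
    ```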

  18. Volcanic observation data and simulation database at NIED, Japan (Invited)

    NASA Astrophysics Data System (ADS)

    Fujita, E.; Ueda, H.; Kozono, T.

    2009-12-01

    NIED (Nat'l Res. Inst. for Earth Sci. & Disast. Prev.) has a project to develop two volcanic database systems: (1) a volcanic observation database and (2) a volcanic simulation database. The volcanic observation database is the archive center for data obtained by the geophysical observation networks at Mt. Fuji, Miyake, Izu-Oshima, Iwo-jima and Nasu volcanoes, central Japan. The data consist of seismic (both high-sensitivity and broadband), ground deformation (tiltmeter, GPS) and other sensor records (e.g., rain gauge, gravimeter, magnetometer, pressure gauge). These data are originally stored in "WIN format," the Japanese standard format, which is also used at Hi-net (High sensitivity seismic network Japan, http://www.hinet.bosai.go.jp/). NIED has joined WOVOdat and we have prepared to upload our data via an XML format. Our concept of the XML format is 1) a common format for intermediate files uploaded into the WOVOdat DB, 2) a format for data files downloaded from the WOVOdat DB, 3) a format for data exchange between observatories without the WOVOdat DB, 4) a common data format within each observatory, 5) a format for data communication between systems and software, and 6) a format for software tools. NIED is now preparing (2), the volcanic simulation database. The objective of this project is to support the development of a "real-time" hazard map, i.e., a system for evaluating volcanic hazard in an emergency, taking up-to-date conditions into account. Our system will include lava flow simulation (LavaSIM) and pyroclastic flow simulation (grvcrt). The database will keep many pre-computed simulation cases so that the most probable case can be picked as a first evaluation once an eruption starts. The final goal of both databases is to realize volcanic eruption prediction and forecasting in real time through the combination of monitoring data and numerical simulations.

  19. Thresholds of Toxicological Concern for cosmetics-related substances: New database, thresholds, and enrichment of chemical space.

    PubMed

    Yang, Chihae; Barlow, Susan M; Muldoon Jacobs, Kristi L; Vitcheva, Vessela; Boobis, Alan R; Felter, Susan P; Arvidson, Kirk B; Keller, Detlef; Cronin, Mark T D; Enoch, Steven; Worth, Andrew; Hollnagel, Heli M

    2017-11-01

    A new dataset of cosmetics-related chemicals for the Threshold of Toxicological Concern (TTC) approach has been compiled, comprising 552 chemicals with 219, 40, and 293 chemicals in Cramer Classes I, II, and III, respectively. Data were integrated and curated to create a database of No-/Lowest-Observed-Adverse-Effect Level (NOAEL/LOAEL) values, from which the final COSMOS TTC dataset was developed. Criteria for study inclusion and NOAEL decisions were defined, and rigorous quality control was performed for study details and assignment of Cramer classes. From the final COSMOS TTC dataset, human exposure thresholds of 42 and 7.9 μg/kg-bw/day were derived for Cramer Classes I and III, respectively. The size of Cramer Class II was insufficient for derivation of a TTC value. The COSMOS TTC dataset was then federated with the dataset of Munro and colleagues, previously published in 1996, after updating the latter using the quality control processes for this project. This federated dataset expands the chemical space and provides more robust thresholds. The 966 substances in the federated database comprise 245, 49 and 672 chemicals in Cramer Classes I, II and III, respectively. The corresponding TTC values of 46, 6.2 and 2.3 μg/kg-bw/day are broadly similar to those of the original Munro dataset.
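    The derivation behind such thresholds follows the Munro-style approach of taking the 5th percentile of the NOAEL distribution for a Cramer class and applying a 100-fold uncertainty factor; the sketch below uses synthetic NOAELs, not the COSMOS data.

    ```python
    # Sketch of a Munro-style TTC derivation for one Cramer class. NOAEL values
    # below are synthetic stand-ins for the curated dataset.
    import numpy as np

    noaels = np.array([3.2, 10.0, 25.0, 40.0, 75.0, 120.0, 300.0, 500.0])  # mg/kg-bw/day
    p5 = np.percentile(noaels, 5)
    ttc_ug = p5 / 100 * 1000          # 100-fold uncertainty factor, converted to µg/kg-bw/day
    print(f"TTC ≈ {ttc_ug:.1f} µg/kg-bw/day")
    ```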

  20. In silico mining of putative microsatellite markers from whole genome sequence of water buffalo (Bubalus bubalis) and development of first BuffSatDB

    PubMed Central

    2013-01-01

    Background: Although India has sequenced the water buffalo genome, its draft assembly is based on the cattle genome BTau 4.0; thus, de novo chromosome-wise assembly is a major pending issue for the global community. The existing buffalo radiation hybrid panel and the STRs reported here can be used in the final gap plugging and "finishing" expected in de novo genome assembly. QTL and gene mapping need the mining of putative STRs from the buffalo genome at regular intervals on each and every chromosome. Such markers have a potential role in the improvement of desirable characteristics, such as high milk yield, disease resistance, and high growth rate. STR mining from the whole genome and the development of a user-friendly database are yet to be done to reap the benefit of the whole genome sequence. Description: By in silico microsatellite mining of the whole genome, we have developed the first STR database of water buffalo, BuffSatDb (Buffalo MicroSatellite Database; http://cabindb.iasri.res.in/buffsatdb/), a web-based relational database of 910,529 microsatellite markers developed using PHP and the MySQL database. Microsatellite markers have been generated using the MIcroSAtellite tool. The database offers a simple and systematic web-based search for customised retrieval of chromosome-wise and genome-wide microsatellites. Search is enabled by chromosome, motif type (mono- to hexanucleotide), repeat motif and repeat kind (simple and composite). The search may be customised by limiting the location of the STR on the chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer design on the selected markers, enabling researchers to select markers of choice at the desired interval over the chromosome. A unique add-on for degenerate bases further helps in resolving the presence of degenerate bases in the current buffalo assembly. Conclusion: Being the first buffalo STR database in the world, it should not only pave the way to resolving the current assembly problem but also be of immense use to the global community in the QTL/gene mapping critically required to increase buffalo productivity, especially in developing countries where rural economies depend significantly on buffalo. PMID:23336431
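    The customised retrieval described above can be illustrated with the toy query below; the real system uses PHP/MySQL with its own schema, so the table and column names here are hypothetical.

    ```python
    # Sketch of a chromosome/motif/interval search against a toy STR table.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE str_markers (chrom TEXT, start INTEGER, motif TEXT, kind TEXT)")
    con.executemany("INSERT INTO str_markers VALUES (?, ?, ?, ?)", [
        ("chr1", 10500, "AC", "simple"), ("chr1", 250300, "ATG", "simple"),
        ("chr2", 99000, "A", "composite"),
    ])

    rows = con.execute(
        """SELECT chrom, start, motif FROM str_markers
           WHERE chrom = ? AND kind = ? AND start BETWEEN ? AND ?""",
        ("chr1", "simple", 0, 300000)).fetchall()
    print(rows)   # markers of choice within the requested chromosome interval
    ```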

  1. In silico mining of putative microsatellite markers from whole genome sequence of water buffalo (Bubalus bubalis) and development of first BuffSatDB.

    PubMed

    Sarika; Arora, Vasu; Iquebal, Mir Asif; Rai, Anil; Kumar, Dinesh

    2013-01-19

    Although India has sequenced the water buffalo genome, its draft assembly is based on the cattle genome BTau 4.0; thus, de novo chromosome-wise assembly is a major pending issue for the global community. The existing buffalo radiation hybrid panel and the STRs reported here can be used in the final gap plugging and "finishing" expected in de novo genome assembly. QTL and gene mapping need the mining of putative STRs from the buffalo genome at regular intervals on each and every chromosome. Such markers have a potential role in the improvement of desirable characteristics, such as high milk yield, disease resistance, and high growth rate. STR mining from the whole genome and the development of a user-friendly database are yet to be done to reap the benefit of the whole genome sequence. By in silico microsatellite mining of the whole genome, we have developed the first STR database of water buffalo, BuffSatDb (Buffalo MicroSatellite Database; http://cabindb.iasri.res.in/buffsatdb/), a web-based relational database of 910,529 microsatellite markers developed using PHP and the MySQL database. Microsatellite markers have been generated using the MIcroSAtellite tool. The database offers a simple and systematic web-based search for customised retrieval of chromosome-wise and genome-wide microsatellites. Search is enabled by chromosome, motif type (mono- to hexanucleotide), repeat motif and repeat kind (simple and composite). The search may be customised by limiting the location of the STR on the chromosome as well as the number of markers in that range. This is a novel approach that has not been implemented in any existing marker database. The database has been further appended with Primer3 for primer design on the selected markers, enabling researchers to select markers of choice at the desired interval over the chromosome. A unique add-on for degenerate bases further helps in resolving the presence of degenerate bases in the current buffalo assembly. Being the first buffalo STR database in the world, it should not only pave the way to resolving the current assembly problem but also be of immense use to the global community in the QTL/gene mapping critically required to increase buffalo productivity, especially in developing countries where rural economies depend significantly on buffalo.

  2. Hybrid Approach for Automatic Evaluation of Emotion Elicitation Oriented to People with Intellectual Disabilities

    NASA Astrophysics Data System (ADS)

    Martínez, R.; de Ipiña, K. López; Irigoyen, E.; Asla, N.

    People with intellectual disabilities and elderly people need physical and intellectual support to ensure independent living. This is one of the main issues in applying Information and Communication Technology (ICT) to the Assistive Technology field. In this sense, the development of appropriate Intelligent Systems (ISs) offers new perspectives to this community. In our project, a new IS (LAGUNTXO) that adds user affective information, oriented to people with intellectual disabilities, has been developed. The system integrates a Human Emotion Analysis System (HEAS) that attempts to resolve critical situations for this community, such as block stages. In developing the HEAS, one of the critical issues was creating appropriate databases to train the system, owing to the difficulty of simulating pre-block stages in the laboratory. Finally, an emotion-elicitation database based on films and real sequences was created. The elicitation material was categorized with up-to-date features based on discrete emotions and dimensional terms (pleasant, unpleasant). Classically, the evaluation is carried out by a specialist (psychologist). In this work, we present a hybrid approach for the automatic evaluation of emotion-elicitation databases based on machine learning classifiers and K-means clustering. The new categorization and the automatic evaluation show a high level of accuracy with respect to other methodologies presented in the literature.
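    A minimal sketch of the hybrid evaluation idea, assuming synthetic features and labels in place of the real elicitation data: cluster responses with K-means, train a supervised classifier, and compare the two labelings.

    ```python
    # Hedged sketch of a hybrid (clustering + classification) evaluation.
    # Features and labels are synthetic placeholders for the physiological data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(3, 1, (40, 5))])
    y = np.array([0] * 40 + [1] * 40)            # pleasant vs unpleasant annotations

    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    clf_pred = SVC().fit(X, y).predict(X)

    print("cluster/label agreement:", adjusted_rand_score(y, clusters))
    print("classifier accuracy:", (clf_pred == y).mean())
    ```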

  3. The NASA Hyper-X Program

    NASA Technical Reports Server (NTRS)

    Freeman, Delman C., Jr.; Reubush, David E.; McClinton, Charles R.; Rausch, Vincent L.; Crawford, J. Larry

    1997-01-01

    This paper provides an overview of NASA's Hyper-X Program, a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an overview of the flight test program, research objectives, approach, schedule, and status. A substantial experimental database and concept validation have been completed. The program is currently concentrating on development, verification, and validation of the first, Mach 7, vehicle in preparation for wind-tunnel testing in 1998 and flight testing in 1999. In parallel with this effort, the Mach 5 and Mach 10 vehicle designs are being finalized. Detailed analytical and experimental evaluation of the Mach 7 vehicle at the flight conditions is nearing completion and will provide a database for validation of design methods once flight test data are available.

  4. Job monitoring on DIRAC for Belle II distributed computing

    NASA Astrophysics Data System (ADS)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, in which information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experience and then visualized. As a result, we are able to detect issues effectively. Finally, we discuss future development toward automated log analysis, notification of issues, and disabling of problematic sites.
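    The flagging step of such passive monitoring might look like the sketch below; the record structure is a hypothetical stand-in for the DIRAC database tables.

    ```python
    # Sketch of passive monitoring: aggregate stored job records per site and
    # flag sites whose failure rate exceeds a threshold. Records are invented.
    from collections import Counter

    jobs = [
        {"site": "SITE_A", "status": "Done"}, {"site": "SITE_A", "status": "Failed"},
        {"site": "SITE_B", "status": "Done"}, {"site": "SITE_B", "status": "Done"},
        {"site": "SITE_A", "status": "Failed"},
    ]

    totals, failures = Counter(), Counter()
    for job in jobs:
        totals[job["site"]] += 1
        failures[job["site"]] += job["status"] == "Failed"

    for site in totals:
        rate = failures[site] / totals[site]
        if rate > 0.5:
            print(f"flag {site}: failure rate {rate:.0%}")   # candidate for notification
    ```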

  5. DISCOS- Current Status and Future Developments

    NASA Astrophysics Data System (ADS)

    Flohrer, T.; Lemmens, S.; Bastida Virgili, B.; Krag, H.; Klinkrad, H.; Parrilla, E.; Sanchez, N.; Oliveira, J.; Pina, F.

    2013-08-01

    We present ESA's Database and Information System Characterizing Objects in Space (DISCOS). DISCOS not only plays an essential role in the collision avoidance and re-entry prediction services provided by ESA's Space Debris Office; it also provides input to numerous and very differently scoped engineering activities within ESA and throughout industry. We introduce the central functionalities of DISCOS, present the available reporting capabilities, and describe selected data modelling features. Finally, we revisit the developments of recent years and preview the on-going replacement of the DISCOS web front-end.

  6. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. Conclusions: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
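    The agreement tabulation implied by these results can be sketched as follows, counting for each lesion how many of the four readers marked it; lesion IDs are invented.

    ```python
    # Sketch: count, for each lesion, how many of four readers marked it
    # "nodule >=3 mm". Lesion IDs are hypothetical placeholders.
    from collections import Counter

    marks = {   # reader -> set of lesion IDs marked "nodule >=3 mm"
        "R1": {1, 2, 3, 5}, "R2": {1, 2, 4}, "R3": {1, 3, 4, 5}, "R4": {1, 2, 3, 4},
    }

    votes = Counter(lesion for ids in marks.values() for lesion in ids)
    by_agreement = Counter(votes.values())          # how many lesions got k of 4 marks
    print(votes)          # e.g. lesion 1 marked by all four readers
    print(by_agreement)   # summary in the style of "928 of 2669 marked by all four"
    ```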

  7. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database

    DOE PAGES

    Scofield, Patricia A.; Smith, Linda Lenell; Johnson, David N.

    2017-07-01

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. As a result, this database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.

  8. Phenol-Explorer: an online comprehensive database on polyphenol contents in foods.

    PubMed

    Neveu, V; Perez-Jiménez, J; Vos, F; Crespy, V; du Chaffaut, L; Mennen, L; Knox, C; Eisner, R; Cruz, J; Wishart, D; Scalbert, A

    2010-01-01

    A number of databases on the plant metabolome describe the chemistry and biosynthesis of plant chemicals. However, no such database is specifically focused on foods and more precisely on polyphenols, one of the major classes of phytochemicals. As antioxidants, polyphenols influence human health and may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, some cancers or type 2 diabetes. To determine polyphenol intake in populations and study their association with health, it is essential to have detailed information on their content in foods. However this information is not easily collected due to the variety of their chemical structures and the variability of their content in a given food. Phenol-Explorer is the first comprehensive web-based database on polyphenol content in foods. It contains more than 37,000 original data points collected from 638 scientific articles published in peer-reviewed journals. The quality of these data has been evaluated before they were aggregated to produce final representative mean content values for 502 polyphenols in 452 foods. The web interface allows making various queries on the aggregated data to identify foods containing a given polyphenol or polyphenols present in a given food. For each mean content value, it is possible to trace all original content values and their literature sources. Phenol-Explorer is a major step forward in the development of databases on food constituents and the food metabolome. It should help researchers to better understand the role of phytochemicals in the technical and nutritional quality of food, and food manufacturers to develop tailor-made healthy foods. Database URL: http://www.phenol-explorer.eu.
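    The aggregation behind the mean content values can be illustrated with the sketch below, which pools evaluated data points per food-polyphenol pair; the values are invented examples, not Phenol-Explorer data.

    ```python
    # Sketch of aggregating original data points into representative mean contents.
    import pandas as pd

    points = pd.DataFrame({
        "food":       ["apple", "apple", "apple", "green tea"],
        "polyphenol": ["quercetin", "quercetin", "quercetin", "EGCG"],
        "content_mg_100g": [3.1, 4.4, 2.8, 70.2],
        "source":     ["ref1", "ref2", "ref3", "ref4"],
    })

    means = points.groupby(["food", "polyphenol"])["content_mg_100g"].agg(["mean", "count"])
    print(means)   # aggregated value remains traceable to the original points above
    ```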

  9. Phenol-Explorer: an online comprehensive database on polyphenol contents in foods

    PubMed Central

    Neveu, V.; Perez-Jiménez, J.; Vos, F.; Crespy, V.; du Chaffaut, L.; Mennen, L.; Knox, C.; Eisner, R.; Cruz, J.; Wishart, D.; Scalbert, A.

    2010-01-01

    A number of databases on the plant metabolome describe the chemistry and biosynthesis of plant chemicals. However, no such database is specifically focused on foods and more precisely on polyphenols, one of the major classes of phytochemicals. As antioxidants, polyphenols influence human health and may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, some cancers or type 2 diabetes. To determine polyphenol intake in populations and study their association with health, it is essential to have detailed information on their content in foods. However this information is not easily collected due to the variety of their chemical structures and the variability of their content in a given food. Phenol-Explorer is the first comprehensive web-based database on polyphenol content in foods. It contains more than 37 000 original data points collected from 638 scientific articles published in peer-reviewed journals. The quality of these data has been evaluated before they were aggregated to produce final representative mean content values for 502 polyphenols in 452 foods. The web interface allows making various queries on the aggregated data to identify foods containing a given polyphenol or polyphenols present in a given food. For each mean content value, it is possible to trace all original content values and their literature sources. Phenol-Explorer is a major step forward in the development of databases on food constituents and the food metabolome. It should help researchers to better understand the role of phytochemicals in the technical and nutritional quality of food, and food manufacturers to develop tailor-made healthy foods. Database URL: http://www.phenol-explorer.eu PMID:20428313

  10. Data Mining Research with the LSST

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.; Strauss, M. A.; Tyson, J. A.

    2007-12-01

    The LSST catalog database will exceed 10 petabytes, comprising several hundred attributes for 5 billion galaxies, 10 billion stars, and over 1 billion variable sources (optical variables, transients, or moving objects), extracted from over 20,000 square degrees of deep imaging in 5 passbands with thorough time domain coverage: 1000 visits over the 10-year LSST survey lifetime. The opportunities are enormous for novel scientific discoveries within this rich time-domain ultra-deep multi-band survey database. Data Mining, Machine Learning, and Knowledge Discovery research opportunities with the LSST are now under study, with the potential for new collaborations to develop and contribute to these investigations. We will describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. We also give some illustrative examples of current scientific data mining research in astronomy, and point out where new research is needed. In particular, the data mining research community will need to address several issues in the coming years as we prepare for the LSST data deluge. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; visual data mining algorithms for visual exploration of the data; indexing of multi-attribute multi-dimensional astronomical databases (beyond RA-Dec spatial indexing) for rapid querying of petabyte databases; and more. Finally, we will identify opportunities for synergistic collaboration between the data mining research group and the LSST Data Management and Science Collaboration teams.
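    One agenda item, outlier identification, could be prototyped as in the sketch below, where an isolation forest over a few catalog attributes flags anomalous sources; the data are synthetic.

    ```python
    # Sketch of outlier identification over a few catalog attributes.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(3)
    catalog = rng.normal(size=(10000, 4))            # e.g. colors and variability indices
    catalog[:5] += 8                                 # inject a handful of outliers

    labels = IsolationForest(contamination=0.001, random_state=0).fit_predict(catalog)
    print(np.flatnonzero(labels == -1)[:10])         # indices of flagged anomalies
    ```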

  11. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database.

    PubMed

    Scofield, Patricia A; Smith, Linda L; Johnson, David N

    2017-07-01

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. This database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.

  12. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scofield, Patricia A.; Smith, Linda Lenell; Johnson, David N.

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. As a result, this database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.

  13. 2016 update of the PRIDE database and its related tools

    PubMed Central

    Vizcaíno, Juan Antonio; Csordas, Attila; del-Toro, Noemi; Dianes, José A.; Griss, Johannes; Lavidas, Ilias; Mayer, Gerhard; Perez-Riverol, Yasset; Reisinger, Florian; Ternent, Tobias; Xu, Qing-Wei; Wang, Rui; Hermjakob, Henning

    2016-01-01

    The PRoteomics IDEntifications (PRIDE) database is one of the world-leading data repositories of mass spectrometry (MS)-based proteomics data. Since the beginning of 2014, PRIDE Archive (http://www.ebi.ac.uk/pride/archive/) is the new PRIDE archival system, replacing the original PRIDE database. Here we summarize the developments in PRIDE resources and related tools since the previous update manuscript in the Database Issue in 2013. PRIDE Archive constitutes a complete redevelopment of the original PRIDE, comprising a new storage backend, data submission system and web interface, among other components. PRIDE Archive supports the most-widely used PSI (Proteomics Standards Initiative) data standard formats (mzML and mzIdentML) and implements the data requirements and guidelines of the ProteomeXchange Consortium. The wide adoption of ProteomeXchange within the community has triggered an unprecedented increase in the number of submitted data sets (around 150 data sets per month). We outline some statistics on the current PRIDE Archive data contents. We also report on the status of the PRIDE related stand-alone tools: PRIDE Inspector, PRIDE Converter 2 and the ProteomeXchange submission tool. Finally, we will give a brief update on the resources under development ‘PRIDE Cluster’ and ‘PRIDE Proteomes’, which provide a complementary view and quality-scored information of the peptide and protein identification data available in PRIDE Archive. PMID:26527722

  14. Molecular Quantum Similarity, Chemical Reactivity and Database Screening of 3D Pharmacophores of the Protein Kinases A, B and G from Mycobacterium tuberculosis.

    PubMed

    Morales-Bayuelo, Alejandro

    2017-06-21

    Mycobacterium tuberculosis remains one of the world's most devastating pathogens. For this reason, we developed a study involving 3D pharmacophore searching, selectivity analysis, and database screening for a series of anti-tuberculosis compounds associated with the protein kinases A, B, and G. This theoretical study is expected to shed light on molecular aspects that could contribute to understanding the molecular mechanics behind the interactions of these compounds with anti-tuberculosis activity. Using the Molecular Quantum Similarity field and reactivity descriptors supported by Density Functional Theory, it was possible to quantify the steric and electrostatic effects through the Overlap and Coulomb quantitative convergence (alpha and beta) scales. In addition, an analysis of reactivity indices using global and local descriptors was developed, identifying the binding sites and selectivity of these anti-tuberculosis compounds in the active sites. Finally, the reported pharmacophores for PKn A, B, and G were used to carry out database screening, using a database of anti-tuberculosis drugs from the Kelly Chibale research group (http://www.kellychibaleresearch.uct.ac.za/), to find the compounds with affinity for the specific protein targets associated with PKn A, B, and G. In this regard, this hybrid methodology (Molecular Mechanics/Quantum Chemistry) shows new insights into drug design that may be useful in tuberculosis treatment today.
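    One widely used molecular quantum similarity measure, the Carbó index R_AB = Z_AB / sqrt(Z_AA · Z_BB), can be evaluated as in the sketch below on toy one-dimensional densities; the paper's Overlap and Coulomb measures follow the same normalized-integral idea, but this is an illustration, not the authors' code.

    ```python
    # Sketch of the Carbó similarity index on toy electron-density grids.
    import numpy as np

    x = np.linspace(-5, 5, 2001)
    rho_a = np.exp(-x**2)                # toy densities for molecules A and B
    rho_b = np.exp(-(x - 0.5)**2)

    def z(p, q):                         # overlap-type similarity integral
        return np.trapz(p * q, x)

    carbo = z(rho_a, rho_b) / np.sqrt(z(rho_a, rho_a) * z(rho_b, rho_b))
    print(f"Carbó index = {carbo:.3f}")  # 1.0 means identical densities
    ```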

  15. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information

    PubMed Central

    van Heuven, Walter J. B.; Pitchford, Nicola J.; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, we highlight the need for alternative procedures for calculating syllabic measurements and stress information, as well as for combining several metrics to investigate linguistic properties of the Greek language. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that a proportion higher than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/. PMID:28231303

  16. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information.

    PubMed

    Kyparissiadis, Antonios; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, we highlight the need for alternative procedures for calculating syllabic measurements and stress information, as well as for combining several metrics to investigate linguistic properties of the Greek language. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that a proportion higher than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/.

  17. The Efficiency of Musical Emotions for the Reconciliation of Conceptual Dissonances

    DTIC Science & Technology

    2013-10-24

    Final/Annual/Midterm Report for AOARD Grant 114103 "The efficiency of musical emotions for the reconciliation of conceptual...and will be added to a searchable DoD database. In the present project, the PI developed a theoretical foundation for the evolution of music in...which was experimentally created in 4-year-old children, who obeyed an experimenter's warning not to play with a desired toy. Without exposure to music

  18. Hierarchically-Driven Approach for Quantifying Fatigue Crack Initiation and Short Crack Growth Behavior in Aerospace Materials

    DTIC Science & Technology

    2016-08-31

    crack initiation and SCG mechanisms (initiation and growth versus resistance). 2. Final summary Here, we present a hierarchical form of multiscale...prismatic faults in α-Ti: A combined quantum mechanics/molecular mechanics study 2. Nano-indentation and slip transfer (critical in understanding crack...initiation) 3. An extended finite element framework (XFEM) to study SCG mechanisms 4. Atomistic methods to develop a grain and twin boundaries database

  19. Sequence tagging reveals unexpected modifications in toxicoproteomics

    PubMed Central

    Dasari, Surendra; Chambers, Matthew C.; Codreanu, Simona G.; Liebler, Daniel C.; Collins, Ben C.; Pennington, Stephen R.; Gallagher, William M.; Tabb, David L.

    2010-01-01

    Toxicoproteomic samples are rich in posttranslational modifications (PTMs) of proteins. Identifying these modifications via standard database searching can incur significant performance penalties. Here we describe the latest developments in TagRecon, an algorithm that leverages inferred sequence tags to identify modified peptides in toxicoproteomic data sets. TagRecon identifies known modifications more effectively than the MyriMatch database search engine. TagRecon outperformed state-of-the-art software in recognizing unanticipated modifications from LTQ, Orbitrap, and QTOF data sets. We developed user-friendly software for detecting persistent mass shifts in samples. We follow a three-step strategy for detecting unanticipated PTMs in samples. First, we identify the proteins present in the sample with a standard database search. Next, identified proteins are interrogated for unexpected PTMs with a sequence tag-based search. Finally, additional evidence is gathered for the detected mass shifts with a refinement search. Application of this technology to toxicoproteomic data sets revealed unintended cross-reactions between proteins and sample processing reagents. Twenty-five proteins in rat liver showed signs of oxidative stress when exposed to potentially toxic drugs. These results demonstrate the value of mining toxicoproteomic data sets for modifications. PMID:21214251
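
    The notion of a "persistent mass shift" in the third step can be illustrated with a toy sketch (this is not TagRecon's actual code): bin the observed peptide mass shifts and keep only those that recur.

        from collections import Counter

        def persistent_mass_shifts(observed_shifts, precision=1, min_count=3):
            """Bin observed peptide mass shifts (in Da) and keep the ones that
            recur often enough to suggest a systematic modification."""
            binned = Counter(round(s, precision) for s in observed_shifts)
            return {shift: n for shift, n in binned.items() if n >= min_count}

        # Toy data: a shift of about +16 Da (oxidation) recurs; the rest is noise.
        shifts = [15.995, 15.994, 15.996, 42.011, -0.984]
        print(persistent_mass_shifts(shifts))  # -> {16.0: 3}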

  20. Structure elucidation of organic compounds aided by the computer program system SCANNET

    NASA Astrophysics Data System (ADS)

    Guzowska-Swider, B.; Hippe, Z. S.

    1992-12-01

    Recognition of chemical structure is a very important problem currently solved by molecular spectroscopy, particularly IR, UV, NMR and Raman spectroscopy, and mass spectrometry. Nowadays, solution of the problem is frequently aided by the computer. SCANNET is a computer program system for structure elucidation of organic compounds, developed by our group. The structure recognition of an unknown substance is made by comparing its spectrum with successive reference spectra of standard compounds, i.e. chemical compounds of known chemical structure, stored in a spectral database. The SCANNET system consists of six different spectral databases for the following analytical methods: IR, UV, 13C-NMR, 1H-NMR and Raman spectroscopy, and mass spectrometry. To elucidate a structure, a chemist can use one of these spectral methods or a combination of them and search the appropriate databases. As a result of searching each spectral database, the user obtains a list of chemical substances whose spectra are identical and/or similar to the spectrum input into the computer. The final information obtained from searching the spectral databases is a list of chemical substances whose spectra, for each type of spectroscopy examined, are identical or similar to those of the unknown compound.
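
    The final cross-database answer described above amounts to intersecting the per-technique hit lists. A minimal sketch, with invented compound names and no real spectral matching:

        def combined_candidates(hit_lists):
            """Return the compounds found by every spectral search performed,
            mirroring the cross-database answer described above."""
            sets = [set(hits) for hits in hit_lists if hits]
            return set.intersection(*sets) if sets else set()

        # Toy hit lists from three hypothetical database searches.
        ir_hits = ["toluene", "xylene", "benzene"]
        nmr_hits = ["toluene", "benzene"]
        ms_hits = ["benzene", "phenol"]
        print(combined_candidates([ir_hits, nmr_hits, ms_hits]))  # -> {'benzene'}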

  1. The TREAT-NMD DMD Global Database: Analysis of More than 7,000 Duchenne Muscular Dystrophy Mutations

    PubMed Central

    Bladen, Catherine L; Salgado, David; Monges, Soledad; Foncuberta, Maria E; Kekou, Kyriaki; Kosma, Konstantina; Dawkins, Hugh; Lamont, Leanne; Roy, Anna J; Chamova, Teodora; Guergueltcheva, Velina; Chan, Sophelia; Korngut, Lawrence; Campbell, Craig; Dai, Yi; Wang, Jen; Barišić, Nina; Brabec, Petr; Lahdetie, Jaana; Walter, Maggie C; Schreiber-Katz, Olivia; Karcagi, Veronika; Garami, Marta; Viswanathan, Venkatarman; Bayat, Farhad; Buccella, Filippo; Kimura, En; Koeks, Zaïda; van den Bergen, Janneke C; Rodrigues, Miriam; Roxburgh, Richard; Lusakowska, Anna; Kostera-Pruszczyk, Anna; Zimowski, Janusz; Santos, Rosário; Neagu, Elena; Artemieva, Svetlana; Rasic, Vedrana Milic; Vojinovic, Dina; Posada, Manuel; Bloetzer, Clemens; Jeannet, Pierre-Yves; Joncourt, Franziska; Díaz-Manera, Jordi; Gallardo, Eduard; Karaduman, A Ayşe; Topaloğlu, Haluk; El Sherif, Rasha; Stringer, Angela; Shatillo, Andriy V; Martin, Ann S; Peay, Holly L; Bellgard, Matthew I; Kirschner, Jan; Flanigan, Kevin M; Straub, Volker; Bushby, Kate; Verschuuren, Jan; Aartsma-Rus, Annemieke; Béroud, Christophe; Lochmüller, Hanns

    2015-01-01

    Analyzing the type and frequency of patient-specific mutations that give rise to Duchenne muscular dystrophy (DMD) is an invaluable tool for diagnostics, basic scientific research, trial planning, and improved clinical care. Locus-specific databases allow for the collection, organization, storage, and analysis of genetic variants of disease. Here, we describe the development and analysis of the TREAT-NMD DMD Global database (http://umd.be/TREAT_DMD/). We analyzed genetic data for 7,149 DMD mutations held within the database. A total of 5,682 large mutations were observed (80% of total mutations), of which 4,894 (86%) were deletions (1 exon or larger) and 784 (14%) were duplications (1 exon or larger). There were 1,445 small mutations (smaller than 1 exon, 20% of all mutations), of which 358 (25%) were small deletions, 132 (9%) were small insertions, and 199 (14%) affected the splice sites. Point mutations totalled 756 (52% of small mutations), with 726 (50%) nonsense mutations and 30 (2%) missense mutations. Finally, 22 (0.3%) mid-intronic mutations were observed. In addition, mutations were identified within the database that would potentially benefit from novel genetic therapies for DMD, including stop codon read-through therapies (10% of total mutations) and exon skipping therapy (80% of deletions and 55% of total mutations). PMID:25604253
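
    The headline percentages can be re-derived from the quoted counts with a few lines of arithmetic (note that 5,682 large + 1,445 small + 22 mid-intronic = 7,149):

        # Re-deriving the quoted percentages from the quoted counts.
        total, large, small, mid_intronic = 7149, 5682, 1445, 22
        assert large + small + mid_intronic == total
        print(f"large mutations:           {large / total:.0%}")   # about 80%
        print(f"deletions among large:     {4894 / large:.0%}")    # 86%
        print(f"duplications among large:  {784 / large:.0%}")     # 14%
        print(f"small mutations:           {small / total:.0%}")   # 20%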

  2. Incremental Aerodynamic Coefficient Database for the USA2

    NASA Technical Reports Server (NTRS)

    Richardson, Annie Catherine

    2016-01-01

    From March through May of 2016, a wind tunnel test was conducted by the Aerosciences Branch (EV33) to visually study the unsteady aerodynamic behavior over multiple transition geometries for the Universal Stage Adapter 2 (USA2) in the MSFC Aerodynamic Research Facility's Trisonic Wind Tunnel (TWT). The purpose of the test was to make a qualitative comparison of the transonic flow field in order to provide a recommended minimum transition radius for manufacturing. Additionally, six-degree-of-freedom force and moment data for each configuration tested were acquired in order to determine the geometric effects on the longitudinal aerodynamic coefficients (Normal Force, Axial Force, and Pitching Moment). In order to make a quantitative comparison of the aerodynamic effects of the USA2 transition geometry, the aerodynamic coefficient data collected during the test were parsed and incorporated into a database for each USA2 configuration tested. An incremental aerodynamic coefficient database was then developed using the generated databases for each USA2 geometry as a function of Mach number and angle of attack. The final USA2 coefficient increments will be applied to the aerodynamic coefficients of the baseline geometry to adjust the Space Launch System (SLS) integrated launch vehicle force and moment database based on the transition geometry of the USA2.
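
    The "baseline plus increment" scheme can be sketched as table interpolation over Mach number and angle of attack. The grid and increment values below are invented placeholders, not test data:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical increment table: delta normal-force coefficient on a
        # (Mach, alpha) grid for one transition geometry; values are made up.
        mach = np.array([0.8, 0.9, 1.1, 1.5])
        alpha = np.array([-4.0, 0.0, 4.0, 8.0])  # degrees
        d_cn = np.random.default_rng(0).normal(0.0, 0.01, (4, 4))
        increment = RegularGridInterpolator((mach, alpha), d_cn)

        def adjusted_cn(cn_baseline, m, a):
            """Baseline coefficient plus the interpolated geometry increment."""
            return cn_baseline + float(increment([[m, a]])[0])

        print(adjusted_cn(0.45, 0.95, 2.0))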

  3. Database of Renewable Energy and Energy Efficiency Incentives and Policies Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lips, Brian

    The Database of State Incentives for Renewables and Efficiency (DSIRE) is an online resource that provides summaries of all financial incentives and regulatory policies that support the use of renewable energy and energy efficiency across all 50 states. This project involved making enhancements to the database and website, and the ongoing research and maintenance of the policy and incentive summaries.

  4. User's manual for the national water information system of the U.S. Geological Survey: Ground-water site-inventory system

    USGS Publications Warehouse

    ,

    2004-01-01

    The Ground-Water Site-Inventory (GWSI) System is a ground-water data storage and retrieval system that is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey (USGS). The NWIS is a distributed water database in which data can be processed over a network of workstations and file servers at USGS offices throughout the United States. This system comprises the GWSI, the Automated Data Processing System (ADAPS), the Water-Quality System (QWDATA), and the Site-Specific Water-Use Data System (SWUDS). The GWSI System provides for entering new sites and updating existing sites within the local database. In addition, the GWSI provides for retrieving and displaying ground-water and sitefile data stored in the local database. Finally, the GWSI provides for routine maintenance of the local and national data records. This manual contains instructions for users of the GWSI and discusses the general operating procedures for the programs found within the GWSI Main Menu.

  5. User's Manual for the National Water Information System of the U.S. Geological Survey: Ground-water site-inventory system

    USGS Publications Warehouse

    ,

    2005-01-01

    The Ground-Water Site-Inventory (GWSI) System is a ground-water data storage and retrieval system that is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey (USGS). The NWIS is a distributed water database in which data can be processed over a network of workstations and file servers at USGS offices throughout the United States. This system comprises the GWSI, the Automated Data Processing System (ADAPS), the Water-Quality System (QWDATA), and the Site-Specific Water-Use Data System (SWUDS). The GWSI System provides for entering new sites and updating existing sites within the local database. In addition, the GWSI provides for retrieving and displaying groundwater and Sitefile data stored in the local database. Finally, the GWSI provides for routine maintenance of the local and national data records. This manual contains instructions for users of the GWSI and discusses the general operating procedures for the programs found within the GWSI Main Menu.

  6. Thermodynamic assessments and inter-relationships between systems involving Al, Am, Ga, Pu, and U

    NASA Astrophysics Data System (ADS)

    Perron, A.; Turchi, P. E. A.; Landa, A.; Oudot, B.; Ravat, B.; Delaunay, F.

    2016-12-01

    A newly developed self-consistent CALPHAD thermodynamic database involving Al, Am, Ga, Pu, and U is presented. A first optimization of the sparsely characterized Am-Al and completely unknown Am-Ga phase diagrams is proposed. To this end, phase diagram features such as crystal structures, stoichiometric compounds, solubility limits, and melting temperatures have been studied along the U-Al → Pu-Al → Am-Al and U-Ga → Pu-Ga → Am-Ga series, and the thermodynamic assessments involving Al and Ga alloying are compared. In addition, two distinct optimizations of the Pu-Al phase diagram are proposed to account for the controversy over the low-temperature, Pu-rich region. The previously assessed thermodynamics of the other binary systems (Am-Pu, Am-U, Pu-U, and Al-Ga) is also included in the database and is briefly described in the present work. Finally, predictions on the phase stability of ternary and quaternary systems of interest are reported to check the consistency of the database.

  7. New database for improving virtual system “body-dress”

    NASA Astrophysics Data System (ADS)

    Yan, J. Q.; Zhang, S. C.; Kuzmichev, V. E.; Adolphe, D. C.

    2017-10-01

    The aim of this exploration is to develop a new database of solid algorithms and relations between dress fit, fabric mechanical properties, and pattern block construction, for improving the realism of the virtual system "body-dress". In virtual simulation, the system "body-clothing" sometimes shows results distinct from reality, especially when important changes in pattern blocks and fabrics are involved. In this research, to enhance the simulation process, diverse fit parameters were proposed: bottom height of the dress, angle of front center contours, and air volume and its distribution between dress and dummy. Measurements were made and optimized using a ruler, a camera, a 3D body scanner with image processing software, and 3D modeling software. In the meantime, pattern block indexes were measured and fabric properties were tested by KES. Finally, the correlations and linear regression equations between indexes of fabric properties, pattern blocks, and fit parameters were investigated. In this manner, the new database can be extended into programming modules of virtual design for more realistic results.
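
    The regression step can be illustrated with ordinary least squares. All numbers below are invented; the real work relates KES fabric measurements and pattern-block indexes to fit parameters such as air volume:

        import numpy as np

        # Hypothetical rows: [fabric bending rigidity (KES), bust ease (cm)].
        X = np.array([[0.05, 4.0], [0.08, 6.0], [0.12, 8.0], [0.15, 10.0]])
        # Fit parameter per dress, e.g. air volume between dress and dummy (dm^3).
        y = np.array([6.1, 8.0, 10.2, 12.1])

        # Ordinary least squares with an intercept column gives the kind of
        # linear regression equation between properties and fit parameters.
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("intercept and coefficients:", coef.round(3))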

  8. Restoration, Enhancement, and Distribution of the ATLAS-1 Imaging Spectrometric Observatory (ISO) Space Science Data Set

    NASA Technical Reports Server (NTRS)

    Germany, G. A.

    2001-01-01

    The primary goal of the funded task was to restore and distribute the ISO ATLAS-1 space science data set with enhanced software and database utilities. The first year was primarily dedicated to physically transferring the data from its original format to its initial CD archival format. The remainder of the first year was devoted to the verification of the restored data set and database. The second year was devoted to the enhancement of the data set, especially the development of IDL utilities and redesign of the database and search interface as needed. This period was also devoted to distribution of the rescued data set, principally the creation and maintenance of a web interface to the data set. The final six months were dedicated to working with NSSDC to create a permanent, offsite archive of the data set and supporting utilities. This time was also used to resolve last-minute quality and design issues.

  9. Exploring Human Cognition Using Large Image Databases.

    PubMed

    Griffiths, Thomas L; Abbott, Joshua T; Hsu, Anne S

    2016-07-01

    Most cognitive psychology experiments evaluate models of human cognition using a relatively small, well-controlled set of stimuli. This approach stands in contrast to current work in neuroscience, perception, and computer vision, which have begun to focus on using large databases of natural images. We argue that natural images provide a powerful tool for characterizing the statistical environment in which people operate, for better evaluating psychological theories, and for bringing the insights of cognitive science closer to real applications. We discuss how some of the challenges of using natural images as stimuli in experiments can be addressed through increased sample sizes, using representations from computer vision, and developing new experimental methods. Finally, we illustrate these points by summarizing recent work using large image databases to explore questions about human cognition in four different domains: modeling subjective randomness, defining a quantitative measure of representativeness, identifying prior knowledge used in word learning, and determining the structure of natural categories. Copyright © 2016 Cognitive Science Society, Inc.

  10. The diffusion tensor imaging (DTI) component of the NIH MRI study of normal brain development (PedsDTI).

    PubMed

    Walker, Lindsay; Chang, Lin-Ching; Nayak, Amritha; Irfanoglu, M Okan; Botteron, Kelly N; McCracken, James; McKinstry, Robert C; Rivkin, Michael J; Wang, Dah-Jyuu; Rumsey, Judith; Pierpaoli, Carlo

    2016-01-01

    The NIH MRI Study of normal brain development sought to characterize typical brain development in a population of infants, toddlers, children and adolescents/young adults, covering the socio-economic and ethnic diversity of the population of the United States. The study began in 1999, with data collection commencing in 2001 and concluding in 2007. The study was designed with the final goal of providing a controlled-access database, open to qualified researchers and clinicians, which could serve as a powerful tool for elucidating typical brain development and identifying deviations associated with brain-based disorders and diseases, and as a resource for developing computational methods and image processing tools. This paper focuses on the DTI component of the NIH MRI study of normal brain development. In this work, we describe the DTI data acquisition protocols, data processing steps, quality assessment procedures, and data included in the database, along with database access requirements. For more details, visit http://www.pediatricmri.nih.gov. This longitudinal DTI dataset includes raw and processed diffusion data from 498 low-resolution (3 mm) DTI datasets from 274 unique subjects, and 193 high-resolution (2.5 mm) DTI datasets from 152 unique subjects. Subjects range in age from 10 days (from date of birth) through 22 years. Additionally, a set of age-specific DTI templates is included. This forms one component of the larger NIH MRI study of normal brain development, which also includes T1-, T2-, proton density-weighted, and proton magnetic resonance spectroscopy (MRS) imaging data, and demographic, clinical and behavioral data. Published by Elsevier Inc.

  11. DNA Commission of the International Society for Forensic Genetics: revised and extended guidelines for mitochondrial DNA typing.

    PubMed

    Parson, W; Gusmão, L; Hares, D R; Irwin, J A; Mayr, W R; Morling, N; Pokorak, E; Prinz, M; Salas, A; Schneider, P M; Parsons, T J

    2014-11-01

    The DNA Commission of the International Society of Forensic Genetics (ISFG) regularly publishes guidelines and recommendations concerning the application of DNA polymorphisms to the question of human identification. Previous recommendations published in 2000 addressed the analysis and interpretation of mitochondrial DNA (mtDNA) in forensic casework. While the foundations set forth in the earlier recommendations still apply, new approaches to the quality control, alignment and nomenclature of mitochondrial sequences, as well as the establishment of mtDNA reference population databases, have been developed. Here, we describe these developments and discuss their application to both mtDNA casework and mtDNA reference population databasing applications. While the generation of mtDNA for forensic casework has always been guided by specific standards, it is now well-established that data of the same quality are required for the mtDNA reference population data used to assess the statistical weight of the evidence. As a result, we introduce guidelines regarding sequence generation, as well as quality control measures based on the known worldwide mtDNA phylogeny, that can be applied to ensure the highest quality population data possible. For both casework and reference population databasing applications, the alignment and nomenclature of haplotypes is revised here, and phylogenetic alignment is proffered as the acceptable standard. In addition, the interpretation of heteroplasmy in the forensic context is updated, and the utility of alignment-free database searches for unbiased probability estimates is highlighted. Finally, we discuss statistical issues and define minimal standards for mtDNA database searches. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
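
    The database-search step behind weight-of-evidence estimates can be illustrated with a toy count. This is a simplification of real casework practice; the "rule of three" upper bound for zero observations is one common statistical convention, not a prescription from the guidelines:

        def haplotype_frequency(query, database):
            """Point estimate of a haplotype's frequency in a reference
            population database; with zero matches, report the 'rule of
            three' 95% upper bound instead."""
            matches = sum(1 for h in database if h == query)
            n = len(database)
            return matches / n if matches else 3 / n

        # Haplotypes coded as frozensets of differences from a reference.
        db = [frozenset({"263G", "315.1C"}), frozenset({"263G", "16519C"}),
              frozenset({"263G", "315.1C"})]
        print(haplotype_frequency(frozenset({"263G", "315.1C"}), db))  # -> 2/3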

  12. EPSCoR Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holdmann, Gwen

    2016-12-20

    Alaska is considered a world leader in renewable energy and microgrid technologies. Our workplan started as an analysis of existing wind-diesel systems, many of which were not performing as designed. We aimed to analyze and understand the performance of existing wind-diesel systems, to establish a knowledge baseline from which to work towards improvement and maximizing renewable energy utilization. To accomplish this, we worked with the Alaska Energy Authority to develop a comprehensive database of wind system experience, including underlying climatic and socioeconomic characteristics, actual operating data, projected vs. actual capital and O&M costs, and a catalogue of catastrophic anomalies. This database formed the foundation for the rest of the research program, with the overarching goal of delivering low-cost, reliable, and sustainable energy to diesel microgrids.

  13. ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisien, Lia

    2016-01-31

    This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.

  14. ASM Based Synthesis of Handwritten Arabic Text Pages

    PubMed Central

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28,046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever sufficient naturally ground-truthed data are unavailable. PMID:26295059

  15. The Tropical Biominer Project: mining old sources for new drugs.

    PubMed

    Artiguenave, François; Lins, André; Maciel, Wesley Dias; Junior, Antonio Celso Caldeira; Nacif-Coelho, Carla; de Souza Linhares, Maria Margarida Ribeiro; de Oliveira, Guilherme Correa; Barbosa, Luis Humberto Rezende; Lopes, Júlio César Dias; Junior, Claudionor Nunes Coelho

    2005-01-01

    The Tropical Biominer Project is a recent initiative of the Federal University of Minas Gerais (UFMG) and the Oswaldo Cruz Foundation, with the participation of the Biominas Foundation (Belo Horizonte, Minas Gerais, Brazil) and the start-up Homologix. The main objective of the project is to build a new resource for chemogenomics research on chemical compounds, with a strong emphasis on natural molecules. Adopted technologies include the search for information in structured, semi-structured, and non-structured documents (the last two from the web) and data-mining tools to gather information from different sources. The database supports the development of applications to find new potential treatments for parasitic infections using virtual screening tools. We present here the midpoint of the project: the conception and implementation of the Tropical Biominer Database. This is a federated database designed to store data from different resources. Connected to the database, a web crawler gathers information from distinct, patented web sites and stores it after automatic classification using data-mining tools. Finally, we demonstrate the interest of the approach by formulating new hypotheses on specific targets of a natural compound, violacein, using inferences from a virtual screening procedure.

  16. ASM Based Synthesis of Handwritten Arabic Text Pages.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28,046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever sufficient naturally ground-truthed data are unavailable.
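
    The B-spline smoothing step can be illustrated with SciPy's parametric spline routines; the stroke coordinates and smoothing factor below are invented, not the paper's settings:

        import numpy as np
        from scipy.interpolate import splev, splprep

        # A jagged, made-up pen stroke (points of a composed glyph outline).
        x = np.array([0.0, 1.0, 1.5, 2.5, 3.0, 4.0])
        y = np.array([0.0, 0.8, 0.2, 1.0, 0.1, 0.6])

        # Fit a smoothing parametric B-spline and resample it densely.
        tck, u = splprep([x, y], s=0.05)
        xs, ys = splev(np.linspace(0.0, 1.0, 100), tck)
        print(len(xs), round(float(xs[0]), 3), round(float(ys[0]), 3))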

  17. Terminological aspects of data elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.

    1991-01-01

    The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts, followed by the specification of other needed data element concept attributes. The attributes and values of a data element concept remain associated with it from its birth as a concept to a generic data element that serves as a template for final application. Terminology is, therefore, centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database design and consequently requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered, and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schemata showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.
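
    The four terminological components listed above map naturally onto a small record structure. A minimal sketch with illustrative field names (not a standard schema):

        from dataclasses import dataclass, field

        @dataclass
        class DataElementConcept:
            """Toy record for the terminological components named above;
            the field names are illustrative, not a standard schema."""
            name: str
            definition: str
            hierarchical_type: str          # e.g. generic vs. specific concept
            functional_dependencies: list = field(default_factory=list)
            conceptual_schema: str = ""     # reference to a concept diagram

        elem = DataElementConcept(
            name="employee-birth-date",
            definition="Date on which an employee was born.",
            hierarchical_type="specific",
            functional_dependencies=["employee-id -> employee-birth-date"],
        )
        print(elem.name, elem.functional_dependencies)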

  18. Improved Dust Forecast Products for Southwest Asia Forecasters through Dust Source Database Advancements

    NASA Astrophysics Data System (ADS)

    Brooks, G. R.

    2011-12-01

    Dust storm forecasting is a critical part of military theater operations in Afghanistan and Iraq as well as other strategic areas of the globe. The Air Force Weather Agency (AFWA) has been using the Dust Transport Application (DTA) as a forecasting tool since 2001. Initially developed by The Johns Hopkins University Applied Physics Laboratory (JHUAPL), its output products include dust concentration and reduction of visibility due to dust. The performance of the products depends on several factors, including the underlying dust source database, treatment of soil moisture, parameterization of dust processes, and validity of the input atmospheric model data. Over many years of analysis, seasonal dust forecast biases of the DTA have been observed and documented. As these products are unique and indispensable for U.S. and NATO forces, amendments were required to provide the best forecasts possible. One of the quickest ways to scientifically address the dust concentration biases noted over time was to analyze the weaknesses in, and adjust, the dust source database. Dust source database strengths and weaknesses, the satellite analysis and adjustment process, and tests which confirmed the resulting improvements in the final dust concentration and visibility products will be shown.

  19. DESPIC: Detecting Early Signatures of Persuasion in Information Cascades

    DTIC Science & Technology

    2015-08-27

    ...over NoSQL Databases. Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014), Chicago, IL, USA, 26-MAY-14. ...distributed NoSQL databases including HBase and Riak, we finalized the requirements of the optimal computational architecture to support our framework

  20. Verification of the databases EXFOR and ENDF

    NASA Astrophysics Data System (ADS)

    Berton, Gottfried; Damart, Guillaume; Cabellos, Oscar; Beauzamy, Bernard; Soppera, Nicolas; Bossant, Manuel

    2017-09-01

    The objective of this work is the verification of large experimental (EXFOR) and evaluated nuclear reaction databases (JEFF, ENDF, JENDL, TENDL…). The work is applied to neutron reactions in EXFOR data, including threshold reactions, isomeric transitions, angular distributions, and data in the resonance region of both isotopes and natural elements. Finally, a comparison of the resonance integrals compiled in the EXFOR database with those derived from the evaluated libraries is also performed.
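
    The resonance-integral comparison can be sketched as a relative-difference check. The values below are illustrative (the Co-59 entry is deliberately discrepant), not actual EXFOR or library data:

        def flag_discrepancies(exfor, evaluated, tolerance=0.10):
            """Flag reactions whose EXFOR resonance integral differs from the
            library-derived value by more than the relative tolerance."""
            flags = {}
            for reaction, ri_exfor in exfor.items():
                ri_eval = evaluated.get(reaction)
                if ri_eval:
                    rel = abs(ri_exfor - ri_eval) / ri_eval
                    if rel > tolerance:
                        flags[reaction] = round(rel, 3)
            return flags

        exfor_ri = {"Au-197(n,g)": 1550.0, "Co-59(n,g)": 92.0}
        evaluated_ri = {"Au-197(n,g)": 1571.0, "Co-59(n,g)": 75.4}
        print(flag_discrepancies(exfor_ri, evaluated_ri))  # flags Co-59(n,g)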

  1. CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.

    PubMed

    Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-03-14

    Quantitative structure-property relationship (QSPR) studies of melting point (MP) and boiling point (BP) for per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: (a) random selection on response value, and (b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors, and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, comprising 15 MP and 25 BP data points, respectively. This database contains only long-chain perfluoroalkylated chemicals, which are particularly monitored by regulatory agencies such as US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, and a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unknown, were predicted. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
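
    The random split-and-validate workflow can be sketched in a few lines; the descriptors and boiling points below are synthetic stand-ins for the real Dragon descriptors and PhysProp data:

        import numpy as np

        rng = np.random.default_rng(42)

        # Synthetic descriptor matrix (two descriptors) and boiling points
        # for ten hypothetical PFCs.
        X = rng.uniform(0.0, 1.0, (10, 2))
        bp = 80 + 60 * X[:, 0] + 25 * X[:, 1] + rng.normal(0.0, 2.0, 10)

        # Random training/prediction split, one of the two schemes above.
        idx = rng.permutation(10)
        train, test = idx[:7], idx[7:]

        # Fit a linear model on the training set, verify on the prediction set.
        A = np.column_stack([np.ones(len(train)), X[train]])
        coef, *_ = np.linalg.lstsq(A, bp[train], rcond=None)
        pred = np.column_stack([np.ones(len(test)), X[test]]) @ coef
        print("prediction-set RMSE:", float(np.sqrt(np.mean((pred - bp[test]) ** 2))))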

  2. A new method for recognizing quadric surfaces from range data and its application to telerobotics and automation, final phase

    NASA Technical Reports Server (NTRS)

    Mielke, Roland; Dcunha, Ivan; Alvertos, Nicolas

    1994-01-01

    In the final phase of the proposed research, a complete top-down three-dimensional object recognition scheme was developed. The three-dimensional objects included spheres, cones, cylinders, ellipsoids, paraboloids, and hyperboloids. Utilizing a newly developed blob determination technique, a given range scene with several non-cluttered quadric surfaces is segmented. Next, using the alignment scheme developed earlier (in phase 1), each of the segmented objects is aligned in a desired coordinate system. For each of the quadric surfaces, a set of distinct features (curves) is obtained based upon their intersections with certain pre-determined planes. A database with entities such as the equations of the planes and angular bounds of these planes has been created for each of the quadric surfaces. Real range data of spheres, cones, cylinders, and parallelepipeds were utilized for the recognition process. The developed algorithm gave excellent results for the real data as well as for several sets of simulated range data.
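
    A coarse quadric classification can be read off the eigenvalue signs of the quadratic-form matrix. This is the standard textbook criterion, shown for illustration only; the paper's algorithm instead works on segmented range data and plane-intersection curves:

        import numpy as np

        def classify_quadric(A, tol=1e-9):
            """Coarse classification of x^T A x + b^T x + c = 0 from the
            eigenvalue signs of the quadratic-form matrix A."""
            w = np.linalg.eigvalsh(A)
            pos, neg = int(np.sum(w > tol)), int(np.sum(w < -tol))
            zero = 3 - pos - neg
            if zero == 0:
                return "ellipsoid" if pos in (0, 3) else "cone or hyperboloid"
            if zero == 1:
                return "cylinder or paraboloid"
            return "degenerate (e.g. parallel planes)"

        print(classify_quadric(np.eye(3)))                  # sphere -> ellipsoid
        print(classify_quadric(np.diag([1.0, 1.0, -1.0])))  # cone or hyperboloid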

  3. An integrated database with system optimization and design features

    NASA Technical Reports Server (NTRS)

    Arabyan, A.; Nikravesh, P. E.; Vincent, T. L.

    1992-01-01

    A customized, mission-specific relational database package was developed to allow researchers working on the Mars oxygen manufacturing plant to enter physical description, engineering, and connectivity data through a uniform, graphical interface and to store the data in formats compatible with other software also developed as part of the project. These latter components include an optimization program to maximize or minimize various criteria as the system evolves into its final design; programs to simulate the behavior of various parts of the plant in Martian conditions; an animation program which, in different modes, provides visual feedback to designers and researchers about the location of and temperature distribution among components as well as heat, mass, and data flow through the plant as it operates in different scenarios; and a control program to investigate the stability and response of the system under different disturbance conditions. All components of the system are interconnected so that changes entered through one component are reflected in the others.

  4. BanTeC: a software tool for management of corneal transplantation.

    PubMed

    López-Alvarez, P; Caballero, F; Trias, J; Cortés, U; López-Navidad, A

    2005-11-01

    Until recently, all cornea information at our tissue bank was managed manually; no specific database or computer tool had been implemented to provide electronic versions of documents and medical reports. The main objective of the BanTeC project was therefore to create a computerized system to integrate and classify all the information and documents used in the center, in order to facilitate management of retrieved and transplanted corneal tissues. We used the Windows platform to develop the project. Microsoft Access and Microsoft Jet Engine were used at the database level, and Data Access Objects was the chosen data access technology. In short, the BanTeC software seeks to computerize the tissue bank. All the initial stages of development have now been completed, from specification of needs, program design, and implementation of the software components to the total integration of the final result in the real production environment. BanTeC will allow the generation of statistical reports for analysis to improve our performance.

  5. The Cambridge Structural Database in retrospect and prospect.

    PubMed

    Groom, Colin R; Allen, Frank H

    2014-01-13

    The Cambridge Crystallographic Data Centre (CCDC) was established in 1965 to record numerical, chemical and bibliographic data relating to published organic and metal-organic crystal structures. The Cambridge Structural Database (CSD) now stores data for nearly 700,000 structures and is a comprehensive and fully retrospective historical archive of small-molecule crystallography. Nearly 40,000 new structures are added each year. As X-ray crystallography celebrates its centenary as a subject, and the CCDC approaches its own 50th year, this article traces the origins of the CCDC as a publicly funded organization and its onward development into a self-financing charitable institution. Principally, however, we describe the growth of the CSD and its extensive associated software system, and summarize its impact and value as a basis for research in structural chemistry, materials science and the life sciences, including drug discovery and drug development. Finally, the article considers the CCDC's funding model in relation to open access and open data paradigms. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Fuzzy Clustering Applied to ROI Detection in Helical Thoracic CT Scans with a New Proposal and Variants

    PubMed Central

    Castro, Alfonso; Boveda, Carmen; Arcay, Bernardino; Sanjurjo, Pedro

    2016-01-01

    The detection of pulmonary nodules is one of the most studied problems in the field of medical image analysis, due to the great difficulty of detecting such nodules early and to their social impact. The traditional approach involves the development of a multistage CAD system capable of informing the radiologist of the presence or absence of nodules. One stage in such systems is the detection of ROI (regions of interest) that may be nodules, in order to reduce the space of the problem. This paper evaluates fuzzy clustering algorithms that employ different classification strategies to achieve this goal. After characterising these algorithms, the authors propose a new algorithm and different variations to improve the results obtained initially. Finally, it is shown that the most recent developments in fuzzy clustering are able to detect regions that may be nodules in CT studies. The algorithms were evaluated using helical thoracic CT scans obtained from the database of the LIDC (Lung Image Database Consortium). PMID:27517049
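
    A minimal fuzzy c-means on one-dimensional intensities illustrates the family of algorithms being evaluated; the paper's own proposal adds refinements not reproduced in this sketch:

        import numpy as np

        def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
            """Minimal fuzzy c-means on 1-D data (e.g. voxel intensities)."""
            rng = np.random.default_rng(seed)
            u = rng.dirichlet(np.ones(c), size=len(x))        # memberships
            for _ in range(iters):
                um = u ** m
                centers = um.T @ x / um.sum(axis=0)           # weighted means
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                inv = d ** (-2.0 / (m - 1.0))
                u = inv / inv.sum(axis=1, keepdims=True)      # update memberships
            return centers, u

        # Toy intensities: dark background and a bright nodule-like cluster.
        x = np.array([10.0, 12.0, 11.0, 200.0, 205.0, 198.0])
        centers, u = fuzzy_c_means(x)
        print(np.sort(centers).round(1), u.round(2))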

  7. Web interfaces to relational databases

    NASA Technical Reports Server (NTRS)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC: a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms (SUN, Intel, and Motorola processors) and their most common operating systems: UNIX, Windows NT, Windows for Workgroups, and Macintosh. The SPARC 20 runs SUN Solaris 2.4; an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory; a Pentium PC runs Windows for Workgroups; two Intel 386 machines run Windows 3.1; and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  8. t4 Workshop Report

    PubMed Central

    Silbergeld, Ellen K.; Contreras, Elizabeth Q.; Hartung, Thomas; Hirsch, Cordula; Hogberg, Helena; Jachak, Ashish C.; Jordan, William; Landsiedel, Robert; Morris, Jeffery; Patri, Anil; Pounds, Joel G.; de Vizcaya Ruiz, Andrea; Shvedova, Anna; Tanguay, Robert; Tatarazako, Norihisa; van Vliet, Erwin; Walker, Nigel J.; Wiesner, Mark; Wilcox, Neil; Zurlo, Joanne

    2014-01-01

    Summary In October 2010, a group of experts met as part of the transatlantic think tank for toxicology (t4) to exchange ideas about the current status and future of safety testing of nanomaterials. At present, there is no widely accepted path forward to assure appropriate and effective hazard identification for engineered nanomaterials. The group discussed needs for characterization of nanomaterials and identified testing protocols that incorporate the use of innovative alternative whole models such as zebrafish or C. elegans, as well as in vitro or alternative methods to examine specific functional pathways and modes of action. The group proposed elements of a potential testing scheme for nanomaterials that works towards an integrated testing strategy, incorporating the goals of the NRC report Toxicity Testing in the 21st Century: A Vision and a Strategy by focusing on pathways of toxic response, and utilizing an evidence-based strategy for developing the knowledge base for safety assessment. Finally, the group recommended that a reliable, open, curated database be developed that interfaces with existing databases to enable sharing of information. PMID:21993959

  9. South American foF2 database using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gularte, Erika; Bilitza, Dieter; Carpintero, Daniel; Jaen, Juliana

    2016-07-01

    We present the first step towards a new database of the ionospheric parameter foF2 for the South American region. The foF2 parameter, which marks the maximum of the ionospheric electron density profile and is its main sculptor, is of great interest not only in atmospheric studies but also in the realm of radio propagation. Due to its importance, its large variability, and the difficulty of modeling it in time and space, it has been the subject of intense study for decades. The current database, used by the IRI (International Reference Ionosphere) model and based on Fourier expansions, was built in the 1960s from the ionosondes available at that time; therefore, it is still short of South American data. The main goal of this work is to upgrade the database, incorporating the now-available data compiled by the RAPEAS (Red Argentina para el Estudio de la Atmósfera Superior, Argentine Network for the Study of the Upper Atmosphere) network. We also developed an algorithm to study foF2 variability, based on the modern technique of genetic algorithms, which has been successfully applied in other disciplines. One of the main advantages of this technique is its ability to work with many variables and with unfavorable samples. The results are compared with the IRI databases, and improvements to the latter are suggested. Finally, it is important to note that the new database is designed so that newly available data can be easily incorporated.
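
    A genetic-algorithm fit of this kind can be illustrated in a few lines. The sketch below uses toy data and parameters (not RAPEAS records or the authors' algorithm) to evolve the coefficients of a simple diurnal foF2 model through elitist selection and Gaussian mutation:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy diurnal foF2 curve (MHz); data are invented, not RAPEAS records.
        t = np.arange(0.0, 24.0, 3.0)
        obs = 7.0 + 2.5 * np.cos(2 * np.pi * (t - 14.0) / 24.0) + rng.normal(0.0, 0.1, t.size)

        def model(p, t):
            a, b, phi = p
            return a + b * np.cos(2 * np.pi * (t - phi) / 24.0)

        def fitness(p):
            return -np.mean((model(p, t) - obs) ** 2)  # higher is better

        # Elitist selection plus Gaussian mutation over (a, b, phase).
        pop = rng.uniform([4.0, 0.0, 0.0], [12.0, 5.0, 24.0], size=(40, 3))
        for _ in range(200):
            scores = np.array([fitness(p) for p in pop])
            elite = pop[np.argsort(scores)[-10:]]             # keep the best 10
            children = elite[rng.integers(0, 10, 30)] + rng.normal(0.0, 0.2, (30, 3))
            pop = np.vstack([elite, children])

        best = pop[np.argmax([fitness(p) for p in pop])]
        print("fitted (a, b, phase):", best.round(2))  # roughly (7.0, 2.5, 14.0)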

  10. IRIS Toxicological Review of Naphthalene (1998 Final)

    EPA Science Inventory

    EPA announced the release of the final report, Toxicological Review of Naphthalene: in support of the Integrated Risk Information System (IRIS). The updated Summary for Naphthalene and accompanying toxicological review have been added to the IRIS Database.

  11. IRIS Toxicological Review of Phosgene (Final Report)

    EPA Science Inventory

    EPA announced the release of the final report, Toxicological Review of Phosgene: in support of the Integrated Risk Information System (IRIS). The updated Summary for Phosgene and accompanying toxicological review have been added to the IRIS Database.

  12. IRIS Toxicological Review of Acrolein (2003 Final)

    EPA Science Inventory

    EPA announced the release of the final report, Toxicological Review of Acrolein: in support of the Integrated Risk Information System (IRIS). The updated Summary for Acrolein and accompanying toxicological review have been added to the IRIS Database.

  13. IRIS Toxicological Review of Chloroform (Final Report)

    EPA Science Inventory

    EPA is announcing the release of the final report, Toxicological Review of Chloroform: in support of the Integrated Risk Information System (IRIS). The updated Summary for Chloroform and accompanying Quickview have also been added to the IRIS Database.

  14. Development of an internationally agreed minimal dataset for juvenile dermatomyositis (JDM) for clinical and research use.

    PubMed

    McCann, Liza J; Kirkham, Jamie J; Wedderburn, Lucy R; Pilkington, Clarissa; Huber, Adam M; Ravelli, Angelo; Appelbe, Duncan; Williamson, Paula R; Beresford, Michael W

    2015-06-12

    Juvenile dermatomyositis (JDM) is a rare autoimmune inflammatory disorder associated with significant morbidity and mortality. International collaboration is necessary to better understand the pathogenesis of the disease, response to treatment, and long-term outcome. To aid international collaboration, it is essential to have a core set of data that all researchers and clinicians collect in a standardised way, for clinical purposes and for research. This should include demographic details, diagnostic data, measures of disease activity, investigations, and treatment. Variables in existing clinical registries have been compared to produce a provisional data set for JDM. We now aim to develop this into a consensus-approved minimum core dataset, tested in a wider setting, with the objective of achieving international agreement. A two-stage bespoke Delphi process will engage the opinion of a large number of key stakeholders through email distribution via established international paediatric rheumatology and myositis organisations. This, together with a formalised patient/parent participation process, will help inform a consensus meeting of international experts that will utilise a nominal group technique (NGT). The resulting proposed minimal dataset will be tested for feasibility within existing database infrastructures. The developed minimal dataset will be sent to all internationally representative collaborators for final comment. The participants of the expert consensus group will be asked to draw together these comments, ratify, and 'sign off' the final minimal dataset. An internationally agreed minimal dataset has the potential to significantly enhance collaboration, allow effective communication between groups, provide a minimal standard of care, and enable analysis of the largest possible number of JDM patients to provide a greater understanding of this disease. The final approved minimum core dataset could be rapidly incorporated into national and international collaborative efforts, including existing prospective databases, and be available for use in randomised controlled trials and for treatment/protocol comparisons in cohort studies.

  15. [Development of a consented set of criteria to evaluate post-rehabilitation support services].

    PubMed

    Parzanka, Susanne; Himstedt, Christian; Deck, Ruth

    2015-01-01

    Existing rehabilitation aftercare offers in Germany are heterogeneous, and there is a lack of transparency in terms of indications and methods, as well as of (nationwide) availability and financial coverage. Also, there is no systematic and transparent synopsis. To close this gap, a systematic review was conducted and a web-based database created for post-rehabilitation support. To allow a consistent assessment of the included aftercare offers, a quality profile of universally valid criteria was developed. This paper outlines the scientific approach. The procedure adapts the RAND/UCLA method, with the participation of the advisory board of the ReNa project. Preparations for the set included systematic searches to find possible criteria for assessing the quality of aftercare offers. These criteria were first collected without any pre-selection. Every item of the adjusted collection was evaluated by each member of the advisory board with regard to the topics "relevance", "feasibility" and "suitability for public coverage". Inter-rater analysis was conducted using the median rating and a classification into consensus and dissent. All items that were considered "relevant" and "feasible" in the three stages of consensus building, and deemed "suitable for public coverage", were transferred into the final set of criteria (ReNa set). A total of 82 publications were selected out of the 656 findings taken into account, which delivered 3,603 criteria of possible initial relevance. After removal of 2,598 redundant criteria, the panel needed to assess a set of 1,005 items. Finally, we performed a quality assessment of aftercare offers using a set of 35 descriptive criteria merged into 8 conceptual clusters. The consented ReNa set of 35 items delivers a first generally valid tool to describe the quality of structures, standards, and processes of aftercare offers. Finally, the project delivered a complete collection of profiles characterizing each post-rehabilitation support service included in the database. Copyright © 2015. Published by Elsevier GmbH.

  16. The EGM2008 Global Gravitational Model

    NASA Astrophysics Data System (ADS)

    Pavlis, N. K.; Holmes, S. A.; Kenyon, S. C.; Factor, J. K.

    2008-12-01

    The development of a new Earth Gravitational Model (EGM) to degree 2160 has been completed. This model, designated EGM2008, is the product of the final re-iteration of our modelling and estimation approach. Our multi-year effort has produced several Preliminary Gravitational Models (PGM) of increasingly improved performance. One of these models (PGM2007A) was provided for evaluation to an independent Evaluation Working Group, sponsored by the International Association of Geodesy (IAG). In an effort to address certain shortcomings of PGM2007A, we have considered the feedback that we received from this Working Group. As part of this effort, EGM2008 incorporates an improved version of our 5'x5' global gravity anomaly database and has benefited from the latest GRACE based satellite-only solutions (e.g., ITG-GRACE03S). EGM2008 incorporates an improved ocean-wide set of altimetry-derived gravity anomalies that were estimated using PGM2007B (a variant of PGM2007A) and its associated Dynamic Ocean Topography (DOT) model as reference models in a "Remove-Compute-Restore" fashion. For the Least Squares Collocation estimation of our final global 5'x5' area-mean gravity anomaly database, we have used consistently PGM2007B as our reference model to degree 2160. We have developed and used a formulation that predicts area-mean gravity anomalies that are effectively band-limited to degree 2160, thereby minimizing aliasing effects during the harmonic analysis process. We have also placed special emphasis on the refinement and "calibration" of the error estimates that accompany our final combination solution EGM2008. We present the main aspects of the model's development and evaluation. This evaluation was accomplished primarily through the comparison of various model derived quantities with independent data and models (e.g., geoid undulations derived from GPS positioning and spirit levelling, astronomical deflections of the vertical, etc.). We will also present comparisons of our model-implied Dynamic Ocean Topography with other contemporary estimates (e.g., from ECCO).

  17. 48 CFR 32.1110 - Solicitation provision and contract clauses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... database and maintain registration until final payment, unless— (i) Payment will be made through a third... the contractor to be registered in the CCR database. (ii)(A) If permitted by agency procedures, the... authorized, in accordance with 32.1106, to use a nondomestic EFT mechanism, the contracting officer shall...

  18. Data management and language enhancement for generalized set theory computer language for operation of large relational databases

    NASA Technical Reports Server (NTRS)

    Finley, Gail T.

    1988-01-01

    This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.

  19. Supplier's Status for Critical Solid Propellants, Explosive, and Pyrotechnic Ingredients

    NASA Technical Reports Server (NTRS)

    Sims, B. L.; Painter, C. R.; Nauflett, G. W.; Cramer, R. J.; Mulder, E. J.

    2000-01-01

    In the early 1970s, a program was initiated at the Naval Surface Warfare Center/Indian Head Division (NSWC/IHDIV) to address the well-known problems associated with availability and suppliers of critical ingredients. These critical ingredients are necessary for the preparation of solid propellants and explosives manufactured by the Navy. The objective of the program was to identify primary and secondary (or back-up) vendor information for these critical ingredients, and to develop suitable alternative materials if an ingredient is unavailable. In 1992, NSWC/IHDIV funded the Chemical Propulsion Information Agency (CPIA) under a Technical Area Task (TAT) to expedite the task of creating a database listing critical ingredients used to manufacture Navy propellants and explosives, based on known formulation quantities. Under this task, CPIA provided employees who were 100 percent dedicated to the task of obtaining critical ingredient supplier information, selecting the software, and designing the interface between the computer program and the database users. TAT objectives included creating the Explosive Ingredients Source Database (EISD) for Propellant, Explosive and Pyrotechnic (PEP) critical elements. The goal was to create a readily accessible database, provide users a quick-view summary of critical ingredient suppliers' information, and create a centralized archive that CPIA would update and distribute. EISD funding ended in 1996. At that time, the database entries included 53 formulations and 108 critical ingredients used to manufacture Navy propellants and explosives. CPIA turned the database tasking back over to NSWC/IHDIV to maintain and distribute at their discretion. Due to significant interest in the status of propellant/explosive critical ingredient suppliers, the Propellant Development and Characterization Subcommittee (PDCS) approached the JANNAF Executive Committee (EC) for authorization to continue the critical ingredient database work. In 1999, the JANNAF EC approved the PDCS panel task. This paper is designed to emphasize the necessity of maintaining a JANNAF community-supported database that monitors the status of PEP critical ingredient suppliers. The final product of this task is a user-friendly, searchable database that provides a quick-view summary of critical ingredient suppliers' information. This database must be designed to serve the needs of JANNAF and the commercial propellant and energetics manufacturing community as well. This paper provides a summary of the type of information archived for each critical ingredient.

  20. BADERI: an online database to coordinate handsearching activities of controlled clinical trials for their potential inclusion in systematic reviews.

    PubMed

    Pardo-Hernandez, Hector; Urrútia, Gerard; Barajas-Nava, Leticia A; Buitrago-Garcia, Diana; Garzón, Julieth Vanessa; Martínez-Zapata, María José; Bonfill, Xavier

    2017-06-13

    Systematic reviews provide the best evidence on the effect of health care interventions. They rely on comprehensive access to the available scientific literature. Electronic search strategies alone may not suffice, requiring the implementation of a handsearching approach. We have developed a database to provide an Internet-based platform from which handsearching activities can be coordinated, including a procedure to streamline the submission of these references into CENTRAL, the Cochrane Collaboration Central Register of Controlled Trials. We developed a database and a descriptive analysis. Through brainstorming and discussion among stakeholders involved in handsearching projects, we designed a database that met the needs identified as necessary to ensure the viability of handsearching activities. Three handsearching teams pilot-tested the proposed database. Once the final version of the database was approved, we proceeded to train the staff involved in handsearching. The proposed database is called BADERI (Database of Iberoamerican Clinical Trials and Journals, by its initials in Spanish). BADERI was officially launched in October 2015, and it can be accessed at www.baderi.com/login.php free of charge. BADERI has an administration subsection, from which the roles of users are managed; a references subsection, where information associated with identified controlled clinical trials (CCTs) can be entered; a reports subsection, from which reports can be generated to track and analyse the results of handsearching activities; and a built-in free-text search engine. BADERI allows all references to be exported as ProCite files that can be directly uploaded into CENTRAL. To date, 6,284 references to CCTs have been uploaded to BADERI and sent to CENTRAL. The identified CCTs were published in a total of 420 journals related to 46 medical specialties. The year of publication ranged between 1957 and 2016. BADERI allows the efficient management of handsearching activities across different countries and institutions. References to all CCTs available in BADERI can be readily submitted to CENTRAL for their potential inclusion in systematic reviews.

  1. Genotyping and interpretation of STR-DNA: Low-template, mixtures and database matches-Twenty years of research and development.

    PubMed

    Gill, Peter; Haned, Hinda; Bleka, Oyvind; Hansson, Oskar; Dørum, Guro; Egeland, Thore

    2015-09-01

    The introduction of Short Tandem Repeat (STR) DNA was a revolution within a revolution that transformed forensic DNA profiling into a tool that could be used, for the first time, to create National DNA databases. This transformation would not have been possible without the concurrent development of fluorescent automated sequencers, combined with the ability to multiplex several loci together. Use of the polymerase chain reaction (PCR) increased the sensitivity of the method to enable the analysis of a handful of cells. The first multiplexes were simple: 'the quad', introduced by the now-defunct UK Forensic Science Service (FSS) in 1994, rapidly followed by a more discriminating 'six-plex' (Second Generation Multiplex) in 1995 that was used to create the world's first national DNA database. The success of the database rapidly outgrew the functionality of the original system - by the year 2000 a new multiplex of ten loci was introduced to reduce the chance of adventitious matches. The technology was adopted world-wide, albeit with different loci. The political requirement to introduce pan-European databases encouraged standardisation - the development of the European Standard Set (ESS) of markers, comprising twelve loci, is the latest iteration. Although development has been impressive, the methods used to interpret evidence have lagged behind. For example, the theory to interpret complex DNA profiles (low-level mixtures) had been developed fifteen years earlier, but only in the past year or so have the concepts started to be widely adopted. A plethora of different models (some commercial and others non-commercial) have appeared. This has led to a confusing 'debate' about the 'best' model to use. The different models available are described along with their advantages and disadvantages. A section discusses the development of national DNA databases, along with details of an associated controversy over estimating the strength of evidence of matches. Current methodology is limited to searches of complete profiles - another example where the interpretation of matches has not kept pace with the development of theory. STRs have also transformed the area of Disaster Victim Identification (DVI), which frequently requires kinship analysis. However, genotyping efficiency is complicated by complex, degraded DNA profiles. Finally, there is now a detailed understanding of the causes of the stochastic effects that cause DNA profiles to exhibit the phenomena of drop-out and drop-in, along with artefacts such as stutters. The phenomena discussed include: heterozygote balance; stutter; degradation; the effect of decreasing quantities of DNA; the dilution effect. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. The U.S. Geological Survey coal quality (COALQUAL) database version 3.0

    USGS Publications Warehouse

    Palmer, Curtis A.; Oman, Charles L.; Park, Andy J.; Luppens, James A.

    2015-12-21

    Because of database size limits during the development of COALQUAL Version 1.3, many analyses of individual bench samples were merged into whole coal bed averages. The methodology for making these composite intervals was not consistent. Size limits also restricted the amount of georeferencing information and forced removal of qualifier notations such as "less than detection limit" (<) information, which can cause problems when using the data. A review of the original data sheets revealed that COALQUAL Version 2.0 was missing information that was needed for a complete understanding of a coal section. Another important database issue to resolve was the USGS "remnant moisture" problem. Prior to 1998, tests for remnant moisture (as-determined moisture in the sample at the time of analysis) were not performed on any USGS major, minor, or trace element coal analyses. Without the remnant moisture, it is impossible to convert the analyses to a usable basis (as-received, dry, etc.). Based on remnant moisture analyses of hundreds of samples of different ranks (and known residual moisture) reported after 1998, it was possible to develop a method to provide reasonable estimates of remnant moisture for older data to make it more useful in COALQUAL Version 3.0. In addition, COALQUAL Version 3.0 is improved by (1) adding qualifiers, including statistical programming to deal with the qualifiers; (2) clarifying the sample compositing problems; and (3) adding associated samples. Version 3.0 of COALQUAL also represents the first attempt to incorporate data verification by mathematically crosschecking certain analytical parameters. Finally, a new database system was designed and implemented to replace the outdated DOS program used in earlier versions of the database.
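
    As a rough illustration of why the remnant moisture matters, the standard moisture-basis conversions can be sketched as follows (a minimal example assuming percent values; the report's estimation method for older samples is not reproduced here):

      # Hypothetical sketch of moisture-basis conversion for a coal analysis.
      def to_dry_basis(value_ad: float, remnant_moisture_pct: float) -> float:
          """Convert an as-determined value to a dry basis."""
          return value_ad / (1.0 - remnant_moisture_pct / 100.0)

      def to_as_received(value_dry: float, total_moisture_pct: float) -> float:
          """Convert a dry-basis value to an as-received basis."""
          return value_dry * (1.0 - total_moisture_pct / 100.0)

      # Invented numbers: 1.2% sulfur as determined, 2.5% remnant moisture,
      # 28% total (as-received) moisture.
      s_dry = to_dry_basis(1.2, 2.5)       # ~1.23% on a dry basis
      s_ar = to_as_received(s_dry, 28.0)   # ~0.89% as received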

  3. Toxicity tests aiming to protect Brazilian aquatic systems: current status and implications for management.

    PubMed

    Martins, Samantha Eslava; Bianchini, Adalto

    2011-07-01

    The current status of toxicological tests performed with Brazilian native species was evaluated through a survey of the scientific data available in the literature. The information gathered was processed and an electronic toxicology database (http://www.inct-ta.furg.br/bd_toxicologico.php) was generated. This database provides valuable information for researchers to select aquatic species that are sensitive or tolerant to a large variety of aquatic pollutants. Furthermore, the toxicology database allows researchers to select species representative of an ecosystem of interest. Analysis of the toxicology database showed that ecotoxicological assays have significantly improved in Brazil over the last decade, in spite of the still relatively low number of tests performed and the restricted number of native species tested. This is because most of the research is carried out in a few laboratories concentrated in certain regions of Brazil, especially the Southern and Southeastern regions. Considering the extremely rich biodiversity and the large variety of aquatic ecosystems in Brazil, this finding points to the urgent need for ecotoxicological studies of other groups of aquatic animals, such as insects, foraminifera, cnidarians, worms, and amphibians, among others. This would help to derive more realistic water quality criteria (WQC) values, which would better protect the different aquatic ecosystems in Brazil. Finally, the toxicology database generated presents solid, science-based information, which can encourage and drive the Environmental Regulatory Agencies in Brazil to derive WQC based on native species. In this context, the present paper discusses the historical evolution of ecotoxicological studies in Brazil, and how they have contributed to the improvement of Brazilian Federal and Regional regulations for the environment.

  4. De novo transcriptomic analysis and development of EST-SSRs for Sorbus pohuashanensis (Hance) Hedl.

    PubMed Central

    Guan, Xuelian; Fu, Qiang; Zhang, Ze; Hu, Zenghui; Zheng, Jian; Lu, Yizeng; Li, Wei

    2017-01-01

    Sorbus pohuashanensis is a native tree species of northern China that is used for a variety of ecological purposes. The species is often grown as an ornamental landscape tree because of its beautiful form, silver flowers in early summer, attractive pinnate leaves in summer, and red leaves and fruits in autumn. However, development and further utilization of the species are hindered by the lack of comprehensive genetic information, which impedes research into its genetics and molecular biology. Recent advances in de novo transcriptome sequencing (RNA-seq) technology have provided an effective means to obtain genomic information from non-model species. Here, we applied RNA-seq to sequence S. pohuashanensis leaves and obtained a total of 137,506 clean reads. After assembly, 96,213 unigenes with an average length of 770 bp were obtained. We found that 64.5% of the unigenes could be annotated using bioinformatics tools to analyze gene function and alignment with the NCBI database. Overall, 59,089 unigenes were annotated using the Nr (non-redundant protein) database, 35,225 unigenes were annotated using the GO (Gene Ontology categories) database, and 33,168 unigenes were annotated using the Clusters of Orthologous Groups (COG) database. Analysis of the unigenes using the KEGG (Kyoto Encyclopedia of Genes and Genomes) database indicated that 13,953 unigenes were involved in 322 metabolic pathways. Finally, simple sequence repeat (SSR) site detection identified 6,604 unigenes containing EST-SSRs, with a total of 7,473 EST-SSRs in the unigene sequences. Fifteen polymorphic SSRs were screened and found to be useful for future genetic research. These unigene sequences will provide important genetic resources for genetic improvement and investigation of biochemical processes in S. pohuashanensis. PMID:28614366
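
    A minimal sketch of SSR site detection of the kind described, assuming simple repeat-count thresholds (the thresholds and the regex approach are illustrative, not the study's actual pipeline):

      import re

      # Hypothetical SSR finder: scan a unigene sequence for 2-6 bp motifs
      # repeated at least a threshold number of times.
      MIN_REPEATS = {2: 6, 3: 5, 4: 4, 5: 4, 6: 4}  # assumed thresholds

      def find_ssrs(seq: str):
          hits = []
          for motif_len, min_rep in MIN_REPEATS.items():
              pattern = re.compile(r"((\w{%d})\2{%d,})" % (motif_len, min_rep - 1))
              for m in pattern.finditer(seq.upper()):
                  hits.append((m.start(), m.group(2), len(m.group(1)) // motif_len))
          return hits  # (position, motif, repeat count)

      print(find_ssrs("AATTAGAGAGAGAGAGCCGT"))  # detects the (AG)6 repeat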

  5. Research on spatio-temporal database techniques for spatial information service

    NASA Astrophysics Data System (ADS)

    Zhao, Rong; Wang, Liang; Li, Yuxiang; Fan, Rongshuang; Liu, Ping; Li, Qingyuan

    2007-06-01

    Geographic data should be described by spatial, temporal and attribute components, but spatio-temporal queries are difficult to answer within current GIS. This paper describes research into the development and application of a spatio-temporal data management system based upon the GeoWindows GIS software platform developed by the Chinese Academy of Surveying and Mapping (CASM). Facing the current, practical requirements of spatial information applications, and building on the existing GIS platform, we first established a spatio-temporal data model that integrates vector and grid data. Secondly, we solved the key problem of building temporal data topology and developed a suite of spatio-temporal database management software using object-oriented methods. The system provides temporal data collection, storage, management, display and query functions. Finally, as a case study, we applied the spatio-temporal data management system to administrative region data covering multiple historical periods of China. With all the efforts above, the capacity of GIS to manage and manipulate the temporal and attribute dimensions of geographic data has been enhanced, and a technical reference has been provided for the further development of temporal geographic information systems (TGIS).

  6. A Platform for Designing Genome-Based Personalized Immunotherapy or Vaccine against Cancer

    PubMed Central

    Gupta, Sudheer; Chaudhary, Kumardeep; Dhanda, Sandeep Kumar; Kumar, Rahul; Kumar, Shailesh; Sehgal, Manika; Nagpal, Gandharva

    2016-01-01

    Due to advances in sequencing technology, the genomes of thousands of cancer tissues or cell lines have been sequenced. Identification of cancer-specific epitopes or neoepitopes from cancer genomes is one of the major challenges in the field of immunotherapy and vaccine development. This paper describes Cancertope, a platform developed for designing genome-based immunotherapy or vaccines against a cancer cell. Broadly, the integrated resources on this platform are apportioned into three sections. The first section provides a cancer-specific database of neoepitopes generated from the genomes of 905 cancer cell lines. This database harbors a wide range of epitopes (e.g., B-cell, CD8+ T-cell, HLA class I, HLA class II) against 60 cancer-specific vaccine antigens. The second section describes a partially personalized module developed for predicting potential neoepitopes against a user-specified cancer genome. Finally, we describe a fully personalized module developed for the identification of neoepitopes from the genomes of cancerous and healthy cells of a cancer patient. To assist the scientific community, a wide range of tools is incorporated in this platform, including screening of epitopes against the human reference proteome (http://www.imtech.res.in/raghava/cancertope/). PMID:27832200

  7. Archetype relational mapping - a practical openEHR persistence solution.

    PubMed

    Wang, Li; Min, Lingtong; Wang, Rui; Lu, Xudong; Duan, Huilong

    2015-11-05

    One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for archetype-based EHR systems to support healthcare delivery in the clinical environment. First, the data requirements of the EHR systems are analysed and organised into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database. A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests. Additionally, two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two tests. The difference in query execution time between the ARM database and the conventional database is less than 130%. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database required far more time than the other two databases to execute both the data-retrieving and the patient-searching tests. The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally designed EHR systems, and it can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
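
    A minimal sketch of the archetype-to-relational idea, assuming each archetype node path becomes a column of a generated table (the names and typing rules are invented; the paper's actual mapping rules are more elaborate):

      # Hypothetical archetype description: openEHR-style paths -> SQL types.
      ARCHETYPE = {
          "name": "blood_pressure",
          "nodes": {
              "/data/events/systolic": "INTEGER",
              "/data/events/diastolic": "INTEGER",
              "/data/events/time": "TIMESTAMP",
          },
      }

      def path_to_column(path: str) -> str:
          # Flatten an archetype node path into a column name.
          return path.strip("/").replace("/", "_")

      def generate_ddl(archetype: dict) -> str:
          cols = ",\n  ".join(
              f"{path_to_column(p)} {t}" for p, t in archetype["nodes"].items()
          )
          return (f"CREATE TABLE {archetype['name']} (\n"
                  f"  ehr_id VARCHAR(64) NOT NULL,\n  {cols}\n);")

      print(generate_ddl(ARCHETYPE))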

  8. Advanced Transportation System Studies. Technical Area 3: Alternate Propulsion Subsystem Concepts. Volume 1; Executive Summary

    NASA Technical Reports Server (NTRS)

    Levack, Daniel J. H.

    2000-01-01

    The Alternate Propulsion Subsystem Concepts contract had seven tasks defined that are reported under this contract deliverable. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, SSME Upper Stage Use, CERs for Liquid Propellant Rocket Engines, Advanced Low Cost Engines, and Tripropellant Comparison Study. The two restart studies, F-1A and J-2S, generated program plans for restarting production of each engine. Special emphasis was placed on determining changes to individual parts due to obsolete materials, changes in OSHA and environmental concerns, new processes available, and any configuration changes to the engines. The Propulsion Database Development task developed a database structure and format which is easy to use and modify while also being comprehensive in the level of detail available. The database structure includes extensive engine information and allows for parametric data generation for conceptual engine concepts. The SSME Upper Stage Use task examined the changes needed or desirable to use the SSME as an upper stage engine, both in a second stage and in a translunar injection stage. The CERs for Liquid Engines task developed parametric cost estimating relationships at the engine and major subassembly level for estimating development and production costs of chemical propulsion liquid rocket engines. The Advanced Low Cost Engines task examined propulsion systems for SSTO applications, including engine concept definition, mission analysis, trade studies, operating point selection, turbomachinery alternatives, life cycle cost, weight definition, and point design conceptual drawings and component design. The task concentrated on bipropellant engines, but also examined tripropellant engines. The Tripropellant Comparison Study task provided an unambiguous comparison among various tripropellant implementation approaches and cycle choices, and then compared them to similarly designed bipropellant engines in the SSTO mission. This volume overviews each of the tasks, giving its objectives, main results, and conclusions. More detailed Final Task Reports are available on each individual task.

  9. Choosing an Optimal Database for Protein Identification from Tandem Mass Spectrometry Data.

    PubMed

    Kumar, Dhirendra; Yadav, Amit Kumar; Dash, Debasis

    2017-01-01

    Database searching is the preferred method for protein identification from the digital spectra of mass-to-charge ratios (m/z) detected for protein samples by mass spectrometers. The search database is one of the major factors influencing the discovery of proteins present in the sample and thus the biological conclusions derived. In most cases the choice of search database is arbitrary. Here we describe common search databases used in proteomic studies and their impact on the final list of identified proteins. We also elaborate upon factors, such as the composition and size of the search database, that can influence the protein identification process. In conclusion, we suggest that the choice of database depends on the type of inferences to be derived from the proteomics data. However, making the additional effort to build a compact and concise database for a targeted question should generally be rewarding in achieving confident protein identifications.

  10. ISSARS Aerosol Database : an Incorporation of Atmospheric Particles into a Universal Tool to Simulate Remote Sensing Instruments

    NASA Technical Reports Server (NTRS)

    Goetz, Michael B.

    2011-01-01

    The Instrument Simulator Suite for Atmospheric Remote Sensing (ISSARS) entered its third and final year of development with the overall goal of providing a unified tool to simulate active and passive spaceborne atmospheric remote sensing instruments. These simulations focus on the atmosphere, ranging from the UV to microwaves. ISSARS handles all assumptions and uses various scattering and microphysics models to fill the gaps left unspecified by the atmospheric models when creating each instrument's measurements. This will help benefit mission design and reduce mission cost, enable efficient implementation of multi-instrument/platform Observing System Simulation Experiments (OSSEs), and improve existing models as well as new advanced models in development. In this effort, various aerosol particles are incorporated into the system, and a simulation using the input wavelength and spectral refractive indices of each spherical test particle generates its scattering properties and phase functions. The atmospheric particles being integrated into the system comprise those observed by the Multi-angle Imaging SpectroRadiometer (MISR) and by the Multiangle SpectroPolarimetric Imager (MSPI). In addition, a complex scattering database generated by Prof. Ping Yang (Texas A&M) is also incorporated into this aerosol database. Future development with a radiative transfer code will generate a series of results that can be validated against results obtained by the MISR and MSPI instruments; in the meantime, test cases are simulated to determine the validity of the various plugin libraries used to determine or gather the scattering properties of particles studied by MISR and MSPI, or contained within the database of single-scattering properties of tri-axial ellipsoidal mineral dust particles created by Prof. Ping Yang.

  11. HuMiChip: Development of a Functional Gene Array for the Study of Human Microbiomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tu, Q.; Deng, Ye; Lin, Lu

    Microbiomes play very important roles in terms of nutrition, health and disease by interacting with their hosts. Based on sequence data currently available in public domains, we have developed a functional gene array to monitor both organismal and functional gene profiles of normal microbiota in human and mouse hosts; such an array is called the human and mouse microbiota array, HMM-Chip. First, seed sequences were identified from KEGG databases and used to construct a seed database (seedDB) containing 136 gene families in 19 metabolic pathways closely related to human and mouse microbiomes. Second, a mother database (motherDB) was constructed with 81 genomes of bacterial strains, 54 from gut and 27 from oral environments, and 16 metagenomes, and used for the selection of genes and probe design. Gene prediction was performed by Glimmer3 for bacterial genomes and by the Metagene program for metagenomes. In total, 228,240 and 801,599 genes were identified for bacterial genomes and metagenomes, respectively. Then the motherDB was searched against the seedDB using the HMMer program, and gene sequences in the motherDB that were highly homologous with seed sequences in the seedDB were used for probe design by the CommOligo software. Different degrees of specific probes, including gene-specific, inclusive and exclusive group-specific probes, were selected. All candidate probes were checked against the motherDB and NCBI databases for specificity. Finally, 7,763 probes covering 91.2% (12,601 out of 13,814) of the HMMer-confirmed sequences from 75 bacterial genomes and 16 metagenomes were selected. The developed HMM-Chip is able to detect the diversity and abundance of functional genes, the gene expression of microbial communities, and potentially, the interactions of microorganisms and their hosts.

  12. Countermeasure Evaluation and Validation Project (CEVP) Database Requirement Documentation

    NASA Technical Reports Server (NTRS)

    Shin, Sung Y.

    2003-01-01

    The initial focus of the project by the JSC laboratories will be to develop, test and implement a standardized complement of integrated physiological tests (the Integrated Testing Regimen, ITR) that will examine both system and intersystem function, and will be used to validate and certify candidate countermeasures. The ITR will consist of medical requirements (MRs), non-MR core ITR tests, and countermeasure-specific testing. Non-MR and countermeasure-specific test data will be archived in a database specific to the CEVP. Development of a CEVP Database will be critical to documenting the progress of candidate countermeasures. The goal of this work is a fully functional software system that will integrate computer-based data collection and storage with secure, efficient, and practical distribution of that data over the Internet. This system will provide the foundation of a new level of interagency and international cooperation for scientific experimentation and research, providing intramural, international, and extramural collaboration through management and distribution of the CEVP data. The research performed this summer includes the first phase of the project, a requirements analysis. This analysis will identify the expected behavior of the system under normal conditions; abnormal conditions that could affect the system's ability to produce this behavior; and the internal features in the system needed to reduce the risk of unexpected or unwanted behaviors. The second phase of the project was also performed this summer: the design of the data entry and data retrieval screens for a working model of the Ground Data Database. The final report provides the requirements for the CEVP system in a variety of ways, so that both the development team and JSC technical management have a thorough understanding of how the system is expected to behave.

  13. Inferring Network Controls from Topology Using the Chomp Database

    DTIC Science & Technology

    2015-12-03

    AFRL-AFOSR-VA-TR-2016-0033: Inferring Network Controls from Topology Using the CHomP Database. John Harer, Duke University. Final Report, 12/03/2015. Grant number FA9550-10-1-0436. The work lies in the area of Topological Data Analysis (TDA) and its application to dynamical systems. The role of this work in the Complex Networks program is based on

  14. Beyond space and across time: non-finalized dialogue about science and religion discourse

    NASA Astrophysics Data System (ADS)

    Hsu, Pei-Ling

    2010-03-01

    This commentary dialogues with three articles that analyze the same database about science and religion discourse produced 17 years ago. Dialogues in these three articles and this commentary, across space and time, allow us to develop new and different understandings of the same database and situation. As part of this commentary, I discuss topics approached in the three articles, including the collective nature of discourses, emotion, and the constructivist view of learning. I draw on three essential concepts informed by Bakhtin's dialogism: the dialogical nature of utterance, the emotional-volitional tone, and internally persuasive discourse. In particular, I conclude that Bakhtin's dialogism not only invites us to understand science learning discourse in a more holistic way but also encourages us to open up dialogues between science and religion, which often are considered to be two hostile opponents.

  15. A MYSQL-BASED DATA ARCHIVER: PRELIMINARY RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Bickley; Christopher Slominski

    2008-01-23

    Following an evaluation of the archival requirements of the Jefferson Laboratory accelerator's user community, a prototyping effort was executed to determine whether an archiver based on MySQL had sufficient functionality to meet those requirements. This approach was chosen because an archiver based on a relational database enables the development effort to focus on data acquisition and management, letting the database take care of storage, indexing and data consistency. It was clear from the prototype effort that there were no performance impediments to successful implementation of a final system. With our performance concerns addressed, the lab undertook the design and development of an operational system. The system is in its operational testing phase now. This paper discusses the archiver system requirements, some of the design choices and their rationale, and presents the acquisition, storage and retrieval performance.
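
    A minimal sketch of the kind of relational layout such an archiver might use (the table, column, and channel names are assumptions, not the Jefferson Lab design, and SQLite stands in for MySQL so the sketch is self-contained):

      import sqlite3

      # Hypothetical archiver schema: one table of channels, one of timestamped
      # samples; the database engine handles storage, indexing and consistency.
      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE channel (
        channel_id INTEGER PRIMARY KEY,
        name       TEXT UNIQUE NOT NULL
      );
      CREATE TABLE sample (
        channel_id INTEGER NOT NULL REFERENCES channel(channel_id),
        t          TIMESTAMP NOT NULL,
        value      REAL,
        PRIMARY KEY (channel_id, t)
      );
      """)
      conn.execute("INSERT INTO channel (name) VALUES ('BEAM:CURRENT')")  # invented name
      conn.execute("INSERT INTO sample VALUES (1, '2008-01-23 12:00:00', 42.0)")
      print(conn.execute("SELECT t, value FROM sample WHERE channel_id = 1").fetchall())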

  16. Database security and encryption technology research and application

    NASA Astrophysics Data System (ADS)

    Zhu, Li-juan

    2013-03-01

    The main purpose of this paper is to discuss the current problem of database information leakage and the important role played by message encryption techniques in database security, as well as the principle of MD5 encryption technology and its use in websites and applications. The article is divided into an introduction, an overview of MD5 encryption technology, the use of MD5 encryption technology, and a final summary. In terms of requirements and applications, the paper gives readers a more detailed and clearer understanding of the principle of MD5, its importance in database security, and the use of MD5 encryption technology.
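
    A minimal sketch of the usage pattern the abstract describes, storing an MD5 digest rather than the plaintext value (illustrative only; MD5 is a hash function rather than encryption, and modern practice would prefer a salted, slow password hash):

      import hashlib

      def md5_digest(value: str) -> str:
          """Return the hex MD5 digest of a value before it is stored."""
          return hashlib.md5(value.encode("utf-8")).hexdigest()

      # Store the digest instead of the plaintext; verify by re-hashing.
      stored = md5_digest("s3cret")
      assert md5_digest("s3cret") == stored
      print(stored)  # 32-character hex string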

  17. Data Mining on Distributed Medical Databases: Recent Trends and Future Directions

    NASA Astrophysics Data System (ADS)

    Atilgan, Yasemin; Dogan, Firat

    As computerization in healthcare services increases, the amount of available digital data is growing at an unprecedented rate, and as a result healthcare organizations are far more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates data mining applications on centralized medical databases and how they are used for diagnostics and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally, the paper focuses on data mining studies on distributed medical databases.

  18. Optics survivability support, volume 2

    NASA Astrophysics Data System (ADS)

    Wild, N.; Simpson, T.; Busdeker, A.; Doft, F.

    1993-01-01

    This volume of the Optics Survivability Support Final Report contains plots of all the data contained in the computerized Optical Glasses Database. All of these plots are accessible through the Database, but are included here as a convenient reference. The first three pages summarize the types of glass included, with a description of the radiation source, test date, and the original data reference. This information is included in the database as a macro button labeled 'LLNL DATABASE'. Following this summary is an Abbe chart showing which glasses are included and where they lie as a function of ν_d and n_d. This chart is also callable through the database as a macro button labeled 'ABBEC'.

  19. Health Assessment Document for Chromium (Final Report, 1983)

    EPA Science Inventory

    This final report summarizes a comprehensive database that considers all sources of chromium in the environment, the likelihood for its exposure to humans, and the possible consequences to man and lower organisms from its absorption. This information is integrated into a format t...

  20. IRIS Toxicological Review of Vinyl Chloride (Final Report, 2000)

    EPA Science Inventory

    EPA is announcing the release of the final report, Toxicological Review of Vinyl Chloride: in support of the Integrated Risk Information System (IRIS). The updated Summary for Vinyl Chloride and accompanying Quickview have also been added to the IRIS Database.

  1. NGS Catalog: A Database of Next Generation Sequencing Studies in Humans

    PubMed Central

    Xia, Junfeng; Wang, Qingguo; Jia, Peilin; Wang, Bing; Pao, William; Zhao, Zhongming

    2015-01-01

    Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research since their advent only a few years ago, and they are expected to advance at an unprecedented pace in the following years. To provide the research community with a comprehensive NGS resource, we have developed the Next Generation Sequencing Catalog (NGS Catalog, http://bioinfo.mc.vanderbilt.edu/NGS/index.html), a continually updated database that collects, curates and manages available human NGS data obtained from the published literature. NGS Catalog deposits publication information of NGS studies and their mutation characteristics (SNVs, small insertions/deletions, copy number variations, and structural variants), as well as mutated genes and gene fusions detected by NGS. Other functions include user data upload, NGS general analysis pipelines, and NGS software. NGS Catalog is particularly useful for investigators who are new to NGS but would like to take advantage of these powerful technologies for their own research. Finally, based on the data deposited in NGS Catalog, we summarized features and findings from whole exome sequencing, whole genome sequencing, and transcriptome sequencing studies for human diseases or traits. PMID:22517761

  2. Novel Method of Storing and Reconstructing Events at Fermilab E-906/SeaQuest Using a MySQL Database

    NASA Astrophysics Data System (ADS)

    Hague, Tyler

    2010-11-01

    Fermilab E-906/SeaQuest is a fixed-target experiment at Fermi National Accelerator Laboratory. We are investigating the antiquark asymmetry in the nucleon sea. By examining the ratio of the Drell-Yan cross sections of proton-proton and proton-deuterium collisions, we can determine the asymmetry ratio. An essential feature of the analysis software development is updating the event reconstruction to modern software tools. We are doing this in a unique way by performing the majority of the calculations within an SQL database. Using a MySQL database allows us to take advantage of off-the-shelf software without sacrificing ROOT compatibility, and to avoid network bottlenecks through server-side data selection. Using our raw data, we create stubs, or partial tracks, at each station, which are pieced together to create full tracks. Our reconstruction process uses dynamically created SQL statements to analyze the data. These SQL statements create tables that contain the final reconstructed tracks as well as intermediate values. This poster explains the reconstruction process and how it is being implemented.
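
    A minimal sketch of dynamically generated, server-side SQL in this spirit (the schema and the hit-pairing rule are invented for illustration, with SQLite standing in for MySQL so the sketch is self-contained):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE hit  (station INTEGER, x REAL, y REAL, event_id INTEGER);
      CREATE TABLE stub (station INTEGER, slope REAL, intercept REAL, event_id INTEGER);
      """)

      def build_stub_query(station: int) -> str:
          # Dynamically built SQL: pair hits within one station into stub
          # candidates, computing slope and intercept inside the database.
          return f"""
          INSERT INTO stub (station, slope, intercept, event_id)
          SELECT {station},
                 (b.y - a.y) / (b.x - a.x),
                 a.y - a.x * (b.y - a.y) / (b.x - a.x),
                 a.event_id
          FROM hit a JOIN hit b
            ON a.event_id = b.event_id AND a.station = b.station AND a.x < b.x
          WHERE a.station = {station};
          """

      for st in (1, 2, 3):
          conn.executescript(build_stub_query(st))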

  3. Functional Interaction Network Construction and Analysis for Disease Discovery.

    PubMed

    Wu, Guanming; Haw, Robin

    2017-01-01

    Network-based approaches project seemingly unrelated genes or proteins onto a large-scale network context, thereby providing a holistic visualization and analysis platform for genomic data generated from high-throughput experiments, reducing the dimensionality of the data via network modules and increasing statistical power. Based on the Reactome database, the most popular and comprehensive open-source biological pathway knowledgebase, we have developed a highly reliable protein functional interaction network covering around 60% of human genes, and an app called ReactomeFIViz for Cytoscape, the most popular biological network visualization and analysis platform. In this chapter, we describe the detailed procedure by which this functional interaction network is constructed: integrating multiple external data sources, extracting functional interactions from human curated pathway databases, building a machine learning classifier called a Naïve Bayesian Classifier, predicting interactions based on the trained Naïve Bayesian Classifier, and finally constructing the functional interaction database. We also provide an example of how to use ReactomeFIViz to perform network-based data analysis for a list of genes.
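
    A minimal sketch of the classifier step, assuming binary pairwise features (the features, data, and scikit-learn usage here are illustrative, not the published training setup):

      from sklearn.naive_bayes import BernoulliNB

      # Each candidate protein pair is a binary feature vector, e.g.
      # [co-expressed, shares GO term, interacts in an external database];
      # the label is whether the pair co-occurs in a curated pathway.
      X_train = [
          [1, 1, 1],
          [1, 0, 1],
          [0, 0, 0],
          [0, 1, 0],
      ]
      y_train = [1, 1, 0, 0]  # 1 = functional interaction, 0 = not

      clf = BernoulliNB()
      clf.fit(X_train, y_train)

      # Score a new candidate pair; a high probability suggests a functional interaction.
      print(clf.predict_proba([[1, 0, 0]])[0][1])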

  4. Bayesian approach to transforming public gene expression repositories into disease diagnosis databases.

    PubMed

    Huang, Haiyan; Liu, Chun-Chi; Zhou, Xianghong Jasmine

    2010-04-13

    The rapid accumulation of gene expression data has offered unprecedented opportunities to study human diseases. The National Center for Biotechnology Information Gene Expression Omnibus is currently the largest database that systematically documents the genome-wide molecular basis of diseases. However, thus far, this resource has been far from fully utilized. This paper describes the first study to transform public gene expression repositories into an automated disease diagnosis database. Particularly, we have developed a systematic framework, including a two-stage Bayesian learning approach, to achieve the diagnosis of one or multiple diseases for a query expression profile along a hierarchical disease taxonomy. Our approach, including standardizing cross-platform gene expression data and heterogeneous disease annotations, allows analyzing both sources of information in a unified probabilistic system. A high level of overall diagnostic accuracy was shown by cross validation. It was also demonstrated that the power of our method can increase significantly with the continued growth of public gene expression repositories. Finally, we showed how our disease diagnosis system can be used to characterize complex phenotypes and to construct a disease-drug connectivity map.
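
    A minimal sketch of diagnosis along a hierarchical disease taxonomy (all structure and numbers are invented; the paper's two-stage Bayesian model is not reproduced here):

      # Toy taxonomy: root -> {cancer -> {lung, breast}, cardiovascular}.
      TAXONOMY = {"root": ["cancer", "cardiovascular"], "cancer": ["lung", "breast"]}

      # Assumed per-node likelihood factors for a query expression profile.
      LIKELIHOOD = {"cancer": 0.9, "cardiovascular": 0.2, "lung": 0.8, "breast": 0.3}

      def diagnose(node="root", prior=1.0, threshold=0.5):
          """Walk the taxonomy, keeping branches whose score stays above threshold."""
          hits = []
          for child in TAXONOMY.get(node, []):
              score = prior * LIKELIHOOD[child]
              if score >= threshold:
                  hits.append((child, score))
                  hits.extend(diagnose(child, score, threshold))
          return hits

      print(diagnose())  # e.g. [('cancer', 0.9), ('lung', 0.72)]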

  5. Results from a new die-to-database reticle inspection platform

    NASA Astrophysics Data System (ADS)

    Broadbent, William; Xiong, Yalin; Giusti, Michael; Walsh, Robert; Dayal, Aditya

    2007-03-01

    A new die-to-database high-resolution reticle defect inspection system has been developed for the 45nm logic node and is extendable to the 32nm node (and the comparable memory nodes). These nodes will use predominantly 193nm immersion lithography, although EUV may also be used. According to recent surveys, the predominant reticle types for the 45nm node are 6% simple tri-tone and COG. Other advanced reticle types may also be used for these nodes, including dark field alternating, Mask Enhancer, complex tri-tone, high transmission, CPL, and EUV. Finally, aggressive model-based OPC will typically be used, which will include many small structures such as jogs, serifs, and SRAFs (sub-resolution assist features), with accompanying very small gaps between adjacent structures. The current generation of inspection systems is inadequate to meet these requirements. The architecture and performance of a new die-to-database inspection system are described. This new system is designed to inspect the aforementioned reticle types in die-to-database and die-to-die modes. Recent results from internal testing of the prototype systems are shown. The results include standard programmed defect test reticles and advanced 45nm and 32nm node reticles from industry sources, and show that high sensitivity and low false detection rates are being achieved.

  6. IRIS Toxicological Review of Methyl Ethyl Ketone (2003 Final)

    EPA Science Inventory

    EPA announced the release of the final report, Toxicological Review of Methyl Ethyl Ketone: in support of the Integrated Risk Information System (IRIS). The updated Summary for Methyl Ethyl Ketone and accompanying toxicological review have been added to the IRIS Database....

  7. IRIS TOXICOLOGICAL REVIEW AND SUMMARY DOCUMENTS FOR CHLORDECONE (KEPONE) (2009 FINAL)

    EPA Science Inventory

    EPA is announcing the release of the final report, Toxicological Review of Chlordecone (Kepone): in support of the Integrated Risk Information System (IRIS). The updated Summary for Chlordecone (Kepone) and accompanying Quickview have also been added to the IRIS Database....

  8. Payload accommodation and development planning tools - A Desktop Resource Leveling Model (DRLM)

    NASA Technical Reports Server (NTRS)

    Hilchey, John D.; Ledbetter, Bobby; Williams, Richard C.

    1989-01-01

    The Desktop Resource Leveling Model (DRLM) has been developed as a tool to rapidly structure and manipulate accommodation, schedule, and funding profiles for any kind of experiment, payload, facility, flight system, or other project hardware. The model creates detailed databases describing 'end item' parameters, such as mass, volume, power requirements or costs, and schedules for payload, subsystem, or flight system elements. It automatically spreads costs by calendar quarters and sums costs or accommodation parameters by total project, payload, facility, payload launch, or program phase. Final results can be saved or printed out, automatically documenting all assumptions, inputs, and defaults.

  9. "XANSONS for COD": a new small BOINC project in crystallography

    NASA Astrophysics Data System (ADS)

    Neverov, Vladislav S.; Khrapov, Nikolay P.

    2018-04-01

    "XANSONS for COD" (http://xansons4cod.com) is a new BOINC project aimed at creating the open-access database of simulated x-ray and neutron powder diffraction patterns for nanocrystalline phase of materials from the collection of the Crystallography Open Database (COD). The project uses original open-source software XaNSoNS to simulate diffraction patterns on CPU and GPU. This paper describes the scientific problem this project solves, the project's internal structure, its operation principles and organization of the final database.

  10. Modelling Conditions and Health Care Processes in Electronic Health Records: An Application to Severe Mental Illness with the Clinical Practice Research Datalink.

    PubMed

    Olier, Ivan; Springate, David A; Ashcroft, Darren M; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos

    2016-01-01

    The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task, using severe mental illness (SMI) as an example. We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. We have described a framework for researchers of Electronic Health Records databases for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists.
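
    A minimal sketch of the word-stub/code-stub search idea (not the pcdsearch/Rpcdsearch implementation; the dictionary fragment and the stubs are invented):

      import re

      # Hypothetical fragment of a Read code dictionary: code -> description.
      CODE_DICT = {
          "E11..": "Schizophrenic disorders",
          "E12..": "Affective psychoses",
          "H33..": "Asthma",
      }

      def stub_search(word_stubs, code_stubs):
          """Return codes whose description matches a word stub, or whose code
          matches a code stub, for clinician review."""
          word_re = [re.compile(s, re.IGNORECASE) for s in word_stubs]
          hits = {}
          for code, desc in CODE_DICT.items():
              if any(r.search(desc) for r in word_re) or \
                 any(code.startswith(s) for s in code_stubs):
                  hits[code] = desc
          return hits

      print(stub_search(word_stubs=["schizo", "psychos"], code_stubs=["E1"]))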

  11. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, a system transformation is first introduced with a general performance index, and the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, a critic-only structure is developed, in which only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on a gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples by computer simulation.
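
    A minimal sketch of value iteration-based Q-learning on a discrete toy system (the paper's VIQL uses a neural-network critic for continuous systems; the dynamics, costs, and discount factor here are invented):

      import numpy as np

      # Toy deterministic system: 3 states, 2 actions; next_state[s][a], cost[s][a].
      next_state = np.array([[1, 2], [2, 0], [2, 1]])
      cost = np.array([[1.0, 4.0], [2.0, 0.5], [0.0, 3.0]])
      gamma = 0.9

      Q = np.zeros((3, 2))  # easy-to-realize initial condition: Q0 = 0
      for _ in range(200):
          # Value iteration: Q <- cost + gamma * min over a' of Q(s', a').
          Q_next = cost + gamma * Q[next_state].min(axis=2)
          if np.abs(Q_next - Q).max() < 1e-9:
              break
          Q = Q_next

      policy = Q.argmin(axis=1)  # greedy (cost-minimizing) controller
      print(Q, policy)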

  12. Digital bedrock mapping at the Geological Survey of Norway: BGS SIGMA tool and in-house database structure

    NASA Astrophysics Data System (ADS)

    Gasser, Deta; Viola, Giulio; Bingen, Bernard

    2016-04-01

    Since 2010, the Geological Survey of Norway has been implementing and continuously developing a digital workflow for geological bedrock mapping in Norway, from fieldwork to final product. Our workflow is based on the ESRI ArcGIS platform, and we use rugged Windows computers in the field. Three different hardware solutions have been tested over the past five years (2010-2015): (1) the Panasonic Toughbook CF-19 (2.3 kg), (2) the Panasonic Toughbook CF-H2 Field (1.6 kg), and (3) the Motion F5t tablet (1.5 kg). For the collection of point observations in the field we mainly use the SIGMA Mobile application in ESRI ArcGIS, developed by the British Geological Survey, which allows mappers to store georeferenced comments, structural measurements, sample information, photographs, sketches, log information etc. in a Microsoft Access database. The application is freely downloadable from the BGS website. For line and polygon work we use our in-house database, which is currently under revision. Our line database consists of three feature classes, (1) bedrock boundaries, (2) bedrock lineaments, and (3) bedrock lines, with each feature class having up to 24 different attribute fields. Our polygon database consists of one feature class with 38 attribute fields, enabling storage of various information concerning lithology, stratigraphic order, age, metamorphic grade and tectonic subdivision. The polygon and line databases are coupled via topology in ESRI ArcGIS, which allows us to edit them simultaneously. This approach has been applied in two large-scale 1:50 000 bedrock mapping projects, one in the Kongsberg domain of the Sveconorwegian orogen, and the other in the greater Trondheim area (Orkanger) in the Caledonian belt. The mapping projects combined the collection of high-resolution geophysical data, digital acquisition of field data, and collection of geochronological, geochemical and petrological data. During the Kongsberg project, some 25,000 field observation points were collected by eight geologists; for the Orkanger project, some 2,100 field observation points were collected by three geologists. Several advantages of the digital approach became clear during these projects: (1) the systematic collection of geological field data in a common format allows easy access to and exchange of data among different geologists; (2) background information such as geophysics and DEMs is more easily accessed in the field; (3) the workflow from field data collection to final map product is faster. Obvious disadvantages include: (1) heavy(ish) and expensive hardware; (2) battery life and other technical issues in the field; (3) the need for central in-house storage of field observation points (large amounts of data!); and (4) the need for acceptance of, and training in, a common workflow by all involved geologists.

  13. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    NASA Astrophysics Data System (ADS)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when to use them and how to use them in an appropriate way. In this submission, descriptions of selected NoSQL databases are presented. Each of the databases is analysed with a primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited references. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, a proposal for a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
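
    A minimal sketch of the contrast in reference handling (data invented): a foreign key is enforced by the relational engine, whereas a document store typically leaves reference resolution, and dangling references, to the application:

      # In a relational database the link is enforced declaratively, e.g.:
      #   CREATE TABLE orders (customer_id INTEGER REFERENCES customer(id), ...);
      # In a typical document store the "reference" is just a stored id, and the
      # application must resolve it and handle dangling references itself.
      customers = {"c1": {"name": "Ada"}}
      orders = [{"id": "o1", "customer_id": "c1"}, {"id": "o2", "customer_id": "c9"}]

      def resolve(order):
          customer = customers.get(order["customer_id"])  # may be None: no FK check
          return {**order, "customer": customer}

      for o in orders:
          print(resolve(o))  # o2 resolves to None, a dangling reference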

  14. Alternative power supply systems for remote industrial customers

    NASA Astrophysics Data System (ADS)

    Kharlamova, N. V.; Khalyasmaa, A. I.; Eroshenko, S. A.

    2017-06-01

    The paper addresses the problem of alternative power supply of remote industrial clusters with renewable electric energy generation. Based on a comparison of different technologies, consideration is given to wind energy. The authors present a methodology for calculating mean expected wind generation output, based on the Weibull distribution, which provides an effective express tool for preliminary assessment of the required installed generation capacity. The case study is based on real data, including a database of meteorological information, relief characteristics, power system topology, etc. Wind generation feasibility estimation for a specific territory is followed by power flow calculations using a Monte Carlo methodology. Finally, the paper provides a set of recommendations to ensure safe and reliable power supply for the final customers and, subsequently, to support the sustainable development of regions located far from megalopolises and industrial centres.
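
    A minimal sketch of the mean-expected-output idea, integrating an invented turbine power curve against a Weibull wind-speed density (the shape, scale, and power-curve parameters are assumptions):

      import numpy as np

      k, c = 2.0, 7.5  # assumed Weibull shape and scale (m/s) from met data

      def weibull_pdf(v):
          # Weibull probability density of wind speed v.
          return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

      def power_curve(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
          # Simplified turbine curve (MW): cubic ramp between cut-in and rated speed.
          p = np.where((v >= cut_in) & (v < rated_v),
                       rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3, 0.0)
          return np.where((v >= rated_v) & (v <= cut_out), rated_p, p)

      v = np.linspace(0.0, 30.0, 3001)
      mean_power = np.trapz(power_curve(v) * weibull_pdf(v), v)
      print(f"mean expected output ~ {mean_power:.2f} MW")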

  15. How Artificial Intelligence Can Improve Our Understanding of the Genes Associated with Endometriosis: Natural Language Processing of the PubMed Database

    PubMed Central

    Mashiach, R.; Cohen, S.; Kedem, A.; Baron, A.; Zajicek, M.; Feldman, I.; Seidman, D.; Soriano, D.

    2018-01-01

    Endometriosis is a disease characterized by the development of endometrial tissue outside the uterus, but its cause remains largely unknown. Numerous genes have been studied and proposed to help explain its pathogenesis. However, the large number of these candidate genes has made functional validation through experimental methodologies nearly impossible. Computational methods could provide a useful alternative for prioritizing those most likely to be susceptibility genes. Using artificial intelligence applied to text mining, this study analyzed the genes involved in the pathogenesis, development, and progression of endometriosis. The data extraction by text mining of the endometriosis-related genes in the PubMed database was based on natural language processing, and the data were filtered to remove false positives. Using data from the text mining and gene network information as input for the web-based tool, 15,207 endometriosis-related genes were ranked according to their score in the database. Characterization of the filtered gene set through gene ontology, pathway, and network analysis provided information about the numerous mechanisms hypothesized to be responsible for the establishment of ectopic endometrial tissue, as well as the migration, implantation, survival, and proliferation of ectopic endometrial cells. Finally, the human genome was scanned through various databases using filtered genes as a seed to determine novel genes that might also be involved in the pathogenesis of endometriosis but which have not yet been characterized. These genes could be promising candidates to serve as useful diagnostic biomarkers and therapeutic targets in the management of endometriosis. PMID:29750165

  16. How Artificial Intelligence Can Improve Our Understanding of the Genes Associated with Endometriosis: Natural Language Processing of the PubMed Database.

    PubMed

    Bouaziz, J; Mashiach, R; Cohen, S; Kedem, A; Baron, A; Zajicek, M; Feldman, I; Seidman, D; Soriano, D

    2018-01-01

    Endometriosis is a disease characterized by the development of endometrial tissue outside the uterus, but its cause remains largely unknown. Numerous genes have been studied and proposed to help explain its pathogenesis. However, the large number of these candidate genes has made functional validation through experimental methodologies nearly impossible. Computational methods could provide a useful alternative for prioritizing those most likely to be susceptibility genes. Using artificial intelligence applied to text mining, this study analyzed the genes involved in the pathogenesis, development, and progression of endometriosis. The data extraction by text mining of the endometriosis-related genes in the PubMed database was based on natural language processing, and the data were filtered to remove false positives. Using data from the text mining and gene network information as input for the web-based tool, 15,207 endometriosis-related genes were ranked according to their score in the database. Characterization of the filtered gene set through gene ontology, pathway, and network analysis provided information about the numerous mechanisms hypothesized to be responsible for the establishment of ectopic endometrial tissue, as well as the migration, implantation, survival, and proliferation of ectopic endometrial cells. Finally, the human genome was scanned through various databases using filtered genes as a seed to determine novel genes that might also be involved in the pathogenesis of endometriosis but which have not yet been characterized. These genes could be promising candidates to serve as useful diagnostic biomarkers and therapeutic targets in the management of endometriosis.

  17. A future Outlook: Web based Simulation of Hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Islam, A. S.; Piasecki, M.

    2003-12-01

    Despite recent advances in presenting simulation results as 3D graphs or animated contours, the modeling user community still faces shortcomings when trying to move around and analyze data. Typical problems include the lack of common platforms with a standard vocabulary to exchange simulation results from different numerical models, insufficient descriptions of data (metadata), a lack of robust search and retrieval tools for data, and difficulties in reusing simulation domain knowledge. This research demonstrates how to create a shared simulation domain on the WWW and run a number of models through multi-user interfaces. Firstly, meta-datasets have been developed to describe hydrodynamic model data based on the geographic metadata standard (ISO 19115), which has been extended to satisfy the needs of the hydrodynamic modeling community. The Extensible Markup Language (XML) is used to publish this metadata via the Resource Description Framework (RDF). A specific domain ontology for Web Based Simulation (WBS) has been developed to explicitly define the vocabulary for the knowledge-based simulation system. Subsequently, this knowledge-based system is converted into an object model using the Meta Object Facility (MOF). The knowledge-based system acts as a meta-model for the object-oriented system, which aids in reusing the domain knowledge. Specific simulation software has been developed based on the object-oriented model. Finally, all model data are stored in an object-relational database. Database back-ends help store, retrieve and query information efficiently. This research uses open-source software and technology such as Java Servlets and JSP, the Apache web server, the Tomcat servlet engine, PostgreSQL databases, the Protégé ontology editor, RDQL and RQL for querying RDF at the semantic level, and the Jena Java API for RDF. We also use international standards and specifications such as the ISO 19115 metadata standard, XML, RDF, OWL, XMI, and UML. The final web-based simulation product is deployed as Web Archive (WAR) files, which are platform- and OS-independent and can be used on Windows, UNIX, or Linux. Keywords: Apache, ISO 19115, Java Servlet, Jena, JSP, Metadata, MOF, Linux, Ontology, OWL, PostgreSQL, Protégé, RDF, RDQL, RQL, Tomcat, UML, UNIX, Windows, WAR, XML

  18. Guide on Data Models in the Selection and Use of Database Management Systems. Final Report.

    ERIC Educational Resources Information Center

    Gallagher, Leonard J.; Draper, Jesse M.

    A tutorial introduction to data models in general is provided, with particular emphasis on the relational and network models defined by the two proposed ANSI (American National Standards Institute) database language standards. Examples based on the network and relational models include specific syntax and semantics, while examples from the other…

  19. Exploring the potential offered by legacy soil databases for ecosystem services mapping of Central African soils

    NASA Astrophysics Data System (ADS)

    Verdoodt, Ann; Baert, Geert; Van Ranst, Eric

    2014-05-01

    Central African soil resources are characterised by large variability, ranging from stony, shallow or sandy soils with poor life-sustaining capabilities to highly weathered soils that recycle and support large amounts of biomass. Socio-economic drivers within this largely rural region foster inappropriate land use and management, threaten soil quality, and finally culminate in declining soil productivity and increasing food insecurity. For the development of sustainable land use strategies targeting development planning and natural hazard mitigation, decision makers often rely on legacy soil maps and soil profile databases. Recent projects financed through development cooperation led to the design of soil information systems for Rwanda, D.R. Congo, and (ongoing) Burundi. A major challenge is to exploit these existing soil databases and convert them into soil inference systems through an optimal combination of digital soil mapping techniques, land evaluation tools, and biogeochemical models. This presentation aims at (1) highlighting some key characteristics of typical Central African soils, (2) assessing the positional, geographic and semantic quality of the soil information systems, and (3) revealing the potential impacts of this quality on the use of these datasets for thematic mapping of soil ecosystem services (e.g. organic carbon storage, pH buffering capacity). Soil map quality is assessed considering positional and semantic quality, as well as geographic completeness. Descriptive statistics, decision tree classification and linear regression techniques are used to mine the soil profile databases. Geo-matching as well as class-matching approaches are considered when developing thematic maps. Variability in inherent as well as dynamic soil properties within the soil taxonomic units is highlighted. It is hypothesized that within-unit variation in soil properties strongly affects the use and interpretation of thematic maps for ecosystem services mapping. Results will mainly be based on analyses done in Rwanda, complemented with ongoing research results and prospects for Burundi.

  20. Development of an Online Library of Patient-Reported Outcome Measures in Gastroenterology: The GI-PRO Database

    PubMed Central

    Khanna, Puja; Agarwal, Nikhil; Khanna, Dinesh; Hays, Ron D.; Chang, Lin; Bolus, Roger; Melmed, Gil; Whitman, Cynthia B.; Kaplan, Robert M.; Ogawa, Rikke; Snyder, Bradley; Spiegel, Brennan M.R.

    2014-01-01

    OBJECTIVES Because gastrointestinal (GI) illnesses can cause physical, emotional, and social distress, patient-reported outcomes (PROs) are used to guide clinical decision making, conduct research, and seek drug approval. It is important to develop a mechanism for identifying, categorizing, and evaluating the over 100 GI PROs that exist. Here we describe a new, National Institutes of Health (NIH)-supported, online PRO clearinghouse: the GI-PRO database. METHODS Using a protocol developed by the NIH Patient-Reported Outcome Measurement Information System (PROMIS®), we performed a systematic review to identify English-language GI PROs. We abstracted PRO items and developed an online searchable item database. We categorized symptoms into content “bins” to evaluate a framework for GI symptom reporting. Finally, we assigned a score for the methodological quality of each PRO represented in the published literature (0–20 range; higher indicates better). RESULTS We reviewed 15,697 titles (κ > 0.6 for title and abstract selection), from which we identified 126 PROs. Review of the PROs revealed eight GI symptom “bins”: (i) abdominal pain, (ii) bloat/gas, (iii) diarrhea, (iv) constipation, (v) bowel incontinence/soilage, (vi) heartburn/reflux, (vii) swallowing, and (viii) nausea/vomiting. In addition to these symptoms, the PROs covered four psychosocial domains: (i) behaviors, (ii) cognitions, (iii) emotions, and (iv) psychosocial impact. The quality scores were generally low (mean 8.88 ± 4.19 on the 0–20 scale). In addition, 51% did not include patient input in developing the PRO, and 41% provided no information on score interpretation. CONCLUSIONS GI PROs cover a wide range of biopsychosocial symptoms. Although plentiful, GI PROs are limited by low methodological quality. Our online PRO library (www.researchcore.org/gipro/) can help in selecting PROs for clinical and research purposes. PMID:24343547

  1. PHENOPSIS DB: an Information System for Arabidopsis thaliana phenotypic data in an environmental context

    PubMed Central

    2011-01-01

    Background Interest in plant × environment interactions has been renewed in the post-genomic era. In this context, high-throughput phenotyping platforms have been developed to create controlled environmental scenarios in which the phenotypic responses of multiple genotypes can be analysed in a reproducible way. These platforms benefit hugely from the development of suitable databases for storage, sharing and analysis of the large amounts of data collected. In the model plant Arabidopsis thaliana, most databases available to the scientific community contain data related to genetic and molecular biology, and they describe plant developmental stages and experimental metadata, such as environmental conditions, inadequately. Our goal was to develop a comprehensive information system for sharing the data collected in PHENOPSIS, an automated platform for Arabidopsis thaliana phenotyping, with the scientific community. Description PHENOPSIS DB is a publicly available (URL: http://bioweb.supagro.inra.fr/phenopsis/) information system developed for storage, browsing and sharing of online data generated by the PHENOPSIS platform, offline data collected by experimenters, and experimental metadata. It provides modules coupled to a Web interface for (i) the visualisation of environmental data of an experiment, (ii) the visualisation and statistical analysis of phenotypic data, and (iii) the analysis of Arabidopsis thaliana plant images. Conclusions Firstly, data stored in PHENOPSIS DB are of interest to the Arabidopsis thaliana community, particularly in allowing phenotypic meta-analyses directly linked to environmental conditions, on which publications are still scarce. Secondly, the data and image analysis modules can be downloaded from the Web interface for direct use or as the basis for modifications according to new requirements. Finally, the structure of PHENOPSIS DB provides a useful template for the development of other similar databases related to genotype × environment interactions. PMID:21554668

  2. Cost and Schedule Analytical Techniques Development

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This Final Report summarizes the activities performed by Science Applications International Corporation (SAIC) under contract NAS 8-40431, the "Cost and Schedule Analytical Techniques Development Contract" (CSATD), during Option Year 3 (December 1, 1997 through November 30, 1998). This Final Report is in compliance with Paragraph 5 of Section F of the contract. The CSATD contract provides technical products and deliverables in the form of parametric models, databases, methodologies, studies, and analyses to the NASA Marshall Space Flight Center's (MSFC) Engineering Cost Office (PP03), the Program Plans and Requirements Office (PP02), and other user organizations. Detailed Monthly Reports were submitted to MSFC in accordance with the contract's Statement of Work, Section IV, "Reporting and Documentation". These reports spelled out each month's specific work performed, deliverables submitted, major meetings conducted, and other pertinent information; this Final Report therefore summarizes those activities at a higher level. During this contract Option Year, SAIC expended 25,745 hours in the performance of tasks called out in the Statement of Work, representing approximately 14 full-time EPs. The effort included the Huntsville-based team, plus SAIC specialists in San Diego, Ames Research Center, Tampa, and Colorado Springs performing specific tasks for which they are uniquely qualified.

  3. User's Guide: Minerals Management Service Outer Continental Shelf Activity Database (MOAD). Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, C.K.; Causley, M.C.; Yocke, M.A.

    1994-04-01

    The 1990 Clean Air Act Amendments require the Minerals Management Service (MMS) to conduct a research study to assess the potential onshore air quality impact from the development of outer continental shelf (OCS) petroleum resources in the Gulf of Mexico. The need for this study arises from concern about the cumulative impacts of current and future OCS emissions on ozone concentrations in nonattainment areas, particularly in Texas and Louisiana. To make quantitative assessments of these impacts, MMS has commissioned an air quality study which includes as a major component the development of a comprehensive emission inventory for photochemical grid modeling. The emission inventories prepared in this study include both onshore and offshore emissions. All relevant emissions from anthropogenic and biogenic sources are considered, with special attention focused on offshore anthropogenic sources, including OCS oil and gas production facilities, crew and supply vessels and helicopters serving OCS facilities, commercial shipping and fishing, recreational boating, intercoastal barge traffic and other sources located in the adjacent state waters. This document describes the database created during this study, which contains the activity information collected for the development of the OCS platform, crew/supply vessel, and helicopter emission inventories.

  4. An expert system based software sizing tool, phase 2

    NASA Technical Reports Server (NTRS)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge-based system, with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence, knowledge from human experts, and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules, and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
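
    The two-stage prediction is easy to caricature in code. The Python sketch below is a minimal analogue with an invented rule table, invented component weights and an invented nonlinearity; the actual system derived these from expert knowledge and the COSMIC data.

        import math

        # Invented rules: specification category -> generic components implied
        RULES = {
            "screen I/O":     ["user_interface"],
            "file handling":  ["data_store"],
            "matrix algebra": ["numeric_kernel"],
            "report output":  ["user_interface", "data_store"],
        }
        # Invented per-component size weights (lines of source code)
        WEIGHTS = {"user_interface": 400, "data_store": 250, "numeric_kernel": 900}

        def estimate_sloc(spec_categories):
            counts = {}
            for cat in spec_categories:
                for comp in RULES.get(cat, []):
                    counts[comp] = counts.get(comp, 0) + 1
            # Nonlinear sizing function: repeated components grow sub-linearly
            return sum(w * math.sqrt(counts.get(c, 0)) for c, w in WEIGHTS.items())

        print(round(estimate_sloc(["screen I/O", "file handling", "report output"])))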

  5. Image Reference Database in Teleradiology: Migrating to WWW

    NASA Astrophysics Data System (ADS)

    Pasqui, Valdo

    The paper presents a multimedia Image Reference Data Base (IRDB) used in Teleradiology. The application was developed at the University of Florence in the framework of the European Community TELEMED Project. TELEMED's overall goals and IRDB requirements are outlined and the resulting architecture is described. IRDB is a multisite database containing radiological images selected for their scientific interest, together with their related information. The architecture consists of a set of IRDB Installations which are accessed from Viewing Stations (VS) located at different medical sites. The interaction between VS and IRDB Installations follows the client-server paradigm and uses an OSI level-7 protocol named the Telemed Communication Language. After reviewing the Florence prototype implementation and experimentation, IRDB migration to the World Wide Web (WWW) is discussed. A possible scenario for implementing IRDB on the basis of the WWW model is depicted, in order to exploit the capabilities of WWW servers and browsers. Finally, the advantages of this conversion are outlined.

  6. SBROME: a scalable optimization and module matching framework for automated biosystems design.

    PubMed

    Huynh, Linh; Tsoukalas, Athanasios; Köppe, Matthias; Tagkopoulos, Ilias

    2013-05-17

    The development of a scalable framework for biodesign automation is a formidable challenge given the expected increase in part availability and the ever-growing complexity of synthetic circuits. To allow for (a) the use of previously constructed and characterized circuits or modules and (b) the implementation of designs that can scale up to hundreds of nodes, we here propose a divide-and-conquer Synthetic Biology Reusable Optimization Methodology (SBROME). An abstract user-defined circuit is first transformed and matched against a module database that incorporates circuits that have previously been experimentally characterized. The resulting circuit is then decomposed into subcircuits that are populated with the set of parts that best approximate the desired function. Finally, all subcircuits are characterized and deposited back into the module database for future reuse. We successfully applied SBROME to two alternative designs of a modular 3-input multiplexer that utilize pre-existing logic gates and characterized biological parts.

  7. LabPatch, an acquisition and analysis program for patch-clamp electrophysiology.

    PubMed

    Robinson, T; Thomsen, L; Huizinga, J D

    2000-05-01

    An acquisition and analysis program, "LabPatch," has been developed for use in patch-clamp research. LabPatch controls any patch-clamp amplifier, acquires and records data, runs voltage protocols, plots and analyzes data, and connects to spreadsheet and database programs. Controls within LabPatch are grouped by function on one screen, much like an oscilloscope front panel. The software is mouse driven, so that the user need only point and click. Finally, the ability to copy data to other programs running in Windows 95/98, and the ability to keep track of experiments using a database, make LabPatch extremely versatile. The system requirements include Windows 95/98, at least a 100-MHz processor and 16 MB RAM, a data acquisition card, digital-to-analog converter, and a patch-clamp amplifier. LabPatch is available free of charge at http://www.fhs.mcmaster.ca/huizinga/.

  8. Probing concept of critical thinking in nursing education in Iran: a concept analysis.

    PubMed

    Tajvidi, Mansooreh; Ghiyasvandian, Shahrzad; Salsali, Mahvash

    2014-06-01

    Given the wide disagreement over the definition of critical thinking in different disciplines, defining and standardizing the concept for the discipline of nursing is essential. Moreover, there is limited scientific evidence regarding critical thinking in the context of nursing in Iran. The aim of this study was to analyze and clarify the concept of critical thinking in nursing education in Iran. We employed the hybrid model to define the concept of critical thinking. The hybrid model has three interconnected phases: the theoretical phase, the fieldwork phase, and the final analytic phase. In the theoretical phase, we searched online scientific databases (such as Elsevier, Wiley, CINAHL, ProQuest, Ovid, and Springer, as well as Iranian databases such as SID, Magiran, and Iranmedex). In the fieldwork phase, a purposive sample of 17 nursing faculty members, PhD students, clinical instructors, and clinical nurses was recruited. Participants were interviewed using an interview guide. In the analytical phase we compared the data from the theoretical and fieldwork phases. The concept of critical thinking had many different antecedents, attributes, and consequences. The antecedents, attributes, and consequences identified in the theoretical phase were in some ways different from, and in some ways similar to, those identified in the fieldwork phase. Finally, the concept of critical thinking in nursing education in Iran was clarified. Critical thinking is a logical, situational, purposive, and outcome-oriented thinking process. It is an acquired and evolving ability which develops individually. Such a thinking process could lead to professional accountability, personal development, God's consent, conscience appeasement, and personality development. Copyright © 2014. Published by Elsevier B.V.

  9. Megastudies, crowdsourcing, and large datasets in psycholinguistics: An overview of recent developments.

    PubMed

    Keuleers, Emmanuel; Balota, David A

    2015-01-01

    This paper introduces and summarizes the special issue on megastudies, crowdsourcing, and large datasets in psycholinguistics. We provide a brief historical overview and show how the papers in this issue have extended the field by compiling new databases and making important theoretical contributions. In addition, we discuss several studies that use text corpora to build distributional semantic models to tackle various interesting problems in psycholinguistics. Finally, as is the case across the papers, we highlight some methodological issues that are brought forth via the analyses of such datasets.

  10. Pulse combustion engineering research laboratory for indirect heating applications (PERL-IH). Final report, October 1, 1989-June 30, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belles, F.E.

    1993-01-01

    Uncontrolled NOx emissions from a variety of pulse combustors were measured. The implementation of flue-gas recirculation to reduce NOx was studied. A flexible workstation for parametric testing was built and used to study the phasing between pressure and heat release, and effects of fuel/air mixing on performance. Exhaust-pipe heat transfer was analyzed. An acoustic model of pulse combustion was developed. Technical support was provided to manufacturers on noise, ignition and condensation. A computerized bibliographic database on pulse combustion was created.

  11. High-level specification of a proposed information architecture for support of a bioterrorism early-warning system.

    PubMed

    Berkowitz, Murray R

    2013-01-01

    Current information systems for use in detecting bioterrorist attacks lack a consistent, overarching information architecture. An overview of the use of biological agents as weapons during a bioterrorist attack is presented. Proposed are the design, development, and implementation of a medical informatics system to mine pertinent databases, retrieve relevant data, invoke appropriate biostatistical and epidemiological software packages, and automatically analyze these data. The top-level information architecture is presented. Systems requirements and functional specifications for this level are presented. Finally, future studies are identified.

  12. Physics in Cuba from the Perspective of Bibliometrics

    NASA Astrophysics Data System (ADS)

    Marx, Werner; Cardona, Manuel

    We present a bibliometric analysis of the development of the physical sciences in Cuba since the revolution of 1959. We analyze, using available databases (Web of Science, Essential Science Indicators, INSPEC), the development of the output (number of publications of authors based in Cuba) and of their impact (number of citations) from 1959 until now. We discuss the productivity of Cuba in comparison to the Latin American sister republics and the collaborative efforts between Cuba and highly developed countries. The most important areas of scientific activity within the field of physics, the preferred journals and the leading affiliations are identified. The most frequently cited Cuban physics publications are given. Finally, the overall scientific ranking of Cuba among the world nations is investigated.

  13. S-World: A high resolution global soil database for simulation modelling (Invited)

    NASA Astrophysics Data System (ADS)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high-resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess, for example, global soil carbon stocks. However, they do not allow for simulation runs with, for example, crop growth simulation models, as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using, for example, drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database. Step 3: A model of soil formation is developed that focuses on the basic conceptual question of where, given a specific soil type, a particular location falls within the range of a particular soil property. The soil properties are predicted for each grid cell based on the soil type, the corresponding ranges of soil properties, and the co-variables. Step 4: Standard depth profiles are developed for each of the soil types using the diagnostic criteria of the soil types and soil profile information from the ISRIC-WISE database. The standard soil profiles are combined with the predicted values for the topsoil and subsoil, yielding unique soil profiles at each location. Step 5: In a final step, additional soil properties are added to the database using averages for the soil types and pedo-transfer functions. The methodology, named S-World (Soils of the World), results in readily available global maps with quantitative pedon data for modelling purposes. It forms the basis for the Global Gridded Crop Model Intercomparison carried out within AgMIP.
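
    Step 3 is the conceptual heart of the method and can be sketched compactly. The Python fragment below uses invented property ranges and a single rescaled co-variable to show how a grid cell's value is placed within its soil type's range; the real model uses many co-variables on topography, climate and land cover.

        # Invented (min, max) topsoil organic carbon ranges per soil type,
        # standing in for ranges derived from the ISRIC-WISE profiles (Step 2)
        RANGES = {"Ferralsol": (1.0, 4.0), "Arenosol": (0.1, 1.0)}

        def predict_property(soil_type, covariate_scaled):
            """Place a cell within its type's range; covariate_scaled is a
            co-variable (e.g. rainfall) rescaled to [0, 1] for that cell."""
            lo, hi = RANGES[soil_type]
            return lo + covariate_scaled * (hi - lo)

        # A wet Ferralsol cell sits high in its range; a dry Arenosol sits low
        print(predict_property("Ferralsol", 0.8), predict_property("Arenosol", 0.2))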

  14. Using binary classification to prioritize and curate articles for the Comparative Toxicogenomics Database.

    PubMed

    Vishnyakova, Dina; Pasche, Emilie; Ruch, Patrick

    2012-01-01

    We report on the original integration of an automatic text categorization pipeline, called ToxiCat (Toxicogenomic Categorizer), which we developed to classify and prioritize biomedical documents in order to speed up the curation of the Comparative Toxicogenomics Database (CTD). The task can basically be described as a binary classification task, where a scoring function is used to rank a selected set of articles. Components of a question-answering system are then used to extract CTD-specific annotations from the ranked list of articles. The ranking function is generated using a Support Vector Machine, which combines three main modules: an information retrieval engine for MEDLINE (EAGLi), a gene normalization service (NormaGene) developed for a previous BioCreative campaign, and a set of answering components and an entity recognizer for diseases and chemicals. The main components of the pipeline are publicly available both as a web application and as web services. The specific integration performed for the BioCreative competition is available via a web user interface at http://pingu.unige.ch:8080/Toxicat.
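
    The prioritization idea, ranking by classifier score rather than by hard label, can be shown in a few lines. The Python sketch below uses toy bag-of-words features; ToxiCat itself combines retrieval, gene-normalization and question-answering signals in its SVM.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC

        train_docs = ["chemical induces gene expression in liver",
                      "weather report for tuesday",
                      "arsenic alters cytochrome expression",
                      "recipe for lentil soup"]
        labels = [1, 0, 1, 0]  # 1 = relevant for CTD curation

        vec = TfidfVectorizer()
        svm = LinearSVC().fit(vec.fit_transform(train_docs), labels)

        # Rank new abstracts by decision score, highest priority first
        new_docs = ["cadmium exposure changes kinase expression in kidney",
                    "local football scores and highlights"]
        scores = svm.decision_function(vec.transform(new_docs))
        for doc, s in sorted(zip(new_docs, scores), key=lambda t: -t[1]):
            print(f"{s:+.2f}  {doc}")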

  15. Construction of a century solar chromosphere data set for solar activity related research

    NASA Astrophysics Data System (ADS)

    Lin, Ganghua; Wang, Xiao Fan; Yang, Xiao; Liu, Suo; Zhang, Mei; Wang, Haimin; Liu, Chang; Xu, Yan; Tlatov, Andrey; Demidov, Mihail; Borovik, Aleksandr; Golovko, Aleksey

    2017-06-01

    This article introduces our ongoing project "Construction of a Century Solar Chromosphere Data Set for Solar Activity Related Research". Solar activities are the major sources of space weather, which affects human lives. Serious space weather consequences include, for instance, interruption of space communication and navigation, compromised safety of astronauts and satellites, and damage to power grids. Solar activity research therefore has both scientific and social impacts. The major database is built up from digitized and standardized film data obtained by several observatories around the world and covers a time span of more than 100 years. After careful calibration, we will develop feature extraction and data mining tools and provide them, together with the comprehensive database, to the astronomical community. Our final goal is to address several physical issues: filament behavior in solar cycles, the abnormal behavior of solar cycle 24, large-scale solar eruptions, and sympathetic remote brightenings. Significant progress is expected in data mining algorithms and software development, which will benefit the scientific analysis and eventually advance our understanding of solar cycles.

  16. Electromyography data for non-invasive naturally-controlled robotic hand prostheses

    PubMed Central

    Atzori, Manfredo; Gijsberts, Arjan; Castellini, Claudio; Caputo, Barbara; Hager, Anne-Gabrielle Mittaz; Elsig, Simone; Giatsidis, Giorgio; Bassetto, Franco; Müller, Henning

    2014-01-01

    Recent advances in rehabilitation robotics suggest that it may be possible for hand-amputated subjects to recover at least a significant part of the lost hand functionality. The control of robotic prosthetic hands using non-invasive techniques is still a challenge in real life: myoelectric prostheses give limited control capabilities, the control is often unnatural and must be learned through long training times. Meanwhile, scientific literature results are promising but they are still far from fulfilling real-life needs. This work aims to close this gap by allowing worldwide research groups to develop and test movement recognition and force control algorithms on a benchmark scientific database. The database is targeted at studying the relationship between surface electromyography, hand kinematics and hand forces, with the final goal of developing non-invasive, naturally controlled, robotic hand prostheses. The validation section verifies that the data are similar to data acquired in real-life conditions, and that recognition of different hand tasks by applying state-of-the-art signal features and machine-learning algorithms is possible. PMID:25977804

  17. An Online Database Producer's Memoirs and Memories of an Online Pioneer and The Database Industry: Looking into the Future.

    ERIC Educational Resources Information Center

    Kollegger, James G.; And Others

    1988-01-01

    In the first of three articles, the producer of Energyline, Energynet, and Tele/Scope recalls the development of the databases and database business strategies. The second describes the development of biomedical online databases, and the third discusses future developments, including full text databases, database producers as online host, and…

  18. Application Program Interface for the Orion Aerodynamics Database

    NASA Technical Reports Server (NTRS)

    Robinson, Philip E.; Thompson, James

    2013-01-01

    The Application Programming Interface (API) for the Crew Exploration Vehicle (CEV) Aerodynamic Database has been developed to provide software developers an easily implemented, fully self-contained method of accessing the CEV Aerodynamic Database for use in their analysis and simulation tools. The API is programmed in C and provides a series of functions to interact with the database, such as initialization, selecting various options, and calculating the aerodynamic data. No special functions (file read/write, table lookup) are required on the host system other than those included with a standard ANSI C installation. It reads one or more files of aero data tables. Previous releases of aerodynamic databases for space vehicles have included only data tables and a document describing the algorithm and equations for combining them into the total aerodynamic forces and moments. This process required each software tool to have a unique implementation of the database code. Errors or omissions in the documentation, or errors in the implementation, led to a lengthy and burdensome process of having to debug each instance of the code. Additionally, input file formats differ for each space vehicle simulation tool, requiring the aero database tables to be reformatted to meet each tool's input file structure requirements. Finally, the capabilities of built-in table lookup routines vary for each simulation tool. Implementation of a new database may require an update to, and verification of, the table lookup routines; this may be required if the number of dimensions of a data table exceeds the capability of the simulation tool's built-in lookup routines. A single software solution was created to provide an aerodynamics software model that can be integrated into other simulation and analysis tools. The highly complex Orion aerodynamics model can then be quickly included in a wide variety of tools. The API code is written in ANSI C for ease of portability to a wide variety of systems. The input data files are in standard formatted ASCII, also for improved portability. The API contains its own implementation of multidimensional table reading and lookup routines. The same aerodynamics input file can be used without modification on all implementations. The turnaround time from aerodynamics model release to a working implementation is significantly reduced.
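
    The table-lookup service at the core of the API can be illustrated briefly. The API itself is ANSI C with its own lookup routines; the Python sketch below, with invented table values, shows the equivalent operation of interpolated lookup in a multidimensional aerodynamic table through a single shared implementation.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        mach  = np.array([0.5, 0.9, 1.2, 2.0])
        alpha = np.array([0.0, 5.0, 10.0])            # angle of attack, deg
        cd_table = np.array([[0.30, 0.34, 0.40],      # invented drag-coefficient
                             [0.38, 0.43, 0.50],      # grid, indexed [mach, alpha]
                             [0.55, 0.61, 0.70],
                             [0.42, 0.47, 0.55]])

        cd_lookup = RegularGridInterpolator((mach, alpha), cd_table)

        # One call replaces each tool's private table-reading and lookup code
        print(cd_lookup([[1.05, 7.5]]))  # CD interpolated at Mach 1.05, 7.5 deg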

  19. MAPS: The Organization of a Spatial Database System Using Imagery, Terrain, and Map Data

    DTIC Science & Technology

    1983-06-01

    segments which share the same pixel position. Finally, in any large system, a logical partitioning of the database must be performed in order to avoid… [the remainder of this excerpt, garbled OCR of sample query output for "theodore roosevelt memorial", is omitted]

  20. [The 'Beijing clinical database' on severe acute respiratory syndrome patients: its design, process, quality control and evaluation].

    PubMed

    2004-04-01

    To develop a large database on the clinical presentation, treatment and prognosis of all clinically diagnosed severe acute respiratory syndrome (SARS) cases in Beijing during the 2003 "crisis", in order to conduct further clinical studies. The database was designed by specialists, under the organization of the Beijing Commanding Center for SARS Treatment and Cure, and includes 686 data items in six sub-databases: primary medical-care seeking, vital signs, common symptoms and signs, treatment, laboratory and auxiliary tests, and cost. All hospitals having received SARS inpatients were involved in the project. Clinical data were transferred and coded by trained doctors and data entry was carried out by trained nurses, according to a uniform protocol. A series of procedures was carried out before the database was finally established, including programmed logic checking, digit-by-digit checking of a 5% random sample, data linkage for transferred cases, coding of characterized information, database structure standardization, case review by a computer program according to the SARS Clinical Diagnosis Criteria issued by the Ministry of Health, and exclusion of ineligible patients. The database involved 2148 probable SARS cases in accordance with the clinical diagnosis criteria, including 1291 with complete records. All cases and record-complete cases showed an almost identical distribution in sex, age, occupation, residence area and time of onset. The completion rate of data was not significantly different between the two groups except for some items on primary medical-care seeking. Specifically, the data completion rate was 73%-100% for primary medical-care seeking, 90% for common symptoms and signs, 100% for treatment, 98% for temperature, 90% for pulse, 100% for outcomes and 98% for costs in hospital. The set of cases collected in the Beijing Clinical Database of SARS Patients was fairly complete, and the cases with complete records can serve as excellent representatives of all cases. The completeness of data was quite satisfactory for the primary clinical items, which allows for further clinical studies.
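
    The digit-by-digit check on a 5% random sample is a generic quality-control step that can be sketched in a few lines of Python. The record structure below is invented; a fixed seed keeps the sample auditable, and every field of each sampled record is compared against the paper source.

        import random

        entered = {i: {"temp": 38.5, "pulse": 92} for i in range(1000)}
        paper = {i: dict(rec) for i, rec in entered.items()}
        paper[7]["pulse"] = 29  # simulate one transcription discrepancy

        rng = random.Random(2003)  # fixed seed keeps the sample auditable
        sample_ids = rng.sample(sorted(entered), k=len(entered) // 20)  # 5%

        errors = [(i, field) for i in sample_ids
                  for field in entered[i] if entered[i][field] != paper[i][field]]
        print(f"checked {len(sample_ids)} records, {len(errors)} field mismatches")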

  1. β-secretase inhibitors for Alzheimer's disease: identification using pharmacoinformatics.

    PubMed

    Islam, Md Ataul; Pillay, Tahir S

    2018-02-01

    In this study we searched for potential β-site amyloid precursor protein cleaving enzyme 1 (BACE1) inhibitors using pharmacoinformatics. A large dataset containing 7155 known BACE1 inhibitors was evaluated for pharmacophore model generation. The final model (R = 0.950, RMSD = 1.094, Q² = 0.901, se = 0.332, [Formula: see text] = 0.901, [Formula: see text] = 0.756, sp = 0.468, [Formula: see text] = 0.667) highlighted the importance of the spatial arrangement of hydrogen bond acceptor and donor, hydrophobicity and aromatic ring features. The validated model was then used to search the NCI and InterBioscreen databases for promising BACE1 inhibitors. The initial hits from both databases were sorted using a number of criteria, and finally three molecules from each database were considered for further validation using molecular docking and molecular dynamics studies. Different protonation states of the Asp32 and Asp228 dyad were analysed and the best protonated form was used for the molecular docking study. The number of binding interactions observed in the molecular docking study supported the potential of these molecules as promising inhibitors. RMSD, RMSF and Rg values from the molecular dynamics study, together with binding energies, showed that the final screened molecules formed stable complexes inside the receptor cavity of BACE1. Hence, it can be concluded that the six final screened compounds may be potential therapeutic agents for Alzheimer's disease.

  2. The prescribable drugs with efficacy in experimental epilepsies (PDE3) database for drug repurposing research in epilepsy.

    PubMed

    Sivapalarajah, Shayeeshan; Krishnakumar, Mathangi; Bickerstaffe, Harry; Chan, YikYing; Clarkson, Joseph; Hampden-Martin, Alistair; Mirza, Ahmad; Tanti, Matthew; Marson, Anthony; Pirmohamed, Munir; Mirza, Nasir

    2018-02-01

    Current antiepileptic drugs (AEDs) have several shortcomings. For example, they fail to control seizures in 30% of patients. Hence, there is a need to identify new AEDs. Drug repurposing is the discovery of new indications for approved drugs. This drug "recycling" offers the potential of significant savings in the time and cost of drug development. Many drugs licensed for other indications exhibit antiepileptic efficacy in animal models. Our aim was to create a database of "prescribable" drugs, approved for other conditions, with published evidence of efficacy in animal models of epilepsy, and to collate data that would assist in choosing the most promising candidates for drug repurposing. The database was created by the following: (1) computational literature-mining using novel software that identifies Medline abstracts containing the name of a prescribable drug, a rodent model of epilepsy, and a phrase indicating seizure reduction; then (2) crowdsourced manual curation of the identified abstracts. The final database includes 173 drugs and 500 abstracts. It is made freely available at www.liverpool.ac.uk/D3RE/PDE3. The database is reliable: 94% of the included drugs have corroborative evidence of efficacy in animal models (for example, evidence from multiple independent studies). The database includes many drugs that are appealing candidates for repurposing, as they are widely accepted by prescribers and patients (the database includes half of the 20 most commonly prescribed drugs in England), and they target many proteins involved in epilepsy that are not targeted by current AEDs. It is important to note that the drugs are of potential relevance to human epilepsy: the database is highly enriched with drugs that target proteins of known causal human epilepsy genes (Fisher's exact test P-value < 3 × 10^-5). We present data to help prioritize the most promising candidates for repurposing from the database. The PDE3 database is an important new resource for drug repurposing research in epilepsy. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.
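
    The screening heuristic (keep an abstract only if it mentions a prescribable drug, a rodent epilepsy model, and a seizure-reduction phrase) can be sketched directly. The Python term lists below are tiny placeholders for the full vocabularies the pipeline used:

        import re

        DRUGS   = ["losartan", "metformin", "rapamycin"]
        MODELS  = ["kindling", "pilocarpine", "pentylenetetrazol"]
        EFFECTS = [r"reduc\w* seizure", r"seizures? (was|were) suppressed",
                   r"anticonvulsant effect"]

        def is_candidate(abstract):
            text = abstract.lower()
            return (any(d in text for d in DRUGS)
                    and any(m in text for m in MODELS)
                    and any(re.search(p, text) for p in EFFECTS))

        print(is_candidate("Rapamycin reduced seizure frequency in the "
                           "pilocarpine model of temporal lobe epilepsy."))  # True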

  3. Conceptual and logical level of database modeling

    NASA Astrophysics Data System (ADS)

    Hunka, Frantisek; Matula, Jiri

    2016-06-01

    Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of the database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.

  4. Automated database-guided expert-supervised orientation for immunophenotypic diagnosis and classification of acute leukemia

    PubMed Central

    Lhermitte, L; Mejstrikova, E; van der Sluijs-Gelling, A J; Grigore, G E; Sedek, L; Bras, A E; Gaipa, G; Sobral da Costa, E; Novakova, M; Sonneveld, E; Buracchi, C; de Sá Bacelar, T; te Marvelde, J G; Trinquand, A; Asnafi, V; Szczepanski, T; Matarraz, S; Lopez, A; Vidriales, B; Bulsa, J; Hrusak, O; Kalina, T; Lecrevisse, Q; Martin Ayuso, M; Brüggemann, M; Verde, J; Fernandez, P; Burgos, L; Paiva, B; Pedreira, C E; van Dongen, J J M; Orfao, A; van der Velden, V H J

    2018-01-01

    Precise classification of acute leukemia (AL) is crucial for adequate treatment. EuroFlow has previously designed an AL orientation tube (ALOT) to guide towards the relevant classification panel (T-cell acute lymphoblastic leukemia (T-ALL), B-cell precursor (BCP)-ALL and/or acute myeloid leukemia (AML)) and the final diagnosis. We have now built a reference database with 656 typical AL samples (145 T-ALL, 377 BCP-ALL, 134 AML), processed and analyzed via standardized protocols. Using principal component analysis (PCA)-based plots and automated classification algorithms for direct comparison of single cells from individual patients against the database, another 783 cases were subsequently evaluated. Depending on the database-guided results, patients were categorized as: (i) typical T, B or myeloid; (ii) typical with a transitional component to another lineage; (iii) atypical; or (iv) mixed-lineage. Using this automated algorithm, in 781/783 cases (99.7%) the right panel was selected, and data comparable to the final WHO diagnosis were already provided in >93% of cases (85% T-ALL, 97% BCP-ALL, 95% AML and 87% mixed-phenotype AL patients), even without data from the full-characterization panels. Our results show that database-guided analysis facilitates standardized interpretation of ALOT results and allows accurate selection of the relevant classification panels, hence providing a solid basis for designing future WHO AL classifications. PMID:29089646
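
    The database-guided comparison reduces to a simple pattern: fit PCA on the reference samples, project the new case into the same space, and assign the nearest class. The Python sketch below uses invented marker intensities per sample; the EuroFlow system works at the single-cell level with standardized antibody panels.

        import numpy as np
        from sklearn.decomposition import PCA

        reference = np.array([[9, 1, 1], [8, 2, 1],   # T-ALL-like profiles
                              [1, 9, 2], [2, 8, 1],   # BCP-ALL-like profiles
                              [1, 2, 9], [2, 1, 8]])  # AML-like profiles
        labels = ["T-ALL", "T-ALL", "BCP-ALL", "BCP-ALL", "AML", "AML"]

        pca = PCA(n_components=2).fit(reference)
        ref2d = pca.transform(reference)
        centroids = {c: ref2d[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
                     for c in set(labels)}

        new_case = pca.transform([[2, 7, 2]])[0]  # project the new patient
        print(min(centroids, key=lambda c: np.linalg.norm(new_case - centroids[c])))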

  5. The EXOSAT database and archive

    NASA Technical Reports Server (NTRS)

    Reynolds, A. P.; Parmar, A. N.

    1992-01-01

    The EXOSAT database provides on-line access to the results and data products (spectra, images, and lightcurves) from the EXOSAT mission as well as access to data and logs from a number of other missions (such as EINSTEIN, COS-B, ROSAT, and IRAS). In addition, a number of familiar optical, infrared, and X-ray catalogs, including the Hubble Space Telescope (HST) guide star catalog, are available. The complete database is located at the EXOSAT observatory at ESTEC in the Netherlands and is accessible remotely via a captive account. The database management system was specifically developed to efficiently access the database and to allow the user to perform statistical studies on large samples of astronomical objects as well as to retrieve scientific and bibliographic information on single sources. The system was designed to be mission independent and includes timing, image processing, and spectral analysis packages as well as software to allow the easy transfer of analysis results and products to the user's own institute. The archive at ESTEC comprises a subset of the EXOSAT observations, stored on magnetic tape. Observations of particular interest were copied in compressed format to an optical jukebox, allowing users to retrieve and analyze selected raw data entirely from their terminals. Such analysis may be necessary if the user's needs are not accommodated by the products contained in the database (in terms of time resolution, spectral range, and the finesse of the background subtraction, for instance). Long-term archiving of the full final observation data is taking place at ESRIN in Italy as part of the ESIS program, again using optical media, and ESRIN has now assumed responsibility for distributing the data to the community. Tests showed that raw observational data (typically several tens of megabytes for a single target) can be transferred via the existing networks in a reasonable time.

  6. Modelling Conditions and Health Care Processes in Electronic Health Records: An Application to Severe Mental Illness with the Clinical Practice Research Datalink

    PubMed Central

    Olier, Ivan; Springate, David A.; Ashcroft, Darren M.; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos

    2016-01-01

    Background The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness as an example. Methods We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. Results We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. Conclusion We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists. PMID:26918439
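
    The core of the code-list search is a stub match over a code dictionary. A minimal Python sketch (the published commands are for Stata and R, and the dictionary rows below are invented, not real Read codes) looks like this:

        codes = [
            ("E10..", "schizophrenic disorders"),
            ("E11..", "affective psychoses"),
            ("H33..", "asthma"),
            ("E110.", "manic disorder, single episode"),
        ]
        word_stubs = ["schizo", "psychos", "manic"]
        code_stubs = ["E1"]

        # Keep any row whose term matches a word-stub or whose code matches
        # a code-stub; clinicians then review the hits and keep relevant rows
        hits = [(code, term) for code, term in codes
                if any(s in term for s in word_stubs)
                or any(code.startswith(s) for s in code_stubs)]
        for code, term in hits:
            print(code, term)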

  7. A strong-motion database from the Central American subduction zone

    NASA Astrophysics Data System (ADS)

    Arango, Maria Cristina; Strasser, Fleur O.; Bommer, Julian J.; Hernández, Douglas A.; Cepeda, Jose M.

    2011-04-01

    Subduction earthquakes along the Pacific Coast of Central America generate considerable seismic risk in the region. The quantification of the hazard due to these events requires the development of appropriate ground-motion prediction equations, for which purpose a database of recordings from subduction events in the region is indispensable. This paper describes the compilation of a comprehensive database of strong ground-motion recordings obtained during subduction-zone events in Central America, focusing on the region from 8 to 14° N and 83 to 92° W, including Guatemala, El Salvador, Nicaragua and Costa Rica. More than 400 accelerograms recorded by the networks operating across Central America during the last decades have been added to data collected by NORSAR in two regional projects for the reduction of natural disasters. The final database consists of 554 triaxial ground-motion recordings from events of moment magnitudes between 5.0 and 7.7, including 22 interface and 58 intraslab-type events for the time period 1976-2006. Although the database presented in this study is not sufficiently complete in terms of magnitude-distance distribution to serve as a basis for the derivation of predictive equations for interface and intraslab events in Central America, it considerably expands the Central American subduction data compiled in previous studies and used in early ground-motion modelling studies for subduction events in this region. Additionally, the compiled database will allow the assessment of the existing predictive models for subduction-type events in terms of their applicability for the Central American region, which is essential for an adequate estimation of the hazard due to subduction earthquakes in this region.

  8. The Co-regulation Data Harvester: Automating gene annotation starting from a transcriptome database

    NASA Astrophysics Data System (ADS)

    Tsypin, Lev M.; Turkewitz, Aaron P.

    Identifying co-regulated genes provides a useful approach for defining pathway-specific machinery in an organism. To be efficient, this approach relies on thorough genome annotation, a process much slower than genome sequencing per se. Tetrahymena thermophila, a unicellular eukaryote, has been a useful model organism and has a fully sequenced but sparsely annotated genome. One important resource for studying this organism has been an online transcriptomic database. We have developed an automated approach to gene annotation in the context of transcriptome data in T. thermophila, called the Co-regulation Data Harvester (CDH). Beginning with a gene of interest, the CDH identifies co-regulated genes by accessing the Tetrahymena transcriptome database. It then identifies their closely related genes (orthologs) in other organisms by using reciprocal BLAST searches. Finally, it collates the annotations of those orthologs' functions, which provides the user with information to help predict the cellular role of the initial query. The CDH, which is freely available, represents a powerful new tool for analyzing cell biological pathways in Tetrahymena. Moreover, to the extent that genes and pathways are conserved between organisms, the inferences obtained via the CDH should be relevant, and can be explored, in many other systems.
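
    The ortholog step rests on the standard reciprocal-best-hit test, which is compact enough to sketch in Python. The hit tables below are invented placeholders for the output of the two BLAST searches:

        # Top BLAST hit of each Tetrahymena gene in the other organism, and back
        top_hit_fwd = {"TTHERM_001": "HsGENE_42", "TTHERM_002": "HsGENE_77"}
        top_hit_rev = {"HsGENE_42": "TTHERM_001", "HsGENE_77": "TTHERM_999"}

        def reciprocal_best_hits(fwd, rev):
            # Call orthologs only where each gene is the other's top hit
            return {a: b for a, b in fwd.items() if rev.get(b) == a}

        print(reciprocal_best_hits(top_hit_fwd, top_hit_rev))
        # {'TTHERM_001': 'HsGENE_42'} since TTHERM_002 fails the reciprocal test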

  9. Evolution of a Structure-Searchable Database into a Prototype for a High-Fidelity SmartPhone App for 62 Common Pesticides Used in Delaware.

    PubMed

    D'Souza, Malcolm J; Barile, Benjamin; Givens, Aaron F

    2015-05-01

    Synthetic pesticides are widely used in the modern world for human benefit. They are usually classified according to their intended pest target. In Delaware (DE), approximately 42 percent of the arable land is used for agriculture. In order to manage pests such as insects, weeds, nematodes, and rodents, pesticides are used profusely to biologically control the pest at the relevant life stage. In this undergraduate project, we first created a usable relational database containing 62 agricultural pesticides that are common in Delaware. Chemically pertinent quantitative and qualitative information was first stored in Bio-Rad's KnowItAll® Informatics System. Next, we extracted the data out of the KnowItAll® system and created additional sections on a Microsoft® Excel spreadsheet detailing pesticide use(s) and safety and handling information. Finally, in an effort to promote good agricultural practices, to increase efficiency in business decisions, and to make pesticide data globally accessible, we developed a mobile application for smartphones that displays the pesticide database, built with Appery.io™, a cloud-based HyperText Markup Language (HTML5), jQuery Mobile and Hybrid Mobile app builder.

  10. The Chinese Lexicon Project: A megastudy of lexical decision performance for 25,000+ traditional Chinese two-character compound words.

    PubMed

    Tse, Chi-Shing; Yap, Melvin J; Chan, Yuen-Lai; Sze, Wei Ping; Shaoul, Cyrus; Lin, Dan

    2017-08-01

    Using a megastudy approach, we developed a database of lexical variables and lexical decision reaction times and accuracy rates for more than 25,000 traditional Chinese two-character compound words. Each word was responded to by about 33 native Cantonese speakers in Hong Kong. This resource provides a valuable adjunct to influential mega-databases, such as the Chinese single-character, English, French, and Dutch Lexicon Projects. Three analyses were conducted to illustrate the potential uses of the database. First, we compared the proportion of variance in lexical decision performance accounted for by six word frequency measures and established that the best predictor was Cai and Brysbaert's (PLoS One, 5, e10729, 2010) contextual diversity subtitle frequency. Second, we ran virtual replications of three previously published lexical decision experiments and found convergence between the original experiments and the present megastudy. Finally, we conducted item-level regression analyses to examine the effects of theoretically important lexical variables in our normative data. This is the first publicly available large-scale repository of behavioral responses pertaining to Chinese two-character compound word processing, which should be of substantial interest to psychologists, linguists, and other researchers.

  11. Cosmetics Europe compilation of historical serious eye damage/eye irritation in vivo data analysed by drivers of classification to support the selection of chemicals for development and evaluation of alternative methods/strategies: the Draize eye test Reference Database (DRD).

    PubMed

    Barroso, João; Pfannenbecker, Uwe; Adriaens, Els; Alépée, Nathalie; Cluzel, Magalie; De Smedt, Ann; Hibatallah, Jalila; Klaric, Martina; Mewes, Karsten R; Millet, Marion; Templier, Marie; McNamee, Pauline

    2017-02-01

    A thorough understanding of which of the effects assessed in the in vivo Draize eye test are responsible for driving UN GHS/EU CLP classification is critical for an adequate selection of chemicals to be used in the development and/or evaluation of alternative methods/strategies and for properly assessing their predictive capacity and limitations. For this reason, Cosmetics Europe has compiled a database of Draize data (the Draize eye test Reference Database, DRD) from external lists that were created to support past validation activities. This database contains 681 independent in vivo studies on 634 individual chemicals representing a wide range of chemical classes. A description of all the ocular effects observed in vivo, i.e. the degree of severity and persistence of corneal opacity (CO), iritis, and/or conjunctival effects, was added for each individual study in the database, and the studies were categorised according to their UN GHS/EU CLP classification and the main effect driving the classification. An evaluation of the various in vivo drivers of classification compiled in the database was performed to establish which of these are most important from a regulatory point of view. These analyses established that the most important drivers for Cat 1 classification are (1) CO mean ≥ 3 (days 1-3) (severity) and (2) CO persistence on day 21 in the absence of severity, and those for Cat 2 classification are (3) CO mean ≥ 1 and (4) conjunctival redness mean ≥ 2. Moreover, it is shown that all classifiable effects (including persistence and CO = 4) should be present in ≥60% of the animals to drive a classification. As a consequence, our analyses suggest the need for a critical revision of the UN GHS/EU CLP decision criteria for the Cat 1 classification of chemicals. Finally, a number of key criteria are identified that should be taken into consideration when selecting reference chemicals for the development, evaluation and/or validation of alternative methods and/or strategies for serious eye damage/eye irritation testing. Most importantly, the DRD is an invaluable tool for any future activity involving the selection of reference chemicals.
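
    The drivers lend themselves to a direct encoding. The Python sketch below implements only the four drivers and the 60% animal threshold named above; the full UN GHS/EU CLP scheme has further criteria, so this is an illustration of the analysis, not a regulatory decision tool.

        def classify(animals):
            """animals: list of dicts with per-animal in vivo outcomes."""
            def frac(pred):
                return sum(pred(a) for a in animals) / len(animals)
            if (frac(lambda a: a["co_mean_d1_3"] >= 3) >= 0.6          # driver 1
                    or frac(lambda a: a["co_persists_d21"]) >= 0.6):   # driver 2
                return "Cat 1"
            if (frac(lambda a: a["co_mean_d1_3"] >= 1) >= 0.6          # driver 3
                    or frac(lambda a: a["conj_redness_mean"] >= 2) >= 0.6):  # driver 4
                return "Cat 2"
            return "No Cat"

        study = [{"co_mean_d1_3": 1.3, "co_persists_d21": False,
                  "conj_redness_mean": 2.1}] * 3
        print(classify(study))  # Cat 2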

  12. The ASTARTE Paleotsunami Deposits data base - web-based references for tsunami research in the NEAM region

    NASA Astrophysics Data System (ADS)

    De Martini, P. M.; Pantosti, D.; Orefice, S.; Smedile, A.; Patera, A.; Paris, R.; Terrinha, P.; Hunt, J.; Papadopoulos, G. A.; Noiva, J.; Triantafyllou, I.; Yalciner, A. C.

    2017-12-01

    The EU project ASTARTE aimed at developing a higher level of tsunami hazard assessment in the North-Eastern Atlantic, the Mediterranean and connected seas (NEAM) region by a combination of field work, experimental work, numerical modeling and technical development. The project was a cooperative work of 26 institutes from 16 countries and linked together the description of past tsunamigenic events, the identification and characterization of tsunami sources, the calculation of the impact of such events, and the development of adequate resilience and risk mitigation strategies (http://www.astarte-project.eu/). Within ASTARTE, a web-based database on Paleotsunami Deposits in the NEAM area was created with the purpose of being the future information repository for tsunami research in the broad region. The aim of the project is the integration of all existing official scientific reports and peer-reviewed papers on this topic. The database, which archives information and detailed data crucial for tsunami modeling, will be updated with new entries every 12 months. A relational database managed by ArcGIS for Desktop 10.x software has been implemented. One of the final goals of the project is the public sharing of the archived dataset through a web-based map service that allows visualizing, querying, analyzing, and interpreting the dataset. The interactive map service is hosted by ArcGIS Online and deploys the cloud capabilities of the portal. Any interested user will be able to access the online GIS resources through any Internet browser or specific apps that run on desktop machines, smartphones, or tablets, and will be able to use the analytical tools, key tasks, and workflows of the service. We will present the database structure (characterized by two main tables: the Site table and the Event table) and topics, as well as their ArcGIS Online version. To date, a total of 151 sites and 220 tsunami evidence entries have been recorded in the ASTARTE database. The ASTARTE Paleotsunami Deposits database - NEAM region is now available online at the address http://arcg.is/1CWz0. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).

  13. Implementation of the CDC translational informatics platform--from genetic variants to the national Swedish Rheumatology Quality Register.

    PubMed

    Abugessaisa, Imad; Gomez-Cabrero, David; Snir, Omri; Lindblad, Staffan; Klareskog, Lars; Malmström, Vivianne; Tegnér, Jesper

    2013-04-02

    Sequencing of the human genome and the subsequent analyses have produced immense volumes of data. The technological advances have opened new windows into genomics beyond the DNA sequence. In parallel, clinical practice generates large amounts of data. This represents an underused data source that has much greater potential in translational research than is currently realized. This research aims at implementing a translational medicine informatics platform to integrate clinical data (disease diagnosis, disease activity and treatment) of Rheumatoid Arthritis (RA) patients from Karolinska University Hospital and their research database (biobanks, genotype variants and serology) at the Center for Molecular Medicine, Karolinska Institutet. Requirements engineering methods were utilized to identify user requirements. Unified Modeling Language and data modeling methods were used to model the universe of discourse and data sources. Oracle 11g was used as the database management system, and the clinical development center (CDC) was used as the application interface. Patient data were anonymized, and we employed authorization and security methods to protect the system. We developed a user requirement matrix, which provided a framework for evaluating three translational informatics systems. The implementation of the CDC successfully integrated the biological research database (15,172 DNA, serum and synovial samples, 1,436 cell samples and 65 SNPs per patient) and the clinical database (5,652 clinical visits) for the cohort of 379 patients, presented in three profiles. Basic functionalities provided by the translational medicine platform are research data management, development of bioinformatics workflows and analyses, sub-cohort selection, and re-use of clinical data in research settings. Finally, the system allowed researchers to extract subsets of attributes from cohorts according to specific biological, clinical, or statistical features. Research and clinical database integration is a real challenge and a road-block in translational research. Through this research we addressed the challenges and demonstrated the usefulness of the CDC. We adhered to ethical regulations pertaining to patient data, and we determined that the existing software solutions cannot meet the translational research needs at hand. We used RA as a test case since we have ample data on an active and longitudinal cohort.

  14. Implementation of the CDC translational informatics platform - from genetic variants to the national Swedish Rheumatology Quality Register

    PubMed Central

    2013-01-01

    Background Sequencing of the human genome and the subsequent analyses have produced immense volumes of data. The technological advances have opened new windows into genomics beyond the DNA sequence. In parallel, clinical practice generates large amounts of data. This represents an underused data source with much greater potential in translational research than is currently realized. This research aims at implementing a translational medicine informatics platform to integrate clinical data (disease diagnosis, disease activity and treatment) of Rheumatoid Arthritis (RA) patients from Karolinska University Hospital with their research database (biobanks, genotype variants and serology) at the Center for Molecular Medicine, Karolinska Institutet. Methods Requirements engineering methods were utilized to identify user requirements. Unified Modeling Language and data modeling methods were used to model the universe of discourse and data sources. Oracle 11g was used as the database management system, and the clinical development center (CDC) was used as the application interface. Patient data were anonymized, and we employed authorization and security methods to protect the system. Results We developed a user requirement matrix, which provided a framework for evaluating three translational informatics systems. The implementation of the CDC successfully integrated the biological research database (15,172 DNA, serum and synovial samples, 1,436 cell samples and 65 SNPs per patient) and the clinical database (5,652 clinical visits) for a cohort of 379 patients with three profiles. Basic functionalities provided by the translational medicine platform are research data management, development of bioinformatics workflows and analyses, sub-cohort selection, and re-use of clinical data in research settings. Finally, the system allowed researchers to extract subsets of attributes from cohorts according to specific biological, clinical, or statistical features. Conclusions Research and clinical database integration is a real challenge and a road-block in translational research. Through this research we addressed the challenges and demonstrated the usefulness of the CDC. We adhered to ethical regulations pertaining to patient data, and we determined that the existing software solutions cannot meet the translational research needs at hand. We used RA as a test case since we have ample data on an active, longitudinal cohort. PMID:23548156

  15. Digital hand atlas and computer-aided bone age assessment via the Web

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente

    1999-07-01

    A frequently used method of bone age assessment is atlas matching by a radiological examination of a hand image against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflect skeletal maturity, race and sex differences, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and presents them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect the skeletal maturity, will be extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report will then be sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the current, out-of-date one and allow bone age assessment to be computerized and done conveniently via the Web. In this paper, we present the system design and Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.
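
    The assessment step compares quantitative features extracted from the examined image against patterns in the atlas database. A sketch of one plausible matching rule follows (nearest neighbors in feature space; the paper does not specify the metric, so this is an assumption, not the authors' algorithm).

    ```python
    import numpy as np

    def assess_bone_age(features, atlas_features, atlas_ages, k=3):
        """Estimate bone age as the mean age of the k nearest atlas patterns.

        features:       1-D array of quantitative bone features from the
                        examined hand image.
        atlas_features: (n_patterns, n_features) array from the atlas database.
        atlas_ages:     age (years) attached to each atlas pattern.
        """
        d = np.linalg.norm(atlas_features - features, axis=1)
        nearest = np.argsort(d)[:k]
        return float(np.mean(np.asarray(atlas_ages)[nearest]))
    ```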

  16. The opportunities and obstacles in developing a vascular birthmark database for clinical and research use.

    PubMed

    Sharma, Vishal K; Fraulin, Frankie Og; Harrop, A Robertson; McPhalen, Donald F

    2011-01-01

    Databases are useful tools in clinical settings. The authors review the benefits and challenges associated with the development and implementation of an efficient electronic database for the multidisciplinary Vascular Birthmark Clinic at the Alberta Children's Hospital, Calgary, Alberta. The content and structure of the database were designed using the technical expertise of a data analyst from the Calgary Health Region. Relevant clinical and demographic data fields were included with the goal of documenting ongoing care of individual patients, and facilitating future epidemiological studies of this patient population. After completion of this database, 10 challenges encountered during development were retrospectively identified. Practical solutions for these challenges are presented. The challenges identified during the database development process included: identification of relevant data fields; balancing simplicity and user-friendliness with complexity and comprehensive data storage; database expertise versus clinical expertise; software platform selection; linkage of data from the previous spreadsheet to a new data management system; ethics approval for the development of the database and its utilization for research studies; ensuring privacy and limited access to the database; integration of digital photographs into the database; adoption of the database by support staff in the clinic; and maintaining up-to-date entries in the database. There are several challenges involved in the development of a useful and efficient clinical database. Awareness of these potential obstacles, in advance, may simplify the development of clinical databases by others in various surgical settings.

  17. Online Petroleum Industry Bibliographic Databases: A Review.

    ERIC Educational Resources Information Center

    Anderson, Margaret B.

    This paper discusses the present status of the bibliographic database industry, reviews the development of online databases of interest to the petroleum industry, and considers future developments in online searching and their effect on libraries and information centers. Three groups of databases are described: (1) databases developed by the…

  18. Implementation of a data management software system for SSME test history data

    NASA Technical Reports Server (NTRS)

    Abernethy, Kenneth

    1986-01-01

    The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM is discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could also use INSIGHT2 and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.

  19. [Development of an analyzing system for soil parameters based on NIR spectroscopy].

    PubMed

    Zheng, Li-Hua; Li, Min-Zan; Sun, Hong

    2009-10-01

    A rapid estimation system for soil parameters based on spectral analysis was developed using object-oriented (OO) technology. A SOIL class was designed; an instance of the SOIL class is a soil sample object with a particular type, specific physical properties and spectral characteristics. By extracting the effective information from the modeling spectral data of a soil object, a mapping model was established between the soil parameters and the spectral data, and the mapping model parameters could be saved in the model database. When forecasting the content of any soil parameter, the corresponding prediction model can be selected for objects of the same soil type and similar soil physical properties. After the target soil sample object is passed into the prediction model and processed by the system, an accurate forecast of the content of the target soil sample is obtained. The system includes modules for file operations, spectra pretreatment, sample analysis, calibration and validation, and sample content forecasting. The system was designed to run independently of the measurement equipment. The parameters and spectral data files (*.xls) of known soil samples can be input into the system. Because different data pretreatments can be selected according to the conditions at hand, the predicted contents appear in the terminal and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters saved in the model database through the module interface, and the data of the tested samples are then passed into the selected model. Finally, the content of soil parameters can be predicted by the developed system. The system was programmed with Visual C++ 6.0 and MATLAB 7.0, and Access XP was used to create and manage the model database.
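
    A sketch of the SOIL-class idea in Python: a sample object bundles soil type, physical properties, and spectra, and a mapping model fitted on modeling samples can be stored and reused for prediction. The linear least-squares map and all names are illustrative assumptions; the abstract does not state the regression method actually used.

    ```python
    import numpy as np

    class Soil:
        """A soil sample object: type, physical properties, spectrum."""
        def __init__(self, soil_type, properties, wavelengths, reflectance):
            self.soil_type = soil_type
            self.properties = properties          # dict of physical properties
            self.wavelengths = np.asarray(wavelengths)
            self.reflectance = np.asarray(reflectance)

    def fit_mapping_model(samples, target):
        """Fit a linear map from spectra to one soil parameter over the
        modeling samples; the coefficients can then be stored in the model
        database and reused for prediction."""
        X = np.vstack([s.reflectance for s in samples])
        y = np.array([s.properties[target] for s in samples])
        coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)
        return coef                                # [intercept, weights...]

    def predict(model, sample):
        return float(model[0] + sample.reflectance @ model[1:])
    ```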

  20. SeTES, a Self-Teaching Expert System for the analysis, design and prediction of gas production from shales and a prototype for a new generation of Expert Systems in the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Kuzma, H. A.; Boyle, K.; Pullman, S.; Reagan, M. T.; Moridis, G. J.; Blasingame, T. A.; Rector, J. W.; Nikolaou, M.

    2010-12-01

    A Self Teaching Expert System (SeTES) is being developed for the analysis, design and prediction of gas production from shales. An Expert System is a computer program designed to answer questions or clarify uncertainties that its designers did not necessarily envision which would otherwise have to be addressed by consultation with one or more human experts. Modern developments in computer learning, data mining, database management, web integration and cheap computing power are bringing the promise of expert systems to fruition. SeTES is a partial successor to Prospector, a system to aid in the identification and evaluation of mineral deposits developed by Stanford University and the USGS in the late 1970s, and one of the most famous early expert systems. Instead of the text dialogue used in early systems, the web user interface of SeTES helps a non-expert user to articulate, clarify and reason about a problem by navigating through a series of interactive wizards. The wizards identify potential solutions to queries by retrieving and combining together relevant records from a database. Inferences, decisions and predictions are made from incomplete and noisy inputs using a series of probabilistic models (Bayesian Networks) which incorporate records from the database, physical laws and empirical knowledge in the form of prior probability distributions. The database is mainly populated with empirical measurements, however an automatic algorithm supplements sparse data with synthetic data obtained through physical modeling. This constitutes the mechanism for how SeTES self-teaches. SeTES’ predictive power is expected to grow as users contribute more data into the system. Samples are appropriately weighted to favor high quality empirical data over low quality or synthetic data. Finally, a set of data visualization tools digests the output measurements into graphical outputs.
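
    The abstract notes that samples are weighted to favor high-quality empirical data over synthetic data generated by physical modeling. A toy sketch of such weighting follows; the weight value is an invented assumption, not a SeTES parameter.

    ```python
    import numpy as np

    # Empirical records get full weight; synthetic (physics-model-generated)
    # supplements get a reduced, assumed weight.
    def weighted_estimate(values, is_empirical, w_synthetic=0.2):
        values = np.asarray(values, dtype=float)
        w = np.where(np.asarray(is_empirical), 1.0, w_synthetic)
        return float((w * values).sum() / w.sum())

    # Three field measurements plus two simulated supplements:
    est = weighted_estimate([2.1, 1.9, 2.4, 3.0, 2.8],
                            [True, True, True, False, False])
    ```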

  1. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning.

    PubMed

    Boas, F Edward; Srimathveeravalli, Govindarajan; Durack, Jeremy C; Kaye, Elena A; Erinjeri, Joseph P; Ziv, Etay; Maybody, Majid; Yarmohammadi, Hooman; Solomon, Stephen B

    2017-05-01

    To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Ice ball size and shape were simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. The average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Cryoablation simulations accurately predict ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
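
    The simulations rest on the Pennes bioheat equation, which in its standard textbook form (conventions for the perfusion and source terms vary, so the authors' implementation may differ in detail) reads:

    ```latex
    \rho c \frac{\partial T}{\partial t}
      = \nabla \cdot \left( k \nabla T \right)
      + \omega_b \rho_b c_b \left( T_a - T \right)
      + q_m
    ```

    where T is tissue temperature; rho, c, and k are tissue density, specific heat, and thermal conductivity; omega_b is the blood perfusion rate; rho_b and c_b describe blood; T_a is arterial blood temperature; and q_m is metabolic heat generation. In a cryoablation simulation the probes enter as cold boundary conditions or sink terms.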

  2. Comparison between satellite wildfire databases in Europe

    NASA Astrophysics Data System (ADS)

    Amraoui, Malik; Pereira, Mário; DaCamara, Carlos

    2013-04-01

    For Europe, several wildfire databases based on satellite imagery are currently available and are being used to conduct various studies and produce official reports. The European Forest Fire Information System (EFFIS) burned area perimeters database comprises fires with a burnt area greater than 1.0 ha that occurred in European countries during the 2000-2011 period. The MODIS Burned Area Product (MCD45A1) is a monthly global Level 3 gridded 500 m product containing per-pixel burning and quality information, and tile-level metadata. The Burned Area Product was developed by the MODIS Fire Team at the University of Maryland and is available from April 2000 onwards. Finally, for Portugal the National Forest Authority (AFN) discloses the national mapping of burned areas for the years 1990 to 2011, based on Landsat imagery, which accounts for fires larger than 5.0 ha. The main objectives of this study are: (i) to provide a comprehensive description of the datasets, their limitations and potential; (ii) to compute preliminary statistics on the data; and (iii) to compare the MODIS and EFFIS satellite wildfire databases across the entire European territory, based on indicators such as the spatial location of the burned areas and the extent of area burned annually, and to complement the analysis for Portugal with the inclusion of the AFN database. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Programme through FUME (contract number 243888).

  3. An Update of the Bodeker Scientific Vertically Resolved, Global, Gap-Free Ozone Database

    NASA Astrophysics Data System (ADS)

    Kremser, S.; Bodeker, G. E.; Lewis, J.; Hassler, B.

    2016-12-01

    High vertical resolution ozone measurements from multiple satellite-based instruments have been merged with measurements from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. Ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced approximately 1 km apart (878.4 hPa to 0.046 hPa). These data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to these data and then evaluated globally. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. This presentation reports on updates to an earlier version of the vertically resolved ozone database, including the incorporation of new ozone measurements and new techniques for combining the data. Compared to previous versions of the database, particular attention is paid to avoiding spatial and temporal sampling biases and tracing uncertainties through to the final product. This updated database, developed within the New Zealand Deep South National Science Challenge, is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not treat stratospheric chemistry interactively.
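
    The constraint that regression fits "change only slightly from one level to the next" can be sketched as a ridge-style penalty pulling each level's coefficients toward those of the level below. The basis functions and penalty weight used for the actual database are not given in the abstract, so this is only an illustration of the idea.

    ```python
    import numpy as np

    def fit_level(X, y, beta_prev, lam=10.0):
        """Least squares at one level, penalizing departure from the
        previous level's coefficients:
        minimize ||X b - y||^2 + lam * ||b - beta_prev||^2,
        whose normal equations are (X'X + lam I) b = X'y + lam beta_prev."""
        A = X.T @ X + lam * np.eye(X.shape[1])
        b = X.T @ y + lam * beta_prev
        return np.linalg.solve(A, b)

    def fit_all_levels(levels, n_coef, lam=10.0):
        """Sweep upward through the levels, chaining the fits.
        levels: iterable of (design matrix, ozone values) per level."""
        beta, out = np.zeros(n_coef), []
        for X, y in levels:
            beta = fit_level(X, y, beta, lam)
            out.append(beta)
        return out
    ```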

  4. Global Precipitation Estimates from Cross-Track Passive Microwave Observations Using a Physically-Based Retrieval Scheme

    NASA Technical Reports Server (NTRS)

    Kidd, Chris; Matsui, Toshi; Chern, Jiundar; Mohr, Karen; Kummerow, Christian; Randel, Dave

    2015-01-01

    The estimation of precipitation across the globe from satellite sensors provides a key resource in the observation and understanding of our climate system. Estimates from all pertinent satellite observations are critical in providing the necessary temporal sampling. However, consistency in these estimates from instruments with different frequencies and resolutions is critical. This paper details the physically based retrieval scheme to estimate precipitation from cross-track (XT) passive microwave (PM) sensors on board the constellation satellites of the Global Precipitation Measurement (GPM) mission. Here the Goddard profiling algorithm (GPROF), a physically based Bayesian scheme developed for conically scanning (CS) sensors, is adapted for use with XT PM sensors. The present XT GPROF scheme utilizes a model-generated database to overcome issues encountered with an observational database as used by the CS scheme. The model database ensures greater consistency across meteorological regimes and surface types by providing a more comprehensive set of precipitation profiles. The database is corrected for bias against the CS database to ensure consistency in the final product. Statistical comparisons over western Europe and the United States show that the XT GPROF estimates are comparable with those from the CS scheme. Indeed, the XT estimates have higher correlations against surface radar data, while maintaining similar root-mean-square errors. Latitudinal profiles of precipitation show the XT estimates are generally comparable with the CS estimates, although in the southern midlatitudes the peak precipitation is shifted equatorward while over the Arctic large differences are seen between the XT and the CS retrievals.
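
    The Bayesian core of a GPROF-style retrieval can be sketched as a likelihood-weighted average over database profiles. The Gaussian error model and variable names below are illustrative assumptions, not the operational algorithm.

    ```python
    import numpy as np

    def bayesian_retrieval(tb_obs, db_tb, db_rain, sigma):
        """tb_obs:  observed brightness temperatures, shape (n_channels,)
        db_tb:   database brightness temperatures, (n_profiles, n_channels)
        db_rain: rain rate attached to each database profile
        sigma:   assumed per-channel observation/model error"""
        # Gaussian log-likelihood of each database profile given the observation
        r = (db_tb - tb_obs) / sigma
        logw = -0.5 * np.sum(r * r, axis=1)
        w = np.exp(logw - logw.max())      # stabilize before normalizing
        return float(np.sum(w * db_rain) / np.sum(w))
    ```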

  5. Shuttle Case Study Collection Website Development

    NASA Technical Reports Server (NTRS)

    Ransom, Khadijah S.; Johnson, Grace K.

    2012-01-01

    As a continuation from summer 2012, the Shuttle Case Study Collection has been developed using lessons learned documented by NASA engineers, analysts, and contractors. Decades of information related to processing and launching the Space Shuttle is gathered into a single database to provide educators with an alternative means to teach real-world engineering processes. The goal is to provide additional engineering materials that enhance critical thinking, decision making, and problem solving skills. During this second phase of the project, the Shuttle Case Study Collection website was developed. Extensive HTML coding to link downloadable documents, videos, and images was required, as was training to learn NASA's Content Management System (CMS) for website design. As the final stage of the collection development, the website is designed to allow for distribution of information to the public as well as for case study report submissions from other educators online.

  6. Joint Sparse Representation for Robust Multimodal Biometrics Recognition

    DTIC Science & Technology

    2012-01-01

    described in III. Experimental evaluations on a comprehensive multimodal dataset and a face database have been described in section V. Finally, in...WVU Multimodal Dataset The WVU multimodal dataset is a comprehensive collection of different biometric modalities such as fingerprint, iris, palmprint ...Martnez and R. Benavente, “The AR face database ,” CVC Technical Report, June 1998. [29] U. Park and A. Jain, “Face matching and retrieval using soft

  7. Joint Sparse Representation for Robust Multimodal Biometrics Recognition

    DTIC Science & Technology

    2014-01-01

    comprehensive multimodal dataset and a face database are described in section V. Finally, in section VI, we discuss the computational complexity of...fingerprint, iris, palmprint , hand geometry and voice from subjects of different age, gender and ethnicity as described in Table I. It is a...Taylor, “Constructing nonlinear discriminants from multiple data views,” Machine Learning and Knowl- edge Discovery in Databases , pp. 328–343, 2010

  8. Understanding arid environments using fossil rodent middens

    USGS Publications Warehouse

    Pearson, S.; Betancourt, J.L.

    2002-01-01

    American rodent middens have made a more dramatic contribution to understanding past environments and the development of ecological theory than Australian rodent middens. This relates to differences in the natural environments, the landscape histories, and the scale and scientific approaches of the researchers. The comparison demonstrates: the power of synoptic perspectives; the value of thorough macrofossil identification in midden analysis and its potential advance in Australia, where pollen has dominated analyses; the value of herbaria and reference collections; the potential of environmental databases; the importance of scientific history and 'critical research mass'; and, finally, the opportunistic nature of palaeoecological research. © 2002 Elsevier Science Ltd.

  9. A comprehensive inpatient discharge system.

    PubMed Central

    O'Connell, E. M.; Teich, J. M.; Pedraza, L. A.; Thomas, D.

    1996-01-01

    Our group has developed a computer system that supports all phases of the inpatient discharge process. The system fills in most of the physician's discharge order form and the nurse's discharge abstract, using information available from sign-out, order entry, scheduling, and other databases. It supplies information for referrals to outside institutions, and provides a variety of instruction materials for patients. Discharge forms can be completed in advance, so that the patient is not waiting for final paperwork. Physicians and nurses can work on their components independently, rather than in series. Response to the system has been very favorable. PMID:8947755

  10. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    PubMed Central

    Gu, Shuo

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine, with in-depth understanding towards pharmacognosy. This paper summarizes these studies in the areas of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for network-based TCM study, derived from our previous successful applications, is proposed. PMID:28690664

  11. Intelligent Data Analysis in the EMERCOM Information System

    NASA Astrophysics Data System (ADS)

    Elena, Sharafutdinova; Tatiana, Avdeenko; Bakaev, Maxim

    2017-01-01

    The paper describes an information system development project for the Russian Ministry of Emergency Situations (MES, whose international operations body is known as EMERCOM), in which representatives of both the IT industry and academia took part. Besides the general description of the system, we put forward OLAP and Data Mining-based approaches towards intelligent analysis of the data accumulated in the database. In particular, some operational OLAP reports and an example of a multi-dimensional information space based on an OLAP Data Warehouse are presented. Finally, we outline a Data Mining application to support decision-making regarding the planning of security inspections and the consideration of their results.

  12. Advancing Consumer Product Composition and Chemical ...

    EPA Pesticide Factsheets

    This presentation describes EPA efforts to collect, model, and measure publicly available consumer product data for use in exposure assessment. The development of the ORD Chemicals and Products database will be described, as will machine-learning-based models for predicting chemical function. Finally, the talk describes new mass spectrometry-based methods for measuring chemicals in formulations and articles. This presentation is an invited talk to the ICCA-LRI workshop "Fit-For-Purpose Exposure Assessments For Risk-Based Decision Making". The talk will share EPA efforts to characterize the components of consumer products for use in exposure assessment with the international exposure science community.

  13. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    PubMed

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine, with in-depth understanding towards pharmacognosy. This paper summarizes these studies in the areas of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for network-based TCM study, derived from our previous successful applications, is proposed.

  14. Real-Time Simulation

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Coryphaeus Software, founded in 1989 by former NASA electronics engineer Steve Lakowske, creates real-time 3D software. Designer's Workbench, the company's flagship product, is a modeling and simulation tool for the development of both static and dynamic 3D databases. Other products soon followed. Activation, specifically designed for game developers, allows developers to play and test 3D games before they commit to a target platform. Game publishers can shorten development time and prove the "playability" of a title, maximizing their chances of introducing a smash hit. Another product, EasyT, lets users create massive, realistic representations of Earth terrains that can be viewed and traversed in real time. Finally, EasyScene software controls the actions among interactive objects within a virtual world. Coryphaeus products are used on Silicon Graphics workstations and supercomputers to simulate real-world performance in synthetic environments. Customers include aerospace, aviation, architectural and engineering firms, game developers, and the entertainment industry.

  15. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threat of chemical and biological weapons of mass destruction (WMD) posed by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, the distributed training architecture is 50 times faster than standard iterative training methods.
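
    A minimal sketch of the EP idea: evolve SVM hyperparameters by mutation-only variation and truncation selection, with cross-validated accuracy as the fitness. scikit-learn's SVC stands in for the paper's SVM; the population size, mutation scale, and search space are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def fitness(log_c, log_gamma, X, y):
        clf = SVC(C=10.0**log_c, gamma=10.0**log_gamma)
        return cross_val_score(clf, X, y, cv=5).mean()

    def evolve_svm(X, y, pop_size=10, generations=20):
        # Population of (log10 C, log10 gamma) individuals.
        pop = rng.uniform(-2, 2, size=(pop_size, 2))
        for _ in range(generations):
            # EP uses mutation only (no crossover).
            children = pop + rng.normal(scale=0.3, size=pop.shape)
            both = np.vstack([pop, children])
            scores = np.array([fitness(c, g, X, y) for c, g in both])
            pop = both[np.argsort(scores)[-pop_size:]]   # truncation selection
        best = pop[-1]                                   # highest-scoring survivor
        return SVC(C=10.0**best[0], gamma=10.0**best[1]).fit(X, y)
    ```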

  16. Evidence-based practice guideline of Chinese herbal medicine for primary open-angle glaucoma (qingfeng-neizhang).

    PubMed

    Yang, Yingxin; Ma, Qiu-Yan; Yang, Yue; He, Yu-Peng; Ma, Chao-Ting; Li, Qiang; Jin, Ming; Chen, Wei

    2018-03-01

    Primary open angle glaucoma (POAG) is a chronic, progressive optic neuropathy. The aim was to develop an evidence-based clinical practice guideline of Chinese herbal medicine (CHM) for POAG, with a focus on Chinese medicine pattern differentiation and treatment as well as approved herbal proprietary medicines. The guideline development group brought together various kinds of expertise in content and methods. The authors searched electronic databases including CNKI, VIP, Sino-Med, Wanfang Data, PubMed, the Cochrane Library and EMBASE, and also checked China State Food and Drug Administration (SFDA) records, from the inception of these databases to June 30, 2015. Systematic reviews and randomized controlled trials of Chinese herbal medicine treating adults with POAG were evaluated. The risk-of-bias tool in the Cochrane Handbook and the evidence-strength framework developed by the GRADE group were applied in the evaluation, and recommendations were based on the findings, incorporating evidence strength. After several rounds of expert consensus, the final guideline was endorsed by the relevant professional committees. CHM treatment principles and formulae based on pattern differentiation, together with approved patent herbal medicines, are the main treatments for POAG, and diagnosis and treatment focusing on blood-related patterns is the major domain. CHM therapy alone or combined with other conventional treatments, as reported in clinical studies and supported by expert consensus, is recommended for clinical practice.

  17. Patterns of developmental plasticity in response to incubation temperature in reptiles.

    PubMed

    While, Geoffrey M; Noble, Daniel W A; Uller, Tobias; Warner, Daniel A; Riley, Julia L; Du, Wei-Guo; Schwanz, Lisa E

    2018-05-28

    Early life environments shape phenotypic development in important ways that can lead to long-lasting effects on phenotype and fitness. In reptiles, one aspect of the early environment that impacts development is temperature (termed 'thermal developmental plasticity'). Indeed, the thermal environment during incubation is known to influence morphological, physiological, and behavioral traits, some of which have important consequences for many ecological and evolutionary processes. Despite this, few studies have attempted to synthesize and collate data from this expansive and important body of research. Here, we systematically review research into thermal developmental plasticity across reptiles, structured around the key papers and findings that have shaped the field over the past 50 years. From these papers, we introduce a large database (the 'Reptile Development Database') consisting of 9,773 trait means across 300 studies examining thermal developmental plasticity. This dataset encompasses data on a range of phenotypes, including morphological, physiological, behavioral, and performance traits along with growth rate, incubation duration, sex ratio, and survival (e.g., hatching success) across all major reptile clades. Finally, from our literature synthesis and data exploration, we identify key research themes associated with thermal developmental plasticity, important gaps in empirical research, and demonstrate how future progress can be made through targeted empirical, meta-analytic, and comparative work. © 2018 Wiley Periodicals, Inc.

  18. H. Pylori as a predictor of marginal ulceration: A nationwide analysis.

    PubMed

    Schulman, Allison R; Abougergi, Marwan S; Thompson, Christopher C

    2017-03-01

    Helicobacter pylori has been implicated as a risk factor for the development of marginal ulceration following gastric bypass, although studies have been small and have yielded conflicting results. This study sought to determine the relationship between H. pylori infection and the development of marginal ulceration following bariatric surgery in a nationwide analysis. This was a retrospective cohort study using the 2012 Nationwide Inpatient Sample (NIS) database. Discharges with an ICD-9-CM code indicating marginal ulceration and a secondary ICD-9-CM code for bariatric surgery were included. The primary outcome was the incidence of marginal ulceration. A stepwise forward selection procedure was used to build the multivariate logistic regression model based on known risk factors. A P value below 0.05 was considered significant. There were 253,765 patients who met the inclusion criteria. The prevalence of marginal ulceration was 3.90%. Of the patients found to have marginal ulceration, 31.20% were H. pylori-positive. The final multivariate regression analysis revealed that H. pylori was the strongest independent predictor of marginal ulceration. H. pylori is an independent predictor of marginal ulceration in a large national database. Preoperative testing for and eradication of H. pylori prior to bariatric surgery may be an important preventive measure to reduce the incidence of ulcer development. © 2017 The Obesity Society.
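
    A sketch of stepwise forward selection for a logistic model: repeatedly add the candidate predictor that most improves the model, stopping when the gain is negligible. Cross-validated AUC is used here as the selection criterion, as a stand-in for the significance-based criterion such analyses typically use; all names and thresholds are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, names, min_gain=0.002):
        """X: (n, p) numpy predictor matrix; names: list of p predictor names."""
        chosen, best_auc = [], 0.5
        remaining = list(range(X.shape[1]))
        while remaining:
            scores = []
            for j in remaining:
                cols = chosen + [j]
                auc = cross_val_score(LogisticRegression(max_iter=1000),
                                      X[:, cols], y, cv=5,
                                      scoring="roc_auc").mean()
                scores.append((auc, j))
            auc, j = max(scores)          # best single addition this step
            if auc - best_auc < min_gain:
                break
            chosen.append(j); remaining.remove(j); best_auc = auc
        return [names[j] for j in chosen], best_auc
    ```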

  19. The U.S. Geological Survey mapping and cartographic database activities, 2006-2010

    USGS Publications Warehouse

    Craun, Kari J.; Donnelly, John P.; Allord, Gregory J.

    2011-01-01

    The U.S. Geological Survey (USGS) began systematic topographic mapping of the United States in the 1880s, starting with scales of 1:250,000 and 1:125,000 in support of geological mapping. Responding to the need for higher resolution and more detail, the 1:62,500-scale, 15-minute topographic map series was begun at the start of the 20th century. Finally, in the 1950s the USGS adopted the 1:24,000-scale, 7.5-minute topographic map series to portray even more detail, completing coverage of the conterminous 48 states of the United States with this series in 1992. In 2001, the USGS developed the vision and concept of The National Map, a topographic database for the 21st century and the source for a new generation of topographic maps (http://nationalmap.gov/). In 2008, initial production of those maps began with a 1:24,000-scale digital product. In a separate but related project, the USGS began scanning the existing inventory of historical topographic maps at all scales to accompany the new topographic maps. The USGS also developed a digital database of The National Atlas of the United States. The digital version of the Atlas is now available on the Web and supports a mapping engine for small-scale maps of the United States and North America. These three efforts define the topographic mapping activities of the USGS during the last few years and are discussed below.

  20. [The design and implementation of the web typical surface object spectral information system in arid areas based on .NET and SuperMap].

    PubMed

    Xia, Jun; Tashpolat, Tiyip; Zhang, Fei; Ji, Hong-jiang

    2011-07-01

    The characteristics of object spectra are not only the basis of quantitative remote sensing analysis, but also a main subject of basic remote sensing research. A spectral database of typical surface objects in arid-area oases is of great significance for applied remote sensing research on soil salinization. In the present paper, the authors took the Ugan-Kuqa River Delta Oasis as an example, combined .NET and the SuperMap platform with data stored in a SQL Server database, used the B/S pattern and the C# language to design and develop a typical surface object spectral information system, and established a typical surface object spectral database according to the characteristics of arid-area oases. The system implements classified storage and management of the typical surface object spectral information and related attribute data of the study areas; it also implements visualized two-way queries between maps and attribute data, the drawing of surface object spectral response curves, and the processing and plotting of derivative spectral data. In addition, the system initially possesses simple spectral data mining and analysis capabilities, which provide an efficient, reliable and convenient data management and application platform for follow-up soil salinization studies in the Ugan-Kuqa River Delta Oasis. Finally, the system is easy to maintain, convenient for secondary development, and operates well in practice.

  1. Identification of Potent Chloride Intracellular Channel Protein 1 Inhibitors from Traditional Chinese Medicine through Structure-Based Virtual Screening and Molecular Dynamics Analysis

    PubMed Central

    Wan, Minghui; Liao, Dongjiang; Peng, Guilin; Xu, Xin; Yin, Weiqiang; Guo, Guixin; Jiang, Funeng; Zhong, Weide

    2017-01-01

    Chloride intracellular channel 1 (CLIC1) is involved in the development of most aggressive human tumors, including gastric, colon, lung, liver, and glioblastoma cancers. It has become an attractive new therapeutic target for several types of cancer. In this work, we aim to identify natural products as potent CLIC1 inhibitors from Traditional Chinese Medicine (TCM) database using structure-based virtual screening and molecular dynamics (MD) simulation. First, structure-based docking was employed to screen the refined TCM database and the top 500 TCM compounds were obtained and reranked by X-Score. Then, 30 potent hits were achieved from the top 500 TCM compounds using cluster and ligand-protein interaction analysis. Finally, MD simulation was employed to validate the stability of interactions between each hit and CLIC1 protein from docking simulation, and Molecular Mechanics/Generalized Born Surface Area (MM-GBSA) analysis was used to refine the virtual hits. Six TCM compounds with top MM-GBSA scores and ideal-binding models were confirmed as the final hits. Our study provides information about the interaction between TCM compounds and CLIC1 protein, which may be helpful for further experimental investigations. In addition, the top 6 natural products structural scaffolds could serve as building blocks in designing drug-like molecules for CLIC1 inhibition. PMID:29147652

  2. Mapping grassland productivity with 250-m eMODIS NDVI and SSURGO database over the Greater Platte River Basin, USA

    USGS Publications Warehouse

    Gu, Yingxin; Wylie, Bruce K.; Bliss, Norman B.

    2013-01-01

    This study assessed and described a relationship between satellite-derived growing season averaged Normalized Difference Vegetation Index (NDVI) and annual productivity for grasslands within the Greater Platte River Basin (GPRB) of the United States. We compared growing season averaged NDVI (GSN) with Soil Survey Geographic (SSURGO) database rangeland productivity and flux tower Gross Primary Productivity (GPP) for grassland areas. The GSN was calculated for each of nine years (2000–2008) using the 7-day composite 250-m eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. Strong correlations exist between the nine-year mean GSN (MGSN) and SSURGO annual productivity for grasslands (R2 = 0.74 for approximately 8000 pixels randomly selected from eight homogeneous regions within the GPRB; R2 = 0.96 for the 14 cluster-averaged points). Results also reveal a strong correlation between GSN and flux tower growing season averaged GPP (R2 = 0.71). Finally, we developed an empirical equation to estimate grassland productivity based on the MGSN. Spatially explicit estimates of grassland productivity over the GPRB were generated, which improved the regional consistency of SSURGO grassland productivity data and can help scientists and land managers to better understand the actual biophysical and ecological characteristics of grassland systems in the GPRB. This final estimated grassland production map can also be used as an input for biogeochemical, ecological, and climate change models.
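
    The empirical equation relates productivity to the nine-year mean GSN (MGSN). A least-squares sketch of fitting and applying such a relationship follows; a linear form is assumed for illustration, and the published coefficients are not reproduced here.

    ```python
    import numpy as np

    # mgsn: nine-year mean growing-season NDVI per pixel (2000-2008)
    # prod: SSURGO annual grassland productivity for the same pixels
    def fit_productivity(mgsn, prod):
        slope, intercept = np.polyfit(mgsn, prod, deg=1)   # linear form assumed
        return slope, intercept

    def estimate_productivity(mgsn, slope, intercept):
        """Apply the fitted empirical equation to new MGSN pixels."""
        return slope * np.asarray(mgsn) + intercept
    ```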

  3. University Real Estate Development Database: A Database-Driven Internet Research Tool

    ERIC Educational Resources Information Center

    Wiewel, Wim; Kunst, Kara

    2008-01-01

    The University Real Estate Development Database is an Internet resource developed by the University of Baltimore for the Lincoln Institute of Land Policy, containing over six hundred cases of university expansion outside of traditional campus boundaries. The University Real Estate Development database is a searchable collection of real estate…

  4. Tracking the Short Term Planning (STP) Development Process

    NASA Technical Reports Server (NTRS)

    Price, Melanie; Moore, Alexander

    2010-01-01

    Part of the National Aeronautics and Space Administration's mission is to pioneer the future in space exploration, scientific discovery and aeronautics research, which is enhanced by discovering new scientific tools to improve life on Earth. Consequently, to successfully explore the unknown, there has to be a planning process that organizes events in the right priority. The planning support team therefore has to continually improve its processes so that ISS Mission Operations can run smoothly and effectively. The planning support team consists of people in the Long Range Planning area who develop timelines, from the International Partners' Preliminary STP inputs all the way through to publishing of the Final STP. Planning is a crucial part of the NASA community when it comes to scheduling the astronauts' daily activities in great detail. The STP process is in need of improvement because of the various tasks that must be broken down in order to meet the overall objective of developing a correct Final STP. A new project was then started to store various data in a more efficient database. "The SharePoint site is a Web site that provides a central storage and collaboration space for documents, information, and ideas."

  5. EXERGAMES AS A TOOL FOR THE ACQUISITION AND DEVELOPMENT OF MOTOR SKILLS AND ABILITIES: A SYSTEMATIC REVIEW.

    PubMed

    Medeiros, Pâmella de; Capistrano, Renata; Zequinão, Marcela Almeida; Silva, Siomara Aparecida da; Beltrame, Thais Silva; Cardoso, Fernando Luiz

    2017-01-01

    To analyze the literature on the effectiveness of exergames in physical education classes and in the acquisition and development of motor skills and abilities. The analyses were carried out by two independent evaluators, limited to English and Portuguese, in four databases: Web of Science, Science Direct, Scopus and PubMed, without restriction by year. The keywords used were: "Exergames and motor learning and motor skill" and "Exergames and motor skill and physical education". The inclusion criteria were: articles that evaluated the effectiveness of exergames in physical education classes regarding the acquisition and development of motor skills. The following were excluded: books, theses and dissertations; repetitions; articles published in proceedings and conference summaries; and studies with sick children and/or use of the tool for rehabilitation purposes. In total, 96 publications were found, and 8 studies were selected for the final review. The quality of the articles was evaluated using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) scale and the Physiotherapy Evidence Database (PEDro) scale. Evidence was found of recurring positive effects of exergames on both motor skill acquisition and motor skill development. Exergames, when used in a conscious manner - so as not to completely replace sports and other recreational activities - constitute a good strategy for parents and physical education teachers to motivate children and adolescents to practice physical exercise.

  6. Development of expert systems for analyzing electronic documents

    NASA Astrophysics Data System (ADS)

    Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.

    2018-05-01

    The paper analyses a Database Management System (DBMS). Expert systems, databases, and database technology have become essential components of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment; it consists of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.

  7. Full cost accounting as a tool for the financial assessment of Pay-As-You-Throw schemes: a case study for the Panorama municipality, Greece.

    PubMed

    Karagiannidis, Avraam; Xirogiannopoulou, Anna; Tchobanoglous, George

    2008-12-01

    In the present paper, implementation scenarios for a Pay-As-You-Throw program were developed and analyzed for the first time in Greece. First, the necessary steps for implementing a Pay-As-You-Throw program were determined. A database was developed for the needs of the full cost accounting method, into which all financial and waste-production data were entered in order to calculate the unit charging price for four different implementation scenarios of the "polluter-pays" principle. For each scenario, the effect on waste management costs was estimated, as well as the total waste charges for households. Finally, a comparative analysis of the results was performed.
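
    Under full cost accounting, the unit price essentially divides the accounted system cost by the waste quantity, often after splitting off a fixed fee. A toy calculation under those assumptions (all figures invented for illustration, not taken from the Panorama case study):

    ```python
    def payt_unit_price(total_annual_cost, fixed_share, total_waste_kg):
        """Split the full (accounted) cost into a fixed part, recovered via a
        flat fee, and a variable part, recovered per kilogram of waste.
        fixed_share: fraction of cost charged independently of waste (0..1)."""
        variable_cost = total_annual_cost * (1.0 - fixed_share)
        return variable_cost / total_waste_kg     # price per kg

    # Example: 2.4 M EUR annual cost, 40% fixed, 12,000 t of waste
    price = payt_unit_price(2_400_000, 0.40, 12_000_000)   # -> 0.12 EUR/kg
    ```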

  8. DeTEXT: A Database for Evaluating Text Extraction from Biomedical Literature Figures

    PubMed Central

    Yin, Xu-Cheng; Yang, Chun; Pei, Wei-Yi; Man, Haixia; Zhang, Jun; Learned-Miller, Erik; Yu, Hong

    2015-01-01

    Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information. A high-quality ground truth standard can greatly facilitate the development of an automated system. This article describes DeTEXT: A database for evaluating text extraction from biomedical literature figures. It is the first publicly available, human-annotated, high quality, and large-scale figure-text dataset with 288 full-text articles, 500 biomedical figures, and 9308 text regions. This article describes how figures were selected from open-access full-text biomedical articles and how annotation guidelines and annotation tools were developed. We also discuss the inter-annotator agreement and the reliability of the annotations. We summarize the statistics of the DeTEXT data and make available evaluation protocols for DeTEXT. Finally we lay out challenges we observed in the automated detection and recognition of figure text and discuss research directions in this area. DeTEXT is publicly available for downloading at http://prir.ustb.edu.cn/DeTEXT/. PMID:25951377

  9. Data mining for multiagent rules, strategies, and fuzzy decision tree structure

    NASA Astrophysics Data System (ADS)

    Smith, James F., III; Rhyne, Robert D., II; Fisher, Kristin

    2002-03-01

    A fuzzy logic based resource manager (RM) has been developed that automatically allocates electronic attack resources in real-time over many dissimilar platforms. Two different data mining algorithms have been developed to determine rules, strategies, and fuzzy decision tree structure. The first data mining algorithm uses a genetic algorithm as a data mining function and is called from an electronic game. The game allows a human expert to play against the resource manager in a simulated battlespace with each of the defending platforms being exclusively directed by the fuzzy resource manager and the attacking platforms being controlled by the human expert or operating autonomously under their own logic. This approach automates the data mining problem. The game automatically creates a database reflecting the domain expert's knowledge. It calls a data mining function, a genetic algorithm, for data mining of the database as required and allows easy evaluation of the information mined in the second step. The criterion for re-optimization is discussed as well as experimental results. Then a second data mining algorithm that uses a genetic program as a data mining function is introduced to automatically discover fuzzy decision tree structures. Finally, a fuzzy decision tree generated through this process is discussed.
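
    The data mining step can be sketched as a standard genetic algorithm over rule parameters, with fitness measured as agreement with the expert decisions recorded in the game-generated database. The encoding, operators, and rates below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ga_mine(fitness, n_params, pop_size=30, generations=50):
        """fitness: callable scoring a parameter vector (values in [0, 1])
        against the database of expert decisions recorded by the game."""
        pop = rng.uniform(0, 1, size=(pop_size, n_params))
        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep best half
            # Single-point crossover between random parent pairs.
            pairs = rng.integers(0, len(parents), size=(pop_size, 2))
            cuts = rng.integers(1, n_params, size=pop_size)
            children = np.array([np.r_[parents[a][:c], parents[b][c:]]
                                 for (a, b), c in zip(pairs, cuts)])
            children += rng.normal(scale=0.05, size=children.shape)  # mutation
            pop = np.clip(children, 0, 1)
        return pop[np.argmax([fitness(ind) for ind in pop])]
    ```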

  10. Ophthalmology and vision science research: part 5: surfing or sieving--using literature databases wisely.

    PubMed

    Sherwin, Trevor; Gilhotra, Amardeep K

    2006-02-01

    Literature databases are an ever-expanding resource available to the field of medical sciences. Understanding how to use such databases efficiently is critical for those involved in research. However, for the uninitiated, getting started is a major hurdle to overcome and for the occasional user, the finer points of database searching remain an unacquired skill. In the fifth and final article in this series aimed at those embarking on ophthalmology and vision science research, we look at how the beginning researcher can start to use literature databases and, by using a stepwise approach, how they can optimize their use. This instructional paper gives a hypothetical example of a researcher writing a review article and how he or she acquires the necessary scientific literature for the article. A prototype search of the Medline database is used to illustrate how even a novice might swiftly acquire the skills required for a medium-level search. It provides examples and key tips that can increase the proficiency of the occasional user. Pitfalls of database searching are discussed, as are the limitations of which the user should be aware.

  11. Using ontology databases for scalable query answering, inconsistency detection, and data integration

    PubMed Central

    Dou, Dejing

    2011-01-01

    An ontology database is a basic relational database management system that models an ontology plus its instances. To reason over the transitive closure of instances in the subsumption hierarchy, for example, an ontology database can either unfold views at query time or propagate assertions using triggers at load time. In this paper, we use existing benchmarks to evaluate our method—using triggers—and we demonstrate that by forward computing inferences, we not only improve query time, but the improvement appears to cost only more space (not time). However, we go on to show that the true penalties were simply opaque to the benchmark, i.e., the benchmark inadequately captures load-time costs. We have applied our methods to two case studies in biomedicine, using ontologies and data from genetics and neuroscience to illustrate two important applications: first, ontology databases answer ontology-based queries effectively; second, using triggers, ontology databases detect instance-based inconsistencies—something not possible using views. Finally, we demonstrate how to extend our methods to perform data integration across multiple, distributed ontology databases. PMID:22163378
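
    The trigger-based, load-time propagation the paper evaluates can be illustrated with sqlite3 from Python: inserting an instance assertion fires a trigger that also asserts the superclasses, forward-computing the subsumption closure so later queries need no view unfolding. The toy schema below is ours, not the paper's.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA recursive_triggers = ON")  # let the trigger chain upward
    conn.executescript("""
    CREATE TABLE subclass_of (sub TEXT, sup TEXT);
    CREATE TABLE assertion (individual TEXT, class TEXT,
                            UNIQUE (individual, class));
    -- Forward-compute inferences at load time instead of query time.
    CREATE TRIGGER propagate AFTER INSERT ON assertion
    BEGIN
        INSERT OR IGNORE INTO assertion (individual, class)
        SELECT NEW.individual, sup FROM subclass_of WHERE sub = NEW.class;
    END;
    """)
    conn.executemany("INSERT INTO subclass_of VALUES (?, ?)",
                     [("Neuron", "Cell"), ("Cell", "AnatomicalEntity")])
    conn.execute("INSERT INTO assertion VALUES ('n1', 'Neuron')")
    # The closure is already materialized: n1 is found as an AnatomicalEntity.
    print(conn.execute("SELECT individual FROM assertion "
                       "WHERE class = 'AnatomicalEntity'").fetchall())
    ```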

  12. Final report on AFRIMETS comparison of liquid in glass thermometer calibrations from -35 °C to 250 °C

    NASA Astrophysics Data System (ADS)

    Liedberg, Hans; Kebede Ejigu, Efrem; Madiba, Tshifhiwa; du Clou, Sven; Chibaya, Blessing; Mwazi, Victor; Kajane, Tebogo; Mundembe, Victor; Kwong, Christian Ng Ha; Madeleine, Gilbert

    2017-01-01

    A Regional Metrology Organization (RMO) supplementary comparison of liquid-in-glass thermometer calibrations (AFRIMETS.T-S5) was carried out by the National Metrology Institute of South Africa (NMISA), Zimbabwe Scientific & Industrial Research & Development Centre—National Metrology Institute (SIRDC-NMI), Zambia Bureau of Standards (ZABS), Botswana Bureau of Standards (BOBS), Namibian Standards Institute (NSI), Mauritius Standards Bureau (MSB) and Seychelles Bureau of Standards (SBS) between January and September 2016. The temperature range of the intercomparison is -35 °C to 250 °C. The results of this comparison are reported here, along with descriptions of the artefacts used. This report also presents the uncertainty budget of each participant. The results are analysed and normalized error (En) values are reported. Main text: To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
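
    The normalized error reported for each participant is, in its usual key-comparison form (assuming expanded uncertainties, typically at k = 2):

    ```latex
    E_n = \frac{x_{\mathrm{lab}} - x_{\mathrm{ref}}}
               {\sqrt{U_{\mathrm{lab}}^{2} + U_{\mathrm{ref}}^{2}}}
    ```

    with |En| <= 1 conventionally taken as agreement within the claimed uncertainties.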

  13. Potential clinical impact of advanced imaging and computer-aided diagnosis in chest radiology: importance of radiologist's role and successful observer study.

    PubMed

    Li, Feng

    2015-07-01

    This review paper is based on our research experience in the past 30 years. The importance of radiologists' role is discussed in the development or evaluation of new medical images and of computer-aided detection (CAD) schemes in chest radiology. The four main topics include (1) introducing what diseases can be included in a research database for different imaging techniques or CAD systems and what imaging database can be built by radiologists, (2) understanding how radiologists' subjective judgment can be combined with technical objective features to improve CAD performance, (3) sharing our experience in the design of successful observer performance studies, and (4) finally, discussing whether the new images and CAD systems can improve radiologists' diagnostic ability in chest radiology. In conclusion, advanced imaging techniques and detection/classification of CAD systems have a potential clinical impact on improvement of radiologists' diagnostic ability, for both the detection and the differential diagnosis of various lung diseases, in chest radiology.

  14. Driving change in rural workforce planning: the medical schools outcomes database.

    PubMed

    Gerber, Jonathan P; Landau, Louis I

    2010-01-01

    The Medical Schools Outcomes Database (MSOD) is an ongoing longitudinal tracking project of medical students from all medical schools in Australia and New Zealand. It was established in 2005 to track the career trajectories of medical students and will directly help develop models of workforce flow, particularly with respect to rural and remote shortages. This paper briefly outlines the MSOD project and reports on key methodological factors in tracking medical students. Finally, the potential impact of the MSOD on understanding changes in rural practice intentions is illustrated using data from the 2005 pilot cohort (n = 112). Rural placements were associated with a shift towards rural practice intentions, while those who intended to practice rurally at both the start and end of medical school tended to be older and interested in a generalist career. Continuing work will track these and future students as they progress through the workforce, as well as exploring issues such as the career trajectories of international fee-paying students, workforce succession planning, and the evaluation of medical education initiatives.

  15. Baseline information development for energy smart schools -- applied research, field testing and technology integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Tengfang; Piette, Mary Ann

    2004-08-05

    The original scope of work was to obtain and analyze existing and emerging data in four states: California, Florida, New York, and Wisconsin. The goal of this data collection was to deliver a baseline database, or recommendations for such a database, that could contain window and daylighting features and energy performance characteristics of kindergarten through 12th grade (K-12) school buildings (or of classrooms, when available). In particular, data analyses were performed on the California Commercial End-Use Survey (CEUS) databases to understand school energy use, features of window glazing, and availability of daylighting in California K-12 schools. The outcomes from this baseline task can be used to assist in establishing a database of school energy performance, assessing applications of existing technologies relevant to window and daylighting design, and identifying future R&D needs, in line with the overall project goals as outlined in the proposal. Through the review and analysis of these data, it is clear that many compounding factors impact energy use in K-12 school buildings in the U.S., and that there are various challenges in understanding the impact of window glazing and skylight design features on K-12 classroom energy use. First, the energy data in the existing CEUS databases provide, at most, aggregated electricity and/or gas usage for building establishments that include other school facilities in addition to classroom spaces. Although the percentage of classroom floor area in schools is often available from the databases, there is no additional information that can be used to quantitatively segregate the EUI (energy use intensity) for classroom spaces; to quantify classroom EUI, sub-metering of energy usage by classroom must be obtained. Second, magnitudes of energy use for electric lighting are not attainable from the existing databases, nor are the lighting levels contributed by artificial lighting or daylight, so it is impossible to reasonably estimate lighting energy consumption for classroom areas in the schools sampled in this project. Third, many other compounding factors may also influence overall classroom energy use, e.g., ventilation, insulation, system efficiency, occupancy, controls, schedules, and weather. Fourth, although we examined school EUI grouped by various factors such as climate zone and the window and daylighting design features recorded in the California databases, no statistically significant associations could be identified from the sampled California K-12 schools in the current California CEUS. There are opportunities to expand such analyses by developing and including more powerful CEUS databases in the future. Finally, a list of parameters is recommended for future database development and for use in future investigations of K-12 classroom energy use, window and skylight design, and possible relations between them. Key parameters include: (1) energy end-use data for lighting systems, classrooms, and schools; (2) building design and operation, including features of windows and daylighting; and (3) other key parameters and information needed to investigate overall energy use, building and system design, operation, and services provided.
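
    The segregation problem described above can be stated compactly (our restatement, not the report's notation). Whole-building metering yields only the aggregate

      $E_{\text{school}} = \sum_i \mathrm{EUI}_i \, A_i$,

    a single equation in which every space-type intensity EUI_i is unknown; knowing the classroom floor-area share alone therefore cannot recover the classroom EUI, and sub-metering is what supplies the missing equations.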

  16. Characterization of hypersensitivity reactions reported among Andrographis paniculata users in Thailand using Health Product Vigilance Center (HPVC) database.

    PubMed

    Suwankesawong, Wimon; Saokaew, Surasak; Permsuwan, Unchalee; Chaiyakunapruk, Nathorn

    2014-12-24

    Andrographis paniculata (andrographis) is a herbal product that is widely used for various indications. Hypersensitivity reactions have been reported among subjects receiving Andrographis paniculata in Thailand, and understanding the characteristics of patients, adverse events, and clinical outcomes is essential for ensuring population safety. This study aimed to describe the characteristics of hypersensitivity reactions reported in patients receiving andrographis-containing products in Thailand, using the national pharmacovigilance database. Thai Vigibase data from February 2001 to December 2012 involving andrographis products were used. This database includes reports submitted through the spontaneous reporting system and intensive monitoring programmes, and contains patient characteristics, adverse events associated with andrographis products, and details on seriousness, causality, and clinical outcomes. Case reports were included for final analysis if they met the inclusion criteria: 1) reports with andrographis being the only suspected cause, 2) reports with terms consistent with the constellation of hypersensitivity reactions, and 3) reports with terms considered critical terms according to WHO criteria. Descriptive statistics were used. A total of 248 case reports of andrographis-associated adverse events were identified. Only 106 case reports specified an andrographis herbal product as the only suspected drug and reported at least one term consistent with the constellation of hypersensitivity reactions. Most case reports (89%) came from the spontaneous reporting system, with no previously documented history of drug allergy (88%). Of these, 18 case reports were classified as serious, with 16 cases requiring hospitalization. For final assessment, case reports with terms consistent with the constellation of hypersensitivity reactions and critical terms were included; 13 case reports met these criteria, including anaphylactic shock (n = 5), anaphylactic reaction (n = 4) and angioedema (n = 4). Time to development of symptoms ranged from 5 minutes to 1 day. The doses of andrographis used varied from 352 mg to 1,750 mg. Causality assessments of the 13 case reports were certain (n = 3), probable (n = 8) and possible (n = 2). Our findings suggest that hypersensitivity reactions have been reported among patients receiving Andrographis paniculata, and healthcare professionals should be aware of this potential risk. Further investigation of the causal relationship is needed; meanwhile, inclusion of hypersensitivity reactions in andrographis product labeling should be considered.
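
    The three inclusion criteria amount to a simple filter over the report table; a hedged sketch in Python/pandas, with file and column names that are hypothetical (the Thai Vigibase extract is not distributed in this form):

      import pandas as pd

      reports = pd.read_csv("andrographis_reports.csv")  # hypothetical extract
      hypersensitivity = {"anaphylactic shock", "anaphylactic reaction", "angioedema"}

      eligible = reports[
          (reports["sole_suspected_drug"] == "andrographis")   # criterion 1
          & reports["reaction_term"].isin(hypersensitivity)    # criterion 2
          & reports["who_critical_term"]                       # criterion 3 (boolean flag)
      ]
      print(len(eligible))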

  17. IRIS Toxicological Review of Methanol (Non-Cancer) ...

    EPA Pesticide Factsheets

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of methanol (non-cancer) that, when finalized, will appear in the Integrated Risk Information System (IRIS) database.

  18. The European general thoracic surgery database project.

    PubMed

    Falcoz, Pierre Emmanuel; Brunelli, Alessandro

    2014-05-01

    The European Society of Thoracic Surgeons (ESTS) Database is a free registry created by ESTS in 2001. The current online version was launched in 2007. It currently runs on a Dendrite platform with extensive data security and frequent backups. The main features are a specialty-specific, procedure-specific, prospectively maintained, periodically audited and web-based electronic database, designed for quality control and performance monitoring, which allows for the collection of all general thoracic procedures. Data collection is the "backbone" of the ESTS database. It includes many risk factors, processes of care and outcomes, which are specially designed for quality control and performance audit. Users can download and export their own data and use them for internal analyses and quality control audits. The ESTS database represents the gold standard of clinical data collection for European general thoracic surgery. Over the past years, the ESTS database has achieved many accomplishments; in particular, it has hit two major milestones: more than 235 participating centers and 70,000 surgical procedures. The ESTS database is a snapshot of surgical practice that aims at improving patient care. In other words, data capture should become integral to routine patient care, with the final objective of improving quality of care within Europe.

  19. Managing, profiling and analyzing a library of 2.6 million compounds gathered from 32 chemical providers.

    PubMed

    Monge, Aurélien; Arrault, Alban; Marot, Christophe; Morin-Allory, Luc

    2006-08-01

    The data for 3.8 million compounds from the structural databases of 32 providers were gathered and stored in a single chemical database. Duplicates were removed using the IUPAC International Chemical Identifier (InChI), after which 2.6 million compounds remained. Each provider database and the final merged database were studied in terms of uniqueness, diversity, frameworks, and 'drug-like' and 'lead-like' properties. This study also shows that there are more than 87,000 frameworks in the database. It contains 2.1 million 'drug-like' molecules, of which more than one million are 'lead-like'. This study was carried out using 'ScreeningAssistant', a software package dedicated to chemical database management and screening set generation. Compounds are stored in a MySQL database and all operations on this database are carried out by Java code. Druglikeness and leadlikeness are estimated with 'in-house' scores using functions that estimate conformance to properties; uniqueness is established using the InChI code, and diversity using molecular frameworks and fingerprints. The software has been designed to facilitate updating of the database. 'ScreeningAssistant' is freely available under the GPL license.
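
    The InChI-based duplicate removal can be sketched in a few lines of Python; RDKit is used here as an illustrative stand-in (the paper's ScreeningAssistant is Java-based), and the example SMILES are hypothetical:

      from rdkit import Chem

      def deduplicate(smiles_list):
          """Keep one representative per InChI, mirroring the merge step described above."""
          seen, unique = set(), []
          for smi in smiles_list:
              mol = Chem.MolFromSmiles(smi)
              if mol is None:
                  continue                    # skip unparsable structures
              inchi = Chem.MolToInchi(mol)    # provider-independent identifier
              if inchi not in seen:
                  seen.add(inchi)
                  unique.append(smi)
          return unique

      # Benzene drawn two ways collapses to a single record:
      print(deduplicate(["c1ccccc1", "C1=CC=CC=C1"]))  # ['c1ccccc1']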

  20. Relational-database model for improving quality assurance and process control in a composite manufacturing environment

    NASA Astrophysics Data System (ADS)

    Gentry, Jeffery D.

    2000-05-01

    A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store related data. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot number, part type or individual serial number. Relationships between the manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered, including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web-based client/server architectures are discussed in the context of composite material manufacturing.
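
    A minimal sketch of the lot-indexed layout the paper describes, using Python's built-in sqlite3; the table and column names are illustrative assumptions:

      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
      -- One row per manufactured lot; process and QA tables reference it.
      CREATE TABLE lot (lot_no TEXT PRIMARY KEY, part_type TEXT);
      CREATE TABLE process_var (lot_no TEXT REFERENCES lot, name TEXT, value REAL);
      CREATE TABLE qa_measure  (lot_no TEXT REFERENCES lot, name TEXT, value REAL);
      INSERT INTO lot VALUES ('L001', 'wing skin'), ('L002', 'wing skin');
      INSERT INTO process_var VALUES ('L001', 'cure_temp_C', 177), ('L002', 'cure_temp_C', 171);
      INSERT INTO qa_measure  VALUES ('L001', 'void_pct', 0.8),   ('L002', 'void_pct', 2.1);
      """)
      # Join process variables to QA measurements by lot number to look for
      # correlations, e.g. cure temperature vs. void content for one part type.
      rows = db.execute("""
          SELECT l.lot_no, p.value AS cure_temp_C, q.value AS void_pct
          FROM lot l
          JOIN process_var p ON p.lot_no = l.lot_no AND p.name = 'cure_temp_C'
          JOIN qa_measure  q ON q.lot_no = l.lot_no AND q.name = 'void_pct'
          WHERE l.part_type = 'wing skin'
      """).fetchall()
      print(rows)  # [('L001', 177.0, 0.8), ('L002', 171.0, 2.1)]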

  1. Effect of the sequence data deluge on the performance of methods for detecting protein functional residues.

    PubMed

    Garrido-Martín, Diego; Pazos, Florencio

    2018-02-27

    The exponential accumulation of new sequences in public databases is expected to improve the performance of all the approaches for predicting protein structural and functional features. Nevertheless, this was never assessed or quantified for some widely used methodologies, such as those aimed at detecting functional sites and functional subfamilies in protein multiple sequence alignments. Using raw protein sequences as only input, these approaches can detect fully conserved positions, as well as those with a family-dependent conservation pattern. Both types of residues are routinely used as predictors of functional sites and, consequently, understanding how the sequence content of the databases affects them is relevant and timely. In this work we evaluate how the growth and change with time in the content of sequence databases affect five sequence-based approaches for detecting functional sites and subfamilies. We do that by recreating historical versions of the multiple sequence alignments that would have been obtained in the past based on the database contents at different time points, covering a period of 20 years. Applying the methods to these historical alignments allows quantifying the temporal variation in their performance. Our results show that the number of families to which these methods can be applied sharply increases with time, while their ability to detect potentially functional residues remains almost constant. These results are informative for the methods' developers and final users, and may have implications in the design of new sequencing initiatives.
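
    The conservation signal these approaches build on can be illustrated with the simplest possible scorer, the per-column majority fraction; this toy stands in for the five (more sophisticated) methods evaluated, and the alignments are fabricated:

      from collections import Counter

      def column_conservation(alignment):
          """Fraction of the most common residue in each column (gaps excluded)."""
          scores = []
          for col in zip(*alignment):
              residues = [r for r in col if r != "-"]
              if not residues:
                  scores.append(0.0)
                  continue
              top = Counter(residues).most_common(1)[0][1]
              scores.append(top / len(residues))
          return scores

      msa_1998 = ["MKTAY-", "MKSAYD", "MRTAYD"]   # family as seen in an old database snapshot
      msa_2018 = msa_1998 + ["MKTVYD", "MKTAYE"]  # the same family after 20 years of growth
      print(column_conservation(msa_1998))
      print(column_conservation(msa_2018))

    Rebuilding the alignment from each historical database snapshot and re-running the scorer is what makes the temporal comparison possible.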

  2. LHCb Conditions database operation assistance systems

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time-dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first is a CondDB state-tracking extension to the Oracle 3D Streams replication technology, which traps cases where the CondDB replication has been corrupted. The second is an automated distribution system for the SQLite-based CondDB, which also provides smart backup and checkout mechanisms for CondDB managers and LHCb users, respectively. The third verifies and monitors the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The first two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The third has been fully designed and is currently entering the implementation stage.

  3. Compiling Holocene RSL databases from near- to far-field regions: proxies, difficulties and possible solutions

    NASA Astrophysics Data System (ADS)

    Vacchi, M.; Horton, B.; Mann, T.; Engelhart, S. E.; Rovere, A.; Nikitina, D.; Bender, M.; Roy, K.; Peltier, W. R.

    2017-12-01

    Reconstructions of relative sea level (RSL) have implications for the investigation of crustal movements, the calibration of earth rheology models and the reconstruction of ice sheets. In recent years, efforts have been made to create RSL databases following a standardized methodology. These regional databases provide a framework for developing our understanding of the primary mechanisms of RSL change since the Last Glacial Maximum, and a long-term baseline against which to gauge changes in sea level during the 20th century and forecasts for the 21st. We report here the results of recently compiled databases from very different climatic and geographic contexts: the northeastern Canadian coast, the Mediterranean Sea, and southeastern Asia. Our re-evaluation of sea-level indicators from geological and archaeological investigations has yielded more than 3000 RSL data points, mainly from salt and freshwater wetlands or adjacent estuarine sediment, isolation basins, beach ridges, fixed biological indicators, beachrocks, and coastal archaeological structures. We outline some of the inherent difficulties, and potential solutions, in analysing sea-level data from such different depositional environments. In particular, we discuss problems related to the definition of a standardized indicative meaning and to the re-evaluation of old radiocarbon samples. We further address complex tectonic influences and a framework for comparing such a large variety of RSL data points. Finally, we discuss the implications of our results for the patterns of glacio-isostatic adjustment in these regions.

  4. Nuclear Data and Reaction Rate Databases in Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Lippuner, Jonas

    2018-06-01

    Astrophysical simulations and models require a large variety of micro-physics data, such as equation of state tables, atomic opacities, properties of nuclei, and nuclear reaction rates. Some of the required data is experimentally accessible, but the extreme conditions present in many astrophysical scenarios cannot be reproduced in the laboratory and thus theoretical models are needed to supplement the empirical data. Collecting data from various sources and making them available as a database in a unified format is a formidable task. I will provide an overview of the data requirements in astrophysics with an emphasis on nuclear astrophysics. I will then discuss some of the existing databases, the science they enable, and their limitations. Finally, I will offer some thoughts on how to design a useful database.

  5. VIEWCACHE: An incremental pointer-base access method for distributed databases. Part 1: The universal index system design document. Part 2: The universal index system low-level design document. Part 3: User's guide. Part 4: Reference manual. Part 5: UIMS test suite

    NASA Technical Reports Server (NTRS)

    Kelley, Steve; Roussopoulos, Nick; Sellis, Timos

    1992-01-01

    The goal of the Universal Index System (UIS) is to provide an easy-to-use and reliable interface to many different kinds of database systems. The impetus for this system was to simplify database index management for users, thus encouraging the use of indexes. As the idea grew into an actual system design, increasing database performance by facilitating the use of time-saving techniques at the user level became a theme of the project. This final report describes the design and implementation of UIS and its language interfaces, and includes the User's Guide and the Reference Manual.

  6. Analysing inter-relationships among water, governance, human development variables in developing countries

    NASA Astrophysics Data System (ADS)

    Dondeynaz, C.; Carmona Moreno, C.; Céspedes Lorente, J. J.

    2012-10-01

    The "Integrated Water Resources Management" principle was formally laid down at the International Conference on Water and Sustainable development in Dublin 1992. One of the main results of this conference is that improving Water and Sanitation Services (WSS), being a complex and interdisciplinary issue, passes through collaboration and coordination of different sectors (environment, health, economic activities, governance, and international cooperation). These sectors influence or are influenced by the access to WSS. The understanding of these interrelations appears as crucial for decision makers in the water sector. In this framework, the Joint Research Centre (JRC) of the European Commission (EC) has developed a new database (WatSan4Dev database) containing 42 indicators (called variables in this paper) from environmental, socio-economic, governance and financial aid flows data in developing countries. This paper describes the development of the WatSan4Dev dataset, the statistical processes needed to improve the data quality, and finally, the analysis to verify the database coherence is presented. Based on 25 relevant variables, the relationships between variables are described and organised into five factors (HDP - Human Development against Poverty, AP - Human Activity Pressure on water resources, WR - Water Resources, ODA - Official Development Aid, CEC - Country Environmental Concern). Linear regression methods are used to identify key variables having influence on water supply and sanitation. First analysis indicates that the informal urbanisation development is an important factor negatively influencing the percentage of the population having access to WSS. Health, and in particular children's health, benefits from the improvement of WSS. Irrigation is also enhancing Water Supply service thanks to multi-purpose infrastructure. Five country profiles are also created to deeper understand and synthetize the amount of information gathered. This new classification of countries is useful in identifying countries with a less advanced position and weaknesses to be tackled. The relevance of indicators gathered to represent environmental and water resources state is questioned in the discussion section. The paper concludes with the necessity to increase the reliability of current indicators and calls for further research on specific indicators, in particular on water quality at national scale, in order to better include environmental state in analysis to WSS.

  7. The Magnetics Information Consortium (MagIC)

    NASA Astrophysics Data System (ADS)

    Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.

    2003-12-01

    The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. Finally, the contents of these template files will be automatically parsed into the online relational database.

  8. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024

  9. Aerodynamic Analyses and Database Development for Ares I Vehicle First Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Pei, Jing; Pinier, Jeremy T.; Klopfer, Goetz H.; Holland, Scott D.; Covell, Peter F.

    2011-01-01

    This paper presents the aerodynamic analysis and database development for first stage separation of the Ares I A106 crew launch vehicle configuration. Separate 6-DOF databases were created for the first stage and the upper stage, and each database consists of three components: (a) isolated or freestream coefficients, (b) power-off proximity increments, and (c) power-on proximity increments. The isolated and power-off incremental databases were developed using data from 1%-scale model tests in AEDC VKF Tunnel A. The power-on proximity increments were developed using OVERFLOW CFD solutions. The database also includes incremental coefficients for failure scenarios involving one BDM and one USM.
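
    One plausible reading of this three-component structure is the usual increment build-up: for an aerodynamic coefficient C at a given separation state (our notation, not the paper's),

      $C = C_{\text{isolated}} + \Delta C_{\text{prox}}^{\text{power-off}} + \Delta C_{\text{prox}}^{\text{power-on}}$,

    with the first two terms anchored by the 1%-scale tunnel data and the power-on increment supplied by the OVERFLOW solutions.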

  10. [Development of the nursing diagnosis risk for pressure ulcer].

    PubMed

    Santos, Cássia Teixeira Dos; Almeida, Miriam de Abreu; Oliveira, Magáli Costa; Victor, Marco Antônio de Goes; Lucena, Amália de Fátima

    2015-06-01

    The objective of this study was to develop the definition and compile the risk factors for a new nursing diagnosis entitled "Risk for pressure ulcer". The process was guided by the research question, "What are the risk factors for development of a pressure ulcer (PU) and what is its definition?" An integrative literature review was conducted of articles published in Portuguese, English or Spanish from 2002 to 2012 and indexed in the Lilacs/SciELO, MEDLINE/PubMed Central and Web of Science databases. The final sample comprised 21 articles that answered the research question; these articles were analyzed and summarized in charts. A definition was constructed and 19 risk factors were selected for the new nursing diagnosis, "Risk for pressure ulcer". Identification and definition of the components of the new nursing diagnosis should help nurses prevent pressure ulcer events.

  11. A prediction model-based algorithm for computer-assisted database screening of adverse drug reactions in the Netherlands.

    PubMed

    Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P

    2018-02-01

    The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
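
    A hedged sketch of the modelling step, a logistic regression over the five named predictors scored by AUC, with synthetic data standing in for the non-public training set:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      # Stand-ins for the five predictors named in the abstract: number of reports,
      # disproportionality, healthcare-professional reports, marketing-authorization-
      # holder reports, Naranjo score.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 5))
      y = rng.integers(0, 2, size=500)   # 1 = association listed in the SmPC (gold standard)

      model = LogisticRegression().fit(X, y)
      auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
      print(f"AUC = {auc:.3f}")          # the paper reports 0.740 on internal validation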

  12. The EPIC nutrient database project (ENDB): a first attempt to standardize nutrient databases across the 10 European countries participating in the EPIC study.

    PubMed

    Slimani, N; Deharveng, G; Unwin, I; Southgate, D A T; Vignat, J; Skeie, G; Salvini, S; Parpinel, M; Møller, A; Ireland, J; Becker, W; Farran, A; Westenbrink, S; Vasilopoulou, E; Unwin, J; Borgejordet, A; Rohrmann, S; Church, S; Gnagnarella, P; Casagrande, C; van Bakel, M; Niravong, M; Boutron-Ruault, M C; Stripp, C; Tjønneland, A; Trichopoulou, A; Georga, K; Nilsson, S; Mattisson, I; Ray, J; Boeing, H; Ocké, M; Peeters, P H M; Jakszyn, P; Amiano, P; Engeset, D; Lund, E; de Magistris, M Santucci; Sacerdote, C; Welch, A; Bingham, S; Subar, A F; Riboli, E

    2007-09-01

    This paper describes the ad hoc methodological concepts and procedures developed to improve the comparability of Nutrient databases (NDBs) across the 10 European countries participating in the European Prospective Investigation into Cancer and Nutrition (EPIC). This was required because there is currently no European reference NDB available. A large network involving national compilers, nutritionists and experts on food chemistry and computer science was set up for the 'EPIC Nutrient DataBase' (ENDB) project. A total of 550-1500 foods derived from about 37,000 standardized EPIC 24-h dietary recalls (24-HDRS) were matched as closely as possible to foods available in the 10 national NDBs. The resulting national data sets (NDS) were then successively documented, standardized and evaluated according to common guidelines and using a DataBase Management System specifically designed for this project. The nutrient values of foods unavailable or not readily available in NDSs were approximated by recipe calculation, weighted averaging or adjustment for weight changes and vitamin/mineral losses, using common algorithms. The final ENDB contains about 550-1500 foods depending on the country and 26 common components. Each component value was documented and standardized for unit, mode of expression, definition and chemical method of analysis, as far as possible. Furthermore, the overall completeness of NDSs was improved (≥99%), particularly for beta-carotene and vitamin E. The ENDB constitutes a first real attempt to improve the comparability of NDBs across European countries. This methodological work will provide a useful tool for nutritional research as well as end-user recommendations to improve NDBs in the future.

  13. Geologic Map and Map Database of Eastern Sonoma and Western Napa Counties, California

    USGS Publications Warehouse

    Graymer, R.W.; Brabb, E.E.; Jones, D.L.; Barnes, J.; Nicholson, R.S.; Stamski, R.E.

    2007-01-01

    Introduction: This report contains a new 1:100,000-scale geologic map, derived from a set of geologic map databases (Arc-Info coverages) containing information at 1:62,500-scale resolution, and a new description of the geologic map units and structural relations in the map area. Prepared as part of the San Francisco Bay Region Mapping Project, the map covers the north-central part of the San Francisco Bay region and forms the final piece of the effort to generate new, digital geologic maps and map databases for an area that includes Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Santa Cruz, Solano, and Sonoma Counties. Geologic mapping in Lake County, in the north-central part of the map extent, was not within the scope of the Project. The map and map database integrate both previously published reports and new geologic mapping and field checking by the authors (see the Sources of Data index map on the map sheet, or the Arc-Info coverage eswn-so and the text file eswn-so.txt). This report contains new ideas about the geologic structures in the map area, including the active San Andreas Fault system, as well as the geologic units and their relations. Together, the map (or map database) and the unit descriptions in this report describe the composition, distribution, and orientation of geologic materials and structures within the study area at regional scale. Regional geologic information is important for analysis of earthquake shaking, liquefaction susceptibility, landslide susceptibility, engineering materials properties, mineral resources and hazards, as well as groundwater resources and hazards. These data also assist in answering questions about the geologic history and development of the California Coast Ranges.

  14. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boas, F. Edward, E-mail: boasf@mskcc.org; Srimathveeravalli, Govindarajan, E-mail: srimaths@mskcc.org; Durack, Jeremy C., E-mail: durackj@mskcc.org

    Purpose: To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Materials and Methods: Ice ball size and shape were simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. Results: Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Conclusion: Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
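
    For reference, the Pennes bioheat equation named above has the standard form (our transcription; freezing models add latent-heat terms not shown):

      $\rho c \, \partial T / \partial t = \nabla \cdot (k \nabla T) + \rho_b c_b \omega_b (T_a - T) + Q_m$

    where ρ, c and k are the tissue density, specific heat and thermal conductivity, ρ_b, c_b and ω_b the blood density, specific heat and perfusion rate, T_a the arterial blood temperature, and Q_m the metabolic heat generation.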

  15. dbPTM 2016: 10-year anniversary of a resource for post-translational modification of proteins.

    PubMed

    Huang, Kai-Yao; Su, Min-Gang; Kao, Hui-Ju; Hsieh, Yun-Chung; Jhong, Jhih-Hua; Cheng, Kuang-Hao; Huang, Hsien-Da; Lee, Tzong-Yi

    2016-01-04

    Owing to the importance of the post-translational modifications (PTMs) of proteins in regulating biological processes, the dbPTM (http://dbPTM.mbc.nctu.edu.tw/) was developed as a comprehensive database of experimentally verified PTMs from several databases with annotations of potential PTMs for all UniProtKB protein entries. For this 10th anniversary of dbPTM, the updated resource provides not only a comprehensive dataset of experimentally verified PTMs, supported by the literature, but also an integrative interface for accessing all available databases and tools that are associated with PTM analysis. As well as collecting experimental PTM data from 14 public databases, this update manually curates over 12 000 modified peptides, including the emerging S-nitrosylation, S-glutathionylation and succinylation, from approximately 500 research articles, which were retrieved by text mining. As the number of available PTM prediction methods increases, this work compiles a non-homologous benchmark dataset to evaluate the predictive power of online PTM prediction tools. An increasing interest in the structural investigation of PTM substrate sites motivated the mapping of all experimental PTM peptides to protein entries of Protein Data Bank (PDB) based on database identifier and sequence identity, which enables users to examine spatially neighboring amino acids, solvent-accessible surface area and side-chain orientations for PTM substrate sites on tertiary structures. Since drug binding in PDB is annotated, this update identified over 1100 PTM sites that are associated with drug binding. The update also integrates metabolic pathways and protein-protein interactions to support the PTM network analysis for a group of proteins. Finally, the web interface is redesigned and enhanced to facilitate access to this resource. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
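
    A minimal sketch of the voxel-based flag map used to drop redundant points; the voxel size and point data are illustrative, and the real system runs the CPU/GPU pipeline described above:

      import numpy as np

      def register_points(points, occupied, voxel=0.05):
          """Quantize incoming points to a voxel grid; keep only points whose
          voxel has not been flagged yet (redundant points are dropped)."""
          kept = []
          for p, key in zip(points, map(tuple, np.floor(points / voxel).astype(int))):
              if key not in occupied:
                  occupied.add(key)       # flag the voxel as occupied
                  kept.append(p)
          return np.array(kept)

      occupied = set()
      scan = np.array([[0.010, 0.020, 0.000],
                       [0.012, 0.018, 0.001],   # falls in the same voxel as the first point
                       [1.000, 1.000, 1.000]])
      print(len(register_points(scan, occupied)))  # 2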

  17. UKPMC: a full text article resource for the life sciences.

    PubMed

    McEntyre, Johanna R; Ananiadou, Sophia; Andrews, Stephen; Black, William J; Boulderstone, Richard; Buttery, Paula; Chaplin, David; Chevuru, Sandeepreddy; Cobley, Norman; Coleman, Lee-Ann; Davey, Paul; Gupta, Bharti; Haji-Gholam, Lesley; Hawkins, Craig; Horne, Alan; Hubbard, Simon J; Kim, Jee-Hyub; Lewin, Ian; Lyte, Vic; MacIntyre, Ross; Mansoor, Sami; Mason, Linda; McNaught, John; Newbold, Elizabeth; Nobata, Chikashi; Ong, Ernest; Pillai, Sharmila; Rebholz-Schuhmann, Dietrich; Rosie, Heather; Rowbotham, Rob; Rupp, C J; Stoehr, Peter; Vaughan, Philip

    2011-01-01

    UK PubMed Central (UKPMC) is a full-text article database that extends the functionality of the original PubMed Central (PMC) repository. The UKPMC project was launched as the first 'mirror' site to PMC, which in analogy to the International Nucleotide Sequence Database Collaboration, aims to provide international preservation of the open and free-access biomedical literature. UKPMC (http://ukpmc.ac.uk) has undergone considerable development since its inception in 2007 and now includes both a UKPMC and PubMed search, as well as access to other records such as Agricola, Patents and recent biomedical theses. UKPMC also differs from PubMed/PMC in that the full text and abstract information can be searched in an integrated manner from one input box. Furthermore, UKPMC contains 'Cited By' information as an alternative way to navigate the literature and has incorporated text-mining approaches to semantically enrich content and integrate it with related database resources. Finally, UKPMC also offers added-value services (UKPMC+) that enable grantees to deposit manuscripts, link papers to grants, publish online portfolios and view citation information on their papers. Here we describe UKPMC and clarify the relationship between PMC and UKPMC, providing historical context and future directions, 10 years on from when PMC was first launched.

  18. Bio-psycho-social factors affecting sexual self-concept: A systematic review.

    PubMed

    Potki, Robabeh; Ziaei, Tayebe; Faramarzi, Mahbobeh; Moosazadeh, Mahmood; Shahhosseini, Zohreh

    2017-09-01

    Nowadays, it is believed that the mental and emotional aspects of sexual well-being are important aspects of sexual health. Sexual self-concept is a major component of sexual health and the core of sexuality. It is defined as the cognitive perspective concerning the sexual aspects of 'self', and refers to the individual's self-perception as a sexual creature. The aim of this study was to assess the different factors affecting sexual self-concept. English electronic databases including PubMed, Scopus, Web of Science and Google Scholar, as well as two Iranian databases, the Scientific Information Database and Iranmedex, were searched for English- and Persian-language articles published between 1996 and 2016. Of 281 retrieved articles, 37 were finally included in this review. Factors affecting sexual self-concept were categorized into biological, psychological and social factors. In the biological category, age, gender, marital status, race, disability and sexually transmitted infections are described. In the psychological category, the impact of body image, sexual abuse in childhood and mental health history are presented. Lastly, in the social category, the roles of parents, peers and the media are discussed. As the development of sexual self-concept is influenced by multiple events in an individual's life, an integrated implementation of health policies is recommended to promote sexual self-concept.

  19. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321

  20. The human role in space (THURIS) applications study. Final briefing

    NASA Technical Reports Server (NTRS)

    Maybee, George W.

    1987-01-01

    The THURIS (The Human Role in Space) application is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data converting it into a form which can be analyzed using the THURIS cost model outputs. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.

  1. Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals.

    PubMed

    Bhattacharya, Ujjwal; Chaudhuri, B B

    2009-03-01

    This article primarily concerns the problem of isolated handwritten numeral recognition for major Indian scripts. The principal contributions presented here are (a) the pioneering development of two databases of handwritten numerals for the two most popular Indian scripts, (b) a multistage cascaded recognition scheme using wavelet-based multiresolution representations and multilayer perceptron classifiers, and (c) the application of (b) to the recognition of mixed handwritten numerals of three scripts: Devanagari, Bangla and English. The present databases include 22,556 and 23,392 handwritten isolated numeral samples of Devanagari and Bangla, respectively, collected from real-life situations, and these can be made available free of cost to researchers at other academic institutions. In the proposed scheme, a numeral is subjected to three multilayer perceptron classifiers corresponding to three coarse-to-fine resolution levels in a cascaded manner. If rejection occurs even at the highest resolution, another multilayer perceptron is used as the final attempt to recognize the input numeral by combining the outputs of the three classifiers of the previous stages. This scheme has been extended to the situation in which the script of a document is not known a priori, or the numerals written on a document belong to different scripts. Handwritten numerals in mixed scripts are frequently found in Indian postal mail and table-form documents.
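
    A hedged sketch of the coarse-to-fine cascade with rejection; sklearn MLPs on synthetic data stand in for the paper's wavelet features and trained perceptrons, and the final mean-combination simplifies the paper's fourth, combining MLP:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 64))             # stand-in for 8x8 numeral images
      y = rng.integers(0, 10, size=200)

      def coarsen(X, factor):                    # crude stand-in for wavelet coarsening
          return X.reshape(len(X), -1, factor).mean(axis=2)

      levels = [4, 2, 1]                         # coarse -> fine resolutions
      cascade = [(MLPClassifier(max_iter=300).fit(coarsen(X, f), y), f) for f in levels]

      def cascade_predict(x, threshold=0.9):
          outputs = []
          for clf, f in cascade:
              p = clf.predict_proba(coarsen(x[None, :], f))[0]
              if p.max() >= threshold:           # confident enough: accept at this level
                  return int(p.argmax())
              outputs.append(p)
          return int(np.mean(outputs, axis=0).argmax())  # last-resort combination

      print(cascade_predict(X[0]))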

  2. UKPMC: a full text article resource for the life sciences

    PubMed Central

    McEntyre, Johanna R.; Ananiadou, Sophia; Andrews, Stephen; Black, William J.; Boulderstone, Richard; Buttery, Paula; Chaplin, David; Chevuru, Sandeepreddy; Cobley, Norman; Coleman, Lee-Ann; Davey, Paul; Gupta, Bharti; Haji-Gholam, Lesley; Hawkins, Craig; Horne, Alan; Hubbard, Simon J.; Kim, Jee-Hyub; Lewin, Ian; Lyte, Vic; MacIntyre, Ross; Mansoor, Sami; Mason, Linda; McNaught, John; Newbold, Elizabeth; Nobata, Chikashi; Ong, Ernest; Pillai, Sharmila; Rebholz-Schuhmann, Dietrich; Rosie, Heather; Rowbotham, Rob; Rupp, C. J.; Stoehr, Peter; Vaughan, Philip

    2011-01-01

    UK PubMed Central (UKPMC) is a full-text article database that extends the functionality of the original PubMed Central (PMC) repository. The UKPMC project was launched as the first ‘mirror’ site to PMC, which in analogy to the International Nucleotide Sequence Database Collaboration, aims to provide international preservation of the open and free-access biomedical literature. UKPMC (http://ukpmc.ac.uk) has undergone considerable development since its inception in 2007 and now includes both a UKPMC and PubMed search, as well as access to other records such as Agricola, Patents and recent biomedical theses. UKPMC also differs from PubMed/PMC in that the full text and abstract information can be searched in an integrated manner from one input box. Furthermore, UKPMC contains ‘Cited By’ information as an alternative way to navigate the literature and has incorporated text-mining approaches to semantically enrich content and integrate it with related database resources. Finally, UKPMC also offers added-value services (UKPMC+) that enable grantees to deposit manuscripts, link papers to grants, publish online portfolios and view citation information on their papers. Here we describe UKPMC and clarify the relationship between PMC and UKPMC, providing historical context and future directions, 10 years on from when PMC was first launched. PMID:21062818

  3. ThermoFit: A Set of Software Tools, Protocols and Schema for the Organization of Thermodynamic Data and for the Development, Maintenance, and Distribution of Internally Consistent Thermodynamic Data/Model Collections

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.

    2013-12-01

    Internally consistent thermodynamic databases are critical resources that facilitate the calculation of heterogeneous phase equilibria and thereby support geochemical, petrological, and geodynamical modeling. These 'databases' are actually derived data/model systems that depend on a diverse suite of physical property measurements, calorimetric data, and experimental phase equilibrium brackets. In addition, such databases are calibrated with the adoption of various models for extrapolation of heat capacities and volumetric equations of state to elevated temperature and pressure conditions. Finally, these databases require specification of thermochemical models for the mixing properties of solid, liquid, and fluid solutions, which are often rooted in physical theory and, in turn, depend on additional experimental observations. The process of 'calibrating' a thermochemical database involves considerable effort and an extensive computational infrastructure. Because of these complexities, the community tends to rely on a small number of thermochemical databases, generated by a few researchers; these databases often have limited longevity and are universally difficult to maintain. ThermoFit is a software framework and user interface whose aim is to provide a modeling environment that facilitates the creation, maintenance and distribution of thermodynamic data/model collections. Underlying ThermoFit are data archives of fundamental physical property, calorimetric, crystallographic, and phase equilibrium constraints that provide the essential experimental information from which thermodynamic databases are traditionally calibrated. ThermoFit standardizes schema for accessing these data archives and provides web services for data mining these collections. Beyond simple data management and interoperability, ThermoFit provides a collection of visualization and software modeling tools that streamline the model/database generation process. Most notably, ThermoFit facilitates rapid visualization of predicted model outcomes and lets the user modify these outcomes through tactile- or mouse-based GUI interaction, with real-time updates that reflect the user's choices, preferences, and priorities for the derived model results. This ability permits some resolution of the problem of correlated model parameters in the common situation where thermodynamic models must be calibrated from inadequate data resources. It also allows modeling constraints to be imposed using natural data and observations (i.e., petrologic or geochemical intuition). Once a data/model collection is formulated, ThermoFit facilitates its deployment through the automated creation of web services, which users consume via web, Excel, or desktop clients. ThermoFit is currently under active development and not yet generally available; a limited-capability prototype has been coded for Macintosh computers and used to construct thermochemical models for H2O-CO2 mixed-fluid saturation in silicate liquids. The longer-term goal is to release ThermoFit as a web portal application client with server-based cloud computations supporting the modeling environment.

  4. [Review of digital ground object spectral library].

    PubMed

    Zhou, Xiao-Hu; Zhou, Ding-Wu

    2009-06-01

    A higher spectral resolution is the main direction of developing remote sensing technology, and it is quite important to set up digital ground object reflectance spectral database libraries, one of the fundamental research fields in remote sensing application. Remote sensing application has been increasingly relying on ground object spectral characteristics, and quantitative analysis has developed to a new stage. The present article summarizes and systematically introduces the research status quo and development trend of digital ground object reflectance spectral libraries at home and abroad in recent years. The spectral libraries that have been established are introduced, including libraries for desertification, plants, geology, soils, minerals, clouds, snow, the atmosphere, rocks, water, meteorites, moon rocks, man-made materials, mixtures, volatile compounds, and liquids. In the process of establishing spectral database libraries, there have been some problems, such as the lack of a uniform national spectral database standard, the lack of uniform standards for ground object features, limited comparability between different databases, and the absence of a working data sharing mechanism. This article also puts forward some suggestions on those problems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alekhin, S.I.; Ezhela, V.V.; Filimonov, B.B.

    We present an indexed guide to the literature of experimental particle physics for the years 1988-1992. About 4,000 papers are indexed by Beam/Target/Momentum, Reaction Momentum (including the final state), Final State Particle, and Accelerator/Detector/Experiment. All indices are cross-referenced to the paper's title and reference in the ID/Reference/Title Index. The information in this guide is also publicly available from a regularly updated computer database.

  6. NIE Final Report. Vermont Special Purpose Grant, January 1979 through September 1979 [and] Evaluation of the Resource Agent Program.

    ERIC Educational Resources Information Center

    Perry, Mary; Miller, Pamela A.

    Vermont's adaptation of a federal Resource Agent Program (RAP), designed to meet the in-service training needs of teachers, is described in this final report. Part of a complete dissemination system, RAP was funded as a pilot program to initiate a collection of Vermont-originated resources to be entered into a state educational database. Described…

  7. Requirements Analysis for the Army Safety Management Information System (ASMIS)

    DTIC Science & Technology

    1989-03-01

    Requirements Analysis for the Army Safety Management Information System (ASMIS), Final Report (PNL-6819, limited distribution). J. S. Littlefield and A. L. Corrigan, March 1989. ... accidents. This accident data is available under the Army Safety Management Information System (ASMIS), which is an umbrella for many databases

  8. Reliability database development for use with an object-oriented fault tree evaluation program

    NASA Technical Reports Server (NTRS)

    Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann

    1989-01-01

    A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed, or are under development, to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.

  9. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the development of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s, before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas, and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  10. Content based information retrieval in forensic image databases.

    PubMed

    Geradts, Zeno; Bijhold, Jurrien

    2002-03-01

    This paper gives an overview of the various available image databases and ways of searching these databases on image contents. Developments in research groups on searching image databases are evaluated and compared with the forensic databases that exist. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drug tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the framework in MPEG-7, in which searching in image databases is standardized. In the future, combining these databases (including DNA databases) can result in stronger forensic evidence.

  11. Individual Pesticides in Registration Review

    EPA Pesticide Factsheets

    You can use the Chemical Search database to search pesticides by chemical name and find their registration review dockets, along with Work Plans, risk assessments, interim and final decisions, tolerance rules, and cancellation actions.

  12. Aerodynamic Analyses and Database Development for Ares I Vehicle First Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Pei, Jing; Pinier, Jeremy T.; Holland, Scott D.; Covell, Peter F.; Klopfer, Goetz, H.

    2012-01-01

    This paper presents the aerodynamic analysis and database development for the first stage separation of the Ares I A106 Crew Launch Vehicle configuration. Separate databases were created for the first stage and the upper stage. Each database consists of three components: isolated (free-stream) coefficients, power-off proximity increments, and power-on proximity increments. The power-on database consists of three parts: all plumes firing at nominal conditions, one booster deceleration motor out, and one ullage settling motor out. The isolated and power-off incremental databases were developed using wind tunnel test data. The power-on proximity increments were developed using CFD solutions.
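
    The three-component structure described above implies that a usable coefficient is assembled by superposing proximity increments on the isolated free-stream value. A minimal sketch of that bookkeeping, with made-up numbers and placeholder names (this is not the actual database interface):

    ```python
    def total_coefficient(c_isolated, dc_proximity_power_off, dc_power_on=0.0):
        """Assemble a total aerodynamic coefficient from the three database
        components named in the abstract: the isolated (free-stream) value,
        the power-off proximity increment, and the power-on proximity
        increment."""
        return c_isolated + dc_proximity_power_off + dc_power_on

    # Example: a pitching-moment coefficient for the first stage during
    # separation (all numbers are made up for illustration).
    cm = total_coefficient(c_isolated=-0.012,
                           dc_proximity_power_off=0.004,
                           dc_power_on=0.0015)
    print(f"Cm = {cm:.4f}")
    ```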

  13. Development of Elevation and Relief Databases for ICESat-2/ATLAS Receiver Algorithms

    NASA Astrophysics Data System (ADS)

    Leigh, H. W.; Magruder, L. A.; Carabajal, C. C.; Saba, J. L.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.

    2013-12-01

    The Advanced Topographic Laser Altimeter System (ATLAS) is planned to launch onboard NASA's ICESat-2 spacecraft in 2016. ATLAS operates at a wavelength of 532 nm with a laser repeat rate of 10 kHz and 6 individual laser footprints. The satellite will be in a 500 km, 91-day repeat ground track orbit at an inclination of 92°. A set of onboard Receiver Algorithms has been developed to reduce the data volume and data rate to acceptable levels while still transmitting the relevant ranging data. The onboard algorithms limit the data volume by distinguishing between surface returns and background noise and selecting a small vertical region around the surface return to be included in telemetry. The algorithms make use of signal processing techniques, along with three databases, the Digital Elevation Model (DEM), the Digital Relief Map (DRM), and the Surface Reference Mask (SRM), to find the signal and determine the appropriate dynamic range of vertical data surrounding the surface for downlink. The DEM provides software-based range gating for ATLAS. This approach allows the algorithm to limit the surface signal search to the vertical region between minimum and maximum elevations provided by the DEM (plus some margin to account for uncertainties). The DEM is constructed in a nested, three-tiered grid to account for a hardware constraint limiting the maximum vertical range to 6 km. The DRM is used to select the vertical width of the telemetry band around the surface return. The DRM contains global values of relief calculated along 140 m and 700 m ground track segments consistent with a 92° orbit. The DRM must contain the maximum value of relief seen in any given area, but must be as close to truth as possible, as the DRM directly affects data volume. The SRM, which has been developed independently from the DEM and DRM, is used to set parameters within the algorithm and select telemetry bands for downlink. Both the DEM and DRM are constructed from publicly available digital elevation models. No elevation models currently exist that provide global coverage at a sufficient resolution, so several regional models have been mosaicked together to produce global databases. In locations where multiple data sets are available, evaluations have been made to determine the optimal source for the databases, primarily based on resolution and accuracy. Separate procedures for calculating relief were developed for high-latitude (>60°N/S) regions in order to take advantage of polar stereographic projections. An additional method for generating the databases was developed for use over Antarctica, such that high-resolution regional elevation models can be easily incorporated as they become available in the future. The SRM is used to facilitate DEM and DRM production by defining those regions that are ocean and sea ice. Ocean and sea ice elevation values are defined by the geoid, while relief is set to a constant value. Results presented will include the details of data source selection, the methodologies used to create the databases, and the final versions of both the DEM and DRM databases. Companion presentations by McGarry et al. and Carabajal et al. describe the ATLAS onboard Receiver Algorithms and the database verification, respectively.
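
    As a rough sketch of the DEM range-gating and DRM band-width logic described above, assuming hypothetical elevation lookups and an arbitrary margin term (these are not the flight parameters):

    ```python
    def telemetry_band(dem_min_m, dem_max_m, relief_m, margin_m=100.0):
        """Return the vertical search window (metres) for the onboard
        signal search: DEM min/max elevations padded by a margin, with
        the DRM relief value setting the width of the downlinked band.
        All values here are illustrative, not flight settings."""
        lo = dem_min_m - margin_m            # lower bound of signal search
        hi = dem_max_m + margin_m            # upper bound of signal search
        band_width = relief_m + 2 * margin_m # width of telemetered band
        return lo, hi, band_width

    lo, hi, width = telemetry_band(dem_min_m=120.0, dem_max_m=380.0,
                                   relief_m=35.0)
    print(f"search window: {lo} m to {hi} m; telemetry band width: {width} m")
    ```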

  14. Final report of APMP.QM-K46 ammonia in nitrogen at 30 μmol/mol level

    NASA Astrophysics Data System (ADS)

    Uehara, Shinji; Qiao, Han; Shimosaka, Takuya

    2017-01-01

    This report describes the results of a bilateral comparison of an ammonia-in-nitrogen gas mixture. The nominal amount-of-substance fraction was 30 μmol/mol. The main text of the report appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  15. Ontology-Oriented Programming for Biomedical Informatics.

    PubMed

    Lamy, Jean-Baptiste

    2016-01-01

    Ontologies are now widely used in the biomedical domain. However, it is difficult to manipulate ontologies in a computer program and, consequently, it is not easy to integrate ontologies with databases or websites. Two main approaches have been proposed for accessing ontologies in a computer program: traditional APIs (Application Programming Interfaces) and ontology-oriented programming, either static or dynamic. In this paper, we will review these approaches and discuss their appropriateness for biomedical ontologies. We will also present feedback from our experience integrating an ontology into a software application during the VIIIP research project. Finally, we will present OwlReady, the solution we developed.
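
    The 'dynamic' ontology-oriented programming approach contrasted with traditional APIs above is what OwlReady offers: ontology entities are exposed directly as Python objects. A minimal sketch using the later Owlready2 package, with a placeholder ontology path and an assumed class name:

    ```python
    # pip install owlready2
    from owlready2 import get_ontology

    # Load an ontology; the file path here is a placeholder.
    onto = get_ontology("file:///tmp/example.owl").load()

    # Ontology classes are exposed as Python classes (the dynamic
    # approach), so no hand-written mapping layer is needed.
    for cls in onto.classes():
        print(cls.name)

    # Attribute-style access to a class and its individuals, assuming
    # the ontology defines a class named 'Drug'.
    if onto.Drug is not None:
        for individual in onto.Drug.instances():
            print(individual.name)
    ```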

  16. FY09 Final Report for LDRD Project: Understanding Viral Quasispecies Evolution through Computation and Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, C

    2009-11-12

    In FY09 they will (1) complete the implementation, verification, calibration, and sensitivity and scalability analysis of the in-cell virus replication model; (2) complete the design of the cell culture (cell-to-cell infection) model; (3) continue the research, design, and development of their bioinformatics tools: the Web-based structure-alignment-based sequence variability tool and the functional annotation of the genome database; (4) collaborate with the University of California at San Francisco on areas of common interest; and (5) submit journal articles that describe the in-cell model with simulations and the bioinformatics approaches to evaluation of genome variability and fitness.

  17. Computerized decision support system for mass identification in breast using digital mammogram: a study on GA-based neuro-fuzzy approaches.

    PubMed

    Das, Arpita; Bhattacharya, Mahua

    2011-01-01

    In the present work, the authors have developed a treatment planning system implementing genetic-algorithm-based neuro-fuzzy approaches for accurate analysis of the shape and margin of tumor masses appearing in the breast in digital mammograms. A complicated classifier structure invites the problems of overlearning and misclassification. In the proposed methodology, a genetic algorithm (GA) has been used to search for effective input feature vectors, combined with an adaptive neuro-fuzzy model for the final classification of the different boundaries of tumor masses. The study involves 200 digitized mammograms from the MIAS and other databases and has shown an 86% correct classification rate.
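
    As a rough illustration of GA-driven feature-subset selection of the kind described above (the adaptive neuro-fuzzy classifier is replaced by a stub scoring function, and all GA settings are arbitrary):

    ```python
    import random

    random.seed(0)
    N_FEATURES = 20

    def fitness(mask, score_classifier):
        """Score a feature subset; in the paper's pipeline this would be
        the accuracy of the adaptive neuro-fuzzy classifier."""
        if not any(mask):
            return 0.0
        return score_classifier(mask)

    def ga_select(score_classifier, pop_size=30, generations=40,
                  cx_rate=0.8, mut_rate=0.02):
        pop = [[random.random() < 0.5 for _ in range(N_FEATURES)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=lambda m: fitness(m, score_classifier),
                            reverse=True)
            pop = scored[:pop_size // 2]              # truncation selection
            children = []
            while len(pop) + len(children) < pop_size:
                a, b = random.sample(pop, 2)
                cut = random.randrange(1, N_FEATURES)  # one-point crossover
                child = a[:cut] + b[cut:] if random.random() < cx_rate else a[:]
                children.append([not g if random.random() < mut_rate else g
                                 for g in child])      # bit-flip mutation
            pop += children
        return max(pop, key=lambda m: fitness(m, score_classifier))

    # Toy scoring function standing in for the neuro-fuzzy classifier.
    best = ga_select(lambda m: sum(m[:5]) - 0.1 * sum(m[5:]))
    print([i for i, used in enumerate(best) if used])
    ```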

  18. Advanced transportation system studies technical area 3: Alternate propulsion subsystem concepts, volume 2

    NASA Technical Reports Server (NTRS)

    Levak, Daniel

    1993-01-01

    The Alternate Propulsion Subsystem Concepts contract had five tasks defined for the first year. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, Space Shuttle Main Engine (SSME) Upper Stage Use, and CERs for Liquid Propellant Rocket Engines. The detailed study results, with the data to support the conclusions from various analyses, are being reported as a series of five separate Final Task Reports. Consequently, this volume only reports the required programmatic information concerning Computer Aided Design Documentation and New Technology Reports. A detailed Executive Summary, covering all the tasks, is also available as Volume 1.

  19. Data mining the Kansas traffic-crash database : final report.

    DOT National Transportation Integrated Search

    2009-08-01

    Traffic crashes result from the interaction of different parameters, which include highway geometrics, traffic characteristics and human factors. Geometric variables include number of lanes, lane width, median width, shoulder width, roadway section ...

  20. Geospatial Database for Strata Objects Based on Land Administration Domain Model (LADM)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in our country, the construction of buildings has become more complex, and a strata objects database has become more important for registering the real world, as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. This paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future, attempts to develop a strata objects database using the standard data model (LADM), and analyzes the developed strata objects database against that model. The current cadastre system in Malaysia, including strata titles, is discussed; the problems in the 2D geospatial database are listed; and the need for a future 3D geospatial database is examined. The processes for designing a strata objects database are conceptual, logical and physical database design. The strata objects database will allow us to find both non-spatial and spatial strata title information and thus show the location of a strata unit. This development of a strata objects database may help in handling strata titles and related information.
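
    A minimal sketch of the physical-design step for such a strata objects database, using SQLite with table and column names loosely inspired by LADM classes (LA_BAUnit, LA_SpatialUnit); the schema is illustrative, not the authors' actual design:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- Administrative (non-spatial) side: the basic administrative unit
    -- that a strata title is registered against.
    CREATE TABLE la_baunit (
        baunit_id   INTEGER PRIMARY KEY,
        title_no    TEXT NOT NULL,      -- strata title number
        owner_name  TEXT NOT NULL
    );
    -- Spatial side: a 3D strata unit (lot within a building level).
    CREATE TABLE la_spatialunit (
        su_id       INTEGER PRIMARY KEY,
        baunit_id   INTEGER REFERENCES la_baunit(baunit_id),
        level_no    INTEGER NOT NULL,   -- building storey
        geometry    TEXT NOT NULL       -- WKT placeholder for the solid
    );
    """)
    conn.execute("INSERT INTO la_baunit VALUES (1, 'HSD-12345', 'A. Owner')")
    conn.execute("INSERT INTO la_spatialunit VALUES (1, 1, 7, 'POLYGON((...))')")

    # Query linking non-spatial and spatial strata title information.
    for row in conn.execute("""
        SELECT b.title_no, b.owner_name, s.level_no
        FROM la_baunit b JOIN la_spatialunit s USING (baunit_id)"""):
        print(row)
    ```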

  1. 21SSD: a new public 21-cm EoR database

    NASA Astrophysics Data System (ADS)

    Eames, Evan; Semelin, Benoît

    2018-05-01

    With current efforts inching closer to detecting the 21-cm signal from the Epoch of Reionization (EoR), proper preparation will require publicly available simulated models of the various forms the signal could take. In this work we present a database of such models, available at 21ssd.obspm.fr. The models are created with a fully coupled radiative hydrodynamic simulation (LICORICE) at high resolution (1024³). We also begin to analyse and explore the possible 21-cm EoR signals (with power spectra and pixel distribution functions), and study the effects of thermal noise on our ability to recover the signal out to high redshifts. Finally, we begin to explore the concept of 'distance' between different models, which represents a crucial step towards optimising parameter space sampling, training neural networks, and finally extracting parameter values from observations.
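
    One of the analyses mentioned, the power spectrum of a simulated 21-cm brightness-temperature cube, can be sketched as follows. The spherical averaging below is a generic illustration, not the 21SSD pipeline, and the box size and binning are arbitrary:

    ```python
    import numpy as np

    def power_spectrum(cube, box_mpc, n_bins=20):
        """Spherically averaged power spectrum P(k) of a 3D field.
        Prefactors follow one common FFT convention; they vary by
        convention and are not critical for a sketch."""
        n = cube.shape[0]
        dk = np.fft.fftn(cube)
        power = np.abs(dk) ** 2 * (box_mpc / n) ** 3 / n ** 3
        k = 2 * np.pi * np.fft.fftfreq(n, d=box_mpc / n)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
        bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
        which = np.digitize(kmag, bins)
        pk = [power.ravel()[which == i].mean() for i in range(1, n_bins + 1)]
        kc = 0.5 * (bins[1:] + bins[:-1])
        return kc, np.array(pk)

    # Toy example on a random cube (stand-in for a 21-cm signal box).
    rng = np.random.default_rng(1)
    k, pk = power_spectrum(rng.normal(size=(64, 64, 64)), box_mpc=200.0)
    print(k[:3], pk[:3])
    ```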

  2. Prediction of risk of recurrence of venous thromboembolism following treatment for a first unprovoked venous thromboembolism: systematic review, prognostic model and clinical decision rule, and economic evaluation.

    PubMed

    Ensor, Joie; Riley, Richard D; Jowett, Sue; Monahan, Mark; Snell, Kym Ie; Bayliss, Susan; Moore, David; Fitzmaurice, David

    2016-02-01

    Unprovoked first venous thromboembolism (VTE) is defined as VTE in the absence of a temporary provoking factor such as surgery, immobility and other temporary factors. Recurrent VTE in unprovoked patients is highly prevalent, but easily preventable with oral anticoagulant (OAC) therapy. The unprovoked population is highly heterogeneous in terms of risk of recurrent VTE. The first aim of the project is to review existing prognostic models which stratify individuals by their recurrence risk, therefore potentially allowing tailored treatment strategies. The second aim is to enhance the existing research in this field, by developing and externally validating a new prognostic model for individual risk prediction, using a pooled database containing individual patient data (IPD) from several studies. The final aim is to assess the economic cost-effectiveness of the proposed prognostic model if it is used as a decision rule for resuming OAC therapy, compared with current standard treatment strategies. Standard systematic review methodology was used to identify relevant prognostic model development, validation and cost-effectiveness studies. Bibliographic databases (including MEDLINE, EMBASE and The Cochrane Library) were searched using terms relating to the clinical area and prognosis. Reviewing was undertaken by two reviewers independently using pre-defined criteria. Included full-text articles were data extracted and quality assessed. Critical appraisal of included full texts was undertaken and comparisons made of model performance. A prognostic model was developed using IPD from the pooled database of seven trials. A novel internal-external cross-validation (IECV) approach was used to develop and validate a prognostic model, with external validation undertaken in each of the trials iteratively. Given good performance in the IECV approach, a final model was developed using all trials data. A Markov patient-level simulation was used to consider the economic cost-effectiveness of using a decision rule (based on the prognostic model) to decide on resumption of OAC therapy (or not). Three full-text articles were identified by the systematic review. Critical appraisal identified methodological and applicability issues; in particular, all three existing models did not have external validation. To address this, new prognostic models were sought with external validation. Two potential models were considered: one for use at cessation of therapy (pre D-dimer), and one for use after cessation of therapy (post D-dimer). Model performance measured in the external validation trials showed strong calibration performance for both models. The post D-dimer model performed substantially better in terms of discrimination (c = 0.69), better separating high- and low-risk patients. The economic evaluation identified that a decision rule based on the final post D-dimer model may be cost-effective for patients with predicted risk of recurrence of over 8% annually; this suggests continued therapy for patients with predicted risks ≥ 8% and cessation of therapy otherwise. The post D-dimer model performed strongly and could be useful to predict individuals' risk of recurrence at any time up to 2-3 years, thereby aiding patient counselling and treatment decisions. A decision rule using this model may be cost-effective for informing clinical judgement and patient opinion in treatment decisions. 
Further research may investigate new predictors to enhance model performance and aim to further externally validate to confirm performance in new, non-trial populations. Finally, it is essential that further research is conducted to develop a model predicting bleeding risk on therapy, to manage the balance between the risks of recurrence and bleeding. This study is registered as PROSPERO CRD42013003494. The National Institute for Health Research Health Technology Assessment programme.
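
    The internal-external cross-validation (IECV) approach described above iteratively holds out one trial for external validation while fitting on the remaining trials, then refits on all data if performance is adequate. A minimal sketch with scikit-learn, using logistic regression as a stand-in for the actual prognostic model:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def iecv(X, y, trial_ids):
        """Internal-external cross-validation: hold out each trial in
        turn for external validation, fitting on the remaining trials."""
        aucs = {}
        for trial in np.unique(trial_ids):
            test = trial_ids == trial
            model = LogisticRegression(max_iter=1000).fit(X[~test], y[~test])
            p = model.predict_proba(X[test])[:, 1]
            aucs[trial] = roc_auc_score(y[test], p)  # c-statistic per trial
        # If performance is adequate across trials, refit on all data.
        final_model = LogisticRegression(max_iter=1000).fit(X, y)
        return aucs, final_model

    # Toy data standing in for the pooled IPD from seven trials.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(700, 3))
    y = (X[:, 0] + rng.normal(size=700) > 0).astype(int)
    trials = np.repeat(np.arange(7), 100)
    aucs, model = iecv(X, y, trials)
    print(aucs)
    ```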

  3. Initiation of a Database of CEUS Ground Motions for NGA East

    NASA Astrophysics Data System (ADS)

    Cramer, C. H.

    2007-12-01

    The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground-motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
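
    A minimal sketch of the relational design and flat-file extraction described above, with illustrative column choices (the actual CEUS schema fields are not reproduced here):

    ```python
    import csv
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE earthquake (eq_id INTEGER PRIMARY KEY, mag REAL,
                             stress_drop REAL);
    CREATE TABLE station   (sta_id INTEGER PRIMARY KEY, vs30 REAL,
                             kappa REAL);
    CREATE TABLE record    (rec_id INTEGER PRIMARY KEY,
                            eq_id INTEGER REFERENCES earthquake(eq_id),
                            sta_id INTEGER REFERENCES station(sta_id),
                            pga_g REAL);
    """)
    conn.execute("INSERT INTO earthquake VALUES (1, 4.5, 120.0)")
    conn.execute("INSERT INTO station VALUES (1, 760.0, 0.006)")
    conn.execute("INSERT INTO record VALUES (1, 1, 1, 0.031)")

    # Extract the 'flat file' of key fields for attenuation-relation work.
    rows = conn.execute("""
        SELECT e.mag, e.stress_drop, s.vs30, s.kappa, r.pga_g
        FROM record r
        JOIN earthquake e USING (eq_id)
        JOIN station   s USING (sta_id)""").fetchall()
    with open("ceus_flatfile.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["mag", "stress_drop", "vs30", "kappa", "pga_g"])
        w.writerows(rows)
    ```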

  4. Network-based statistical comparison of citation topology of bibliographic databases

    PubMed Central

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies. PMID:25263231

  5. Field results from a new die-to-database reticle inspection platform

    NASA Astrophysics Data System (ADS)

    Broadbent, William; Yokoyama, Ichiro; Yu, Paul; Seki, Kazunori; Nomura, Ryohei; Schmalfuss, Heiko; Heumann, Jan; Sier, Jean-Paul

    2007-05-01

    A new die-to-database high-resolution reticle defect inspection platform, TeraScanHR, has been developed for advanced production use at the 45nm logic node, and is extendable for development use at the 32nm node (and the comparable memory nodes). These nodes will use predominantly ArF immersion lithography, although EUV may also be used. According to recent surveys, the predominant reticle types for the 45nm node are 6% simple tri-tone and COG. Other advanced reticle types may also be used for these nodes, including dark field alternating, Mask Enhancer, complex tri-tone, high transmission, CPL, etc. Finally, aggressive model-based OPC will typically be used, which will include many small structures such as jogs, serifs, and SRAF (sub-resolution assist features), with accompanying very small gaps between adjacent structures. The current generation of inspection systems is inadequate to meet these requirements. The architecture and performance of the new TeraScanHR reticle inspection platform are described. This new platform is designed to inspect the aforementioned reticle types in die-to-database and die-to-die modes using both transmitted and reflected illumination. Recent results from field testing at two of the three beta sites are shown (Toppan Printing in Japan and the Advanced Mask Technology Center in Germany). The results include applicable programmed defect test reticles and advanced 45nm product reticles (and comparable memory reticles). The results show high sensitivity and low false detections being achieved. The platform can also be configured for the current 65nm, 90nm, and 130nm nodes.

  6. Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Fraser, C. S.; Lu, G.

    2015-08-01

    Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid under-segmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. For demolished or extended building parts, however, a connected component analysis algorithm is applied, and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to apply the detected changes to the existing building map. Experimental results show that the improved building detection technique offers not only higher performance in terms of completeness and correctness, but also a lower number of under-segmentation errors compared to its original counterpart. The proposed change detection technique produces no omission errors and thus can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
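
    A minimal sketch of the connected-component step described above, using scipy.ndimage to label a binary change mask and filter components by area (the cell size and threshold are placeholders, and the width/height tests are omitted):

    ```python
    import numpy as np
    from scipy import ndimage

    def changed_parts(change_mask, cell_size_m=0.5, min_area_m2=10.0):
        """Label connected components in a binary change mask and keep
        those large enough to count as a demolished or new building part."""
        labels, n = ndimage.label(change_mask)
        keep = []
        for i in range(1, n + 1):
            area = (labels == i).sum() * cell_size_m ** 2
            if area >= min_area_m2:         # area test from the paper;
                keep.append((i, area))      # width/height tests omitted
        return labels, keep

    # Toy mask standing in for a LIDAR-derived building change mask.
    mask = np.zeros((100, 100), dtype=bool)
    mask[10:30, 10:30] = True    # large change: kept
    mask[50:52, 50:52] = True    # tiny change: rejected as noise
    labels, parts = changed_parts(mask)
    print(parts)
    ```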

  7. Drug-Path: a database for drug-induced pathways

    PubMed Central

    Zeng, Hui; Cui, Qinghua

    2015-01-01

    Some databases for drug-associated pathways have been built and are publicly available. However, the pathways curated in most of these databases are drug-action or drug-metabolism pathways. In recent years, high-throughput technologies such as microarray and RNA-sequencing have produced large numbers of drug-induced gene expression profiles. Interestingly, drug-induced gene expression profiles frequently show distinct patterns, indicating that drugs normally induce the activation or repression of distinct pathways. These pathways therefore help in studying the mechanisms of drugs and in drug repurposing. Here, we present Drug-Path, a database of drug-induced pathways, which was generated by KEGG pathway enrichment analysis of drug-induced upregulated and downregulated genes, based on the drug-induced gene expression datasets in the Connectivity Map. Drug-Path provides user-friendly interfaces to retrieve, visualize and download the drug-induced pathway data in the database. In addition, the genes deregulated by a given drug are highlighted in the pathways. All data were organized using SQLite. The web site was implemented using Django, a Python web framework. We believe this database will be useful for related research. Database URL: http://www.cuilab.cn/drugpath PMID:26130661
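
    KEGG pathway enrichment of the kind described above is commonly implemented as a one-sided hypergeometric test per pathway; the abstract does not specify the exact test used, so the sketch below is a generic illustration with invented gene counts:

    ```python
    from scipy.stats import hypergeom

    def enrichment_p(n_genome, n_pathway, n_deregulated, n_overlap):
        """P-value that at least n_overlap of the drug-deregulated genes
        fall in a pathway of size n_pathway, given n_genome genes total."""
        return hypergeom.sf(n_overlap - 1, n_genome, n_pathway, n_deregulated)

    # Toy numbers: 20000 genes, a 150-gene pathway, 300 upregulated
    # genes, 12 of which are in the pathway.
    p = enrichment_p(20000, 150, 300, 12)
    print(f"enrichment p-value: {p:.2e}")
    ```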

  8. Fullerene data mining using bibliometrics and database tomography

    PubMed

    Kostoff; Braun; Schubert; Toothman; Humenik

    2000-01-01

    Database tomography (DT) is a textual database analysis system consisting of two major components: (1) algorithms for extracting multiword phrase frequencies and phrase proximities (physical closeness of the multiword technical phrases) from any type of large textual database, to augment (2) the interpretative capabilities of the expert human analyst. DT was used to derive technical intelligence from a fullerenes database derived from the Science Citation Index and the Engineering Compendex. Phrase frequency analysis by the technical domain experts provided the pervasive technical themes of the fullerenes database, and phrase proximity analysis provided the relationships among the pervasive technical themes. Bibliometric analysis of the fullerenes literature supplemented the DT results with author/journal/institution publication and citation data. Comparisons of the fullerenes results with past analyses of similarly structured near-earth space, chemistry, hypersonic/supersonic flow, aircraft, and ship hydrodynamics databases are made. One important finding is that many of the normalized bibliometric distribution functions are extremely consistent across these diverse technical domains and could reasonably be expected to apply to chemical topics broader than fullerenes, spanning multiple structural classes. Finally, lessons learned about integrating the technical domain experts with the data mining tools are presented.
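
    A toy illustration of the two DT components named above, multiword phrase frequencies and phrase proximities, using bigrams and a fixed co-occurrence window (both simplifications of the actual system):

    ```python
    from collections import Counter
    from itertools import combinations

    docs = ["carbon nanotube growth from fullerene precursors",
            "fullerene chemistry and carbon nanotube applications"]

    def bigrams(tokens):
        return list(zip(tokens, tokens[1:]))

    # (1) Multiword phrase frequencies (here: bigrams).
    freq = Counter()
    for d in docs:
        freq.update(bigrams(d.split()))
    print(freq.most_common(3))

    # (2) Phrase proximity: co-occurrence of terms within a window.
    def proximity(tokens, window=4):
        pairs = Counter()
        for i, j in combinations(range(len(tokens)), 2):
            if 0 < j - i <= window:
                pairs[(tokens[i], tokens[j])] += 1
        return pairs

    for d in docs:
        print(proximity(d.split()).most_common(2))
    ```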

  9. What can we learn from national-scale geodata describing soil erosion?

    NASA Astrophysics Data System (ADS)

    Benaud, Pia; Anderson, Karen; Carvalho, Jason; Evans, Martin; Glendell, Miriam; James, Mike; Lark, Murray; Quine, Timothy; Quinton, John; Rawlins, Barry; Rickson, Jane; Truckell, Ian; Brazier, Richard

    2017-04-01

    The United Kingdom has a rich dataset of soil erosion observations, collected using a wide range of methodologies across various spatial and temporal scales. Yet, while observations of soil erosion have been carried out alongside agricultural development and intensification, understanding whether or not the UK has a soil erosion problem remains an open question. Furthermore, although good reviews of existing soil erosion rates exist, there is no single resource that brings all of this work together. Therefore, the primary aim of this research was to build a picture of why attempts to quantify erosion rates across the UK empirically have fallen short, through: (1) collating all available, UK-based and empirically-derived soil erosion datasets into a spatially explicit and open-access database, (2) developing an understanding of observed magnitudes of erosion in the UK, (3) evaluating the impact of non-environmental controls on erosion observations, i.e. study methodologies, and (4) exploring trends between environmental controls and erosion rates. To date, the database holds over 1500 records, which include results from both experimental and natural conditions across arable, grassland and upland environments. Of the studies contained in the database, erosion has been observed in ca. 40% of instances, ranging from <0.01 t ha⁻¹ yr⁻¹ to 143 t ha⁻¹ yr⁻¹. However, preliminary analysis has highlighted that over 90% of the studies included in the database only quantify soil loss via visible erosion features, such as rills or gullies, through volumetric assessments. Furthermore, there has been an inherent bias in the UK towards quantifying soil erosion in locations with either a known history or a high probability of erosion occurrence. As a consequence, we conclude that such databases may not be used to make a statistically unbiased assessment of national-scale erosion rates; however, they can highlight maximum likely rates under a wide range of soil, topography and land use conditions. Finally, this work suggests there is a strong argument for a replicable and statistically robust national soil erosion monitoring programme, carried out alongside the proposed sustainable intensification of agriculture.

  10. Kinetic modeling of cell metabolism for microbial production.

    PubMed

    Costa, Rafael S; Hartmann, Andras; Vinga, Susana

    2016-02-10

    Kinetic models of cellular metabolism are important tools for the rational design of metabolic engineering strategies and for explaining properties of complex biological systems. The recent developments in high-throughput experimental data are leading to new computational approaches for building kinetic models of metabolism. Herein, we briefly survey the available databases, standards and software tools that can be applied to kinetic models of metabolism. In addition, we give an overview of recently developed ordinary differential equation (ODE)-based kinetic models of metabolism and illustrate some of their main applications in guiding metabolic engineering design. Finally, we review the emerging kinetic modeling approaches for large-scale networks, discussing their main advantages, challenges and limitations.
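
    A minimal example of the ODE-based kinetic models surveyed: a single Michaelis-Menten reaction integrated with scipy, with arbitrary parameter values:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # dS/dt = -Vmax * S / (Km + S); dP/dt = +Vmax * S / (Km + S)
    VMAX, KM = 1.0, 0.5   # arbitrary kinetic parameters

    def rhs(t, y):
        s, p = y
        v = VMAX * s / (KM + s)   # Michaelis-Menten rate law
        return [-v, v]

    sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[2.0, 0.0],
                    t_eval=np.linspace(0, 10, 6))
    for t, s, p in zip(sol.t, sol.y[0], sol.y[1]):
        print(f"t={t:4.1f}  S={s:.3f}  P={p:.3f}")
    ```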

  11. The design and implementation of urban earthquake disaster loss evaluation and emergency response decision support systems based on GIS

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Xu, Quan-li; Peng, Shuang-yun; Cao, Yan-bo

    2008-10-01

    Based on a necessity analysis of GIS applications in earthquake disaster prevention, this paper discusses in depth the spatial integration of urban earthquake disaster loss evaluation models and visualization technologies, using network development methods such as COM/DCOM, ActiveX and ASP, as well as spatial database development methods such as OO4O and ArcSDE based on the ArcGIS software packages. Meanwhile, according to software engineering principles, a solution for urban earthquake emergency response decision support systems based on GIS technologies is also proposed, which includes the system's logical structure, the technical routes, the system realization methods, the function structures, etc. Finally, the test system's user interfaces are also presented in the paper.

  12. Vibrating-Wire, Supercooled Liquid Water Content Sensor Calibration and Characterization Progress

    NASA Technical Reports Server (NTRS)

    King, Michael C.; Bognar, John A.; Guest, Daniel; Bunt, Fred

    2016-01-01

    NASA conducted a winter 2015 field campaign using weather balloons at the NASA Glenn Research Center to generate a validation database for the NASA Icing Remote Sensing System. The weather balloons carried a specialized, disposable, vibrating-wire sensor to determine supercooled liquid water content aloft. Significant progress has been made to calibrate and characterize these sensors. Calibration testing of the vibrating-wire sensors was carried out in a specially developed, low-speed, icing wind tunnel, and the results were analyzed. The sensor ice accretion behavior was also documented and analyzed. Finally, post-campaign evaluation of the balloon soundings revealed a gradual drift in the sensor data with increasing altitude. This behavior was analyzed and a method to correct for the drift in the data was developed.

  13. [The language disorders in schizophrenia in neurolinguistic and psycholinguistic perspectives].

    PubMed

    Piovan, Cristiano

    2012-01-01

    Descriptive psychopathology has classically equated language with the formal aspects of thought. Recent developments in experimental and clinical research have emphasized the study of language as a specific communicative ability. Within the framework of cognitive neuropsychology, the development of innovative research models, such as those based on mentalizing ability, has made it possible to formulate new hypotheses on the pathogenetic aspects of schizophrenia. Furthermore, mentalizing ability appears to be a basic skill for the pragmatic dimension of language. The author, after a brief description of the methods of investigation of neurolinguistics and psycholinguistics, presents a review of recent studies obtained by consulting the PubMed and PsycINFO databases. Finally, he focuses on the relationship between research findings and issues related to clinical practice.

  14. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection.

    PubMed

    Zhuang, Xiahai; Bai, Wenjia; Song, Jingjing; Zhan, Songhua; Qian, Xiaohua; Shi, Wenzhe; Lian, Yanyun; Rueckert, Daniel

    2015-07-01

    Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors' proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional schemes (p < 0.03). In the atlas database study, the authors showed that the MAS using larger atlas databases generated better performance curves than the MAS using smaller ones, indicating larger atlas databases could produce more accurate segmentation. The authors have developed a new MAS framework for automatic WHS of CTA and investigated alternative implementations of MAS. With the proposed atlas ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentation within practically acceptable computation time. This method can be useful for the development of new clinical applications of cardiac CT.
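
    The proposed ranking criterion scores each atlas by the conditional entropy of the target image given the propagated atlas labelling, H(target | labels). A discrete sketch with numpy, using simple intensity binning (a simplification of the authors' implementation):

    ```python
    import numpy as np

    def conditional_entropy(target, labels, n_bins=32):
        """H(target | labels) in bits: lower values indicate the atlas
        labelling explains the target intensities better (better atlas)."""
        t = np.digitize(target.ravel(),
                        np.histogram_bin_edges(target, bins=n_bins))
        l = labels.ravel()
        joint = np.zeros((l.max() + 1, n_bins + 2))
        for li, ti in zip(l, t):
            joint[li, ti] += 1
        joint /= joint.sum()                       # joint p(label, bin)
        p_l = joint.sum(axis=1, keepdims=True)     # marginal p(label)
        nz = joint > 0
        p_l_full = p_l.repeat(joint.shape[1], 1)   # broadcast to joint shape
        return -(joint[nz] * np.log2(joint[nz] / p_l_full[nz])).sum()

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 3, size=(16, 16, 16))
    target = labels * 50 + rng.normal(0, 5, size=labels.shape)  # well-explained
    print(conditional_entropy(target, labels))
    ```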

  15. Prospective drug safety monitoring using the UK primary-care General Practice Research Database: theoretical framework, feasibility analysis and extrapolation to future scenarios.

    PubMed

    Johansson, Saga; Wallander, Mari-Ann; de Abajo, Francisco J; García Rodríguez, Luis Alberto

    2010-03-01

    Post-launch drug safety monitoring is essential for the detection of adverse drug signals that may be missed during preclinical trials. Traditional methods of postmarketing surveillance such as spontaneous reporting have intrinsic limitations, many of which can be overcome by the additional application of structured pharmacoepidemiological approaches. However, further improvement in drug safety monitoring requires a shift towards more proactive pharmacoepidemiological methods that can detect adverse drug signals as they occur in the population. To assess the feasibility of using proactive monitoring of an electronic medical record system, in combination with an independent endpoint adjudication committee, to detect adverse events among users of selected drugs. UK General Practice Research Database (GPRD) information was used to detect acute liver disorder associated with the use of amoxicillin/clavulanic acid (hepatotoxic) or low-dose aspirin (acetylsalicylic acid [non-hepatotoxic]). Individuals newly prescribed these drugs between 1 October 2005 and 31 March 2006 were identified. Acute liver disorder cases were assessed using GPRD computer records in combination with case validation by an independent endpoint adjudication committee. Signal generation thresholds were based on the background rate of acute liver disorder in the general population. Over a 6-month period, 8148 patients newly prescribed amoxicillin/clavulanic acid and 5577 patients newly prescribed low-dose aspirin were identified. Within this cohort, searches identified 11 potential liver disorder cases from computerized records: six for amoxicillin/clavulanic acid and five for low-dose aspirin. The independent endpoint adjudication committee refined this to four potential acute liver disorder cases for whom paper-based information was requested for final case assessment. Final case assessments confirmed no cases of acute liver disorder. The time taken for this study was 18 months (6 months for recruitment and 12 months for data management and case validation). To reach the estimated target exposure necessary to raise or rule out a signal of concern to public health, we determined that a recruitment period 2-3 times longer than that used in this study would be required. Based on the real market uptake of six commonly used medicinal products launched between 2001 and 2006 in the UK (budesonide/eformoterol [fixed-dose combination], duloxetine, ezetimibe, metformin/rosiglitazone [fixed-dose combination], tiotropium bromide and tadalafil) the target exposure would not have been reached until the fifth year of marketing using a single database. It is feasible to set up a system that actively monitors drug safety using a healthcare database and an independent endpoint adjudication committee. However, future successful implementation will require multiple databases to be queried so that larger study populations are included. This requires further development and harmonization of international healthcare databases.
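
    A minimal sketch of a background-rate-based signal threshold of the kind described, framed as a one-sided Poisson test; the rate and exposure numbers are invented, and this is not the study's actual threshold rule:

    ```python
    from scipy.stats import poisson

    def signal_p(observed_cases, background_rate_per_py, person_years):
        """P(X >= observed) under the background incidence rate; a small
        p-value flags a potential adverse-event signal."""
        expected = background_rate_per_py * person_years
        return poisson.sf(observed_cases - 1, expected)

    # Toy numbers: background of 1 case per 100,000 person-years, 4,000
    # person-years of exposure, 2 observed confirmed cases.
    print(f"p = {signal_p(2, 1e-5, 4000):.3e}")
    ```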

  16. SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.

    2014-12-01

    Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, significant maintenance and tuning requirements --- these idiosyncrasies often make databases unattractive in science contexts characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt them for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger scale data and complex analytics, and supports multiple back end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.

  17. Nevada low-temperature geothermal resource assessment: 1994. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garside, L.J.

    Data compilation for the low-temperature program is being done by State Teams in two western states. Final products of the study include: a geothermal database, in hardcopy and as digital data (diskette), listing information on all known low- and moderate-temperature springs and wells in Nevada; a 1:1,000,000-scale map displaying these geothermal localities; and a bibliography of references on Nevada geothermal resources.

  18. MR 201424 Final Report Addendum

    DTIC Science & Technology

    2016-09-01

    Munitions Classification Library, ESTCP Project MR-201424, Final Report Addendum, September 2016. Craig Murray and Dr. Nagi Khadr, Parsons. Dr. ... solver and multi-solver library databases, and only the TEMTADS 2X2 and the MetalMapper advanced TEM systems are supported by UX-Analyze; data on ... other steps (section 3.4) before getting into the data collection activities (sections 3.5-3.7). All inversions of library-quality data collected over

  19. IRIS Toxicological Review of Vinyl Chloride (Final Report ...

    EPA Pesticide Factsheets

    EPA is announcing the release of the final report, Toxicological Review of Vinyl Chloride: in support of the Integrated Risk Information System (IRIS). The updated Summary for Vinyl Chloride and accompanying Quickview have also been added to the IRIS Database. Common synonyms of vinyl chloride (VC) include chloroethene, chloroethylene, ethylene monochloride, and monochloroethene. VC is a synthetic chemical used as a chemical intermediate in the polymerization of polyvinyl chloride.

  20. Evidence-based practice guideline of Chinese herbal medicine for primary open-angle glaucoma (qingfeng-neizhang)

    PubMed Central

    Yang, Yingxin; Ma, Qiu-yan; Yang, Yue; He, Yu-peng; Ma, Chao-ting; Li, Qiang; Jin, Ming; Chen, Wei

    2018-01-01

    Background: Primary open-angle glaucoma (POAG) is a chronic, progressive optic neuropathy. The aim was to develop an evidence-based clinical practice guideline of Chinese herbal medicine (CHM) for POAG, with a focus on Chinese medicine pattern differentiation and treatment as well as approved herbal proprietary medicines. Methods: The guideline development group covered a range of expertise in both content and methods. The authors searched electronic databases, including CNKI, VIP, Sino-Med, Wanfang Data, PubMed, the Cochrane Library and EMBASE, and checked China State Food and Drug Administration (SFDA) records, from the inception of these databases to June 30, 2015. Systematic reviews and randomized controlled trials of Chinese herbal medicine treating adults with POAG were evaluated. The risk of bias tool in the Cochrane Handbook and the evidence strength framework developed by the GRADE group were applied for the evaluation, and recommendations were based on the findings incorporating evidence strength. After several rounds of expert consensus, the final guideline was endorsed by the relevant professional committees. Results: CHM treatment principles and formulae based on pattern differentiation, together with approved patent herbal medicines, are the main treatments for POAG, and diagnosis and treatment focusing on blood-related patterns is the major domain. Conclusion: CHM therapy alone or combined with other conventional treatments, as reported in clinical studies and supported by expert consensus, is recommended for clinical practice. PMID:29595636

  1. The Set of Fear Inducing Pictures (SFIP): Development and validation in fearful and nonfearful individuals.

    PubMed

    Michałowski, Jarosław M; Droździel, Dawid; Matuszewski, Jacek; Koziejowski, Wojtek; Jednoróg, Katarzyna; Marchewka, Artur

    2017-08-01

    Emotionally charged pictorial materials are frequently used in phobia research, but no existing standardized picture database is dedicated to the study of different phobias. The present work describes the results of two independent studies through which we sought to develop and validate this type of database: a Set of Fear Inducing Pictures (SFIP). In Study 1, 270 fear-relevant and 130 neutral stimuli were rated for fear, arousal, and valence by four groups of participants: small-animal-fearful (N = 34), blood/injection-fearful (N = 26), social-fearful (N = 35), and nonfearful participants (N = 22). The results from Study 1 were employed to develop the final version of the SFIP, which includes fear-relevant images of social exposure (N = 40), blood/injection (N = 80), spiders/bugs (N = 80), and angry faces (N = 30), as well as 726 neutral photographs. In Study 2, we aimed to validate the SFIP in a sample of spider-fearful, blood/injection-fearful, social-fearful, and control individuals (N = 66). The fear-relevant images were rated as more unpleasant and led to greater fear and arousal in fearful than in nonfearful individuals. The fear images differentiated between the three fear groups in the expected directions. Overall, the present findings provide evidence for the high validity of the SFIP and confirm that the set may be successfully used in phobia research.

  2. Lessons Learned From Developing Reactor Pressure Vessel Steel Embrittlement Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John

    Material behavior caused by neutron irradiation under fission and/or fusion environments can be little understood without practical examination. An easily accessible material information system with a large material database, running on capable computers, is necessary for the design of nuclear materials and for analyses or simulations of the relevant phenomena. The Embrittlement Data Base (EDB) developed at ORNL is such a comprehensive collection of data. The EDB contains power reactor pressure vessel surveillance data, material test reactor data, foreign reactor data (through bilateral agreements authorized by the NRC), and fracture toughness data. The lessons learned from building the EDB program and the associated database management activity, regarding Material Database Design Methodology, Architecture and the Embedded QA Protocol, are described in this report. The development of the IAEA International Database on Reactor Pressure Vessel Materials (IDRPVM) and a comparison of the EDB and IAEA IDRPVM databases are provided in the report. The recommended database QA protocol and database infrastructure are also stated in the report.

  3. Efficient Privacy-Enhancing Techniques for Medical Databases

    NASA Astrophysics Data System (ADS)

    Schartner, Peter; Schaffer, Martin

    In this paper, we introduce an alternative to using linkable unique health identifiers: locally generated, system-wide unique digital pseudonyms. The presented techniques are based on a novel technique called collision-free number generation, which is discussed in the introductory part of the article. Afterwards, attention is paid to two specific variants of collision-free number generation: one based on the RSA problem and the other based on the Elliptic Curve Discrete Logarithm Problem. Finally, two applications are sketched: centralized medical records and anonymous medical databases.
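
    The RSA variant of collision-free number generation relies on RSA encryption being a bijection on Z_n, so locally unique inputs can never collide after encryption. A toy illustration with insecure parameters (a real deployment would use proper key sizes and the input structuring the paper defines):

    ```python
    # Toy RSA permutation: encryption is a bijection on Z_n, so distinct
    # (locally unique) inputs can never collide after encryption.
    # Insecure demo parameters; a real system would use >= 2048-bit keys.
    p, q, e = 61, 53, 17
    n = p * q                      # 3233
    phi = (p - 1) * (q - 1)
    assert (e * pow(e, -1, phi)) % phi == 1  # e is invertible mod phi

    def pseudonym(local_id: int) -> int:
        """Map a locally unique ID to a system-wide unique pseudonym."""
        assert 0 <= local_id < n
        return pow(local_id, e, n)  # RSA permutation of Z_n

    ids = range(100)                # locally unique patient IDs
    ps = [pseudonym(i) for i in ids]
    assert len(set(ps)) == len(ps)  # no collisions, by bijectivity
    print(ps[:5])
    ```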

  4. Global ISR: Toward a Comprehensive Defense Against Unauthorized Code Execution

    DTIC Science & Technology

    2010-10-01

    implementation using two of the most popular open-source servers: the Apache web server and the MySQL database server. For Apache, we measure the effect that ... utility ab. [Fig. 3: total execution time (sec), from 0 to 3000, for the Native, Null, ISR, and ISR-MP configurations.] The MySQL test-insert benchmark measures ... various SQL operations. The figure draws total execution time as reported by the benchmark utility. Finally, we benchmarked a MySQL database server using

  5. Development of models and software for liquidus temperatures of glasses of HWVP products. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrma, P.R.; Vienna, J.D.; Pelton, A.D.

    An earlier report [92 Pel] described the development of software and thermodynamic databases for the calculation of liquidus temperatures of glasses of HWVP products containing the components SiO2-B2O3-Na2O-Li2O-CaO-MgO-Fe2O3-Al2O3-ZrO2-"others". The software package developed at that time consisted of the EQUILIB program of the F*A*C*T computer system with special input/output routines. Since then, Battelle has purchased the entire F*A*C*T computer system, and this fully replaces the earlier package. Furthermore, with the entire F*A*C*T system, additional calculations can be performed, such as calculations at fixed O2, SO2, etc. pressures, or graphing of output. Furthermore, the public F*A*C*T database of over 5000 gaseous species and condensed phases is now accessible. The private databases for the glass and crystalline phases were developed for Battelle by optimization of thermodynamic and phase diagram data. That is, all available data for 2- and 3-component sub-systems of the 9-component oxide system were collected, and parameters of model equations for the thermodynamic properties were found which best reproduce all the data. For representing the thermodynamic properties of the glass as a function of composition and temperature, the modified quasichemical model was used. This model was described in the earlier report [92 Pel] along with all the optimizations. With the model, it was possible to predict the thermodynamic properties of the 9-component glass, and thereby to calculate liquidus temperatures. Liquidus temperatures measured by Battelle for 123 CVS glass compositions were used to test the model and to refine the model by the addition of further parameters.

  6. Development of a personalized training system using the Lung Image Database Consortium and Image Database resource Initiative Database.

    PubMed

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database. Collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed for a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. It provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. Developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
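
    As a rough illustration of the CBCF idea (fill the unobserved entries of the trainee-by-case difficulty matrix with a content-based estimate, then apply a collaborative step to the densified matrix), the following sketch may help; the ratings, case features, and weighting are invented and do not reproduce the paper's exact model.

        import numpy as np

        # Sketch of content-boosted collaborative filtering (CBCF): unobserved
        # entries of the trainee x case difficulty matrix are first filled with a
        # content-based estimate (cosine similarity of case feature vectors),
        # after which a user-based collaborative step predicts the target entry.
        # All numbers are invented for illustration.

        ratings = np.array([[3., 0., 4.],      # 0 marks "case not yet attempted"
                            [2., 5., 0.],
                            [0., 4., 5.]])
        case_features = np.array([[1.0, 0.0],  # e.g. nodule size, subtlety
                                  [0.0, 1.0],
                                  [0.5, 1.0]])

        def content_fill(R, F):
            norms = np.linalg.norm(F, axis=1)
            sim = F @ F.T / np.outer(norms, norms)   # cosine similarity of cases
            filled = R.copy()
            for u in range(R.shape[0]):
                seen = R[u] > 0
                for i in np.where(~seen)[0]:
                    w = sim[i, seen]
                    filled[u, i] = w @ R[u, seen] / w.sum()
            return filled

        def predict(R, user, item):
            sims = np.array([np.corrcoef(R[user], R[v])[0, 1] for v in range(len(R))])
            sims[user] = 0.0
            sims = np.clip(sims, 0.0, None)          # keep positively similar peers
            return sims @ R[:, item] / sims.sum()

        dense = content_fill(ratings, case_features)
        print(round(predict(dense, user=0, item=1), 2))  # predicted difficulty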

  7. A UML Profile for Developing Databases that Conform to the Third Manifesto

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki

    The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM. Developers need tools that support the development of databases using these database management systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles of SQL database design. After that, we extended and improved the profile. We implemented the profile by using the UML CASE system StarUML™. We present an example of using the new profile. In addition, we describe problems that occurred during the profile development.

  8. Effective Use of Java Data Objects in Developing Database Applications; Advantages and Disadvantages

    DTIC Science & Technology

    2004-06-01

    DATA OBJECTS IN DEVELOPING DATABASE APPLICATIONS. ADVANTAGES AND DISADVANTAGES. Paschalis Zilidis, June 2004. Thesis Advisor: Thomas...[report documentation form fields omitted]...database for the backend datastore. The major disadvantage of this approach is the well-known "impedance mismatch" in which some form of mapping is

  9. Rollover Data Special Study : Final Report.

    DOT National Transportation Integrated Search

    2011-01-31

    This report summarizes research results from the Rollover Data Special Study (RODSS) project. The research encompassed the : design of a RODSS database for the National Highway Traffic Safety Administration, review of the RODSS data to evaluate the :...

  10. 2000-2001 California statewide household travel survey. Final report

    DOT National Transportation Integrated Search

    2002-06-01

    The California Department of Transportation (Caltrans) maintains a statewide database of household socioeconomic and travel information, which is used in regional and statewide travel demand forecasting. The 2000-2001 California Statewide Household T...

  11. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in GIS and in a Database Management System (DBMS), have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology used in designing this DBMS is applicable to developing GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.

  12. A global approach to analysis and interpretation of metabolic data for plant natural product discovery.

    PubMed

    Hur, Manhoi; Campbell, Alexis Ann; Almeida-de-Macedo, Marcia; Li, Ling; Ransom, Nick; Jose, Adarsh; Crispin, Matt; Nikolau, Basil J; Wurtele, Eve Syrkin

    2013-04-01

    Discovering molecular components and their functionality is key to the development of hypotheses concerning the organization and regulation of metabolic networks. The iterative experimental testing of such hypotheses is the trajectory that can ultimately enable accurate computational modelling and prediction of metabolic outcomes. This information can be particularly important for understanding the biology of natural products, whose metabolism itself is often only poorly defined. Here, we describe factors that must be in place to optimize the use of metabolomics in predictive biology. A key to achieving this vision is a collection of accurate time-resolved and spatially defined metabolite abundance data and associated metadata. One formidable challenge associated with metabolite profiling is the complexity and analytical limits associated with comprehensively determining the metabolome of an organism. Further, for metabolomics data to be efficiently used by the research community, it must be curated in publicly available metabolomics databases. Such databases require clear, consistent formats, easy access to data and metadata, data download, and accessible computational tools to integrate genome system-scale datasets. Although transcriptomics and proteomics integrate the linear predictive power of the genome, the metabolome represents the nonlinear, final biochemical products of the genome, which results from the intricate system(s) that regulate genome expression. For example, the relationship of metabolomics data to the metabolic network is confounded by redundant connections between metabolites and gene-products. However, connections among metabolites are predictable through the rules of chemistry. Therefore, enhancing the ability to integrate the metabolome with anchor-points in the transcriptome and proteome will enhance the predictive power of genomics data. We detail a public database repository for metabolomics, tools and approaches for statistical analysis of metabolomics data, and methods for integrating these datasets with transcriptomic data to create hypotheses concerning specialized metabolisms that generate the diversity in natural product chemistry. We discuss the importance of close collaborations among biologists, chemists, computer scientists and statisticians throughout the development of such integrated metabolism-centric databases and software.

  13. A global approach to analysis and interpretation of metabolic data for plant natural product discovery†

    PubMed Central

    Hur, Manhoi; Campbell, Alexis Ann; Almeida-de-Macedo, Marcia; Li, Ling; Ransom, Nick; Jose, Adarsh; Crispin, Matt; Nikolau, Basil J.

    2013-01-01

    Discovering molecular components and their functionality is key to the development of hypotheses concerning the organization and regulation of metabolic networks. The iterative experimental testing of such hypotheses is the trajectory that can ultimately enable accurate computational modelling and prediction of metabolic outcomes. This information can be particularly important for understanding the biology of natural products, whose metabolism itself is often only poorly defined. Here, we describe factors that must be in place to optimize the use of metabolomics in predictive biology. A key to achieving this vision is a collection of accurate time-resolved and spatially defined metabolite abundance data and associated metadata. One formidable challenge associated with metabolite profiling is the complexity and analytical limits associated with comprehensively determining the metabolome of an organism. Further, for metabolomics data to be efficiently used by the research community, it must be curated in publicly available metabolomics databases. Such databases require clear, consistent formats, easy access to data and metadata, data download, and accessible computational tools to integrate genome system-scale datasets. Although transcriptomics and proteomics integrate the linear predictive power of the genome, the metabolome represents the nonlinear, final biochemical products of the genome, which results from the intricate system(s) that regulate genome expression. For example, the relationship of metabolomics data to the metabolic network is confounded by redundant connections between metabolites and gene-products. However, connections among metabolites are predictable through the rules of chemistry. Therefore, enhancing the ability to integrate the metabolome with anchor-points in the transcriptome and proteome will enhance the predictive power of genomics data. We detail a public database repository for metabolomics, tools and approaches for statistical analysis of metabolomics data, and methods for integrating these datasets with transcriptomic data to create hypotheses concerning specialized metabolisms that generate the diversity in natural product chemistry. We discuss the importance of close collaborations among biologists, chemists, computer scientists and statisticians throughout the development of such integrated metabolism-centric databases and software. PMID:23447050

  14. The global record of local iron geochemical data from Proterozoic through Paleozoic basins

    NASA Astrophysics Data System (ADS)

    Sperling, E. A.; Wolock, C.; Johnston, D. T.; Knoll, A. H.

    2013-12-01

    Iron-based redox proxies represent one of the most mature tools available to sedimentary geochemists. These techniques, which benefit from decades of refinement, are based on the fact that rocks deposited under anoxic conditions tend to be enriched in highly-reactive iron. However, there are myriad local controls on the development of anoxia, and no local section is an exemplar for the global ocean. The global signal must thus be determined using techniques like those developed to solve an analogous problem in paleobiology: the inference of global diversity patterns through time from faunas seen in local stratigraphic sections. Here we analyze a dataset of over 4000 iron speciation measurements (including over 600 de novo analyses) to better understand redox changes from the Proterozoic through the Paleozoic Era. Preliminary database analyses yield interesting observations. We find that although anoxic water columns in the middle Proterozoic were dominantly ferruginous, there was a statistical tendency towards euxinia not seen in early Neoproterozoic or Ediacaran data. Also, we find that in the Neoproterozoic oceans, oxic depositional environments (the likely home for early animals) have exceptionally low pyrite contents, and by inference low levels of porewater sulfide. This runs contrary to notions of sulfide stress on early metazoans. Finally, the current database of iron speciation data does not support an Ediacaran or Cambrian oxygenation event. This conclusion is of course only as sharp as the ability of the Fe-proxy database to track dissolved oxygen and does not rule out the possibility of a small-magnitude change in oxygen. It does suggest, however, that if changing pO2 facilitated animal diversification, it did so by a limited rise past critical ecological thresholds, such as is seen in the benthos of modern Oxygen Minimum Zones. Oxygen increase to modern levels thus becomes a Paleozoic problem, and one in need of better sampling if a database approach is to be employed.

  15. A comparative method for processing immunological parameters: developing an "Immunogram".

    PubMed

    Ortolani, Riccardo; Bellavite, Paolo; Paiola, Fiorenza; Martini, Morena; Marchesini, Martina; Veneri, Dino; Franchini, Massimo; Chirumbolo, Salvatore; Tridente, Giuseppe; Vella, Antonio

    2010-04-01

    The immune system is a network of numerous cells that communicate both directly and indirectly with each other. The system is very sensitive to antigenic stimuli, which are memorised, and is closely connected with the endocrine and nervous systems. Therefore, in order to study the immune system correctly, it must be considered in all its complexity by analysing its components with multiparametric tools that take its dynamic characteristic into account. We analysed lymphocyte subpopulations by using monoclonal antibodies with six different fluorochromes; the monoclonal panel employed included CD45, CD3, CD4, CD8, CD16, CD56, CD57, CD19, CD23, CD27, CD5, and HLA-DR. This panel has enabled us to measure many lymphocyte subsets in different states and with different functions: helper, suppressor, activated, effector, naïve, memory, and regulatory. A database was created to collect the values of immunological parameters of approximately 8,000 subjects who have undergone testing since 2000. When the distributions of the values for these parameters were compared with the medians of reference values published in the literature, we found that most of the values from the subjects included in the database were close to the medians in the literature. To process the data we used a comparative method that calculates the percentile rank of the values of a subject by comparing them with the values for other subjects of the same age. From this data processing we obtained a set of percentile ranks that represent the positions of the various parameters with regard to the data for other age-matched subjects included in the database. These positions, relative to both the absolute values and percentages, are plotted in a graph. We have called the final plot, which can be likened to that subject's immunological fingerprint, an "Immunogram". In order to perform the necessary calculations automatically, we developed dedicated software (Immunogramma) which provides at least two different "pictures" for each subject: the first is based on a comparison of the individual's data with those from all age-related subjects, while the second provides a comparison with only age and disease-related subjects. In addition, we can superimpose two fingerprints from the same subject, calculated at different times, in order to produce a dynamic picture, for instance before and after treatment. Finally, with the aim of interpreting the clinical and diagnostic meaning of a set of positions for the values of the measured parameters, we can also search the database to determine whether it contains other subjects who have a similar pattern for some selected immune parameters. This method helps to study and follow up immune parameters over time. The software enables automation of the process and data sharing with other departments and laboratories, so the database can grow rapidly, thus expanding its informational capacity.
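
    The comparative step described above, placing each measured parameter at its percentile rank among age-matched subjects, can be sketched as follows; the parameter names and the age window are illustrative assumptions, not details taken from the article.

        # Minimal sketch of the percentile-rank ("Immunogram") computation:
        # for each measured parameter, find where the subject's value falls
        # among age-matched reference subjects. Field names and the +/- 5 year
        # age window are illustrative assumptions.

        def percentile_rank(value, reference_values):
            """Percentage of reference values that fall at or below `value`."""
            below = sum(1 for v in reference_values if v <= value)
            return 100.0 * below / len(reference_values)

        def immunogram(subject, database, age_window=5):
            peers = [rec for rec in database
                     if abs(rec["age"] - subject["age"]) <= age_window]
            return {param: percentile_rank(subject[param],
                                           [rec[param] for rec in peers])
                    for param in subject if param != "age"}

        subject = {"age": 42, "CD3": 1500, "CD4": 900, "CD8": 500}
        database = [{"age": 40, "CD3": 1400, "CD4": 850, "CD8": 520},
                    {"age": 44, "CD3": 1600, "CD4": 950, "CD8": 480},
                    {"age": 43, "CD3": 1550, "CD4": 880, "CD8": 510}]
        print(immunogram(subject, database))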

  16. Integrating computer programs for engineering analysis and design

    NASA Technical Reports Server (NTRS)

    Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.

    1983-01-01

    The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.

  17. Mineral resources management based on GIS and RS: a case study of the Laozhaiwan Gold Mine

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Hua, Xianghong; Wang, Xinzhou; Ma, Liguang; Yuan, Yanbin

    2005-10-01

    With the development of digital information technology in the mining industry, the concepts of DM (Digital Mining) and MGIS (Mining Geographical Information System) are becoming a research focus, but remain imperfect. How to effectively manage geological, surveying, and mineral-product-grade datasets is the key issue for sustainable development and standardized management in the mining industry. Based on existing combined GIS and remote sensing technology, we propose a model named DMMIS (Digital Mining Management Information System), which is composed of a database layer, an ActiveX layer, and a user interface layer. The system is applied in the Laozhaiwan Gold Mine, Yunnan Province of China, to demonstrate the feasibility of the research and development work presented in this paper. Finally, some conclusions and constructive suggestions for future research are given.

  18. Toward Mycobacterium tuberculosis DXR inhibitor design: homology modeling and molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Singh, Nidhi; Avery, Mitchell A.; McCurdy, Christopher R.

    2007-09-01

    Mycobacterium tuberculosis 1-deoxy-D-xylulose-5-phosphate reductoisomerase (MtDXR) is a potential target for antitubercular chemotherapy. In the absence of its crystallographic structure, our aim was to develop a structural model of MtDXR. This will allow us to gain early insight into the structure and function of the enzyme and its likely binding to ligands and cofactors and thus, facilitate structure-based inhibitor design. To achieve this goal, initial models of MtDXR were generated using MODELER. The best quality model was refined using a series of minimizations and molecular dynamics simulations. A protein-ligand complex was also developed from the initial homology model of the target protein by including information about the known ligand as spatial restraints and optimizing the mutual interactions between the ligand and the binding site. The final model was evaluated on the basis of its ability to explain several site-directed mutagenesis data. Furthermore, a comparison of the homology model with the X-ray structure published in the final stages of the project shows excellent agreement and validates the approach. The knowledge gained from the current study should prove useful in the design and development of inhibitors as potential novel therapeutic agents against tuberculosis by either de novo drug design or virtual screening of large chemical databases.

  19. Enhancing the DNA Patent Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walters, LeRoy B.

    Final Report on Award No. DE-FG0201ER63171 Principal Investigator: LeRoy B. Walters February 18, 2008 This project successfully completed its goal of surveying and reporting on the DNA patenting and licensing policies at 30 major U.S. academic institutions. The report of survey results was published in the January 2006 issue of Nature Biotechnology under the title “The Licensing of DNA Patents by US Academic Institutions: An Empirical Survey.” Lori Pressman was the lead author on this feature article. A PDF reprint of the article will be submitted to our Program Officer under separate cover. The project team has continued to update the DNA Patent Database on a weekly basis since the conclusion of the project. The database can be accessed at dnapatents.georgetown.edu. This database provides a valuable research tool for academic researchers, policymakers, and citizens. A report entitled Reaping the Benefits of Genomic and Proteomic Research: Intellectual Property Rights, Innovation, and Public Health was published in 2006 by the Committee on Intellectual Property Rights in Genomic and Protein Research and Innovation, Board on Science, Technology, and Economic Policy at the National Academies. The report was edited by Stephen A. Merrill and Anne-Marie Mazza. This report employed and then adapted the methodology developed by our research project and quoted our findings at several points. (The full report can be viewed online at the following URL: http://www.nap.edu/openbook.php?record_id=11487&page=R1). My colleagues and I are grateful for the research support of the ELSI program at the U.S. Department of Energy.

  20. Genomics and Public Health Research: Can the State Allow Access to Genomic Databases?

    PubMed Central

    Cousineau, J; Girard, N; Monardes, C; Leroux, T; Jean, M Stanton

    2012-01-01

    Because many diseases are multifactorial disorders, the scientific progress in genomics and genetics should be taken into consideration in public health research. In this context, genomic databases will constitute an important source of information. Consequently, it is important to identify and characterize the State’s role and authority on matters related to public health, in order to verify whether it has access to such databases while engaging in public health genomic research. We first consider the evolution of the concept of public health, as well as its core functions, using a comparative approach (e.g. WHO, PAHO, CDC and the Canadian province of Quebec). Following an analysis of relevant Quebec legislation, the precautionary principle is examined as a possible avenue to justify State access to and use of genomic databases for research purposes. Finally, we consider the Influenza pandemic plans developed by WHO, Canada, and Quebec, as examples of key tools framing public health decision-making process. We observed that State powers in public health are not, in Quebec, well adapted to the expansion of genomics research. We propose that the scope of the concept of research in public health should be clear and include the following characteristics: a commitment to the health and well-being of the population and to their determinants; the inclusion of both applied research and basic research; and, an appropriate model of governance (authorization, follow-up, consent, etc.). We also suggest that the strategic approach version of the precautionary principle could guide collective choices in these matters. PMID:23113174

  1. 24 CFR 902.24 - Database adjustment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 4 2012-04-01 2012-04-01 false Database adjustment. 902.24 Section 902.24 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT (CONTINUED... PUBLIC HOUSING ASSESSMENT SYSTEM Physical Condition Indicator § 902.24 Database adjustment. (a...

  2. 24 CFR 902.24 - Database adjustment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false Database adjustment. 902.24 Section 902.24 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT (CONTINUED... PUBLIC HOUSING ASSESSMENT SYSTEM Physical Condition Indicator § 902.24 Database adjustment. (a...

  3. 24 CFR 902.24 - Database adjustment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Database adjustment. 902.24 Section 902.24 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT (CONTINUED... PUBLIC HOUSING ASSESSMENT SYSTEM Physical Condition Indicator § 902.24 Database adjustment. (a...

  4. 24 CFR 902.24 - Database adjustment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 4 2014-04-01 2014-04-01 false Database adjustment. 902.24 Section 902.24 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT (CONTINUED... PUBLIC HOUSING ASSESSMENT SYSTEM Physical Condition Indicator § 902.24 Database adjustment. (a...

  5. Developing Visualization Support System for Teaching/Learning Database Normalization

    ERIC Educational Resources Information Center

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institution, some students find it hard to learn database design theory, in particular, database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive hands-on experience in database normalization process. Design/methodology/approach: The model-view-controller architecture…

  6. Development and Operation of a Database Machine for Online Access and Update of a Large Database.

    ERIC Educational Resources Information Center

    Rush, James E.

    1980-01-01

    Reviews the development of a fault tolerant database processor system which replaced OCLC's conventional file system. A general introduction to database management systems and the operating environment is followed by a description of the hardware selection, software processes, and system characteristics. (SW)

  7. Impact of land use, soil and DEM databases on surface runoff assessment with GIS decision support tool: A study case on the Briançon vineyard catchment (Gard, France)

    NASA Astrophysics Data System (ADS)

    Regazzoni, C.; Payraudeau, S.

    2012-04-01

    Runoff and associated erosion represent a primary mode of mobilization and transfer of pesticides from agricultural lands to watercourses and groundwater. Pesticide toxicity is potentially higher at the headwater catchment scale. These catchments are usually ungauged and characterized by temporary streams. Several mitigation strategies and management practices are currently used to mitigate pesticide mixtures in agro-ecosystems. Among those practices, Stormwater Wetlands (SW) can be implemented to store surface runoff and to mitigate pesticide loads. The implementation of New Potential Stormwater Wetlands (NPSW) requires a diagnosis of intermittent runoff at the headwater catchment scale. The main difficulty in performing this diagnosis is to spatially characterize the landscape components with enough accuracy. Indeed, fields and field margins enhance or decrease runoff and determine the pathways of Hortonian overland flow. Land use, soil and Digital Elevation Model databases are systematically used, but the respective weight of each of these databases on the uncertainty of the diagnostic results is rarely analyzed at the headwater catchment scale. Therefore, this work focused on (i) the uncertainties of each of these databases and their propagation in Hortonian overland flow modelling, (ii) methods to improve the accuracy of each database, (iii) the propagation of database uncertainties in intermittent runoff modelling, and (iv) the impact of modelling cell size on the diagnosis. The model developed was a raster implementation of the SCS-CN method integrating re-infiltration processes. The uncertainty propagation was analyzed on the Briançon vineyard catchment (Gard, France, 1400 ha). For this study site, the results showed that the geographic and thematic accuracies of the regional soil database (1:250 000) were insufficient to correctly simulate Hortonian overland flow; these results have to be weighted according to soil heterogeneity. Conversely, the regional land use database (1:50 000) provided an acceptable diagnostic when combined with an accurate soil database (1:15 000). Moreover, the regional land use quality can be improved by integrating road and river networks usually available at the national scale. Finally, a 5 m modelling cell size appeared to be the optimum to correctly describe the landscape components and to assess Hortonian overland flow. A wrong assessment of Hortonian overland flow leads to misinterpretation of the results and affects decision-making, e.g. the number and location of the NPSW. This uncertainty analysis and the improvement methods developed for this study site can be adapted to other headwater catchments characterized by intermittent surface runoff.
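
    For reference, the curve-number runoff relation at the core of such a raster model can be sketched as below. This is only the standard SCS-CN equation for a single cell; the study's downslope routing and re-infiltration processes are not reproduced.

        # Standard SCS-CN runoff depth for one cell (metric units, mm).
        # S is the potential maximum retention derived from the curve number CN;
        # Ia = 0.2*S is the conventional initial abstraction. This is a textbook
        # sketch; the study's raster model additionally routes runoff downslope
        # and allows re-infiltration, which is omitted here.

        def scs_cn_runoff(rainfall_mm: float, cn: float) -> float:
            s = 25400.0 / cn - 254.0          # potential retention (mm)
            ia = 0.2 * s                      # initial abstraction (mm)
            if rainfall_mm <= ia:
                return 0.0
            return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

        # Example: a 40 mm storm on a vineyard cell with CN = 85.
        print(round(scs_cn_runoff(40.0, 85.0), 1), "mm of runoff")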

  8. Resident database interfaces to the DAVID system, a heterogeneous distributed database management system

    NASA Technical Reports Server (NTRS)

    Moroh, Marsha

    1988-01-01

    A methodology for building interfaces of resident database management systems to a heterogeneous distributed database management system under development at NASA, the DAVID system, was developed. The feasibility of that methodology was demonstrated by construction of the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented. The work performed and the results are summarized.

  9. A Web-based searchable system to confirm magnetic resonance compatibility of implantable medical devices in Japan: a preliminary study.

    PubMed

    Fujiwara, Yasuhiro; Fujioka, Hitoshi; Watanabe, Tomoko; Sekiguchi, Maiko; Murakami, Ryuji

    2017-09-01

    Confirmation of the magnetic resonance (MR) compatibility of implanted medical devices (IMDs) is mandatory before conducting magnetic resonance imaging (MRI) examinations. In Japan, few such confirmation methods are in use, and they are time-consuming. This study aimed to develop a Web-based searchable MR safety information system to confirm IMD compatibility and to evaluate the usefulness of the system. First, MR safety information for intravascular stents and stent grafts sold in Japan was gathered by interviewing 20 manufacturers. These IMDs were categorized based on the descriptions available on medical package inserts as: "MR Safe," "MR Conditional," "MR Unsafe," "Unknown," and "No Medical Package Insert Available". An MR safety information database for implants was created based on previously proposed item lists. Finally, a Web-based searchable system was developed using this database. A questionnaire was given to health-care personnel in Japan to evaluate the usefulness of this system. Seventy-nine datasets were collected using information provided by 12 manufacturers and by investigating the medical packaging of the IMDs. Although the datasets must be updated by collecting data from other manufacturers, this system facilitates the easy and rapid acquisition of MR safety information for IMDs, thereby improving the safety of MRI examinations.

  10. Overview of data and conceptual approaches for derivation of quantitative structure-activity relationships for ecotoxicological effects of organic chemicals.

    PubMed

    Bradbury, Steven P; Russom, Christine L; Ankley, Gerald T; Schultz, T Wayne; Walker, John D

    2003-08-01

    The use of quantitative structure-activity relationships (QSARs) in assessing potential toxic effects of organic chemicals on aquatic organisms continues to evolve as computational efficiency and toxicological understanding advance. With the ever-increasing production of new chemicals, and the need to optimize resources to assess thousands of existing chemicals in commerce, regulatory agencies have turned to QSARs as essential tools to help prioritize tiered risk assessments when empirical data are not available to evaluate toxicological effects. Progress in designing scientifically credible QSARs is intimately associated with the development of empirically derived databases of well-defined and quantified toxicity endpoints, which are based on a strategic evaluation of diverse sets of chemical structures, modes of toxic action, and species. This review provides a brief overview of four databases created for the purpose of developing QSARs for estimating toxicity of chemicals to aquatic organisms. The evolution of QSARs based initially on general chemical classification schemes, to models founded on modes of toxic action that range from nonspecific partitioning into hydrophobic cellular membranes to receptor-mediated mechanisms is summarized. Finally, an overview of expert systems that integrate chemical-specific mode of action classification and associated QSAR selection for estimating potential toxicological effects of organic chemicals is presented.

  11. Development of equations to predict the influence of floor space on average daily gain, average daily feed intake and gain : feed ratio of finishing pigs.

    PubMed

    Flohr, J R; Dritz, S S; Tokach, M D; Woodworth, J C; DeRouchey, J M; Goodband, R D

    2018-05-01

    Floor space allowance for pigs has substantial effects on pig growth and welfare. Data from 30 papers examining the influence of floor space allowance on the growth of finishing pigs were used in a meta-analysis to develop alternative prediction equations for average daily gain (ADG), average daily feed intake (ADFI) and gain:feed ratio (G:F). Treatment means were compiled in a database that contained 30 papers for ADG and 28 papers for ADFI and G:F. The predictor variables evaluated were floor space (m²/pig), k (floor space / final BW^0.67), initial BW, final BW, feed space (pigs per feeder hole), water space (pigs per waterer), group size (pigs per pen), gender, floor type and study length (d). Multivariable general linear mixed model regression equations were used. Floor space treatments within each experiment were the observational and experimental unit. The optimum equations to predict ADG, ADFI and G:F were: ADG, g = 337.57 + (16,468 × k) - (237,350 × k²) - (3.1209 × initial BW, kg) + (2.569 × final BW, kg) + (71.6918 × k × initial BW, kg); ADFI, g = 833.41 + (24,785 × k) - (388,998 × k²) - (3.0027 × initial BW, kg) + (11.246 × final BW, kg) + (187.61 × k × initial BW, kg); G:F = predicted ADG / predicted ADFI. Overall, the meta-analysis indicates that BW is an important predictor of ADG and ADFI even after computing the constant coefficient k, which utilizes final BW in its calculation. This suggests that including initial and final BW improves the prediction over using k as a predictor alone. In addition, the analysis indicated that G:F of finishing pigs is influenced by floor space allowance, whereas individual studies have concluded variable results.
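
    Because the prediction equations above are fully specified, they can be applied directly; a minimal sketch (floor space in m²/pig, BW in kg, ADG and ADFI in g/day; the example inputs are illustrative):

        # Predicted finishing-pig performance from the meta-analysis equations
        # quoted above. k is floor space divided by final BW^0.67.

        def predict_performance(floor_space_m2: float, initial_bw: float, final_bw: float):
            k = floor_space_m2 / final_bw ** 0.67
            adg = (337.57 + 16468 * k - 237350 * k**2
                   - 3.1209 * initial_bw + 2.569 * final_bw
                   + 71.6918 * k * initial_bw)          # g/day
            adfi = (833.41 + 24785 * k - 388998 * k**2
                    - 3.0027 * initial_bw + 11.246 * final_bw
                    + 187.61 * k * initial_bw)          # g/day
            return adg, adfi, adg / adfi                # ADG, ADFI, G:F

        # Example: 0.75 m2/pig, growing from 30 kg to 120 kg.
        adg, adfi, gf = predict_performance(0.75, 30.0, 120.0)
        print(f"ADG={adg:.0f} g, ADFI={adfi:.0f} g, G:F={gf:.3f}")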

  12. Development, deployment and operations of ATLAS databases

    NASA Astrophysics Data System (ADS)

    Vaniachine, A. V.; Schmitt, J. G. v. d.

    2008-07-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  13. PRODORIC2: the bacterial gene regulation database in 2018

    PubMed Central

    Dudek, Christian-Alexander; Hartlich, Juliane; Brötje, David; Jahn, Dieter

    2018-01-01

    Abstract Bacteria adapt to changes in their environment via differential gene expression mediated by DNA binding transcriptional regulators. The PRODORIC2 database hosts one of the largest collections of DNA binding sites for prokaryotic transcription factors. It is the result of a thorough redesign of the PRODORIC database. PRODORIC2 is more intuitive and user-friendly. Besides significant technical improvements, the new update offers more than 1000 new transcription factor binding sites and 110 new position weight matrices for genome-wide pattern searches with the Virtual Footprint tool. Moreover, binding sites deduced from high-throughput experiments were included. Data for 6 new bacterial species including bacteria of the Rhodobacteraceae family were added. Finally, a comprehensive collection of sigma- and transcription factor data for the nosocomial pathogen Clostridium difficile is now part of the database. PRODORIC2 is publicly available at http://www.prodoric2.de. PMID:29136200
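
    Genome-wide pattern searching with position weight matrices, as offered by the Virtual Footprint tool, can be illustrated with a minimal sliding-window scorer; this is a generic sketch, not the actual Virtual Footprint implementation, and the matrix values are invented.

        # Minimal position-weight-matrix (PWM) scan: slide a window along the
        # sequence and sum per-position weights; report windows above a threshold.
        # The 4-position example matrix is invented purely for illustration.

        PWM = {  # weight of each base at each of 4 positions (log-odds-like)
            "A": [ 1.2, -0.5, -1.0,  0.8],
            "C": [-0.7,  1.5, -0.3, -1.2],
            "G": [-0.4, -0.8,  1.4, -0.6],
            "T": [-1.1, -0.2, -0.9,  1.0],
        }
        WIDTH = 4

        def scan(sequence: str, threshold: float):
            for i in range(len(sequence) - WIDTH + 1):
                window = sequence[i:i + WIDTH]
                score = sum(PWM[base][j] for j, base in enumerate(window))
                if score >= threshold:
                    yield i, window, score

        for pos, site, score in scan("TTACGAACGATACGA", threshold=3.0):
            print(pos, site, round(score, 2))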

  14. 20 CFR 411.250 - How will SSA evaluate a PM?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... PROGRAM Use of One or More Program Managers To Assist in Administration of the Ticket to Work Program... determine the PM's final rating. (c) These performance evaluations will be made part of our database on...

  15. IRIS Toxicological Review of Hexachloroethane (External Review Draft)

    EPA Science Inventory

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of hexachloroethane that when finalized will appear on the Integrated Risk Information System (IRIS) database.

  16. 13 CFR 127.604 - How will SBA process an EDWOSB or WOSB status protest?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... women claiming economic disadvantage and their spouses, unless the individuals and their spouses are... procurement reporting databases to reflect the final agency decision (the D/GC's decision if no appeal is...

  17. 13 CFR 127.604 - How will SBA process an EDWOSB or WOSB status protest?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... women claiming economic disadvantage and their spouses, unless the individuals and their spouses are... procurement reporting databases to reflect the final agency decision (the D/GC's decision if no appeal is...

  18. Key comparison BIPM.RI(I)-K9 of the absorbed dose to water standards of the PTB, Germany and the BIPM in medium-energy x-rays

    NASA Astrophysics Data System (ADS)

    Burns, D. T.; Kessler, C.; Büermann, L.; Ketelhut, S.

    2018-01-01

    A key comparison has been made between the absorbed dose to water standards of the PTB, Germany and the BIPM in the medium-energy x-ray range. The results show the standards to be in general agreement at the level of the standard uncertainty of the comparison of 9 to 11 parts in 10³. The results are combined with those of a EURAMET comparison and presented in terms of degrees of equivalence for entry in the BIPM key comparison database. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCRI, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  19. Demonstrating the financial impact of clinical libraries: a systematic review.

    PubMed

    Madden, Anne; Collins, Pamela; McGowan, Sondhaya; Stevenson, Paul; Castelli, David; Hyde, Loree; DeSanto, Kristen; O'Brien, Nancy; Purdon, Michelle; Delgado, Diana

    2016-09-01

    The purpose of this review is to evaluate the tools used to measure the financial value of libraries in a clinical setting. Searches were carried out on ten databases for the years 2003-2013, with a final search before completion to identify any recent papers. Eleven papers met the final inclusion criteria. There was no evidence of a single 'best practice', and many metrics used to measure financial impact of clinical libraries were developed on an ad hoc basis locally. The most common measures of financial impact were value of time saved, value of resource collection against cost of alternative sources, cost avoidance and revenue generated through assistance on grant submissions. Few papers provided an insight into the longer term impact on the library service resulting from submitting return on investment (ROI) or other financial impact statements. There are limited examples of metrics which clinical libraries can use to measure explicit financial impact. The methods highlighted in this literature review are generally implicit in the measures used and lack robustness. There is a need for future research to develop standardised, validated tools that clinical libraries can use to demonstrate their financial impact. © 2016 Health Libraries Group.

  20. [Medicinal plant DNA marker assisted breeding (Ⅱ) the assistant identification of SNPs assisted identification and breeding research of high yield Perilla frutescens new variety].

    PubMed

    Shen, Qi; Zhang, Dong; Sun, Wei; Zhang, Yu-Jun; Shang, Zhi-Wei; Chen, Shi-Lin

    2017-05-01

    Perilla frutescens is one of 60 kinds of food and medicine plants in the initial directory announced by the health ministry of China. With the development of the Perilla sector in recent years, the breeding and application of good varieties has become the main bottleneck of its development. This study reports the breeding of a Perilla variety by systematic selection combined with a marker-assisted method. Through whole-genome sequencing and consistency matching, mutation loci were annotated according to genome data and compared against a database of common Perilla variants; finally, 30 non-synonymous SNPs were selected as characteristic markers of Zhongyan Feishu No.1. These SNP markers were used as the selection standard for Perilla varieties. The result was the new Perilla variety Zhongyan Feishu No.1, which combines dual use of leaf and seed, high yield, and high resistance, and can also be used as green fertilizer. Zhongyan Feishu No.1 received new plant variety identification from the city of Beijing (identification number 2016054). Marker-assisted identification can guide the breeding of new plant varieties and provides a new reference for the breeding of medicinal plants. Copyright© by the Chinese Pharmaceutical Association.

  1. The Primate Life History Database: A unique shared ecological data resource

    PubMed Central

    Strier, Karen B.; Altmann, Jeanne; Brockman, Diane K.; Bronikowski, Anne M.; Cords, Marina; Fedigan, Linda M.; Lapp, Hilmar; Liu, Xianhua; Morris, William F.; Pusey, Anne E.; Stoinski, Tara S.; Alberts, Susan C.

    2011-01-01

    Summary The importance of data archiving, data sharing, and public access to data has received considerable attention. Awareness is growing among scientists that collaborative databases can facilitate these activities. We provide a detailed description of the collaborative life history database developed by our Working Group at the National Evolutionary Synthesis Center (NESCent) to address questions about life history patterns and the evolution of mortality and demographic variability in wild primates. Examples from each of the seven primate species included in our database illustrate the range of data incorporated and the challenges, decision-making processes, and criteria applied to standardize data across diverse field studies. In addition to the descriptive and structural metadata associated with our database, we also describe the process metadata (how the database was designed and delivered) and the technical specifications of the database. Our database provides a useful model for other researchers interested in developing similar types of databases for other organisms, while our process metadata may be helpful to other groups of researchers interested in developing databases for other types of collaborative analyses. PMID:21698066

  2. Integrating a local database into the StarView distributed user interface

    NASA Technical Reports Server (NTRS)

    Silberberg, D. P.

    1992-01-01

    A distributed user interface to the Space Telescope Data Archive and Distribution Service (DADS) known as StarView is being developed. The DADS architecture consists of the data archive as well as a relational database catalog describing the archive. StarView is a client/server system in which the user interface is the front-end client to the DADS catalog and archive servers. Users query the DADS catalog from the StarView interface. Query commands are transmitted via a network and evaluated by the database. The results are returned via the network and are displayed on StarView forms. Based on the results, users decide which data sets to retrieve from the DADS archive. Archive requests are packaged by StarView and sent to DADS, which returns the requested data sets to the users. The advantages of distributed client/server user interfaces over traditional one-machine systems are well known. Since users run software on machines separate from the database, the overall client response time is much faster. Also, since the server is free to process only database requests, the database response time is much faster. Disadvantages inherent in this architecture are slow overall database access time due to network delays, the lack of a 'get previous row' command, and the fact that refinements of a previously issued query must be resubmitted to the database server, even though the domain of values has already been returned by the previous query. This architecture also does not allow users to cross-correlate DADS catalog data with other catalogs. Clearly, a distributed user interface would be more powerful if it overcame these disadvantages. A local database is being integrated into StarView to overcome them. When a query is made through a StarView form, which is often composed of fields from multiple tables, it is translated to an SQL query and issued to the DADS catalog. At the same time, a local database table is created to contain the resulting rows of the query. The returned rows are displayed on the form as well as inserted into the local database table. Identical results are produced by reissuing the query to either the DADS catalog or the local table. Relational databases do not provide a 'get previous row' function because of the inherent complexity of retrieving previous rows of multiple-table joins. However, since this function is easily implemented on a single table, StarView uses the local table to retrieve the previous row. Also, StarView issues subsequent query refinements to the local table instead of the DADS catalog, eliminating the network transmission overhead. Finally, other catalogs can be imported into the local database for cross-correlation with local tables. Overall, it is believed that this is a more powerful architecture for distributed database user interfaces.
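
    The local-table strategy can be sketched as follows: mirror each remote result set into a single local table, then serve row navigation and query refinements from that table. The schema, the data, and the simulated remote result below are invented for illustration.

        import sqlite3

        # Sketch of StarView-style local caching: rows returned by the remote
        # catalog query are inserted into a local single-table cache, after which
        # "previous row" navigation and query refinements run locally with no
        # network round trip.

        remote_result = [                      # pretend this came from the catalog
            ("obs1", "WFPC2", 1200.0),
            ("obs2", "FOC",    800.0),
            ("obs3", "WFPC2", 2400.0),
        ]

        local = sqlite3.connect(":memory:")
        local.execute("CREATE TABLE cache (obs_id TEXT, instrument TEXT, exptime REAL)")
        local.executemany("INSERT INTO cache VALUES (?, ?, ?)", remote_result)

        # Refinement of the earlier query, evaluated locally:
        rows = local.execute(
            "SELECT * FROM cache WHERE instrument = ? AND exptime > ?",
            ("WFPC2", 1000.0)).fetchall()

        # "Get previous row" is trivial on a single local table:
        previous_of_obs3 = local.execute(
            "SELECT * FROM cache WHERE rowid < (SELECT rowid FROM cache "
            "WHERE obs_id = ?) ORDER BY rowid DESC LIMIT 1", ("obs3",)).fetchone()

        print(rows, previous_of_obs3)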

  3. Development of a food frequency questionnaire for Sri Lankan adults

    PubMed Central

    2012-01-01

    Background Food Frequency Questionnaires (FFQs) are commonly used in epidemiologic studies to assess long-term nutritional exposure. Because of wide variations in dietary habits in different countries, an FFQ must be developed to suit the specific population. Sri Lanka is undergoing a nutritional transition, and diet-related chronic diseases are emerging as an important health problem. Currently, no FFQ has been developed for Sri Lankan adults. In this study, we developed an FFQ to assess the regular dietary intake of Sri Lankan adults. Methods A nationally representative sample of 600 adults was selected by a multi-stage random cluster sampling technique and dietary intake was assessed by random 24-h dietary recall. Nutrient analysis of the FFQ required the selection of foods, development of recipes and application of these to cooked foods to develop a nutrient database. We constructed a comprehensive food list with the units of measurement. A stepwise regression method was used to identify foods contributing a cumulative 90% of the variance in total energy and macronutrient intake. In addition, a series of photographs were included. Results We obtained dietary data from 482 participants and 312 different food items were recorded. Nutritionists grouped similar food items, which resulted in a total of 178 items. After performing step-wise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected. Conclusion We developed an FFQ and the related nutrient composition database for Sri Lankan adults. Culturally specific dietary tools are central to capturing the role of diet in risk for chronic disease in Sri Lanka. The next step will involve the verification of FFQ reproducibility and validity. PMID:22937734
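
    The stepwise shortlisting of foods (repeatedly add the food whose intake most increases the explained variance in energy intake, stopping at 90%) can be sketched as a simple forward selection; the data below are random stand-ins for the 24-h recall records, not the study's data.

        import numpy as np

        # Forward selection of foods: repeatedly add the food whose intake column
        # most increases R^2 for total energy, stopping at 90% explained variance.
        # Random data stand in for the per-subject 24-h recall intakes.

        rng = np.random.default_rng(0)
        n_subjects, n_foods = 200, 25
        intakes = rng.gamma(2.0, 50.0, size=(n_subjects, n_foods))   # g/day per food
        energy = intakes @ rng.uniform(0.5, 4.0, n_foods)            # kcal/day

        def r_squared(X, y):
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ coef
            return 1.0 - resid.var() / y.var()

        selected, remaining = [], list(range(n_foods))
        while remaining and (not selected or
                             r_squared(intakes[:, selected], energy) < 0.90):
            best = max(remaining,
                       key=lambda j: r_squared(intakes[:, selected + [j]], energy))
            selected.append(best)
            remaining.remove(best)

        print(f"{len(selected)} foods explain "
              f"{r_squared(intakes[:, selected], energy):.1%} of energy variance")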

  4. Final report of international comparison APMP.QM-S2.2015 of oxygen in nitrogen at 0.2 mol/mol

    NASA Astrophysics Data System (ADS)

    Aoki, Nobuyuki; Shimosaka, Takuya; Lin, Tsai-Yin; Liu, Hsin-Wang; Huang, Chiung-Kun; Wongjuk, Arnuttachai; Rattanasombat, Soponrat; Sinweeruthai, Ratirat; Uehara, Shinji; Aleksandrov, Vladimir

    2017-01-01

    This document describes the results of the supplementary comparison for oxygen in nitrogen gas mixture. The nominal amount-of-substance fraction of oxygen is 0.20 mol/mol. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  5. Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model

    DTIC Science & Technology

    2014-04-01

    time savings in the Java coding development process. Appendices A and B describe address setup procedures for installing the MySQL database...development environment is required: • The open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC...compliant DBMS • MySQL JDBC Driver library that comes as a plug-in with the Netbeans distribution • The latest Java Development Kit with the latest

  6. A life cycle database for parasitic acanthocephalans, cestodes, and nematodes

    USGS Publications Warehouse

    Benesh, Daniel P.; Lafferty, Kevin D.; Kuris, Armand

    2017-01-01

    Parasitologists have worked out many complex life cycles over the last ~150 years, yet there have been few efforts to synthesize this information to facilitate comparisons among taxa. Most existing host-parasite databases focus on particular host taxa, do not distinguish final from intermediate hosts, and lack parasite life-history information. We summarized the known life cycles of trophically transmitted parasitic acanthocephalans, cestodes, and nematodes. For 973 parasite species, we gathered information from the literature on the hosts infected at each stage of the parasite life cycle (8510 host-parasite species associations), what parasite stage is in each host, and whether parasites need to infect certain hosts to complete the life cycle. We also collected life-history data for these parasites at each life cycle stage, including 2313 development time measurements and 7660 body size measurements. The result is the most comprehensive data summary available for these parasite taxa. In addition to identifying gaps in our knowledge of parasite life cycles, these data can be used to test hypotheses about life cycle evolution, host specificity, parasite life-history strategies, and the roles of parasites in food webs.

  7. Employing image processing techniques for cancer detection using microarray images.

    PubMed

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, using the extracted data, cancerous cells are recognized. To evaluate the performance of the proposed system, a microarray database is employed that includes breast cancer, myeloid leukemia, and lymphoma datasets from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the dataset with an accuracy of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
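
    The gridding step, locating the spot lattice before per-gene intensities are extracted, is commonly done by projecting image intensity onto each axis and finding the periodic peaks; the toy sketch below illustrates that general approach and is not the authors' implementation.

        import numpy as np

        # Toy gridding sketch: project the image onto each axis and take local
        # maxima of the intensity profiles as row/column centres of the spot grid.
        # The synthetic image stands in for a scanned microarray block.

        def profile_peaks(profile, min_separation):
            peaks = [i for i in range(1, len(profile) - 1)
                     if profile[i] >= profile[i - 1] and profile[i] > profile[i + 1]]
            kept = []
            for p in peaks:                       # enforce a minimum spot spacing
                if not kept or p - kept[-1] >= min_separation:
                    kept.append(p)
            return kept

        def grid(image, spacing=8):
            rows = profile_peaks(image.mean(axis=1), spacing)
            cols = profile_peaks(image.mean(axis=0), spacing)
            return [(r, c) for r in rows for c in cols]  # candidate spot centres

        # Synthetic image with a 4x4 grid of bright spots.
        img = np.zeros((40, 40))
        for r in range(5, 40, 10):
            for c in range(5, 40, 10):
                img[r - 1:r + 2, c - 1:c + 2] = 1.0
        print(len(grid(img, spacing=8)), "spots located")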

  8. A Study of Hand Back Skin Texture Patterns for Personal Identification and Gender Classification

    PubMed Central

    Xie, Jin; Zhang, Lei; You, Jane; Zhang, David; Qu, Xiaofeng

    2012-01-01

    Human hand back skin texture (HBST) is often consistent for a person and distinctive from person to person. In this paper, we study the HBST pattern recognition problem with applications to personal identification and gender classification. A specially designed system is developed to capture HBST images, and an HBST image database was established, which consists of 1,920 images from 80 persons (160 hands). An efficient texton learning based method is then presented to classify the HBST patterns. First, textons are learned in the space of filter bank responses from a set of training images using the ℓ1-minimization-based sparse representation (SR) technique. Then, under the SR framework, we represent the feature vector at each pixel over the learned dictionary to construct a representation coefficient histogram. Finally, the coefficient histogram is used as skin texture feature for classification. Experiments on personal identification and gender classification are performed by using the established HBST database. The results show that HBST can be used to assist human identification and gender classification. PMID:23012512

  9. Relations between some horizontal‐component ground‐motion intensity measures used in practice

    USGS Publications Warehouse

    Boore, David; Kishida, Tadahiro

    2017-01-01

    Various measures using the two horizontal components of recorded ground motions have been used in a number of studies that derive ground‐motion prediction equations and construct maps of shaking intensity. We update relations between a number of these measures, including those in Boore et al. (2006) and Boore (2010), using the large and carefully constructed global database of ground motions from crustal earthquakes in active tectonic regions developed as part of the Pacific Earthquake Engineering Research Center–Next Generation Attenuation‐West2 project. The ratios from the expanded datasets generally agree to within a few percent of the previously published ratios. We also provide some ratios that were not considered before, some of which will be useful in applications such as constructing ShakeMaps. Finally, we compare two important ratios with those from a large central and eastern North American database and from many records from subduction earthquakes in Japan and Taiwan. In general, the ratios from these regions are within several percent of those from crustal earthquakes in active tectonic regions.

  10. Evolution of the LBT Telemetry System

    NASA Astrophysics Data System (ADS)

    Summers, K.; Biddick, C.; De La Peña, M. D.; Summers, D.

    2014-05-01

    The Large Binocular Telescope (LBT) Telescope Control System (TCS) records about 10GB of telemetry data per night. Additionally, the vibration monitoring system records about 9GB of telemetry data per night. Through 2013, we have amassed over 6TB of Hierarchical Data Format (HDF5) files and almost 9TB in a MySQL database of TCS and vibration data. The LBT telemetry system, in its third major revision since 2004, provides the mechanism to capture and store this data. The telemetry system has evolved from a simple HDF file system with MySQL stream definitions within the TCS, to a separate system using a MySQL database system for the definitions and data, and finally to no database use at all, using HDF5 files.
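
    The file-only end state described above can be illustrated with an appendable HDF5 dataset. The stream name and record layout below are hypothetical, not the actual LBT TCS schema.

    ```python
    # Sketch of telemetry capture into an extendable HDF5 dataset, in the
    # spirit of the database-free design described above.
    import h5py
    import numpy as np

    with h5py.File("telemetry.h5", "w") as f:
        stream = f.create_dataset(
            "tcs/mount_position",      # hypothetical stream name
            shape=(0, 2),              # columns: timestamp, encoder value
            maxshape=(None, 2),        # unlimited rows so the night can grow
            chunks=(1024, 2),
            dtype="f8",
        )
        for sample in np.random.default_rng(0).normal(size=(10, 2)):
            stream.resize(stream.shape[0] + 1, axis=0)   # append one record
            stream[-1, :] = sample
    ```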

  11. Combining new technologies for effective collection development: a bibliometric study using CD-ROM and a database management program.

    PubMed Central

    Burnham, J F; Shearer, B S; Wall, J C

    1992-01-01

    Librarians have used bibliometrics for many years to assess collections and to provide data for making selection and deselection decisions. With the advent of new technology--specifically, CD-ROM databases and reprint file database management programs--new cost-effective procedures can be developed. This paper describes a recent multidisciplinary study conducted by two library faculty members and one allied health faculty member to test a bibliometric method that used the MEDLINE and CINAHL databases on CD-ROM and the Papyrus database management program to produce a new collection development methodology. PMID:1600424

  12. Bio-psycho-social factors affecting sexual self-concept: A systematic review

    PubMed Central

    Potki, Robabeh; Ziaei, Tayebe; Faramarzi, Mahbobeh; Moosazadeh, Mahmood; Shahhosseini, Zohreh

    2017-01-01

    Background Nowadays, it is believed that the mental and emotional aspects of sexual well-being are important aspects of sexual health. Sexual self-concept is a major component of sexual health and the core of sexuality. It is defined as the cognitive perspective concerning the sexual aspects of ‘self’ and refers to the individual’s self-perception as a sexual creature. Objective The aim of this study was to assess the different factors affecting sexual self-concept. Methods English electronic databases including PubMed, Scopus, Web of Science and Google Scholar, as well as two Iranian databases, the Scientific Information Database and Iranmedex, were searched for English- and Persian-language articles published between 1996 and 2016. Of 281 retrieved articles, 37 were finally included in this review. Results Factors affecting sexual self-concept were categorized into biological, psychological and social factors. In the category of biological factors, age, gender, marital status, race, disability and sexually transmitted infections are described. In the psychological category, the impacts of body image, sexual abuse in childhood and mental health history are presented. Lastly, in the social category, the roles of parents, peers and the media are discussed. Conclusion As the development of sexual self-concept is influenced by multiple events in individuals’ lives, an integrated implementation of health policies is recommended to promote sexual self-concept. PMID:29038693

  13. Let the IRIS Bloom: Regrowing the integrated risk information system (IRIS) of the U.S. Environmental Protection Agency.

    PubMed

    Dourson, Michael L

    2018-05-03

    The Integrated Risk Information System (IRIS) of the U.S. Environmental Protection Agency (EPA) has an important role in protecting public health. Originally it provided a single database listing official risk values equally valid for all Agency offices, and it was an important tool for risk assessment communication across EPA. Started in 1986, IRIS achieved full standing in 1990 when it listed 500 risk values, the product of two senior EPA groups meeting face-to-face monthly over 5 years to assess combined risk data from multiple Agency offices. Those groups were disbanded in 1995, and without continuing face-to-face meetings, IRIS was no longer EPA's comprehensive database of risk values or their latest evaluations. As a remedy, a work group of the Agency's senior scientists should be re-established to evaluate new risks and to update older ones. Risk values to be reviewed would come from the same EPA offices now developing such information on their own. Still, this senior group would have the final authority on posting a risk value in IRIS, independently of individual EPA offices. This approach could also lay the groundwork for an all-government IRIS database, especially needed as more government agencies, industries and non-governmental organizations address evolving risk characterizations. Copyright © 2018. Published by Elsevier Inc.

  14. Identification of the Beer Component Hordenine as Food-Derived Dopamine D2 Receptor Agonist by Virtual Screening a 3D Compound Database

    NASA Astrophysics Data System (ADS)

    Sommer, Thomas; Hübner, Harald; El Kerdawy, Ahmed; Gmeiner, Peter; Pischetsrieder, Monika; Clark, Timothy

    2017-03-01

    The dopamine D2 receptor (D2R) is involved in food reward and compulsive food intake. The present study developed a virtual screening (VS) method to identify food components that may modulate D2R signalling. In contrast to their common applications in drug discovery, VS methods are rarely applied to the discovery of bioactive food compounds. Here, databases were created that exclusively contain substances occurring in food and natural sources (about 13,000 different compounds in total) as the basis for combined pharmacophore searching, hit-list clustering and molecular docking into D2R homology models. Of 17 compounds finally tested in radioligand assays to determine their binding affinities, seven were classified as hits (hit rate = 41%). Functional properties of the five most active compounds were further examined in β-arrestin recruitment and cAMP inhibition experiments. D2R-promoted G-protein activation was observed for hordenine, a constituent of barley and beer, with ligand efficacy approximately identical to that of dopamine (76%) and a Ki value of 13 μM. Moreover, hordenine antagonised D2-mediated β-arrestin recruitment, indicating functional selectivity. Application of our databases provides new perspectives for the discovery of bioactive food constituents using VS methods. Based on its presence in beer, we suggest that hordenine contributes significantly to the mood-elevating effects of beer.
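
    As a much cruder stand-in for the pharmacophore/docking workflow described above, a food-compound library can be ranked by fingerprint similarity to a known D2R ligand. The two library entries below are illustrative; the SMILES strings are standard structures for dopamine, hordenine and caffeine.

    ```python
    # Crude fingerprint-similarity stand-in for the virtual screen described
    # above: rank food compounds by Morgan-fingerprint similarity to dopamine
    # rather than pharmacophore searching and docking.
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    def fingerprint(smiles):
        return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, 2048)

    reference = fingerprint("NCCc1ccc(O)c(O)c1")      # dopamine
    library = {
        "hordenine": "CN(C)CCc1ccc(O)cc1",            # barley/beer alkaloid
        "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    }
    for name, smiles in library.items():
        score = DataStructs.TanimotoSimilarity(reference, fingerprint(smiles))
        print(name, round(score, 2))
    ```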

  15. Normative Databases for Imaging Instrumentation.

    PubMed

    Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray

    2015-08-01

    To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments.

  16. Normative Databases for Imaging Instrumentation

    PubMed Central

    Realini, Tony; Zangwill, Linda; Flanagan, John; Garway-Heath, David; Patella, Vincent Michael; Johnson, Chris; Artes, Paul; Ben Gaddie, I.; Fingeret, Murray

    2015-01-01

    Purpose To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. Methods A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Results Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer’s database differs in size, eligibility criteria, and ethnic make-up, among other key features. Conclusions The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments. PMID:25265003

  17. Preparing Nursing Home Data from Multiple Sites for Clinical Research – A Case Study Using Observational Health Data Sciences and Informatics

    PubMed Central

    Boyce, Richard D.; Handler, Steven M.; Karp, Jordan F.; Perera, Subashan; Reynolds, Charles F.

    2016-01-01

    Introduction: A potential barrier to nursing home research is the limited availability of research quality data in electronic form. We describe a case study of converting electronic health data from five skilled nursing facilities to a research quality longitudinal dataset by means of open-source tools produced by the Observational Health Data Sciences and Informatics (OHDSI) collaborative. Methods: The Long-Term Care Minimum Data Set (MDS), drug dispensing, and fall incident data from five SNFs were extracted, translated, and loaded into version 4 of the OHDSI common data model. Quality assurance involved identifying errors using the Achilles data characterization tool and comparing both quality measures and drug exposures in the new database for concordance with externally available sources. Findings: Records for a total of 4,519 patients (95.1%) were included in the final database. Achilles identified 10 different types of errors that were addressed in the final dataset. Drug exposures based on dispensing were generally accurate when compared with medication administration data from the pharmacy services provider. Quality measures were generally concordant between the new database and Nursing Home Compare for measures with a prevalence ≥ 10%. Fall data recorded in MDS was found to be more complete than data from fall incident reports. Conclusions: The new dataset is ready to support observational research on topics of clinical importance in the nursing home including patient-level prediction of falls. The extraction, translation, and loading process enabled the use of OHDSI data characterization tools that improved the quality of the final dataset. PMID:27891528
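
    The extract-translate-load step can be pictured in miniature as mapping a source dispensing record into an OMOP-style drug_exposure row. The column set below is a simplified sketch, not the exact CDM v4 definition used by the study, and the concept ID is made up.

    ```python
    # Minimal sketch of loading a translated dispensing record into an
    # OMOP-style common data model table (simplified, illustrative columns).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE drug_exposure (
            person_id INTEGER,
            drug_concept_id INTEGER,
            drug_exposure_start_date TEXT,
            quantity REAL
        )
    """)

    # A source dispensing record translated to a standard concept ID (made up).
    source_record = {"patient": 42, "mapped_concept": 1125315,
                     "dispense_date": "2013-04-01", "qty": 30}
    conn.execute(
        "INSERT INTO drug_exposure VALUES (?, ?, ?, ?)",
        (source_record["patient"], source_record["mapped_concept"],
         source_record["dispense_date"], source_record["qty"]),
    )
    print(conn.execute("SELECT * FROM drug_exposure").fetchall())
    ```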

  18. Knowledge representation in metabolic pathway databases.

    PubMed

    Stobbe, Miranda D; Jansen, Gerbert A; Moerland, Perry D; van Kampen, Antoine H C

    2014-05-01

    The accurate representation of all aspects of a metabolic network in a structured format, such that it can be used for a wide variety of computational analyses, is a challenge faced by a growing number of researchers. Analysis of five major metabolic pathway databases reveals that each database has made widely different choices to address this challenge, including how to deal with knowledge that is uncertain or missing. In concise overviews, we show how concepts such as compartments, enzymatic complexes and the direction of reactions are represented in each database. Importantly, concepts that a database does not represent are also described. Which aspects of the metabolic network need to be available in a structured format, and in what detail, differs per application. For example, for in silico phenotype prediction, a detailed representation of gene-protein-reaction relations and the compartmentalization of the network is essential. Our analysis also shows that current databases are still limited in capturing all details of the biology of the metabolic network, as further illustrated by a detailed analysis of three metabolic processes. Finally, we conclude that the conceptual differences between the databases, which make knowledge exchange and integration a challenge, have so far not been resolved by the exchange formats in which knowledge representation is standardized.
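
    To make the representation choices concrete (compartments, enzymatic complexes, reaction direction, and an explicit slot for uncertain knowledge), here is one possible structured encoding of a reaction; it is a generic sketch, not the schema of any of the five databases analyzed.

    ```python
    # One possible structured representation of the concepts discussed above.
    from dataclasses import dataclass, field
    from enum import Enum

    class Direction(Enum):
        LEFT_TO_RIGHT = "forward"
        REVERSIBLE = "reversible"
        UNKNOWN = "unknown"      # explicit slot for uncertain knowledge

    @dataclass
    class Metabolite:
        name: str
        compartment: str         # e.g. "cytosol", "mitochondrion"

    @dataclass
    class Reaction:
        substrates: list
        products: list
        direction: Direction
        catalyzing_complex: list = field(default_factory=list)  # gene/protein parts

    hexokinase_rxn = Reaction(
        substrates=[Metabolite("glucose", "cytosol"), Metabolite("ATP", "cytosol")],
        products=[Metabolite("glucose-6-phosphate", "cytosol"), Metabolite("ADP", "cytosol")],
        direction=Direction.LEFT_TO_RIGHT,
        catalyzing_complex=["HK1"],
    )
    print(hexokinase_rxn.direction.value)
    ```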

  19. First year progress report on the development of the Texas flexible pavement database.

    DOT National Transportation Integrated Search

    2008-01-01

    Comprehensive and reliable databases are essential for the development, validation, and calibration of any pavement design and rehabilitation system. These databases should include material properties, pavement structural characteristics, highway...

  20. IRIS Toxicological Review of Dichloromethane (Methylene Chloride) (External Review Draft)

    EPA Science Inventory

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of Dichloromethane that when finalized will appear on the Integrated Risk Information System (IRIS) database.

  1. IRIS Toxicological Review of Tetrahydrofuran (THF) (External Review Draft)

    EPA Science Inventory

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of tetrahydrofuran (THF) that when finalized will appear on the Integrated Risk Information System (IRIS) database.

  2. IRIS Toxicological Review of Trichloroethylene (TCE) (External Review Draft)

    EPA Science Inventory

    EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of Trichloroethylene (TCE) that when finalized will appear on the Integrated Risk Information System (IRIS) database.

  3. Improved quality truck castings : final report.

    DOT National Transportation Integrated Search

    2016-06-01

    A review of the car repair billing database shows that many bolsters and side frames are removed from service each year due to cracking or breaking. Derailment-related costs due to bolster and side frame failures total approximately $9 million per ...

  4. Drug-Path: a database for drug-induced pathways.

    PubMed

    Zeng, Hui; Qiu, Chengxiang; Cui, Qinghua

    2015-01-01

    Some databases of drug-associated pathways have been built and are publicly available. However, the pathways curated in most of these databases are drug-action or drug-metabolism pathways. In recent years, high-throughput technologies such as microarray and RNA-sequencing have produced large numbers of drug-induced gene expression profiles. Interestingly, drug-induced gene expression profiles frequently show distinct patterns, indicating that drugs normally induce the activation or repression of distinct pathways. These pathways therefore contribute to the study of drug mechanisms and to drug repurposing. Here, we present Drug-Path, a database of drug-induced pathways, which was generated by KEGG pathway enrichment analysis of drug-induced upregulated and downregulated genes, based on drug-induced gene expression datasets in Connectivity Map. Drug-Path provides user-friendly interfaces to retrieve, visualize and download the drug-induced pathway data in the database. In addition, the genes deregulated by a given drug are highlighted in the pathways. All data were organized using SQLite. The web site was implemented using Django, a Python web framework. We believe that this database will be useful for related research. © The Author(s) 2015. Published by Oxford University Press.
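
    A common way to implement the enrichment step behind such a database is a hypergeometric test of a pathway's members among a drug's upregulated genes; the paper does not specify this exact statistic, and all counts below are invented.

    ```python
    # Sketch of a pathway enrichment test: given genes upregulated by a drug,
    # ask whether a pathway's members are over-represented among them.
    from scipy.stats import hypergeom

    genome_size = 20000    # background genes
    pathway_size = 120     # genes annotated to the pathway
    upregulated = 300      # genes upregulated by the drug
    overlap = 12           # upregulated genes that are in the pathway

    # P(X >= overlap) under the hypergeometric null
    p_value = hypergeom.sf(overlap - 1, genome_size, pathway_size, upregulated)
    print("enrichment p-value: %.3g" % p_value)
    ```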

  5. [Information management in multicenter studies: the Brazilian longitudinal study for adult health].

    PubMed

    Duncan, Bruce Bartholow; Vigo, Álvaro; Hernandez, Émerson; Luft, Vivian Cristine; Ahlert, Hubert; Bergmann, Kaiser; Mota, Eduardo

    2013-06-01

    Information management in large multicenter studies requires a specialized approach. The Estudo Longitudinal da Saúde do Adulto (ELSA-Brasil - Brazilian Longitudinal Study for Adult Health) has created a Datacenter to enter and manage its data. The aim of this paper is to describe the steps involved, including the information entry, transmission and management methods. A web system was developed to allow, in a safe and confidential way, online data entry, checking and editing, as well as the incorporation of data collected on paper. Additionally, a Picture Archiving and Communication System was implemented and customized for echocardiography and retinography. It stores the images received from the Investigation Centers and makes them available at the Reading Centers. Finally, data extraction and cleaning processes were developed to create databases in formats that enable analyses in multiple statistical packages.

  6. Development Of New Databases For Tsunami Hazard Analysis In California

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Barberopoulou, A.; Borrero, J. C.; Bryant, W. A.; Dengler, L. A.; Goltz, J. D.; Legg, M.; McGuire, T.; Miller, K. M.; Real, C. R.; Synolakis, C.; Uslu, B.

    2009-12-01

    The California Geological Survey (CGS) has partnered with other tsunami specialists to produce two statewide databases to facilitate the evaluation of tsunami hazard products for both emergency response and land-use planning and development. A robust, State-run tsunami deposit database is being developed that complements and expands on existing databases from the National Geophysical Data Center (global) and the USGS (Cascadia). Whereas these existing databases focus on references or individual tsunami layers, the new State-maintained database concentrates on the location and contents of individual borings/trenches that sample tsunami deposits. These data provide an important observational benchmark for evaluating the results of tsunami inundation modeling. CGS is collaborating with and sharing the database entry form with other states to encourage its continued development beyond California’s coastline so that historic tsunami deposits can be evaluated on a regional basis. CGS is also developing an internet-based, tsunami source scenario database and forum where tsunami source experts and hydrodynamic modelers can discuss the validity of tsunami sources and their contribution to hazard assessments for California and other coastal areas bordering the Pacific Ocean. The database includes all distant and local tsunami sources relevant to California starting with the forty scenarios evaluated during the creation of the recently completed statewide series of tsunami inundation maps for emergency response planning. Factors germane to probabilistic tsunami hazard analyses (PTHA), such as event histories and recurrence intervals, are also addressed in the database and discussed in the forum. Discussions with other tsunami source experts will help CGS determine what additional scenarios should be considered in PTHA for assessing the feasibility of generating products of value to local land-use planning and development.

  7. Development of multi-mission satellite data systems at the German Remote Sensing Data Centre

    NASA Astrophysics Data System (ADS)

    Lotz-Iwen, H. J.; Markwitz, W.; Schreier, G.

    1998-11-01

    This paper focuses on conceptual aspects of the access to multi-mission remote sensing data by online catalogue and information systems. The system ISIS of the German Remote Sensing Data Centre is described as an example of a user interface to earth observation data. ISIS has been designed to support international scientific research as well as operational applications by offering online access to the database via public networks. It provides catalogue retrieval, visualisation and transfer of image data, and is integrated in international activities dedicated to catalogue and archive interoperability. Finally, an outlook is given on international projects dealing with access to remote sensing data in distributed archives.

  8. Evolutionary grinding model for nanometric control of surface roughness for aspheric optical surfaces.

    PubMed

    Han, Jeong-Yeol; Kim, Sug-Whan; Han, Inwoo; Kim, Geon-Hee

    2008-03-17

    A new evolutionary grinding process model has been developed for nanometric control of material removal from an aspheric surface of a Zerodur substrate. The model incorporates novel control features such as i) a growing database; ii) an evolving, multi-variable regression equation; and iii) an adaptive correction factor for the target surface roughness (Ra) of the next machine run. This process model demonstrated a unique evolutionary controllability of machining performance, resulting in a final grinding accuracy (i.e., averaged difference between target and measured surface roughness) of -0.2 ± 2.3 (σ) nm Ra over seven trial machine runs for target surface roughnesses ranging from 115 nm to 64 nm Ra.
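
    The control loop described, a growing database feeding an evolving regression with an adaptive correction factor, can be sketched as below; the predictor variables, data values, and correction rule are illustrative stand-ins for the paper's machine parameters.

    ```python
    # Sketch of the evolutionary control loop: after each machine run, append
    # the result to a growing database, refit a multi-variable regression,
    # and derive a correction factor toward the next target roughness.
    import numpy as np

    database = []   # rows: (feed_rate, wheel_speed, measured_Ra), invented units

    def refit(db):
        """Least-squares fit Ra ~ a*feed + b*speed + c over all runs so far."""
        arr = np.asarray(db)
        A = np.column_stack([arr[:, 0], arr[:, 1], np.ones(len(arr))])
        coeffs, *_ = np.linalg.lstsq(A, arr[:, 2], rcond=None)
        return coeffs

    runs = [(1.0, 30.0, 110.0), (0.8, 32.0, 95.0), (0.6, 35.0, 70.0)]
    for run, (feed, speed, measured) in enumerate(runs, start=1):
        database.append((feed, speed, measured))
        if run >= 3:                          # need enough runs before fitting
            a, b, c = refit(database)
            target = 64.0                     # nm Ra for the next run
            predicted = a * feed + b * speed + c
            correction = target / predicted   # adaptive correction factor
            print("run %d: correction factor %.2f" % (run, correction))
    ```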

  9. Sharing Water Data to Encourage Sustainable Choices in Areas of the Marcellus Shale

    NASA Astrophysics Data System (ADS)

    Brantley, S. L.; Abad, J. D.; Vastine, J.; Yoxtheimer, D.; Wilderman, C.; Vidic, R.; Hooper, R. P.; Brasier, K.

    2012-12-01

    Natural gas sourced from shales but stored in more permeable formations has long been exploited as an energy resource. Now, however, gas is exploited directly from the low-porosity and low-permeability shale reservoirs through the use of hydrofracturing. Hydrofracturing is not a new technique: it has long been utilized in the energy industry to promote flow of oil and gas from traditional reservoirs. To exploit gas in reservoirs such as the Marcellus shale in PA, hydrofracturing is paired with directional drilling. Such hydrofracturing utilizes large volumes of water to increase porosity in the shale formations at depth. Small concentrations of chemicals are added to the water to improve the formation and maintenance of the fractures. Significant public controversy has developed in response to the use of hydrofracturing, especially in the northeastern states underlain by the Marcellus shale, where some citizens and scientists question whether shale gas recovery will contaminate local surface and ground waters. Researchers, government agencies, and citizen scientists in Pennsylvania are teaming up to run the ShaleNetwork (www.shalenetwork.org), an NSF-funded research collaboration network that is currently finding, collating, sharing, publishing, and exploring data related to water quality and quantity in areas that are exploiting shale gas. The effort, focussed initially on Pennsylvania, is now developing the ShaleNetwork database that can be accessed through HydroDesktop in the CUAHSI Hydrologic Information System. In the first year since inception, the ShaleNetwork ran a workshop and reached eight conclusions, largely focussed on issues related to the sources, entry, and use of data. First, the group discovered that extensive water data is available in areas of shale gas. Second, participants agreed that the Shale Network team should partner with state agencies and industry to move datasets online. Third, participants discovered that the database allows users to assess data gaps. Fourth, the team was encouraged to search for data that plug those gaps. Fifth, the database should be easily sustained by others long-term if the Shale Network team simplifies the process of uploading data and finds ways to create community buy-in or incentives for data uploads. Sixth, the database itself and the workshops for the database should drive future agreement about analytical protocols. Seventh, the database is already encouraging other groups to publish data online. Finally, a user interface is needed that is easier and more accessible for citizens to use. Overall, it is clear that sharing data is one way to build bridges among decision makers, scientists, and citizens to understand issues related to sustainable development of energy resources in the face of issues related to water quality and quantity.

  10. PROGRESS TOWARDS NEXT GENERATION, WAVEFORM BASED THREE-DIMENSIONAL MODELS AND METRICS TO IMPROVE NUCLEAR EXPLOSION MONITORING IN THE MIDDLE EAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, B; Peter, D; Covellone, B

    2009-07-02

    Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone, as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the initial starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are done at shorter periods (25 s). Synthetics from the 1D model were created through mode summations, while those from the 3D simulations were created using the spectral element method. To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations during the adjoint inversion process, the sources will be reexamined and relocated to further reduce mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while vastly improved compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.

  11. Development of a database for Louisiana highway bridge scour data : technical summary.

    DOT National Transportation Integrated Search

    1999-10-01

    The objectives of the project included: 1) develop a database with manipulation capabilities such as data retrieval, visualization, and update; and 2) input the existing scour data from DOTD files into the database.

  12. Time Request for the Finalization of a BACT Determination for a New Emissions Source

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  13. Revised Final Determination, on Reconsideration, Regarding the Applicability of the Clean Air Act's NSPS and PSD

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  14. Requirement to Publish All Significant Final Actions Under Title I of The Clean Air Act

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  15. Response to Letter by Golden Aluminum Requesting EPA Reconsider a Final PSD Applicability Determination

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  16. Technology Insertion-Engineering Services Process Characterization. Task Order No. 1. Book 2 of 5. Database Documentation Book. OO-ALC MANPRA (C5 Main Landing Gear - WCD’S)

    DTIC Science & Technology

    1989-12-15

    Task Order No. 1, Book 2 of 5: database documentation book, OO-ALC MANPRA (C5 main landing gear WCDs). Contract summary report, 15 December 1989.

  17. SU-E-T-255: Development of a Michigan Quality Assurance (MQA) Database for Clinical Machine Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, D

    Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to the review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using a Microsoft Access database and the Visual Basic for Applications (VBA) programming interface. Separate modules were written for the accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically rewritten at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that specify test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.
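
    The dynamically rewritten SQL described above can be illustrated in miniature; the sketch uses Python and SQLite in place of the Access/VBA stack, and the QA table layout and test names are hypothetical.

    ```python
    # Miniature illustration of dynamically built QA queries: the SQL string
    # is rewritten at run time from whichever filters were supplied.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE qa (machine TEXT, test TEXT, value REAL, run_date TEXT)")
    conn.executemany("INSERT INTO qa VALUES (?, ?, ?, ?)", [
        ("linac1", "output", 1.002, "2014-06-01"),
        ("linac2", "output", 0.998, "2014-06-01"),
    ])

    def qa_query(machine=None, test=None):
        clauses, params = [], []
        if machine:
            clauses.append("machine = ?")
            params.append(machine)
        if test:
            clauses.append("test = ?")
            params.append(test)
        sql = "SELECT * FROM qa"
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return conn.execute(sql, params).fetchall()

    print(qa_query(test="output"))   # aggregate one test across machines
    ```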

  18. Communications System Architecture Development for Air Traffic Management and Aviation Weather Information Dissemination

    NASA Technical Reports Server (NTRS)

    Gallagher, Seana; Olson, Matt; Blythe, Doug; Heletz, Jacob; Hamilton, Griff; Kolb, Bill; Homans, Al; Zemrowski, Ken; Decker, Steve; Tegge, Cindy

    2000-01-01

    This document is the NASA AATT Task Order 24 Final Report. NASA Research Task Order 24 calls for the development of eleven distinct task reports. Each task was a necessary exercise in the development of comprehensive communications systems architecture (CSA) for air traffic management and aviation weather information dissemination for 2015, the definition of the interim architecture for 2007, and the transition plan to achieve the desired End State. The eleven tasks are summarized along with the associated Task Order reference. The output of each task was an individual task report. The task reports that make up the main body of this document include Task 5, Task 6, Task 7, Task 8, Task 10, and Task 11. The other tasks provide the supporting detail used in the development of the architecture. These reports are included in the appendices. The detailed user needs, functional communications requirements and engineering requirements associated with Tasks 1, 2, and 3 have been put into a relational database and are provided electronically.

  19. A web application to support telemedicine services in Brazil.

    PubMed

    Barbosa, Ana Karina P; de A Novaes, Magdala; de Vasconcelos, Alexandre M L

    2003-01-01

    This paper describes a system that has been developed to support telemedicine activities in Brazil, a country with serious problems in the delivery of health services. The system is part of the broader Tele-health Project, which has been developed to make health services more accessible to the low-income population in the northeast region. The HealthNet system is based upon a pilot area that uses fetal and pediatric cardiology. This article describes the system's conceptual model, including the tele-diagnosis and second medical opinion services, as well as its architecture and development stages. The system model describes both collaboration tools used asynchronously, such as discussion forums, and synchronous tools, such as videoconference services. Web-based and free-of-charge tools, such as Java and the MySQL database, are utilized for implementation. Furthermore, an interface with Electronic Patient Record (EPR) systems using Extensible Markup Language (XML) technology is also proposed. Finally, considerations concerning the development and implementation process are presented.

  20. Environmental applications based on GIS and GRID technologies

    NASA Astrophysics Data System (ADS)

    Demontis, R.; Lorrai, E.; Marrone, V. A.; Muscas, L.; Spanu, V.; Vacca, A.; Valera, P.

    2009-04-01

    In the last decades, the collection and use of environmental data have increased enormously in a wide range of applications. Simultaneously, the explosive development of information technology and its ever wider data accessibility have made it possible to store and manipulate huge quantities of data. In this context, the GRID approach is emerging worldwide as a tool for provisioning a computational task with administratively distant resources. The aim of this paper is to present three environmental applications (Land Suitability, Desertification Risk Assessment, Georesources and Environmental Geochemistry) foreseen within the AGISGRID (Access and query of a distributed GIS/Database within the GRID infrastructure, http://grida3.crs4.it/enginframe/agisgrid/index.xml) activities of the GRIDA3 (Administrator of sharing resources for data analysis and environmental applications, http://grida3.crs4.it) project. This project, co-funded by the Italian Ministry of Research, is based on the use of shared environmental data through GRID technologies, accessible by a WEB interface, and is aimed at public and private users in the field of environmental management and land use planning. The technologies used for AGISGRID include: the client-server middleware iRODS™ (Integrated Rule-Oriented Data System) (https://irods.org); the EnginFrame system (http://www.nice-italy.com/main/index.php?id=32), the grid portal that supplies a framework to make the developed GRID applications available via Intranet/Internet; the GIS software GRASS (Geographic Resources Analysis Support System) (http://grass.itc.it); the relational database PostgreSQL (http://www.posgresql.org) with the PostGIS spatial database extension; and the open-source multiplatform Mapserver (http://mapserver.gis.umn.edu), used to represent the geospatial data through typical WEB GIS functionalities. Three GRID nodes are directly involved in the applications: the application workflow is implemented at CRS4 (Pula, southern Sardinia, Italy), the soil database is managed at the DISTER node (Cagliari, southern Sardinia, Italy), and the geochemical database is managed at the DIGITA node (Cagliari, southern Sardinia, Italy). The input data are files (ASCII raster format) and database tables. The raster files have been zipped and stored in iRODS. The tables are imported into a PostgreSQL database and accessed by the Rule-oriented Database Access (RDA) system available for PostgreSQL in iRODS 1.1. From the EnginFrame portal it is possible to view and use the applications through three services: "Upload Data", "View Data and Metadata", and "Execute Application". The Land Suitability application, based on the FAO framework for land evaluation, produces suitability maps (at the scale 1:10,000) for 11 different possible alternative uses. The maps, in ASCII raster format, are downloadable by the user and viewable with Mapserver. This application has been implemented in an area of southern Sardinia (Monastir) and may be useful for directing municipal urban planning towards rational land use. The Desertification Risk Assessment application produces, by means of key biophysical and socioeconomic indicators, a final combined map showing critical, fragile, and potential Environmentally Sensitive Areas (ESAs) with respect to desertification. This application has been implemented in an area of south-west Sardinia (Muravera).
The final sensitivity index is obtained as the geometric mean of four quality indices: SQI (Soil Quality Index), CQI (Climate Quality Index), VQI (Vegetation Quality Index) and MQI (Management Quality Index). The final result, ESAs = (SQI × CQI × VQI × MQI)^(1/4), is a map at the scale 1:50,000, in ASCII raster format, downloadable by the user and viewable with Mapserver. This type of map may be useful for directing land planning at the catchment basin level. The Georesources and Environmental Geochemistry application, whose test is in progress in the area of Muravera (south-west Sardinia) through stream sediment sampling, aims at producing maps that define, with high precision, areas (hydrographic basins) where the values of a given element exceed the lithological background (i.e., are geochemically anomalous). Such a product has a double purpose. First, it identifies releasing sources and may be useful for the necessary remediation actions if they lie in areas historically subject to more or less intense anthropic activities. On the other hand, if these sources are of natural origin, they could also be interpreted as ore mineral occurrences. In the latter case, the study of these occurrences could lead to the discovery of economic ore bodies of small-to-medium size (at least in the present target area) and consequently to the revival of a local mining industry.
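
    A raster version of the ESAs calculation, the per-cell geometric mean of the four quality indices, might look like the sketch below; the index values are invented.

    ```python
    # Geometric-mean combination of the four quality indices into the ESAs
    # sensitivity index, per cell of a raster grid. Values are invented.
    import numpy as np

    sqi = np.array([[1.2, 1.4], [1.1, 1.6]])   # Soil Quality Index
    cqi = np.array([[1.3, 1.3], [1.0, 1.5]])   # Climate Quality Index
    vqi = np.array([[1.1, 1.5], [1.2, 1.4]])   # Vegetation Quality Index
    mqi = np.array([[1.0, 1.4], [1.3, 1.5]])   # Management Quality Index

    esas = (sqi * cqi * vqi * mqi) ** 0.25     # ESAs = (SQI*CQI*VQI*MQI)^(1/4)
    print(esas)
    ```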

  1. Analytical Design Package (ADP2): A computer aided engineering tool for aircraft transparency design

    NASA Technical Reports Server (NTRS)

    Wuerer, J. E.; Gran, M.; Held, T. W.

    1994-01-01

    The Analytical Design Package (ADP2) is being developed as a part of the Air Force Frameless Transparency Program (FTP). ADP2 is an integrated design tool consisting of existing analysis codes and Computer Aided Engineering (CAE) software. The objective of the ADP2 is to develop and confirm an integrated design methodology for frameless transparencies, related aircraft interfaces, and their corresponding tooling. The application of this methodology will generate high confidence for achieving a qualified part prior to mold fabrication. ADP2 is a customized integration of analysis codes, CAE software, and material databases. The primary CAE integration tool for the ADP2 is P3/PATRAN, a commercial-off-the-shelf (COTS) software tool. The open architecture of P3/PATRAN allows customized installations with different applications modules for specific site requirements. Integration of material databases allows the engineer to select a material, and those material properties are automatically called into the relevant analysis code. The ADP2 materials database will be composed of four independent schemas: CAE Design, Processing, Testing, and Logistics Support. The design of ADP2 places major emphasis on the seamless integration of CAE and analysis modules with a single intuitive graphical interface. This tool is being designed to serve and be used by an entire project team, i.e., analysts, designers, materials experts, and managers. The final version of the software will be delivered to the Air Force in Jan. 1994. The Analytical Design Package (ADP2) will then be ready for transfer to industry. The package will be capable of a wide range of design and manufacturing applications.

  2. The Brazilian Portuguese Lexicon: An Instrument for Psycholinguistic Research

    PubMed Central

    Estivalet, Gustavo L.; Meunier, Fanny

    2015-01-01

    In this article, we present the Brazilian Portuguese Lexicon, a new word-based corpus for psycholinguistic and computational linguistic research in Brazilian Portuguese. We describe the corpus development and the specific characteristics of the internet site and database for user access. We also perform distributional analyses of the corpus and comparisons to other current databases. Our main objective was to provide a large, reliable, and useful word-based corpus with a dynamic, easy-to-use, and intuitive interface with free internet access for word and word-criteria searches. We used the Núcleo Interinstitucional de Linguística Computacional’s corpus as the basic data source and developed the Brazilian Portuguese Lexicon by deriving and adding metalinguistic and psycholinguistic information about Brazilian Portuguese words. We obtained a final corpus with more than 30 million word tokens, 215 thousand word types and 25 categories of information about each word. This corpus was made available on the internet via a free-access site with two search engines: a simple search and a complex search. The simple engine basically searches for a list of words, while the complex engine accepts all types of criteria in the corpus categories. The output result presents all entries found in the corpus matching the criteria specified in the input search and can be downloaded as a .csv file. We created a module in the results that delivers basic statistics about each search. The Brazilian Portuguese Lexicon also provides a pseudoword engine and specific tools for linguistic and statistical analysis. Therefore, the Brazilian Portuguese Lexicon is a convenient instrument for stimulus search, selection, control, and manipulation in psycholinguistic experiments, and it is also a powerful database for computational linguistics research and language modeling related to lexicon distribution, functioning, and behavior. PMID:26630138

  3. The Brazilian Portuguese Lexicon: An Instrument for Psycholinguistic Research.

    PubMed

    Estivalet, Gustavo L; Meunier, Fanny

    2015-01-01

    In this article, we present the Brazilian Portuguese Lexicon, a new word-based corpus for psycholinguistic and computational linguistic research in Brazilian Portuguese. We describe the corpus development and the specific characteristics of the internet site and database for user access. We also perform distributional analyses of the corpus and comparisons to other current databases. Our main objective was to provide a large, reliable, and useful word-based corpus with a dynamic, easy-to-use, and intuitive interface with free internet access for word and word-criteria searches. We used the Núcleo Interinstitucional de Linguística Computacional's corpus as the basic data source and developed the Brazilian Portuguese Lexicon by deriving and adding metalinguistic and psycholinguistic information about Brazilian Portuguese words. We obtained a final corpus with more than 30 million word tokens, 215 thousand word types and 25 categories of information about each word. This corpus was made available on the internet via a free-access site with two search engines: a simple search and a complex search. The simple engine basically searches for a list of words, while the complex engine accepts all types of criteria in the corpus categories. The output result presents all entries found in the corpus matching the criteria specified in the input search and can be downloaded as a .csv file. We created a module in the results that delivers basic statistics about each search. The Brazilian Portuguese Lexicon also provides a pseudoword engine and specific tools for linguistic and statistical analysis. Therefore, the Brazilian Portuguese Lexicon is a convenient instrument for stimulus search, selection, control, and manipulation in psycholinguistic experiments, and it is also a powerful database for computational linguistics research and language modeling related to lexicon distribution, functioning, and behavior.
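
    A complex, multi-criteria search of the kind the site offers can be sketched with pandas; the columns and rows below are invented stand-ins for the lexicon's 25 fields.

    ```python
    # Sketch of a complex (multi-criteria) search over a word-based corpus,
    # with a .csv export analogous to the site's download feature.
    import pandas as pd

    lexicon = pd.DataFrame({
        "word": ["casa", "casas", "cantar", "cantou"],
        "frequency_per_million": [512.3, 140.8, 35.1, 12.4],
        "length": [4, 5, 6, 6],
        "grammatical_class": ["noun", "noun", "verb", "verb"],
    })

    # Complex search: verbs of exactly six letters above a frequency floor.
    hits = lexicon[(lexicon["grammatical_class"] == "verb")
                   & (lexicon["length"] == 6)
                   & (lexicon["frequency_per_million"] > 20)]
    hits.to_csv("results.csv", index=False)
    print(hits)
    ```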

  4. Medical applications: a database and characterization of apps in Apple iOS and Android platforms.

    PubMed

    Seabrook, Heather J; Stromer, Julie N; Shevkenek, Cole; Bharwani, Aleem; de Grood, Jill; Ghali, William A

    2014-08-27

    Medical applications (apps) for smart phones and tablet computers are growing in number and are commonly used in healthcare. In this context, there is a need for a diverse community of app users, medical researchers, and app developers to better understand the app landscape. In mid-2012, we undertook an environmental scan and classification of the medical app landscape on the two dominant platforms by searching the medical category of the Apple iTunes and Google Play app download sites. We identified target audiences, functions, costs and content themes using app descriptions and captured these data in a database. We only included apps released or updated between October 1, 2011 and May 31, 2012, with a primary "medical" app store categorization, in English, that contained health or medical content. Our sample of Android apps was limited to the most popular apps in the medical category. Our final sample of Apple iOS (n = 4561) and Android (n = 293) apps illustrates a diverse medical app landscape. The proportion of Apple iOS apps for the public (35%) and for physicians (36%) is similar. Few Apple iOS apps specifically target nurses (3%). Among the Android apps, those targeting the public dominated our sample (51%). The distribution of app functions is similar on both platforms, with reference being the most common function. Most app functions and content themes vary considerably by target audience. Social media apps are more common for patients and the public, while conference apps target physicians. We characterized existing medical apps and illustrated their diversity in terms of target audience, main functions, cost and healthcare topic. The resulting app database is a resource for app users, app developers and health informatics researchers.

  5. A database of body-only computer-generated pictures of women for body-image studies: Development and preliminary validation.

    PubMed

    Moussally, Joanna M; Rochat, Lucien; Posada, Andrés; Van der Linden, Martial

    2017-02-01

    The body-shape-related stimuli used in most body-image studies have several limitations (e.g., a lack of pilot validation procedures and the use of non-body-shape-related control/neutral stimuli). We therefore developed a database of 61 computer-generated body-only pictures of women, wherein bodies were methodically manipulated in terms of fatness versus thinness. Eighty-two young women assessed the pictures' attractiveness, beauty, harmony (valence ratings), and body shape (assessed on a thinness/fatness axis), providing normative data for valence and body shape ratings. First, stimuli manipulated for fatness versus thinness conveyed comparable emotional intensities regarding the valence and body shape ratings. Second, different subcategories of stimuli were obtained on the basis of variations in body shape and valence judgments. Fat and thin bodies were distributed into several subcategories depending on their valence ratings, and a subcategory containing stimuli that were neutral in terms of valence and body shape was identified. Interestingly, at a descriptive level, the thinness/fatness manipulations of the bodies were in a curvilinear relationship with the valence ratings: Thin bodies were not only judged as positive, but also as negative when their estimated body mass indexes (BMIs) decreased too much. Finally, convergent validity was assessed by exploring the impacts of body-image-related variables (BMI, thin-ideal internalization, and body dissatisfaction) on participants' judgments of the bodies. Valence judgments, but not body shape judgments, were influenced by the participants' levels of thin-ideal internalization and body dissatisfaction. Participants' BMIs did not significantly influence their judgments. Given these findings, this database contains relevant material that can be used in various fields, primarily for studies of body-image disturbance or eating disorders.

  6. Toward high-throughput genotyping: dynamic and automatic software for manipulating large-scale genotype data using fluorescently labeled dinucleotide markers.

    PubMed

    Li, J L; Deng, H; Lai, D B; Xu, F; Chen, J; Gao, G; Recker, R R; Deng, H W

    2001-07-01

    To efficiently manipulate large amounts of genotype data generated with fluorescently labeled dinucleotide markers, we developed a Microsoft database management system, which offers several advantages. First, it accommodates the dynamic nature of the accumulation of genotype data during the genotyping process; some data need to be confirmed or replaced through repeat lab procedures. With the system, the raw genotype data can be imported easily and continuously and incorporated into the database during a genotyping process that may continue over an extended period of time in large projects. Second, almost all of the procedures are automatic, including autocomparison of the raw data read by different technicians from the same gel, autoadjustment among the allele fragment-size data from cross-runs or cross-platforms, autobinning of alleles, and autocompilation of genotype data for suitable programs to perform inheritance checks in pedigrees. Third, the system provides functions to track electrophoresis gel files to locate gel or sample sources for any resultant genotype data, which is extremely helpful for double-checking the consistency of raw and final data and for directing repeat experiments. In addition, the user-friendly graphic interface renders the processing of large amounts of data much less labor-intensive. Furthermore, the system has built-in mechanisms to detect some genotyping errors and to assess the quality of genotype data, which are then summarized in automatically generated statistical reports. The system can easily handle >500,000 genotype data entries, a number more than sufficient for typical whole-genome linkage studies. The modules and programs we developed can be extended to other database platforms, such as Microsoft SQL Server, if the capability to handle still greater quantities of genotype data simultaneously is desired.
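
    The autobinning step, snapping measured fragment sizes to the ladder of a dinucleotide (2-bp) repeat, can be sketched as follows; the anchor offset and measured sizes are illustrative.

    ```python
    # Sketch of automatic allele binning for a dinucleotide (2-bp repeat)
    # marker: snap each measured fragment size to the nearest rung of the
    # 2-bp ladder anchored at an illustrative offset.
    def bin_allele(size, offset=100.0, repeat=2.0):
        """Return the fragment size snapped to the nearest repeat-unit bin."""
        return offset + round((size - offset) / repeat) * repeat

    for measured in [118.7, 121.2, 120.9]:   # raw sizes read from the gel
        print(measured, "->", bin_allele(measured))
    ```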

  7. User Generated Spatial Content Sources for Land Use/Land Cover Validation Purposes: Suitability Analysis and Integration Model

    NASA Astrophysics Data System (ADS)

    Estima, Jacinto Paulo Simoes

    Traditional geographic information has been produced by mapping agencies and corporations using highly skilled people as well as expensive precision equipment and procedures, in a very costly approach. The production of land use and land cover databases is just one example of such a traditional approach. On the other side, the amount of geographic information created and shared by citizens through the Web has been increasing exponentially during the last decade, resulting from the emergence and popularization of technologies such as Web 2.0, cloud computing, GPS and smart phones. Such a comprehensive amount of free geographic data may contain valuable information to extract, opening great possibilities for improving significantly the production of land use and land cover databases. In this thesis we explored the feasibility of using geographic data from different user generated spatial content initiatives in the process of land use and land cover database production. Data from Panoramio, Flickr and OpenStreetMap were explored in terms of their spatial and temporal distribution, and their distribution over the different land use and land cover classes. We then proposed a conceptual model to integrate data from suitable user generated spatial content initiatives, based on identified dissimilarities among a comprehensive list of initiatives. Finally, we developed a prototype implementing the proposed integration model, which was then validated by using the prototype to solve four identified use cases. We concluded that data from user generated spatial content initiatives have great value but should be integrated to increase their potential. The feasibility of integrating data from such initiatives in an integration model was demonstrated. Using the developed prototype, the relevance of the integration model was also demonstrated for different use cases.

  8. A comprehensive linear programming tool to optimize formulations of ready-to-use therapeutic foods: an application to Ethiopia.

    PubMed

    Ryan, Kelsey N; Adams, Katherine P; Vosti, Stephen A; Ordiz, M Isabel; Cimo, Elizabeth D; Manary, Mark J

    2014-12-01

    Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed international and national crop and animal food databases was used to create a global and local candidate ingredient database. The database included information about each ingredient regarding nutrient composition, ingredient category, regional availability, and food safety, processing, and price. An LP tool was then designed to compose novel RUTF formulations. For the example case of Ethiopia, the objective was to minimize the ingredient cost of RUTF; the decision variables were ingredient weights and the extent of use of locally available ingredients, and the constraints were nutritional and product-quality related. Of the new RUTF formulations found by the LP tool for Ethiopia, 32 were predicted to be feasible for creating a paste, and these were prepared in the laboratory. Palatable final formulations contained a variety of ingredients, including fish, different dairy powders, and various seeds, grains, and legumes. Nearly all of the macronutrient values calculated by the LP tool differed by <10% from results produced by laboratory analyses, but the LP tool consistently underestimated total energy. The LP tool can be used to develop new RUTF formulations that make more use of locally available ingredients. This tool has the potential to lead to production of a variety of low-cost RUTF formulations that meet international standards and thereby potentially allow more children to be treated for SAM. © 2014 American Society for Nutrition.
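
    The LP formulation, minimizing ingredient cost subject to nutrient and mass constraints, can be sketched with scipy; the ingredients, prices, compositions, and constraint values below are invented, not the study's actual database.

    ```python
    # Sketch of the least-cost formulation problem: choose ingredient weights
    # to minimize cost subject to nutrient constraints. All numbers invented.
    from scipy.optimize import linprog

    # Ingredients: peanut paste, milk powder, sugar, oil (per-gram cost, USD)
    cost = [0.004, 0.006, 0.001, 0.002]
    energy = [5.9, 4.9, 4.0, 8.8]      # kcal per gram
    protein = [0.26, 0.35, 0.0, 0.0]   # g protein per gram

    # For a 100 g batch: total mass = 100 g, energy >= 520 kcal,
    # protein >= 13 g. linprog uses A_ub @ x <= b_ub, so flip the >= rows.
    result = linprog(
        c=cost,
        A_ub=[[-e for e in energy], [-p for p in protein]],
        b_ub=[-520.0, -13.0],
        A_eq=[[1.0, 1.0, 1.0, 1.0]],
        b_eq=[100.0],
        bounds=[(0, None)] * 4,
    )
    print(result.x if result.success else "infeasible")
    ```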

  9. Short tandem repeat profiling: part of an overall strategy for reducing the frequency of cell misidentification.

    PubMed

    Nims, Raymond W; Sykes, Greg; Cottrill, Karin; Ikonomi, Pranvera; Elmore, Eugene

    2010-12-01

    The role of cell authentication in biomedical science has received considerable attention, especially within the past decade. This quality control attribute is now beginning to be given the emphasis it deserves by granting agencies and by scientific journals. Short tandem repeat (STR) profiling, one of a few DNA profiling technologies now available, is being proposed for routine identification (authentication) of human cell lines, stem cells, and tissues. The advantage of this technique over methods such as isoenzyme analysis, karyotyping, human leukocyte antigen typing, etc., is that STR profiling can establish identity to the individual level, provided that the appropriate number and types of loci are evaluated. To best employ this technology, a standardized protocol and a data-driven, quality-controlled, and publicly searchable database will be necessary. This public STR database (currently under development) will enable investigators to rapidly authenticate human-based cultures to the individual from whom the cells were sourced. Use of similar approaches for non-human animal cells will require developing other suitable loci sets. While implementing STR analysis on a more routine basis should significantly reduce the frequency of cell misidentification, additional technologies may be needed as part of an overall authentication paradigm. For instance, isoenzyme analysis, PCR-based DNA amplification, and sequence-based barcoding methods enable rapid confirmation of a cell line's species of origin while screening against cross-contamination, especially when the cells present are not recognized by the species-specific STR method. Karyotyping may also be needed as a supporting tool during establishment of an STR database. Finally, good cell culture practices must always remain a major component of any effort to reduce the frequency of cell misidentification.
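
    The profile comparison behind STR-based authentication is compact enough to state in a few lines. Below is a minimal sketch using the commonly cited Tanabe percent-match formula (twice the shared alleles divided by the total alleles in both profiles), with 80% as the commonly used threshold for concluding shared origin; the loci and allele calls are hypothetical examples, not a real reference profile.

      def tanabe_match(query: dict, reference: dict) -> float:
          """Percent match = 2 * shared alleles / (query alleles + reference alleles)."""
          shared = total_q = total_r = 0
          for locus in set(query) | set(reference):
              q = set(query.get(locus, ()))
              r = set(reference.get(locus, ()))
              shared += len(q & r)
              total_q += len(q)
              total_r += len(r)
          return 100.0 * 2 * shared / (total_q + total_r)

      # Hypothetical 3-locus profiles (real panels evaluate 8 or more core loci).
      query = {"TH01": (6, 9.3), "D5S818": (11, 12), "TPOX": (8, 8)}
      reference = {"TH01": (6, 9.3), "D5S818": (11, 13), "TPOX": (8, 8)}

      score = tanabe_match(query, reference)
      print(f"Tanabe match: {score:.1f}% -> {'match' if score >= 80 else 'no match'}")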

  10. Biomedical informatics: development of a comprehensive data warehouse for clinical and genomic breast cancer research.

    PubMed

    Hu, Hai; Brzeski, Henry; Hutchins, Joe; Ramaraj, Mohan; Qu, Long; Xiong, Richard; Kalathil, Surendran; Kato, Rand; Tenkillaya, Santhosh; Carney, Jerry; Redd, Rosann; Arkalgudvenkata, Sheshkumar; Shahzad, Kashif; Scott, Richard; Cheng, Hui; Meadow, Stephen; McMichael, John; Sheu, Shwu-Lin; Rosendale, David; Kvecher, Leonid; Ahern, Stephen; Yang, Song; Zhang, Yonghong; Jordan, Rick; Somiari, Stella B; Hooke, Jeffrey; Shriver, Craig D; Somiari, Richard I; Liebman, Michael N

    2004-10-01

    The Windber Research Institute is an integrated high-throughput research center employing clinical, genomic and proteomic platforms that produce terabyte levels of data. We use biomedical informatics technologies to integrate all of these operations. This report covers a multi-year, multi-phase hybrid data warehouse project currently under development at the Institute. The purpose of the warehouse is to host the terabyte levels of internally generated experimental data as well as data from public sources. We have previously reported on the phase I development, which integrated limited internal data sources and selected public databases. Currently, we are completing phase II development, which integrates our internal automated data sources and develops visualization tools to query across these data types. This paper summarizes our clinical and experimental operations, the data warehouse development, and the challenges we have faced. In phase III we plan to federate additional manually maintained internal and public data sources, and then to develop and adapt more data analysis and mining tools. We expect that the final implementation of the data warehouse will greatly facilitate biomedical informatics research.

  11. Web Database Development: Implications for Academic Publishing.

    ERIC Educational Resources Information Center

    Fernekes, Bob

    This paper discusses the preliminary planning, design, and development of a pilot project to create an Internet accessible database and search tool for locating and distributing company data and scholarly work. Team members established four project objectives: (1) to develop a Web accessible database and decision tool that creates Web pages on the…

  12. Machine learning and microsimulation techniques on the prognosis of dementia: A systematic literature review.

    PubMed

    Dallora, Ana Luiza; Eivazzadeh, Shahryar; Mendes, Emilia; Berglund, Johan; Anderberg, Peter

    2017-01-01

    Dementia is a complex disorder characterized by poor outcomes for patients and high costs of care. After decades of research, little is known about its mechanisms. Prognostic estimates for dementia can help researchers, patients and public entities deal with this disorder. Thus, health data, machine learning and microsimulation techniques could be employed in developing such estimates. The goal of this paper is to present evidence on the state of the art of studies investigating the prognosis of dementia using machine learning and microsimulation techniques. To achieve this goal we carried out a systematic literature review in which three large databases (PubMed, Scopus and Web of Science) were searched to select studies that employed machine learning or microsimulation techniques for the prognosis of dementia. A single round of backward snowballing was performed to identify further studies. A quality checklist was also employed to assess the quality of the evidence presented by the selected studies, and low-quality studies were removed. Finally, data from the final set of studies were extracted into summary tables. In total, 37 papers were included. The summarized results showed that current research is focused on identifying the patients with mild cognitive impairment (MCI) who will progress to Alzheimer's disease (AD), using machine learning techniques; prediction of conversion from MCI to AD was the dominant theme, and most studies applied ML techniques to neuroimaging data, neuroimaging being the most commonly used type of variable. Microsimulation studies were concerned with cost estimation and had a population-level focus. Only a few data sources were used by most studies, the ADNI database being the most common, and only two studies investigated the prediction of epidemiological aspects of dementia using either ML or MS techniques. Finally, care should be taken when interpreting the reported accuracy of ML techniques, given the studies' different contexts.
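
    For concreteness, the dominant study design the review identifies (classifying whether MCI patients convert to AD from neuroimaging-derived features) can be sketched as below, assuming scikit-learn; the features and labels here are synthetic stand-ins, not data from any reviewed study.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_patients, n_features = 200, 50          # e.g., regional cortical thickness
      X = rng.normal(size=(n_patients, n_features))
      y = rng.integers(0, 2, size=n_patients)   # 1 = converted MCI -> AD

      # Linear SVMs on imaging features are a common choice in such studies.
      model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
      scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
      print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")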

  13. New perspectives in toxicological information management, and the role of ISSTOX databases in assessing chemical mutagenicity and carcinogenicity.

    PubMed

    Benigni, Romualdo; Battistelli, Chiara Laura; Bossa, Cecilia; Tcheremenskaia, Olga; Crettaz, Pierre

    2013-07-01

    Currently, the public has access to a variety of databases containing mutagenicity and carcinogenicity data. These resources are crucial for toxicologists and regulators involved in the risk assessment of chemicals, which requires access to all the relevant literature and the ability to search across toxicity databases using both biological and chemical criteria. Towards the larger goal of screening chemicals for a wide range of toxicity end points of potential interest, publicly available resources across a large spectrum of biological and chemical data space must be effectively harnessed with current and evolving information technologies (i.e. systematised, integrated and mined) if long-term screening and prediction objectives are to be achieved. A key to rapid progress in the field of chemical toxicity databases is combining information technology with the chemical structure as the identifier of the molecules. This permits an enormous range of operations (e.g. retrieving chemicals or chemical classes, describing the content of databases, finding similar chemicals, combining biological and chemical queries, etc.) that more classical databases cannot support. This article describes progress in the technology of toxicity databases, including the concepts of the Chemical Relational Database and Toxicological Standardized Controlled Vocabularies (ontologies). It then describes the ISSTOX cluster of toxicological databases at the Istituto Superiore di Sanità, which consists of freely available databases characterised by the use of modern information technologies and by curation of the quality of the biological data. Finally, this article provides examples of analyses and results made possible by ISSTOX.
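
    The structure-as-identifier operations described above (for example, finding similar chemicals) are typically implemented with molecular fingerprints. Below is a minimal sketch using RDKit's Morgan fingerprints and Tanimoto similarity; the SMILES strings and the tiny in-memory "database" are illustrative, not ISSTOX content.

      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      # A toy stand-in for a chemical relational database keyed by structure.
      database = {
          "benzene": "c1ccccc1",
          "toluene": "Cc1ccccc1",
          "aniline": "Nc1ccccc1",
          "hexane": "CCCCCC",
      }

      query = Chem.MolFromSmiles("c1ccccc1O")  # phenol
      query_fp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)

      # Rank database entries by Tanimoto similarity to the query structure.
      for name, smiles in database.items():
          mol = Chem.MolFromSmiles(smiles)
          fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
          print(f"{name}: {DataStructs.TanimotoSimilarity(query_fp, fp):.2f}")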

  14. Space transfer vehicle concepts and requirements study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Weber, Gary A.

    1991-01-01

    A description of the study in terms of background, objectives, and issues is provided. NASA is currently studying new space exploration initiatives involving both piloted and unpiloted missions to destinations throughout the solar system. Many of these missions require substantial improvements in launch vehicle and upper stage capabilities. This study provides a focused examination of the Space Transfer Vehicles (STV) required to perform these missions using the emerging national launch vehicle definition, the Space Station Freedom (SSF) definition, and the latest mission scenario requirements. The study objectives are to define preferred STV concepts capable of accommodating future exploration missions in a cost-effective manner, determine the technology development (if any) required to perform these missions, and develop a decision database of various programmatic approaches for the development of the STV family of vehicles. Special emphasis was given to examining space basing (stationing reusable vehicles at a space station), examining the piloted lunar mission as the primary design mission, and restricting trade studies to high-performance, near-term cryogenic propellants (LO2/LH2). The study progressed through three distinct 6-month phases. The first phase concentrated on supporting NASA's three-month definition of exploration requirements (the '90-day study') and, during this phase, developed and optimized the space-based point-of-departure (POD) 2.5-stage lunar vehicle. The second phase developed a broad decision database of 95 different vehicle options and transportation architectures. The final phase chose the three most cost-effective architectures and developed point designs to carry to the end of the study. These reference vehicle designs are mutually exclusive and correspond to different national choices about launch vehicles and in-space reusability; there is, however, potential for evolution between concepts.

  15. Multiple endocrine neoplasia type 1 (MEN1): An update of 208 new germline variants reported in the last nine years.

    PubMed

    Concolino, Paola; Costella, Alessandra; Capoluongo, Ettore

    2016-01-01

    This review focuses on the germline MEN1 mutations reported in patients with MEN1 and other hereditary endocrine disorders from 2007 to September 2015. A comprehensive review analyzing the 1336 MEN1 mutations reported in the first decade following the gene's identification was performed by Lemos and Thakker in 2008; no other similar papers are available in the literature. We also consulted the list of Locus-Specific Databases (LSDBs) and found five free online MEN1 mutation databases. 151 articles from the NCBI PubMed literature database were read and evaluated, and a total of 75 MEN1 variants were found. In addition, 67, 22 and 44 novel MEN1 variants were obtained from the ClinVar, MEN1 at Café Variome and HGMD (Human Gene Mutation Database) databases, respectively. A final careful analysis of MEN1 mutations affecting the coding region was performed. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. The Muon Conditions Data Management:. Database Architecture and Software Infrastructure

    NASA Astrophysics Data System (ADS)

    Verducci, Monica

    2010-04-01

    The management of the Muon Conditions Database will be one of the most challenging applications for the Muon System, not only in terms of data volumes and rates, but also in terms of the variety of data stored and their analysis. The Muon conditions database is responsible for storing almost all of the 'non-event' data and detector quality flags needed for debugging detector operations and for performing reconstruction and analysis. In particular for the early data, knowledge of the detector performance and of the corrections in terms of efficiency and calibration will be extremely important for the correct reconstruction of the events. In this work, an overview of the entire Muon conditions database architecture is given, covering the different sources of the data and the storage model used, including the associated database technology. Particular emphasis is given to the Data Quality chain: the flow of the data, the analysis, and the final results are described. In addition, the software interfaces used to access the conditions data are described, in particular within ATHENA, the ATLAS offline reconstruction framework.
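
    Conditions databases of this kind key each stored payload (calibrations, efficiencies, quality flags) to an interval of validity, so that reconstruction fetches the payload covering a given run or timestamp. The sketch below illustrates that lookup pattern generically; it is not the actual ATLAS/ATHENA API, and the run numbers and payload fields are hypothetical.

      import bisect

      # (start_run, payload) pairs sorted by start; each payload is valid
      # until the next entry begins. Values are hypothetical constants.
      conditions = [
          (1000, {"efficiency": 0.91, "quality": "GOOD"}),
          (1500, {"efficiency": 0.88, "quality": "FLAGGED"}),
          (2000, {"efficiency": 0.93, "quality": "GOOD"}),
      ]
      starts = [start for start, _ in conditions]

      def lookup(run_number: int) -> dict:
          """Return the payload whose validity interval covers the run."""
          i = bisect.bisect_right(starts, run_number) - 1
          if i < 0:
              raise KeyError(f"no conditions stored for run {run_number}")
          return conditions[i][1]

      print(lookup(1750))  # -> payload valid from run 1500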

  17. An open experimental database for exploring inorganic materials

    DOE PAGES

    Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus; ...

    2018-04-03

    The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample libraries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface. This paper also describes an HTE approach to generating materials data and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.
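
    The abstract mentions programmatic access through an application programming interface. The sketch below shows what such access typically looks like, assuming the requests library; the endpoint path, query parameters, and response field names are hypothetical placeholders, since the actual HTEM API routes are documented at htem.nrel.gov rather than reproduced here.

      import requests

      BASE = "https://htem.nrel.gov/api"          # hypothetical base URL
      params = {"elements": "Zn,O", "limit": 10}  # hypothetical query parameters

      resp = requests.get(f"{BASE}/samples", params=params, timeout=30)
      resp.raise_for_status()

      for sample in resp.json():
          # Field names are also assumptions, for illustration only.
          print(sample.get("id"), sample.get("composition"), sample.get("band_gap"))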

  18. An open experimental database for exploring inorganic materials.

    PubMed

    Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus; Perkins, John D; White, Robert; Munch, Kristin; Tumas, William; Phillips, Caleb

    2018-04-03

    The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample libraries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface. This paper also describes an HTE approach to generating materials data and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.

  19. An open experimental database for exploring inorganic materials

    PubMed Central

    Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus; Perkins, John D.; White, Robert; Munch, Kristin; Tumas, William; Phillips, Caleb

    2018-01-01

    The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample libraries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface. This paper also describes an HTE approach to generating materials data and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource. PMID:29611842

  20. An open experimental database for exploring inorganic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus

    The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample libraries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials through a web-based user interface and an application programming interface. This paper also describes an HTE approach to generating materials data and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource.
