Sample records for flexible relational database

  1. The CEBAF Element Database and Related Operational Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larrieu, Theodore; Slominski, Christopher; Keesee, Marie

The newly commissioned 12 GeV CEBAF accelerator relies on a flexible, scalable, and comprehensive database to define the accelerator. This database delivers the configuration for CEBAF operational tools, including hardware checkout, the downloadable optics model, control screens, and much more. The presentation will describe the flexible design of the CEBAF Element Database (CED), its features, and assorted use-case examples.

  2. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  3. Sugeno Fuzzy Integral as a Basis for the Interpretation of Flexible Queries Involving Monotonic Aggregates.

    ERIC Educational Resources Information Center

    Bosc, P.; Lietard, L.; Pivert, O.

    2003-01-01

    Considers flexible querying of relational databases. Highlights include SQL languages and basic aggregate operators; Sugeno's fuzzy integral; evaluation examples; and how and under what conditions other aggregate functions could be applied to fuzzy sets in a flexible query. (Author/LRW)
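
The Sugeno integral the abstract refers to has a compact definition over a finite set: sort the satisfaction grades in ascending order, and take the maximum over positions i of the minimum between the i-th grade and the fuzzy measure of the set of items graded at least that high. A minimal sketch in Python, with hypothetical tuple grades and an invented importance measure purely for illustration (not data from the paper):

```python
def sugeno_integral(scores, measure):
    """Sugeno integral of `scores` (item -> grade in [0, 1]) with respect to a
    monotone fuzzy measure `measure` (frozenset of items -> weight in [0, 1])."""
    items = sorted(scores, key=scores.get)          # grades in ascending order
    result = 0.0
    for i, item in enumerate(items):
        upper = frozenset(items[i:])                # items graded >= scores[item]
        result = max(result, min(scores[item], measure[upper]))
    return result

# hypothetical flexible-query evaluation: three tuples with partial
# satisfaction grades, and an invented importance measure on tuple subsets
grades = {"t1": 0.2, "t2": 0.6, "t3": 0.9}
mu = {
    frozenset({"t1", "t2", "t3"}): 1.0,
    frozenset({"t2", "t3"}): 0.7,
    frozenset({"t3"}): 0.4,
}
value = sugeno_integral(grades, mu)                 # max(min(0.2, 1.0), min(0.6, 0.7), min(0.9, 0.4))
```

Only the measure values of the nested "upper" sets are ever consulted, which is why the integral needs the measure on n subsets rather than all 2^n.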

  4. Design and implementation of a twin-family database for behavior genetics and genomics studies.

    PubMed

    Boomsma, Dorret I; Willemsen, Gonneke; Vink, Jacqueline M; Bartels, Meike; Groot, Paul; Hottenga, Jouke Jan; van Beijsterveldt, C E M Toos; Stroet, Therese; van Dijk, Rob; Wertheim, Rien; Visser, Marco; van der Kleij, Frank

    2008-06-01

    In this article we describe the design and implementation of a database for extended twin families. The database does not focus on probands or on index twins, as this approach becomes problematic when larger multigenerational families are included, when more than one set of multiples is present within a family, or when families turn out to be part of a larger pedigree. Instead, we present an alternative approach that uses a highly flexible notion of persons and relations. The relations among the subjects in the database have a one-to-many structure, are user-definable and extendible and support arbitrarily complicated pedigrees. Some additional characteristics of the database are highlighted, such as the storage of historical data, predefined expressions for advanced queries, output facilities for individuals and relations among individuals and an easy-to-use multi-step wizard for contacting participants. This solution presents a flexible approach to accommodate pedigrees of arbitrary size, multiple biological and nonbiological relationships among participants and dynamic changes in these relations that occur over time, which can be implemented for any type of multigenerational family study.
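
The person-and-relation design described above, with no proband-centric tables and user-definable, extendible relation types, can be sketched as three relational tables. This is a toy reconstruction using SQLite; the table and column names are our own invention, not the authors' actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    name      TEXT
);
-- user-definable, extendible relation types (no proband-centric schema)
CREATE TABLE relation_type (
    type_id   INTEGER PRIMARY KEY,
    label     TEXT UNIQUE              -- e.g. 'mother-of', 'twin-of'
);
-- directed person-to-person links; arbitrary pedigrees emerge from the rows
CREATE TABLE relation (
    from_id   INTEGER REFERENCES person(person_id),
    to_id     INTEGER REFERENCES person(person_id),
    type_id   INTEGER REFERENCES relation_type(type_id)
);
""")

# a mother with a pair of twins, expressed purely as relation rows
con.executemany("INSERT INTO person VALUES (?, ?)",
                [(1, "Anna"), (2, "Twin A"), (3, "Twin B")])
con.executemany("INSERT INTO relation_type VALUES (?, ?)",
                [(1, "mother-of"), (2, "twin-of")])
con.executemany("INSERT INTO relation VALUES (?, ?, ?)",
                [(1, 2, 1), (1, 3, 1), (2, 3, 2)])

children = [row[0] for row in con.execute(
    "SELECT p.name FROM relation r JOIN person p ON p.person_id = r.to_id "
    "WHERE r.from_id = 1 AND r.type_id = 1 ORDER BY p.name")]
```

Because pedigree structure lives in rows rather than in the schema, adding a second set of multiples or merging two families into one pedigree requires no schema change, only more rows.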

  5. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft across a broad two-dimensional flight-parameter space. A Kriging surrogate model is constructed using ASE models at a fraction of the grid points in the original model database, after which the ASE model at any flight condition can be obtained simply through surrogate-model interpolation. A greedy sampling algorithm is developed to select, as the next sample point, the one carrying the worst relative error between the surrogate-model prediction and the benchmark model in the frequency domain across all input-output channels. The process is iterated to incrementally improve surrogate-model accuracy until a predetermined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust-load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database, constructed directly from the physics-based tool, with a worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behaviors and their dependence on flight parameters, and b) underscoring the need for and utility of adaptive space-sampling techniques for ASE model database compaction. The present framework extends directly to high-dimensional flight-parameter spaces and can be used to guide ASE model development, model-order reduction, robust control synthesis, and novel vehicle design for flexible aircraft.
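
The greedy sampling loop described above can be sketched in a few lines. For simplicity this toy uses a one-dimensional piecewise-linear interpolant in place of Kriging and an invented benchmark function in place of the ASE models; the stopping logic, adding the worst-relative-error grid point until a tolerance or budget is hit, is the part that mirrors the paper:

```python
import bisect

def interp(xs, ys, x):
    """Piecewise-linear surrogate (a toy stand-in for the paper's Kriging model)."""
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def greedy_sample(model, grid, tol=0.01, budget=20):
    """Start from the grid endpoints, then repeatedly sample the grid point
    where the surrogate's relative error against the benchmark is worst."""
    pts = [(grid[0], model(grid[0])), (grid[-1], model(grid[-1]))]
    for _ in range(budget):
        xs = [x for x, _ in pts]
        ys = [y for _, y in pts]
        candidates = [x for x in grid if x not in xs]
        if not candidates:
            break
        worst, x_new = max(
            (abs(interp(xs, ys, x) - model(x)) / abs(model(x)), x)
            for x in candidates)                    # model(x) >= 1 here, so safe
        if worst <= tol:
            break
        pts.append((x_new, model(x_new)))
        pts.sort()
    return pts

model = lambda x: x * x + 1.0                       # invented stand-in benchmark
grid = [i / 10 for i in range(11)]
samples = greedy_sample(model, grid)
```

On this convex toy function the loop stops well short of sampling the whole grid, which is the database-compaction effect the abstract reports.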

  6. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  7. A searching and reporting system for relational databases using a graph-based metadata representation.

    PubMed

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
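
A toy version of the idea, under our own assumptions (the tables, columns, and foreign keys below are invented, not Anadys's schema): represent foreign-key links as edges of a metadata graph, find a join path between the user's chosen tables with breadth-first search, and generate the SQL from the path so the user never writes a query by hand.

```python
from collections import deque

# hypothetical schema graph: each edge carries its foreign-key join condition
SCHEMA = {
    ("compound", "assay_result"): "compound.id = assay_result.compound_id",
    ("assay_result", "assay"):    "assay_result.assay_id = assay.id",
}

def neighbors(table):
    for (a, b), cond in SCHEMA.items():
        if a == table:
            yield b, cond
        if b == table:
            yield a, cond

def join_path(start, goal):
    """BFS over the metadata graph: the table path linking start to goal."""
    seen, queue = {start}, deque([(start, [])])
    while queue:
        table, conds = queue.popleft()
        if table == goal:
            return conds
        for nxt, cond in neighbors(table):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, conds + [(nxt, cond)]))
    return None                                     # tables not connected

def build_sql(start, goal, columns):
    """Emit a SELECT whose JOIN chain follows the discovered path."""
    sql = f"SELECT {', '.join(columns)} FROM {start}"
    for table, cond in join_path(start, goal):
        sql += f" JOIN {table} ON {cond}"
    return sql
```

A real system would also handle multiple candidate paths and hierarchy in the metadata; BFS over a flat graph is the minimal core of the SQL-construction step.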

  8. Flexible network reconstruction from relational databases with Cytoscape and CytoSQL

    PubMed Central

    2010-01-01

Background Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges, and their attributes can be imported into Cytoscape in various file formats, or directly from external databases through specialized third-party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. Results A new Cytoscape plugin, 'CytoSQL', is developed to connect Cytoscape to any relational database. It allows users to launch SQL ('Structured Query Language') queries from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data into Cytoscape network components. Supported by a set of case studies, we demonstrate the flexibility and power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. Conclusions CytoSQL offers a unified approach to letting Cytoscape interact with relational databases. Thanks to the power of SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL. PMID:20594316

  9. Flexible network reconstruction from relational databases with Cytoscape and CytoSQL.

    PubMed

    Laukens, Kris; Hollunder, Jens; Dang, Thanh Hai; De Jaeger, Geert; Kuiper, Martin; Witters, Erwin; Verschoren, Alain; Van Leemput, Koenraad

    2010-07-01

Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges, and their attributes can be imported into Cytoscape in various file formats, or directly from external databases through specialized third-party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. A new Cytoscape plugin, 'CytoSQL', is developed to connect Cytoscape to any relational database. It allows users to launch SQL ('Structured Query Language') queries from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data into Cytoscape network components. Supported by a set of case studies, we demonstrate the flexibility and power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. CytoSQL offers a unified approach to letting Cytoscape interact with relational databases. Thanks to the power of SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL.
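
The core mechanism, injecting features of an existing network as SQL arguments and converting the returned rows into edges, can be sketched with Python's built-in sqlite3 standing in for a real molecular database (the table, columns, and gene names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE interaction (a TEXT, b TEXT, kind TEXT)")
con.executemany("INSERT INTO interaction VALUES (?, ?, ?)", [
    ("geneA", "geneB", "binds"),
    ("geneB", "geneC", "activates"),
    ("geneD", "geneE", "binds"),
])

def expand_network(con, seed_nodes):
    """Fetch edges touching any current node, injecting the node ids as
    parameterized SQL arguments (the CytoSQL idea: network features
    become query parameters), then convert rows to edge tuples."""
    holes = ", ".join("?" * len(seed_nodes))
    rows = con.execute(
        f"SELECT a, b, kind FROM interaction "
        f"WHERE a IN ({holes}) OR b IN ({holes})",
        list(seed_nodes) * 2)
    return [(a, b, {"interaction": kind}) for a, b, kind in rows]

edges = expand_network(con, ["geneA", "geneC"])
```

Parameter placeholders rather than string concatenation keep the injected node features safe regardless of their content, which matters when node names come from arbitrary database text.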

  10. PeptideDepot: Flexible Relational Database for Visual Analysis of Quantitative Proteomic Data and Integration of Existing Protein Information

    PubMed Central

    Yu, Kebing; Salomon, Arthur R.

    2010-01-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through tandem mass spectrometry (MS/MS). Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to a variety of experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our High Throughput Autonomous Proteomic Pipeline (HTAPP) used in the automated acquisition and post-acquisition analysis of proteomic data. PMID:19834895

  11. Study on the Flexibility in Cross-Border Water Resources Cooperation Governance

    NASA Astrophysics Data System (ADS)

    Liu, Zongrui; Wang, Teng; Zhou, Li

    2018-02-01

A flexible strategy is very important to cross-border cooperation on international river water resources, as it may be employed to reconcile contradictions and ease conflicts. The flexible characteristics of such cooperation can be analyzed and revealed within a flexible strategic-management framework, taking water-related international cooperation protocols from the Transboundary Freshwater Disputes Database (TFDD) as samples and examining the number of cooperation issues, the number of management layers and regulatory agencies in the cooperation organization, and the categories of income (cost) distribution (allocation) modes. The research demonstrates several flexible features of cross-border cooperation on international river water resources: riparian countries tend to select relatively diversified water-related strategies, to construct flexible cooperation organizations with moderate hierarchies from the vertical perspective and simplified administration from the horizontal perspective, and to adopt selective inducement modes that respect 'joint and several liability'.

  12. Optimization of the Controlled Evaluation of Closed Relational Queries

    NASA Astrophysics Data System (ADS)

    Biskup, Joachim; Lochner, Jan-Hendrik; Sonntag, Sebastian

    For relational databases, controlled query evaluation is an effective inference control mechanism preserving confidentiality regarding a previously declared confidentiality policy. Implementations of controlled query evaluation usually lack efficiency due to costly theorem prover calls. Suitably constrained controlled query evaluation can be implemented efficiently, but is not flexible enough from the perspective of database users and security administrators. In this paper, we propose an optimized framework for controlled query evaluation in relational databases, being efficiently implementable on the one hand and relaxing the constraints of previous approaches on the other hand.

  13. Creating flexible work arrangements through idiosyncratic deals.

    PubMed

    Hornung, Severin; Rousseau, Denise M; Glaser, Jürgen

    2008-05-01

    A survey of 887 employees in a German government agency assessed the antecedents and consequences of idiosyncratic arrangements individual workers negotiated with their supervisors. Work arrangements promoting the individualization of employment conditions, such as part-time work and telecommuting, were positively related to the negotiation of idiosyncratic deals ("i-deals"). Worker personal initiative also had a positive effect on i-deal negotiation. Two types of i-deals were studied: flexibility in hours of work and developmental opportunities. Flexibility i-deals were negatively related and developmental i-deals positively related to work-family conflict and working unpaid overtime. Developmental i-deals were also positively related to increased performance expectations and affective organizational commitment, while flexibility i-deals were unrelated to either. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  14. An effective model for store and retrieve big health data in cloud computing.

    PubMed

    Goli-Malekabadi, Zohreh; Sargolzaei-Javan, Morteza; Akbari, Mohammad Kazem

    2016-08-01

The volume of healthcare data, including varied text types, sounds, and images, is increasing day by day. The storage and processing of these data is therefore a necessary and challenging issue. Generally, relational databases are used for storing health data, but they cannot handle its massive volume and diverse nature. This study aimed at presenting a model based on NoSQL databases for the storage of healthcare data. Among the different types of NoSQL databases, document-based databases were selected based on a survey of the nature of health data. The presented model was implemented in a cloud environment to exploit its distribution properties, and the data were distributed across the database by applying sharding. The efficiency of the model was evaluated against the previous relational data model with respect to query time, data preparation, flexibility, and extensibility. The results showed that the presented model performed approximately the same as SQL Server for "read" queries while acting more efficiently than SQL Server for "write" queries. The performance of the presented model was also better than SQL Server in terms of flexibility, data preparation, and extensibility. Based on these observations, the proposed model was more effective than relational databases for handling health data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
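
To illustrate why a document model suits such variable records, here is a minimal in-memory document store in Python. The record fields are invented examples; real document databases such as the one in the study add indexing, sharding, and persistence on top of this basic idea:

```python
# minimal document store: records need not share a schema, unlike the
# rows of a fixed relational table
class DocumentStore:
    def __init__(self):
        self.docs = []

    def insert(self, doc):
        self.docs.append(dict(doc))

    def find(self, **criteria):
        """Return documents whose fields match every criterion; documents
        lacking a field simply never match it."""
        return [d for d in self.docs
                if all(d.get(k) == v for k, v in criteria.items())]

store = DocumentStore()
# heterogeneous health records: free text, numbers, nested imaging metadata
store.insert({"patient": "p1", "type": "note", "text": "stable"})
store.insert({"patient": "p1", "type": "scan",
              "image": {"modality": "MRI", "slices": 32}})
store.insert({"patient": "p2", "type": "note", "text": "follow-up"})

p1_records = store.find(patient="p1")
```

A relational design would need either a sparse wide table or one table per record type plus joins to answer the same "everything about patient p1" query.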

  15. NETMARK: A Schema-less Extension for Relational Databases for Managing Semi-structured Data Dynamically

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, with its SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
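
The schema-less idea, shredding arbitrary hierarchical documents into generic node rows inside a relational database so one keyword query can span element names ("context") and text ("content"), can be sketched with SQLite and the standard library XML parser. The table layout and sample document are our own simplification, not NETMARK's actual design:

```python
import sqlite3
import xml.etree.ElementTree as ET

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER, "
            "context TEXT, content TEXT)")

def shred(xml_text):
    """Decompose an arbitrary XML document into schema-less node rows:
    element name as 'context', element text as 'content'."""
    def walk(elem, parent):
        cur = con.execute(
            "INSERT INTO node (parent, context, content) VALUES (?, ?, ?)",
            (parent, elem.tag, (elem.text or "").strip()))
        for child in elem:
            walk(child, cur.lastrowid)
    walk(ET.fromstring(xml_text), None)

shred("<report><title>Wing flutter test</title>"
      "<section><title>Results</title><p>No flutter observed.</p></section>"
      "</report>")

# one keyword query spanning both context (element names) and content (text)
hits = [row[0] for row in con.execute(
    "SELECT context FROM node WHERE context LIKE ? OR content LIKE ? "
    "ORDER BY id", ("%title%", "%flutter%"))]
```

Because the node table never changes shape, any new document type, XML or HTML alike, is ingested without schema migration; that is the "schema-less extension" in the title.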

  16. Multiresource inventories incorporating GIS, GPS, and database management systems

    Treesearch

    Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du

    2000-01-01

    Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...

  17. New Powder Diffraction File (PDF-4) in relational database format: advantages and data-mining capabilities.

    PubMed

    Kabekkodu, Soorya N; Faber, John; Fawcett, Tim

    2002-06-01

    The International Centre for Diffraction Data (ICDD) is responding to the changing needs in powder diffraction and materials analysis by developing the Powder Diffraction File (PDF) in a very flexible relational database (RDB) format. The PDF now contains 136,895 powder diffraction patterns. In this paper, an attempt is made to give an overview of the PDF-4, search/match methods and the advantages of having the PDF-4 in RDB format. Some case studies have been carried out to search for crystallization trends, properties, frequencies of space groups and prototype structures. These studies give a good understanding of the basic structural aspects of classes of compounds present in the database. The present paper also reports data-mining techniques and demonstrates the power of a relational database over the traditional (flat-file) database structures.

  18. An Extensible "SCHEMA-LESS" Database Framework for Managing High-Throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, with its SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  19. An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)

    2002-01-01

An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, with its SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  20. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data, and transformation of the data into a structured format for analysis, are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches, NoSQL, XML-enabled, and native XML, are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in developing the databases. The results show that the NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility, and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared with the conventional relational database, both demonstrate the potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Solutions for medical databases optimal exploitation.

    PubMed

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, making this technique relevant to a much wider range of medical applications as logistic support for data warehousing. The transformations have low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.
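
The pre-aggregation idea is easy to show with SQLite: materialize totals once at an intermediate level of a dimension hierarchy, then answer coarser roll-up queries from the small summary table instead of scanning the raw facts. The ward/day admission data below is invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE admission (ward TEXT, day TEXT, n INTEGER)")
con.executemany("INSERT INTO admission VALUES (?, ?, ?)", [
    ("cardiology", "2014-03-01", 4),
    ("cardiology", "2014-03-02", 6),
    ("surgery",    "2014-03-01", 3),
])

# practical pre-aggregation: materialize totals at the ward/month level once...
con.execute("""CREATE TABLE admission_month AS
               SELECT ward, substr(day, 1, 7) AS month, SUM(n) AS n
               FROM admission GROUP BY ward, month""")

# ...so coarser roll-ups read the summary table, not the raw fact rows
month_total = con.execute(
    "SELECT SUM(n) FROM admission_month WHERE month = '2014-03'").fetchone()[0]
```

The technique pays off because sums over day-level rows and sums over the month-level summary agree for distributive aggregates like SUM and COUNT; the paper's contribution is transforming irregular (medical) hierarchies so this property holds.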

  2. The MAO NASU Plate Archive Database. Current Status and Perspectives

    NASA Astrophysics Data System (ADS)

    Pakuliak, L. K.; Sergeeva, T. P.

    2006-04-01

The preliminary online version of the database of the MAO NASU plate archive is built on the relational database management system MySQL. It permits easy supplementing of the database with new collections of astronegatives, provides high flexibility in constructing SQL queries to optimize data searches, offers PHP Basic-Authorization-protected access to the administrative interface, and supports a wide range of search parameters. The current status of the database will be reported, and a brief description of the search engine and of the means of supporting database integrity will be given. Methods and means of data verification and tasks for further development will be discussed.

  3. GOBASE—a database of mitochondrial and chloroplast information

    PubMed Central

    O'Brien, Emmet A.; Badidi, Elarbi; Barbasiewicz, Ania; deSousa, Cristina; Lang, B. Franz; Burger, Gertraud

    2003-01-01

    GOBASE is a relational database containing integrated sequence, RNA secondary structure and biochemical and taxonomic information about organelles. GOBASE release 6 (summer 2002) contains over 130 000 mitochondrial sequences, an increase of 37% over the previous release, and more than 30 000 chloroplast sequences in a new auxiliary database. To handle this flood of new data, we have designed and implemented GOpop, a Java system for population and verification of the database. We have also implemented a more powerful and flexible user interface using the PHP programming language. http://megasun.bch.umontreal.ca/gobase/gobase.html. PMID:12519975

  4. Solutions for medical databases optimal exploitation

    PubMed Central

    Branescu, I; Purcarea, VL; Dobrescu, R

    2014-01-01

The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, making this technique relevant to a much wider range of medical applications as logistic support for data warehousing. The transformations have low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases. PMID:24653769

  5. Use of a Relational Database to Support Clinical Research: Application in a Diabetes Program

    PubMed Central

    Lomatch, Diane; Truax, Terry; Savage, Peter

    1981-01-01

A database has been established to support the conduct of clinical research and monitor the delivery of medical care for 1200 diabetic patients as part of the Michigan Diabetes Research and Training Center (MDRTC). Use of an intelligent microcomputer to enter and retrieve the data, and use of a relational database management system (DBMS) to store and manage the data, have provided a flexible, efficient method of both supporting small projects and monitoring the overall activity of the Diabetes Center Unit (DCU). Simplicity of access to data, efficiency in providing data for unanticipated requests, ease of manipulating relations, security, and "logical data independence" were important factors in choosing a relational DBMS. The ability to interface with an interactive statistical program and a graphics program is a major advantage of this system. Our database currently provides support for the operation and analysis of several ongoing research projects.

  6. SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.

    2014-12-01

Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, and significant maintenance and tuning requirements often make databases unattractive in settings characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open-source science software and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open-source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.

  7. LSE-Sign: A lexical database for Spanish Sign Language.

    PubMed

    Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel

    2016-03-01

    The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.

  8. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

    The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.

  9. The Steward Observatory asteroid relational database

    NASA Technical Reports Server (NTRS)

    Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.

    1991-01-01

    The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. The SOARD already has provided data to fulfill requests by members of the astronomical community. The SOARD continues to grow as data is added to the database and new features are added to the program.

  10. Ureteral Avulsion Associated with Ureteroscopy: Insights from the MAUDE Database.

    PubMed

    Tanimoto, Ryuta; Cleary, Ryan C; Bagley, Demetrius H; Hubosky, Scott G

    2016-03-01

    Flexible and semirigid ureteroscopy (URS) are widely performed for the treatment of upper tract calculi and tumors. Ureteral avulsion is a rare, but devastating complication of endoscopic stone removal having multiple possible etiologies. Awareness and avoidance of this rare complication depend on identifying responsible mechanisms. This study examines the situations in which ureteral avulsion occurs as described anonymously in the Manufacturer and User facility Device Experience (MAUDE) database. The MAUDE database was systematically reviewed to account for all reported complications of flexible and semirigid URS. Keywords "ureteroscopy, injury, death, malfunction and other" were entered in the database and medical device reports were reviewed to capture any cases resulting in ureteral avulsion. Attention was paid to the type of ureteroscope involved and the mechanism for avulsion. A total of 104 entries were found detailing the reported complications of flexible and semirigid URS. Ureteral avulsion was clearly noted in six reports with flexible (2) and semirigid ureteroscopes (4). Potential mechanisms included locked deflection of a flexible ureteroscope (1), bunching of the distal bending rubber in a flexible ureteroscope (1), scabbard avulsion (3), and stone basketing (1). Although the incidence of ureteral avulsion cannot truly be determined from this study, some potentially novel mechanisms for this rare complication are observed. This may target future educational efforts to maximize awareness and avoidance of this complication.

  11. Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.

    PubMed

    Arita, Masanori; Suwa, Kazuhiro

    2008-09-17

    In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. In biology, by contrast, the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. reports that a protein pair does and does not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids containing 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches to realize relational operations. The system was written in PHP as an extension of MediaWiki. All modifications are open-source and publicly available. This scheme benefits from both the free-format Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.
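
    The embeddable string-search idea can be sketched in a few lines. This is a hypothetical Python emulation (the actual extension is written in PHP inside MediaWiki): structured '| field = value' lines inside otherwise free-format pages are matched to realize a relational selection. Page names and field values below are invented for illustration.

```python
import re

# Hypothetical half-formatted wiki pages: free text plus structured
# "| field = value" lines, as in a MediaWiki infobox.
PAGES = {
    "Quercetin":  "Free-text notes...\n| formula = C15H10O7\n| species = Allium cepa\n",
    "Naringenin": "More notes...\n| formula = C15H12O5\n| species = Citrus paradisi\n",
}

def select(pages, field, pattern):
    """Relational-style selection: return page titles whose structured
    '| field = value' line has a value matching the given regex."""
    hits = []
    for title, text in pages.items():
        m = re.search(r"^\|\s*%s\s*=\s*(.+)$" % re.escape(field), text, re.M)
        if m and re.search(pattern, m.group(1)):
            hits.append(title)
    return hits

print(select(PAGES, "species", "Citrus"))  # → ['Naringenin']
```

    The free-text part of each page is simply ignored by the search, which is what lets arbitrary prose coexist with queryable structure.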

  12. Search extension transforms Wiki into a relational system: A case for flavonoid metabolite database

    PubMed Central

    Arita, Masanori; Suwa, Kazuhiro

    2008-01-01

    Background: In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. In biology, by contrast, the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. reports that a protein pair does and does not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. Results: To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids containing 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches to realize relational operations. The system was written in PHP as an extension of MediaWiki. All modifications are open-source and publicly available. Conclusion: This scheme benefits from both the free-format Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated. PMID:18822113

  13. FINDING COMMON GROUND IN MANAGING DATA USED IN REGIONAL ENVIRONMENTAL ASSESSMENTS

    EPA Science Inventory

    Evaluating the overall environmental health of a region invariably involves using data-bases from multiple organizations. Several approaches to deal with the related technological and sociological issues have been used by various programs. Flexible data systems are required to de...

  14. Image databases: Problems and perspectives

    NASA Technical Reports Server (NTRS)

    Gudivada, V. Naidu

    1989-01-01

    With the increasing number of computer graphics, image processing, and pattern recognition applications, economical storage, efficient representation and manipulation, and powerful and flexible query languages for retrieval of image data are of paramount importance. These and related issues pertinent to image databases are examined.

  15. The relational database model and multiple multicenter clinical trials.

    PubMed

    Blumenstein, B A

    1989-12-01

    The Southwest Oncology Group (SWOG) chose to use a relational database management system (RDBMS) for the management of data from multiple clinical trials because of the underlying relational model's inherent flexibility and the natural way multiple entity types (patients, studies, and participants) can be accommodated. The tradeoffs to using the relational model as compared to using the hierarchical model include added computing cycles due to deferred data linkages and added procedural complexity due to the necessity of implementing protections against referential integrity violations. The SWOG uses its RDBMS as a platform on which to build data operations software. This data operations software, which is written in a compiled computer language, allows multiple users to simultaneously update the database and is interactive with respect to the detection of conditions requiring action and the presentation of options for dealing with those conditions. The relational model facilitates the development and maintenance of data operations software.
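
    The referential-integrity protections described above are easy to picture with a toy schema. A minimal sketch, assuming invented tables (not SWOG's actual schema), in which the RDBMS itself rejects a patient row that references a nonexistent study:

```python
import sqlite3

# Three entity types (patients, studies, participants) with referential
# integrity enforced by the database rather than by application code.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per connection
con.executescript("""
CREATE TABLE study       (study_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE participant (part_id  INTEGER PRIMARY KEY, name  TEXT);
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    study_id   INTEGER NOT NULL REFERENCES study(study_id),
    part_id    INTEGER NOT NULL REFERENCES participant(part_id)
);
""")
con.execute("INSERT INTO study VALUES (1, 'S-0001')")
con.execute("INSERT INTO participant VALUES (10, 'Site A')")
con.execute("INSERT INTO patient VALUES (100, 1, 10)")       # valid linkage

try:
    con.execute("INSERT INTO patient VALUES (101, 99, 10)")  # no study 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # the RDBMS protects referential integrity
```

    Deferring such linkages to query time is the "added computing cycles" trade-off the abstract mentions; the payoff is that no application bug can orphan a patient record.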

  16. Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics

    PubMed Central

    Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.

    2012-01-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
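
    The view mechanism behind such resources can be sketched abstractly. CouchDB views are map functions written in JavaScript and executed by the server; the following is only a hypothetical Python emulation over invented gene/drug documents, showing how schema-free records are indexed by an emitted key:

```python
# Schema-free documents: each record carries only the fields it needs,
# in the spirit of gene- or drug-centric annotation resources.
docs = [
    {"_id": "TP53",      "type": "gene", "chrom": "17", "aliases": ["P53"]},
    {"_id": "EGFR",      "type": "gene", "chrom": "7"},
    {"_id": "gefitinib", "type": "drug", "targets": ["EGFR"]},
]

def map_view(docs, map_fn):
    """Emulate a CouchDB map view: the map function emits (key, value)
    rows for each document; results come back sorted by key."""
    rows = []
    for doc in docs:
        rows.extend(map_fn(doc))
    return sorted(rows)

# View: index drug documents by their target gene.
by_target = map_view(
    docs,
    lambda d: [(t, d["_id"]) for t in d.get("targets", [])],
)
print(by_target)  # → [('EGFR', 'gefitinib')]
```

    Documents lacking a `targets` field simply emit nothing, which is how a non-rigid schema and indexed queries coexist.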

  17. Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Management

    NASA Technical Reports Server (NTRS)

    Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri

    2006-01-01

    NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model, utilizing Structured Query Language (SQL), with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards, such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
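
    The decomposition idea, keyword search across both context and content, can be sketched as a flattening of XML into (path, text) rows. This is a hypothetical illustration, not NETMARK's actual Oracle 8i representation:

```python
import xml.etree.ElementTree as ET

# Decompose a semi-structured XML document into (path, text) rows so
# that one relational table supports keyword search over both the
# context (the element path) and the content (the element text).
doc = """<report>
  <section title="audit"><finding>late filing</finding></section>
  <section title="science"><finding>new spectra</finding></section>
</report>"""

def flatten(elem, path=""):
    here = path + "/" + elem.tag
    rows = []
    if elem.text and elem.text.strip():
        rows.append((here, elem.text.strip()))
    for child in elem:
        rows.extend(flatten(child, here))
    return rows

rows = flatten(ET.fromstring(doc))
# Content search: which paths contain the keyword?
print([p for p, t in rows if "spectra" in t])  # → ['/report/section/finding']
```

    In a real object-relational deployment each row would carry a physical-address key back to the node, so matches can be rejoined into whole documents.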

  18. NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.

    PubMed

    Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam

    2014-01-01

    Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data on a single sharing platform is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology and the focus of the study. Phytochemical and bioassay data are also available from many open sources in various standard and customised formats. The aim was to design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, one of the Not only Structured Query Language (NoSQL) databases. Although it does not enforce a schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data formats from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supports a different set of fields for each document. It also allows storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational, normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
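
    The "different set of fields per document" feature is easy to illustrate. A hypothetical Python sketch (invented records; a real deployment would store these in MongoDB via a driver such as PyMongo) of heterogeneous documents that remain queryable through shared keys:

```python
# Each record is a document with its own field set: a field survey
# entry and a curated phytochemistry entry coexist in one collection.
records = [
    {"plant": "Azadirachta indica", "use": "fever",
     "survey": {"district": "Cachar", "year": 2012}},   # primary survey data
    {"plant": "Azadirachta indica", "compound": "nimbin",
     "assay": {"target": "enzyme-X", "ic50_uM": 90.0}}, # curated data (invented values)
    {"plant": "Centella asiatica", "use": "wound healing"},
]

def find(coll, **criteria):
    """MongoDB-like exact-match query over top-level fields."""
    return [d for d in coll
            if all(d.get(k) == v for k, v in criteria.items())]

# Both the survey record and the phytochemical record are retrieved
# through the one field they share.
print(len(find(records, plant="Azadirachta indica")))  # → 2
```

    Documents missing a queried field simply fail the match rather than breaking the query, which is what lets disparate source formats share one collection.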

  19. Design and implementation of relational databases relevant to the diverse needs of a tuberculosis case contact study in the Gambia.

    PubMed

    Jeffries, D J; Donkor, S; Brookes, R H; Fox, A; Hill, P C

    2004-09-01

    The data requirements of a large multidisciplinary tuberculosis case contact study are complex. We describe an ACCESS-based relational database system that meets our rigorous requirements for data entry and validation, while being user-friendly, flexible, exportable, and easy to install on a network or stand alone system. This includes the development of a double data entry package for epidemiology and laboratory data, semi-automated entry of ELISPOT data directly from the plate reader, and a suite of new programmes for the manipulation and integration of flow cytometry data. The double entered epidemiology and immunology databases are combined into a separate database, providing a near-real-time analysis of immuno-epidemiological data, allowing important trends to be identified early and major decisions about the study to be made and acted on. This dynamic data management model is portable and can easily be applied to other studies.
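
    The core of a double data entry package is a field-by-field comparison of two independent entry passes. A minimal sketch with invented fields (not the study's actual ACCESS forms):

```python
# Double data entry: the same record is keyed in twice by different
# operators; any field where the passes disagree is flagged for
# re-verification against the paper form.
def discrepancies(entry1, entry2):
    keys = set(entry1) | set(entry2)
    return {k for k in keys if entry1.get(k) != entry2.get(k)}

pass1 = {"id": "TB-0042", "age": 34, "bcg_scar": "yes"}
pass2 = {"id": "TB-0042", "age": 43, "bcg_scar": "yes"}  # transposed digits

print(sorted(discrepancies(pass1, pass2)))  # → ['age']
```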

  20. The Steward Observatory asteroid relational database

    NASA Technical Reports Server (NTRS)

    Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.

    1992-01-01

    The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. A browse capability allows the user to explore the contents of any data file. SOARD offers, also, an asteroid bibliography containing about 13,000 references. The program has online help as well as user and programmer documentation manuals. SOARD continues to provide data to fulfill requests by members of the astronomical community and will continue to grow as data is added to the database and new features are added to the program.

  1. The Effect of Positive Mood on Flexible Processing of Affective Information.

    PubMed

    Grol, Maud; De Raedt, Rudi

    2017-07-17

    Recent efforts have been made to understand the cognitive mechanisms underlying psychological resilience. Cognitive flexibility in the context of affective information has been related to individual differences in resilience. However, it is unclear whether flexible affective processing is sensitive to mood fluctuations. Furthermore, it remains to be investigated how effects on flexible affective processing interact with the affective valence of information that is presented. To fill this gap, we tested the effects of positive mood and individual differences in self-reported resilience on affective flexibility, using a task switching paradigm (N = 80). The main findings showed that positive mood was related to lower task switching costs, reflecting increased flexibility, in line with previous findings. In line with this effect of positive mood, we showed that greater resilience levels, specifically levels of acceptance of self and life, also facilitated task set switching in the context of affective information. However, the effects of resilience on affective flexibility seem more complex. Resilience tended to relate to more efficient task switching when negative information was preceded by positive information, possibly because the presentation of positive information, as well as positive mood, can facilitate task set switching. Positive mood also influenced costs associated with switching affective valence of the presented information. This latter effect was indicative of a reduced impact of no longer relevant negative information and more impact of no longer relevant positive information. Future research should confirm these effects of individual differences in resilience on affective flexibility, considering the affective valence of the presented information. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Relax with CouchDB--into the non-relational DBMS era of bioinformatics.

    PubMed

    Manyam, Ganiraju; Payton, Michelle A; Roth, Jack A; Abruzzo, Lynne V; Coombes, Kevin R

    2012-07-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Relational databases for rare disease study: application to vascular anomalies.

    PubMed

    Perkins, Jonathan A; Coltrera, Marc D

    2008-01-01

    To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and treatment improvement is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with its treatment outcomes. This design allows for differentiation between treatment responses and the natural course of untreated lesions. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
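
    The subject-lesion-treatment design can be sketched with a toy schema (hypothetical columns, not the ASPO data set). Because treatments attach to individual lesions, a single outer join separates treated lesions from those followed for natural course:

```python
import sqlite3

# One subject has 1..n lesions; treatments attach to a lesion, so
# treated and untreated lesions can be separated cleanly.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE subject   (subject_id INTEGER PRIMARY KEY);
CREATE TABLE lesion    (lesion_id  INTEGER PRIMARY KEY,
                        subject_id INTEGER REFERENCES subject(subject_id),
                        site TEXT);
CREATE TABLE treatment (treatment_id INTEGER PRIMARY KEY,
                        lesion_id  INTEGER REFERENCES lesion(lesion_id),
                        modality TEXT);
INSERT INTO subject VALUES (1);
INSERT INTO lesion  VALUES (1, 1, 'cheek'), (2, 1, 'forearm');
INSERT INTO treatment VALUES (1, 1, 'propranolol');
""")
# Untreated lesions: natural-course observations, not treatment responses.
untreated = con.execute("""
    SELECT l.lesion_id FROM lesion l
    LEFT JOIN treatment t ON t.lesion_id = l.lesion_id
    WHERE t.treatment_id IS NULL
""").fetchall()
print(untreated)  # → [(2,)]
```

    Keying everything off the lesion rather than the subject is what eliminates the data entry redundancy the abstract notes.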

  4. An Electronic Finding Aid Using Extensible Markup Language (XML) and Encoded Archival Description (EAD).

    ERIC Educational Resources Information Center

    Chang, May

    2000-01-01

    Describes the development of electronic finding aids for archives at the University of Illinois, Urbana-Champaign that used XML (extensible markup language) and EAD (encoded archival description) to enable more flexible information management and retrieval than using MARC or a relational database management system. EAD template is appended.…

  5. Accessing and distributing EMBL data using CORBA (common object request broker architecture).

    PubMed

    Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P

    2000-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by PersistenceTM, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems.
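
    The 'live object caching' with an evictor can be sketched generically. This is an illustrative Python cache, not the EMBL-EBI CORBA servers; the accession keys and sequences below are placeholders. Objects are created from the backing store on demand, and the least recently used one is evicted when the cache is full:

```python
from collections import OrderedDict

class LiveObjectCache:
    """Create objects on demand from a loader; evict least recently
    used objects once capacity is exceeded (evictor pattern)."""
    def __init__(self, loader, capacity=2):
        self.loader, self.capacity = loader, capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)          # mark as recently used
        else:
            self.cache[key] = self.loader(key)   # create on demand
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recent
        return self.cache[key]

# Placeholder backing store standing in for the relational database.
store = {"X56734": "ACGT...", "U49845": "GATC..."}
cache = LiveObjectCache(store.get, capacity=1)
cache.get("X56734")
cache.get("U49845")               # exceeds capacity, evicts X56734
print(list(cache.cache))  # → ['U49845']
```

    The point of the pattern is that clients always see "live" objects while the server bounds its memory use, whatever the size of the underlying database.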

  6. Accessing and distributing EMBL data using CORBA (common object request broker architecture)

    PubMed Central

    Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip

    2000-01-01

    Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by PersistenceTM, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259

  7. Artificial intelligence techniques for modeling database user behavior

    NASA Technical Reports Server (NTRS)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.
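
    One simple way to discover access patterns of the kind described is to count attribute co-occurrences across a user's queries and apply a threshold rule. This is a hypothetical sketch (invented attributes, not the adaptive modeling system's actual logic):

```python
from collections import Counter
from itertools import combinations

# Syntactic trace of a user's queries: which attributes were requested
# together. (Invented attribute names for illustration.)
queries = [("name", "orbit"), ("name", "albedo"), ("name", "orbit")]

pair_counts = Counter()
for attrs in queries:
    pair_counts.update(combinations(sorted(attrs), 2))

# Rule-based step: a pair seen in a majority of queries is treated as a
# pattern, e.g. a candidate for prefetching or query optimization.
patterns = [p for p, n in pair_counts.items() if n > len(queries) / 2]
print(patterns)  # → [('name', 'orbit')]
```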

  8. Applications of Database Machines in Library Systems.

    ERIC Educational Resources Information Center

    Salmon, Stephen R.

    1984-01-01

    Characteristics and advantages of database machines are summarized and their applications to library functions are described. The ability to attach multiple hosts to the same database and flexibility in choosing operating and database management systems for different functions without loss of access to common database are noted. (EJS)

  9. An integrated approach to reservoir modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaldson, K.

    1993-08-01

    The purpose of this research is to evaluate the usefulness of the following procedural and analytical methods in investigating the heterogeneity of the oil reserve for the Mississippian Big Injun Sandstone of the Granny Creek field, Clay and Roane counties, West Virginia: (1) relational database, (2) two-dimensional cross sections, (3) true three-dimensional modeling, (4) geohistory analysis, (5) a rule-based expert system, and (6) geographical information systems. The large data set could not be effectively integrated and interpreted without this approach. A relational database was designed to fully integrate three- and four-dimensional data. The database provides an effective means for maintaining and manipulating the data. A two-dimensional cross section program was designed to correlate stratigraphy, depositional environments, porosity, permeability, and petrographic data. This flexible design allows for additional four-dimensional data. Dynamic Graphics™...

  10. DITOP: drug-induced toxicity related protein database.

    PubMed

    Zhang, Jing-Xian; Huang, Wei-Juan; Zeng, Jing-Hua; Huang, Wen-Hui; Wang, Yi; Zhao, Rui; Han, Bu-Cong; Liu, Qing-Feng; Chen, Yu-Zong; Ji, Zhi-Liang

    2007-07-01

    Drug-induced toxicity related proteins (DITRPs) are proteins that mediate adverse drug reactions (ADRs) or toxicities through their binding to drugs or reactive metabolites. Collection of these proteins facilitates better understanding of the molecular mechanisms of drug-induced toxicity and rational drug discovery. The drug-induced toxicity related protein database (DITOP) is intended to provide comprehensive information on DITRPs. Currently, DITOP contains 1501 records, covering 618 distinct literature-reported DITRPs, 529 drugs/ligands and 418 distinct toxicity terms. These proteins were confirmed experimentally to interact with drugs or their reactive metabolites, thus directly or indirectly causing adverse effects or toxicities. Five major types of drug-induced toxicities or ADRs are included in DITOP: idiosyncratic adverse drug reactions, dose-dependent toxicities, drug-drug interactions, immune-mediated adverse drug effects (IMADEs), and toxicities caused by genetic susceptibility. Molecular mechanisms underlying the toxicity and cross-links to related resources are also provided where available. Moreover, a series of user-friendly interfaces were designed for flexible retrieval of DITRP-related information. DITOP can be accessed freely at http://bioinf.xmu.edu.cn/databases/ADR/index.html. Supplementary data are available at Bioinformatics online.

  11. Optical components damage parameters database system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong

    2012-10-01

    Optical components are key elements of large-scale laser devices; their load capacity directly determines a device's output capability and depends on many factors. By digitizing the various factors that affect load capacity in an optical-component damage-parameter database, the system provides a scientific data basis for assessing the load capacity of optical components. Using a business-process, model-driven approach, a component damage-parameter information model and database system were established. Application results show that the system meets the business-process and data-management requirements of optical-component damage testing; its component parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical-component damage testing.

  12. GALT protein database, a bioinformatics resource for the management and analysis of structural features of a galactosemia-related protein and its mutants.

    PubMed

    d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna

    2009-06-01

    We describe the GALT-Prot database and its related web-based application, developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), which is involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the analysis results of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes structures of over 100 single-point mutants simulated by means of a computational procedure, and each mutant was analysed with several bioinformatics programs to investigate the effect of the mutation. The web-based interface allows querying of the database, and several links are also provided in order to guarantee a high degree of integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.

  13. DB Dehydrogenase: an online integrated structural database on enzyme dehydrogenase.

    PubMed

    Nandy, Suman Kumar; Bhuyan, Rajabrata; Seal, Alpana

    2012-01-01

    Dehydrogenase enzymes are indispensable to metabolic processes. Shortage or malfunctioning of dehydrogenases often leads to several acute diseases, such as cancers, retinal diseases, diabetes mellitus, Alzheimer's disease, and hepatitis B and C. With the advancement of modern-day research, huge amounts of sequence, structural, and functional data are generated every day, widening the gap between structural attributes and their functional understanding. DB Dehydrogenase is an effort to relate the functionalities of dehydrogenases with their structures. It is a completely web-based structural database, covering almost all dehydrogenases whose structures are known [~150 enzyme classes, ~1200 entries from ~160 organisms]. It was created by extracting and integrating various online resources to provide true and reliable data, and is implemented as a MySQL relational database with user-friendly web interfaces written in CGI Perl. Flexible search options are available for data extraction and exploration. To summarize, with the sequence, structure, and function of all dehydrogenases in one place, along with the necessary cross-referencing, this database will be useful for researchers carrying out further work in this field. The database is available for free at http://www.bifku.in/DBD/

  14. First year progress report on the development of the Texas flexible pavement database.

    DOT National Transportation Integrated Search

    2008-01-01

    Comprehensive and reliable databases are essential for the development, validation, and calibration of any pavement : design and rehabilitation system. These databases should include material properties, pavement structural : characteristics, highway...

  15. Advanced Techniques for Deploying Reliable and Efficient Access Control: Application to E-healthcare.

    PubMed

    Jaïdi, Faouzi; Labbene-Ayachi, Faten; Bouhoula, Adel

    2016-12-01

    Nowadays, e-healthcare is a main advancement and upcoming technology in the healthcare industry that contributes to setting up automated and efficient healthcare infrastructures. Unfortunately, several security aspects remain major challenges towards secure and privacy-preserving e-healthcare systems. From the access control perspective, e-healthcare systems face several issues due to the necessity of defining access control solutions that are at once rigorous and flexible. This delicate balance between flexibility and robustness has an immediate impact on the compliance of the deployed access control policy. To address this issue, the paper defines a general framework to organize thinking about verifying, validating and monitoring the compliance of access control policies in the context of e-healthcare databases. We study the problem of the conformity of low level policies within relational databases and we particularly focus on the case of a medical-records management database defined in the context of a Medical Information System. We propose an advanced solution for deploying reliable and efficient access control policies. Our solution extends the traditional lifecycle of an access control policy and, in particular, allows managing the compliance of the policy. We refer to an example to illustrate the relevance of our proposal.
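The compliance idea above, checking the low-level grants actually deployed in a database against the specified policy, can be sketched abstractly. The triple-based policy model and all names below are illustrative assumptions, not the authors' framework:

```python
# Illustrative sketch only: model both the specified policy and the deployed
# grants as sets of (role, table, privilege) triples and diff them.

def compliance_gaps(specified, deployed):
    """Return (missing, excess): grants required by the policy but absent
    from the database, and grants present but not authorized."""
    missing = specified - deployed   # policy not fully enforced
    excess = deployed - specified    # hidden or over-privileged grants
    return missing, excess

# Hypothetical medical-records policy versus what is actually granted.
spec = {("nurse", "records", "SELECT"),
        ("doctor", "records", "SELECT"),
        ("doctor", "records", "UPDATE")}
live = {("nurse", "records", "SELECT"),
        ("nurse", "records", "UPDATE"),   # non-compliant extra grant
        ("doctor", "records", "SELECT")}

missing, excess = compliance_gaps(spec, live)
```

In a real deployment the `live` set would be harvested from the database's own catalog (for example, privilege tables), and a monitoring step would re-run this diff whenever the policy or the grants change.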

  16. Comment on "flexible protocol for quantum private query based on B92 protocol"

    NASA Astrophysics Data System (ADS)

    Chang, Yan; Zhang, Shi-Bin; Zhu, Jing-Min

    2017-03-01

    In a recent paper (Quantum Inf Process 13:805-813, 2014), a flexible quantum private query (QPQ) protocol based on the B92 protocol is presented. Here we point out that the B92-based QPQ protocol is insecure with respect to database security when the channel has loss; that is, the user (Alice) can learn more records in Bob's database than she has paid for.

  17. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  18. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods such as single particle analysis, 2-D crystallography, helical reconstruction and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy to use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and does not require professional database administration. It includes a full featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over 1/2 million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752

  19. Proceedings of the NATO-Advanced Study Institute on Computer Aided Analysis of Rigid and Flexible Mechanical Systems Held in Troia, Portugal on 27 Jun-9 Jul, 1993. Volume 2. Contributed Papers

    DTIC Science & Technology

    1993-07-09

    Calculate Oil and solve iteratively equation (18) for q and (1)-(5) for ex. 4. Solve the velocity problem through equation (19) to calculate q and (6)-(10) to...object-oriented models for the database to store the system information [1]. Using OOP on the formalism level is more difficult and a current field of...Multidimensional Physical Systems: Graph-theoretic Modeling, Systems and Cybernetics, vol 21 (1992), 59-71. A RELATIONAL DATABASE FOR GENERAL

  20. Intermodal freight transportation planning using commodity flow data

    DOT National Transportation Integrated Search

    2003-01-01

    In this study, the 1997 Commodity Flow Survey (CFS) database was identified to be the most cost-effective and flexible database that can be used for conducting statewide freight transportation planning study. The CFS database, together with other rel...

  1. The Physiology Constant Database of Teen-Agers in Beijing

    PubMed Central

    Wei-Qi, Wei; Guang-Jin, Zhu; Cheng-Li, Xu; Shao-Mei, Han; Bao-Shen, Qi; Li, Chen; Shu-Yu, Zu; Xiao-Mei, Zhou; Wen-Feng, Hu; Zheng-Guo, Zhang

    2004-01-01

    Physiology constants of adolescents are important to understand growing living systems and are a useful reference in clinical and epidemiological research. Until recently, physiology constants were not available in China and therefore most physiologists, physicians, and nutritionists had to use data from abroad for reference. However, the very difference between Eastern and Western populations casts doubt on the usefulness of overseas data. We have therefore created a database system to provide a repository for the storage of physiology constants of teen-agers in Beijing. The several thousand data items are divided into hematological biochemistry, lung function, and cardiac function, with all data manually checked before being transferred into the database. The database was accomplished through the development of a web interface, scripts, and a relational database. The physiology data were integrated into the relational database system to provide flexible facilities through combinations of various terms and parameters. A web browser interface was designed for the users to facilitate their searching. The database is available on the web. The statistical table, scatter diagram, and histogram of the data are available to both anonymous and registered users according to their queries, while only registered users can access details, including data download and advanced search. PMID:15258669

  2. HTAPP: High-Throughput Autonomous Proteomic Pipeline

    PubMed Central

    Yu, Kebing; Salomon, Arthur R.

    2011-01-01

    Recent advances in the speed and sensitivity of mass spectrometers and in analytical methods, the exponential acceleration of computer processing speeds, and the availability of genomic databases from an array of species and protein information databases have led to a deluge of proteomic data. The development of a lab-based automated proteomic software platform for the automated collection, processing, storage, and visualization of expansive proteomic datasets is critically important. The high-throughput autonomous proteomic pipeline (HTAPP) described here is designed from the ground up to provide critically important flexibility for diverse proteomic workflows and to streamline the total analysis of a complex proteomic sample. This tool is comprised of software that controls the acquisition of mass spectral data along with automation of post-acquisition tasks such as peptide quantification, clustered MS/MS spectral database searching, statistical validation, and data exploration within a user-configurable lab-based relational database. The software design of HTAPP focuses on accommodating diverse workflows and providing missing software functionality to a wide range of proteomic researchers to accelerate the extraction of biological meaning from immense proteomic data sets. Although individual software modules in our integrated technology platform may have some similarities to existing tools, the true novelty of the approach described here is in the synergistic and flexible combination of these tools to provide an integrated and efficient analysis of proteomic samples. PMID:20336676

  3. Meta4: a web application for sharing and annotating metagenomic gene predictions using web services.

    PubMed

    Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick

    2013-01-01

    Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on Github, a cloud image is available, and an example implementation can be seen at.

  4. Everything you ever wanted to know about GRM* (*but were afraid to ask)

    Treesearch

    Jeffery A. Turner

    2015-01-01

    Querying the Forest Inventory and Analysis Database (FIADB) for growth, removals, and mortality (GRM) estimates can certainly be a conundrum. Providing the flexibility necessary to produce a wide array of GRM estimates has the unfortunate side effect of added complexity. This presentation seeks to answer some recurring questions related to GRM and how our new system...

  5. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan

    PubMed Central

    Kinjo, Akira R.; Yamashita, Reiko; Nakamura, Haruki

    2010-01-01

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served for the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in the way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/ PMID:20798081
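The one-table-per-XPath storage idea described above can be sketched minimally. The traversal and the table/column naming below are assumptions for illustration and do not reproduce PDBj Mine's actual schema generator:

```python
# Sketch: break an XML document into XPath entities and derive, for each
# element path, the set of 'columns' (attributes plus element text) that a
# relational table named after that path would need to hold.
import xml.etree.ElementTree as ET
from collections import defaultdict

def xpath_tables(xml_text):
    """Map each element XPath in the document to its column set."""
    root = ET.fromstring(xml_text)
    tables = defaultdict(set)

    def walk(elem, path):
        path = f"{path}/{elem.tag}"          # the XPath doubles as table name
        for attr in elem.attrib:
            tables[path].add(attr)           # attributes become columns
        if elem.text and elem.text.strip():
            tables[path].add("_text")        # element text becomes a column
        for child in elem:
            walk(child, path)

    walk(root, "")
    return dict(tables)

# Toy PDBML-like fragment (invented for illustration).
doc = "<entry id='1ABC'><cell length_a='61.2'/><cell length_a='62.0'/></entry>"
tables = xpath_tables(doc)
```

From such a mapping, `CREATE TABLE` definitions can be emitted mechanically, which mirrors the fully automated generation of table definitions from the XML schema mentioned in the abstract.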

  6. Development of the ECODAB into a relational database for Escherichia coli O-antigens and other bacterial polysaccharides.

    PubMed

    Rojas-Macias, Miguel A; Ståhle, Jonas; Lütteke, Thomas; Widmalm, Göran

    2015-03-01

    Escherichia coli O-antigen database (ECODAB) is a web-based application to support the collection of E. coli O-antigen structures, polymerase and flippase amino acid sequences, NMR chemical shift data of O-antigens as well as information on glycosyltransferases (GTs) involved in the assembly of O-antigen polysaccharides. The database content has been compiled from scientific literature. Furthermore, the system has evolved from being a repository to one that can be used for generating novel data on its own. GT specificity is suggested through sequence comparison with GTs whose function is known. The migration of ECODAB to a relational database has allowed the automation of all processes to update, retrieve and present information, thereby, endowing the system with greater flexibility and improved overall performance. ECODAB is freely available at http://www.casper.organ.su.se/ECODAB/. Currently, data on 169 E. coli unique O-antigen entries and 338 GTs is covered. Moreover, the scope of the database has been extended so that polysaccharide structure and related information from other bacteria subsequently can be added, for example, from Streptococcus pneumoniae. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    PubMed

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served for the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in the way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/

  8. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador

    PubMed Central

    Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    Objective: To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. Materials and methods: This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. Discussion: A newly adopted legal regimen in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. Conclusion: The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research. PMID:22859644

  9. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador.

    PubMed

    Kosseim, Patricia; Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. A newly adopted legal regimen in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research.

  10. Ontology to relational database transformation for web application development and maintenance

    NASA Astrophysics Data System (ADS)

    Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful

    2018-03-01

    Ontology is used as knowledge representation while a database is used as a facts recorder in a KMS (Knowledge Management System). In most applications, data are managed in a database system, updated through the application, and then transformed to knowledge as needed. Once a domain expert defines the knowledge in the ontology, the application and database can be generated from the ontology. Most existing frameworks generate the application from its database. In this research, the ontology is used for generating the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer. In many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment. A case study was conducted to prove the concept.
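One common class-to-table, property-to-column mapping can illustrate the kind of transformation described. The rules below are a generic sketch under that assumption, not the paper's actual transformation framework:

```python
# Sketch: map each ontology class to a relational table and each datatype
# property to a column, emitting CREATE TABLE statements. A surrogate
# integer key is added per table (an assumption for illustration).

def ontology_to_ddl(ontology):
    """ontology: {class_name: {property_name: sql_type}}
    Returns a list of CREATE TABLE statements, one per class."""
    ddl = []
    for cls, props in ontology.items():
        cols = ["id INTEGER PRIMARY KEY"]
        cols += [f"{prop} {sqltype}" for prop, sqltype in props.items()]
        ddl.append(f"CREATE TABLE {cls} ({', '.join(cols)});")
    return ddl

# Hypothetical ontology fragment.
onto = {"Employee": {"name": "TEXT", "salary": "REAL"}}
statements = ontology_to_ddl(onto)
```

The update mechanism the abstract describes would then amount to re-running such a generator whenever the ontology changes, and migrating the existing data into the regenerated schema.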

  11. High-Level Data-Abstraction System

    NASA Technical Reports Server (NTRS)

    Fishwick, P. A.

    1986-01-01

    The High Level Data Abstraction (HILDA) system makes communication with a database processor flexible and efficient. HILDA is a three-layer system supporting the data-abstraction features of the Intel database processor (DBP). The purpose of HILDA is to establish a flexible method of communicating efficiently with the DBP. The power of HILDA lies in its extensibility with regard to syntax and semantic changes: HILDA's high-level query language is readily modified. It offers powerful potential to computer sites where a DBP is attached to a DEC VAX-series computer. The HILDA system is written in Pascal and FORTRAN 77 for interactive execution.

  12. Implication of Emotional Labor, Cognitive Flexibility, and Relational Energy among Cabin Crew: A Review.

    PubMed

    Baruah, Rithi; Reddy, K Jayasankara

    2018-01-01

    The primary aim of the civil aviation industry is to provide a secure and comfortable service to its customers and clients. This review concentrates on the cabin crew members, who are the frontline employees of the aviation industry and are salaried to smile. The objective of this review article is to analyze the variables of emotional labor, cognitive flexibility, and relational energy using the biopsychosocial model and to identify organizational implications among cabin crew. Online databases such as EBSCOhost, JSTOR, SpringerLink, and PubMed were used to gather articles for the review. The authors analyzed 17 articles from 2001 to 2016 and presented a comprehensive review. The review presented an integrative approach and suggested a hypothetical model that can prove to be a significant contribution to the aviation industry in particular and to research in aviation psychology.

  13. The USA-NPN Information Management System: A tool in support of phenological assessments

    NASA Astrophysics Data System (ADS)

    Rosemartin, A.; Vazquez, R.; Wilson, B. E.; Denny, E. G.

    2009-12-01

    The USA National Phenology Network (USA-NPN) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. Data management and information sharing are central to the USA-NPN mission. The USA-NPN develops, implements, and maintains a comprehensive Information Management System (IMS) to serve the needs of the network, including the collection, storage and dissemination of phenology data, access to phenology-related information, tools for data interpretation, and communication among partners of the USA-NPN. The IMS includes components for data storage, such as the National Phenology Database (NPD), and several online user interfaces to accommodate data entry, data download, data visualization and catalog searches for phenology-related information. The IMS is governed by a set of standards to ensure security, privacy, data access, and data quality. The National Phenology Database is designed to efficiently accommodate large quantities of phenology data, to be flexible to the changing needs of the network, and to provide for quality control. The database stores phenology data from multiple sources (e.g., partner organizations, researchers and citizen observers), and provides for integration with legacy datasets. Several services will be created to provide access to the data, including reports, visualization interfaces, and web services. These services will provide integrated access to phenology and related information for scientists, decision-makers and general audiences. Phenological assessments at any scale will rely on secure and flexible information management systems for the organization and analysis of phenology data. The USA-NPN’s IMS can serve phenology assessments directly, through data management and indirectly as a model for large-scale integrated data management.

  14. The Stanford MediaServer Project: strategies for building a flexible digital media platform to support biomedical education and research.

    PubMed Central

    Durack, Jeremy C.; Chao, Chih-Chien; Stevenson, Derek; Andriole, Katherine P.; Dev, Parvati

    2002-01-01

    Medical media collections are growing at a pace that exceeds the value they currently provide as research and educational resources. To address this issue, the Stanford MediaServer was designed to promote innovative multimedia-based application development. The nucleus of the MediaServer platform is a digital media database strategically designed to meet the information needs of many biomedical disciplines. Key features include an intuitive web-based interface for collaboratively populating the media database, flexible creation of media collections for diverse and specialized purposes, and the ability to construct a variety of end-user applications from the same database to support biomedical education and research. PMID:12463820

  15. The Stanford MediaServer Project: strategies for building a flexible digital media platform to support biomedical education and research.

    PubMed

    Durack, Jeremy C; Chao, Chih-Chien; Stevenson, Derek; Andriole, Katherine P; Dev, Parvati

    2002-01-01

    Medical media collections are growing at a pace that exceeds the value they currently provide as research and educational resources. To address this issue, the Stanford MediaServer was designed to promote innovative multimedia-based application development. The nucleus of the MediaServer platform is a digital media database strategically designed to meet the information needs of many biomedical disciplines. Key features include an intuitive web-based interface for collaboratively populating the media database, flexible creation of media collections for diverse and specialized purposes, and the ability to construct a variety of end-user applications from the same database to support biomedical education and research.

  16. MBGD update 2015: microbial genome database for flexible ortholog analysis utilizing a diverse set of genomic data.

    PubMed

    Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu

    2015-01-01

    The microbial genome database for comparative analysis (MBGD) (available at http://mbgd.genome.ad.jp/) is a comprehensive ortholog database for flexible comparative analysis of microbial genomes, where the users are allowed to create an ortholog table among any specified set of organisms. Because of the rapid increase in microbial genome data owing to the next-generation sequencing technology, it becomes increasingly challenging to maintain high-quality orthology relationships while allowing the users to incorporate the latest genomic data available into an analysis. Because many of the recently accumulating genomic data are draft genome sequences for which some complete genome sequences of the same or closely related species are available, MBGD now stores draft genome data and allows the users to incorporate them into a user-specific ortholog database using the MyMBGD functionality. In this function, draft genome data are incorporated into an existing ortholog table created only from the complete genome data in an incremental manner to prevent low-quality draft data from affecting clustering results. In addition, to provide high-quality orthology relationships, the standard ortholog table containing all the representative genomes, which is first created by the rapid classification program DomClust, is now refined using DomRefine, a recently developed program for improving domain-level clustering using multiple sequence alignment information. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
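The incremental assignment of draft-genome genes to fixed clusters built from complete genomes might be sketched as follows. The similarity scoring and threshold are crude stand-ins, far simpler than the DomClust/DomRefine pipeline the abstract names:

```python
# Sketch: place each draft-genome gene into its best-matching existing
# ortholog cluster, without letting draft data reshape the clusters
# (so low-quality draft sequences cannot perturb the clustering).

def assign_draft_genes(clusters, draft_genes, similarity, threshold):
    """clusters: {cluster_id: [gene, ...]} built from complete genomes.
    Returns {draft_gene: cluster_id or None}; clusters stay unchanged."""
    assignment = {}
    for g in draft_genes:
        best_id, best_score = None, threshold
        for cid, members in clusters.items():
            score = max(similarity(g, m) for m in members)
            if score > best_score:
                best_id, best_score = cid, score
        assignment[g] = best_id
    return assignment

# Toy similarity (shared-character prefix ratio), purely for illustration;
# a real system would use sequence alignment scores.
def sim(a, b):
    n = sum(1 for x, y in zip(a, b) if x == y)
    return n / max(len(a), len(b))

clusters = {"C1": ["recA_ecoli"], "C2": ["gyrB_ecoli"]}
result = assign_draft_genes(clusters, ["recA_draft"], sim, threshold=0.3)
```

Genes that match no cluster above the threshold map to `None` and could seed new, provisional clusters in a user-specific ortholog table.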

  17. An integrated data-analysis and database system for AMS 14C

    NASA Astrophysics Data System (ADS)

    Kjeldsen, Henrik; Olsen, Jesper; Heinemeier, Jan

    2010-04-01

    AMSdata is the name of a combined database and data-analysis system for AMS 14C and stable-isotope work that has been developed at Aarhus University. The system (1) contains routines for data analysis of AMS and MS data, (2) allows a flexible and accurate description of sample extraction and pretreatment, also when samples are split into several fractions, and (3) keeps track of all measured, calculated and attributed data. The structure of the database is flexible and allows an unlimited number of measurement and pretreatment procedures. The AMS 14C data analysis routine is fairly advanced and flexible, and it can be easily optimized for different kinds of measuring processes. Technically, the system is based on a Microsoft SQL server and includes stored SQL procedures for the data analysis. Microsoft Office Access is used for the (graphical) user interface, and in addition Excel, Word and Origin are exploited for input and output of data, e.g. for plotting data during data analysis.

  18. FJET Database Project: Extract, Transform, and Load

    NASA Technical Reports Server (NTRS)

    Samms, Kevin O.

    2015-01-01

    The Data Mining & Knowledge Management team at Kennedy Space Center is providing data management services to the Frangible Joint Empirical Test (FJET) project at Langley Research Center (LARC). FJET is a project under the NASA Engineering and Safety Center (NESC). The purpose of FJET is to conduct an assessment of mild detonating fuse (MDF) frangible joints (FJs) for human spacecraft separation tasks in support of the NASA Commercial Crew Program. The Data Mining & Knowledge Management team has been tasked with creating and managing a database for the efficient storage and retrieval of FJET test data. This paper details the Extract, Transform, and Load (ETL) process as it relates to gathering FJET test data into a Microsoft SQL relational database and making that data available to data users. Lessons learned, procedures implemented, and programming code samples are discussed to help detail what the team learned as it adapted to changing requirements and new technology while maintaining design flexibility in various aspects of the data management project.
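As a rough illustration of the Extract, Transform, and Load pattern the paper describes, here is a minimal sketch with SQLite standing in for the Microsoft SQL database; the CSV layout, column names, and unit conversion are invented for illustration, not taken from FJET:

```python
import csv, io, sqlite3

# Extract: read raw rows from a (hypothetical) test-data export.
raw = io.StringIO(
    "test_id,pressure_psi,result\n"
    "FJ-001,1450,PASS\n"
    "FJ-002,1320,FAIL\n"
)
rows = list(csv.DictReader(raw))

# Transform: convert units (psi -> kPa) and normalize the result flag.
transformed = [
    (r["test_id"], round(float(r["pressure_psi"]) * 6.894757, 1),
     r["result"].strip().upper() == "PASS")
    for r in rows
]

# Load: insert into a relational table for downstream data users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fj_test (test_id TEXT PRIMARY KEY, "
             "pressure_kpa REAL, passed INTEGER)")
conn.executemany("INSERT INTO fj_test VALUES (?, ?, ?)", transformed)

print(conn.execute("SELECT COUNT(*) FROM fj_test WHERE passed = 1").fetchone()[0])  # → 1
```

Keeping the three stages as separate steps is what gives an ETL pipeline its flexibility: a changed source format touches only the extract step, and a changed target schema only the load step.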

  19. Structure and software tools of AIDA.

    PubMed

    Duisterhout, J S; Franken, B; Witte, F

    1987-01-01

    AIDA consists of a set of software tools that allow fast development of easy-to-maintain Medical Information Systems. AIDA supports all aspects of such a system, both during development and during operation. It contains tools to build and maintain forms for interactive data entry and on-line input validation, a database management system including a data dictionary and a set of run-time routines for database access, and routines for querying the database and formatting output. Unlike with an application generator, the user of AIDA may select parts of the tools to fulfill his needs and program other subsystems not developed with AIDA. The AIDA software uses as its host language the ANSI-standard programming language MUMPS, an interpreted language embedded in an integrated database and programming environment. This greatly facilitates the portability of AIDA applications. The database facilities supported by AIDA are based on a relational data model. This data model is built on top of the MUMPS database, the so-called global structure. The relational model overcomes the restrictions of the global structure regarding string length, while the global structure is especially powerful for sorting purposes. Using MUMPS as a host language gives the user an easy interface between user-defined data validation checks or other user-defined code and the AIDA tools. AIDA has been designed primarily for prototyping and for the construction of Medical Information Systems in a research environment, which requires a flexible approach. The prototyping facility of AIDA is terminal independent and even, to a great extent, multi-lingual. Most of these features are table-driven; this allows on-line changes of terminal type and language, but also incurs overhead. AIDA therefore provides a set of optimizing tools with which faster, but less flexible, code can be built from these table definitions. By separating the AIDA software into a source and a run-time version, one can write implementation-specific code that is selected and loaded by a special source loader, which is part of the AIDA software. This feature is also useful for maintaining software at different sites and on different installations.
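The layering the abstract describes, relational rows stored in a hierarchical global structure, can be sketched with a dictionary standing in for a MUMPS global; the key scheme (table name, primary key, column) is an illustrative assumption, not AIDA's actual mapping:

```python
# A toy "global" store: hierarchical keys map to scalar values, the way
# MUMPS globals are subscripted. A relational row becomes one entry per
# (table, primary key, column) path.
globals_store = {}

def set_cell(table, pk, column, value):
    # Store one cell of a relational row under its hierarchical path.
    globals_store[(table, pk, column)] = value

def get_row(table, pk, columns):
    # Reassemble a relational row from the hierarchical store.
    return {c: globals_store.get((table, pk, c)) for c in columns}

# Store one "relational" row of a patient table, cell by cell.
set_cell("PATIENT", 42, "name", "J. Janssen")
set_cell("PATIENT", 42, "dob", "1951-03-07")

print(get_row("PATIENT", 42, ["name", "dob"]))
# → {'name': 'J. Janssen', 'dob': '1951-03-07'}
```

Because each cell is its own entry, a long field does not constrain the others, which mirrors how a relational layer can sidestep the global structure's string-length limits.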

  20. Texas flexible pavements overlays : review and analysis of existing databases.

    DOT National Transportation Integrated Search

    2011-12-01

    Proper calibration of pavement design and rehabilitation performance models to conditions in Texas is essential for cost-effective flexible pavement design. The degree of excellence with which TxDOT's pavement design models are calibrated will d...

  1. Information, intelligence, and interface: the pillars of a successful medical information system.

    PubMed

    Hadzikadic, M; Harrington, A L; Bohren, B F

    1995-01-01

    This paper addresses three key issues facing developers of clinical and/or research medical information systems. 1. INFORMATION. The basic function of every database is to store information about the phenomenon under investigation. There are many ways to organize information in a computer; however, only a few will prove optimal for any real-life situation. Computer science theory has developed several approaches to database structure, with relational theory leading in popularity among end users [8]. Strict conformance to the rules of relational database design rewards the user with consistent data and flexible access to that data. A properly defined database structure minimizes redundancy, i.e., multiple storage of the same information. Redundancy introduces problems when updating a database, since the repeated value has to be updated in all locations; missing even a single value corrupts the whole database and produces incorrect reports [8]. To avoid such problems, relational theory offers a formal mechanism for determining the number and content of data files. These files not only preserve the conceptual schema of the application domain, but allow a virtually unlimited number of reports to be generated efficiently. 2. INTELLIGENCE. Flexible access enables the user to harvest additional value from collected data. This value is usually gained via reports defined at the time of database design. Although these reports are indispensable, with proper tools more information can be extracted from the database. For example, machine learning, a sub-discipline of artificial intelligence, has been used successfully to extract knowledge from databases of varying size by uncovering correlations among fields and records [1-6, 9]. This knowledge, represented in the form of decision trees, production rules, and probabilistic networks, clearly adds a flavor of intelligence to the data collection and manipulation system. 3. INTERFACE. 
Despite the obvious importance of collecting data and extracting knowledge, current systems often impede these processes. Problems stem from a lack of user friendliness and functionality. To overcome these problems, several features of a successful human-computer interface have been identified [7], including the following "golden" rules of dialog design [7]: consistency, use of shortcuts for frequent users, informative feedback, organized sequences of actions, simple error handling, easy reversal of actions, user-oriented focus of control, and reduced short-term memory load. To this list of rules we added visual representation of both data and query results, since our experience has demonstrated that users react much more positively to visual than to textual information. In our design of the Orthopaedic Trauma Registry, under development at the Carolinas Medical Center, we have made every effort to follow the above rules. The results were rewarding: the end users not only want to use the product, but also to participate in its development.
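The update anomaly described under INFORMATION can be made concrete with a small sketch; the table and surgeon names are invented, and SQLite is used for brevity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Redundant design: the surgeon's name is repeated on every case row.
CREATE TABLE flat_case (case_id INTEGER, surgeon_name TEXT);
INSERT INTO flat_case VALUES (1, 'Dr. Smith'), (2, 'Dr. Smith');

-- Normalized design: the surgeon is stored once and referenced by id.
CREATE TABLE surgeon (surgeon_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE trauma_case (case_id INTEGER, surgeon_id INTEGER
                          REFERENCES surgeon(surgeon_id));
INSERT INTO surgeon VALUES (1, 'Dr. Smith');
INSERT INTO trauma_case VALUES (1, 1), (2, 1);
""")

# Correcting the name in the redundant design must touch every copy;
# missing one leaves two conflicting spellings in the database.
conn.execute("UPDATE flat_case SET surgeon_name = 'Dr. Smyth' WHERE case_id = 1")
flat_names = {r[0] for r in conn.execute("SELECT surgeon_name FROM flat_case")}
print(len(flat_names))  # two conflicting spellings now coexist

# The normalized design needs exactly one update, and every case sees it.
conn.execute("UPDATE surgeon SET name = 'Dr. Smyth' WHERE surgeon_id = 1")
names = {r[0] for r in conn.execute(
    "SELECT s.name FROM trauma_case c JOIN surgeon s USING (surgeon_id)")}
print(len(names))  # one consistent spelling
```

This is exactly the consistency payoff the text attributes to strict relational design: a fact lives in one place, so an update cannot half-succeed.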

  2. HBVPathDB: a database of HBV infection-related molecular interaction network.

    PubMed

    Zhang, Yi; Bo, Xiao-Chen; Yang, Jing; Wang, Sheng-Qi

    2005-03-21

    The aim is to describe the interactions of molecules and genes between hepatitis B virus (HBV) and its host, in order to understand how viral and host genes and molecules are networked to form a biological system and to elucidate the mechanism of HBV infection. The knowledge of HBV infection-related reactions is organized into various kinds of pathways with carefully drawn graphs in HBVPathDB. Pathway information is stored in a relational database management system (DBMS), currently the most efficient way to manage large amounts of data, and queries are implemented with the powerful Structured Query Language (SQL). The search engine is written using Personal Home Page (PHP) with embedded SQL, and a web retrieval interface was developed for searching with Hypertext Markup Language (HTML). We present the first version of HBVPathDB, an HBV infection-related molecular interaction network database composed of 306 pathways involving 1 050 molecules. With carefully drawn graphs, pathway information stored in HBVPathDB can be browsed in an intuitive way. We developed an easy-to-use interface for flexible access to the details of the database. Convenient software is implemented to query and browse the pathway information of HBVPathDB. Four search page layout options (category search, gene search, description search, unitized search) are supported by the search engine of the database. The database is freely available at http://www.bio-inf.net/HBVPathDB/HBV/. HBVPathDB already contains a considerable amount of HBV infection-related pathway information, which is suitable for in-depth analysis of the molecular interaction network of virus and host. HBVPathDB integrates pathway datasets with convenient software for query, browsing and visualization, giving users more opportunity to identify key regulatory molecules as potential drug targets and to explore the possible mechanism of HBV infection based on gene expression datasets.

  3. MBGD update 2013: the microbial genome database for exploring the diversity of microbial world.

    PubMed

    Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu

    2013-01-01

    The microbial genome database for comparative analysis (MBGD, available at http://mbgd.genome.ad.jp/) is a platform for microbial genome comparison based on orthology analysis. As its unique feature, MBGD allows users to conduct orthology analysis among any specified set of organisms; this flexibility allows MBGD to adapt to a variety of microbial genomic studies. Reflecting the huge diversity of the microbial world, the number of microbial genome projects has now grown to several thousand. To efficiently explore the diversity of the entire body of microbial genomic data, MBGD now provides summary pages for pre-calculated ortholog tables among various taxonomic groups. For some closely related taxa, MBGD also provides conserved synteny information (core genome alignments) pre-calculated using the CoreAligner program. In addition, an efficient incremental updating procedure can create an extended ortholog table by adding genomes to the default ortholog table generated from the representative set of genomes. Combined with the functionality for dynamic orthology calculation on any specified set of organisms, MBGD is an efficient and flexible tool for exploring microbial genome diversity.

  4. Cardiac CT and MRI for congenital heart disease in Asian countries: recent trends in publication based on a scientific database.

    PubMed

    Tsai, I-Chen; Goo, Hyun Woo

    2013-06-01

    In the past 12 years, in imaging congenital heart disease (CHD), Asian doctors have not only made every effort to adhere to established magnetic resonance imaging (MRI) protocols as in Western countries, but have also developed computed tomography (CT) as an alternative problem-solving technique. Databases show that Asian doctors were more inclined to utilize CT than MRI in evaluating CHD. Articles in the literature focusing on CT have been cited more frequently than articles on MRI. Additionally, several repeatedly cited CT articles have become seminal papers in this field. The database reflects a trend suggesting that Asian doctors actively adapt to new techniques and flexibly develop unique strategies to overcome the limitations caused by the relatively limited resources often available to them.

  5. MODBASE, a database of annotated comparative protein structure models

    PubMed Central

    Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej

    2002-01-01

    MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10^-4) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309

  6. Implication of Emotional Labor, Cognitive Flexibility, and Relational Energy among Cabin Crew: A Review

    PubMed Central

    Baruah, Rithi; Reddy, K. Jayasankara

    2018-01-01

    The primary aim of the civil aviation industry is to provide a secure and comfortable service to their customers and clients. This review concentrates on the cabin crew members, who are the frontline employees of the aviation industry and are salaried to smile. The objective of this review article is to analyze the variables of emotional labor, cognitive flexibility, and relational energy using the biopsychosocial model and identify organizational implications among cabin crew. Online databases such as EBSCOhost, JSTOR, Springerlink, and PubMed were used to gather articles for the review. The authors analyzed 17 articles from 2001 to 2016 and presented a comprehensive review. The review presented an integrative approach and suggested a hypothetical model that can prove to be a significant contribution to the aviation industry in particular and to research findings in aviation psychology. PMID:29743777

  7. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory

    PubMed Central

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.

    2011-01-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443

  8. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory.

    PubMed

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M; da Silva, Alan Wilter; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S; Stuart, David I; Henrick, Kim; Esnouf, Robert M

    2011-04-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service.

  9. Cenozoic Antarctic DiatomWare/BugCam: An aid for research and teaching

    USGS Publications Warehouse

    Wise, S.W.; Olney, M.; Covington, J.M.; Egerton, V.M.; Jiang, S.; Ramdeen, D.K.; ,; Schrader, H.; Sims, P.A.; Wood, A.S.; Davis, A.; Davenport, D.R.; Doepler, N.; Falcon, W.; Lopez, C.; Pressley, T.; Swedberg, O.L.; Harwood, D.M.

    2007-01-01

    Cenozoic Antarctic DiatomWare/BugCam© is an interactive, icon-driven digital-image database/software package that displays over 500 illustrated Cenozoic Antarctic diatom taxa along with original descriptions (including over 100 generic and 20 family-group descriptions). This digital catalog is designed primarily for use by micropaleontologists working in the field (at sea or on the Antarctic continent) where hard-copy literature resources are limited. This new package will also be useful for classroom/lab teaching as well as for any paleontologists making or refining taxonomic identifications at the microscope. The database (Cenozoic Antarctic DiatomWare) is displayed via a custom software program (BugCam) written in Visual Basic for use on PCs running Windows 95 or later operating systems. BugCam is a flexible image display program that utilizes an intuitive thumbnail “tree” structure for navigation through the database. The data are stored in Microsoft Excel spreadsheets, hence no separate relational database program is necessary to run the package.

  10. NCBI GEO: mining millions of expression profiles--database and tools.

    PubMed

    Barrett, Tanya; Suzek, Tugba O; Troup, Dennis B; Wilhite, Stephen E; Ngau, Wing-Chi; Ledoux, Pierre; Rudnev, Dmitry; Lash, Alex E; Fujibuchi, Wataru; Edgar, Ron

    2005-01-01

    The Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) is the largest fully public repository for high-throughput molecular abundance data, primarily gene expression data. The database has a flexible and open design that allows the submission, storage and retrieval of many data types. These data include microarray-based experiments measuring the abundance of mRNA, genomic DNA and protein molecules, as well as non-array-based technologies such as serial analysis of gene expression (SAGE) and mass spectrometry proteomic technology. GEO currently holds over 30,000 submissions representing approximately half a billion individual molecular abundance measurements, for over 100 organisms. Here, we describe recent database developments that facilitate effective mining and visualization of these data. Features are provided to examine data from both experiment- and gene-centric perspectives using user-friendly Web-based interfaces accessible to those without computational or microarray-related analytical expertise. The GEO database is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.

  11. StreptomycesInforSys: A web-enabled information repository

    PubMed Central

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach, is available for the classification of Streptomyces. This information, based on phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information using combinations of search options to aid in the efficient screening of new isolates. This will help in the preliminary categorization of isolates into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to handle user queries effectively. The use of PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is foreseen to reduce running and maintenance costs. Availability: www.sis.biowaves.org PMID:23275736

  12. StreptomycesInforSys: A web-enabled information repository.

    PubMed

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach, is available for the classification of Streptomyces. This information, based on phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information using combinations of search options to aid in the efficient screening of new isolates. This will help in the preliminary categorization of isolates into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to handle user queries effectively. The use of PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is foreseen to reduce running and maintenance costs. www.sis.biowaves.org.

  13. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
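The kind of aggregation the authors consider moving to map-reduce can be sketched in miniature: map each job probe to a (site, ok) pair, shuffle by site, and reduce to an availability ratio. The record layout below is invented for illustration, not SAM's actual schema:

```python
from collections import defaultdict

# Toy SWAT-style probe results gathered from worker-node jobs.
records = [
    {"site": "CERN-PROD", "metric": "job_submit", "status": "OK"},
    {"site": "CERN-PROD", "metric": "job_submit", "status": "CRITICAL"},
    {"site": "RAL-LCG2",  "metric": "job_submit", "status": "OK"},
    {"site": "RAL-LCG2",  "metric": "job_submit", "status": "OK"},
]

def map_phase(record):
    # Emit (key, value): 1 for an OK probe, 0 otherwise.
    yield record["site"], 1 if record["status"] == "OK" else 0

def reduce_phase(key, values):
    # Availability = fraction of OK probes for the site.
    values = list(values)
    return key, sum(values) / len(values)

# Shuffle: group mapped values by key, as the framework would do
# between the map and reduce stages.
groups = defaultdict(list)
for rec in records:
    for key, value in map_phase(rec):
        groups[key].append(value)

availability = dict(reduce_phase(k, vs) for k, vs in groups.items())
print(availability)  # → {'CERN-PROD': 0.5, 'RAL-LCG2': 1.0}
```

The appeal for SAM/SWAT-scale data is that the map and reduce functions are stateless per key, so a framework can shard millions of probe records across nodes and also re-run the reduction later over stored raw data.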

  14. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences.

    PubMed

    Fourment, Mathieu; Gibbs, Mark J

    2008-02-05

    Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.

  15. The CMS DBS query language

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo, Yuyi; Lueking, Lee

    2010-04-01

    The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this, we have added a generalized query system in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We describe the design of the query system, provide details of the language components, and give an overview of how this component fits into the overall data-discovery system architecture.
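The join-condition discovery the abstract describes can be sketched as a breadth-first search over a schema graph whose edges are foreign-key links; the toy tables and join conditions below are invented, not the actual DBS schema:

```python
from collections import deque

# Schema graph: nodes are tables, edges carry the join condition
# implied by a foreign-key relationship (illustrative names only).
schema_edges = {
    ("dataset", "block"): "dataset.id = block.dataset_id",
    ("block", "file"): "block.id = file.block_id",
    ("dataset", "tier"): "dataset.tier_id = tier.id",
}

# Build an undirected adjacency map from the foreign-key edges.
adj = {}
for (a, b), cond in schema_edges.items():
    adj.setdefault(a, []).append((b, cond))
    adj.setdefault(b, []).append((a, cond))

def join_path(start, goal):
    """Breadth-first search for the chain of join conditions
    linking the tables a user's query mentions."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        table, conds = queue.popleft()
        if table == goal:
            return conds
        for nxt, cond in adj[table]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, conds + [cond]))
    return None  # tables not connected in the schema graph

print(join_path("tier", "file"))
```

With the join chain discovered automatically, the query builder can assemble the SQL WHERE clause itself, which is why users need no knowledge of the tables or keys.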

  16. Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App

    NASA Astrophysics Data System (ADS)

    Nurnawati, E. K.; Ermawati, E.

    2018-02-01

    An integration database is a database that acts as the data store for multiple applications and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any changes to data made in a single application are made available to all applications at the time of database commit, thus keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile-device-based platform for a smart city system. The resulting database can be used by various applications, whether together or separately. The design and development of the database emphasize flexibility, security, and the completeness of attributes that can be shared by the various applications to be built. The method used in this study is to choose an appropriate logical database structure (patterns of data), to build the relational database model (design databases), and then to test the resulting design with prototype apps and analyze system performance with test data. The integrated database can be utilized by both the admin and the user in an integral and comprehensive platform. This system can help admins, managers, and operators manage the application easily and efficiently. This Android-based app is built on a dynamic client-server architecture where data are extracted from an external MySQL database, so if data change in the database, the data in the Android applications also change. The app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.

  17. Methods for structuring scientific knowledge from many areas related to aging research.

    PubMed

    Zhavoronkov, Alex; Cantor, Charles R

    2011-01-01

    Aging and age-related disease represent a substantial portion of current natural, social and behavioral science research efforts. Presently, no centralized system exists for tracking aging research projects across numerous research disciplines. The multidisciplinary nature of this research complicates the understanding of underlying project categories, the establishment of project relations, and the development of a unified project classification scheme. We have developed a highly visual database, the International Aging Research Portfolio (IARP), available at AgingPortfolio.org, to address this issue. The database integrates information on research grants, peer-reviewed publications, and issued patent applications from multiple sources. Additionally, the database uses flexible project classification mechanisms and tools for analyzing project associations and trends. This system enables scientists to search the centralized project database, to classify and categorize aging projects, and to analyze the funding aspects across multiple research disciplines. The IARP is designed to provide improved allocation and prioritization of scarce research funding and to reduce project overlap and improve scientific collaboration, thereby accelerating scientific and medical progress in a rapidly growing area of research. Grant applications often precede publications and some grants do not result in publications; thus, this system makes it possible to investigate an earlier and broader view of research activity in many research disciplines. This project is a first attempt to provide a centralized database system for research grants and to categorize aging research projects into multiple subcategories utilizing both advanced machine algorithms and a hierarchical environment for scientific collaboration.

  18. "Science SQL" as a Building Block for Flexible, Standards-based Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2016-04-01

    We have learnt to live with the pain of separating data and metadata into non-interoperable silos. For metadata, we enjoy the flexibility of databases, be they relational, graph, or some other NoSQL. Contrasting this, users still "drown in files" as an unstructured, low-level archiving paradigm. It is time to bridge this chasm, which once was technologically induced but today can be overcome. One building block towards a common re-integrated information space is to support massive multi-dimensional spatio-temporal arrays. These "datacubes" appear as sensor, image, simulation, and statistics data in all science and engineering domains, and beyond. For example, 2-D satellite imagery, 3-D x/y/t image timeseries and x/y/z geophysical voxel data, and 4-D x/y/z/t climate data contribute to today's data deluge in the Earth sciences. Virtual observatories in the Space sciences routinely generate Petabytes of such data. Life sciences deal with microarray data, confocal microscopy, and human brain data, which all fall into the same category. The ISO SQL/MDA (Multi-Dimensional Arrays) candidate standard extends SQL with modelling and query support for n-D arrays ("datacubes") in a flexible, domain-neutral way. This heralds a new generation of services with new quality parameters, such as flexibility, ease of access, embedding into well-known user tools, and scalability mechanisms that remain completely transparent to users. Technology like the EU rasdaman ("raster data manager") Array Database system can support all of the above examples simultaneously, with one technology. This is practically proven: as of today, rasdaman is in operational use on hundreds of Terabytes of satellite image timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Therefore, Array Databases offering SQL/MDA constitute a natural common building block for next-generation data infrastructures. 
As initiator and editor of the standard, we present principles, implementation facets, and application examples as a basis for further discussion. Further, we highlight recent implementation progress in parallelization, data distribution, and query optimization, showing their effects on real-life use cases.
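The array-subsetting that SQL/MDA standardizes can be pictured with an ordinary in-memory array. The sketch below uses NumPy as a stand-in for an Array Database; the cube contents and the SQL/MDA-style query shown in the comment are invented for illustration, not taken from the standard.

```python
import numpy as np

# Hypothetical 4-D x/y/z/t climate datacube (tiny random stand-in).
rng = np.random.default_rng(0)
cube = rng.random((10, 10, 5, 12))  # axes: x, y, z, t (12 months)

# An SQL/MDA-style array query such as (illustrative syntax):
#   SELECT avg_cells(c[*, *, 0, 0:5]) FROM ClimateCube AS c
# corresponds to trimming/slicing the array and then aggregating:
surface_first_half_year = cube[:, :, 0, 0:6]  # z fixed, t restricted
mean_value = surface_first_half_year.mean()

assert surface_first_half_year.shape == (10, 10, 6)
```

The point of the standard is that such trim/slice/aggregate operations run inside the database, next to the data, rather than after exporting files.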

  19. Database assessment of CMIP5 and hydrological models to determine flood risk areas

    NASA Astrophysics Data System (ADS)

    Limlahapun, Ponthip; Fukui, Hiromichi

    2016-11-01

Water-related disasters may not be addressed with a single scientific method. Based on this premise, we combined logical conceptions, sequential links between model results, and database applications in an attempt to analyse historical and future flooding scenarios. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5), to derive precipitation; (2) the Integrated Flood Analysis System (IFAS), to extract discharge amounts; and (3) the Hydrologic Engineering Center (HEC) model, to generate inundated areas. This research notably focused on integrating data regardless of system-design complexity; database approaches are significantly flexible, manageable, and well-supported for system data transfer, which makes them suitable for flood monitoring. The resulting flood map, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.
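The database-centred chaining of the three models could be organized along the lines of the following minimal sketch; the table layout, basin name, and numeric values are hypothetical, not taken from the study.

```python
import sqlite3

# Hypothetical staging table linking the three model stages: CMIP5
# precipitation feeds IFAS discharge, which feeds HEC inundation mapping.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE model_chain (
    basin TEXT, scenario TEXT,
    precip_mm REAL,      -- from CMIP5
    discharge_m3s REAL,  -- from IFAS
    flooded_km2 REAL     -- from HEC
)""")
db.execute(
    "INSERT INTO model_chain VALUES ('ping_river', 'rcp45', 220.0, 1450.0, 37.5)")

# Downstream tools (e.g. a flood-risk map) can query the joined result
# without caring which model produced each column.
row = db.execute(
    "SELECT discharge_m3s, flooded_km2 FROM model_chain "
    "WHERE basin = 'ping_river'").fetchone()
assert row == (1450.0, 37.5)
```

Keeping each stage's output in one schema is what makes the pipeline "flexible and manageable" in the sense the abstract describes: a new model only adds columns or rows, not new file formats.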

  20. An eight-week mindfulness-based stress reduction (MBSR) workshop increases regulatory choice flexibility.

    PubMed

    Alkoby, Alon; Pliskin, Ruthie; Halperin, Eran; Levit-Binnun, Nava

    2018-06-28

    Individuals encounter a variety of emotional challenges daily, with optimal emotion modulation requiring adaptive choice among available means of regulation. However, individuals differ in the ability to flexibly and adaptively move between engaging and disengaging emotion regulation (ER) strategies as per contextual demands, referred to as regulatory choice flexibility. Greater regulatory choice flexibility is associated with greater mental health, well-being and resilience, warranting the development of interventions to increase such flexibility. We hypothesized that a mindfulness-based stress reduction (MBSR) program would fulfill this goal. To test our hypothesis, we recruited college students to either participate in an 8-week MBSR workshop or join a waiting list for a later workshop (i.e., control participants). After the workshop's completion, all participants were invited to the laboratory and completed several computerized tasks examining their regulatory choice flexibility when exposed to universally emotion-laden stimuli as well as stimuli specifically related to the students' social and political environment. The regulatory choice patterns of participants who underwent MBSR training were found to be more flexible than those of participants who had not yet completed the workshop, with the former more likely than the latter to favor an engaging ER strategy (i.e., reappraisal) when faced with low-intensity stimuli and a disengaging strategy (i.e., distraction) when faced with high-intensity stimuli. The findings' importance is discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT

    NASA Astrophysics Data System (ADS)

    Livschitz, Yakov; Munro, Rosemary; Lang, Rüdiger; Fiedler, Lars; Dyer, Richard; Eisinger, Michael

    2010-05-01

The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms like EUMETSAT's Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system flexible and robust enough to identify and react to anomalies that may be previously unknown to the system, and to provide all the means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data enables near-real-time monitoring of operational products and instrument health in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. Identifying commonalities between products and instruments allows the database structure to be designed so that different data can be analyzed using the same business-intelligence functionality. Interactive analysis software implementing modern data-mining techniques is also provided for a detailed look into the data. The system is used effectively for day-to-day monitoring, long-term reporting, and instrument degradation analysis, as well as for ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources on a single instrument, and even from different instruments, platforms, or numerical weather prediction models, within the same database allows effective cross-comparison and searches for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, data losses, and other events, significantly reduce the time needed to monitor the processing on a day-to-day basis.
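The automatic-alarm idea, checking parameters for deviations from their history, can be sketched as follows. The parameter name, sample values, and the 3-sigma threshold are illustrative assumptions, not EUMETSAT's actual monitoring configuration.

```python
from statistics import mean, stdev

def deviation_alarms(history, latest, n_sigma=3.0):
    """Return parameter names whose latest value deviates by more than
    n_sigma standard deviations from the historical mean."""
    alarms = []
    for name, values in history.items():
        mu, sigma = mean(values), stdev(values)
        if sigma > 0 and abs(latest[name] - mu) > n_sigma * sigma:
            alarms.append(name)
    return alarms

# Hypothetical instrument parameter with a stable history:
history = {"dark_current": [10.0, 10.2, 9.9, 10.1, 10.0]}
assert deviation_alarms(history, {"dark_current": 10.05}) == []
assert deviation_alarms(history, {"dark_current": 14.0}) == ["dark_current"]
```

In a relational design like the one described, `history` would be a SQL aggregate over the ETL-loaded parameter tables rather than an in-memory dictionary.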

  2. Operational Monitoring of GOME-2 and IASI Level 1 Product Processing at EUMETSAT

    NASA Astrophysics Data System (ADS)

    Livschitz, Y.; Munro, R.; Lang, R.; Fiedler, L.; Dyer, R.; Eisinger, M.

    2009-12-01

The growing complexity of operational level 1 radiance products from Low Earth Orbiting (LEO) platforms like EUMETSAT's Metop series makes near-real-time monitoring of product quality a challenging task. The main challenge is to provide a monitoring system flexible and robust enough to identify and react to anomalies that may be previously unknown to the system, and to provide all the means and parameters necessary to support efficient ad-hoc analysis of an incident. The operational monitoring system developed at EUMETSAT for GOME-2 and IASI level 1 data enables near-real-time monitoring of operational products and instrument health in a robust and flexible fashion. For effective information management, the system is based on a relational database (Oracle). An Extract, Transform, Load (ETL) process transforms products in EUMETSAT Polar System (EPS) format into relational data structures. Identifying commonalities between products and instruments allows the database structure to be designed so that different data can be analyzed using the same business-intelligence functionality. Interactive analysis software implementing modern data-mining techniques is also provided for a detailed look into the data. The system is used effectively for day-to-day monitoring, long-term reporting, and instrument degradation analysis, as well as for ad-hoc queries in case of unexpected instrument or processing behaviour. Having data from different sources on a single instrument, and even from different instruments, platforms, or numerical weather prediction models, within the same database allows effective cross-comparison and searches for correlated parameters. Automatic alarms, raised by checking for deviations of certain parameters, data losses, and other events, significantly reduce the time needed to monitor the processing on a day-to-day basis.

  3. Biological knowledge bases using Wikis: combining the flexibility of Wikis with the structure of databases.

    PubMed

    Brohée, Sylvain; Barriot, Roland; Moreau, Yves

    2010-09-01

In recent years, the number of knowledge bases developed using Wiki technology has exploded. Unfortunately, next to their numerous advantages, classical Wikis present a critical limitation: the invaluable knowledge they gather is represented as free text, which hinders its computational exploitation. This is in sharp contrast with the current practice for biological databases, where the data are made available in a structured way. Here, we present WikiOpener, an extension for the classical MediaWiki engine that augments Wiki pages by allowing on-the-fly querying and formatting of resources external to the Wiki. Those resources may provide data extracted from databases or DAS tracks, or even results returned by local or remote bioinformatics analysis tools. This also implies that structured data can be edited via dedicated forms. Hence, this generic resource combines the structure of biological databases with the flexibility of collaborative Wikis. The source code and its documentation are freely available on the MediaWiki website: http://www.mediawiki.org/wiki/Extension:WikiOpener.

  4. MetNetAPI: A flexible method to access and manipulate biological network data from MetNet

    PubMed Central

    2010-01-01

    Background Convenient programmatic access to different biological databases allows automated integration of scientific knowledge. Many databases support a function to download files or data snapshots, or a webservice that offers "live" data. However, the functionality that a database offers cannot be represented in a static data download file, and webservices may consume considerable computational resources from the host server. Results MetNetAPI is a versatile Application Programming Interface (API) to the MetNetDB database. It abstracts, captures and retains operations away from a biological network repository and website. A range of database functions, previously only available online, can be immediately (and independently from the website) applied to a dataset of interest. Data is available in four layers: molecular entities, localized entities (linked to a specific organelle), interactions, and pathways. Navigation between these layers is intuitive (e.g. one can request the molecular entities in a pathway, as well as request in what pathways a specific entity participates). Data retrieval can be customized: Network objects allow the construction of new and integration of existing pathways and interactions, which can be uploaded back to our server. In contrast to webservices, the computational demand on the host server is limited to processing data-related queries only. Conclusions An API provides several advantages to a systems biology software platform. MetNetAPI illustrates an interface with a central repository of data that represents the complex interrelationships of a metabolic and regulatory network. As an alternative to data-dumps and webservices, it allows access to a current and "live" database and exposes analytical functions to application developers. Yet it only requires limited resources on the server-side (thin server/fat client setup). 
The API is available for Java, Microsoft.NET and R programming environments and offers flexible query and broad data-retrieval methods. Data retrieval can be customized to client needs and the API offers a framework to construct and manipulate user-defined networks. The design principles can be used as a template to build programmable interfaces for other biological databases. The API software and tutorials are available at http://www.metnetonline.org/api. PMID:21083943
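The bidirectional navigation between layers described above (entities of a pathway, pathways of an entity) can be sketched with a toy in-memory network; the class and method names here are hypothetical and do not reflect the actual MetNetAPI signatures.

```python
# Toy model of a pathway/entity layer pair with navigation in both
# directions, as the abstract describes for MetNetAPI.
class Network:
    def __init__(self):
        self._pathways = {}  # pathway name -> set of entity names

    def add_pathway(self, pathway, entities):
        self._pathways[pathway] = set(entities)

    def entities_in(self, pathway):
        """Molecular entities participating in a pathway."""
        return sorted(self._pathways[pathway])

    def pathways_of(self, entity):
        """Pathways in which a given entity participates."""
        return sorted(p for p, es in self._pathways.items() if entity in es)

net = Network()
net.add_pathway("glycolysis", ["glucose", "pyruvate"])
net.add_pathway("gluconeogenesis", ["pyruvate", "glucose"])
assert net.entities_in("glycolysis") == ["glucose", "pyruvate"]
assert net.pathways_of("pyruvate") == ["gluconeogenesis", "glycolysis"]
```

In the real API the same navigation is backed by the MetNetDB server rather than a local dictionary, which is what keeps the host-side load limited to data queries.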

  5. Ultra-Structure database design methodology for managing systems biology data and analyses

    PubMed Central

    Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C

    2009-01-01

Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research.
Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
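The core Ultra-Structure idea, that behaviour lives in user-editable rule rows while the interpreting code stays generic and fixed, can be illustrated with a toy relational example. The schema and rule vocabulary below are invented for illustration, not the paper's actual ruleforms.

```python
import sqlite3

# Rules stored as data: the generic interpreter below never changes;
# users alter the system's behaviour by editing the rules table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rules (subject TEXT, predicate TEXT, object TEXT)")
db.executemany("INSERT INTO rules VALUES (?, ?, ?)", [
    ("peptide", "maps_to", "spectrum"),
    ("spectrum", "derived_from", "sample"),
])

def deduce(db, start, predicate):
    """Generic one-step deduction driven entirely by rule rows."""
    cur = db.execute(
        "SELECT object FROM rules WHERE subject = ? AND predicate = ? "
        "ORDER BY rowid", (start, predicate))
    return [r[0] for r in cur.fetchall()]

assert deduce(db, "peptide", "maps_to") == ["spectrum"]
# Adding a rule row changes behaviour with no change to code or schema:
db.execute("INSERT INTO rules VALUES ('peptide', 'maps_to', 'protein')")
assert deduce(db, "peptide", "maps_to") == ["spectrum", "protein"]
```

This is the "no computer code or schemas need be modified" property in miniature: only table contents changed between the two assertions.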

  6. aTRAM 2.0: An Improved, Flexible Locus Assembler for NGS Data

    PubMed Central

    Allen, Julie M; LaFrance, Raphael; Folk, Ryan A; Johnson, Kevin P; Guralnick, Robert P

    2018-01-01

    Massive strides have been made in technologies for collecting genome-scale data. However, tools for efficiently and flexibly assembling raw outputs into downstream analytical workflows are still nascent. aTRAM 1.0 was designed to assemble any locus from genome sequencing data but was neither optimized for efficiency nor able to serve as a single toolkit for all assembly needs. We have completely re-implemented aTRAM and redesigned its structure for faster read retrieval while adding a number of key features to improve flexibility and functionality. The software can now (1) assemble single- or paired-end data, (2) utilize both read directions in the database, (3) use an additional de novo assembly module, and (4) leverage new built-in pipelines to automate common workflows in phylogenomics. Owing to reimplementation of databasing strategies, we demonstrate that aTRAM 2.0 is much faster across all applications compared to the previous version. PMID:29881251

  7. On-the-fly form generation and on-line metadata configuration--a clinical data management Web infrastructure in Java.

    PubMed

    Beck, Peter; Truskaller, Thomas; Rakovac, Ivo; Cadonna, Bruno; Pieber, Thomas R

    2006-01-01

In this paper we describe an approach to building a web-based clinical data management infrastructure on top of an entity-attribute-value (EAV) database, which provides for flexible definition and extension of clinical data sets as well as efficient data handling and high-performance query execution. A "mixed" EAV implementation provides a flexible and configurable data repository and at the same time utilizes the performance advantages of conventional database tables for rarely changing data structures. A dynamically configurable data dictionary contains further information for data validation. The online user interface can also be assembled dynamically. A data transfer object which encapsulates data together with all required metadata is populated by the backend and used directly to render frontend forms dynamically and handle incoming data. The "mixed" EAV model enables flexible definition and modification of clinical data sets while reducing the performance drawbacks of pure EAV implementations to a minimum. The system is currently in use in an electronic patient record with a focus on flexibility and in a quality management application (www.healthgate.at) with high performance requirements.
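A minimal sketch of such a "mixed" EAV layout follows: stable demographics live in a conventional table, while frequently changing clinical attributes are rows in an EAV table. The table, column, and attribute names are illustrative assumptions, not the system's actual schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Conventional table for the rarely changing part:
db.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT)")
# EAV table for the flexible part:
db.execute("CREATE TABLE eav (patient_id INTEGER, attr TEXT, value TEXT)")

db.execute("INSERT INTO patient VALUES (1, 'Doe')")
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "hba1c", "6.8"),
    (1, "systolic_bp", "128"),
])

# A new clinical attribute needs only a new row, never a schema change:
db.execute("INSERT INTO eav VALUES (1, 'egfr', '74')")

rows = db.execute(
    "SELECT attr, value FROM eav WHERE patient_id = 1 ORDER BY attr").fetchall()
assert rows == [("egfr", "74"), ("hba1c", "6.8"), ("systolic_bp", "128")]
```

The performance trade-off the abstract describes comes from querying the conventional table for hot, stable columns and falling back to attribute rows only for the long tail of clinical items.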

  8. Flexible Decision Support in Device-Saturated Environments

    DTIC Science & Technology

    2003-10-01

also output tuples to a remote MySQL or Postgres database. 3.3 GUI The GUI allows the user to pose queries using SQL and to display query... DatabaseConnection.java – handles connections to an external database (such as MySQL or Postgres). • Debug.java – contains the code for printing out Debug messages... also provided. It is possible to output the results of queries to a MySQL or Postgres database for archival and the GUI can query those results

  9. The VirusBanker database uses a Java program to allow flexible searching through Bunyaviridae sequences

    PubMed Central

    Fourment, Mathieu; Gibbs, Mark J

    2008-01-01

    Background Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. Results The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. Conclusion VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically. PMID:18251994

  10. Data, Data Everywhere but Not a Byte to Read: Managing Monitoring Information.

    ERIC Educational Resources Information Center

    Stafford, Susan G.

    1993-01-01

    Describes the Forest Science Data Bank that contains 2,400 data sets from over 350 existing ecological studies. Database features described include involvement of the scientific community; database documentation; data quality assurance; security; data access and retrieval; and data import/export flexibility. Appendices present the Quantitative…

  11. Integrating Digital Images into the Art and Art History Curriculum.

    ERIC Educational Resources Information Center

    Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.

    2002-01-01

    Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…

  12. CEBS: a comprehensive annotated database of toxicological data

    PubMed Central

    Lea, Isabel A.; Gong, Hui; Paleja, Anand; Rashid, Asif; Fostel, Jennifer

    2017-01-01

    The Chemical Effects in Biological Systems database (CEBS) is a comprehensive and unique toxicology resource that compiles individual and summary animal data from the National Toxicology Program (NTP) testing program and other depositors into a single electronic repository. CEBS has undergone significant updates in recent years and currently contains over 11 000 test articles (exposure agents) and over 8000 studies including all available NTP carcinogenicity, short-term toxicity and genetic toxicity studies. Study data provided to CEBS are manually curated, accessioned and subject to quality assurance review prior to release to ensure high quality. The CEBS database has two main components: data collection and data delivery. To accommodate the breadth of data produced by NTP, the CEBS data collection component is an integrated relational design that allows the flexibility to capture any type of electronic data (to date). The data delivery component of the database comprises a series of dedicated user interface tables containing pre-processed data that support each component of the user interface. The user interface has been updated to include a series of nine Guided Search tools that allow access to NTP summary and conclusion data and larger non-NTP datasets. The CEBS database can be accessed online at http://www.niehs.nih.gov/research/resources/databases/cebs/. PMID:27899660

  13. Cruella: developing a scalable tissue microarray data management system.

    PubMed

    Cowan, James D; Rimm, David L; Tuck, David P

    2006-06-01

Compared with DNA microarray technology, relatively little information is available concerning the special requirements, design influences, and implementation strategies of data systems for tissue microarray technology. These issues include the requirement to accommodate new and different data elements for each new project as well as the need to interact with pre-existing models for clinical, biological, and specimen-related data. Our objective was to design and implement a flexible, scalable tissue microarray data storage and management system that could accommodate information regarding different disease types, clinical investigators, and clinical investigation questions, all of which could potentially contribute unforeseen data types requiring dynamic integration with existing data. The unpredictability of the data elements, combined with the novelty of automated analysis algorithms and controlled vocabulary standards in this area, requires flexible designs and practical decisions. Our design includes a custom Java-based persistence layer to mediate and facilitate interaction with an object-relational database model and a novel database schema. User interaction is provided through a Java Servlet-based Web interface. Cruella has become an indispensable resource and is used by dozens of researchers every day. The system stores millions of experimental values covering more than 300 biological markers and more than 30 disease types. The experimental data are merged with clinical data aggregated from multiple sources and are available to researchers for management, analysis, and export. Cruella addresses many of the special considerations for managing tissue microarray experimental data and the associated clinical information.
A metadata-driven approach provides a practical solution to many of the unique issues inherent in tissue microarray research, and allows relatively straightforward interoperability with and accommodation of new data models.

  14. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files make MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. 
MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform to accelerate advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Photo-z-SQL: Integrated, flexible photometric redshift computation in a database

    NASA Astrophysics Data System (ADS)

    Beck, R.; Dobos, L.; Budavári, T.; Szalay, A. S.; Csabai, I.

    2017-04-01

    We present a flexible template-based photometric redshift estimation framework, implemented in C#, that can be seamlessly integrated into a SQL database (or DB) server and executed on-demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and utilizes the computational capabilities of DB hardware. The code is able to perform both maximum likelihood and Bayesian estimation, and can handle inputs of variable photometric filter sets and corresponding broad-band magnitudes. It is possible to take into account the full covariance matrix between filters, and filter zero points can be empirically calibrated using measurements with given redshifts. The list of spectral templates and the prior can be specified flexibly, and the expensive synthetic magnitude computations are done via lazy evaluation, coupled with a caching of results. Parallel execution is fully supported. For large upcoming photometric surveys such as the LSST, the ability to perform in-place photo-z calculation would be a significant advantage. Also, the efficient handling of variable filter sets is a necessity for heterogeneous databases, for example the Hubble Source Catalog, and for cross-match services such as SkyQuery. We illustrate the performance of our code on two reference photo-z estimation testing datasets, and provide an analysis of execution time and scalability with respect to different configurations. The code is available for download at https://github.com/beckrob/Photo-z-SQL.
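The maximum-likelihood branch of template fitting reduces to a chi-square minimum over a grid of redshifted templates. In the sketch below the two toy "templates" and flux values are invented; the real system precomputes and caches these synthetic magnitudes (the lazy evaluation the abstract mentions) and runs the scan inside the database.

```python
def chi_square(observed, model, sigma):
    """Chi-square distance between observed and model broad-band fluxes."""
    return sum(((o - m) / s) ** 2 for o, m, s in zip(observed, model, sigma))

# model_fluxes[(template, z)] -> predicted fluxes per filter. In the real
# system these synthetic magnitudes are computed lazily and cached.
model_fluxes = {
    ("elliptical", 0.1): [1.0, 2.0, 3.0],
    ("elliptical", 0.5): [0.5, 1.8, 3.5],
    ("spiral", 0.1): [2.0, 2.0, 2.0],
}
observed, sigma = [0.6, 1.7, 3.4], [0.1, 0.1, 0.1]

# Maximum-likelihood estimate: template/redshift pair minimizing chi-square.
best = min(model_fluxes, key=lambda k: chi_square(observed, model_fluxes[k], sigma))
assert best == ("elliptical", 0.5)
```

The Bayesian mode described in the abstract replaces the single minimum with a prior-weighted posterior over the same grid.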

  16. [Methodological aspects in the evaluation of turn-over and up/down sizing as indicators of work-related stress].

    PubMed

    Veronesi, G; Bertù, L; Mombelli, S; Cimmino, L; Caravati, G; Conti, M; Abate, T; Ferrario, M M

    2011-01-01

We discuss the methodological aspects related to the evaluation of turnover and up/down sizing as indicators of work-related stress in complex organizations such as a university hospital. To estimate the active worker population we developed an algorithm that integrated several administrative databases. The indicators were standardized to take into account some potential confounders (age, sex, work seniority) when comparing different hospital structures and job tasks. The main advantages of our method include flexibility in the choice of the analysis detail (hospital units, job tasks, or a combination of both) and the possibility of describing trends over time to measure the success of preventive strategies.
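The standardization step mentioned above can be sketched as direct standardization: each stratum's turnover rate is weighted by a reference population, so units with different age/sex mixes become comparable. The strata and numbers below are invented for illustration.

```python
def standardized_rate(stratum_rates, reference_weights):
    """Directly standardized rate: stratum rates weighted by a
    reference population distribution."""
    total = sum(reference_weights.values())
    return sum(stratum_rates[s] * w for s, w in reference_weights.items()) / total

# Hypothetical turnover rates per age stratum in one hospital unit,
# and a hypothetical reference population (head counts per stratum):
rates_unit_a = {"young": 0.20, "old": 0.05}
reference = {"young": 300, "old": 700}

# (0.20 * 300 + 0.05 * 700) / 1000 = 0.095
assert abs(standardized_rate(rates_unit_a, reference) - 0.095) < 1e-9
```

With more confounders (sex, seniority) the strata simply become finer cross-classifications; the weighting formula is unchanged.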

  17. An efficient representation of spatial information for expert reasoning in robotic vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Steven; Interrante, Mark

    1987-01-01

    The previous generation of robotic vehicles and drones was designed for a specific task, with limited flexibility in executing their mission. This limited flexibility arises because the robotic vehicles do not possess the intelligence and knowledge upon which to make significant tactical decisions. Current development of robotic vehicles is toward increased intelligence and capabilities, adapting to a changing environment and altering mission objectives. The latest techniques in artificial intelligence (AI) are being employed to increase the robotic vehicle's intelligent decision-making capabilities. This document describes the design of the SARA spatial database tool, which is composed of request parser, reasoning, computations, and database modules that collectively manage and derive information useful for robotic vehicles.

  18. Graduate Student Research Instruction: Testing an Interactive Web-Based Library Tutorial for a Health Sciences Database

    ERIC Educational Resources Information Center

    Lechner, David L.

    2005-01-01

    Interactive electronic tutorials offer flexibility in delivering library instruction; however, questions linger regarding their effectiveness compared to traditional librarian-led classroom lectures. This study examines a tutorial introducing health science students to the Cumulative Index to Nursing and Allied Health Literature database. Half the…

  19. Using sampling theory as the basis for a conceptual data model

    Treesearch

    Fred C. Martin; Tonya Baggett; Tom Wolfe

    2000-01-01

    Greater demands on forest resources require that larger amounts of information be readily available to decisionmakers. To provide more information faster, databases must be developed that are more comprehensive and easier to use. Data modeling is a process for building more complete and flexible databases by emphasizing fundamental relationships over existing or...

  20. ALDB: a domestic-animal long noncoding RNA database.

    PubMed

    Li, Aimin; Zhang, Junying; Zhou, Zhongyin; Wang, Lei; Liu, Yujuan; Liu, Yajun

    2015-01-01

    Long noncoding RNAs (lncRNAs) have attracted significant attention in recent years due to their important roles in many biological processes. Domestic animals constitute a unique resource for understanding the genetic basis of phenotypic variation and are ideal models relevant to diverse areas of biomedical research. With improving sequencing technologies, numerous domestic-animal lncRNAs are now available. Thus, there is an immediate need for a database resource that can assist researchers to store, organize, analyze and visualize domestic-animal lncRNAs. The domestic-animal lncRNA database, named ALDB, is the first comprehensive database with a focus on the domestic-animal lncRNAs. It currently archives 12,103 pig intergenic lncRNAs (lincRNAs), 8,923 chicken lincRNAs and 8,250 cow lincRNAs. In addition to the annotations of lincRNAs, it offers related data that is not available yet in existing lncRNA databases (lncRNAdb and NONCODE), such as genome-wide expression profiles and animal quantitative trait loci (QTLs) of domestic animals. Moreover, a collection of interfaces and applications, such as the Basic Local Alignment Search Tool (BLAST), the Generic Genome Browser (GBrowse) and flexible search functionalities, are available to help users effectively explore, analyze and download data related to domestic-animal lncRNAs. ALDB enables the exploration and comparative analysis of lncRNAs in domestic animals. A user-friendly web interface, integrated information and tools make it valuable to researchers in their studies. ALDB is freely available from http://res.xaut.edu.cn/aldb/index.jsp.

  1. Private database queries based on counterfactual quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Li; Guo, Fen-Zhuo; Gao, Fei; Liu, Bin; Wen, Qiao-Yan

    2013-08-01

    Based on the fundamental concept of quantum counterfactuality, we propose a protocol to achieve quantum private database queries, which is a theoretical study of how counterfactuality can be employed beyond counterfactual quantum key distribution (QKD). By adding crucial detecting apparatus to the device of QKD, the privacy of both the distrustful user and the database owner can be guaranteed. Furthermore, the proposed private-database-query protocol makes full use of the low efficiency in the counterfactual QKD, and by adjusting the relevant parameters, the protocol obtains excellent flexibility and extensibility.

  2. Efficient method for high-throughput virtual screening based on flexible docking: discovery of novel acetylcholinesterase inhibitors.

    PubMed

    Mizutani, Miho Yamada; Itai, Akiko

    2004-09-23

    A method of easily finding ligands, with a variety of core structures, for a given target macromolecule would greatly contribute to the rapid identification of novel lead compounds for drug development. We have developed an efficient method for discovering ligand candidates from a number of flexible compounds included in databases, when the three-dimensional (3D) structure of the drug target is available. The method, named ADAM&EVE, makes use of our automated docking method ADAM, which has already been reported. Like ADAM, ADAM&EVE takes account of the flexibility of each molecule in databases, by exploring the conformational space fully and continuously. Database screening has been made much faster than with ADAM through the tuning of parameters, so that computational screening of several hundred thousand compounds is possible in a practical time. Promising ligand candidates can be selected according to various criteria based on the docking results and characteristics of compounds. Furthermore, we have developed a new tool, EVE-MAKE, for automatically preparing the additional compound data necessary for flexible docking calculation, prior to 3D database screening. Among several successful cases of lead discovery by ADAM&EVE, the finding of novel acetylcholinesterase (AChE) inhibitors is presented here. We performed a virtual screening of about 160 000 commercially available compounds against the X-ray crystallographic structure of AChE. Among 114 compounds that could be purchased and assayed, 35 molecules with various core structures showed inhibitory activities with IC(50) values less than 100 microM. Thirteen compounds had IC(50) values between 0.5 and 10 microM, and almost all their core structures are very different from those of known inhibitors. The results demonstrate the effectiveness and validity of the ADAM&EVE approach and provide a starting point for development of novel drugs to treat Alzheimer's disease.

  3. Construction of the Database for Pulsating Variable Stars

    NASA Astrophysics Data System (ADS)

    Chen, Bing-Qiu; Yang, Ming; Jiang, Bi-Wei

    2012-01-01

    A database for pulsating variable stars has been constructed to support the study of variable stars in China. The database includes about 230,000 variable stars in the Galactic bulge, LMC and SMC, observed over a period of about 10 years by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux + Apache + MySQL + PHP. A web page is provided for searching the photometric data and light curves in the database by the right ascension and declination of an object. Because of the flexibility of this database, more up-to-date data on variable stars can be incorporated conveniently.
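
    The entry above describes looking up photometric records by right ascension and declination. A minimal sketch of such a positional box query, using Python's built-in sqlite3 in place of MySQL; the table layout, column names, and star identifiers are illustrative assumptions, not the project's actual design:

```python
import sqlite3

# Hypothetical star catalogue: identifier plus position in decimal degrees.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stars (star_id TEXT, ra REAL, dec REAL)")
conn.executemany("INSERT INTO stars VALUES (?, ?, ?)", [
    ("OGLE-LMC-CEP-0001", 75.102, -69.341),
    ("OGLE-SMC-CEP-0002", 12.871, -73.105),
    ("MACHO-118.18278.189", 81.205, -69.902),
])

def box_search(ra, dec, radius_deg):
    """Return stars inside a simple RA/Dec box around the query position."""
    rows = conn.execute(
        "SELECT star_id FROM stars WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
        (ra - radius_deg, ra + radius_deg, dec - radius_deg, dec + radius_deg),
    )
    return [r[0] for r in rows]

print(box_search(75.0, -69.3, 0.5))  # → ['OGLE-LMC-CEP-0001']
```

    A production search would also handle RA wraparound at 0°/360° and scale the RA half-width by cos(dec); the flat box above is only the core idea.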

  4. maxdLoad2 and maxdBrowse: standards-compliant tools for microarray experimental annotation, data management and dissemination.

    PubMed

    Hancock, David; Wilson, Michael; Velarde, Giles; Morrison, Norman; Hayes, Andrew; Hulme, Helen; Wood, A Joseph; Nashar, Karim; Kell, Douglas B; Brass, Andy

    2005-11-03

    maxdLoad2 is a relational database schema and Java application for microarray experimental annotation and storage. It is compliant with all standards for microarray meta-data capture, including the specification of what data should be recorded, extensive use of standard ontologies and support for data exchange formats. The output from maxdLoad2 is of a form acceptable for submission to the ArrayExpress microarray repository at the European Bioinformatics Institute. maxdBrowse is a PHP web-application that makes the contents of maxdLoad2 databases accessible via web-browser, the command-line and web-service environments. It thus acts as both a dissemination and data-mining tool. maxdLoad2 presents an easy-to-use interface to an underlying relational database and provides a full complement of facilities for browsing, searching and editing. There is a tree-based visualization of data connectivity and the ability to explore the links between any pair of data elements, irrespective of how many intermediate links lie between them. Its principal novel features are: the flexibility of the meta-data that can be captured, the tools provided for importing data from spreadsheets and other tabular representations, the tools provided for the automatic creation of structured documents, the ability to browse and access the data via web and web-services interfaces. Within maxdLoad2 it is very straightforward to customise the meta-data that is being captured or change the definitions of the meta-data. These meta-data definitions are stored within the database itself, allowing client software to connect properly to a modified database without having to be specially configured. The meta-data definitions (configuration file) can also be centralized, allowing changes made in response to revisions of standards or terminologies to be propagated to clients without user intervention. maxdBrowse is hosted on a web-server and presents multiple interfaces to the contents of maxd databases. 
maxdBrowse emulates many of the browse and search features available in the maxdLoad2 application via a web-browser. This allows users who are not familiar with maxdLoad2 to browse and export microarray data from the database for their own analysis. The same browse and search features are also available via command-line and SOAP server interfaces. This both enables scripting of data export for use embedded in data repositories and analysis environments, and allows access to the maxd databases via web-service architectures. maxdLoad2 http://www.bioinf.man.ac.uk/microarray/maxd/ and maxdBrowse http://dbk.ch.umist.ac.uk/maxdBrowse are portable and compatible with all common operating systems and major database servers. They provide a powerful, flexible package for annotation of microarray experiments and a convenient dissemination environment. They are available for download and open sourced under the Artistic License.
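
    The self-describing design above, where meta-data definitions live in the database itself so clients adapt without reconfiguration, can be sketched as follows. This is a toy illustration of the idea, not maxdLoad2's actual schema; all table and field names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Definitions table: the database describes its own capture fields, so a
# client can render an annotation form without being specially configured.
conn.execute("CREATE TABLE field_defs (name TEXT, label TEXT, datatype TEXT)")
conn.execute("CREATE TABLE annotations (field TEXT, value TEXT)")
conn.executemany("INSERT INTO field_defs VALUES (?, ?, ?)", [
    ("organism", "Source organism", "text"),
    ("array_design", "Array design", "text"),
])

def form_fields():
    """A client builds its UI from definitions read out of the database."""
    return [(name, label) for name, label, _ in
            conn.execute("SELECT name, label, datatype FROM field_defs")]

# Revising the capture standard means changing a row, not the client software.
conn.execute("INSERT INTO field_defs VALUES ('growth_cond', 'Growth conditions', 'text')")
print([n for n, _ in form_fields()])
```

    The payoff claimed in the abstract follows directly: a client that re-reads `field_defs` on connection picks up the new field with no redeployment.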

  5. Integrating Databases with Maps: The Delivery of Cultural Data through TimeMap.

    ERIC Educational Resources Information Center

    Johnson, Ian

    TimeMap is a unique integration of database management, metadata and interactive maps, designed to contextualise and deliver cultural data through maps. TimeMap extends conventional maps with the time dimension, creating and animating maps "on-the-fly"; delivers them as a kiosk application or embedded in Web pages; links flexibly to…

  6. Fault-tolerant symmetrically-private information retrieval

    NASA Astrophysics Data System (ADS)

    Wang, Tian-Yin; Cai, Xiao-Qiu; Zhang, Rui-Ling

    2016-08-01

    We propose two symmetrically-private information retrieval protocols based on quantum key distribution, which provide a good degree of database and user privacy while being flexible, loss-resistant and easily generalized to a large database, similar to previous works. Furthermore, one protocol is robust to collective-dephasing noise, and the other is robust to collective-rotation noise.

  7. [Development of Hospital Equipment Maintenance Information System].

    PubMed

    Zhou, Zhixin

    2015-11-01

    A hospital equipment maintenance information system plays an important role in improving the quality and efficiency of medical treatment. Based on a requirement analysis of hospital equipment maintenance, the system function diagram is drawn. From an analysis of the input and output data, and of the tables and reports connected with the equipment maintenance process, the relationships between entities and attributes are identified, the E-R diagram is drawn, and the relational database tables are established. The software development meets the actual process requirements of maintenance and provides a friendly user interface and flexible operation. The software can also analyze failure causes by statistical analysis.

  8. Protection of electronic health records (EHRs) in cloud.

    PubMed

    Alabdulatif, Abdulatif; Khalil, Ibrahim; Mai, Vu

    2013-01-01

    EHR technology has come into widespread use and has attracted attention in healthcare institutions as well as in research. Cloud services are used to build efficient EHR systems and obtain the greatest benefits of EHR implementation. Many issues relating to building an ideal EHR system in the cloud, especially the tradeoff between flexibility and security, have recently surfaced. The privacy of patient records in cloud platforms is still a point of contention. In this research, we are going to improve the management of access control by restricting participants' access through the use of distinct encrypted parameters for each participant in the cloud-based database. Also, we implement and improve an existing secure index search algorithm to enhance the efficiency of information control and flow through a cloud-based EHR system. At the final stage, we contribute to the design of reliable, flexible and secure access control, enabling quick access to EHR information.

  9. Updates to the Virtual Atomic and Molecular Data Centre

    NASA Astrophysics Data System (ADS)

    Hill, Christian; Tennyson, Jonathan; Gordon, Iouli E.; Rothman, Laurence S.; Dubernet, Marie-Lise

    2014-06-01

    The Virtual Atomic and Molecular Data Centre (VAMDC) has established a set of standards for the storage and transmission of atomic and molecular data and an SQL-based query language (VSS2) for searching online databases, known as nodes. The project has also created an online service, the VAMDC Portal, through which all of these databases may be searched and their results compared and aggregated. Since its inception four years ago, the VAMDC e-infrastructure has grown to encompass over 40 databases, including HITRAN, in more than 20 countries and engages actively with scientists in six continents. Associated with the portal is a growing suite of software tools for the transformation of data from its native, XML-based, XSAMS format, to a range of more convenient human-readable (such as HTML) and machine-readable (such as CSV) formats. The relational database for HITRAN, created as part of the VAMDC project, is a flexible and extensible data model which is able to represent a wider range of parameters than the current fixed-format text-based one. Over the next year, a new online interface to this database will be tested, released and fully documented; this web application, HITRANonline, will fully replace the ageing and incomplete JavaHAWKS software suite.
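
    The XML-to-CSV transformation step mentioned above can be sketched with Python's standard library. Note this uses invented element names loosely inspired by a transitions list, not the real XSAMS schema, which is far richer:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Toy XML document; the element names here are illustrative assumptions.
doc = """<Transitions>
  <Transition><Wavenumber>4100.1</Wavenumber><Intensity>1.2e-21</Intensity></Transition>
  <Transition><Wavenumber>4105.7</Wavenumber><Intensity>3.4e-22</Intensity></Transition>
</Transitions>"""

def to_csv(xml_text):
    """Flatten a per-transition XML tree into machine-readable CSV rows."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["wavenumber", "intensity"])
    for t in root.iter("Transition"):
        writer.writerow([t.findtext("Wavenumber"), t.findtext("Intensity")])
    return out.getvalue()

print(to_csv(doc))
```

    The same traversal could emit an HTML table instead of CSV rows, which is essentially the human-readable path the abstract describes.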

  10. OWLing Clinical Data Repositories With the Ontology Web Language

    PubMed Central

    Pastor, Xavier; Lozano, Esther

    2014-01-01

    Background The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. Objective The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. Methods We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Results Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. Conclusions OntoCRF is a complete framework to build data repositories with a solid relational storage. 
    Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and it does not require highly skilled technical personnel, which facilitates the engineering of clinical software systems. PMID:25599697

  11. OWLing Clinical Data Repositories With the Ontology Web Language.

    PubMed

    Lozano-Rubí, Raimundo; Pastor, Xavier; Lozano, Esther

    2014-08-01

    The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. OntoCRF is a complete framework to build data repositories with a solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and it does not require highly skilled technical personnel, which facilitates the engineering of clinical software systems.

  12. PACE: Proactively Secure Accumulo with Cryptographic Enforcement

    DTIC Science & Technology

    2017-05-27

    Abstract—Cloud-hosted databases have many compelling benefits, including high availability, flexible resource allocation, and resiliency to attack... infrastructure to the cloud. This move is motivated by the cloud’s increased availability, flexibility, and resilience [1]. Most importantly, the cloud enables... a level of availability and performance that would be impossible for many companies to achieve using their own infrastructure. For example, using a

  13. An integrated biomedical telemetry system for sleep monitoring employing a portable body area network of sensors (SENSATION).

    PubMed

    Astaras, Alexander; Arvanitidou, Marina; Chouvarda, Ioanna; Kilintzis, Vassilis; Koutkias, Vassilis; Sanchez, Eduardo Monton; Stalidis, George; Triantafyllidis, Andreas; Maglaveras, Nicos

    2008-01-01

    A flexible, scalable and cost-effective medical telemetry system is described for monitoring sleep-related disorders in the home environment. The system was designed and built for real-time data acquisition and processing, allowing for additional use in intensive care unit scenarios where rapid medical response is required in case of emergency. It comprises a body area network of Zigbee-compatible wireless sensors worn by the subject, a central database repository residing in the medical centre and thin client workstations located at the subject's home and in the clinician's office. The system supports heterogeneous setup configurations, involving a variety of data acquisition sensors to suit several medical applications. All telemetry data is securely transferred and stored in the central database under the clinicians' ownership and control.

  14. Insertion algorithms for network model database management systems

    NASA Astrophysics Data System (ADS)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for management algorithms is to minimize the number of query comparisons. We consider the update operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
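
    The partial-order constraint above can be made concrete with a small sketch: the schema is a directed graph of record types, each arc an owner-to-member set type, and an insertion is legal only if it keeps the graph acyclic. This is an illustrative toy, not the paper's algorithm:

```python
# Network-model schema as a directed graph: nodes are record types,
# arcs are owner -> member set types. The schema forms a partial order
# exactly when this graph is acyclic, so each inserted relationship
# type is checked for cycles first.

def reachable(graph, start, goal):
    """Depth-first search: is `goal` reachable from `start`?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False

def add_set_type(graph, owner, member):
    """Insert an owner->member arc, refusing any arc that creates a cycle."""
    if reachable(graph, member, owner):
        raise ValueError(f"{owner}->{member} would break the partial order")
    graph.setdefault(owner, []).append(member)

schema = {}
add_set_type(schema, "Department", "Employee")
add_set_type(schema, "Project", "Employee")      # a shared member type is fine
# add_set_type(schema, "Employee", "Department") # would raise ValueError
```

    The reachability test is where query comparisons accumulate, which is why the paper's concern with minimizing comparisons matters for large schemas.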

  15. Graphical user interfaces for symbol-oriented database visualization and interaction

    NASA Astrophysics Data System (ADS)

    Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger

    1997-04-01

    In this approach, two basic services designed for the engineering of computer based systems are combined: a symbol-oriented man-machine-service and a high speed database-service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI-builder and a GUI-manager for the database service based upon the man-machine service using the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI-builder and GUI-manager, a user can build and operate its own graphical user interface for a given database according to its needs without writing a single line of code.

  16. PlantTribes: a gene and gene family resource for comparative genomics in plants

    PubMed Central

    Wall, P. Kerr; Leebens-Mack, Jim; Müller, Kai F.; Field, Dawn; Altman, Naomi S.; dePamphilis, Claude W.

    2008-01-01

    The PlantTribes database (http://fgp.huck.psu.edu/tribe.html) is a plant gene family database based on the inferred proteomes of five sequenced plant species: Arabidopsis thaliana, Carica papaya, Medicago truncatula, Oryza sativa and Populus trichocarpa. We used the graph-based clustering algorithm MCL [Van Dongen (Technical Report INS-R0010 2000) and Enright et al. (Nucleic Acids Res. 2002; 30: 1575–1584)] to classify all of these species’ protein-coding genes into putative gene families, called tribes, using three clustering stringencies (low, medium and high). For all tribes, we have generated protein and DNA alignments and maximum-likelihood phylogenetic trees. A parallel database of microarray experimental results is linked to the genes, which lets researchers identify groups of related genes and their expression patterns. Unified nomenclatures were developed, and tribes can be related to traditional gene families and conserved domain identifiers. SuperTribes, constructed through a second iteration of MCL clustering, connect distant, but potentially related gene clusters. The global classification of nearly 200 000 plant proteins was used as a scaffold for sorting ∼4 million additional cDNA sequences from over 200 plant species. All data and analyses are accessible through a flexible interface allowing users to explore the classification, to place query sequences within the classification, and to download results for further study. PMID:18073194

  17. SAM: String-based sequence search algorithm for mitochondrial DNA database queries

    PubMed Central

    Röck, Alexander; Irwin, Jodi; Dür, Arne; Parsons, Thomas; Parson, Walther

    2011-01-01

    The analysis of the haploid mitochondrial (mt) genome has numerous applications in forensic and population genetics, as well as in disease studies. Although mtDNA haplotypes are usually determined by sequencing, they are rarely reported as a nucleotide string. Traditionally, they are presented in a difference-coded position-based format relative to the corrected version of the first sequenced mtDNA. This convention requires recommendations for standardized sequence alignment that is known to vary between scientific disciplines, even between laboratories. As a consequence, database searches that are vital for the interpretation of mtDNA data can suffer from biased results when query and database haplotypes are annotated differently. In the forensic context, this would usually lead to an underestimation of the absolute and relative frequencies. To address this issue we introduce SAM, a string-based search algorithm that converts query and database sequences to position-free nucleotide strings and thus eliminates the possibility that identical sequences will be missed in a database query. The mere application of a BLAST algorithm would not be a sufficient remedy as it uses a heuristic approach and does not address properties specific to mtDNA, such as phylogenetically stable but also rapidly evolving insertion and deletion events. The software presented here provides additional flexibility to incorporate phylogenetic data, site-specific mutation rates, and other biologically relevant information that would refine the interpretation of mitochondrial DNA data. The manuscript is accompanied by freeware and example data sets that can be used to evaluate the new software (http://stringvalidation.org). PMID:21056022
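
    The position-free idea above can be sketched in a few lines: expand a difference-coded haplotype into a plain nucleotide string, so that two alignment-dependent annotations of the same molecule compare equal. This uses a toy reference and a simplified notation ("3G" substitution, "5.1T" insertion after position 5, "7-" deletion); it is not the SAM software itself:

```python
def expand(reference, differences):
    """Expand difference-coded positions into a position-free string."""
    bases = {i + 1: b for i, b in enumerate(reference)}
    insertions, deletions = {}, set()
    for diff in differences:
        if diff.endswith("-"):                # deletion, e.g. "7-"
            deletions.add(int(diff[:-1]))
        elif "." in diff:                     # insertion, e.g. "5.1T"
            pos = int(diff.split(".")[0])
            insertions[pos] = insertions.get(pos, "") + diff[-1]
        else:                                 # substitution, e.g. "3G"
            bases[int(diff[:-1])] = diff[-1]
    out = []
    for pos in range(1, len(reference) + 1):
        if pos not in deletions:
            out.append(bases[pos])
        out.append(insertions.get(pos, ""))
    return "".join(out)

# Two different annotations of one deletion in a C-stretch collapse
# to the same string, so a string-based search cannot miss the match:
print(expand("AACCC", ["3-"]) == expand("AACCC", ["5-"]))  # → True
```

    This is exactly the bias the abstract describes: "3-" and "5-" denote the same molecule under different alignments, and a position-based lookup would treat them as distinct haplotypes.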

  18. Predicting and priming thematic roles: Flexible use of verbal and nonverbal cues during relative clause comprehension.

    PubMed

    Kowalski, Alix; Huang, Yi Ting

    2017-09-01

    Relative-clause sentences (RCs) have been a key test case for psycholinguistic models of comprehension. While object-relative clauses (e.g., ORCs: "The bear that the horse . . .") are distinguished from subject-relative clauses (SRCs) after the second noun phrase (NP2; e.g., SRCs: "The bear that pushed . . ."), role assignments are often delayed until the embedded verb (e.g., ". . . pushed ate the sandwich"). This contrasts with overwhelming evidence of incremental role assignment in other garden-path sentences. The current study investigates how contextual factors modulate reliance on verbal and nonverbal cues. Using a visual-world paradigm, participants saw preceding discourse contexts that highlighted relevant roles within events (e.g., pusher, pushee). Nevertheless, role assignment for ORCs remained delayed until the embedded verb (Experiment 1). However, role assignment for ORCs occurred before the embedded verb when additional linguistic input was provided by an adverb (Experiment 2). Finally, when the likelihood of encountering RCs increased within the experimental context, immediate role assignment for ORCs was observed after NP2 (Experiment 3). Together, these findings suggest that real-time role assignment often prefers verbal cues, but can also flexibly adapt to the statistical properties of the local context. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. A cross-cultural comparison of children's imitative flexibility.

    PubMed

    Clegg, Jennifer M; Legare, Cristine H

    2016-09-01

    Recent research with Western populations has demonstrated that children use imitation flexibly to engage in both instrumental and conventional learning. Evidence for children's imitative flexibility in non-Western populations is limited, however, and has only assessed imitation of instrumental tasks. This study (N = 142, 6- to 8-year-olds) demonstrates both cultural continuity and cultural variation in imitative flexibility. Children engage in higher imitative fidelity for conventional tasks than for instrumental tasks in both an industrialized, Western culture (United States), and a subsistence-based, non-Western culture (Vanuatu). Children in Vanuatu engage in higher imitative fidelity of instrumental tasks than in the United States, a potential consequence of cultural variation in child socialization for conformity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. FOUNTAIN: A JAVA open-source package to assist large sequencing projects

    PubMed Central

    Buerstedde, Jean-Marie; Prill, Florian

    2001-01-01

    Background Better automation, lower cost per reaction and a heightened interest in comparative genomics have led to a dramatic increase in DNA sequencing activities. Although the large sequencing projects of specialized centers are supported by in-house bioinformatics groups, many smaller laboratories face difficulties managing the appropriate processing and storage of their sequencing output. The challenges include documentation of clones, templates and sequencing reactions, and the storage, annotation and analysis of the large number of generated sequences. Results We describe here a new program, named FOUNTAIN, for the management of large sequencing projects. FOUNTAIN uses the JAVA computer language and data storage in a relational database. Starting with a collection of sequencing objects (clones), the program generates and stores information related to the different stages of the sequencing project using a web browser interface for user input. The generated sequences are subsequently imported and annotated based on BLAST searches against the public databases. In addition, simple algorithms to cluster sequences and determine putative polymorphic positions are implemented. Conclusions A simple, but flexible and scalable software package is presented to facilitate data generation and storage for large sequencing projects. FOUNTAIN is open source and largely platform- and database-independent, and we hope it will be improved and extended in a community effort. PMID:11591214

  1. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software, with improved flexibility and documentation. It offers flexibility in the type of data that can be stored, efficient retrieval across either the spatial domain or the time domain, and built-in analysis tools for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  2. A flexible influence of affective feelings on creative and analytic performance.

    PubMed

    Huntsinger, Jeffrey R; Ray, Cara

    2016-09-01

    Considerable research shows that positive affect improves performance on creative tasks and negative affect improves performance on analytic tasks. The present research entertained the idea that affective feelings have flexible, rather than fixed, effects on cognitive performance. Consistent with the idea that positive and negative affect signal the value of accessible processing inclinations, the influence of affective feelings on performance on analytic or creative tasks was found to be flexibly responsive to the relative accessibility of different styles of processing (i.e., heuristic vs. systematic, global vs. local). When a global processing orientation was accessible happy participants generated more creative uses for a brick (Experiment 1), successfully solved more remote associates and insight problems (Experiment 2) and displayed broader categorization (Experiment 3) than those in sad moods. When a local processing orientation was accessible this pattern reversed. When a heuristic processing style was accessible happy participants were more likely to commit the conjunction fallacy (Experiment 3) and showed less pronounced anchoring effects (Experiment 4) than sad participants. When a systematic processing style was accessible this pattern reversed. Implications of these results for relevant affect-cognition models are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Training and manpower issues for specialist registrars in paediatrics. How are we doing and where are we going?

    PubMed

    Byrne, O C; Boland, B; Nicholson, A J; Waldron, M; O'Neill, M B

    2005-01-01

    All Irish paediatric higher specialist trainees' opinions about the paediatric higher specialist training (HST) scheme and related manpower issues were surveyed. Information was obtained on 1) trainees' level of satisfaction with HST, 2) their ultimate career ambitions including location of final posts, 3) attitudes to both flexible training and consultancies and 4) demographics to assess the significance of gender variations. Fifty-two eligible trainees were identified using the Royal College of Physicians of Ireland database. The survey was administered as an anonymous postal survey. The response rate was 88%. Results indicated a high level of satisfaction with HST (78%) overall although problems were noted with the half-day release programme as only 63% were facilitated. Only 30% wish to practice as subspecialists, 76% of trainees wish to work in an urban hospital and 43.5% desire a flexible consultancy suggesting an incompatibility of trainees' desires with current Irish medical manpower policy. To address these difficulties we suggest establishing more rigorous audit of training posts to ensure deficiencies are corrected and the establishment of flexible training to address gender imbalance and to promote the concept of consultant job sharing.

  4. Timing matters: change depends on the stage of treatment in cognitive behavioral therapy for panic disorder with agoraphobia.

    PubMed

    Gloster, Andrew T; Klotsche, Jens; Gerlach, Alexander L; Hamm, Alfons; Ströhle, Andreas; Gauggel, Siegfried; Kircher, Tilo; Alpers, Georg W; Deckert, Jürgen; Wittchen, Hans-Ulrich

    2014-02-01

    The mechanisms of action underlying treatment are inadequately understood. This study examined 5 variables implicated in the treatment of panic disorder with agoraphobia (PD/AG): catastrophic agoraphobic cognitions, anxiety about bodily sensations, agoraphobic avoidance, anxiety sensitivity, and psychological flexibility. The relative importance of these process variables was examined across treatment phases: (a) psychoeducation/interoceptive exposure, (b) in situ exposure, and (c) generalization/follow-up. Data came from a randomized controlled trial of cognitive behavioral therapy for PD/AG (n = 301). Outcomes were the Panic and Agoraphobia Scale (Bandelow, 1995) and functioning as measured in the Clinical Global Impression scale (Guy, 1976). The effect of process variables on subsequent change in outcome variables was calculated using bivariate latent difference score modeling. Change in panic symptomatology was preceded by catastrophic appraisal and agoraphobic avoidance across all phases of treatment, by anxiety sensitivity during generalization/follow-up, and by psychological flexibility during exposure in situ. Change in functioning was preceded by agoraphobic avoidance and psychological flexibility across all phases of treatment, by fear of bodily symptoms during generalization/follow-up, and by anxiety sensitivity during exposure. The effects of process variables on outcomes differ across treatment phases and outcomes (i.e., symptomatology vs. functioning). Agoraphobic avoidance and psychological flexibility should be investigated and therapeutically targeted in addition to cognitive variables. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  5. Exploring performance issues for a clinical database organized using an entity-attribute-value representation.

    PubMed

    Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L

    2000-01-01

    The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema for storing heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventional database schemas. Our objective was to perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. Performance in the EAV/CR model was approximately three to five times less efficient than its conventional counterpart for attribute-centered queries. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated using multiple, simple SQL statements executed in batch were more efficient than single, large SQL statements. This paper describes a pilot project to explore issues in and compare query performance for EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware or more memory, or both.
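A minimal sketch of why attribute-centered queries cost more under EAV than under a conventional schema, using an in-memory SQLite database (the table names, columns, and data here are illustrative, not the study's actual microbiology schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Conventional representation: one column per attribute.
cur.execute("CREATE TABLE culture (id INTEGER PRIMARY KEY, organism TEXT, specimen TEXT)")
cur.executemany("INSERT INTO culture VALUES (?, ?, ?)",
                [(1, "E. coli", "urine"), (2, "S. aureus", "blood")])

# EAV representation: one row per (entity, attribute, value) triple.
cur.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
cur.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                [(1, "organism", "E. coli"), (1, "specimen", "urine"),
                 (2, "organism", "S. aureus"), (2, "specimen", "blood")])

# Attribute-centered query, conventional schema: a single column predicate.
conv = cur.execute("SELECT id FROM culture WHERE organism = 'E. coli'").fetchall()

# The same query under EAV must filter on the attribute name as well as the
# value, and reassembling a full row costs one self-join per extra attribute.
eav = cur.execute("""
    SELECT e1.entity, e2.value AS specimen
    FROM eav e1 JOIN eav e2 ON e1.entity = e2.entity
    WHERE e1.attribute = 'organism' AND e1.value = 'E. coli'
      AND e2.attribute = 'specimen'
""").fetchall()

print(conv)  # [(1,)]
print(eav)   # [(1, 'urine')]
```

The per-attribute self-joins are also why the study's batched, simple SQL statements can beat a single large statement: each small statement avoids compounding the join work.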

  6. ToxReporter: viewing the genome through the eyes of a toxicologist.

    PubMed

    Gosink, Mark

    2016-01-01

    One of the many roles of a toxicologist is to determine if an observed adverse event (AE) is related to a previously unrecognized function of a given gene/protein. Towards that end, he or she will search a variety of public and proprietary databases for information linking that protein to the observed AE. However, these databases tend to present all available information about a protein, which can be overwhelming, limiting the ability to find information about the specific toxicity being investigated. ToxReporter compiles information from a broad selection of resources and limits display of the information to user-selected areas of interest. ToxReporter is a Perl-based web application which utilizes a MySQL database to streamline this process by categorizing public and proprietary domain-derived information into predefined safety categories according to a customizable lexicon. Users can view gene information that is 'red-flagged' according to the safety issue under investigation. ToxReporter also uses a scoring system based on relative counts of the red-flags to rank all genes for the amount of information pertaining to each safety issue and to display their scored ranking as an easily interpretable 'Tox-At-A-Glance' chart. Although ToxReporter was originally developed to display safety information, its flexible design could easily be adapted to display disease information as well. Database URL: ToxReporter is freely available at https://github.com/mgosink/ToxReporter. © The Author(s) 2016. Published by Oxford University Press.

  7. interPopula: a Python API to access the HapMap Project dataset

    PubMed Central

    2010-01-01

    Background The HapMap project is a publicly available catalogue of common genetic variants that occur in humans, currently including several million SNPs across 1115 individuals spanning 11 different populations. This important database does not provide any programmatic access to the dataset; furthermore, no standard relational database interface is provided. Results interPopula is a Python API to access the HapMap dataset. interPopula provides integration facilities with both the Python software ecosystem (e.g. Biopython and matplotlib) and other relevant human population datasets (e.g. Ensembl gene annotation and UCSC Known Genes). A set of guidelines and code examples to address possible inconsistencies across heterogeneous data sources is also provided. Conclusions interPopula is a straightforward and flexible Python API that facilitates the construction of scripts and applications that require access to the HapMap dataset. PMID:21210977

  8. Simrank: Rapid and sensitive general-purpose k-mer search tool

    PubMed Central

    2011-01-01

    Background Terabyte-scale collections of string-encoded data are expected from consortia efforts such as the Human Microbiome Project http://nihroadmap.nih.gov/hmp. Intra- and inter-project data similarity searches are enabled by rapid k-mer matching strategies. Software applications for sequence database partitioning, guide tree estimation, molecular classification and alignment acceleration have benefited from embedded k-mer searches as sub-routines. However, a rapid, general-purpose, open-source, flexible, stand-alone k-mer tool has not been available. Results Here we present a stand-alone utility, Simrank, which allows users to rapidly identify the database strings most similar to query strings. Performance testing of Simrank and related tools on DNA, RNA, protein and human-language datasets found Simrank 10X to 928X faster, depending on the dataset. Conclusions Simrank provides molecular ecologists with a high-throughput, open source choice for comparing large sequence sets to find similarity. PMID:21524302
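The k-mer matching idea underlying tools like Simrank can be sketched in a few lines; the scoring function below is illustrative only, and Simrank's exact metric, indexing, and interface differ:

```python
def kmers(seq, k=7):
    """Set of overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query, target, k=7):
    """Fraction of the query's k-mers found in the target (0..1).
    Illustrative score; not Simrank's exact metric."""
    q = kmers(query, k)
    return len(q & kmers(target, k)) / len(q) if q else 0.0

def rank(query, database, k=7, top=5):
    """Return the database entries most similar to the query."""
    scored = [(similarity(query, seq, k), name) for name, seq in database.items()]
    return sorted(scored, reverse=True)[:top]

# Toy database of sequences keyed by name.
db = {"a": "ACGTACGTACGTACGT", "b": "ACGTACGTTTTTTTTT", "c": "GGGGGGGGGGGGGGGG"}
print(rank("ACGTACGTACGT", db, k=7))  # [(1.0, 'a'), (0.5, 'b'), (0.0, 'c')]
```

Production tools gain their speed by precomputing bit-packed k-mer indexes over the database rather than rescanning each target string, but the set-intersection score is the same underlying idea.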

  9. Automated biosurveillance data from England and Wales, 1991-2011.

    PubMed

    Enki, Doyo G; Noufaily, Angela; Garthwaite, Paul H; Andrews, Nick J; Charlett, André; Lane, Chris; Farrington, C Paddy

    2013-01-01

    Outbreak detection systems for use with very large multiple surveillance databases must be suited both to the data available and to the requirements of full automation. To inform the development of more effective outbreak detection algorithms, we analyzed 20 years of data (1991-2011) from a large laboratory surveillance database used for outbreak detection in England and Wales. The data relate to 3,303 distinct types of infectious pathogens, with a frequency range spanning 6 orders of magnitude. Several hundred organism types were reported each week. We describe the diversity of seasonal patterns, trends, artifacts, and extra-Poisson variability to which an effective multiple laboratory-based outbreak detection system must adjust. We provide empirical information to guide the selection of simple statistical models for automated surveillance of multiple organisms, in the light of the key requirements of such outbreak detection systems, namely, robustness, flexibility, and sensitivity.
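The kind of simple, robust statistical rule the authors have in mind can be illustrated with a crude exceedance score; this is a deliberately simplified stand-in for real algorithms (e.g. the Farrington method), which additionally model trend, seasonality, and the extra-Poisson variability described above:

```python
from statistics import mean, stdev

def exceedance_score(history, current):
    """How many standard deviations the current weekly count sits above its
    historical baseline. Illustrative only: production systems adjust for
    trend, seasonality, and overdispersion before thresholding."""
    mu, sd = mean(history), stdev(history)
    return (current - mu) / sd if sd > 0 else 0.0

def flag_outbreak(history, current, threshold=3.0):
    """Flag the organism if the current count exceeds the threshold score."""
    return exceedance_score(history, current) > threshold

baseline = [4, 6, 5, 7, 5, 6, 4, 5]   # illustrative historical weekly counts
print(flag_outbreak(baseline, 6))     # near the usual level: not flagged
print(flag_outbreak(baseline, 30))    # strong exceedance: flagged
```

With several hundred organism types reported weekly, any such rule must also control the multiplicity of tests, which is one reason the paper stresses robustness and sensitivity as competing requirements.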

  10. Automated Biosurveillance Data from England and Wales, 1991–2011

    PubMed Central

    Enki, Doyo G.; Noufaily, Angela; Garthwaite, Paul H.; Andrews, Nick J.; Charlett, André; Lane, Chris

    2013-01-01

    Outbreak detection systems for use with very large multiple surveillance databases must be suited both to the data available and to the requirements of full automation. To inform the development of more effective outbreak detection algorithms, we analyzed 20 years of data (1991–2011) from a large laboratory surveillance database used for outbreak detection in England and Wales. The data relate to 3,303 distinct types of infectious pathogens, with a frequency range spanning 6 orders of magnitude. Several hundred organism types were reported each week. We describe the diversity of seasonal patterns, trends, artifacts, and extra-Poisson variability to which an effective multiple laboratory-based outbreak detection system must adjust. We provide empirical information to guide the selection of simple statistical models for automated surveillance of multiple organisms, in the light of the key requirements of such outbreak detection systems, namely, robustness, flexibility, and sensitivity. PMID:23260848

  11. Computational framework to support integration of biomolecular and clinical data within a translational approach.

    PubMed

    Miyoshi, Newton Shydeo Brandão; Pinheiro, Daniel Guariz; Silva, Wilson Araújo; Felipe, Joaquim Cezar

    2013-06-06

    The use of the knowledge produced by sciences to promote human health is the main goal of translational medicine. To make it feasible we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however it lacks supporting representation of clinical and socio-demographic information. We have implemented an extension of Chado - the Clinical Module - to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: data level, to store the data; semantic level, to integrate and standardize the data by the use of ontologies; application level, to manage clinical databases, ontologies and data integration process; and web interface level, to allow interaction between the user and the system. The clinical module was built based on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data as well as biomaterial data were obtained from patients with tumors of head and neck. 
We implemented the IPTrans tool, a complete environment for data migration that comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications. Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different "omics" technologies with patients' clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments accomplished from a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.

  12. Reduced order model of a blended wing body aircraft configuration

    NASA Astrophysics Data System (ADS)

    Stroscher, F.; Sika, Z.; Petersson, O.

    2013-12-01

    This paper describes the full development process of a numerical simulation model for the ACFA2020 (Active Control for Flexible 2020 Aircraft) blended wing body (BWB) configuration. Its requirements are the prediction of aeroelastic and flight dynamic response in the time domain with a relatively small model order. Further, the model had to be parameterized with regard to multiple fuel filling conditions as well as flight conditions. Considerable effort was devoted to high-order aerodynamic analysis, for the subsonic and transonic regimes, by several project partners. The integration of the unsteady aerodynamic databases was one of the key issues in aeroelastic modeling.

  13. The eNanoMapper database for nanomaterial safety information

    PubMed Central

    Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    Summary Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. 
Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR). PMID:26425413

  14. The eNanoMapper database for nanomaterial safety information.

    PubMed

    Jeliazkova, Nina; Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. 
We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure-activity relationships for nanomaterials (NanoQSAR).

  15. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences.

    PubMed

    Stephens, Susie M; Chen, Jake Y; Davidson, Marcel G; Thomas, Shiby; Trute, Barry M

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html.

  16. Qualitative assessment of user experiences of a novel smart phone application designed to support flexible intensive insulin therapy in type 1 diabetes.

    PubMed

    Knight, Brigid A; McIntyre, H David; Hickman, Ingrid J; Noud, Marina

    2016-09-15

    Modern flexible multiple daily injection (MDI) therapy requires people with diabetes to manage complex mathematical calculations to determine insulin doses on a day-to-day basis. Automated bolus calculators assist with these calculations, offer additional functionality to protect against hypoglycaemia and enhance the record keeping process; however, uptake and use depend on the devices meeting the needs of the user. We aimed to obtain user feedback on the usability of a mobile phone bolus calculator application in adults with T1DM to inform future development of mobile phone diabetes support applications. Adults with T1DM who had previously received education in flexible MDI therapy were invited to participate. Eligible respondents attended app education and one month later participated in a focus group to provide feedback on the features of the app in relation to usability for patient-based flexible MDI and future app development. Seven adults participated in the app training and follow-up interview. App features that support dose adjustment to reduce hypoglycaemia risk and features that enable greater efficiency in dose calculation, record keeping and report generation were highly valued. Adults who are self-managing flexible MDI found the Rapidcalc mobile phone app to be a useful self-management tool, and additional features to further improve usability, such as connectivity with BG meter and food databases, shortcut options to economise data entry and web-based storage of data, were identified. Further work is needed to ascertain specific features and benefit for those with lower health literacy.
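For context, the "complex mathematical calculations" that such bolus calculators automate typically follow the standard flexible-MDI arithmetic sketched below. The parameter values are illustrative, and this is not Rapidcalc's actual algorithm:

```python
def bolus(carbs_g, bg_mmol, carb_ratio, sensitivity, target_bg=6.0, iob=0.0):
    """Standard flexible-MDI bolus arithmetic (illustrative sketch):
    a carbohydrate dose plus a correction toward the target glucose,
    less insulin still active from earlier doses (IOB), never negative."""
    carb_dose = carbs_g / carb_ratio                  # units to cover the meal
    correction = (bg_mmol - target_bg) / sensitivity  # units to reach target
    return max(0.0, round(carb_dose + correction - iob, 1))

# 60 g carbs, BG 9.0 mmol/L, 1 U per 10 g carbs, 1 U lowers BG by 3.0 mmol/L
print(bolus(60, 9.0, carb_ratio=10, sensitivity=3.0))  # 7.0
```

Subtracting insulin on board and flooring at zero is one way calculators implement the hypoglycaemia protection the participants valued: stacking corrections on top of still-active insulin is a common cause of post-bolus lows.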

  17. [Informatics support for risk assessment and identification of preventive measures in small and micro-enterprises: occupational hazard datasheets].

    PubMed

    de Merich, D; Forte, Giulia

    2011-01-01

    Risk assessment is the fundamental process of an enterprise's prevention system and is the principal mandatory provision contained in the Health and Safety Law (Legislative Decree 81/2008) amended by Legislative Decree 106/2009. In order to properly comply with this obligation also in small-sized enterprises, the appropriate regulatory bodies should provide the enterprises with standardized tools and methods for identifying, assessing and managing risks. Our aim was to assist small and micro-enterprises (SMEs) in particular with risk assessment, by providing a flexible tool, standardized in the form of a datasheet, that can be updated with more detailed information on the various work contexts in Italy. Official efforts to provide Italian SMEs with information may initially make use of the findings of research conducted by ISPESL over the past 20 years, thanks in part to cooperation with other institutions (Regions, INAIL-National Insurance Institute for Occupational Accidents and Diseases), which have led to the creation of an information system on prevention consisting of numerous databases, both statistical and documental ("National System of Surveillance on fatal and serious accidents", "National System of Surveillance on work-related diseases", "Sector hazard profiles" database, "Solutions and Best Practices" database, "Technical Guidelines" database, "Training packages for prevention professionals in enterprises" database). With regard to evaluation criteria applicable within the enterprise, the possibility of combining traditional and uniform areas of assessment (by sector or by risk factor) with assessments by job/occupation has become possible thanks to the cooperation agreement made in 2009 by ISPESL, the ILO (International Labour Organisation) of Geneva and IIOSH (Israel Institute for Occupational Health and Hygiene) regarding the creation of an international database (HDODB) based on risk datasheets per occupation. 
The project sets out to assist small and micro-enterprises in particular with risk assessment, providing a flexible and standardized tool in the form of a datasheet that can be updated with more detailed information on the various work contexts in Italy. The model proposed by ISPESL selected the ILO's "Hazard Datasheet on Occupation" as an initial information tool to steer efforts to assess and manage hazards in small and micro-enterprises. In addition to being an internationally validated tool, the occupation datasheet has a very simple structure that is very effective in communicating and updating information in relation to the local context. In keeping with the logic of providing support to enterprises by means of a collaborative network among institutions, local supervisory services and social partners, standardised hazard assessment procedures should be, irrespective of any legal obligations, the preferred tools of an "updatable information system" capable of supporting improvements in the process of assessing and managing hazards in enterprises.

  18. HAEdb: a novel interactive, locus-specific mutation database for the C1 inhibitor gene.

    PubMed

    Kalmár, Lajos; Hegedüs, Tamás; Farkas, Henriette; Nagy, Melinda; Tordai, Attila

    2005-01-01

    Hereditary angioneurotic edema (HAE) is an autosomal dominant disorder characterized by episodic local subcutaneous and submucosal edema and is caused by the deficiency of the activated C1 esterase inhibitor protein (C1-INH or C1INH; approved gene symbol SERPING1). Published C1-INH mutations are represented in large universal databases (e.g., OMIM, HGMD), but these databases update their data rather infrequently, they are not interactive, and they do not allow searches according to different criteria. The HAEdb, a C1-INH gene mutation database (http://hae.biomembrane.hu) was created to contribute to the following expectations: 1) help the comprehensive collection of information on genetic alterations of the C1-INH gene; 2) create a database in which data can be searched and compared according to several flexible criteria; and 3) provide additional help in new mutation identification. The website uses MySQL, an open-source, multithreaded, relational database management system. The user-friendly graphical interface was written in the PHP web programming language. The website consists of two main parts, the freely browsable search function, and the password-protected data deposition function. Mutations of the C1-INH gene are divided in two parts: gross mutations involving DNA fragments >1 kb, and micro mutations encompassing all non-gross mutations. Several attributes (e.g., affected exon, molecular consequence, family history) are collected for each mutation in a standardized form. This database may facilitate future comprehensive analyses of C1-INH mutations and also provide regular help for molecular diagnostic testing of HAE patients in different centers.

  19. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed

    O'Neill, M A; Hilgetag, C C

    2001-08-29

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.
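The cost-function-driven restructuring described for CANTOR can be sketched with a generic simulated-annealing loop over candidate arrangements; the function names and the toy ordering task below are illustrative and do not reflect CANTOR's actual interfaces:

```python
import math, random

def anneal(state, cost, neighbor, steps=20000, t0=1.0, cooling=0.999, seed=0):
    """Generic stochastic optimization over candidate arrangements, in the
    spirit of CANTOR's user-defined cost functions (illustrative sketch).
    Worse candidates are occasionally accepted, which is what lets such
    methods cope with incomplete and inconsistent data."""
    rng = random.Random(seed)
    best, best_c = state, cost(state)
    cur, cur_c, t = state, best_c, t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature t cools.
        if c < cur_c or rng.random() < math.exp((cur_c - c) / max(t, 1e-9)):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= cooling
    return best, best_c

# Toy use: arrange items so that adjacent values are as similar as possible.
items = [9, 1, 8, 2, 7, 3]
cost = lambda s: sum(abs(a - b) for a, b in zip(s, s[1:]))
def swap(s, rng):
    i, j = rng.sample(range(len(s)), 2)
    s = list(s); s[i], s[j] = s[j], s[i]
    return s

arrangement, c = anneal(items, cost, swap)
print(arrangement, c)  # a low-cost ordering; the minimum for these values is 8
```

Swapping the cost function is all it takes to repurpose the same loop, which mirrors how a user-supplied cost function drives the structural analysis of the database content.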

  20. The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data.

    PubMed Central

    O'Neill, M A; Hilgetag, C C

    2001-01-01

    Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement DAISY, a digital automated identification system for the classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement. PMID:11545702

  1. Globalization, Polanyi, and the Chinese Yuan

    DTIC Science & Technology

    2007-12-01

international markets. The U.S. Bureau of Economic Analysis database and a doctorate study from the Centre for Strategic Economic Studies provides FDI...Archive/2004/Sep/23-184387.html (accessed June 12, 2007). 99 "2ND LD: APEC Finance Chiefs Paper over Forex Flexibility Issue," Kyodo News...108th Congress (2003): To authorize appropriate action if the negotiations with the People’s Republic of China. GovTrack.us ( database of federal

  2. Intelligent distributed medical image management

    NASA Astrophysics Data System (ADS)

    Garcia, Hong-Mei C.; Yun, David Y.

    1995-05-01

The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute-based and content-based retrievals.

  3. The plant phenological online database (PPODB): an online database for long-term phenological data.

    PubMed

    Dierenbach, Jonas; Badeck, Franz-W; Schaber, Jörg

    2013-09-01

We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) a flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  4. Gamma-Ray Burst Intensity Distributions

    NASA Technical Reports Server (NTRS)

    Band, David L.; Norris, Jay P.; Bonnell, Jerry T.

    2004-01-01

We use the lag-luminosity relation to calculate self-consistently the redshifts, apparent peak bolometric luminosities L(sub B1), and isotropic energies E(sub iso) for a large sample of BATSE bursts. We consider two different forms of the lag-luminosity relation; for both forms the median redshift for our burst database is 1.6. We model the resulting sample of burst energies with power law and Gaussian distributions, both of which are reasonable models. The power law model has an index of a = 1.76 plus or minus 0.05 (95% confidence) as opposed to the index of a = 2 predicted by the simple universal jet profile model; however, reasonable refinements to this model permit much greater flexibility in reconciling predicted and observed energy distributions.

  5. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.

    PubMed

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F

    2007-07-27

    We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy to use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.
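
The record above describes query access to a MySQL enzyme table keyed by EC number. A minimal sketch of that kind of lookup, using Python's stdlib `sqlite3` in place of MySQL and a hypothetical two-column schema (the real ExplorEnz schema is considerably richer):

```python
import sqlite3

# Hypothetical miniature of an enzyme-nomenclature table; illustrates the
# kind of EC-number search the ExplorEnz web interface performs. Neither
# the table layout nor the column names are taken from the actual database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (ec_num TEXT PRIMARY KEY, accepted_name TEXT)")
conn.executemany("INSERT INTO entry VALUES (?, ?)", [
    ("1.1.1.1", "alcohol dehydrogenase"),
    ("3.2.1.1", "alpha-amylase"),
])
# Simple query: all entries in EC class 1 (oxidoreductases).
rows = conn.execute(
    "SELECT ec_num, accepted_name FROM entry "
    "WHERE ec_num LIKE '1.%' ORDER BY ec_num"
).fetchall()
```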

  6. HOWDY: an integrated database system for human genome research

    PubMed Central

    Hirakawa, Mika

    2002-01-01

    HOWDY is an integrated database system for accessing and analyzing human genomic information (http://www-alis.tokyo.jst.go.jp/HOWDY/). HOWDY stores information about relationships between genetic objects and the data extracted from a number of databases. HOWDY consists of an Internet accessible user interface that allows thorough searching of the human genomic databases using the gene symbols and their aliases. It also permits flexible editing of the sequence data. The database can be searched using simple words and the search can be restricted to a specific cytogenetic location. Linear maps displaying markers and genes on contig sequences are available, from which an object can be chosen. Any search starting point identifies all the information matching the query. HOWDY provides a convenient search environment of human genomic data for scientists unsure which database is most appropriate for their search. PMID:11752279

  7. The dark side of going abroad: How broad foreign experiences increase immoral behavior.

    PubMed

    Lu, Jackson G; Quoidbach, Jordi; Gino, Francesca; Chakroff, Alek; Maddux, William W; Galinsky, Adam D

    2017-01-01

Because of the unprecedented pace of globalization, foreign experiences are increasingly common and valued. Past research has focused on the benefits of foreign experiences, including enhanced creativity and reduced intergroup bias. In contrast, the present work uncovers a potential dark side of foreign experiences: increased immoral behavior. We propose that broad foreign experiences (i.e., experiences in multiple foreign countries) foster not only cognitive flexibility but also moral flexibility. Using multiple methods (longitudinal, correlational, and experimental), 8 studies (N > 2,200) establish that broad foreign experiences can lead to immoral behavior by increasing moral relativism, the belief that morality is relative rather than absolute. The relationship between broad foreign experiences and immoral behavior was robust across a variety of cultural populations (anglophone, francophone), life stages (high school students, university students, MBA students, middle-aged adults), and 7 different measures of immorality. As individuals are exposed to diverse cultures, their moral compass may lose some of its precision. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Genomic region operation kit for flexible processing of deep sequencing data.

    PubMed

    Ovaska, Kristian; Lyly, Lauri; Sahu, Biswajyoti; Jänne, Olli A; Hautaniemi, Sampsa

    2013-01-01

Computational analysis of data produced in deep sequencing (DS) experiments is challenging due to large data volumes and requirements for flexible analysis approaches. Here, we present a mathematical formalism based on set algebra for frequently performed operations in DS data analysis to facilitate translation of biomedical research questions to language amenable for computational analysis. With the help of this formalism, we implemented the Genomic Region Operation Kit (GROK), which supports various DS-related operations such as preprocessing, filtering, file conversion, and sample comparison. GROK provides high-level interfaces for R, Python, Lua, and command line, as well as an extension C++ API. It supports major genomic file formats and allows storing custom genomic regions in efficient data structures such as red-black trees and SQL databases. To demonstrate the utility of GROK, we have characterized the roles of two major transcription factors (TFs) in prostate cancer using data from 10 DS experiments. GROK is freely available with a user guide from http://csbi.ltdk.helsinki.fi/grok/.
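
The set-algebra formalism mentioned above can be illustrated with two of its most common operations. A toy sketch under our own simplified representation (half-open (start, end) intervals on a single chromosome; this is an illustration, not GROK's actual API):

```python
# Intersection and union of genomic region sets, the kind of operations a
# set-algebra formalism for region data expresses. Regions are half-open
# (start, end) tuples on one chromosome.

def intersect(a, b):
    """All non-empty overlaps between regions in list a and list b."""
    out = []
    for s1, e1 in a:
        for s2, e2 in b:
            s, e = max(s1, s2), min(e1, e2)
            if s < e:  # non-empty overlap
                out.append((s, e))
    return out

def union(regions):
    """Merge overlapping regions into a disjoint, sorted region set."""
    merged = []
    for s, e in sorted(regions):
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged
```

A production system would back these operations with interval-friendly structures (as GROK does with red-black trees and SQL databases) rather than the quadratic scan above.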

  9. Mental sets in conduct problem youth with psychopathic features: entity versus incremental theories of intelligence.

    PubMed

    Salekin, Randall T; Lester, Whitney S; Sellers, Mary-Kate

    2012-08-01

The purpose of the current study was to examine the effect of a motivational intervention on conduct problem youth with psychopathic features. Specifically, the current study examined conduct problem youths' mental set (or theory) regarding intelligence (entity vs. incremental) upon task performance. We assessed 36 juvenile offenders with psychopathic features and tested whether providing them with two different messages regarding intelligence would affect their functioning on a task related to academic performance. The study employed a MANOVA design with two motivational conditions and three outcomes including fluency, flexibility, and originality. Results showed that youth with psychopathic features who were given a message that intelligence grows over time were more fluent and flexible than youth who were informed that intelligence is static. There were no significant differences between the groups in terms of originality. The implications of these findings are discussed including the possible benefits of interventions for adolescent offenders with conduct problems and psychopathic features. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  10. Physician behavioral adaptability: A model to outstrip a "one size fits all" approach.

    PubMed

    Carrard, Valérie; Schmid Mast, Marianne

    2015-10-01

    Based on a literature review, we propose a model of physician behavioral adaptability (PBA) with the goal of inspiring new research. PBA means that the physician adapts his or her behavior according to patients' different preferences. The PBA model shows how physicians infer patients' preferences and adapt their interaction behavior from one patient to the other. We claim that patients will benefit from better outcomes if their physicians show behavioral adaptability rather than a "one size fits all" approach. This literature review is based on a literature search of the PsycINFO(®) and MEDLINE(®) databases. The literature review and first results stemming from the authors' research support the validity and viability of parts of the PBA model. There is evidence suggesting that physicians are able to show behavioral flexibility when interacting with their different patients, that a match between patients' preferences and physician behavior is related to better consultation outcomes, and that physician behavioral adaptability is related to better consultation outcomes. Training of physicians' behavioral flexibility and their ability to infer patients' preferences can facilitate physician behavioral adaptability and positive patient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences

    PubMed Central

    Stephens, Susie M.; Chen, Jake Y.; Davidson, Marcel G.; Thomas, Shiby; Trute, Barry M.

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html PMID:15608287

  12. Building MapObjects attribute field in cadastral database based on the method of Jackson system development

    NASA Astrophysics Data System (ADS)

    Chen, Zhu-an; Zhang, Li-ting; Liu, Lu

    2009-10-01

ESRI's MapObjects GIS components are used in many cadastral information systems because of their small footprint and flexibility. In such systems, some cadastral information is stored directly in the cadastral database in MapObjects' shapefile format. However, MapObjects does not provide a function for building attribute fields in a map layer's attribute data file, so users cannot save analysis results to the cadastral database. This paper presents the design and implementation of an attribute-field-building function for MapObjects based on the Jackson system development method.

  13. Flexible Reporting of Clinical Data

    PubMed Central

    Andrews, Robert D.

    1987-01-01

    Two prototype methods have been developed to aid in the presentation of relevant clinical data: 1) an integrated report that displays results from a patient's computer-stored data and also allows manual entry of data, and 2) a graph program that plots results of multiple kinds of tests. These reports provide a flexible means of displaying data to help evaluate patient treatment. The two methods also explore ways of integrating the display of data from multiple components of the Veterans Administration's (VA) Decentralized Hospital Computer Program (DHCP) database.

  14. Flexible ligand docking using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Oshiro, C. M.; Kuntz, I. D.; Dixon, J. Scott

    1995-04-01

Two computational techniques have been developed to explore the orientational and conformational space of a flexible ligand within an enzyme. Both methods use the Genetic Algorithm (GA) to generate conformationally flexible ligands in conjunction with algorithms from the DOCK suite of programs to characterize the receptor site. The methods are applied to three enzyme-ligand complexes: dihydrofolate reductase-methotrexate, thymidylate synthase-phenolphthalein and HIV protease-thioketal haloperidol. Conformations and orientations close to the crystallographically determined structures are obtained, as well as alternative structures with low energy. The potential for the GA method to screen a database of compounds is also examined. A collection of ligands is evaluated simultaneously, rather than docking the ligands individually into the enzyme.

  15. Working with Specify in a Paleo-Geological Context

    NASA Astrophysics Data System (ADS)

    Molineux, A.; Thompson, A. C.; Appleton, L.

    2014-12-01

For geological collections with limited funding an open source relational database provides an opportunity to digitize specimens and related data. At the Non-vertebrate Paleontology Lab, a large mixed paleo and geological repository on a restricted budget, we opted for one such database, Specify. Initially created at the University of Kansas for neontological collections and based on a single computer, Specify has moved into the networked scene and will soon be web-based as Specify 7. We currently use the server version of Specify 6, networked to all computers in the lab each running a desktop client, often with six users at any one time. Along with improved access there have been great efforts to broaden the applicability of this database to other disciplines. Current developments are of great importance to us because they focus on the geological aspects of lithostratigraphy and chronostratigraphy and their relationship to other variables. Adoption of this software has required constant change as we move to take advantage of the great improvements. We enjoy the interaction with the developers and their willingness to listen and consider our issues. Here we discuss some of the ways in which we have fashioned Specify into a database that provides us with the flexibility that we need without removing the ability to share our data with other aggregators through accepted protocols. We discuss the customization of forms, the attachment of media and tracking of original media files, our efforts to incorporate geological specimens, and our plans to link individual specimen record GUIDs to IGSN numbers and thence to future connections to data derived from our specimens.

  16. A Look Under the Hood: How the JPL Tropical Cyclone Information System Uses Database Technologies to Present Big Data to Users

    NASA Astrophysics Data System (ADS)

    Knosp, B.; Gangl, M.; Hristova-Veleva, S. M.; Kim, R. M.; Li, P.; Turk, J.; Vu, Q. A.

    2015-12-01

The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data and model forecasts related to tropical cyclones. The TCIS has run a near-real-time (NRT) data portal during the North Atlantic hurricane season (typically June through October) each year since 2010. Data collected by the TCIS varies by type, format, contents, and frequency and is served to the user in two ways: (1) as image overlays on a virtual globe and (2) as derived output from a suite of analysis tools. In order to support these two functions, the data must be collected and then made searchable by criteria such as date, mission, product, pressure level, and geospatial region. Creating a database architecture that is flexible enough to manage, intelligently interrogate, and ultimately present this disparate data to the user in a meaningful way has been the primary challenge. The database solution for the TCIS has been to use a hybrid MySQL + Solr implementation. After testing other relational database and NoSQL solutions, such as PostgreSQL and MongoDB respectively, this solution has given the TCIS the best offerings in terms of query speed and result reliability. This database solution also supports the challenging (and memory-intensive) geospatial queries that are necessary to support analysis tools requested by users. Though hardly new technologies on their own, our implementation of MySQL + Solr had to be customized and tuned to be able to accurately store, index, and search the TCIS data holdings. In this presentation, we will discuss how we arrived at our MySQL + Solr database architecture, why it offers us the most consistently fast and reliable results, and how it supports our front end so that we can offer users a look into our "big data" holdings.
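
The search criteria listed above (date, mission, product, pressure level, region) amount to a metadata filter over the data holdings. A toy sketch of such a filter over a hypothetical record layout (in the real TCIS these predicates map to indexed MySQL queries plus Solr searches, not an in-memory scan):

```python
# Hypothetical granule records and a metadata filter in the style of the
# TCIS search criteria; field names are illustrative, not the TCIS schema.

def search(records, product=None, date=None, bbox=None):
    """Filter by product, date, and a (lat_min, lat_max, lon_min, lon_max) box."""
    hits = []
    for r in records:
        if product is not None and r["product"] != product:
            continue
        if date is not None and r["date"] != date:
            continue
        if bbox is not None:
            lat_min, lat_max, lon_min, lon_max = bbox
            if not (lat_min <= r["lat"] <= lat_max
                    and lon_min <= r["lon"] <= lon_max):
                continue
        hits.append(r)
    return hits

records = [
    {"product": "wind_speed", "date": "2015-08-01", "lat": 20.0, "lon": -60.0},
    {"product": "rain_rate", "date": "2015-08-01", "lat": 25.0, "lon": -70.0},
]
hits = search(records, product="wind_speed", bbox=(10.0, 30.0, -80.0, -50.0))
```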

  17. On the Future of Thermochemical Databases, the Development of Solution Models and the Practical Use of Computational Thermodynamics in Volcanology, Geochemistry and Petrology: Can Innovations of Modern Data Science Democratize an Oligarchy?

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.

    2014-12-01

Computational thermodynamics (CT) has now become an essential tool of petrologic and geochemical research. CT is the basis for the construction of phase diagrams, the application of geothermometers and geobarometers, the equilibrium speciation of solutions, the construction of pseudosections, calculations of mass transfer between minerals, melts and fluids, and it provides a means of estimating materials properties for the evaluation of constitutive relations in fluid dynamical simulations. The practical application of CT to Earth science problems requires data. Data on the thermochemical properties and the equation of state of relevant materials, and data on the relative stability and partitioning of chemical elements between phases as a function of temperature and pressure. These data must be evaluated and synthesized into a self-consistent collection of theoretical models and model parameters that is colloquially known as a thermodynamic database. Quantitative outcomes derived from CT rely on the existence, maintenance and integrity of thermodynamic databases. Unfortunately, the community is reliant on too few such databases, developed by a small number of research groups, and mostly under circumstances where refinement and updates to the database lag behind or are unresponsive to need. Given the increasing level of reliance on CT calculations, what is required is a paradigm shift in the way thermodynamic databases are developed, maintained and disseminated. They must become community resources, with flexible and accessible software interfaces that permit easy modification, while at the same time maintaining theoretical integrity and fidelity to the underlying experimental observations. Advances in computational and data science give us the tools and resources to address this problem, allowing CT results to be obtained at the speed of thought, and permitting geochemical and petrological intuition to play a key role in model development and calibration.

  18. An Introduction to MAMA (Meta-Analysis of MicroArray data) System.

    PubMed

    Zhang, Zhe; Fenstermacher, David

    2005-01-01

Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server side for the storage of microarray datasets collected from various resources. The client side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. The MAMA implementation will integrate several analytical methods, including meta-analysis, within an open-source framework that offers other developers the flexibility to plug in additional statistical algorithms.

  19. Integration of environmental simulation models with satellite remote sensing and geographic information systems technologies: case studies

    USGS Publications Warehouse

    Steyaert, Louis T.; Loveland, Thomas R.; Brown, Jesslyn F.; Reed, Bradley C.

    1993-01-01

Environmental modelers are testing and evaluating a prototype land cover characteristics database for the conterminous United States developed by the EROS Data Center of the U.S. Geological Survey and the University of Nebraska Center for Advanced Land Management Information Technologies. This database was developed from multitemporal, 1-kilometer advanced very high resolution radiometer (AVHRR) data for 1990 and various ancillary data sets such as elevation, ecological regions, and selected climatic normals. Several case studies using this database were analyzed to illustrate the integration of satellite remote sensing and geographic information systems technologies with land-atmosphere interactions models at a variety of spatial and temporal scales. The case studies are representative of contemporary environmental simulation modeling at local to regional levels in global change research, land and water resource management, and environmental risk assessment. The case studies feature land surface parameterizations for atmospheric mesoscale and global climate models; biogenic-hydrocarbon emissions models; distributed-parameter watershed and other hydrological models; and various ecological models such as ecosystem dynamics, biogeochemical cycles, ecotone variability, and equilibrium vegetation models. The case studies demonstrate the importance of multitemporal AVHRR data to develop and maintain a flexible, near-realtime land cover characteristics database. Moreover, such a flexible database is needed to derive various vegetation classification schemes, to aggregate data for nested models, to develop remote sensing algorithms, and to provide data on dynamic landscape characteristics. The case studies illustrate how such a database supports research on spatial heterogeneity, land use, sensitivity analysis, and scaling issues involving regional extrapolations and parameterizations of dynamic land processes within simulation models.

  20. Fast 3D shape screening of large chemical databases through alignment-recycling

    PubMed Central

    Fontaine, Fabien; Bolton, Evan; Borodina, Yulia; Bryant, Stephen H

    2007-01-01

Background Large chemical databases require fast, efficient, and simple ways of looking for similar structures. Although such tasks are now fairly well resolved for graph-based similarity queries, they remain an issue for 3D approaches, particularly for those based on 3D shape overlays. Inspired by a recent technique developed to compare molecular shapes, we designed a hybrid methodology, alignment-recycling, that enables efficient retrieval and alignment of structures with similar 3D shapes. Results Using a dataset of more than one million PubChem compounds of limited size (< 28 heavy atoms) and flexibility (< 6 rotatable bonds), we obtained a set of a few thousand diverse structures covering entirely the 3D shape space of the conformers of the dataset. Transformation matrices gathered from the overlays between these diverse structures and the 3D conformer dataset allowed us to drastically (100-fold) reduce the CPU time required for shape overlay. The alignment-recycling heuristic produces results consistent with de novo alignment calculation, with better than 80% hit list overlap on average. Conclusion Overlay-based 3D methods are computationally demanding when searching large databases. Alignment-recycling reduces the CPU time to perform shape similarity searches by breaking the alignment problem into three steps: selection of diverse shapes to describe the database shape-space; overlay of the database conformers to the diverse shapes; and non-optimized overlay of query and database conformers using common reference shapes. The precomputation, required by the first two steps, is a significant cost of the method; however, once performed, querying is two orders of magnitude faster. Extensions and variations of this methodology, for example, to handle more flexible and larger small molecules are discussed. PMID:17880744
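
The core trick of alignment-recycling is transform composition: if a query and a database conformer have each been overlaid on the same reference shape, their mutual alignment follows from the cached transformation matrices without a new overlay optimization. A minimal 2D sketch of that composition (our illustration using homogeneous rotation matrices and made-up angles; the paper works with full 3D shape overlays):

```python
import math

# If query Q and database conformer D were each aligned to reference R, then
# T(Q->D) = inverse(T(D->R)) composed with T(Q->R). With pure rotations the
# inverse is simply a rotation by the negated angle.

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(theta):
    """Homogeneous 2D rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Cached alignments from the precomputation step (angles are invented).
t_q_to_r = rotation(0.5)   # query -> reference
t_d_to_r = rotation(1.2)   # database conformer -> reference
# Recycled alignment query -> conformer: compose cached transforms only.
t_q_to_d = matmul(rotation(-1.2), t_q_to_r)  # equals rotation(0.5 - 1.2)
```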

  1. Di-Isobutyl Phthalate (DIBP) Hazard Identification [Abstract ...

    EPA Pesticide Factsheets

The hazard potential for DIBP is being evaluated as part of EPA’s Integrated Risk Information System (IRIS) Toxicological Review. DIBP is a plasticizer that confers flexibility and durability in industrial and consumer products. A literature search identified a relatively small epidemiology and animal toxicology database for DIBP. The epidemiological database includes studies that assessed the relationship between urinary concentrations of the DIBP metabolite mono-isobutyl phthalate (MIBP) and developmental, neurodevelopmental, immunological or breast cancer outcomes. There is limited support for associations between MIBP and inflammatory biomarker levels and decreased masculine play behavior. The animal toxicological database includes studies that assessed “phthalate syndrome” male reproductive developmental endpoints after in utero DIBP exposure. Data from the largest developmental study, Saillenfait et al. (2008), show changes in anogenital distance, male reproductive organ weights, and litter incidence of phthalate syndrome endpoints in the lower dose range after early gestational exposure. Other studies observed increased fetal mortality, male postnatal and adult growth decrements, decreased fetal testicular testosterone and changes in expression of genes in androgen production pathways. The developmental reproductive effects observed in animal studies are consistent with the reduced testicular testosterone mode of action that is well-characterized

  2. DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY ...

    EPA Pesticide Factsheets

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, SAR model development, or building of chemical relational databases (CRD). The Distributed Structure-Searchable Toxicity (DSSTox) Public Database Network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: 1) to adopt and encourage the use of a common standard file format (SDF) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; 2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data s

  3. PGSB/MIPS PlantsDB Database Framework for the Integration and Analysis of Plant Genome Data.

    PubMed

    Spannagl, Manuel; Nussbaumer, Thomas; Bader, Kai; Gundlach, Heidrun; Mayer, Klaus F X

    2017-01-01

Plant Genome and Systems Biology (PGSB), formerly Munich Information Center for Protein Sequences (MIPS) PlantsDB, is a database framework for the integration and analysis of plant genome data, developed and maintained for more than a decade. Major components of the framework are genome databases and analysis resources focusing on individual (reference) genomes and providing flexible and intuitive access to data. Another main focus is the integration of genomes from both model and crop plants to form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer for exploring conserved gene order (synteny). Data exchange and integrated search functionality across many plant genome databases are provided within the transPLANT project.

  4. Small Business Innovations (Automated Information)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Bruce G. Jackson & Associates Document Director is an automated tool that combines word processing and database management technologies to offer the flexibility and convenience of text processing with the linking capability of database management. Originally developed for NASA, it provides a means to collect and manage information associated with requirements development. The software system was used by NASA in the design of the Assured Crew Return Vehicle, as well as by other government and commercial organizations including the Southwest Research Institute.

  5. Photo-z-SQL: Photometric redshift estimation framework

    NASA Astrophysics Data System (ADS)

    Beck, Róbert; Dobos, László; Budavári, Tamás; Szalay, Alexander S.; Csabai, István

    2017-04-01

    Photo-z-SQL is a flexible template-based photometric redshift estimation framework that can be seamlessly integrated into a SQL database (or DB) server and executed on demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and uses the computational capabilities of DB hardware. Photo-z-SQL performs both maximum likelihood and Bayesian estimation and handles inputs of variable photometric filter sets and corresponding broad-band magnitudes.
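The template-fitting step behind such a framework can be sketched in a few lines. This is a minimal maximum-likelihood illustration, not Photo-z-SQL's actual code: the grid layout, the three-band photometry, and the free additive magnitude offset are assumptions made for the example.

```python
def photo_z_ml(mags, mag_errs, template_grid):
    """Pick the redshift whose template magnitudes minimize chi-square
    against the observed photometry (free additive offset fitted per template)."""
    best_z, best_chi2 = None, float("inf")
    w = [1.0 / e ** 2 for e in mag_errs]
    for z, model in template_grid.items():
        # Analytic fit of a grey (additive-in-magnitude) amplitude offset.
        offset = sum(wi * (m - t) for wi, m, t in zip(w, mags, model)) / sum(w)
        chi2 = sum(wi * (m - t - offset) ** 2 for wi, m, t in zip(w, mags, model))
        if chi2 < best_chi2:
            best_z, best_chi2 = z, chi2
    return best_z, best_chi2

# Hypothetical 3-band template grid: redshift -> synthetic magnitudes.
grid = {0.1: [22.0, 21.5, 21.2], 0.5: [23.1, 22.0, 21.0], 1.0: [24.0, 22.5, 20.8]}
print(photo_z_ml([23.0, 21.9, 20.9], [0.1, 0.1, 0.1], grid))
```

Running this inside the database server, as the abstract describes, avoids moving the photometric catalogue out of the database just to evaluate such a loop per object.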

  6. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    NASA Technical Reports Server (NTRS)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

This paper will present the current concept of using Extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct, geographically dispersed laboratories will have local database tools with an XML database. Each of these laboratories' database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow individual items to be upgraded, added, or changed without affecting the entire ground system. XML should also allow the import and export formats needed by the various elements to be altered, the verification/validation of each database item to be tracked, many organizations to provide database inputs, and the many existing database processes to be merged into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. 
Often this leads to a greater reliance on Commercial-Off-The-Shelf (COTS) software, which can be limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the development and support of custom code over the 19-year mission period were forecast to be higher than the total licensing costs. A group also looked at reusing existing database tools and formats. Had the JWST database already been in a mature state, reuse would have made sense; but with the database still needing to handle the addition of new types of command and telemetry structures, the definition of new spacecraft systems, and input from and export to systems that have not yet been defined, XML provided the desired flexibility. It remains to be determined whether the XML database will reduce the overall cost for the JWST mission.
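A toy illustration of why XML suits such a database: records stay self-describing, and new fields can be added without breaking existing readers. The element and attribute names below are invented for the example, not the actual JWST database schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical telemetry-definition record; names are illustrative only.
record = """
<telemetry mnemonic="TEMP_A1">
  <description>Primary mirror segment temperature</description>
  <units>K</units>
  <limits low="30.0" high="55.0"/>
</telemetry>
"""

elem = ET.fromstring(record)
mnemonic = elem.get("mnemonic")              # attribute lookup
low = float(elem.find("limits").get("low"))  # nested element, typed on read
print(mnemonic, low)
```

A laboratory tool that only knows `mnemonic` and `limits` keeps working if a later revision adds, say, a calibration element, which is the upgrade-without-breaking-the-ground-system property the abstract emphasizes.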

  7. Exchange, interpretation, and database-search of ion mobility spectra supported by data format JCAMP-DX

    NASA Technical Reports Server (NTRS)

    Baumback, J. I.; Davies, A. N.; Vonirmer, A.; Lampen, P. H.

    1995-01-01

To assist peak assignment in ion mobility spectrometry it is important to have quality reference data. The reference collection should be stored in a database system that can be searched by spectral or substance information. We propose to build such a database customized for ion mobility spectra. To start, it is important to quickly reach a critical mass of data in the collection; we therefore wish to obtain as many spectra, together with their IMS parameters, as possible. Spectra suppliers will be rewarded for their participation with access to the database. To make data exchange between users and the system administration possible, it is important to define a file format tailored to the requirements of ion mobility spectra. The format should be computer-readable and flexible enough for extensive comments to be included. In this document we propose such a data exchange format, and we invite comments on it. For international data exchange it is important to have a standard format. We propose to base its definition on the JCAMP-DX protocol, which was developed for the exchange of infrared spectra. This standard, maintained by the Joint Committee on Atomic and Molecular Physical Data, is of a flexible design. The aim of this paper is to adapt JCAMP-DX to the special requirements of ion mobility spectra.
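JCAMP-DX files are built from labelled data records of the form `##LABEL=value`, which is what makes the format both computer-readable and open to extensive comments. A minimal header parser might look like the sketch below; real JCAMP-DX parsing also handles data tables, compression schemes, and user-defined (`##$...`) labels.

```python
def parse_jcamp_header(text):
    """Parse the labelled data records ('##LABEL=value') of a JCAMP-DX-style
    header into a dict. Inline '$$' comments are stripped first."""
    fields = {}
    for line in text.splitlines():
        line = line.split("$$")[0].strip()        # drop inline comments
        if line.startswith("##") and "=" in line:
            label, _, value = line[2:].partition("=")
            fields[label.strip().upper()] = value.strip()
    return fields

header = """##TITLE=example ion mobility spectrum
##JCAMP-DX=4.24
##DATA TYPE=ION MOBILITY SPECTRUM   $$ proposed extension
##ORIGIN=demo lab
"""
print(parse_jcamp_header(header)["TITLE"])
```

The `DATA TYPE` value shown is a hypothetical extension of the kind the paper proposes; the labelled-record syntax itself is standard JCAMP-DX.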

  8. FLASHFLOOD: A 3D Field-based similarity search and alignment method for flexible molecules

    NASA Astrophysics Data System (ADS)

    Pitman, Michael C.; Huber, Wolfgang K.; Horn, Hans; Krämer, Andreas; Rice, Julia E.; Swope, William C.

    2001-07-01

    A three-dimensional field-based similarity search and alignment method for flexible molecules is introduced. The conformational space of a flexible molecule is represented in terms of fragments and torsional angles of allowed conformations. A user-definable property field is used to compute features of fragment pairs. Features are generalizations of CoMMA descriptors (Silverman, B.D. and Platt, D.E., J. Med. Chem., 39 (1996) 2129.) that characterize local regions of the property field by its local moments. The features are invariant under coordinate system transformations. Features taken from a query molecule are used to form alignments with fragment pairs in the database. An assembly algorithm is then used to merge the fragment pairs into full structures, aligned to the query. Key to the method is the use of a context adaptive descriptor scaling procedure as the basis for similarity. This allows the user to tune the weights of the various feature components based on examples relevant to the particular context under investigation. The property fields may range from simple, phenomenological fields, to fields derived from quantum mechanical calculations. We apply the method to the dihydrofolate/methotrexate benchmark system, and show that when one injects relevant contextual information into the descriptor scaling procedure, better results are obtained more efficiently. We also show how the method works and include computer times for a query from a database that represents approximately 23 million conformers of seventeen flexible molecules.
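The key property of the features described above is invariance under coordinate-system transformations. A toy version of that idea, computing a rotation-invariant moment of a scalar property sampled at 3-D points, is sketched below; this illustrates the principle only and is not the CoMMA descriptors themselves.

```python
def local_moments(points, values):
    """Zeroth moment, centroid, and the trace of the second-moment matrix of a
    scalar property sampled at 3-D points. The trace is invariant under
    rotations of the coordinate frame - the kind of frame-independent
    quantity field-based descriptors are built from."""
    m0 = sum(values)
    centroid = [sum(v * p[i] for p, v in zip(points, values)) / m0
                for i in range(3)]
    m2_trace = sum(v * sum((p[i] - centroid[i]) ** 2 for i in range(3))
                   for p, v in zip(points, values))
    return m0, centroid, m2_trace

pts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
vals = [1.0, 1.0, 1.0, 1.0]
m0, centroid, trace = local_moments(pts, vals)
# Rotating the points 90 degrees about z leaves the invariants unchanged.
pts_rot = [[0, 0, 0], [0, 1, 0], [-1, 0, 0], [0, 0, 1]]
print(m0, trace)
```

Because such quantities do not depend on how a fragment pair happens to be oriented, they can be compared directly between a query molecule and database entries before any alignment is attempted.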

  9. LS-align: an atom-level, flexible ligand structural alignment algorithm for high-throughput virtual screening.

    PubMed

    Hu, Jun; Liu, Zi; Yu, Dong-Jun; Zhang, Yang

    2018-02-15

Sequence-order independent structural comparison, also called structural alignment, of small ligand molecules is often needed for computer-aided virtual drug screening. Although many ligand structure alignment programs have been proposed, most of them build the alignments based on rigid-body shape comparison, which cannot provide atom-specific alignment information nor allow structural variation; both abilities are critical to efficient high-throughput virtual screening. We propose a novel ligand comparison algorithm, LS-align, to generate fast and accurate atom-level structural alignments of ligand molecules, through an iterative heuristic search of the target function that combines inter-atom distance with mass and chemical bond comparisons. LS-align contains two modules, Rigid-LS-align and Flexi-LS-align, designed for rigid-body and flexible alignments, respectively, where a ligand-size-independent, statistics-based scoring function is developed to evaluate the similarity of ligand molecules relative to random ligand pairs. Large-scale benchmark tests are performed on prioritizing chemical ligands of 102 protein targets involving 1,415,871 candidate compounds from the DUD-E (Database of Useful Decoys: Enhanced) database, where LS-align achieves an average enrichment factor (EF) of 22.0 at the 1% cutoff and an AUC score of 0.75, which are significantly higher than those of other state-of-the-art methods. Detailed data analyses show that the advanced performance is mainly attributable to the design of the target function, which combines structural and chemical information to enhance the sensitivity of recognizing subtle differences between ligand molecules, and to the introduction of structural flexibility, which helps capture the conformational changes induced by ligand-receptor binding interactions. These data demonstrate a new avenue to improve virtual screening efficiency through the development of sensitive ligand structural alignments.
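The enrichment factor quoted above has a simple definition: the active rate among the top-scoring fraction of the ranked database, divided by the active rate overall. A sketch, with a toy ranking invented for the example:

```python
def enrichment_factor(scores, is_active, fraction=0.01):
    """EF at a fractional cutoff: active rate among the top-scoring fraction
    of the ranked list, divided by the active rate in the whole database."""
    ranked = sorted(zip(scores, is_active), key=lambda p: -p[0])
    n_top = max(1, int(len(ranked) * fraction))
    top_hits = sum(active for _, active in ranked[:n_top])
    total_hits = sum(is_active)
    return (top_hits / n_top) / (total_hits / len(ranked))

# Toy screen: 100 compounds, 2 actives, one of them ranked first.
scores = [1.0] + [0.5 - i * 0.001 for i in range(99)]
actives = [1] + [0] * 49 + [1] + [0] * 49
print(enrichment_factor(scores, actives, fraction=0.01))
```

An EF of 22.0 at the 1% cutoff, as reported, means LS-align's top 1% is 22 times richer in true actives than a random selection would be.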
http://zhanglab.ccmb.med.umich.edu/LS-align/. njyudj@njust.edu.cn or zhng@umich.edu. Supplementary data are available at Bioinformatics online.

  10. ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature

    PubMed Central

    McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry BF; Tipton, Keith F

    2007-01-01

Background We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. Description The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy-to-use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at . The data are available for download as SQL and XML files via FTP. Conclusion ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List. PMID:17662133

  11. Djeen (Database for Joomla!'s Extensible Engine): a research information management system for flexible multi-technology project administration.

    PubMed

    Stahl, Olivier; Duvergey, Hugo; Guille, Arnaud; Blondin, Fanny; Vecchio, Alexandre Del; Finetti, Pascal; Granjeaud, Samuel; Vigy, Oana; Bidaut, Ghislain

    2013-06-06

With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. We developed Djeen (Database for Joomla!'s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing heterogeneous data within the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material.

  12. Djeen (Database for Joomla!’s Extensible Engine): a research information management system for flexible multi-technology project administration

    PubMed Central

    2013-01-01

Background With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. Findings We developed Djeen (Database for Joomla!’s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing heterogeneous data within the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Conclusion Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material. PMID:23742665

  13. The plant phenological online database (PPODB): an online database for long-term phenological data

    NASA Astrophysics Data System (ADS)

    Dierenbach, Jonas; Badeck, Franz-W.; Schaber, Jörg

    2013-09-01

We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) a flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  14. SPSmart: adapting population based SNP genotype databases for fast and comprehensive web access.

    PubMed

    Amigo, Jorge; Salas, Antonio; Phillips, Christopher; Carracedo, Angel

    2008-10-10

In the last five years large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10^9 genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format. 
In addition, full numerical description of the data is output in statistical results panels that include common population genetics metrics such as heterozygosity, Fst and In.
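Two of the population genetics metrics named above, expected heterozygosity and FST, follow directly from per-population allele frequencies. A minimal sketch (equal population weights assumed; SPSmart's own estimators may differ):

```python
def expected_het(freqs):
    """Expected heterozygosity for one population: 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def fst(pop_freqs):
    """Wright's FST from per-population allele frequencies, equal weights:
    (H_T - mean H_S) / H_T, with H_T from the pooled mean frequencies."""
    n_alleles = len(pop_freqs[0])
    mean_freqs = [sum(pop[i] for pop in pop_freqs) / len(pop_freqs)
                  for i in range(n_alleles)]
    h_t = expected_het(mean_freqs)
    h_s = sum(expected_het(pop) for pop in pop_freqs) / len(pop_freqs)
    return (h_t - h_s) / h_t

# Two populations, one biallelic SNP.
print(fst([[0.9, 0.1], [0.5, 0.5]]))
```

Pre-computing such indices in the data mart, rather than on each request, is what lets the web interface combine populations and redraw summaries quickly.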

  15. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    PubMed

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
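The hub pattern described, joining per-organism annotations through shared ortholog groups, can be illustrated without any RDF machinery. The group and gene identifiers below are invented for the example; MBGD's actual data is served as RDF through a SPARQL endpoint.

```python
# Hypothetical ortholog group used as a hub; gene IDs and annotations invented.
ortholog_groups = {"OG0001": ["eco:b0002", "bsu:BSU22210"]}
annotations = {
    "eco:b0002": {"organism": "Escherichia coli", "go": "GO:0009088"},
    "bsu:BSU22210": {"organism": "Bacillus subtilis", "go": "GO:0009088"},
}

def linked_annotations(group_id):
    """Join annotations from different organisms through one ortholog group."""
    return [annotations[gene] for gene in ortholog_groups[group_id]]

organisms = {a["organism"] for a in linked_annotations("OG0001")}
print(sorted(organisms))
```

In the RDF setting the same join is a graph traversal from an `OrthO` group node to its member genes and on to taxonomy or Gene Ontology nodes, expressed as a SPARQL query instead of dictionary lookups.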

  16. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
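The core idea, expressing data selection as database queries rather than ad-hoc file parsing, can be sketched with an in-memory SQLite database standing in for the open-source DBMS the authors used; the table and column names here are illustrative, not the paper's schema.

```python
import sqlite3

# In-memory database holding per-subject activation estimates.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE activation (
    subject TEXT, region TEXT, condition TEXT, beta REAL)""")
con.executemany("INSERT INTO activation VALUES (?, ?, ?, ?)", [
    ("s01", "STG", "speech", 1.2),
    ("s01", "STG", "rest", 0.1),
    ("s02", "STG", "speech", 0.9),
    ("s02", "STG", "rest", 0.2),
])
# A step of the analysis expressed as a query instead of file parsing.
row = con.execute("""SELECT AVG(beta) FROM activation
                     WHERE region = 'STG' AND condition = 'speech'""").fetchone()
print(row[0])
```

The same query runs unchanged whether the table holds four rows or millions, which is what makes the database both the sharing format and an integral part of the analysis.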

  17. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  18. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  19. Searching Across the International Space Station Databases

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; McDermott, William J.; Smith, Ernest E.; Bell, David G.; Gurram, Mohana

    2007-01-01

Data access in the enterprise generally requires us to combine data from different sources and different formats. It is thus advantageous to focus on the intersection of the knowledge across sources and domains; keeping irrelevant knowledge around only serves to make the integration more unwieldy and complicated than necessary. This paper proposes a context search over multiple domains that uses context-sensitive queries to support disciplined manipulation of domain knowledge resources. The objective of a context search is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The search formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper demonstrates a new paradigm in the composition of information for enterprise applications. In particular, it discusses an approach to achieving data integration across multiple sources in a manner that does not require heavy investment in database and middleware maintenance. This lean approach to integration leads to cost-effectiveness and scalability of data integration with an underlying schemaless object-relational database management system. This highly scalable, information-on-demand framework, called NX-Search, is an implementation of an information system built on NETMARK. NETMARK is a flexible, high-throughput open database integration framework for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML used widely at the National Aeronautics and Space Administration (NASA) and in industry.

  20. The Strabo digital data system for Structural Geology and Tectonics

    NASA Astrophysics Data System (ADS)

    Tikoff, Basil; Newman, Julie; Walker, J. Doug; Williams, Randy; Michels, Zach; Andrews, Joseph; Bunse, Emily; Ash, Jason; Good, Jessica

    2017-04-01

We are developing the Strabo data system for the structural geology and tectonics community. The data system will allow researchers to share primary data, apply new types of analytical procedures (e.g., statistical analysis), facilitate interaction with other geology communities, and allow new types of science to be done. The data system is based on a graph database rather than a relational database, to increase flexibility and allow geologically realistic relationships between observations and measurements. Development is occurring on: 1) a field-based application that runs on iOS and Android mobile devices and can function in either internet-connected or disconnected environments; and 2) a desktop system that runs only in connected settings and directly addresses the back-end database. The field application also makes extensive use of images, such as photos or sketches, which can be hierarchically arranged with encapsulated field measurements/observations across all scales. The system also accepts Shapefile, GeoJSON, and KML formats made in ArcGIS and QGIS, and will allow export to these formats as well. Strabo uses two main concepts to organize the data: Spots and Tags. A Spot is any observation that characterizes a specific area. Below GPS resolution, a Spot can be tied to an image (outcrop photo, thin section, etc.). Spots are related in a purely spatial manner (one Spot encloses another Spot, which encloses another, etc.). Tags provide a linkage between conceptually related Spots. Together, this organization works seamlessly with the workflow of most geologists. We are expanding this effort to include microstructural data, as well as the disciplines of sedimentology and petrology.
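The Spot/Tag organization can be sketched as two small structures: a purely spatial containment chain, plus conceptual tag sets that cut across it. The field names here are illustrative, not the Strabo schema.

```python
# Illustrative Spot/Tag structures (not the Strabo schema).
spots = {
    "outcrop-1": {"parent": None, "strike": 40},
    "photo-1": {"parent": "outcrop-1"},
    "thin-section-1": {"parent": "photo-1", "fabric": "mylonitic"},
}
tags = {"shear-zone": {"outcrop-1", "thin-section-1"}}  # conceptual links

def enclosing_spots(name):
    """Walk the purely spatial containment chain from a Spot to the top level."""
    chain = []
    while spots[name]["parent"] is not None:
        name = spots[name]["parent"]
        chain.append(name)
    return chain

print(enclosing_spots("thin-section-1"))
```

A graph database fits this naturally: containment and tagging are just two edge types between Spot nodes, so new relationship kinds can be added without reshaping a fixed relational schema.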

  1. CRYSTMET—The NRCC Metals Crystallographic Data File

    PubMed Central

    Wood, Gordon H.; Rodgers, John R.; Gough, S. Roger; Villars, Pierre

    1996-01-01

    CRYSTMET is a computer-readable database of critically evaluated crystallographic data for metals (including alloys, intermetallics and minerals) accompanied by pertinent chemical, physical and bibliographic information. It currently contains about 60 000 entries and covers the literature exhaustively from 1913. Scientific editing of the abstracted entries, consisting of numerous automated and manual checks, is done to ensure consistency with related, previously published studies, to assign structure types where necessary and to help guarantee the accuracy of the data and related information. Analyses of the entries and their distribution across key journals as a function of time show interesting trends in the complexity of the compounds studied as well as in the elements they contain. Two applications of CRYSTMET are the identification of unknowns and the prediction of properties of materials. CRYSTMET is available either online or via license of a private copy from the Canadian Scientific Numeric Database Service (CAN/SND). The indexed online search and analysis system is easy and economical to use yet fast and powerful. Development of a new system is under way combining the capabilities of ORACLE with the flexibility of a modern interface based on the Netscape browsing tool. PMID:27805157

  2. Data collection and population of the database (The DSS and RDSSP).

    DOT National Transportation Integrated Search

    2014-11-01

    This study was initiated to collect materials and pavement performance data on a minimum of : 100 highway test sections around the state of Texas, incorporating both flexible pavements and : overlays. Besides being used to calibrate and validate mech...

  3. CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.

    PubMed

    Hallin, Peter F; Ussery, David W

    2004-12-12

Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and results from bioinformatic analysis. More than 150 sequenced bacterial genomes are now available, and comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses presented on the CBS Genome Atlas Web page. Complex analyses, changing methods and the frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution is a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4, to present dynamic web content for users outside the center. This solution is tightly fitted to the existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups to solve database issues. A web-based user interface which is dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/. 
This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.
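Two of the basic per-genome properties mentioned, AT content and base skew, are straightforward to compute. A sketch of that kind of calculation (windowed GC skew shown; the window size and sequences are toy values):

```python
def at_content(seq):
    """Fraction of A/T bases - one of the basic per-genome properties stored."""
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

def gc_skew(seq, window):
    """(G - C) / (G + C) over non-overlapping windows: a base-skew profile of
    the kind the Atlas precomputes along each chromosome."""
    seq = seq.upper()
    skews = []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window]
        g, c = w.count("G"), w.count("C")
        skews.append((g - c) / (g + c) if g + c else 0.0)
    return skews

print(at_content("ATGCATGC"), gc_skew("GGGGCCCC", 4))
```

Storing such precomputed profiles per genome, rather than recomputing them on demand, is what the dynamic database layout described above is designed to accommodate as methods change and new models are added.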

  4. Flexible retrieval: When true inferences produce false memories.

    PubMed

    Carpenter, Alexis C; Schacter, Daniel L

    2017-03-01

    Episodic memory involves flexible retrieval processes that allow us to link together distinct episodes, make novel inferences across overlapping events, and recombine elements of past experiences when imagining future events. However, the same flexible retrieval and recombination processes that underpin these adaptive functions may also leave memory prone to error or distortion, such as source misattributions in which details of one event are mistakenly attributed to another related event. To determine whether the same recombination-related retrieval mechanism supports both successful inference and source memory errors, we developed a modified version of an associative inference paradigm in which participants encoded everyday scenes comprised of people, objects, and other contextual details. These scenes contained overlapping elements (AB, BC) that could later be linked to support novel inferential retrieval regarding elements that had not appeared together previously (AC). Our critical experimental manipulation concerned whether contextual details were probed before or after the associative inference test, thereby allowing us to assess whether (a) false memories increased for successful versus unsuccessful inferences, and (b) any such effects were specific to after compared with before participants received the inference test. In each of 4 experiments that used variants of this paradigm, participants were more susceptible to false memories for contextual details after successful than unsuccessful inferential retrieval, but only when contextual details were probed after the associative inference test. These results suggest that the retrieval-mediated recombination mechanism that underlies associative inference also contributes to source misattributions that result from combining elements of distinct episodes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Preventive care utilization: Association with individual- and workgroup-level policy and practice perceptions.

    PubMed

    Sabbath, Erika L; Sparer, Emily H; Boden, Leslie I; Wagner, Gregory R; Hashimoto, Dean M; Hopcia, Karen; Sorensen, Glorian

    2018-06-01

    Preventive medical care may reduce downstream medical costs and reduce population burden of disease. However, although social, demographic, and geographic determinants of preventive care have been studied, there is little information about how the workplace affects preventive care utilization. This study examines how four types of organizational policies and practices (OPPs) are associated with individual workers' preventive care utilization. We used data collected in 2012 from 838 hospital patient care workers, grouped in 84 patient care units at two hospitals in Boston. Via survey, we assessed individuals' perceptions of four types of OPPs on their work units. We linked the survey data to a database containing detailed information on medical expenditures. Using multilevel models, we tested whether individual-level perceptions, workgroup-average perceptions, and their combination were associated with individual workers' preventive care utilization (measured by number of preventive care encounters over a two-year period). Adjusting for worker characteristics, higher individual-level perceptions of workplace flexibility were associated with greater preventive care utilization. Higher average unit-level perceptions of people-oriented culture, ergonomic practices, and flexibility were associated with greater preventive care utilization. Overall, we find that workplace policies and practices supporting flexibility, ergonomics, and people-oriented culture are associated with positive preventive care-seeking behavior among workers, with some policies and practices operating at the individual level and some at the group level. Improving the work environment could impact employers' health-related expenditures and improve workers' health-related quality of life. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Creditability of Shutdown Offsets at Sonoco Flexible Packaging

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  7. United States Air Force Summer Research Program 1991. Graduate Student Research Program (GSRP) Reports. Volume 6. Armstrong Laboratory, Wilford Hall Medical Center

    DTIC Science & Technology

    1992-01-09

    necrosis and thus maintain viability during acute conditions of ischemia and compartmental syndrome. It is not known, however, if HBO will continue...adds considerable incentive for flexible database design. Adding to the complexity of the database are emitter sector coverage, radiating power, and...rather, it supplements the time-weighted average (TWA) limit where there are recognized acute effects from a substance whose toxic effects are

  8. MIPS plant genome information resources.

    PubMed

    Spannagl, Manuel; Haberer, Georg; Ernst, Rebecca; Schoof, Heiko; Mayer, Klaus F X

    2007-01-01

    The Munich Information Center for Protein Sequences (MIPS) has been involved in maintaining plant genome databases since the Arabidopsis thaliana genome project. Genome databases and analysis resources have focused on individual genomes and aim to provide flexible and maintainable data sets for model plant genomes as a backbone against which experimental data, for example from high-throughput functional genomics, can be organized and evaluated. In addition, model genomes also form a scaffold for comparative genomics, and much can be learned from genome-wide evolutionary studies.

  9. A flexible computer aid for conceptual design based on constraint propagation and component-modeling. [of aircraft in three dimensions

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1988-01-01

    The Rubber Airplane program, which combines two symbolic processing techniques with a component-based database of design knowledge, is proposed as a computer aid for conceptual design. Using object-oriented programming, programs are organized around the objects and behavior to be simulated, and using constraint propagation, declarative statements designate mathematical relationships among all the equation variables. It is found that the additional level of organizational structure resulting from the arrangement of the design information in terms of design components provides greater flexibility and convenience.

  10. 3DSDSCAR--a three dimensional structural database for sialic acid-containing carbohydrates through molecular dynamics simulation.

    PubMed

    Veluraja, Kasinadar; Selvin, Jeyasigamani F A; Venkateshwari, Selvakumar; Priyadarzini, Thanu R K

    2010-09-23

    The inherent flexibility and lack of strong intramolecular interactions of oligosaccharides demand the use of theoretical methods for their structural elucidation. In spite of the developments of theoretical methods, not much research on glycoinformatics has been done so far when compared to bioinformatics research on proteins and nucleic acids. We have developed a three-dimensional structural database for sialic acid-containing carbohydrates (3DSDSCAR). This is an open-access database that provides 3D structural models of a given sialic acid-containing carbohydrate. At present, 3DSDSCAR contains 60 conformational models, belonging to 14 different sialic acid-containing carbohydrates, deduced through 10 ns molecular dynamics (MD) simulations. The database is available at the URL: http://www.3dsdscar.org. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database

    NASA Technical Reports Server (NTRS)

    Laher, Russ; Rector, John

    2004-01-01

    Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored procedures written in Informix SPL (Stored Procedure Language). The new software is also more flexible because the ensemble-creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
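    The table-driven pattern described above can be sketched in a few lines. The example below is an illustrative sketch using Python's sqlite3 rather than Informix, and the table and column names are invented, not the actual SSC schema: the point is that the grouping rule lives in a database table, so changing how ensembles are formed requires no code change.

```python
import sqlite3

# Hypothetical schema: images to be grouped, plus a rules table that
# says which column to group on (illustrative, not the Spitzer schema).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE images (image_id INTEGER, channel TEXT, aor_key INTEGER);
CREATE TABLE ensemble_rules (rule_id INTEGER, group_by_column TEXT);
INSERT INTO images VALUES (1,'IRAC1',100),(2,'IRAC1',100),(3,'IRAC2',100);
INSERT INTO ensemble_rules VALUES (1,'channel');
""")

def build_ensembles(con, rule_id):
    # Read the grouping rule from the rules table, then apply it --
    # the query is driven by table contents, not hard-coded logic.
    (col,) = con.execute(
        "SELECT group_by_column FROM ensemble_rules WHERE rule_id=?",
        (rule_id,)).fetchone()
    rows = con.execute(
        f"SELECT {col}, group_concat(image_id) FROM images GROUP BY {col}")
    return {key: sorted(int(i) for i in ids.split(",")) for key, ids in rows}

ensembles = build_ensembles(con, 1)  # {'IRAC1': [1, 2], 'IRAC2': [3]}
```

    Updating the `ensemble_rules` row changes the grouping behavior with no change to `build_ensembles`, which is the flexibility the abstract attributes to the table-driven design.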

  12. Federated or cached searches: Providing expected performance from multiple invasive species databases

    NASA Astrophysics Data System (ADS)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-06-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
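    A back-of-the-envelope model makes the performance argument concrete. The sketch below is purely illustrative (the per-source response times are invented, not GISIN measurements): a federated search pays every remote source's latency on every query, while a cache pays the harvest cost once and serves later queries locally.

```python
# Hypothetical per-source response times in seconds (illustrative only).
SOURCES = {"db_a": 0.8, "db_b": 2.5, "db_c": 1.1}

def federated_query_cost(sources):
    # Contacting every source sequentially: cost paid on *every* query.
    return sum(sources.values())

def cached_query_cost(sources, n_queries, local_lookup=0.05):
    # One harvest of all sources, then cheap local lookups; return the
    # average cost per query once the harvest is amortized.
    harvest = sum(sources.values())
    return (harvest + n_queries * local_lookup) / n_queries

fed = federated_query_cost(SOURCES)            # 4.4 s on every query
avg_cached = cached_query_cost(SOURCES, 1000)  # ~0.054 s per query
```

    The cached design also keeps working when one remote source is down, which is the flexibility argument in the abstract.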

  13. Federated or cached searches: providing expected performance from multiple invasive species databases

    USGS Publications Warehouse

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.

  14. Development of flexible pavement database for local calibration of MEPDG : volume 1.

    DOT National Transportation Integrated Search

    2011-06-01

    The new mechanistic-empirical pavement design guide (MEPDG), based on the National Cooperative Highway Research Program (NCHRP) study 1-37A, replaces the widely used but more empirical 1993 AASHTO Guide for Design of Pavement Structures. The MEPD...

  15. Bibliographies without Tears: Bibliography-Managers Round-Up.

    ERIC Educational Resources Information Center

    Science Software Quarterly, 1984

    1984-01-01

    Reviews and compares "Sci-Mate,""Reference Manager," and "BIBLIOPHILE" software packages used for storage and retrieval tasks involving bibliographic data. Each program handles search tasks well; major differences are in the amount of flexibility in customizing the database structure, their import and export…

  16. Meta-analysis of aquatic chronic chemical toxicity data

    EPA Science Inventory

    Chronic toxicity data from the open literature and from tests submitted for pesticide registration were extracted and assembled into a database, AquaChronTox, with a flexible search interface. Data were captured at a treatment and, when available, replicate level to support conc...

  17. Layer moduli of Nebraska pavements for the new Mechanistic-Empirical Pavement Design Guide (MEPDG).

    DOT National Transportation Integrated Search

    2010-12-01

    As a step-wise implementation effort of the Mechanistic-Empirical Pavement Design Guide (MEPDG) for the design and analysis of Nebraska flexible pavement systems, this research developed a database of layer moduli: dynamic modulus, creep compl...

  18. ODG: Omics database generator - a tool for generating, querying, and analyzing multi-omics comparative databases to facilitate biological understanding.

    PubMed

    Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D

    2017-08-10

    Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate it to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that utilize published genomics data integrated with experimental data, which can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, and rapidly gives additional layers of annotation to predicted genes. In better studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user interface for configuring the data import and for querying the database. Queries can also be run from the command line, and the database can be queried directly through programming language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated-value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database.
ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or understudied species. For species for which more data are available, ODG can be used to conduct complex multi-omics, pattern-matching queries.
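    The kind of traversal query a graph database like ODG supports can be illustrated with a toy property graph. The sketch below is a minimal stand-in, not ODG's actual API or schema; the node names and edge labels are invented. The idea is that typed nodes (gene, GO term, pathway) are connected by labeled edges and queried by walking those edges rather than by relational joins.

```python
from collections import defaultdict

# Toy labeled-edge graph: node -> list of (edge_label, target) pairs.
edges = defaultdict(list)

def link(src, label, dst):
    edges[src].append((label, dst))

# Illustrative data: two genes annotated with the same GO term.
link("geneA", "annotated_with", "GO:0008150")
link("geneA", "member_of", "pathway1")
link("geneB", "annotated_with", "GO:0008150")

def neighbors(node, label):
    # Forward traversal: follow edges with a given label.
    return sorted(dst for lab, dst in edges[node] if lab == label)

def genes_sharing_annotation(term):
    # Reverse traversal: which source nodes point at this GO term?
    return sorted(src for src, out in edges.items()
                  if any(lab == "annotated_with" and dst == term
                         for lab, dst in out))

shared = genes_sharing_annotation("GO:0008150")  # ['geneA', 'geneB']
```

    A real graph database indexes both directions so such traversals stay fast at scale, but the query shape is the same.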

  19. The FoodCast research image database (FRIDa)

    PubMed Central

    Foroni, Francesco; Pergola, Giulio; Argiris, Georgette; Rumiati, Raffaella I.

    2013-01-01

    In recent years we have witnessed an increasing interest in food processing and eating behaviors. This is probably due to several reasons: the biological relevance of food choices, the complexity of the food-rich environment in which we presently live (making food-intake regulation difficult), and the increasing health care costs due to illnesses associated with food (food hazards, food contamination, and aberrant food intake). Despite the importance of the issues and the relevance of this research, comprehensive and validated databases of stimuli are rather limited, outdated, or not available for non-commercial purposes to independent researchers who aim at developing their own research program. The FoodCast Research Image Database (FRIDa) we present here includes 877 images belonging to eight different categories: natural-food (e.g., strawberry), transformed-food (e.g., french fries), rotten-food (e.g., moldy banana), natural-non-food items (e.g., pinecone), artificial food-related objects (e.g., teacup), artificial objects (e.g., guitar), animals (e.g., camel), and scenes (e.g., airport). FRIDa has been validated on a sample of healthy participants (N = 73) on standard variables (e.g., valence, familiarity, etc.) as well as on other variables specifically related to food items (e.g., perceived calorie content); it also includes data on the visual features of the stimuli (e.g., brightness, high frequency power, etc.). FRIDa is a well-controlled, flexible, validated, and freely available (http://foodcast.sissa.it/neuroscience/) tool for researchers in a wide range of academic fields and industry. PMID:23459781

  20. Database architectures for Space Telescope Science Institute

    NASA Astrophysics Data System (ADS)

    Lubow, Stephen

    1993-08-01

    At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
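    The Universal Relation idea sketched in the abstract can be illustrated with an ordinary SQL view: a translation layer presents the joined detailed schema as a single wide table, so the user needs no knowledge of the joins. The schema below is invented for illustration and is not the actual STScI/DADS schema; sqlite3 stands in for the vendor DBMS.

```python
import sqlite3

# Invented two-table 'detailed schema' plus a single-table user view.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE proposals (prop_id INTEGER PRIMARY KEY, pi TEXT);
CREATE TABLE exposures (exp_id INTEGER, prop_id INTEGER, target TEXT);
INSERT INTO proposals VALUES (1,'Smith'),(2,'Jones');
INSERT INTO exposures VALUES (10,1,'M31'),(11,1,'M33'),(12,2,'M31');
-- The translation layer: one view presents the whole schema as a table.
CREATE VIEW universal AS
  SELECT e.exp_id, e.target, p.pi
  FROM exposures e JOIN proposals p USING (prop_id);
""")

# An ad hoc user query written flat against 'universal', join-free.
rows = con.execute(
    "SELECT exp_id, pi FROM universal WHERE target='M31' ORDER BY exp_id"
).fetchall()  # [(10, 'Smith'), (12, 'Jones')]
```

    The subtle issues the abstract mentions arise because a single view must pick one join path through the schema; when several paths are semantically reasonable, the translation layer has to choose for the user.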

  1. IMAT graphics manual

    NASA Technical Reports Server (NTRS)

    Stockwell, Alan E.; Cooper, Paul A.

    1991-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu driven executive system coupled with a relational database which links commercial structures, structural dynamics and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.

  2. AQUIS: A PC-based air inventory and permit manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1992-01-01

    The Air Quality Utility Information System (AQUIS) was developed to calculate and track sources, emissions, stacks, permits, and related information. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at nine US Air Force facilities that have up to 1,000 sources. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to six criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  3. Extending the temporal context of ethnobotanical databases: the case study of the Campania region (southern Italy)

    PubMed Central

    De Natale, Antonino; Pezzatti, Gianni Boris; Pollio, Antonino

    2009-01-01

    Background Ethnobotanical studies generally describe the traditional knowledge of a territory according to a "hic et nunc" principle. The need of approaching this field also embedding historical data has been frequently acknowledged. With their long history of civilization, some regions of the Mediterranean basin seem to be particularly suited for an historical approach to be adopted. Campania, a region of southern Italy, has been selected for a database implementation containing present and past information on plant uses. Methods A relational database has been built on the basis of information gathered from different historical sources, including diaries, travel accounts, and treatises on medicinal plants, written by explorers, botanists, and physicians who travelled in Campania during the last three centuries. Moreover, ethnobotanical uses described in historical herbal collections and in Ancient and Medieval texts from the Mediterranean Region have been included in the database. Results 1672 different uses, ranging from medicinal to alimentary, ceremonial, and veterinary, have been recorded for 474 species listed in the database. Information is not uniformly spread over the Campanian territory; Sannio being the most studied geographical area and Cilento the least studied. About 50 plants have been continuously used in the last three centuries in the cure of the same affections. A comparison with the uses reported for the same species in Ancient treatises shows that the origin of present ethnomedicine from old learned medical doctrines needs a case-by-case confirmation. Conclusion The database is flexible enough to represent a useful tool for researchers who need to store and compare present and previous ethnobotanical uses from Mediterranean Countries. PMID:19228384

  4. Atlas - a data warehouse for integrative bioinformatics.

    PubMed

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire M S; Ling, John; Ouellette, B F Francis

    2005-02-21

    We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. 
First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: http://bioinformatics.ubc.ca/atlas/
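    The loader/toolbox division of labor described above can be sketched with a minimal warehouse. This is a hedged illustration, not Atlas's actual schema or API: loader code maps each source's records onto one common relational model, and a small retrieval function hides the SQL from end users, which is the first of the two integration levels the abstract names.

```python
import sqlite3

# One shared relational model for sequence records from any source
# (field names are illustrative, not the Atlas schema).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE sequence
               (acc TEXT, source TEXT, organism TEXT, seq TEXT)""")

def load_records(con, source, records):
    # A 'loader application': parse one source's records into the
    # common model, tagging each row with its provenance.
    con.executemany(
        "INSERT INTO sequence VALUES (?,?,?,?)",
        [(r["acc"], source, r["organism"], r["seq"]) for r in records])

def get_by_organism(con, organism):
    # A 'toolbox' retrieval call: end users never write SQL themselves.
    return sorted(acc for (acc,) in con.execute(
        "SELECT acc FROM sequence WHERE organism=?", (organism,)))

load_records(con, "genbank", [{"acc": "U00096", "organism": "E. coli",
                               "seq": "AGCT"}])
load_records(con, "refseq", [{"acc": "NC_000913", "organism": "E. coli",
                              "seq": "AGCT"}])
accs = get_by_organism(con, "E. coli")  # ['NC_000913', 'U00096']
```

    Because both sources land in the same table, one query integrates them; that is what lets a warehouse answer cross-source questions that no single upstream database could.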

  5. Strabo: An App and Database for Structural Geology and Tectonics Data

    NASA Astrophysics Data System (ADS)

    Newman, J.; Williams, R. T.; Tikoff, B.; Walker, J. D.; Good, J.; Michels, Z. D.; Ash, J.

    2016-12-01

    Strabo is a data system designed to facilitate digital storage and sharing of structural geology and tectonics data. The data system allows researchers to store and share field and laboratory data as well as construct new multi-disciplinary data sets. Strabo is built on graph database technology, as opposed to a relational database, which provides the flexibility to define relationships between objects of any type. This framework allows observations to be linked in a complex and hierarchical manner that is not possible in traditional database topologies. Thus, the advantage of the Strabo data structure is the ability of graph databases to link objects in both numerous and complex ways, in a manner that more accurately reflects the realities of the collecting and organizing of geological data sets. The data system is accessible via a mobile interface (iOS and Android devices) that allows these data to be stored, visualized, and shared during primary collection in the field or the laboratory. The Strabo Data System is underlain by the concept of a "Spot," which we define as any observation that characterizes a specific area. This can be anything from a strike and dip measurement of bedding to cross-cutting relationships between faults in complex dissected terrains. Each of these Spots can then contain other Spots and/or measurements (e.g., lithology, slickenlines, displacement magnitude). Hence, the Spot concept is applicable to all relationships and observation sets. Strabo is therefore capable of quantifying and digitally storing large spatial variations and complex geometries of naturally deformed rocks within hierarchically related maps and images. These approaches provide an observational fidelity comparable to a traditional field book, but with the added benefits of digital data storage, processing, and ease of sharing. This approach allows Strabo to integrate seamlessly into the workflow of most geologists.
Future efforts will focus on extending Strabo to other sub-disciplines as well as developing a desktop system for the enhanced collection and organization of microstructural data.
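    The nestable "Spot" concept lends itself to a simple recursive data structure. The sketch below is an illustrative model only, not Strabo's actual schema; the field names and measurement values are invented. Each Spot carries its own measurements and may contain child Spots, so a whole outcrop flattens into one queryable collection.

```python
from dataclasses import dataclass, field

@dataclass
class Spot:
    """Any observation characterizing an area; may nest other Spots."""
    name: str
    measurements: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def all_measurements(self):
        # Flatten this Spot and everything nested inside it.
        out = {self.name: self.measurements}
        for child in self.children:
            out.update(child.all_measurements())
        return out

# Invented example: an outcrop Spot containing two observation Spots.
outcrop = Spot("outcrop-1", {"lithology": "granite"})
bedding = Spot("bedding-1", {"strike": 45, "dip": 30})
fault = Spot("fault-1", {"slickenline_rake": 80})
outcrop.children += [bedding, fault]

flat = outcrop.all_measurements()
```

    Because any Spot can contain any other, the same structure represents a strike-and-dip reading, a mapped fault trace, or a thin-section image with sub-Spots marked on it.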

  6. Atlas – a data warehouse for integrative bioinformatics

    PubMed Central

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire MS; Ling, John; Ouellette, BF Francis

    2005-01-01

    Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. 
    First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: http://bioinformatics.ubc.ca/atlas/ PMID:15723693

  7. 78 FR 1562 - Improving Government Regulations; Unified Agenda of Federal Regulatory and Deregulatory Actions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-08

    ... statutory administration requirements as required. Starting with the fall 2007 edition, the Internet became... Agenda database. Because publication in the Federal Register is mandated for the regulatory flexibility.... Michael L. Rhodes, Director, Administration and Management. Defense Acquisition Regulations Council...

  8. Constructing compact and effective graphs for recommender systems via node and edge aggregations

    DOE PAGES

    Lee, Sangkeun; Kahng, Minsuk; Lee, Sang-goo

    2014-12-10

    Exploiting graphs for recommender systems has great potential to flexibly incorporate heterogeneous information for producing better recommendation results. As our baseline approach, we first introduce a naive graph-based recommendation method, which operates with a heterogeneous log-metadata graph constructed from user log and content metadata databases. Although the naive graph-based recommendation method is simple, it allows us to take advantage of heterogeneous information and shows promising flexibility and recommendation accuracy. However, it often leads to extensive processing time due to the sheer size of the graphs constructed from entire user log and content metadata databases. In this paper, we propose node and edge aggregation approaches to constructing compact and effective graphs called Factor-Item bipartite graphs by aggregating nodes and edges of a log-metadata graph. Furthermore, experimental results using real world datasets indicate that our approach can significantly reduce the size of graphs exploited for recommender systems without sacrificing the recommendation quality.
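    The aggregation idea can be sketched on a toy log: log edges pointing at items are merged under each item's metadata factor, yielding a small weighted factor-to-item bipartite graph. The data and names below are invented for illustration and do not reproduce the paper's datasets or exact construction.

```python
from collections import Counter

# Invented user log (user, item) and item metadata (item -> factor).
plays = [("u1", "song1"), ("u1", "song2"), ("u2", "song1"), ("u2", "song3")]
genre = {"song1": "rock", "song2": "rock", "song3": "jazz"}

def factor_item_graph(plays, item_factor):
    # Aggregate the log edges of each item under its factor node; the
    # edge weight counts how many original log edges were merged.
    weights = Counter()
    for user, item in plays:
        weights[(item_factor[item], item)] += 1
    return dict(weights)

g = factor_item_graph(plays, genre)
# {('rock', 'song1'): 2, ('rock', 'song2'): 1, ('jazz', 'song3'): 1}
```

    The aggregated graph drops the (potentially huge) user nodes while keeping, in the edge weights, the popularity signal the recommender needs.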

  9. Conformational flexibility of two RNA trimers explored by computational tools and database search.

    PubMed

    Fadrná, Eva; Koca, Jaroslav

    2003-04-01

    Two RNA sequences, AAA and AUG, were studied by the conformational search program CICADA and by molecular dynamics (MD) in the framework of the AMBER force field, and also via a thorough PDB database search. CICADA was used to provide detailed information about conformers and conformational interconversions on the energy surfaces of the above molecules. Several conformational families were found for both sequences. Analysis of the results shows differences, especially between the energy of the single families, and also in flexibility and concerted conformational movement. Therefore, several MD trajectories (altogether 16 ns) were run to obtain more details about both the stability of conformers belonging to different conformational families and about the dynamics of the two systems. Results show that the trajectories strongly depend on the starting structure. When the MD start from the global minimum found by CICADA, they provide a stable run, while MD starting from another conformational family generates a trajectory where several different conformational families are visited. The results obtained by theoretical methods are compared with the thorough database search data. It is concluded that all except for the highest energy conformational families found in the theoretical results also appear in experimental data. Registry numbers: adenylyl-(3' --> 5')-adenylyl-(3' --> 5')-adenosine [917-44-2] adenylyl-(3' --> 5')-uridylyl-(3' --> 5')-guanosine [3494-35-7].

  10. EnsMart: A Generic System for Fast and Flexible Access to Biological Data

    PubMed Central

    Kasprzyk, Arek; Keefe, Damian; Smedley, Damian; London, Darin; Spooner, William; Melsopp, Craig; Hammond, Martin; Rocca-Serra, Philippe; Cox, Tony; Birney, Ewan

    2004-01-01

    The EnsMart system (www.ensembl.org/EnsMart) provides a generic data warehousing solution for fast and flexible querying of large biological data sets and integration with third-party data and tools. The system consists of a query-optimized database and interactive, user-friendly interfaces. EnsMart has been applied to Ensembl, where it extends its genomic browser capabilities, facilitating rapid retrieval of customized data sets. A wide variety of complex queries, on various types of annotations, for numerous species are supported. These can be applied to many research problems, ranging from SNP selection for candidate gene screening, through cross-species evolutionary comparisons, to microarray annotation. Users can group and refine biological data according to many criteria, including cross-species analyses, disease links, sequence variations, and expression patterns. Both tabulated list data and biological sequence output can be generated dynamically, in HTML, text, Microsoft Excel, and compressed formats. A wide range of sequence types, such as cDNA, peptides, coding regions, UTRs, and exons, with additional upstream and downstream regions, can be retrieved. The EnsMart database can be accessed via a public Web site, or through a Java application suite. Both implementations and the database are freely available for local installation, and can be extended or adapted to `non-Ensembl' data sets. PMID:14707178

  11. Updating the 2001 National Land Cover Database Impervious Surface Products to 2006 using Landsat imagery change detection methods

    USGS Publications Warehouse

    Xian, George; Homer, Collin G.

    2010-01-01

    A prototype method was developed to update the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 to a nominal date of 2006. NLCD 2001 is widely used as a baseline for national land cover and impervious cover conditions. To enable the updating of this database in an optimal manner, methods are designed to be accomplished by individual Landsat scene. Using conservative change thresholds based on land cover classes, areas of change and no-change were segregated from change vectors calculated from normalized Landsat scenes from 2001 and 2006. By sampling from NLCD 2001 impervious surface in unchanged areas, impervious surface predictions were estimated for changed areas within an urban extent defined by a companion land cover classification. Methods were developed and tested for national application across six study sites containing a variety of urban impervious surface. Results show the vast majority of impervious surface change associated with urban development was captured, with overall RMSE from 6.86 to 13.12% for these areas. Changes of urban development density were also evaluated by characterizing the categories of change by percentile for impervious surface. This prototype method provides a relatively low cost, flexible approach to generate updated impervious surface using NLCD 2001 as the baseline.
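The change-vector step can be illustrated with a toy single-band example: difference the normalized 2001 and 2006 values per pixel and flag pixels whose change magnitude exceeds a conservative threshold. A simplified sketch; the actual method applies class-specific thresholds across full Landsat scenes:

```python
def classify_change(band_2001, band_2006, threshold):
    """Flag pixels whose per-pixel change magnitude between two
    normalized image dates exceeds the threshold (single-band toy)."""
    return [abs(b - a) > threshold for a, b in zip(band_2001, band_2006)]

# The middle pixel changed substantially between dates; the others did not.
flags = classify_change([0.20, 0.50, 0.30], [0.22, 0.10, 0.31], 0.1)
# flags == [False, True, False]
```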

  12. PDB-Ligand: a ligand database based on PDB for the automated and customized classification of ligand-binding structures.

    PubMed

    Shin, Jae-Min; Cho, Doo-Ho

    2005-01-01

    PDB-Ligand (http://www.idrtech.com/PDB-Ligand/) is a three-dimensional structure database of small molecular ligands that are bound to larger biomolecules deposited in the Protein Data Bank (PDB). It is also a database tool that allows one to browse, classify, superimpose and visualize these structures. As of May 2004, there are about 4870 types of small molecular ligands, experimentally determined as a complex with protein or DNA in the PDB. The proteins that a given ligand binds are often homologous and present the same binding structure to the ligand. However, there are also many instances wherein a given ligand binds to two or more unrelated proteins, or to the same or homologous protein in different binding environments. PDB-Ligand serves as an interactive structural analysis and clustering tool for all the ligand-binding structures in the PDB. PDB-Ligand also provides an easier way to obtain a number of different structure alignments of many related ligand-binding structures based on a simple and flexible ligand clustering method. PDB-Ligand will be a good resource for both a better interpretation of ligand-binding structures and the development of better scoring functions to be used in many drug discovery applications.

  13. Active in-database processing to support ambient assisted living systems.

    PubMed

    de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas

    2014-08-12

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
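The core mechanism, a database trigger that reacts to incoming sensor data and records a derived event inside the DBMS itself, can be sketched with SQLite. Table names and the bed-exit condition are illustrative, not the authors' schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sensor_readings (sensor TEXT, value INTEGER);
    CREATE TABLE events (description TEXT);
    -- Active-database behavior: the trigger fires inside the DBMS,
    -- so raw sensor data never has to leave the database.
    CREATE TRIGGER bed_exit AFTER INSERT ON sensor_readings
    WHEN NEW.sensor = 'bed_pressure' AND NEW.value = 0
    BEGIN
        INSERT INTO events VALUES ('possible bed exit');
    END;
""")
conn.execute("INSERT INTO sensor_readings VALUES ('bed_pressure', 0)")
events = [row[0] for row in conn.execute("SELECT description FROM events")]
# events == ['possible bed exit']
```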

  14. Active In-Database Processing to Support Ambient Assisted Living Systems

    PubMed Central

    de Morais, Wagner O.; Lundström, Jens; Wickström, Nicholas

    2014-01-01

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare. PMID:25120164

  15. Expert database system for quality control

    NASA Astrophysics Data System (ADS)

    Wang, Anne J.; Li, Zhi-Cheng

    1993-09-01

    There are more competitors today. Markets are not homogeneous; they are fragmented into increasingly focused niches requiring greater flexibility in the product mix, shorter manufacturing production runs and, above all, higher quality. In this paper the authors identify a real-time expert system as a way to improve plantwide quality management. The quality control expert database system (QCEDS), by integrating the knowledge of experts in operations, quality management, and computer systems, uses all information relevant to quality management, facts as well as rules, to determine whether a product meets quality standards. Keywords: expert system, quality control, database.

  16. Testing the Use of Implicit Solvent in the Molecular Dynamics Modelling of DNA Flexibility

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Harris, S.

    DNA flexibility controls packaging, looping and in some cases sequence specific protein binding. Molecular dynamics simulations carried out with a computationally efficient implicit solvent model are potentially a powerful tool for studying larger DNA molecules than can be currently simulated when water and counterions are represented explicitly. In this work we compare DNA flexibility at the base pair step level modelled using an implicit solvent model to that previously determined from explicit solvent simulations and database analysis. Although much of the sequence dependent behaviour is preserved in implicit solvent, the DNA is considerably more flexible when the approximate model is used. In addition we test the ability of the implicit solvent to model stress induced DNA disruptions by simulating a series of DNA minicircle topoisomers which vary in size and superhelical density. When compared with previously run explicit solvent simulations, we find that while the levels of DNA denaturation are similar using both computational methodologies, the specific structural form of the disruptions is different.

  17. Age and health jointly moderate the influence of flexible work arrangements on work engagement: Evidence from two empirical studies.

    PubMed

    Rudolph, Cort W; Baltes, Boris B

    2017-01-01

    Research and theory support the notion that flexible work arrangements (i.e., job resources in the form of formal policies that allow employees the latitude to manage when, where, and how they work) can have a positive influence on various outcomes that are valued both by organizations and their constituents. In the present study, we integrate propositions from various theoretical perspectives to investigate how flexible work arrangements influence work engagement. Then, in 2 studies we test this association and model the influence of different conceptualizations of health and age as joint moderators of this relationship. Study 1 focuses on functional health and chronological age in an age-diverse sample, whereas study 2 focuses on health symptom severity and subjective age in a sample of older workers. In both studies, we demonstrate that the influence of flexible work arrangements on work engagement is contingent upon age and health. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Science returns of flexible scheduling on UKIRT and the JCMT

    NASA Astrophysics Data System (ADS)

    Adamson, Andrew J.; Tilanus, Remo P.; Buckle, Jane; Davis, Gary R.; Economou, Frossie; Jenness, Tim; Delorey, K.

    2004-09-01

    The Joint Astronomy Centre operates two telescopes at the Mauna Kea Observatory: the James Clerk Maxwell Telescope, operating in the submillimetre, and the United Kingdom Infrared Telescope, operating in the near and thermal infrared. Both wavelength regimes benefit from the ability to schedule observations flexibly according to observing conditions, albeit via somewhat different "site quality" criteria. Both UKIRT and JCMT now operate completely flexible schedules. These operations are based on telescope hardware which can quickly switch between observing modes, and on a comprehensive suite of software (ORAC/OMP) which handles observing preparation by remote PIs, observation submission into the summit database, conditions-based programme selection at the summit, pipeline data reduction for all observing modes, and instant data quality feedback to the PI who may or may not be remote from the telescope. This paper describes the flexible scheduling model and presents science statistics for the first complete year of UKIRT and JCMT observing under the combined system.

  19. epiPATH: an information system for the storage and management of molecular epidemiology data from infectious pathogens.

    PubMed

    Amadoz, Alicia; González-Candelas, Fernando

    2007-04-20

    Most research scientists working in the fields of molecular epidemiology, population and evolutionary genetics are confronted with the management of large volumes of data. Moreover, the data used in studies of infectious diseases are complex and usually derive from different institutions such as hospitals or laboratories. Since no public database scheme incorporating clinical and epidemiological information about patients and molecular information about pathogens is currently available, we have developed an information system, composed of a main database and a web-based interface, which integrates both types of data and satisfies requirements of good organization, simple accessibility, data security and multi-user support. From the moment a patient arrives at a hospital or health centre until the processing and analysis of molecular sequences obtained from infectious pathogens in the laboratory, a great deal of information is collected from different sources. We have divided the most relevant data into 12 conceptual modules around which we have organized the database schema. Our schema is comprehensive, covering many aspects of sample sources, samples, laboratory processes, molecular sequences, phylogenetic results, clinical tests and results, clinical information, treatments, pathogens, transmissions, outbreaks and bibliographic information. Communication between end-users and the selected Relational Database Management System (RDBMS) is carried out by default through a command-line window or through a user-friendly, web-based interface which provides access and management tools for the data. epiPATH is an information system for managing clinical and molecular information from infectious diseases. It facilitates daily work related to infectious pathogens and sequences obtained from them. This software is intended for local installation in order to safeguard private data and provides advanced SQL users with the flexibility to adapt it to their needs.
The database schema, tool scripts and web-based interface are free software but data stored in our database server are not publicly available. epiPATH is distributed under the terms of GNU General Public License. More details about epiPATH can be found at http://genevo.uv.es/epipath.

  20. Evaluating a NoSQL Alternative for Chilean Virtual Observatory Services

    NASA Astrophysics Data System (ADS)

    Antognini, J.; Araya, M.; Solar, M.; Valenzuela, C.; Lira, F.

    2015-09-01

    Currently, the standards and protocols for data access in the Virtual Observatory architecture (DAL) are generally implemented with relational databases based on SQL. In particular, the Astronomical Data Query Language (ADQL), the language used by the IVOA to represent queries to VO services, was created to satisfy the different data access protocols, such as Simple Cone Search. ADQL is based on SQL92 and has extra functionality implemented using PgSphere. An emerging alternative to SQL is the family of so-called NoSQL databases, which can be classified into several categories, such as Column, Document, Key-Value, Graph, and Object, each one recommended for different scenarios. Among their notable characteristics are schema-free design, easy replication support, simple APIs, and suitability for Big Data. The Chilean Virtual Observatory (ChiVO) is developing a functional prototype based on the IVOA architecture, with the following relevant factors: performance, scalability, flexibility, complexity, and functionality. Currently, it is very difficult to compare these factors due to a lack of alternatives. The objective of this paper is to compare NoSQL alternatives with SQL through the implementation of a REST Web API that satisfies ChiVO's needs: a SESAME-style name resolver for the data from ALMA. Therefore, we propose a test scenario by configuring a NoSQL database with data from different sources and evaluating the feasibility of creating a Simple Cone Search service and its performance. This comparison will help pave the way for the application of Big Data databases in the Virtual Observatory.
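Whatever the backend, a Simple Cone Search service ultimately reduces to filtering a catalogue by angular separation from a search centre. A minimal sketch, independent of the SQL/NoSQL choice the paper evaluates (catalogue contents are illustrative):

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees via the haversine formula."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def cone_search(catalog, ra, dec, radius):
    """Return names of (name, ra, dec) rows within `radius` degrees."""
    return [name for name, r, d in catalog
            if angular_separation(ra, dec, r, d) <= radius]

catalog = [("src1", 10.0, -30.0), ("src2", 10.1, -30.05), ("src3", 50.0, 20.0)]
matches = cone_search(catalog, 10.0, -30.0, 0.5)
# matches == ['src1', 'src2']
```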

  1. Designing an international industrial hygiene database of exposures among workers in the asphalt industry.

    PubMed

    Burstyn, I; Kromhout, H; Cruise, P J; Brennan, P

    2000-01-01

    The objective of this project was to construct a database of exposure measurements which would be used to retrospectively assess the intensity of various exposures in an epidemiological study of cancer risk among asphalt workers. The database was developed as a stand-alone Microsoft Access 2.0 application, which could work in each of the national centres. Exposure data included in the database comprised measurements of exposure levels, plus supplementary information on production characteristics which was analogous to that used to describe companies enrolled in the study. The database has been successfully implemented in eight countries, demonstrating the flexibility and data security features adequate to the task. The database allowed retrieval and consistent coding of 38 data sets of which 34 have never been described in peer-reviewed scientific literature. We were able to collect most of the data intended. As of February 1999 the database consisted of 2007 sets of measurements from persons or locations. The measurements appeared to be free from any obvious bias. The methodology embodied in the creation of the database can be usefully employed to develop exposure assessment tools in epidemiological studies.

  2. BIOFRAG – a new database for analyzing BIOdiversity responses to forest FRAGmentation

    PubMed Central

    Pfeifer, Marion; Lefebvre, Veronique; Gardner, Toby A; Arroyo-Rodriguez, Victor; Baeten, Lander; Banks-Leite, Cristina; Barlow, Jos; Betts, Matthew G; Brunet, Joerg; Cerezo, Alexis; Cisneros, Laura M; Collard, Stuart; D'Cruze, Neil; da Silva Motta, Catarina; Duguay, Stephanie; Eggermont, Hilde; Eigenbrod, Felix; Hadley, Adam S; Hanson, Thor R; Hawes, Joseph E; Heartsill Scalley, Tamara; Klingbeil, Brian T; Kolb, Annette; Kormann, Urs; Kumar, Sunil; Lachat, Thibault; Lakeman Fraser, Poppy; Lantschner, Victoria; Laurance, William F; Leal, Inara R; Lens, Luc; Marsh, Charles J; Medina-Rangel, Guido F; Melles, Stephanie; Mezger, Dirk; Oldekop, Johan A; Overal, William L; Owen, Charlotte; Peres, Carlos A; Phalan, Ben; Pidgeon, Anna M; Pilia, Oriana; Possingham, Hugh P; Possingham, Max L; Raheem, Dinarzarde C; Ribeiro, Danilo B; Ribeiro Neto, Jose D; Douglas Robinson, W; Robinson, Richard; Rytwinski, Trina; Scherber, Christoph; Slade, Eleanor M; Somarriba, Eduardo; Stouffer, Philip C; Struebig, Matthew J; Tylianakis, Jason M; Tscharntke, Teja; Tyre, Andrew J; Urbina Cardona, Jose N; Vasconcelos, Heraldo L; Wearn, Oliver; Wells, Konstans; Willig, Michael R; Wood, Eric; Young, Richard P; Bradley, Andrew V; Ewers, Robert M

    2014-01-01

    Habitat fragmentation studies have produced complex results that are challenging to synthesize. Inconsistencies among studies may result from variation in the choice of landscape metrics and response variables, which is often compounded by a lack of key statistical or methodological information. Collating primary datasets on biodiversity responses to fragmentation in a consistent and flexible database permits simple data retrieval for subsequent analyses. We present a relational database that links such field data to taxonomic nomenclature, spatial and temporal plot attributes, and environmental characteristics. Field assessments include measurements of the response(s) (e.g., presence, abundance, ground cover) of one or more species linked to plots in fragments within a partially forested landscape. The database currently holds 9830 unique species recorded in plots of 58 unique landscapes in six of eight realms: mammals 315, birds 1286, herptiles 460, insects 4521, spiders 204, other arthropods 85, gastropods 70, annelids 8, platyhelminthes 4, Onychophora 2, vascular plants 2112, nonvascular plants and lichens 320, and fungi 449. Three landscapes were sampled as long-term time series (>10 years). Seven hundred and eleven species are found in two or more landscapes. Consolidating the substantial amount of primary data available on biodiversity responses to fragmentation in the context of land-use change and natural disturbances is an essential part of understanding the effects of increasing anthropogenic pressures on land. The consistent format of this database facilitates testing of generalizations concerning biologic responses to fragmentation across diverse systems and taxa. It also allows the re-examination of existing datasets with alternative landscape metrics and robust statistical methods, for example, helping to address pseudo-replication problems. The database can thus help researchers in producing broad syntheses of the effects of land use. 
The database is dynamic and inclusive, and contributions from individual and large-scale data-collection efforts are welcome. PMID:24967073

  3. Executive Functions in Children with Specific Language Impairment: A Meta-Analysis

    ERIC Educational Resources Information Center

    Pauls, Laura J.; Archibald, Lisa M. D.

    2016-01-01

    Purpose: Mounting evidence demonstrates deficits in children with specific language impairment (SLI) beyond the linguistic domain. Using meta-analysis, this study examined differences in children with and without SLI on tasks measuring inhibition and cognitive flexibility. Method: Databases were searched for articles comparing children (4-14…

  4. [Application of the life sciences platform based on oracle to biomedical informations].

    PubMed

    Zhao, Zhi-Yun; Li, Tai-Huan; Yang, Hong-Qiao

    2008-03-01

    The life sciences platform based on Oracle database technology is introduced in this paper. By providing a powerful data access, integrating a variety of data types, and managing vast quantities of data, the software presents a flexible, safe and scalable management platform for biomedical data processing.

  5. A Drug Discovery Partnership for Personalized Breast Cancer Therapy

    DTIC Science & Technology

    2012-09-01

    flexible searches of compound databases using detailed pharmacophore and CoMFA QSAR results. (Months 9-24). 1,3,8-trihydroxyanthraquinone was taken...Cytochrome P450 Inhibitors- A Study of Their Potency and Selectivity”, J. Sridhar, J. Liu, C.L.K. Stevens, and M. Foroozesh, Society of Toxicology

  6. OASIS: Prototyping Graphical Interfaces to Networked Information.

    ERIC Educational Resources Information Center

    Buckland, Michael K.; And Others

    1993-01-01

    Describes the latest modifications being made to OASIS, a front-end enhancement to the University of California's MELVYL online union catalog. Highlights include the X Windows interface; multiple database searching to act as an information network; Lisp implementation for flexible data representation; and OASIS commands and features to help…

  7. [Synthesis of 107 Workplace Literacy Programs.

    ERIC Educational Resources Information Center

    Bussert, Kathy M.

    A study examined information from 107 workplace literacy program descriptions from the United States and drew conclusions about joint partnerships, funding, and flexibility. Most of the program descriptions were found in an extensive search using the ERIC database. The programs described were from 1989 and 1990. Some of the findings were the…

  8. Get It Together: Integrating Data with XML.

    ERIC Educational Resources Information Center

    Miller, Ron

    2003-01-01

    Discusses the use of XML for data integration to move data across different platforms, including across the Internet, from a variety of sources. Topics include flexibility; standards; organizing databases; unstructured data and the use of meta tags to encode it with XML information; cost effectiveness; and eliminating client software licenses.…
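The integration pattern the article describes, exporting data from different platforms into a shared XML vocabulary and merging on a common key, can be sketched as follows (element names are illustrative):

```python
import xml.etree.ElementTree as ET

# Two sources, originally from different platforms, exported to the
# same XML vocabulary and keyed by the shared "id" attribute.
source_a = '<records><item id="1"><price>9.99</price></item></records>'
source_b = '<records><item id="1"><stock>42</stock></item></records>'

merged = {}
for doc in (source_a, source_b):
    for item in ET.fromstring(doc):
        merged.setdefault(item.get("id"), {}).update(
            {child.tag: child.text for child in item})
# merged == {'1': {'price': '9.99', 'stock': '42'}}
```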

  9. Building a multi-scaled geospatial temporal ecology database from disparate data sources: fostering open science and data reuse.

    PubMed

    Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km(2)). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. 
Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.
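One of the listed integration steps, documenting data provenance while merging heterogeneous source datasets, can be sketched in miniature (field names such as `chlorophyll` are illustrative, not LAGOS's actual schema):

```python
def integrate(sources):
    """Merge per-source lake records into one table while stamping
    each row with its originating dataset (data provenance)."""
    rows = []
    for source_name, records in sources.items():
        for lake_id, value in records:
            rows.append({"lake_id": lake_id, "chlorophyll": value,
                         "provenance": source_name})
    return rows

rows = integrate({"state_A": [(1, 3.2)], "state_B": [(2, 5.1)]})
# Each integrated row can be traced back to its source dataset.
```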

  10. Building a multi-scaled geospatial temporal ecology database from disparate data sources: Fostering open science through data reuse

    USGS Publications Warehouse

    Soranno, Patricia A.; Bissell, E.G.; Cheruvelil, Kendra S.; Christel, Samuel T.; Collins, Sarah M.; Fergus, C. Emi; Filstrup, Christopher T.; Lapierre, Jean-Francois; Lotting, Noah R.; Oliver, Samantha K.; Scott, Caren E.; Smith, Nicole J.; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A.; Gries, Corinna; Henry, Emily N.; Skaff, Nick K.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km2). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. 
Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.

  11. Cognitive flexibility and religious disbelief.

    PubMed

    Zmigrod, Leor; Rentfrow, P Jason; Zmigrod, Sharon; Robbins, Trevor W

    2018-06-11

    Cognitive flexibility is operationalized in the neuropsychological literature as the ability to shift between modes of thinking and adapt to novel or changing environments. Religious belief systems consist of strict rules and rituals that offer adherents certainty, consistency, and stability. Consequently, we hypothesized that religious adherence and practice of repetitive religious rituals may be related to the persistence versus flexibility of one's cognition. The present study investigated the extent to which tendencies towards cognitive flexibility versus persistence are related to three facets of religious life: religious affiliation, religious practice, and religious upbringing. In a large sample (N = 744), we found that religious disbelief was related to cognitive flexibility across three independent behavioural measures: the Wisconsin Card Sorting Test, Remote Associates Test, and Alternative Uses Test. Furthermore, lower frequency of religious service attendance was related to cognitive flexibility. When analysing participants' religious upbringing in relation to their current religious affiliation, it was manifest that current affiliation was more influential than religious upbringing in all the measured facets of cognitive flexibility. The findings indicate that religious affiliation and engagement may shape and be shaped by cognitive control styles towards flexibility versus persistence, highlighting the tight links between flexibility of thought and religious ideologies.

  12. An Analysis of Database Replication Technologies with Regard to Deep Space Network Application Requirements

    NASA Technical Reports Server (NTRS)

    Connell, Andrea M.

    2011-01-01

    The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.
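The conflict-resolution question raised by the Multi-Master configuration above can be illustrated with a toy handler. Multi-master replication systems commonly offer prebuilt update-conflict policies such as "latest timestamp wins"; the sketch below (plain Python, not Oracle Streams code, with invented row dictionaries) shows that policy in miniature:

```python
from datetime import datetime

def resolve_update_conflict(local_row, incoming_row, resolution_column="updated_at"):
    """Illustrative 'maximum value wins' conflict handler: the row whose
    resolution column holds the larger value is kept.  Mirrors the idea of
    prebuilt update-conflict handlers in multi-master replication; this is
    a sketch, not Oracle Streams code."""
    if incoming_row[resolution_column] >= local_row[resolution_column]:
        return incoming_row   # incoming update is newer -> apply it
    return local_row          # local version is newer -> discard incoming

# Two sites updated the same row; the 12:05 write should win at both sites.
local = {"id": 1, "value": "A", "updated_at": datetime(2011, 3, 1, 12, 0)}
incoming = {"id": 1, "value": "B", "updated_at": datetime(2011, 3, 1, 12, 5)}
winner = resolve_update_conflict(local, incoming)
```

Because the policy depends only on the column value, both sites converge on the same winner regardless of the order in which they see the updates.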

  13. Parent's Relative Perceived Work Flexibility Compared to Their Partner Is Associated With Emotional Exhaustion.

    PubMed

    Leineweber, Constanze; Falkenberg, Helena; Albrecht, Sophie C

    2018-01-01

A number of studies have found that control over work conditions and hours is positively related to mental health. Still, potential positive and negative effects of work flexibility remain to be fully explored. On the one hand, higher work flexibility might provide better opportunities for recovery. On the other hand, mothers especially may use flexibility to meet household and family demands. Here, we investigated the association between parents' work flexibility, rated relative to their partner, and emotional exhaustion in interaction with gender. Additionally, gender differences in time use were investigated. Cross-sectional analyses based on responses of employed parents to the 2012 wave of the Swedish Longitudinal Occupational Survey of Health (SLOSH) were conducted (N = 2,911). Generalized linear models with gamma distribution and a log-link function were used to investigate associations between relative work flexibility (lower, equal, or higher as compared to partner), gender, and emotional exhaustion. After controlling for potential confounders, we found that having lower work flexibility than the partner was associated with higher levels of emotional exhaustion as compared to having higher relative work flexibility. Also, being a mother was associated with higher levels of emotional exhaustion, independent of possible confounders. An interaction effect between low relative work flexibility and gender was found in relation to emotional exhaustion. Regarding time use, clear differences between mothers and fathers were found; however, few indications were found that relative work flexibility influenced time use. Mothers spent more time on household chores than fathers, while fathers reported longer working hours. Fathers spent more time on relaxation compared with mothers. To conclude, our results indicate that lower relative work flexibility is detrimental to mental health for both mothers and fathers. However, while gender seems to have a pronounced effect on time use, relative work flexibility seems to have less influence on how time is used. Generally, mothers tend to spend more time on unpaid work, while fathers spend longer hours on paid work and report more time for relaxation.

  14. Parent's Relative Perceived Work Flexibility Compared to Their Partner Is Associated With Emotional Exhaustion

    PubMed Central

    Leineweber, Constanze; Falkenberg, Helena; Albrecht, Sophie C.

    2018-01-01

A number of studies have found that control over work conditions and hours is positively related to mental health. Still, potential positive and negative effects of work flexibility remain to be fully explored. On the one hand, higher work flexibility might provide better opportunities for recovery. On the other hand, mothers especially may use flexibility to meet household and family demands. Here, we investigated the association between parents' work flexibility, rated relative to their partner, and emotional exhaustion in interaction with gender. Additionally, gender differences in time use were investigated. Cross-sectional analyses based on responses of employed parents to the 2012 wave of the Swedish Longitudinal Occupational Survey of Health (SLOSH) were conducted (N = 2,911). Generalized linear models with gamma distribution and a log-link function were used to investigate associations between relative work flexibility (lower, equal, or higher as compared to partner), gender, and emotional exhaustion. After controlling for potential confounders, we found that having lower work flexibility than the partner was associated with higher levels of emotional exhaustion as compared to having higher relative work flexibility. Also, being a mother was associated with higher levels of emotional exhaustion, independent of possible confounders. An interaction effect between low relative work flexibility and gender was found in relation to emotional exhaustion. Regarding time use, clear differences between mothers and fathers were found; however, few indications were found that relative work flexibility influenced time use. Mothers spent more time on household chores than fathers, while fathers reported longer working hours. Fathers spent more time on relaxation compared with mothers. To conclude, our results indicate that lower relative work flexibility is detrimental to mental health for both mothers and fathers. However, while gender seems to have a pronounced effect on time use, relative work flexibility seems to have less influence on how time is used. Generally, mothers tend to spend more time on unpaid work, while fathers spend longer hours on paid work and report more time for relaxation. PMID:29774006

  15. Evaluation of personal digital assistant drug information databases for the managed care pharmacist.

    PubMed

    Lowry, Colleen M; Kostka-Rokosz, Maria D; McCloskey, William W

    2003-01-01

Personal digital assistants (PDAs) are becoming a necessity for practicing pharmacists. They offer a time-saving and convenient way to obtain current drug information. Several software companies now offer general drug information databases for use on handheld computers. PDAs priced under US$200 often have limited memory capacity; therefore, the user must choose from a growing list of general drug information database options in order to maximize utility without exceeding memory capacity. This paper reviews the attributes of available general drug information software databases for the PDA. It provides information on the content, advantages, limitations, pricing, memory requirements, and accessibility of drug information software databases. Ten drug information databases were subjectively analyzed and evaluated based on information from each product's Web site, vendor Web sites, and our experience. Some of these databases have attractive auxiliary features such as kinetics calculators, disease references, drug-drug and drug-herb interaction tools, and clinical guidelines, which may make them more useful to the PDA user. Not all drug information databases are equal with regard to content, author credentials, frequency of updates, and memory requirements. The user must therefore evaluate databases for completeness, currency, and cost effectiveness before purchase. In addition, consideration should be given to the ease of use and flexibility of individual programs.

  16. Modernization and multiscale databases at the U.S. geological survey

    USGS Publications Warehouse

    Morrison, J.L.

    1992-01-01

The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. © 1992.

  17. CADB: Conformation Angles DataBase of proteins

    PubMed Central

    Sheik, S. S.; Ananthalakshmi, P.; Bhargavi, G. Ramya; Sekar, K.

    2003-01-01

Conformation Angles DataBase (CADB) provides an online resource to access data on conformation angles (both main-chain and side-chain) of protein structures in two data sets corresponding to 25% and 90% sequence identity between any two proteins, available in the Protein Data Bank. In addition, the database contains the necessary crystallographic parameters. The package has several flexible options and display facilities to visualize the main-chain and side-chain conformation angles for a particular amino acid residue. The package can also be used to study the interrelationship between the main-chain and side-chain conformation angles. A web-based Java graphics interface has been deployed to display the information of interest to the user on the client machine. The database is being updated at regular intervals and can be accessed over the World Wide Web interface at the following URL: http://144.16.71.148/cadb/. PMID:12520049

  18. Virtual Atomic and Molecular Data Center (VAMDC) and Stark-B Database

    NASA Astrophysics Data System (ADS)

    Dimitrijevic, M. S.; Sahal-Brechot, S.; Kovacevic, A.; Jevremovic, D.; Popovic, L. C.; VAMDC Consortium; Dubernet, Marie-Lise

    2012-01-01

The Virtual Atomic and Molecular Data Center (VAMDC) is a European FP7 project that aims to build a flexible and interoperable e-science interface to existing atomic and molecular data. VAMDC will be built upon the expertise of existing atomic and molecular databases, data producers, and service providers, with the specific aim of creating an infrastructure that is easily tuned to the requirements of a wide variety of users in academic, governmental, industrial, or public communities. VAMDC will also incorporate the STARK-B database, which contains Stark broadening parameters for a large number of lines, obtained by the semiclassical perturbation method over more than 30 years of collaboration between two of the authors of this work (MSD and SSB) and their co-workers. In this contribution we review the VAMDC project and the STARK-B database and discuss the benefits of both for the corresponding data users.

  19. GEOmetadb: powerful alternative search engine for the Gene Expression Omnibus

    PubMed Central

    Zhu, Yuelin; Davis, Sean; Stephens, Robert; Meltzer, Paul S.; Chen, Yidong

    2008-01-01

The NCBI Gene Expression Omnibus (GEO) represents the largest public repository of microarray data. However, finding data in GEO can be challenging. We have developed GEOmetadb in an attempt to make querying the GEO metadata both easier and more powerful. All GEO metadata records, as well as the relationships between them, are parsed and stored in a local MySQL database. A powerful, flexible web search interface with several convenient utilities provides query capabilities not available via NCBI tools. In addition, a Bioconductor package, GEOmetadb, that utilizes a SQLite export of the entire GEOmetadb database is also available, rendering the entire GEO database accessible with the full power of SQL-based queries from within R. Availability: The web interface and SQLite databases are available at http://gbnci.abcc.ncifcrf.gov/geo/. The Bioconductor package is available via the Bioconductor project. The corresponding MATLAB implementation is also available at the same website. Contact: yidong@mail.nih.gov PMID:18842599
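A local SQLite export of metadata is useful precisely because it allows arbitrary SQL. As a minimal sketch, assuming a GEOmetadb-like table of series records (the table and column names here are illustrative, not the exact GEOmetadb schema):

```python
import sqlite3

# Toy stand-in for a GEOmetadb-style SQLite export: a 'gse' table of
# series metadata.  Table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gse (gse TEXT, title TEXT, organism TEXT)")
conn.executemany(
    "INSERT INTO gse VALUES (?, ?, ?)",
    [("GSE100", "Breast cancer expression profiling", "Homo sapiens"),
     ("GSE200", "Mouse liver time course", "Mus musculus")],
)

# The point of a local mirror: free-form SQL over the metadata, e.g.
# combined organism and title filtering in a single query.
rows = conn.execute(
    "SELECT gse FROM gse WHERE organism = ? AND title LIKE ?",
    ("Homo sapiens", "%cancer%"),
).fetchall()
```

The same query pattern works unchanged from R via the Bioconductor package, since both front ends are just issuing SQL against the one SQLite file.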

  20. AQUIS: A PC-based source information manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1993-05-01

The Air Quality Utility Information System (AQUIS) was developed to calculate emissions and track them along with related information about sources, stacks, controls, and permits. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at 11 US Air Force facilities, which have up to 1,000 sources, and two headquarters. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to the criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  1. AQUIS: A PC-based source information manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1993-01-01

The Air Quality Utility Information System (AQUIS) was developed to calculate emissions and track them along with related information about sources, stacks, controls, and permits. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at 11 US Air Force facilities, which have up to 1,000 sources, and two headquarters. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to the criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  2. DASTCOM5: A Portable and Current Database of Asteroid and Comet Orbit Solutions

    NASA Astrophysics Data System (ADS)

    Giorgini, Jon D.; Chamberlin, Alan B.

    2014-11-01

A portable direct-access database containing all NASA/JPL asteroid and comet orbit solutions, with the software to access it, is available for download (ftp://ssd.jpl.nasa.gov/pub/xfr/dastcom5.zip; unzip -ao dastcom5.zip). DASTCOM5 contains the latest heliocentric IAU76/J2000 ecliptic osculating orbital elements for all known asteroids and comets as determined by a least-squares best fit to ground-based optical, spacecraft, and radar astrometric measurements. Other physical, dynamical, and covariance parameters are included when known. A total of 142 parameters per object are supported within DASTCOM5. This information is suitable for initializing high-precision numerical integrations, assessing orbit geometry, computing trajectory uncertainties and visual magnitudes, and summarizing the physical characteristics of the body. The DASTCOM5 distribution is updated as often as hourly to include newly discovered objects or orbit solution updates. It includes an ASCII index of objects that supports look-ups based on name, current or past designation, SPK ID, MPC packed designation, or record number. DASTCOM5 is the database used by the NASA/JPL Horizons ephemeris system. It is a subset exported from a larger MySQL-based relational Small-Body Database ("SBDB") maintained at JPL. The DASTCOM5 distribution is intended for programmers comfortable with UNIX/LINUX/MacOSX command-line usage who need to develop stand-alone applications. The goal of the implementation is to provide small, fast, portable, and flexible programmatic access to JPL comet and asteroid orbit solutions. The supplied software library, examples, and application programs have been verified under gfortran, Lahey, Intel, and Sun 32/64-bit Linux/UNIX FORTRAN compilers. A command-line tool ("dxlook") is provided to enable database access from shell or script environments.

  3. Using narrative inquiry to listen to the voices of adolescent mothers in relation to their use of social networking sites (SNS).

    PubMed

    Nolan, Samantha; Hendricks, Joyce; Williamson, Moira; Ferguson, Sally

    2018-03-01

This article presents a discussion highlighting the relevance and strengths of using narrative inquiry to explore experiences of social networking site (SNS) use by adolescent mothers. Narrative inquiry as a method reveals truths about holistic human experience. Knowledge gleaned from personal narratives informs nursing knowledge and clinical practice. This approach gives voice to adolescent mothers in relation to their experiences with SNS as a means of providing social support. Discussion paper. This paper draws and reflects on the author's experiences using narrative inquiry and is supported by literature and theory. The following databases were searched: CINAHL, Cochrane Library, Medline, Scopus, ERIC, ProQuest, PsychINFO, Web of Science and Health Collection (Informit). Key terms and Boolean search operators were used to broaden the search criteria. Search terms included: adolescent mother, teenage mother, "social networking sites", online, social media, Facebook, social support, social capital and information. Dates for the search were limited to January 1995-June 2017. Narrative research inherently values the individual "story" of experience. This approach facilitates rapport building and methodological flexibility with an often difficult-to-engage sample group, adolescents. Narrative inquiry reveals a deep level of insight into social networking site use by adolescent mothers. The flexibility afforded by use of a narrative approach allows for fluidity and reflexivity in the research process. © 2017 John Wiley & Sons Ltd.

  4. The relationship between active travel to school and health-related fitness in children and adolescents: a systematic review.

    PubMed

    Lubans, David R; Boreham, Colin A; Kelly, Paul; Foster, Charlie E

    2011-01-26

Active travel to school (ATS) has been identified as an important source of physical activity for youth. However, the relationship between ATS and health-related fitness (HRF) among youth remains unclear. A systematic search of seven electronic databases (EMBASE, OVID MEDLINE, PsycINFO, PubMed, Scopus, SPORTDiscus and TRIS Online) was conducted in December 2009, and studies published since 1980 were considered for inclusion. Twenty-seven articles were identified that explored the relationship between ATS and the following aspects of HRF: weight status/body composition, cardiorespiratory fitness, muscular fitness and flexibility. Forty-eight percent of the studies that examined the relationship between ATS and weight status/body composition reported significant associations; this increased to 55% once poor-quality studies were removed. Furthermore, the findings from five studies, including one longitudinal study, indicate that ATS is positively associated with cardiorespiratory fitness in youth. However, the evidence for the relationships between ATS and muscular fitness or flexibility is equivocal and limited by low study numbers. There is some evidence to suggest that ATS is associated with a healthier body composition and level of cardiorespiratory fitness among youth. Strategies to increase ATS are warranted and should be included in whole-of-school approaches to the promotion of physical activity. © 2011 Lubans et al; licensee BioMed Central Ltd.

  5. PoMaMo--a comprehensive database for potato genome data.

    PubMed

    Meyer, Svenja; Nagel, Axel; Gebhardt, Christiane

    2005-01-01

A database for potato genome data (PoMaMo, Potato Maps and More) was established. The database contains molecular maps of all twelve potato chromosomes with about 1000 mapped elements, sequence data, putative gene functions, results from BLAST analysis, SNP and InDel information from different diploid and tetraploid potato genotypes, publication references, links to other public databases like GenBank (http://www.ncbi.nlm.nih.gov/) or SGN (Solanaceae Genomics Network, http://www.sgn.cornell.edu/), etc. Flexible search and data visualization interfaces enable easy access to the data via the Internet (https://gabi.rzpd.de/PoMaMo.html). The Java servlet tool YAMB (Yet Another Map Browser) was designed to interactively display chromosomal maps. Maps can be zoomed in and out, and detailed information about mapped elements can be obtained by clicking on an element of interest. The GreenCards interface allows a text-based data search by marker, sequence or genotype name, by sequence accession number, gene function, BLAST hit or publication reference. The PoMaMo database is a comprehensive database for different potato genome data, and to date the only database containing SNP and InDel data from diploid and tetraploid potato genotypes.

  6. PoMaMo—a comprehensive database for potato genome data

    PubMed Central

    Meyer, Svenja; Nagel, Axel; Gebhardt, Christiane

    2005-01-01

A database for potato genome data (PoMaMo, Potato Maps and More) was established. The database contains molecular maps of all twelve potato chromosomes with about 1000 mapped elements, sequence data, putative gene functions, results from BLAST analysis, SNP and InDel information from different diploid and tetraploid potato genotypes, publication references, links to other public databases like GenBank (http://www.ncbi.nlm.nih.gov/) or SGN (Solanaceae Genomics Network, http://www.sgn.cornell.edu/), etc. Flexible search and data visualization interfaces enable easy access to the data via the Internet (https://gabi.rzpd.de/PoMaMo.html). The Java servlet tool YAMB (Yet Another Map Browser) was designed to interactively display chromosomal maps. Maps can be zoomed in and out, and detailed information about mapped elements can be obtained by clicking on an element of interest. The GreenCards interface allows a text-based data search by marker, sequence or genotype name, by sequence accession number, gene function, BLAST hit or publication reference. The PoMaMo database is a comprehensive database for different potato genome data, and to date the only database containing SNP and InDel data from diploid and tetraploid potato genotypes. PMID:15608284

  7. Role stress in nurses: review of related factors and strategies for moving forward.

    PubMed

    Chang, Esther M; Hancock, Karen M; Johnson, Amanda; Daly, John; Jackson, Debra

    2005-03-01

    The aim of this paper was to review the literature on factors related to role stress in nurses, and present strategies for addressing this issue based on the findings of this review while considering potential areas for development and research. Computerized databases were searched as well as hand searching of articles in order to conduct this review. This review identified multiple factors related to the experience of role stress in nurses. Role stress, in particular, work overload, has been reported as one of the main reasons for nurses leaving the workforce. This paper concludes that it is a priority to find new and innovative ways of supporting nurses in their experience of role stress. Some examples discussed in this article include use of stress education and management strategies; team-building strategies; balancing priorities; enhancing social and peer support; flexibility in work hours; protocols to deal with violence; and retention and attraction of nursing staff strategies. These strategies need to be empirically evaluated for their efficacy in reducing role stress.

  8. Development of a flexible pavement database for local calibration of the MEPDG : part 2, evaluation of ODOT SMA mixtures.

    DOT National Transportation Integrated Search

    2011-03-01

    There has been some reluctance on the part of some in Oklahoma to use SMA mixtures. There are several factors that could be involved in the slow acceptance of SMA mixtures in Oklahoma. These factors are 1) the extra expense associated with the higher...

  9. MIMO: an efficient tool for molecular interaction maps overlap

    PubMed Central

    2013-01-01

Background Molecular pathways represent an ensemble of interactions occurring among molecules within the cell and between cells. The identification of similarities between molecular pathways across organisms and functions has a critical role in understanding complex biological processes. For the inference of such novel information, the comparison of molecular pathways requires accounting for imperfect matches (flexibility) and efficiently handling complex network topologies. To date, these characteristics are only partially available in tools designed to compare molecular interaction maps. Results Our approach MIMO (Molecular Interaction Maps Overlap) addresses the first problem by allowing the introduction of gaps and mismatches between query and template pathways and permits, when necessary, supervised queries incorporating a priori biological information. It then addresses the second issue by relying directly on the rich graph topology described in the Systems Biology Markup Language (SBML) standard, and uses multidigraphs to efficiently handle multiple queries on biological graph databases. The algorithm has been successfully used here to highlight the contact points between various human pathways in the Reactome database. Conclusions MIMO offers a flexible and efficient graph-matching tool for comparing complex biological pathways. PMID:23672344
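The need to tolerate imperfect matches can be made concrete with a deliberately simplified overlap measure. The sketch below is not MIMO's gapped alignment algorithm; it merely scores two pathways, given as sets of directed interactions, by the Jaccard index of their shared edges:

```python
def pathway_overlap(query_edges, template_edges):
    """Toy overlap score between two pathways given as directed edges
    (source, target).  A crude stand-in for graph alignment: real tools
    like MIMO allow gaps and mismatches; here we only measure the
    Jaccard index of the shared edge set."""
    query, template = set(query_edges), set(template_edges)
    shared = query & template
    return len(shared) / len(query | template), shared

# Two small 'pathways': the interactions B->C and C->D occur in both.
q = [("A", "B"), ("B", "C"), ("C", "D")]
t = [("B", "C"), ("C", "D"), ("D", "E")]
score, shared = pathway_overlap(q, t)
```

Here the pathways share 2 of 4 distinct edges, so the score is 0.5; a full matcher would additionally credit near-matches reachable through gaps.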

  10. A Parametric Sizing Model for Molten Regolith Electrolysis Reactors to Produce Oxygen from Lunar Regolith

    NASA Technical Reports Server (NTRS)

    Schreiner, Samuel S.; Dominguez, Jesus A.; Sibille, Laurent; Hoffman, Jeffrey A.

    2015-01-01

We present a parametric sizing model for a Molten Regolith Electrolysis (MRE) reactor that produces oxygen and molten metals from lunar regolith. The model has a foundation of regolith material properties validated using data from Apollo samples and simulants. A multiphysics simulation of an MRE reactor is developed and leveraged to generate a vast database of reactor performance and design trends. A novel design methodology is created that utilizes this database to parametrically design an MRE reactor that 1) can sustain the required mass of molten regolith, current, and operating temperature to meet the desired oxygen production level, 2) can operate for long durations via joule-heated, cold-wall operation in which molten regolith does not touch the reactor side walls, and 3) can support a range of electrode separations to enable operational flexibility. Mass, power, and performance estimates for an MRE reactor are presented for a range of oxygen production levels. The effects of several design variables are explored, including operating temperature, regolith type/composition, batch time, and the degree of operational flexibility.

  11. AgdbNet – antigen sequence database software for bacterial typing

    PubMed Central

    Jolley, Keith A; Maiden, Martin CJ

    2006-01-01

    Background Bacterial typing schemes based on the sequences of genes encoding surface antigens require databases that provide a uniform, curated, and widely accepted nomenclature of the variants identified. Due to the differences in typing schemes, imposed by the diversity of genes targeted, creating these databases has typically required the writing of one-off code to link the database to a web interface. Here we describe agdbNet, widely applicable web database software that facilitates simultaneous BLAST querying of multiple loci using either nucleotide or peptide sequences. Results Databases are described by XML files that are parsed by a Perl CGI script. Each database can have any number of loci, which may be defined by nucleotide and/or peptide sequences. The software is currently in use on at least five public databases for the typing of Neisseria meningitidis, Campylobacter jejuni and Streptococcus equi and can be set up to query internal isolate tables or suitably-configured external isolate databases, such as those used for multilocus sequence typing. The style of the resulting website can be fully configured by modifying stylesheets and through the use of customised header and footer files that surround the output of the script. Conclusion The software provides a rapid means of setting up customised Internet antigen sequence databases. The flexible configuration options enable typing schemes with differing requirements to be accommodated. PMID:16790057
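The XML-driven design described above can be sketched in a few lines: a description file names the database and its loci, and a script parses it to know what to expose. The element and attribute names below are invented for illustration and may differ from agdbNet's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical database-description file in the spirit of agdbNet's XML
# configs; the tag and attribute names are assumptions for this sketch.
config = """
<database name="porA_typing">
  <locus id="porA_VR1" type="peptide"/>
  <locus id="porA_VR2" type="peptide"/>
  <locus id="fetA_VR" type="nucleotide"/>
</database>
"""

root = ET.fromstring(config)
# Each locus may be nucleotide- or peptide-defined; a front end would use
# this list to decide which BLAST flavour to run per locus.
loci = [(locus.get("id"), locus.get("type")) for locus in root.findall("locus")]
```

The appeal of this pattern is that adding a typing scheme means writing a new description file, not new code, which is how one script can serve several independent databases.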

  12. Use of Crystal Structure Informatics for Defining the Conformational Space Needed for Predicting Crystal Structures of Pharmaceutical Molecules.

    PubMed

    Iuzzolino, Luca; Reilly, Anthony M; McCabe, Patrick; Price, Sarah L

    2017-10-10

    Determining the range of conformations that a flexible pharmaceutical-like molecule could plausibly adopt in a crystal structure is a key to successful crystal structure prediction (CSP) studies. We aim to use conformational information from the crystal structures in the Cambridge Structural Database (CSD) to facilitate this task. The conformations produced by the CSD Conformer Generator are reduced in number by considering the underlying rotamer distributions, an analysis of changes in molecular shape, and a minimal number of molecular ab initio calculations. This method is tested for five pharmaceutical-like molecules where an extensive CSP study has already been performed. The CSD informatics-derived set of crystal structure searches generates almost all the low-energy crystal structures previously found, including all experimental structures. The workflow effectively combines information on individual torsion angles and then eliminates the combinations that are too high in energy to be found in the solid state, reducing the resources needed to cover the solid-state conformational space of a molecule. This provides insights into how the low-energy solid-state and isolated-molecule conformations are related to the properties of the individual flexible torsion angles.

  13. Critical evaluation of methods to incorporate entropy loss upon binding in high-throughput docking.

    PubMed

    Salaniwal, Sumeet; Manas, Eric S; Alvarez, Juan C; Unwalla, Rayomand J

    2007-02-01

Proper accounting of the positional/orientational/conformational entropy loss associated with protein-ligand binding is important to obtain reliable predictions of binding affinity. Herein, we critically examine two simplified statistical mechanics-based approaches, namely a constant penalty per rotor method and a more rigorous method, referred to here as the partition function-based scoring (PFS) method, to account for such entropy losses in high-throughput docking calculations. Our results on the estrogen receptor beta and dihydrofolate reductase proteins demonstrate that, while the constant penalty method over-penalizes molecules for their conformational flexibility, the PFS method behaves in a more "ΔG-like" manner by penalizing different rotors differently depending on their residual entropy in the bound state. Furthermore, in contrast to no entropic penalty or the constant penalty approximation, the PFS method does not exhibit any bias towards either rigid or flexible molecules in the hit list. Preliminary enrichment studies using a lead-like random molecular database suggest that an accurate representation of the "true" energy landscape of the protein-ligand complex is critical for reliable predictions of relative binding affinities by the PFS method. Copyright 2006 Wiley-Liss, Inc.
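The intuition behind a partition function-based penalty can be shown with the standard Gibbs entropy over Boltzmann populations, S = -R Σ p_i ln p_i: a rotor whose bound-state conformers remain nearly equi-populated retains almost R ln n of entropy, while one frozen into a single well retains almost none. This is an illustrative sketch of the underlying statistical mechanics, not the paper's actual scoring function:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def conformational_entropy(energies_kcal, T=298.15):
    """Gibbs entropy S = -R * sum(p_i * ln p_i) over Boltzmann-weighted
    conformer populations.  Illustrates why a partition-function view
    penalizes different rotors differently, unlike a constant per-rotor
    penalty.  (Sketch only; not the PFS method's implementation.)"""
    weights = [math.exp(-e / (R * T)) for e in energies_kcal]
    z = sum(weights)
    probs = [w / z for w in weights]
    return -R * sum(p * math.log(p) for p in probs if p > 0)

# Three equally accessible bound conformers: residual entropy = R ln 3.
s_flexible = conformational_entropy([0.0, 0.0, 0.0])
# One conformer far below the others: the rotor is effectively frozen.
s_frozen = conformational_entropy([0.0, 5.0, 5.0])
```

A constant-penalty scheme would charge both rotors identically; the entropy formula charges the frozen rotor nearly the full R ln 3 and the flexible one almost nothing.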

  14. Minimizing Injuries and Enhancing Performance in Golf Through Training Programs

    PubMed Central

    Meira, Erik P.; Brumitt, Jason

    2010-01-01

    Context: Golf is a popular sport, particularly in older populations. Regardless of age and skill level, golfers risk injury to the back, shoulder, wrist and hand, elbow, and knee. Because of the unique compressive, shear, rotational, and lateral bending forces created in the lumbar region during the golf swing, the primary sport-related malady experienced by amateurs and professionals is low back pain. Extrinsic and intrinsic injury risk factors have been reported in the literature. A growing body of evidence supports the prescription of strength training routines to enhance performance and reduce the risk of injury. Evidence Acquisition: Relevant studies were reviewed on golf injuries, swing mechanics, training routines, and general training program design. The following electronic databases were used to identify research relevant to this report: MEDLINE (from 1950–November 2009), CINAHL (1982–November 2009), and SPORTDiscus (1830–November 2009). Results: Injuries may be associated with lack of warm-up, poor trunk flexibility and strength, faulty swing technique, and overuse. Conclusions: Implementing a training program that includes flexibility, strength, and power training with correction of faulty swing mechanics will help the golfer reduce the likelihood of injury and improve overall performance. PMID:23015957

  15. The 2014 Nucleic Acids Research Database Issue and an updated NAR online Molecular Biology Database Collection.

    PubMed

    Fernández-Suárez, Xosé M; Rigden, Daniel J; Galperin, Michael Y

    2014-01-01

    The 2014 Nucleic Acids Research Database Issue includes descriptions of 58 new molecular biology databases and recent updates to 123 databases previously featured in NAR or other journals. For convenience, the issue is now divided into eight sections that reflect major subject categories. Among the highlights of this issue are six databases of the transcription factor binding sites in various organisms and updates on such popular databases as CAZy, Database of Genomic Variants (DGV), dbGaP, DrugBank, KEGG, miRBase, Pfam, Reactome, SEED, TCDB and UniProt. There is a strong block of structural databases, which includes, among others, the new RNA Bricks database, updates on PDBe, PDBsum, ArchDB, Gene3D, ModBase, Nucleic Acid Database and the recently revived iPfam database. An update on the NCBI's MMDB describes VAST+, an improved tool for protein structure comparison. Two articles highlight the development of the Structural Classification of Proteins (SCOP) database: one describes SCOPe, which automates assignment of new structures to the existing SCOP hierarchy; the other one describes the first version of SCOP2, with its more flexible approach to classifying protein structures. This issue also includes a collection of articles on bacterial taxonomy and metagenomics, which includes updates on the List of Prokaryotic Names with Standing in Nomenclature (LPSN), Ribosomal Database Project (RDP), the Silva/LTP project and several new metagenomics resources. The NAR online Molecular Biology Database Collection, http://www.oxfordjournals.org/nar/database/c/, has been expanded to 1552 databases. The entire Database Issue is freely available online on the Nucleic Acids Research website (http://nar.oxfordjournals.org/).

  16. Version VI of the ESTree db: an improved tool for peach transcriptome analysis

    PubMed Central

    Lazzari, Barbara; Caprera, Andrea; Vecchietti, Alberto; Merelli, Ivan; Barale, Francesca; Milanesi, Luciano; Stella, Alessandra; Pozzi, Carlo

    2008-01-01

    Background The ESTree database (db) is a collection of Prunus persica and Prunus dulcis EST sequences that in its current version encompasses 75,404 sequences from 3 almond and 19 peach libraries. Nine peach genotypes and four peach tissues are represented, from four fruit developmental stages. The aim of this work was to extend the already existing ESTree db by adding new sequences and analysis programs. Particular care was given to the implementation of the web interface, which allows querying each of the database features. Results A Perl modular pipeline is the backbone of sequence analysis in the ESTree db project. Outputs obtained during the pipeline steps are automatically arrayed into the fields of a MySQL database. Apart from standard clustering and annotation analyses, version VI of the ESTree db encompasses new tools for tandem repeat identification, annotation against genomic Rosaceae sequences, and positioning on the database of oligomer sequences that were used in a peach microarray study. Furthermore, known protein patterns and motifs were identified by comparison to PROSITE. Based on data retrieved from sequence annotation against the UniProtKB database, a script was prepared to track positions of homologous hits on the GO tree and build statistics on the ontologies distribution in GO functional categories. EST mapping data were also integrated in the database. The PHP-based web interface was upgraded and extended. The aim of the authors was to enable querying the database according to all the biological aspects that can be investigated from the analysis of data available in the ESTree db. This is achieved by allowing multiple searches on logical subsets of sequences that represent different biological situations or features. Conclusions The version VI of ESTree db offers a broad overview of peach gene expression. 
Sequence analyses results contained in the database, extensively linked to external related resources, represent a large amount of information that can be queried via the tools offered in the web interface. Flexibility and modularity of the ESTree analysis pipeline and of the web interface allowed the authors to set up similar structures for different datasets, with limited manual intervention. PMID:18387211

  17. Robust QKD-based private database queries based on alternative sequences of single-qubit measurements

    NASA Astrophysics Data System (ADS)

    Yang, YuGuang; Liu, ZhiChao; Chen, XiuBo; Zhou, YiHua; Shi, WeiMin

    2017-12-01

    Quantum channel noise may cause the user to obtain a wrong answer and thus misunderstand the database holder for existing QKD-based quantum private query (QPQ) protocols. In addition, an outside attacker may conceal his attack by exploiting the channel noise. We propose a new, robust QPQ protocol based on four-qubit decoherence-free (DF) states. In contrast to existing QPQ protocols against channel noise, only an alternative fixed sequence of single-qubit measurements is needed by the user (Alice) to measure the received DF states. This property makes it easy to implement the proposed protocol by exploiting current technologies. Moreover, to retain the advantage of flexible database queries, we reconstruct Alice's measurement operators so that Alice needs only conditioned sequences of single-qubit measurements.

  18. Creating a FIESTA (Framework for Integrated Earth Science and Technology Applications) with MagIC

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.

    2017-12-01

    The Magnetics Information Consortium (https://earthref.org/MagIC) has recently developed a containerized web application to considerably reduce the friction in contributing, exploring and combining valuable and complex datasets for the paleo-, geo- and rock magnetic scientific community. The data produced in this scientific domain are inherently hierarchical and the community's evolving approaches to this scientific workflow, from sampling to taking measurements to multiple levels of interpretations, require a large and flexible data model to adequately annotate the results and ensure reproducibility. Historically, contributing such detail in a consistent format has been prohibitively time consuming and often resulted in only publishing the highly derived interpretations. The new open-source (https://github.com/earthref/MagIC) application provides a flexible upload tool integrated with the data model to easily create a validated contribution and a powerful search interface for discovering datasets and combining them to enable transformative science. MagIC is hosted at EarthRef.org along with several interdisciplinary geoscience databases. A FIESTA (Framework for Integrated Earth Science and Technology Applications) is being created by generalizing MagIC's web application for reuse in other domains. The application relies on a single configuration document that describes the routing, data model, component settings and external services integrations. The container hosts an isomorphic Meteor JavaScript application, MongoDB database and ElasticSearch search engine. Multiple containers can be configured as microservices to serve portions of the application or rely on externally hosted MongoDB, ElasticSearch, or third-party services to efficiently scale computational demands. 
FIESTA is particularly well suited for many Earth Science disciplines with its flexible data model, mapping, account management, upload tool to private workspaces, reference metadata, image galleries, full text searches and detailed filters. EarthRef's Seamount Catalog of bathymetry and morphology data, EarthRef's Geochemical Earth Reference Model (GERM) databases, and Oregon State University's Marine and Geology Repository (http://osu-mgr.org) will benefit from custom adaptations of FIESTA.

  19. How performance-contingent reward prospect modulates cognitive control: Increased cue maintenance at the cost of decreased flexibility.

    PubMed

    Hefer, Carmen; Dreisbach, Gesine

    2017-10-01

    Growing evidence suggests that reward prospect promotes cognitive stability in terms of increased context or cue maintenance. In 3 Experiments, using different versions of the AX-continuous performance task, we investigated whether this reward effect comes at the cost of decreased cognitive flexibility. Experiment 1 shows that the reward induced increase of cue maintenance perseverates even when reward is no longer available. Experiment 2 shows that this reward effect not only survives the withdrawal of reward but also delays the adaptation to changed task conditions that make cue usage maladaptive. And finally in Experiment 3, it is shown that this reduced flexibility to adapt is observed in a more demanding modified version of the AX-continuous performance task and is even stronger under conditions of sustained reward. Taken together, all 3 Experiments thus speak to the idea that the prospect of reward increases cue maintenance and thereby cognitive stability. This increased cognitive stability however comes at the cost of decreased flexibility in terms of delayed adaptation to new reward and task conditions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. IUEAGN: A database of ultraviolet spectra of active galactic nuclei

    NASA Technical Reports Server (NTRS)

    Pike, G.; Edelson, R.; Shull, J. M.; Saken, J.

    1993-01-01

    In 13 years of operation, IUE has gathered approximately 5000 spectra of almost 600 Active Galactic Nuclei (AGN). In order to undertake AGN studies which require large amounts of data, we are consistently reducing this entire archive and creating a homogeneous, easy-to-use database. First, the spectra are extracted using the Optimal extraction algorithm. Continuum fluxes are then measured across predefined bands, and line fluxes are measured with a multi-component fit. These results, along with source information such as redshifts and positions, are placed in the IUEAGN relational database. Analysis algorithms, statistical tests, and plotting packages run within the structure, and this flexible database can accommodate future data when they are released. This archival approach has already been used to survey line and continuum variability in six bright Seyfert 1s and rapid continuum variability in 14 blazars. Among the results that could only be obtained using a large archival study is evidence that blazars show a positive correlation between degree of variability and apparent luminosity, while Seyfert 1s show an anti-correlation. This suggests that beaming dominates the ultraviolet properties for blazars, while thermal emission from an accretion disk dominates for Seyfert 1s. Our future plans include a survey of line ratios in Seyfert 1s, to be fitted with photoionization models to test the models and determine the range of temperatures, densities and ionization parameters. We will also include data from IRAS, Einstein, EXOSAT, and ground-based telescopes to measure multi-wavelength correlations and broadband spectral energy distributions.

  1. Dyadic Affective Flexibility and Emotional Inertia in Relation to Youth Psychopathology: An Integrated Model at Two Timescales.

    PubMed

    Mancini, Kathryn J; Luebbe, Aaron M

    2016-06-01

    The current review examines characteristics of temporal affective functioning at both the individual and dyadic level. Specifically, the review examines the following three research questions: (1) How are dyadic affective flexibility and emotional inertia operationalized, and are they related to youth psychopathology? (2) How are dyadic affective flexibility and emotional inertia related, and does this relation occur at micro- and meso-timescales? and (3) How do these constructs combine to predict clinical outcomes? Using the Flex3 model of socioemotional flexibility as a frame, the current study proposes that dyadic affective flexibility and emotional inertia are bidirectionally related at micro- and meso-timescales, which yields psychopathological symptoms for youth. Specific future directions for examining individual, dyadic, and cultural characteristics that may influence relations between these constructs and psychopathology are also discussed.

  2. A Couple-Based Psychological Treatment for Chronic Pain and Relationship Distress.

    PubMed

    Cano, Annmarie; Corley, Angelia M; Clark, Shannon M; Martinez, Sarah C

    2018-02-01

    Chronic pain impacts individuals with pain as well as their loved ones. Yet, there has been little attention to the social context in individual psychological treatment approaches to chronic pain management. With this need in mind, we developed a couple-based treatment, "Mindful Living and Relating," aimed at alleviating pain and suffering by promoting couples' psychological and relational flexibility skills. Currently, there is no integrative treatment that fully harnesses the power of the couple, treating both the individual with chronic pain and the spouse as two individuals who are each in need of developing greater psychological and relational flexibility to improve their own and their partners' health. Mindfulness, acceptance, and values-based action exercises were used to promote psychological flexibility. The intervention also targets relational flexibility, which we define as the ability to interact with one's partner, fully attending to the present moment, and responding empathically in a way that serves one's own and one's partner's values. To this end, the intervention also included exercises aimed at applying psychological flexibility skills to social interactions as well as emotional disclosure and empathic responding exercises to enhance relational flexibility. The case presented demonstrates that healthy coping with pain and stress may be most successful and sustainable when one is involved in a supportive relationship with someone who also practices psychological flexibility skills and when both partners use relational flexibility skills during their interactions.

  3. Pathology Collection of the Rocky Mountain Research Station

    Treesearch

    John B. Popp; John E. Lundquist

    2006-01-01

    The pathology collection located at the Rocky Mountain Research Station is fairly extensive. The oldest specimen in the collection was acquired in 1871; since then over 4,600 samples have been added. The data associated with the RMRS collection was converted from a card catalog to an electronic database, allowing greater flexibility in sorting and querying. The...

  4. Experience with ATLAS MySQL PanDA database service

    NASA Astrophysics Data System (ADS)

    Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  5. Shuttle-Data-Tape XML Translator

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
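The core idea described in this record, an XML document that declares record layouts and drives the parser, can be sketched in a few lines. This is a hypothetical illustration in the spirit of JSDTImport's configuration, not its actual format: the element names, attributes, and sample record are all invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical layout definition: each <field> names a column and
# gives its character span within a fixed-width ASCII record.
CONFIG = """
<layout record-length="20">
  <field name="id"    start="0"  end="6"/>
  <field name="value" start="6"  end="14"/>
  <field name="unit"  start="14" end="20"/>
</layout>
"""

def parse_records(config_xml, lines):
    """Parse fixed-width ASCII records according to an XML layout."""
    layout = ET.fromstring(config_xml)
    fields = [(f.get("name"), int(f.get("start")), int(f.get("end")))
              for f in layout.findall("field")]
    for line in lines:
        yield {name: line[s:e].strip() for name, s, e in fields}

records = list(parse_records(CONFIG, ["V00001  12.50 psia  "]))
print(records[0])   # {'id': 'V00001', 'value': '12.50', 'unit': 'psia'}
```

Because the layout lives in data rather than code, a new record format only requires editing the XML file, which is the flexibility the abstract attributes to the tool.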

  6. An innovative approach to capability-based emergency operations planning

    PubMed Central

    Keim, Mark E

    2013-01-01

    This paper describes the innovative use of information technology for assisting disaster planners with an easily-accessible method for writing and improving evidence-based emergency operations plans. This process is used to identify all key objectives of the emergency response according to capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow for ease of access and enhanced functionality to search, sort, and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology. PMID:28228987
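A relational structure of the kind the abstract describes, where objectives are linked to the capabilities they exercise so a plan can be searched, sorted, and filtered, might look like the following sqlite3 sketch. The table names, columns, and sample objectives are illustrative assumptions, not the paper's actual schema.

```python
import sqlite3

# Illustrative schema: response objectives keyed to capabilities.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE capability (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE objective (
    id INTEGER PRIMARY KEY,
    capability_id INTEGER REFERENCES capability(id),
    description TEXT,
    priority INTEGER
);
""")
con.executemany("INSERT INTO capability VALUES (?, ?)",
                [(1, "mass care"), (2, "public warning")])
con.executemany("INSERT INTO objective VALUES (?, ?, ?, ?)", [
    (1, 1, "Open two emergency shelters within 6 hours", 1),
    (2, 2, "Issue multilingual alerts within 30 minutes", 1),
    (3, 1, "Stage feeding supplies for 500 people", 2),
])
# Filter the plan by capability, ordered by priority.
rows = con.execute("""
    SELECT o.description FROM objective o
    JOIN capability c ON c.id = o.capability_id
    WHERE c.name = 'mass care' ORDER BY o.priority
""").fetchall()
print([r[0] for r in rows])
```

Queries like this one are what "search, sort, and filter according to user need" amounts to in relational terms: the plan's objectives become rows that any combination of predicates can slice.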

  7. An innovative approach to capability-based emergency operations planning.

    PubMed

    Keim, Mark E

    2013-01-01

    This paper describes the innovative use of information technology for assisting disaster planners with an easily-accessible method for writing and improving evidence-based emergency operations plans. This process is used to identify all key objectives of the emergency response according to capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow for ease of access and enhanced functionality to search, sort, and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology.

  8. Integration of Multidisciplinary Sensory Data:

    PubMed Central

    Miller, Perry L.; Nadkarni, Prakash; Singer, Michael; Marenco, Luis; Hines, Michael; Shepherd, Gordon

    2001-01-01

    The paper provides an overview of neuroinformatics research at Yale University being performed as part of the national Human Brain Project. This research is exploring the integration of multidisciplinary sensory data, using the olfactory system as a model domain. The neuroinformatics activities fall into three main areas: 1) building databases and related tools that support experimental olfactory research at Yale and can also serve as resources for the field as a whole, 2) using computer models (molecular models and neuronal models) to help understand data being collected experimentally and to help guide further laboratory experiments, 3) performing basic neuroinformatics research to develop new informatics technologies, including a flexible data model (EAV/CR, entity-attribute-value with classes and relationships) designed to facilitate the integration of diverse heterogeneous data within a single unifying framework. PMID:11141511
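The EAV (entity-attribute-value) pattern named in this record stores each fact as its own row instead of dedicating a column per attribute, which is what lets heterogeneous experimental records share one table. A rough stdlib sketch of the bare EAV core follows; the table, the sample entities, and the attribute names are invented for illustration and omit the "classes and relationships" extensions of EAV/CR.

```python
import sqlite3

# Minimal EAV sketch: every fact is an (entity, attribute, value) row,
# so adding a new attribute requires no schema change.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")
con.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    ("neuron-17",  "cell_type",    "mitral"),
    ("neuron-17",  "region",       "olfactory bulb"),
    ("odorant-3",  "name",         "isoamyl acetate"),
    ("odorant-3",  "carbon_count", "7"),
])

def attributes_of(entity):
    """Reassemble one entity's sparse attributes into a dict."""
    rows = con.execute(
        "SELECT attribute, value FROM eav WHERE entity = ?", (entity,))
    return dict(rows)

print(attributes_of("neuron-17"))
```

Note that a neuron and an odorant coexist in the same table despite sharing no attributes, which is exactly the kind of heterogeneous integration the abstract is after; the trade-off is that type checking and joins become the application's job.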

  9. Data base development and research and editorial support

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Life Sciences Bibliographic Data Base was created in 1981 and subsequently expanded. A systematic, professional system was developed to collect, organize, and disseminate information about scientific publications resulting from research. The data base consists of bibliographic information and hard copies of all research papers published by Life Sciences-supported investigators. Technical improvements were instituted in the database. To minimize costs, take advantage of advances in personal computer technology, and achieve maximum flexibility and control, the data base was transferred from the JSC computer to personal computers at George Washington University (GWU). GWU also performed a range of related activities such as conducting in-depth searches on a variety of subjects, retrieving scientific literature, preparing presentations, summarizing research progress, answering correspondence requiring reference support, and providing writing and editorial support.

  10. Design and Development of a Linked Open Data-Based Health Information Representation and Visualization System: Potentials and Preliminary Evaluation

    PubMed Central

    Kauppinen, Tomi; Keßler, Carsten; Fritz, Fleur

    2014-01-01

    Background Healthcare organizations around the world are challenged by pressures to reduce cost, improve coordination and outcome, and provide more with less. This requires effective planning and evidence-based practice by generating important information from available data. Thus, flexible and user-friendly ways to represent, query, and visualize health data become increasingly important. International organizations such as the World Health Organization (WHO) regularly publish vital data on priority health topics that can be utilized for public health policy and health service development. However, the data in most portals is displayed in either Excel or PDF formats, which makes information discovery and reuse difficult. Linked Open Data (LOD)—a new set of Semantic Web best-practice standards for publishing and linking heterogeneous data—can be applied to the representation and management of public level health data to alleviate such challenges. However, the technologies behind building LOD systems and their effectiveness for health data are yet to be assessed. Objective The objective of this study is to evaluate whether Linked Data technologies are potential options for health information representation, visualization, and retrieval systems development and to identify the available tools and methodologies to build Linked Data-based health information systems. Methods We used the Resource Description Framework (RDF) for data representation, Fuseki triple store for data storage, and Sgvizler for information visualization. Additionally, we integrated a SPARQL query interface for interacting with the data. We primarily used the WHO health observatory dataset to test the system. All the data were represented using RDF and interlinked with other related datasets on the Web of Data using Silk—a link discovery framework for Web of Data. A preliminary usability assessment was conducted following the System Usability Scale (SUS) method. 
Results We developed an LOD-based health information representation, querying, and visualization system by using Linked Data tools. We imported more than 20,000 HIV-related data elements on mortality, prevalence, incidence, and related variables, which are freely available from the WHO global health observatory database. Additionally, we automatically linked 5312 data elements from DBpedia, Bio2RDF, and LinkedCT using the Silk framework. The system users can retrieve and visualize health information according to their interests. For users who are not familiar with SPARQL queries, we integrated a Linked Data search engine interface to search and browse the data. We used the system to represent and store the data, facilitating flexible queries and different kinds of visualizations. The preliminary user evaluation score by public health data managers and users was 82 on the SUS usability measurement scale. The need to write queries in the interface was the main reported difficulty of LOD-based systems to the end user. Conclusions The system introduced in this article shows that current LOD technologies are a promising alternative to represent heterogeneous health data in a flexible and reusable manner so that they can serve intelligent queries, and ultimately support decision-making. However, the development of advanced text-based search engines is necessary to increase its usability especially for nontechnical users. Further research with large datasets is recommended in the future to unfold the potential of Linked Data and Semantic Web for future health information systems development. PMID:25601195
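The RDF model underlying this system represents every statement as a subject-predicate-object triple, and SPARQL retrieves data by matching patterns over those triples. The toy stdlib illustration below mimics that idea with plain tuples and a wildcard matcher; the `ex:` URIs and observation values are invented and are not drawn from the WHO observatory dataset.

```python
# Toy illustration of the RDF idea: data as subject-predicate-object
# triples, queried by pattern matching (None = variable), in the
# spirit of a SPARQL basic graph pattern.
TRIPLES = [
    ("ex:obs1", "ex:indicator", "ex:HIV_prevalence"),
    ("ex:obs1", "ex:country",   "ex:Ethiopia"),
    ("ex:obs1", "ex:value",     "1.1"),
    ("ex:obs2", "ex:indicator", "ex:HIV_prevalence"),
    ("ex:obs2", "ex:country",   "ex:Kenya"),
    ("ex:obs2", "ex:value",     "4.8"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a variable."""
    return [(ts, tp, to) for ts, tp, to in TRIPLES
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# SPARQL-style join across two patterns:
# find the values of observations about Ethiopia.
subjects = {s for s, _, _ in match(p="ex:country", o="ex:Ethiopia")}
values = [o for s, _, o in match(p="ex:value") if s in subjects]
print(values)   # ['1.1']
```

Because any dataset can be decomposed into such triples, linking it to external sources like DBpedia or Bio2RDF, as the study does with Silk, reduces to asserting additional triples whose subjects and objects point across datasets.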

  11. Design and development of a linked open data-based health information representation and visualization system: potentials and preliminary evaluation.

    PubMed

    Tilahun, Binyam; Kauppinen, Tomi; Keßler, Carsten; Fritz, Fleur

    2014-10-25

    Healthcare organizations around the world are challenged by pressures to reduce cost, improve coordination and outcome, and provide more with less. This requires effective planning and evidence-based practice by generating important information from available data. Thus, flexible and user-friendly ways to represent, query, and visualize health data become increasingly important. International organizations such as the World Health Organization (WHO) regularly publish vital data on priority health topics that can be utilized for public health policy and health service development. However, the data in most portals is displayed in either Excel or PDF formats, which makes information discovery and reuse difficult. Linked Open Data (LOD)-a new set of Semantic Web best-practice standards for publishing and linking heterogeneous data-can be applied to the representation and management of public level health data to alleviate such challenges. However, the technologies behind building LOD systems and their effectiveness for health data are yet to be assessed. The objective of this study is to evaluate whether Linked Data technologies are potential options for health information representation, visualization, and retrieval systems development and to identify the available tools and methodologies to build Linked Data-based health information systems. We used the Resource Description Framework (RDF) for data representation, Fuseki triple store for data storage, and Sgvizler for information visualization. Additionally, we integrated a SPARQL query interface for interacting with the data. We primarily used the WHO health observatory dataset to test the system. All the data were represented using RDF and interlinked with other related datasets on the Web of Data using Silk-a link discovery framework for Web of Data. A preliminary usability assessment was conducted following the System Usability Scale (SUS) method. 
We developed an LOD-based health information representation, querying, and visualization system by using Linked Data tools. We imported more than 20,000 HIV-related data elements on mortality, prevalence, incidence, and related variables, which are freely available from the WHO global health observatory database. Additionally, we automatically linked 5312 data elements from DBpedia, Bio2RDF, and LinkedCT using the Silk framework. The system users can retrieve and visualize health information according to their interests. For users who are not familiar with SPARQL queries, we integrated a Linked Data search engine interface to search and browse the data. We used the system to represent and store the data, facilitating flexible queries and different kinds of visualizations. The preliminary user evaluation score by public health data managers and users was 82 on the SUS usability measurement scale. The need to write queries in the interface was the main reported difficulty of LOD-based systems to the end user. The system introduced in this article shows that current LOD technologies are a promising alternative to represent heterogeneous health data in a flexible and reusable manner so that they can serve intelligent queries, and ultimately support decision-making. However, the development of advanced text-based search engines is necessary to increase its usability especially for nontechnical users. Further research with large datasets is recommended in the future to unfold the potential of Linked Data and Semantic Web for future health information systems development.

  12. The Relationship of the Sit and Reach Test to Criterion Measures of Hamstring and Back Flexibility in Young Females.

    ERIC Educational Resources Information Center

    Jackson, Allen W.; Baker, Alice A.

    1986-01-01

    This study tested 100 female adolescents to determine the relationships of the sit and reach test, a component of the Health Related Fitness Test, with back and hamstring flexibility. Findings indicate the sit and reach test is moderately related to hamstring flexibility but not to back and low back flexibility. (Author/MT)

  13. Thyroid Disease and Surgery in CHEER: The Nation’s Otolaryngology-Head and Neck Surgery Practice Based Network

    PubMed Central

    Parham, Kourosh; Chapurin, Nikita; Schulz, Kris; Shin, Jennifer J.; Pynnonen, Melissa A.; Witsell, David L.; Langman, Alan; Nguyen-Huynh, Anh; Ryan, Sheila E.; Vambutas, Andrea; Wolfley, Anne; Roberts, Rhonda; Lee, Walter T.

    2017-01-01

    Objectives 1) Describe thyroid-related diagnoses and procedures in CHEER across academic and community sites. 2) Compare management of malignant thyroid disease across these sites, and 3) Provide practice based data related to flexible laryngoscopy vocal fold assessment before and after thyroid surgery based on AAO-HNSF Clinical Practice Guidelines. Study Design Review of retrospective data collection (RDC) database of the CHEER network using ICD-9 and CPT codes related to thyroid conditions. Setting Multisite practice based network. Subjects and Methods There were 3,807 thyroid patients (1,392 malignant; 2,415 benign) with 10,160 unique visits identified from 1 year of patient data in the RDC. Analysis was performed for the identified cohort of patients using demographics, site characteristics, and diagnostic and procedural distribution. Results Mean number of patients with thyroid disease per site was 238 (range 23–715). In community practices, 19% of patients with thyroid disease had cancer versus 45% in the academic setting (p<0.001). While academic sites manage more cancer patients, community sites are also surgically treating thyroid cancer, and performed more procedures per cancer patient (4.2 vs. 3.5, p<0.001). Vocal fold function was assessed by flexible laryngoscopy in 34.0% of pre-operative patients and in 3.7% post-operatively. Conclusion This is the first overview of malignant and benign thyroid disease through CHEER. It shows how the RDC can be used alone and with national guidelines to inform on clinical practice patterns in academic and community sites. This demonstrates the potential for future thyroid-related studies utilizing the Otolaryngology-H&N Surgery’s practice-based research network. PMID:27371622

  14. Perceived versus used workplace flexibility in Singapore: predicting work-family fit.

    PubMed

    Jones, Blake L; Scoville, D Phillip; Hill, E Jeffrey; Childs, Geniel; Leishman, Joan M; Nally, Kathryn S

    2008-10-01

    This study examined the relationship of 2 types of workplace flexibility to work-family fit and work, personal, and marriage-family outcomes using data (N = 1,601) representative of employed persons in Singapore. We hypothesized that perceived and used workplace flexibility would be positively related to the study variables. Results derived from structural equation modeling revealed that perceived flexibility predicted work-family fit; however, used flexibility did not. Work-family fit related positively to each work, personal, and marriage-family outcome; however, workplace flexibility only predicted work and personal outcomes. Findings suggest work-family fit may be an important facilitating factor in the interface between work and family life, relating directly to marital satisfaction and satisfaction in other family relationships. Implications of these findings are discussed. Copyright 2008 APA, all rights reserved.

  15. An Evaluation of Database Solutions to Spatial Object Association

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, V S; Kurc, T; Saltz, J

    2008-06-24

    Object association is a common problem encountered in many applications. Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two datasets based on their positions in a common spatial coordinate system--one of the datasets may correspond to a catalog of objects observed over time in a multi-dimensional domain; the other dataset may consist of objects observed in a snapshot of the domain at a time point. The use of database management systems to solve the object association problem provides portability across different platforms and also greater flexibility. Increasing dataset sizes in today's applications, however, have made object association a data/compute-intensive problem that requires targeted optimizations for efficient execution. In this work, we investigate how database-based crossmatch algorithms can be deployed on different database system architectures and evaluate the deployments to understand the impact of architectural choices on crossmatch performance and associated trade-offs. We investigate the execution of two crossmatch algorithms on (1) a parallel database system with active disk style processing capabilities, (2) a high-throughput network database (MySQL Cluster), and (3) shared-nothing databases with replication. We have conducted our study in the context of a large-scale astronomy application with real use-case scenarios.

  16. The GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language), and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g., a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
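Because the map interface is a standards-compliant WMS, any client can request a rendered glacier layer with an ordinary GetMap URL. The parameter names below are the standard WMS 1.1.1 vocabulary; the endpoint and layer name are placeholders, not the actual GLIMS server configuration.

```python
from urllib.parse import urlencode

# Standard OGC WMS 1.1.1 GetMap parameters; endpoint and LAYERS value
# are placeholders for illustration.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "glacier_outlines",   # placeholder layer name
    "SRS": "EPSG:4326",             # geographic lat/lon
    "BBOX": "-180,-90,180,90",      # minx,miny,maxx,maxy
    "WIDTH": 800,
    "HEIGHT": 400,
    "FORMAT": "image/png",
}
url = "https://example.org/wms?" + urlencode(params)
print(url)
```

This interoperability is exactly what lets other sites overlay GLIMS layers on their own maps: the request format, not the implementation, is the contract.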

  17. Database integration for investigative data visualization with the Temporal Analysis System

    NASA Astrophysics Data System (ADS)

    Barth, Stephen W.

    1997-02-01

    This paper describes an effort to provide mechanisms for integration of existing law enforcement databases with the temporal analysis system (TAS) -- an application for analysis and visualization of military intelligence data. Such integration mechanisms are essential for bringing advanced military intelligence data handling software applications to bear on the analysis of data used in criminal investigations. Our approach involved applying a software application for intelligence message handling to the problem of database conversion. This application provides mechanisms for distributed processing and delivery of converted data records to an end-user application. It also provides a flexible graphic user interface for development and customization in the field.

  18. Hierarchical data security in a Query-By-Example interface for a shared database.

    PubMed

    Taylor, Merwyn

    2002-06-01

    Whenever a shared database resource, containing critical patient data, is created, protecting the contents of the database is a high priority goal. This goal can be achieved by developing a Query-By-Example (QBE) interface, designed to access a shared database, and embedding within the QBE a hierarchical security module that limits access to the data. The security module ensures that researchers working in one clinic do not get access to data from another clinic. The security can be based on a flexible taxonomy structure that allows ordinary users to access data from individual clinics and super users to access data from all clinics. All researchers submit queries through the same interface and the security module processes the taxonomy and user identifiers to limit access. Using this system, two different users with different access rights can submit the same query and get different results, thus reducing the need to create different interfaces for different clinics and access rights.
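The "same query, different results" behavior can be sketched with an in-memory database: a security layer maps each user to the clinics they may see and injects that filter before the shared query runs. Table, user, and clinic names below are invented; the real system's taxonomy is richer than this flat mapping.

```python
import sqlite3

# Toy shared database with invented rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE patients (id INTEGER, clinic TEXT)")
db.executemany("INSERT INTO patients VALUES (?, ?)",
               [(1, "clinicA"), (2, "clinicA"), (3, "clinicB")])

# Taxonomy stand-in: which clinics each user class may access.
ACCESS = {"researcher_A": ["clinicA"], "super_user": ["clinicA", "clinicB"]}

def run_query(user):
    """Apply the security filter before executing the shared query."""
    allowed = ACCESS[user]
    marks = ",".join("?" * len(allowed))
    rows = db.execute(
        f"SELECT id FROM patients WHERE clinic IN ({marks}) ORDER BY id",
        allowed).fetchall()
    return [r[0] for r in rows]

print(run_query("researcher_A"))  # [1, 2]
print(run_query("super_user"))    # [1, 2, 3]
```

Both users issue the identical logical query; only the injected clinic filter differs, which is what removes the need for per-clinic interfaces.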

  19. The Brainomics/Localizer database.

    PubMed

    Papadopoulos Orfanos, Dimitri; Michel, Vincent; Schwartz, Yannick; Pinel, Philippe; Moreno, Antonio; Le Bihan, Denis; Frouin, Vincent

    2017-01-01

    The Brainomics/Localizer database exposes part of the data collected by the in-house Localizer project, which planned to acquire four types of data from volunteer research subjects: anatomical MRI scans, functional MRI data, behavioral and demographic data, and DNA sampling. Over the years, this local project has been collecting such data from hundreds of subjects. We had selected 94 of these subjects for their complete datasets, including all four types of data, as the basis for a prior publication; the Brainomics/Localizer database publishes the data associated with these 94 subjects. Since regulatory rules prevent us from making genetic data available for download, the database serves only the anatomical MRI scans, functional MRI data, and behavioral and demographic data. To publish this set of heterogeneous data, we use dedicated software based on the open-source CubicWeb semantic web framework. Through genericity in the data model and flexibility in the display of data (web pages, CSV, JSON, XML), CubicWeb helps us expose these complex datasets in original and efficient ways. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. SeqDepot: streamlined database of biological sequences and precomputed features.

    PubMed

    Ulrich, Luke E; Zhulin, Igor B

    2014-01-15

    Assembling and/or producing integrated knowledge of sequence features continues to be an onerous and redundant task despite a large number of existing resources. We have developed SeqDepot-a novel database that focuses solely on two primary goals: (i) assimilating known primary sequences with predicted feature data and (ii) providing the most simple and straightforward means to procure and readily use this information. Access to >28.5 million sequences and 300 million features is provided through a well-documented and flexible RESTful interface that supports fetching specific data subsets, bulk queries, visualization and searching by MD5 digests or external database identifiers. We have also developed an HTML5/JavaScript web application exemplifying how to interact with SeqDepot and Perl/Python scripts for use with local processing pipelines. Freely available on the web at http://seqdepot.net/. REST access via http://seqdepot.net/api/v1. Database files and scripts may be downloaded from http://seqdepot.net/download.
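The abstract's MD5-digest lookup implies that a client can derive the query key locally from the sequence itself. The digest computation below is standard; the query URL is only a guess at how such a lookup might be phrased against the documented `api/v1` base path, not a verified SeqDepot endpoint.

```python
import hashlib

# An arbitrary example protein sequence fragment (invented input).
sequence = "MSKGEELFTGVVPILVELDGDVNGHKFSVSG"

# MD5 of the raw sequence bytes serves as a content-derived lookup key.
digest = hashlib.md5(sequence.encode("ascii")).hexdigest()

# Hypothetical query form built on the abstract's documented API base;
# the path and parameter name are assumptions for illustration only.
url = f"http://seqdepot.net/api/v1/sequences?md5={digest}"
print(digest)
```

The appeal of digest keys is that any pipeline holding only the sequence can address the database without first resolving an external identifier.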

  1. An optical scan/statistical package for clinical data management in C-L psychiatry.

    PubMed

    Hammer, J S; Strain, J J; Lyerly, M

    1993-03-01

    This paper explores aspects of the need for clinical database management systems that permit ongoing service management, measurement of the quality and appropriateness of care, databased administration of consultation liaison (C-L) services, teaching/educational observations, and research. It describes an OPTICAL SCAN databased management system that permits flexible form generation, desktop publishing, and linking of observations in multiple files. This enhanced MICRO-CARES software system--Medical Application Platform (MAP)--permits direct transfer of the data to ASCII and SAS format for mainframe manipulation of the clinical information. The director of a C-L service may now develop his or her own forms, incorporate structured instruments, or develop "branch chains" of essential data to add to the core data set without the effort and expense to reprint forms or consult with commercial vendors.

  2. Estimating Biofuel Feedstock Water Footprints Using System Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inman, Daniel; Warner, Ethan; Stright, Dana

    Increased biofuel production has prompted concerns about the environmental tradeoffs of biofuels compared to petroleum-based fuels. Biofuel production in general, and feedstock production in particular, is under increased scrutiny. Water footprinting (measuring direct and indirect water use) has been proposed as one measure to evaluate water use in the context of concerns about depleting rural water supplies through activities such as irrigation for large-scale agriculture. Water footprinting literature has often been limited in one or more key aspects: complete assessment across multiple water stocks (e.g., vadose zone, surface, and ground water stocks), geographical resolution of data, consistent representation of many feedstocks, and flexibility to perform scenario analysis. We developed a model called BioSpatial H2O using a system dynamics modeling and database framework. BioSpatial H2O could be used to consistently evaluate the complete water footprints of multiple biomass feedstocks at high geospatial resolutions. BioSpatial H2O has the flexibility to perform simultaneous scenario analysis of current and potential future crops under alternative yield and climate conditions. In this proof-of-concept paper, we modeled corn grain (Zea mays L.) and soybeans (Glycine max) under current conditions as illustrative results. BioSpatial H2O links to a unique database that houses annual spatially explicit climate, soil, and plant physiological data. Parameters from the database are used as inputs to our system dynamics model for estimating annual crop water requirements using daily time steps. Based on our review of the literature, estimated green water footprints are comparable to other modeled results, suggesting that BioSpatial H2O is computationally sound for future scenario analysis. Our modeling framework builds on previous water use analyses to provide a platform for scenario-based assessment. BioSpatial H2O's system dynamics is a flexible and user-friendly interface for on-demand, spatially explicit, water use scenario analysis for many US agricultural crops. Built-in controls permit users to quickly make modifications to the model assumptions, such as those affecting yield, and to see the implications of those results in real time. BioSpatial H2O's dynamic capabilities and adjustable climate data allow for analyses of water use and management scenarios to inform current and potential future bioenergy policies. The model could also be adapted for scenario analysis of alternative climatic conditions and comparison of multiple crops. The results of such an analysis would help identify risks associated with water use competition among feedstocks in certain regions. Results could also inform research and development efforts that seek to reduce water-related risks of biofuel pathways.
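The daily time-step structure the abstract describes is classic stock-and-flow system dynamics: a soil-moisture stock updated each day by precipitation inflow and crop-demand outflow. The sketch below shows only that skeleton; all numbers are invented, and the real model's crop physiology, climate inputs, and water-stock partitioning are far richer.

```python
# Synthetic daily inputs (mm/day), invented for illustration.
precip = [5.0, 0.0, 2.0, 0.0, 8.0]   # precipitation inflow
demand = [3.0, 3.5, 3.0, 4.0, 3.0]   # crop water requirement outflow

def simulate(soil0, precip, demand, capacity=50.0):
    """Step a soil-moisture stock one day at a time (toy water balance)."""
    soil, irrigation = soil0, 0.0
    for p, d in zip(precip, demand):
        soil += p - d
        if soil < 0:                 # deficit met by irrigation (blue water)
            irrigation += -soil
            soil = 0.0
        soil = min(soil, capacity)   # excess beyond capacity runs off
    return soil, irrigation

final, irr = simulate(10.0, precip, demand)
print(final, irr)  # 8.5 0.0
```

Summing the irrigation term over a season and a grid of locations is, in spirit, how a daily-step model rolls up into an annual, spatially explicit footprint.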

  3. The Neotoma Paleoecology Database: An International Community-Curated Resource for Paleoecological and Paleoenvironmental Data

    NASA Astrophysics Data System (ADS)

    Williams, J. W.; Grimm, E. C.; Ashworth, A. C.; Blois, J.; Charles, D. F.; Crawford, S.; Davis, E.; Goring, S. J.; Graham, R. W.; Miller, D. A.; Smith, A. J.; Stryker, M.; Uhen, M. D.

    2017-12-01

    The Neotoma Paleoecology Database supports global change research at the intersection of geology and ecology by providing a high-quality, community-curated data repository for paleoecological data. These data are widely used to study biological responses and feedbacks to past environmental change at local to global scales. The Neotoma data model is flexible and can store multiple kinds of fossil, biogeochemical, or physical variables measured from sedimentary archives. Data additions to Neotoma are growing and include >3.5 million observations, >16,000 datasets, and >8,500 sites. Dataset types include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Neotoma data can be found and retrieved in multiple ways, including the Explorer map-based interface, a RESTful Application Programming Interface, the neotoma R package, and digital object identifiers. Neotoma has partnered with the Paleobiology Database to produce a common data portal for paleobiological data, called the Earth Life Consortium. A new embargo management system is designed to allow investigators to put their data into Neotoma and then make use of Neotoma's value-added services. Neotoma's distributed scientific governance model is flexible and scalable, with many open pathways for welcoming new members, data contributors, stewards, and research communities. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.
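The flexibility claim, one data model holding pollen counts, diatoms, isotopes, and more, boils down to sites owning typed datasets whose samples carry arbitrary variables. The nested structure below is a toy rendering of that idea with invented names and values, not Neotoma's actual schema.

```python
# A site record holding datasets of different types, each with samples
# of arbitrary measured variables. All names and numbers are invented.
site = {
    "name": "Example Lake",
    "datasets": [
        {"type": "pollen",
         "samples": [{"depth_cm": 10, "Pinus": 42, "Quercus": 13}]},
        {"type": "diatom",
         "samples": [{"depth_cm": 10, "Aulacoseira": 7}]},
    ],
}

def datasets_of_type(site, dtype):
    """Retrieve all datasets of a given type from a site record."""
    return [d for d in site["datasets"] if d["type"] == dtype]

print([d["type"] for d in datasets_of_type(site, "pollen")])  # ['pollen']
```

Adding a new dataset type (say, biomarkers) needs no structural change, which is the property that lets the database absorb new proxy types over time.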

  4. The Validity of the Modified Sit-and-Reach Test in College-Age Students.

    ERIC Educational Resources Information Center

    Minkler, Sharin; Patterson, Patricia

    1994-01-01

    Reports a study that examined the criterion-related validity of the modified sit-and-reach test against criterion measures of hamstring and low back flexibility in college students. Results indicated the modified sit-and-reach test moderately related to hamstring flexibility, but its relation to low back flexibility was low. (SM)

  5. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    PubMed

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. This system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to an efficient validation of outcome assessment in drug safety database studies.

  6. PGSB/MIPS Plant Genome Information Resources and Concepts for the Analysis of Complex Grass Genomes.

    PubMed

    Spannagl, Manuel; Bader, Kai; Pfeifer, Matthias; Nussbaumer, Thomas; Mayer, Klaus F X

    2016-01-01

    PGSB (Plant Genome and Systems Biology; formerly MIPS-Munich Institute for Protein Sequences) has been involved in developing, implementing and maintaining plant genome databases for more than a decade. Genome databases and analysis resources have focused on individual genomes and aim to provide flexible and maintainable datasets for model plant genomes as a backbone against which experimental data, e.g., from high-throughput functional genomics, can be organized and analyzed. In addition, genomes from both model and crop plants form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer to explore conserved gene order (synteny) between related species on macro- and micro-levels. The genomes of many economically important Triticeae plants such as wheat, barley, and rye present a great challenge for sequence assembly and bioinformatic analysis due to their enormous complexity and large genome size. Novel concepts and strategies have been developed to deal with these difficulties and have been applied to the genomes of wheat, barley, rye, and other cereals. This includes the GenomeZipper concept, reference-guided exome assembly, and "chromosome genomics" based on flow cytometry sorted chromosomes.

  7. RigFit: a new approach to superimposing ligand molecules.

    PubMed

    Lemmen, C; Hiller, C; Lengauer, T

    1998-09-01

    If structural knowledge of a receptor under consideration is lacking, drug design approaches focus on similarity or dissimilarity analysis of putative ligands. In this context the mutual ligand superposition is of utmost importance. Methods that are rapid enough to facilitate interactive usage, that allow processing of conformer sets, and that enable database screening are of special interest here. The ability to superpose molecular fragments instead of entire molecules has proven to be helpful too. The RIGFIT approach meets these requirements and has several additional advantages. In three distinct test applications, we evaluated how closely we can approximate the observed relative orientation for a set of known crystal structures, we employed RIGFIT as a fragment placement procedure, and we performed a fragment-based database screening. The run time of RIGFIT can be traded off against its accuracy. To be competitive in accuracy with another state-of-the-art alignment tool, with which we compare our method explicitly, computing times of about 6 s per superposition on a common workstation are required. If longer run times can be afforded, the accuracy increases significantly. RIGFIT is part of the flexible superposition software FLEXS which can be accessed on the WWW [http://cartan.gmd.de/FlexS].

  8. Conformation-dependent restraints for polynucleotides: I. Clustering of the geometry of the phosphodiester group

    PubMed Central

    Kowiel, Marcin; Brzezinski, Dariusz; Jaskolski, Mariusz

    2016-01-01

    The refinement of macromolecular structures is usually aided by prior stereochemical knowledge in the form of geometrical restraints. Such restraints are also used for the flexible sugar-phosphate backbones of nucleic acids. However, recent highly accurate structural studies of DNA suggest that the phosphate bond angles may be inadequately described in the existing stereochemical dictionaries. In this paper, we analyze the bonding deformations of the phosphodiester groups in the Cambridge Structural Database, cluster the studied fragments into six conformation-related categories and propose a revised set of restraints for the O-P-O bond angles and distances. The proposed restraints have been positively validated against data from the Nucleic Acid Database and an ultrahigh-resolution Z-DNA structure in the Protein Data Bank. Additionally, the manual classification of PO4 geometry is compared with geometrical clusters automatically discovered by machine learning methods. The machine learning cluster analysis provides useful insights and a practical example for general applications of clustering algorithms for automatic discovery of hidden patterns of molecular geometry. Finally, we describe the implementation and application of a public-domain web server for automatic generation of the proposed restraints. PMID:27521371
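Clustering geometry measurements into conformation classes can be illustrated with a plain one-dimensional k-means on bond angles. The angle values below are synthetic and the real analysis clustered multi-dimensional CSD fragment geometry into six categories; this sketch only shows the mechanism by which such classes emerge from data.

```python
# Synthetic O-P-O-like bond angles (degrees) forming two visible groups.
angles = [104.1, 104.5, 103.9, 119.8, 120.3, 120.1]

def kmeans_1d(data, centers, iterations=10):
    """Plain 1-D k-means with fixed initial centers (deterministic)."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for x in data:
            nearest = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        # Recompute each center as the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans_1d(angles, centers=[100.0, 125.0])
print([round(c, 1) for c in centers])  # [104.2, 120.1]
```

Each recovered center then plays the role a dictionary value would: a restraint target for all fragments assigned to that geometry class.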

  9. Towards automated visual flexible endoscope navigation.

    PubMed

    van der Stap, Nanda; van der Heijden, Ferdinand; Broeders, Ivo A M J

    2013-10-01

    The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards a wider application of flexible endoscopes with an increasing role in complex intraluminal therapeutic procedures. The nonintuitive and nonergonomic steering mechanism now forms a barrier in the extension of flexible endoscope applications. Automating the navigation of endoscopes could be a solution for this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research. A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed. Ultimately, 26 were included. Navigation often is based on visual information, which means steering the endoscope using the images that the endoscope produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date. Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.

  10. Motor competence and health related physical fitness in youth: A systematic review.

    PubMed

    Cattuzzo, Maria Teresa; Dos Santos Henrique, Rafael; Ré, Alessandro Hervaldo Nicolai; de Oliveira, Ilana Santos; Melo, Bruno Machado; de Sousa Moura, Mariana; de Araújo, Rodrigo Cappato; Stodden, David

    2016-02-01

    This study aimed to review the scientific evidence on associations between motor competence (MC) and components of health related physical fitness (HRPF), in children and adolescents. Systematic review. Systematic search of Academic Search Premier, ERIC, PubMed, PsycInfo, Scopus, SportDiscus, and Web of Science databases was undertaken between October 2012 and December 2013. Studies examining associations between MC and HRPF components (body weight status, cardiorespiratory fitness, musculoskeletal fitness and flexibility) in healthy children and adolescents, published between 1990 and 2013, were included. Risk of bias within studies was assessed using CONSORT and STROBE guidelines. The origin, design, sample, measure of MC, measure of the HRPF, main results and statistics of the studies were analyzed and a narrative synthesis was conducted. Forty-four studies matched all criteria; 16 were classified as low risk of bias and 28 as medium risk. There is strong scientific evidence supporting an inverse association between MC and body weight status (27 out of 33 studies) and a positive association between MC and cardiorespiratory fitness (12 out of 12 studies) and musculoskeletal fitness (7 out of 11 studies). The relationship between MC and flexibility was uncertain. Considering the noted associations between various assessments of MC and with multiple aspects of HRPF, the development of MC in childhood may both directly and indirectly augment HRPF and may serve to enhance the development of long-term health outcomes in children and adolescents. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  11. Self-regulation as a moderator of the relation between coping and symptomatology in children of divorce.

    PubMed

    Lengua, L J; Sandler, I N

    1996-12-01

    Investigated the effects of self-regulation as a moderator of the relations between coping efforts and psychological symptoms of children of divorce. The interactions of two dimensions of self-regulation (task orientation and approach-flexibility) and two dimensions of coping (active and avoidant) predicting children's postdivorce symptoms were tested using a sample of 199 divorced mothers and their children, ages 8 to 12. The approach-flexibility dimension moderated the relations of both active and avoidant coping with children's self-report of anxiety. At higher levels of approach-flexibility, active coping was negatively related to anxiety, while at lower levels of approach-flexibility, active coping was unrelated to anxiety. Avoidant coping was unrelated to anxiety at higher levels of approach-flexibility, whereas at lower levels of approach-flexibility, avoidant coping was positively related to anxiety. The task orientation dimension did not interact with coping, but had direct, independent effects on children's self-report of conduct problems, depression, and parent-report of internalizing and externalizing behavior problems. The implications for understanding children's coping with divorce and future directions for research are discussed.

  12. Gender consistency and flexibility: using dynamics to understand the relationship between gender and adjustment.

    PubMed

    DiDonato, Matthew D; Martin, Carol L; Hessler, Eric E; Amazeen, Polemnia G; Hanish, Laura D; Fabes, Richard A

    2012-04-01

    Controversy surrounds questions regarding the influence of being gender consistent (i.e., having and expressing gendered characteristics that are consistent with one's biological sex) versus being gender flexible (i.e., having and expressing gendered characteristics that vary from masculine to feminine as circumstances arise) on children's adjustment outcomes, such as self-esteem, positive emotion, or behavior problems. Whereas evidence supporting the consistency hypothesis is abundant, little support exists for the flexibility hypothesis. To shed new light on the flexibility hypothesis, we explored children's gendered behavior from a dynamical perspective that highlighted variability and flexibility in addition to employing a conventional approach that emphasized stability and consistency. Conventional mean-level analyses supported the consistency hypothesis by revealing that gender atypical behavior was related to greater maladjustment, and dynamical analyses supported the flexibility hypothesis by showing that flexibility of gendered behavior over time was related to positive adjustment. Integrated analyses showed that gender typical behavior was related to the adjustment of children who were behaviorally inflexible, but not for those who were flexible. These results provided a more comprehensive understanding of the relation between gendered behavior and adjustment in young children and illustrated for the first time the feasibility of applying dynamical analyses to the study of gendered behavior.

  13. Adding Value to Large Multimedia Collections through Annotation Technologies and Tools: Serving Communities of Interest.

    ERIC Educational Resources Information Center

    Shabajee, Paul; Miller, Libby; Dingley, Andy

    A group of research projects based at HP-Labs Bristol, the University of Bristol (England) and ARKive (a new large multimedia database project focused on the world's biodiversity, based in the United Kingdom) are working to develop a flexible model for the indexing of multimedia collections that allows users to annotate content utilizing extensible…

  14. Generating Enhanced Natural Environments and Terrain for Interactive Combat Simulations (GENETICS)

    DTIC Science & Technology

    2005-09-01

    split to avoid T-junctions...Figure 2-23 Longest edge bisection...database. This feature allows trainers the flexibility to use the same terrain repeatedly or use a new one each time, forcing trainees to avoid...model are favored to create a good surface approximation. Cracks are avoided by projecting primitives and their respective textures onto multiple

  15. Defense Against National Vulnerabilities in Public Data

    DTIC Science & Technology

    2017-02-28

    ingestion of subscription-based precision data sources (Business Intelligence Databases, Monster, others). Flexible data architecture that allows for...Architecture Objective: Develop a data acquisition architecture that can successfully ingest 1,000,000 records per hour from up to 100 different open...data sources. Developed and operate a data acquisition architecture comprised of the four following major components: Robust website

  16. Indexing molecules with chemical graph identifiers.

    PubMed

    Gregori-Puigjané, Elisabet; Garriga-Sust, Rut; Mestres, Jordi

    2011-09-01

    Fast and robust algorithms for indexing molecules have been historically considered strategic tools for the management and storage of large chemical libraries. This work introduces a modified and further extended version of the molecular equivalence number naming adaptation of the Morgan algorithm (J Chem Inf Comput Sci 2001, 41, 181-185) for the generation of a chemical graph identifier (CGI). This new version corrects for the collisions recognized in the original adaptation and includes the ability to deal with graph canonicalization, ensembles (salts), and isomerism (tautomerism, regioisomerism, optical isomerism, and geometrical isomerism) in a flexible manner. Validation of the current CGI implementation was performed on the open NCI database and the drug-like subset of the ZINC database containing 260,071 and 5,348,089 structures, respectively. The results were compared with those obtained with some of the most widely used indexing codes, such as the CACTVS hash code and the new InChIKey. The analyses emphasize the fact that compound management activities, like duplicate analysis of chemical libraries, are sensitive to the exact definition of compound uniqueness and thus still depend, to a minor extent, on the type and flexibility of the molecular index being used. Copyright © 2011 Wiley Periodicals, Inc.
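    The Morgan-style invariant refinement that underlies identifiers like the CGI can be illustrated with a toy sketch. This is not the authors' algorithm; the graphs, labels, and hashing scheme below are invented for illustration. Node invariants start from atom labels and are iteratively refined with neighbor invariants, and the sorted multiset is hashed into an identifier that does not depend on atom numbering.

    ```python
    import hashlib

    def graph_identifier(adjacency, labels, rounds=3):
        """Toy Morgan-style identifier: iteratively refine node invariants by
        hashing each node's invariant with its sorted neighbor invariants, then
        combine the sorted multiset of invariants into one digest."""
        inv = {n: hashlib.sha256(labels[n].encode()).hexdigest() for n in adjacency}
        for _ in range(rounds):
            inv = {
                n: hashlib.sha256(
                    (inv[n] + "".join(sorted(inv[m] for m in adjacency[n]))).encode()
                ).hexdigest()
                for n in adjacency
            }
        return hashlib.sha256("".join(sorted(inv.values())).encode()).hexdigest()[:16]

    # The same labeled chain C-C-O under two different atom numberings:
    g1 = {0: [1], 1: [0, 2], 2: [1]}
    l1 = {0: "C", 1: "C", 2: "O"}
    g2 = {0: [1], 1: [0, 2], 2: [1]}
    l2 = {0: "O", 1: "C", 2: "C"}
    print(graph_identifier(g1, l1) == graph_identifier(g2, l2))  # True
    ```

    Because the final digest is built from a sorted multiset of refined invariants, renumbering the atoms leaves the identifier unchanged, which is the property duplicate analysis of chemical libraries depends on.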

  17. GEMINI: Integrative Exploration of Genetic Variation and Genome Annotations

    PubMed Central

    Paila, Umadevi; Chapman, Brad A.; Kirchner, Rory; Quinlan, Aaron R.

    2013-01-01

    Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics. PMID:23874191
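    The kind of query composition the abstract describes, combining genome annotations with sample genotypes in one unified relational store, can be sketched with a toy SQLite schema. The table, columns, and values below are invented for illustration and are not GEMINI's actual schema.

    ```python
    import sqlite3

    # Hypothetical minimal schema: one row per variant, with annotation
    # columns and per-sample genotype columns side by side.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE variants (
        chrom TEXT, pos INTEGER, ref TEXT, alt TEXT,
        gene TEXT, impact TEXT, aaf REAL,              -- annotations
        gt_child TEXT, gt_mother TEXT, gt_father TEXT  -- sample genotypes
    );
    INSERT INTO variants VALUES
     ('chr1', 101, 'A', 'G', 'BRCA2', 'HIGH', 0.001, '0/1', '0/0', '0/0'),
     ('chr1', 202, 'C', 'T', 'TP53',  'LOW',  0.300, '0/1', '0/1', '0/0'),
     ('chr2', 303, 'G', 'A', 'MLH1',  'HIGH', 0.002, '1/1', '0/1', '0/1');
    """)

    # Compose a query mixing annotations (impact, allele frequency) with a
    # de novo inheritance pattern: child heterozygous, both parents reference.
    rows = con.execute("""
        SELECT chrom, pos, gene FROM variants
        WHERE impact = 'HIGH' AND aaf < 0.01
          AND gt_child = '0/1' AND gt_mother = '0/0' AND gt_father = '0/0'
    """).fetchall()
    print(rows)  # [('chr1', 101, 'BRCA2')]
    ```

    Putting annotations and genotypes in one queryable store is what lets a researcher express filters and inheritance patterns in a single statement instead of a fixed menu of prioritization options.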

  18. Identification of critical factors affecting flexibility in hospital construction projects.

    PubMed

    Olsson, Nils E O; Hansen, Geir K

    2010-01-01

    This paper analyzes the dynamics relating to flexibility in a hospital project context. Three research questions are addressed: (1) When is flexibility used in the life cycle of a project? (2) What are the stakeholders' perspectives on project flexibility? And (3) What is the nature of the interaction between flexibility in the process of a project and flexibility in terms of the characteristics of a building? Flexibility is discussed from both a project management point of view and from a hospital architecture perspective. Flexibility in project life cycle and from a stakeholder perspective is examined, and the interaction between flexibility in scope lock-in and building flexibility is investigated. The results are based on case studies of four Norwegian hospital projects. Information relating to the projects has been obtained from evaluation reports, other relevant documents, and interviews. Observations were codified and analyzed based on selected parameters that represent different aspects of flexibility. One of the cases illustrates how late changes can have a significant negative impact on the project itself, contributing to delays and cost overruns. Another case illustrates that late scope lock-in on a limited part of the project, in this case related to medical equipment, can be done in a controlled manner. Project owners and users appear to have given flexibility high priority. Project management teams are less likely to embrace changes and late scope lock-in. Architects and consultants are important for translating program requirements into physical design. A highly flexible building did not stop some stakeholders from pushing for significant changes and extensions during construction.

  19. Flexible working conditions and their effects on employee health and wellbeing.

    PubMed

    Joyce, Kerry; Pabayo, Roman; Critchley, Julia A; Bambra, Clare

    2010-02-17

    Flexible working conditions are increasingly popular in developed countries but the effects on employee health and wellbeing are largely unknown. To evaluate the effects (benefits and harms) of flexible working interventions on the physical, mental and general health and wellbeing of employees and their families. Our searches (July 2009) covered 12 databases including the Cochrane Public Health Group Specialised Register, CENTRAL; MEDLINE; EMBASE; CINAHL; PsycINFO; Social Science Citation Index; ASSIA; IBSS; Sociological Abstracts; and ABI/Inform. We also searched relevant websites, handsearched key journals, searched bibliographies and contacted study authors and key experts. Randomised controlled trials (RCT), interrupted time series and controlled before and after studies (CBA), which examined the effects of flexible working interventions on employee health and wellbeing. We excluded studies assessing outcomes for less than six months and extracted outcomes relating to physical, mental and general health/ill health measured using a validated instrument. We also extracted secondary outcomes (including sickness absence, health service usage, behavioural changes, accidents, work-life balance, quality of life, health and wellbeing of children, family members and co-workers) if reported alongside at least one primary outcome. Two experienced review authors conducted data extraction and quality appraisal. We undertook a narrative synthesis as there was substantial heterogeneity between studies. Ten studies fulfilled the inclusion criteria. Six CBA studies reported on interventions relating to temporal flexibility: self-scheduling of shift work (n = 4), flexitime (n = 1) and overtime (n = 1). The remaining four CBA studies evaluated a form of contractual flexibility: partial/gradual retirement (n = 2), involuntary part-time work (n = 1) and fixed-term contract (n = 1). 
The studies retrieved had a number of methodological limitations including short follow-up periods, risk of selection bias and reliance on largely self-reported outcome data. Four CBA studies on self-scheduling of shifts and one CBA study on gradual/partial retirement reported statistically significant improvements in either primary outcomes (including systolic blood pressure and heart rate; tiredness; mental health, sleep duration, sleep quality and alertness; self-rated health status) or secondary health outcomes (co-workers social support and sense of community) and no ill health effects were reported. Flexitime was shown not to have significant effects on self-reported physiological and psychological health outcomes. Similarly, when comparing individuals working overtime with those who did not the odds of ill health effects were not significantly higher in the intervention group at follow up. The effects of contractual flexibility on self-reported health (with the exception of gradual/partial retirement, which when controlled by employees improved health outcomes) were either equivocal or negative. No studies differentiated results by socio-economic status, although one study did compare findings by gender but found no differential effect on self-reported health outcomes. The findings of this review tentatively suggest that flexible working interventions that increase worker control and choice (such as self-scheduling or gradual/partial retirement) are likely to have a positive effect on health outcomes. In contrast, interventions that were motivated or dictated by organisational interests, such as fixed-term contract and involuntary part-time employment, found equivocal or negative health effects. Given the partial and methodologically limited evidence base these findings should be interpreted with caution. 
Moreover, there is a clear need for well-designed intervention studies to delineate the impact of flexible working conditions on health, wellbeing and health inequalities.

  20. Distributed structure-searchable toxicity (DSSTox) public database network: a proposal.

    PubMed

    Richard, Ann M; Williams, ClarLynda R

    2002-01-29

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, Structure-Activity Relationship (SAR) model development, or building of chemical relational databases (CRD). The distributed structure-searchable toxicity (DSSTox) public database network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: (1) to adopt and encourage the use of a common standard file format (structure data file (SDF)) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; (2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data sources with potential users of these data from other disciplines (such as chemistry, modeling, and computer science); and (3) to engage public/commercial/academic/industry groups in contributing to and expanding this community-wide, public data sharing and distribution effort. 
The DSSTox project's overall aims are to effect the closer association of chemical structure information with existing toxicity data, and to promote and facilitate structure-based exploration of these data within a common chemistry-based framework that spans toxicological disciplines.
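    The SDF format the proposal standardizes on stores, after each structure's molfile block, a series of `> <FieldName>` data items terminated by `$$$$`. As a hedged illustration, the minimal parser below extracts those data items; the field names and values are invented examples, and the molfile block is elided.

    ```python
    def parse_sdf_properties(text):
        """Extract the '> <FieldName>' data items from each record of an
        SD file; the molfile block itself is skipped. Records end with $$$$."""
        records = []
        for chunk in text.split("$$$$"):
            props, field = {}, None
            for line in chunk.splitlines():
                if line.startswith("> <"):
                    field = line[3:line.index(">", 3)]
                elif field and line.strip():
                    props[field] = line.strip()
                    field = None
            if props:
                records.append(props)
        return records

    sdf = """...molfile lines omitted...
    > <CASRN>
    50-00-0

    > <Carcinogenicity>
    positive

    $$$$
    """
    print(parse_sdf_properties(sdf))
    # [{'CASRN': '50-00-0', 'Carcinogenicity': 'positive'}]
    ```

    Because structure, text, and property fields travel together in one plain-text record, any CRD application that reads SDF can import a contributed toxicity file without a custom loader, which is the interoperability argument the proposal makes.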

  1. HITRAN2016: new and improved data and tools towards studies of planetary atmospheres

    NASA Astrophysics Data System (ADS)

    Gordon, Iouli; Rothman, Laurence S.; Wilzewski, Jonas S.; Kochanov, Roman V.; Hill, Christian; Tan, Yan; Wcislo, Piotr

    2016-10-01

    The HITRAN2016 molecular spectroscopic database is scheduled to be released this year. It will replace the current edition, HITRAN2012 [1], which has been in use, along with some intermediate updates, since 2012. We have added, revised, and improved many transitions and bands of molecular species and their isotopologues. The number of parameters has also been increased significantly, now incorporating, for instance, broadening by He, H2 and CO2, which are dominant in different planetary atmospheres [2]; non-Voigt line profiles [3]; and other phenomena. This poster will provide a summary of the updates, emphasizing some of the most important improvements and additions. To allow flexible incorporation of the new parameters and to improve the efficiency of database usage, the whole database has been reorganized into a relational database structure and is presented to the user by means of a very powerful, easy-to-use internet program called HITRANonline [4] accessible at . This interface allows the user to perform many queries in standard and user-defined formats. In addition, a powerful application called HAPI (HITRAN Application Programming Interface) [5] has been developed. HAPI is a set of Python libraries that allows much more functionality for the user. A demonstration of the power of the new tools will also be offered. This work is supported by the NASA PATM (NNX13AI59G), PDART (NNX16AG51G) and AURA (NNX14AI55G) programs. References: [1] L.S. Rothman et al., JQSRT 130, 4 (2013); [2] J.S. Wilzewski et al., JQSRT 168, 193 (2016); [3] P. Wcislo et al., JQSRT 177, 75 (2016); [4] C. Hill et al., JQSRT 177, 4 (2016); [5] R.V. Kochanov et al., JQSRT 177, 15 (2016).
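    The benefit of a relational reorganization, letting new line parameters such as foreign-gas broadening coefficients attach to a transition without widening a fixed-format record, can be sketched with a toy schema. The tables, parameter names, and numeric values below are invented for illustration and are not the actual HITRAN schema or data.

    ```python
    import sqlite3

    # Hypothetical layout: a core transitions table plus a narrow key/value
    # table, so adding e.g. 'gamma_He' or 'gamma_CO2' needs no schema change.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE transitions (
        id INTEGER PRIMARY KEY, molecule TEXT, iso INTEGER,
        nu REAL,   -- transition wavenumber, cm-1
        sw REAL    -- line intensity
    );
    CREATE TABLE line_parameters (
        transition_id INTEGER REFERENCES transitions(id),
        name TEXT,   -- e.g. 'gamma_He', 'gamma_CO2'
        value REAL
    );
    INSERT INTO transitions VALUES (1, 'H2O', 1, 3657.05, 1.2e-20);
    INSERT INTO line_parameters VALUES
        (1, 'gamma_He', 0.021), (1, 'gamma_CO2', 0.095);
    """)

    # A user-defined query pulls only the parameters a given planetary
    # atmosphere needs, here CO2 broadening:
    row = con.execute("""
        SELECT t.molecule, t.nu, p.value
        FROM transitions t JOIN line_parameters p ON p.transition_id = t.id
        WHERE p.name = 'gamma_CO2'
    """).fetchone()
    print(row)  # ('H2O', 3657.05, 0.095)
    ```

    The narrow parameter table is one common way to keep a schema open-ended; a fixed-width text record, by contrast, must be redefined every time a new parameter class is adopted.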

  2. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.

  3. Are the Most Plastic Species the Most Abundant Ones? An Assessment Using a Fish Assemblage

    PubMed Central

    Vidal, Nicolás; Zaldúa, Natalia; D'Anatro, Alejandro; Naya, Daniel E.

    2014-01-01

    Few studies have evaluated phenotypic plasticity at the community level, considering, for example, plastic responses in an entire species assemblage. In addition, none of these studies have addressed the relationship between phenotypic plasticity and community structure. Within this context, here we assessed the magnitude of seasonal changes in digestive traits (seasonal flexibility), and of changes during short-term fasting (flexibility during fasting), occurring in an entire fish assemblage, comprising ten species, four trophic levels, and a 37-fold range in body mass. In addition, we analyzed the relationship between estimates of digestive flexibility and three basic assemblage structure attributes, i.e., species trophic position, body size, and relative abundance. We found that: (1) Seasonal digestive flexibility was not related with species trophic position or with body size; (2) Digestive flexibility during fasting tended to be inversely correlated with body size, as expected from scaling relationships; (3) Digestive flexibility, both seasonal and during fasting, was positively correlated with species relative abundance. In conclusion, the present study identified two trends in digestive flexibility in relation to assemblage structure, which represents an encouraging departure point in the search of general patterns in phenotypic plasticity at the local community scale. PMID:24651865

  4. The relative benefits of green versus lean office space: three field experiments.

    PubMed

    Nieuwenhuis, Marlon; Knight, Craig; Postmes, Tom; Haslam, S Alexander

    2014-09-01

    Principles of lean office management increasingly call for space to be stripped of extraneous decorations so that it can flexibly accommodate changing numbers of people and different office functions within the same area. Yet this practice is at odds with evidence that office workers' quality of life can be enriched by office landscaping that involves the use of plants that have no formal work-related function. To examine the impact of these competing approaches, 3 field experiments were conducted in large commercial offices in The Netherlands and the U.K. These examined the impact of lean and "green" offices on subjective perceptions of air quality, concentration, and workplace satisfaction as well as objective measures of productivity. Two studies were longitudinal, examining effects of interventions over subsequent weeks and months. In all 3 experiments enhanced outcomes were observed when offices were enriched by plants. Implications for theory and practice are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  5. Structure-directing weak phosphoryl XH...O=P (X = C, N) hydrogen bonds in cyclic oxazaphospholidines and oxazaphosphinanes.

    PubMed

    van der Lee, A; Rolland, M; Marat, X; Virieux, D; Volle, J N; Pirat, J L

    2008-04-01

    The structures of six cyclic oxazaphospholidines and three cyclic oxazaphosphinanes have been determined and their supramolecular structures have been compared. The molecules differ with respect to the functional groups attached to the central five- or six-membered rings, but have one phosphoryl group in common. The predominant feature in the supramolecular structures is the existence of relatively weak intermolecular phosphoryl XH...O=P (X = C, N) hydrogen bonds, creating in nearly all cases linear zigzag or double molecular chains. The molecular chains are in general linked to each other via very weak CH...pi or usual hydrogen-bond interactions. A survey of the Cambridge Structural Database on similar XH...O=P interactions shows a very large flexibility of the XH...O angle, which is in agreement with the DFT calculation reported elsewhere. The strength of the XH...O=P interaction can therefore be considered as relatively weak to moderately strong, and is expected to play at least a role in the formation of secondary substructures.

  6. Cognitive structure, flexibility, and plasticity in human multitasking-An integrative review of dual-task and task-switching research.

    PubMed

    Koch, Iring; Poljac, Edita; Müller, Hermann; Kiesel, Andrea

    2018-06-01

    Numerous studies showed decreased performance in situations that require multiple tasks or actions relative to appropriate control conditions. Because humans often engage in such multitasking activities, it is important to understand how multitasking affects performance. In the present article, we argue that research on dual-task interference and sequential task switching has proceeded largely separately using different experimental paradigms and methodology. In our article we aim at organizing this complex set of research in terms of three complementary research perspectives on human multitasking. One perspective refers to structural accounts in terms of cognitive bottlenecks (i.e., critical processing stages). A second perspective refers to cognitive flexibility in terms of the underlying cognitive control processes. A third perspective emphasizes cognitive plasticity in terms of the influence of practice on human multitasking abilities. With our review article we aimed at highlighting the value of an integrative position that goes beyond isolated consideration of a single theoretical research perspective and that broadens the focus from single experimental paradigms (dual task and task switching) to favor instead a view that emphasizes the fundamental similarity of the underlying cognitive mechanisms across multitasking paradigms. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Database for propagation models

    NASA Astrophysics Data System (ADS)

    Kantak, Anil V.

    1991-07-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software and hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be facilitated considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. The handling of data will also become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined only to its contents. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
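    The idea of a shared collection of standardized, community-approved models can be sketched as a simple registry: every researcher runs the same registered code rather than a private reimplementation. The registry mechanism and model name below are invented for illustration; the one registered formula is the standard free-space path loss (distances in km, frequencies in MHz).

    ```python
    import math

    # Hypothetical registry mapping agreed model names to implementations.
    MODELS = {}

    def register(name):
        """Decorator that files a model function under its standardized name."""
        def deco(fn):
            MODELS[name] = fn
            return fn
        return deco

    @register("free_space_loss_db")
    def free_space_loss_db(distance_km, freq_mhz):
        """Free-space path loss in dB (standard Friis formula, km/MHz units)."""
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    # Any researcher retrieves the model by name and gets identical results:
    print(round(MODELS["free_space_loss_db"](10.0, 2000.0), 1))  # 118.5
    ```

    Looking models up by agreed name is what makes one researcher's result reproducible by another without exchanging hardware or software details, which is the reproducibility argument the section makes.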

  8. Child Temperamental Flexibility Moderates the Relation between Positive Parenting and Adolescent Adjustment

    PubMed Central

    Rabinowitz, Jill A.; Drabick, Deborah A.G.; Reynolds, Maureen D.; Clark, Duncan B.; Olino, Thomas M.

    2016-01-01

    Temperamental flexibility and lower positive parenting are associated with internalizing and externalizing problems; however, youth varying in flexibility may be differentially affected by positive parenting in the prediction of symptoms. We examined whether children's flexibility moderated prospective relations between maternal and paternal positive parenting and youth internalizing and externalizing symptoms during adolescence. Participants (N = 775, 71% male) and their caregivers completed measures when youth were 10-12 and 12-14 years old. Father positive parenting interacted with child flexibility to predict father-reported internalizing and externalizing problems. Consistent with the diathesis-stress model, children lower in flexibility experienced greater symptoms than children higher in flexibility in lower positive parenting contexts. Among children lower in flexibility, lower paternal positive parenting was associated with greater internalizing and externalizing symptoms compared to higher paternal positive parenting. However, among youth higher in flexibility, symptom levels were similar regardless of whether youth experienced lower or higher paternal positive parenting. PMID:26834305

  9. Development of a statewide trauma registry using multiple linked sources of data.

    PubMed Central

    Clark, D. E.

    1993-01-01

    In order to develop a cost-effective method of injury surveillance and trauma system evaluation in a rural state, computer programs were written linking records from two major hospital trauma registries, a statewide trauma tracking study, hospital discharge abstracts, death certificates, and ambulance run reports. A general-purpose database management system, programming language, and operating system were used. Data from 1991 appeared to be successfully linked using only indirect identifying information. Familiarity with local geography and the idiosyncrasies of each data source were helpful in programming for effective matching of records. For each individual case identified in this way, data from all available sources were then merged and imported into a standard database format. This inexpensive, population-based approach, maintaining flexibility for end-users with some database training, may be adaptable for other regions. There is a need for further improvement and simplification of the record-linkage process for this and similar purposes. PMID:8130556
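    The linkage step described, matching records across sources by indirect identifiers rather than a shared patient ID, can be sketched as follows. The field names and records are invented for illustration; a real linkage would also need fuzzy matching and manual review of near-misses.

    ```python
    # Two hypothetical sources with no common identifier: match on a
    # composite key of indirect fields (incident date, age, town).
    hospital = [
        {"date": "1991-03-02", "age": 34, "town": "Bangor",   "iss": 17},
        {"date": "1991-07-19", "age": 60, "town": "Portland", "iss": 9},
    ]
    ambulance = [
        {"date": "1991-03-02", "age": 34, "town": "Bangor",   "unit": "A12"},
        {"date": "1991-11-05", "age": 22, "town": "Lewiston", "unit": "B03"},
    ]

    def key(r):
        return (r["date"], r["age"], r["town"])

    # Index one source by the composite key, then merge matching records.
    index = {key(r): r for r in ambulance}
    linked = [{**h, **index[key(h)]} for h in hospital if key(h) in index]
    print(linked)
    # [{'date': '1991-03-02', 'age': 34, 'town': 'Bangor', 'iss': 17, 'unit': 'A12'}]
    ```

    Exact composite-key matching is the simplest form of indirect linkage; the abstract's point about local geography and source idiosyncrasies is precisely what such a key must be tuned around in practice.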

  10. Location-Driven Image Retrieval for Images Collected by a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji

    Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of the visualization, an image retrieval system on such a robot’s image database would be very useful. The main difference between the robot’s image database and standard image databases is that various relevant images exist due to the variety of viewing conditions. The main contribution of this paper is to propose an efficient retrieval approach, named the location-driven approach, utilizing correlation between visual features and real-world locations of images. By combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this purpose.

  11. An engineering database management system for spacecraft operations

    NASA Technical Reports Server (NTRS)

    Cipollone, Gregorio; Mckay, Michael H.; Paris, Joseph

    1993-01-01

    Studies at ESOC have demonstrated the feasibility of a flexible and powerful Engineering Database Management System (EDMS) in support of spacecraft operations documentation. The objectives set out were three-fold: first, an analysis of the problems encountered by the operations team in obtaining and managing operations documents; secondly, the definition of a concept for operations documentation and the implementation of a prototype to prove the feasibility of the concept; and thirdly, the definition of standards and protocols required for the exchange of data between the top-level partners in a satellite project. The EDMS prototype was populated with ERS-1 satellite design data and has been used by the operations team at ESOC to gather operational experience. An operational EDMS would be implemented at the satellite prime contractor's site as a common database for all technical information surrounding a project and would be accessible by the co-contractors' and ESA teams.

  12. The Research of Computer Aided Farm Machinery Designing Method Based on Ergonomics

    NASA Astrophysics Data System (ADS)

    Gao, Xiyin; Li, Xinling; Song, Qiang; Zheng, Ying

    With the development of the agricultural economy, the range of farm machinery products is gradually increasing, and ergonomics questions are becoming more and more prominent. The widespread application of computer-aided machinery design makes farm machinery design intuitive, flexible and convenient. At present, because existing computer-aided ergonomics software lacks a human body database suited to farm machinery design in China, such designs show deviations in ergonomics analysis. This article proposes using the open database interface in CATIA to establish a human body database aimed at farm machinery design; reading the human body data into the ergonomics module of CATIA can produce a virtual body for practical application, and the human posture analysis and human activity analysis modules can then be used to analyze the ergonomics of farm machinery. In this way, a computer-aided farm machinery design method based on ergonomics can be realized.

  13. Flexible band versus rigid ring annuloplasty for tricuspid regurgitation: a systematic review and meta-analysis.

    PubMed

    Wang, Nelson; Phan, Steven; Tian, David H; Yan, Tristan D; Phan, Kevin

    2017-05-01

    Up to 20% of patients have pre-discharge residual moderate to severe tricuspid regurgitation (TR) after tricuspid repair. Reoperations for recurrent TR carry high mortality rates, which emphasizes the importance of identifying the optimal technique for the surgical management of TR. The present study is a systematic review and meta-analysis that aims to compare short- and long-term survival and freedom from TR of flexible band versus rigid ring annuloplasty for TR. We conducted a systematic review and meta-analysis of comparative studies to evaluate these procedures. A systematic search of the literature was performed in six electronic databases. Pooled meta-analysis was conducted using odds ratio (OR) and weighted mean difference (WMD). The rates of in-hospital mortality were not different between the two groups, with cumulative rates of 6.9% for flexible band and 7.3% for rigid ring (OR: 0.92; 95% CI: 0.49-1.71). Rates of stroke were also similar, with 1.7% of flexible band and 1.3% of rigid ring patients suffering a perioperative stroke (OR: 1.29; 95% CI: 0.74-2.23). Rigid ring had significantly better freedom from grade ≥2 TR at 5 years (OR: 0.44; 95% CI: 0.20-0.99) and overall (P=0.005). There was no significant difference in overall rates of reoperation (P=0.232) and survival (P=0.086) between flexible band and rigid ring. Both rigid ring and flexible band offer acceptable outcomes for the treatment of TR. Compared to flexible band, rates of TR are stable after rigid ring annuloplasty and long-term freedom from TR is superior for rigid ring devices. Large prospective randomized trials are required in order to validate these findings and assess for improvements in patient survival.

  14. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    PubMed

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an Intranet network and transformed via eXtensible Stylesheet Language (XSL) to be visualized in a uniform way on common commercial browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and well suited to crafting a dynamic Web environment.
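
    The flexible XML record structure described above can be sketched with Python's standard library; the element names below are hypothetical, since the paper does not reproduce BIRD's actual schema:

    ```python
    import xml.etree.ElementTree as ET

    # Build a minimal patient-referral record of the kind BIRD centralizes
    # on the server (hypothetical element names).
    referral = ET.Element("referral", id="R-0001")
    patient = ET.SubElement(referral, "patient")
    ET.SubElement(patient, "name").text = "Rossi, Mario"
    images = ET.SubElement(referral, "images")
    ET.SubElement(images, "image", modality="MR", uri="mr/r0001_slice12.jpg")

    # Serialize for distribution to clients; on the server this XML would
    # be transformed via XSL before display. Here we simply read it back.
    xml_text = ET.tostring(referral, encoding="unicode")
    parsed = ET.fromstring(xml_text)
    modality = parsed.find("./images/image").get("modality")
    ```

    Because the record is plain XML, any client that understands the stylesheet can render it uniformly, which is the portability argument the paper makes.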

  15. Plant Genome Resources at the National Center for Biotechnology Information

    PubMed Central

    Wheeler, David L.; Smith-White, Brian; Chetvernin, Vyacheslav; Resenchuk, Sergei; Dombrowski, Susan M.; Pechous, Steven W.; Tatusova, Tatiana; Ostell, James

    2005-01-01

    The National Center for Biotechnology Information (NCBI) integrates data from more than 20 biological databases through a flexible search and retrieval system called Entrez. A core Entrez database, Entrez Nucleotide, includes GenBank and is tightly linked to the NCBI Taxonomy database, the Entrez Protein database, and the scientific literature in PubMed. A suite of more specialized databases for genomes, genes, gene families, gene expression, gene variation, and protein domains dovetails with the core databases to make Entrez a powerful system for genomic research. Linked to the full range of Entrez databases is the NCBI Map Viewer, which displays aligned genetic, physical, and sequence maps for eukaryotic genomes including those of many plants. A specialized plant query page allows maps from all plant genomes covered by the Map Viewer to be searched in tandem to produce a display of aligned maps from several species. PlantBLAST searches against the sequences shown in the Map Viewer allow BLAST alignments to be viewed within a genomic context. In addition, precomputed sequence similarities, such as those for proteins offered by BLAST Link, enable fluid navigation from unannotated to annotated sequences, quickening the pace of discovery. NCBI Web pages for plants, such as Plant Genome Central, complete the system by providing centralized access to NCBI's genomic resources as well as links to organism-specific Web pages beyond NCBI. PMID:16010002

  16. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.

    PubMed

    Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu

    2006-01-11

    Many proteomics initiatives require integration of all information, with uniform criteria, from collection of samples and data display to publication of experimental results. Integrating and exchanging these data of different formats and structures poses a great challenge. XML technology holds promise for handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, with marked geographic and racial differences in incidence. Although some cancer proteome databases now exist, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, are provided for accessing the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source code for constructing users' own proteome repositories, can be accessed at http://www.xyproteomics.org/.

  17. Analysing and Rationalising Molecular and Materials Databases Using Machine-Learning

    NASA Astrophysics Data System (ADS)

    de, Sandip; Ceriotti, Michele

    Computational materials design promises to greatly accelerate the process of discovering new or more performant materials. Several collaborative efforts are contributing to this goal by building databases of structures, containing between thousands and millions of distinct hypothetical compounds, whose properties are computed by high-throughput electronic-structure calculations. The complexity and sheer amount of information has made manual exploration, interpretation and maintenance of these databases a formidable challenge, making it necessary to resort to automatic analysis tools. Here we will demonstrate how, starting from a measure of (dis)similarity between database items built from a combination of local environment descriptors, it is possible to apply hierarchical clustering algorithms, as well as dimensionality reduction methods such as sketchmap, to analyse, classify and interpret trends in molecular and materials databases, as well as to detect inconsistencies and errors. Thanks to the agnostic and flexible nature of the underlying metric, we will show how our framework can be applied transparently to different kinds of systems ranging from organic molecules and oligopeptides to inorganic crystal structures as well as molecular crystals. Funded by National Center for Computational Design and Discovery of Novel Materials (MARVEL) and Swiss National Science Foundation.

  18. ARMOUR - A Rice miRNA: mRNA Interaction Resource.

    PubMed

    Sanan-Mishra, Neeti; Tripathi, Anita; Goswami, Kavita; Shukla, Rohit N; Vasudevan, Madavan; Goswami, Hitesh

    2018-01-01

    ARMOUR was developed as A Rice miRNA:mRNA interaction resource. This informative and interactive database includes the experimentally validated expression profiles of miRNAs under different developmental and abiotic stress conditions across seven Indian rice cultivars. This comprehensive database covers 689 known and 1664 predicted novel miRNAs and their expression profiles in more than 38 different tissues or conditions, along with their predicted/known target transcripts. The understanding of the miRNA:mRNA interactome in the regulation of functional cellular machinery is supported by the sequence information of the mature and hairpin structures. ARMOUR gives users the flexibility to query the database in multiple ways, such as by known gene identifiers, gene ontology identifiers and KEGG identifiers, and also allows on-the-fly fold-change analysis and sequence search queries with an inbuilt BLAST algorithm. The ARMOUR database provides a cohesive platform for novel and mature miRNAs and their expression in different experimental conditions, and allows searching for their interacting mRNA targets, GO annotation and their involvement in various biological pathways. The ARMOUR database includes a provision for adding more experimental data from users, with an aim to develop it as a platform for sharing and comparing experimental data contributed by research groups working on rice.

  19. Numerical Mantle Convection Models With a Flexible Thermodynamic Interface

    NASA Astrophysics Data System (ADS)

    van den Berg, A. P.; Jacobs, M. H.; de Jong, B. H.

    2001-12-01

    Accurate material properties are needed at deep mantle (P,T) conditions in order to predict the long-term behavior of convecting planetary mantles. The interpretation of seismological observations concerning the deep mantle in terms of mantle flow models also calls for a consistent thermodynamic description of the basic physical parameters. We have interfaced a compressible convection code using the anelastic liquid approach, based on finite element methods, to a database containing a full thermodynamic description of mantle silicates (Ita and King, J. Geophys. Res., 99, 15,939-15,940, 1994). The model is based on high-resolution (P,T) tables of the relevant thermodynamic properties, containing typically 50 million (P,T) table gridpoints to obtain a resolution in (P,T) space of 1 K and an equivalent of 1 km. The resulting model is completely flexible, such that numerical mantle convection experiments can be performed for any mantle composition for which the thermodynamic database is available. We present results of experiments for 2D Cartesian models using a database for magnesium-iron silicate in a pyrolitic composition (Stixrude and Bukowinski, Geoph. Monogr. Ser., 74, 131-142, 1993) and a recent thermodynamic model for magnesium silicate for the complete mantle (P,T) range (Jacobs and Oonk, Phys. Chem. Mineral., 269, in press, 2001). Preliminary results for the bulk sound velocity distribution, derived in a consistent way from the convection results and the thermodynamic database, show a `realistic' mantle profile with bulk velocity variations decreasing from several percent in the upper mantle to less than a percent in the deep lower mantle.

  20. Depth-area-duration characteristics of storm rainfall in Texas using Multi-Sensor Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    McEnery, J. A.; Jitkajornwanich, K.

    2012-12-01

    This presentation will describe the methodology and overall system development by which a benchmark dataset of precipitation information has been used to characterize the depth-area-duration relations in heavy rain storms occurring over regions of Texas. Over the past two years project investigators along with the National Weather Service (NWS) West Gulf River Forecast Center (WGRFC) have developed and operated a gateway data system to ingest, store, and disseminate NWS multi-sensor precipitation estimates (MPE). As a pilot project of the Integrated Water Resources Science and Services (IWRSS) initiative, this testbed uses a Structured Query Language (SQL) server to maintain a full archive of current and historic MPE values within the WGRFC service area. These time series values are made available for public access as web services in the standard WaterML format. Having this volume of information maintained in a comprehensive database now allows the use of relational analysis capabilities within SQL to leverage these multi-sensor precipitation values and produce a valuable derivative product. The area of focus for this study is North Texas and utilizes values that originated from the West Gulf River Forecast Center (WGRFC), one of three River Forecast Centers currently represented in the holdings of this data system. Over the past two decades, NEXRAD radar has dramatically improved the ability to record rainfall. The resulting hourly MPE values, distributed over an approximately 4 km by 4 km grid, are considered by the NWS to be the "best estimate" of rainfall. The data server provides an accepted standard interface for internet access to the largest time-series dataset of NEXRAD-based MPE values ever assembled. An automated script has been written to search and extract storms over the 18-year period of record from the contents of this massive historical precipitation database. 
Not only can it extract site-specific storms, but also duration-specific storms and storms separated by user defined inter-event periods. A separate storm database has been created to store the selected output. By storing output within tables in a separate database, we can make use of powerful SQL capabilities to perform flexible pattern analysis. Previous efforts have made use of historic data from limited clusters of irregularly spaced physical gauges. Spatial extent of the observational network has been a limiting factor. The relatively dense distribution of MPE provides a virtual mesh of observations stretched over the landscape. This work combines a unique hydrologic data resource with programming and database analysis to characterize storm depth-area-duration relationships.
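
    The storm-extraction step described above can be sketched as a gap-based grouping over an SQL archive; the table layout, field names and three-hour inter-event threshold below are illustrative assumptions, not the project's actual schema:

    ```python
    import sqlite3

    # Hypothetical archive of hourly MPE depths per grid cell.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE mpe (cell TEXT, hour INTEGER, depth_mm REAL)")
    con.executemany("INSERT INTO mpe VALUES (?, ?, ?)",
                    [("A", 0, 5.0), ("A", 1, 8.0), ("A", 2, 0.0),
                     ("A", 3, 0.0), ("A", 4, 0.0), ("A", 5, 12.0),
                     ("A", 6, 3.0)])

    def extract_storms(con, cell, min_gap_hours=3):
        """Group a cell's wet hours into storms: a new storm starts whenever
        consecutive wet hours are separated by at least `min_gap_hours`."""
        wet = con.execute(
            "SELECT hour, depth_mm FROM mpe "
            "WHERE cell = ? AND depth_mm > 0 ORDER BY hour", (cell,)).fetchall()
        storms, current = [], []
        for hour, depth in wet:
            if current and hour - current[-1][0] >= min_gap_hours:
                storms.append(current)
                current = []
            current.append((hour, depth))
        if current:
            storms.append(current)
        return storms

    storms = extract_storms(con, "A")  # two storms: hours 0-1 and hours 5-6
    ```

    Storing the grouped output in separate storm tables, as the abstract describes, then allows further SQL pattern analysis over whole storms rather than raw hourly values.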

  1. Barrow real-time sea ice mass balance data: ingestion, processing, dissemination and archival of multi-sensor data

    NASA Astrophysics Data System (ADS)

    Grimes, J.; Mahoney, A. R.; Heinrichs, T. A.; Eicken, H.

    2012-12-01

    Sensor data can be highly variable in nature and also varied depending on the physical quantity being observed, sensor hardware and sampling parameters. The sea ice mass balance site (MBS) operated in Barrow by the University of Alaska Fairbanks (http://seaice.alaska.edu/gi/observatories/barrow_sealevel) is a multisensor platform consisting of a thermistor string, air and water temperature sensors, acoustic altimeters above and below the ice and a humidity sensor. Each sensor has a unique specification and configuration. The data from multiple sensors are combined to generate sea ice data products. For example, ice thickness is calculated from the positions of the upper and lower ice surfaces, which are determined using data from downward-looking and upward-looking acoustic altimeters above and below the ice, respectively. As a data clearinghouse, the Geographic Information Network of Alaska (GINA) processes real time data from many sources, including the Barrow MBS. Doing so requires a system that is easy to use, yet also offers the flexibility to handle data from multisensor observing platforms. In the case of the Barrow MBS, the metadata system needs to accommodate the addition of new and retirement of old sensors from year to year as well as instrument configuration changes caused by, for example, spring melt or inquisitive polar bears. We also require ease of use for both administrators and end users. Here we present the data and processing steps of using sensor data system powered by the NoSQL storage engine, MongoDB. The system has been developed to ingest, process, disseminate and archive data from the Barrow MBS. Storing sensor data in a generalized format, from many different sources, is a challenging task, especially for traditional SQL databases with a set schema. MongoDB is a NoSQL (not only SQL) database that does not require a fixed schema. 
There are several advantages to using this model over traditional relational database management system (RDBMS) databases. The lack of a required schema allows flexibility in how the data can be ingested into the database. For example, MongoDB imposes no restrictions on field names. For researchers using the system, this means that the name they have chosen for a sensor is carried through the database, any processing, and the final output, helping to preserve data integrity. MongoDB also allows data to be pushed to it dynamically, meaning that field attributes can be defined at the point of ingestion. This allows any sensor data to be ingested as a document and this functionality to be carried through to the user interface, allowing greater adaptability to different use-case scenarios. In presenting the MongoDB data system being developed for the Barrow MBS, we demonstrate the versatility of this approach and its suitability as the foundation of a Barrow node of the Arctic Observing Network.
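
    The schemaless ingestion argument above can be illustrated with a toy document store; this is a dependency-free stand-in for MongoDB, loosely mimicking pymongo's `insert_one`/`find` calls, not the actual client library:

    ```python
    # Toy in-memory document store illustrating schemaless ingestion
    # (a stand-in for MongoDB; not the actual pymongo client).
    class DocumentStore:
        def __init__(self):
            self._docs = []

        def insert_one(self, doc):
            # No fixed schema: each document carries its own field names,
            # so a researcher's chosen sensor names survive unchanged.
            self._docs.append(dict(doc))

        def find(self, query):
            # Return documents whose fields match every key in `query`.
            return [d for d in self._docs
                    if all(d.get(k) == v for k, v in query.items())]

    store = DocumentStore()
    store.insert_one({"sensor": "thermistor_string", "depth_cm": 40,
                      "temp_c": -1.8})
    store.insert_one({"sensor": "acoustic_altimeter", "orientation": "upward",
                      "range_m": 1.42})  # different fields, same collection

    matches = store.find({"sensor": "acoustic_altimeter"})
    ```

    The point is that the two documents share no schema yet coexist in one collection, which is what lets a multisensor site add or retire instruments between seasons without migrating tables.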

  2. “Gone are the days of mass-media marketing plans and short term customer relationships”: tobacco industry direct mail and database marketing strategies

    PubMed Central

    Lewis, M Jane; Ling, Pamela M

    2015-01-01

    Background As limitations on traditional marketing tactics and scrutiny by tobacco control have increased, the tobacco industry has benefited from direct mail marketing which transmits marketing messages directly to carefully targeted consumers utilising extensive custom consumer databases. However, research in these areas has been limited. This is the first study to examine the development, purposes and extent of direct mail and customer databases. Methods We examined direct mail and database marketing by RJ Reynolds and Philip Morris utilising internal tobacco industry documents from the Legacy Tobacco Document Library employing standard document research techniques. Results Direct mail marketing utilising industry databases began in the 1970s and grew from the need for a promotional strategy to deal with declining smoking rates, growing numbers of products and a cluttered media landscape. Both RJ Reynolds and Philip Morris started with existing commercial consumer mailing lists, but subsequently decided to build their own databases of smokers’ names, addresses, brand preferences, purchase patterns, interests and activities. By the mid-1990s both RJ Reynolds and Philip Morris databases contained at least 30 million smokers’ names each. These companies valued direct mail/database marketing’s flexibility, efficiency and unique ability to deliver specific messages to particular groups as well as direct mail’s limited visibility to tobacco control, public health and regulators. Conclusions Database marketing is an important and increasingly sophisticated tobacco marketing strategy. Additional research is needed on the prevalence of receipt and exposure to direct mail items and their influence on receivers’ perceptions and smoking behaviours. PMID:26243810

  3. Extension modules for storage, visualization and querying of genomic, genetic and breeding data in Tripal databases

    PubMed Central

    Lee, Taein; Cheng, Chun-Huai; Ficklin, Stephen; Yu, Jing; Humann, Jodi; Main, Dorrie

    2017-01-01

    Abstract Tripal is an open-source database platform primarily used for development of genomic, genetic and breeding databases. We report here on the release of the Chado Loader, Chado Data Display and Chado Search modules to extend the functionality of the core Tripal modules. These new extension modules provide additional tools for (1) data loading, (2) customized visualization and (3) advanced search functions for supported data types such as organism, marker, QTL/Mendelian Trait Loci, germplasm, map, project, phenotype, genotype and their respective metadata. The Chado Loader module provides data collection templates in Excel with defined metadata, and data loaders with front-end forms. The Chado Data Display module contains tools to visualize each data type and its metadata, which can be used as is or customized as desired. The Chado Search module provides search and download functionality for the supported data types. Also included are tools to visualize maps and species summaries. The use of materialized views in the Chado Search module enables better performance as well as flexibility of data modeling in Chado, allowing existing Tripal databases with different metadata types to utilize the module. These Tripal extension modules are implemented in the Genome Database for Rosaceae (rosaceae.org), CottonGen (cottongen.org), the Citrus Genome Database (citrusgenomedb.org), the Genome Database for Vaccinium (vaccinium.org) and the Cool Season Food Legume Database (coolseasonfoodlegume.org). Database URL: https://www.citrusgenomedb.org/, https://www.coolseasonfoodlegume.org/, https://www.cottongen.org/, https://www.rosaceae.org/, https://www.vaccinium.org/

  4. A community effort to construct a gravity database for the United States and an associated Web portal

    USGS Publications Warehouse

    Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.

    2006-01-01

    Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database that will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.

  5. Dyadic Flexibility in Early Parent-Child Interactions: Relations with Maternal Depressive Symptoms and Child Negativity and Behaviour Problems

    ERIC Educational Resources Information Center

    Lunkenheimer, Erika S.; Albrecht, Erin C.; Kemp, Christine J.

    2013-01-01

    Lower levels of parent-child affective flexibility indicate risk for children's problem outcomes. This short-term longitudinal study examined whether maternal depressive symptoms were related to lower levels of dyadic affective flexibility and positive affective content in mother-child problem-solving interactions at age 3.5 years…

  6. SuperNatural: a searchable database of available natural compounds

    PubMed Central

    Dunkel, Mathias; Fullbeck, Melanie; Neumann, Stefanie; Preissner, Robert

    2006-01-01

    Although tremendous effort has been put into synthetic libraries, most drugs on the market are still natural compounds or derivatives thereof. There are encyclopaedias of natural compounds, but the availability of these compounds is often unclear and catalogues from numerous suppliers have to be checked. To overcome these problems we have compiled a database of ∼50 000 natural compounds from different suppliers. To enable efficient identification of the desired compounds, we have implemented substructure searches with typical templates. Starting points for in silico screenings are about 2500 well-known and classified natural compounds from a compendium that we have added. Possible medical applications can be ascertained via automatic searches for similar drugs in a free conformational drug database containing WHO indications. Furthermore, we have computed about three million conformers, which are deployed to account for the flexibility of the compounds when the 3D superposition algorithm that we have developed is used. The SuperNatural Database is publicly available at . Viewing requires the free Chime plug-in from MDL (Chime) or the Java 2 Runtime Environment (MView), which is also necessary for using the Marvin application for chemical drawing. PMID:16381957

  7. E-health and healthcare enterprise information system leveraging service-oriented architecture.

    PubMed

    Hsieh, Sung-Huai; Hsieh, Sheau-Ling; Cheng, Po-Hsun; Lai, Feipei

    2012-04-01

    To present the successful experiences of an integrated, collaborative, distributed, large-scale enterprise healthcare information system over a wired and wireless infrastructure in National Taiwan University Hospital (NTUH). In order to smoothly and sequentially transfer from the complex relations among the old (legacy) systems to the new-generation enterprise healthcare information system, we adopted the multitier framework based on service-oriented architecture to integrate the heterogeneous systems as well as to interoperate among many other components and multiple databases. We also present mechanisms of a logical layer reusability approach and data (message) exchange flow via Health Level 7 (HL7) middleware, DICOM standard, and the Integrating the Healthcare Enterprise workflow. The architecture and protocols of the NTUH enterprise healthcare information system, especially in the Inpatient Information System (IIS), are discussed in detail. The NTUH Inpatient Healthcare Information System is designed and deployed on service-oriented architecture middleware frameworks. The mechanisms of integration as well as interoperability among the components and the multiple databases apply the HL7 standards for data exchanges, which are embedded in XML formats, and Microsoft .NET Web services to integrate heterogeneous platforms. The preliminary performance of the current operation IIS is evaluated and analyzed to verify the efficiency and effectiveness of the designed architecture; it shows reliability and robustness in the highly demanding traffic environment of NTUH. The newly developed NTUH IIS provides an open and flexible environment not only to share medical information easily among other branch hospitals, but also to reduce the cost of maintenance. The HL7 message standard is widely adopted to cover all data exchanges in the system. All services are independent modules that enable the system to be deployed and configured to the highest degree of flexibility. 
Furthermore, we can conclude that the multitier Inpatient Healthcare Information System has been designed successfully and in a collaborative manner, based on the index of performance evaluations, central processing unit, and memory utilizations.
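
    The HL7-over-XML data exchange described above can be sketched as follows; the segment and field names follow general HL7 v2 XML conventions, but the NTUH system's actual message profiles are not given in the abstract, so treat this as a hypothetical fragment:

    ```python
    import xml.etree.ElementTree as ET

    # Build a minimal HL7-v2-style PID (patient identification) segment
    # in XML, of the kind exchanged between services via middleware.
    def build_pid_segment(patient_id, family, given):
        pid = ET.Element("PID")
        ET.SubElement(ET.SubElement(pid, "PID.3"), "CX.1").text = patient_id
        name = ET.SubElement(pid, "PID.5")
        ET.SubElement(name, "XPN.1").text = family
        ET.SubElement(name, "XPN.2").text = given
        return pid

    segment = build_pid_segment("1234567", "Chen", "Mei-Ling")
    xml_bytes = ET.tostring(segment, encoding="utf-8")
    ```

    Embedding the segment in XML, rather than the classic pipe-delimited encoding, is what lets platform-neutral Web services parse and validate it with ordinary XML tooling.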

  8. Earth System Model Development and Analysis using FRE-Curator and Live Access Servers: On-demand analysis of climate model output with data provenance.

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.

    2016-12-01

    There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities is only partially the scientific content; much effort is devoted to answering the questions "where is the data?" and "how do I get to it?". "FRE-Curator" is the name of a database-centric paradigm used at NOAA GFDL to ingest information about model runs into an RDBMS (the Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses Live Access Servers, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. 
Model output saved in GFDL's tape archive, the size of the database and experiments, continuous model development initiatives with more dynamic configurations add complexity and challenges in providing an on-demand visualization experience to our GFDL users.

  9. CycADS: an annotation database system to ease the development and update of BioCyc databases

    PubMed Central

    Vellozo, Augusto F.; Véron, Amélie S.; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E.; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano

    2011-01-01

    In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system that we called the Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including, for example, KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum), whose genome was recently sequenced. The AcypiCyc database webpage also includes, for comparative analyses, two other metabolic reconstruction BioCyc databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Thanks to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited for metabolic gene annotation and network reconstruction in newly sequenced genomes. Because of the uniform annotation used for metabolic network reconstruction, CycADS is particularly useful for comparative analysis of the metabolism of different organisms. 
Database URL: http://www.cycadsys.org PMID:21474551

  10. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database), each at three different sizes, were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar appropriate results available in the literature have also been considered. Relational and non-relational NoSQL database systems both show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
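
    The slope comparison underlying these findings can be sketched numerically: fit response time against database size for each back end and compare the fitted linear slopes. The figures below are illustrative, not the paper's measurements:

    ```python
    # Least-squares slope of response time (s) vs. database size (records).
    def linear_slope(sizes, times):
        n = len(sizes)
        mean_x = sum(sizes) / n
        mean_y = sum(times) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, times))
        den = sum((x - mean_x) ** 2 for x in sizes)
        return num / den

    sizes = [10_000, 100_000, 1_000_000]   # EHR extracts stored (illustrative)
    relational = [0.8, 7.9, 81.0]          # both grow ~linearly, but the
    document_nosql = [0.3, 2.8, 27.5]      # relational slope is far steeper

    steeper = linear_slope(sizes, relational) > linear_slope(sizes, document_nosql)
    ```

    Under this kind of fit, a back end whose slope is several times smaller wins at the extreme sizes typical of secondary-use research databases, which matches the paper's conclusion.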

  11. Tendon length and joint flexibility are related to running economy.

    PubMed

    Hunter, Gary R; Katsoulis, Konstantina; McCarthy, John P; Ogard, William K; Bamman, Marcas M; Wood, David S; Den Hollander, Jan A; Blaudeau, Tamilane E; Newcomer, Bradley R

    2011-08-01

The purpose of this study was to determine whether quadriceps/patella and Achilles tendon length and flexibility of the knee extensors and plantar flexors are related to walking and running economy. Subjects were twenty-one male distance runners. Quadriceps/patella and Achilles tendon lengths were measured by magnetic resonance imaging; body composition was measured by DXA; oxygen uptake at rest while seated, walking (3 mph), and running (6 and 7 mph) was measured by indirect calorimetry; knee and ankle joint flexibility was measured by goniometry; and leg lengths were measured by anthropometry while seated. Correlations were used to identify relationships between variables of interest. Net VO2 (exercise VO2 - rest VO2) for walking (NVOWK) and running at 6 and 7 mph (NVO6 and NVO7, respectively) was significantly related to Achilles tendon length (r varying from -0.40 to -0.51, all P < 0.04). Achilles tendon cross section was not related to walking or running economy. Quadriceps/patella tendon length was significantly related to NVO7 (r = -0.43, P = 0.03) and approached significance for NVO6 (r = -0.36, P = 0.06). Flexibility of the plantar flexors was related to NVO7 (r = +0.38, P = 0.05). Multiple regression showed that Achilles tendon length was related to NVO6 and NVO7 (partial r varying from -0.53 to -0.64, all P < 0.02) independent of lower leg length, upper leg length, quadriceps/patella tendon length, knee extension flexibility, and plantarflexion flexibility. These data support the premise that longer lower limb tendons (especially the Achilles tendon) and less flexible lower limb joints are associated with improved running economy.

  12. Current Issues in Flexibility Fitness.

    ERIC Educational Resources Information Center

    Knudson, Duane V.; Magnusson, Peter; McHugh, Malachy

    2000-01-01

    Physical activity is extremely important in maintaining good health. Activity is not possible without a certain amount of flexibility. This report discusses issues related to flexibility fitness. Flexibility is a property of the musculoskeletal system that determines the range of motion achievable without injury to the joints. Static flexibility…

  13. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    ERIC Educational Resources Information Center

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  14. NALDB: nucleic acid ligand database for small molecules targeting nucleic acid

    PubMed Central

    Kumar Mishra, Subodh; Kumar, Amit

    2016-01-01

Nucleic acid ligand database (NALDB) is a unique database that provides detailed information about the experimental data of small molecules that were reported to target several types of nucleic acid structures. NALDB is the first ligand database that contains ligand information for all types of nucleic acids. NALDB contains more than 3500 ligand entries with detailed pharmacokinetic and pharmacodynamic information such as target name, target sequence, ligand 2D/3D structure, SMILES, molecular formula, molecular weight, net formal charge, AlogP, number of rings, number of hydrogen bond donors and acceptors, and potential energy, along with their Ki, Kd, and IC50 values. All these details on a single platform should aid the development and improvement of novel ligands targeting nucleic acids, which could serve as potential targets in different diseases including cancers and neurological disorders. With a maximum of 255 conformers for each ligand entry, our database is a multi-conformer database and can facilitate the virtual screening process. NALDB provides powerful web-based search tools that make database searching efficient and simple, with options for text as well as structure queries. NALDB also provides a multi-dimensional advanced search tool that can screen the database molecules on the basis of ligand molecular properties provided by database users. A 3D structure visualization tool has also been included for 3D structure representation of ligands. NALDB offers inclusive pharmacological information and a structurally flexible set of small molecules with their three-dimensional conformers that can accelerate virtual screening and other modeling processes and eventually complement nucleic acid-based drug discovery research. NALDB is routinely updated and freely available at bsbe.iiti.ac.in/bsbe/naldb/HOME.php. Database URL: http://bsbe.iiti.ac.in/bsbe/naldb/HOME.php PMID:26896846
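The multi-dimensional property search the abstract describes amounts to range filtering over ligand records. A minimal sketch, assuming hypothetical field names and thresholds (this is not NALDB's actual schema or API):

```python
# Hypothetical multi-dimensional property screen over ligand records.
# Field names (mol_weight, alogp, hbd) and values are illustrative only.
ligands = [
    {"name": "L1", "mol_weight": 320.4, "alogp": 2.1, "hbd": 2},
    {"name": "L2", "mol_weight": 612.8, "alogp": 5.9, "hbd": 6},
    {"name": "L3", "mol_weight": 450.1, "alogp": 4.2, "hbd": 4},
]

def screen(records, **ranges):
    """Keep records whose properties fall inside every (lo, hi) range."""
    return [rec["name"] for rec in records
            if all(lo <= rec[key] <= hi for key, (lo, hi) in ranges.items())]

hits = screen(ligands, mol_weight=(0, 500), alogp=(0, 5), hbd=(0, 5))
print(hits)  # L1 and L3 pass; L2 exceeds every threshold
```

Each keyword argument adds one dimension to the filter, which is how a web form with per-property min/max fields typically maps onto the query layer.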

  15. Volcanoes of the World: Reconfiguring a scientific database to meet new goals and expectations

    NASA Astrophysics Data System (ADS)

    Venzke, Edward; Andrews, Ben; Cottrell, Elizabeth

    2015-04-01

The Smithsonian Global Volcanism Program's (GVP) database of Holocene volcanoes and eruptions, Volcanoes of the World (VOTW), originated in 1971, and was largely populated with content from the IAVCEI Catalog of Active Volcanoes and some independent datasets. Volcanic activity reported by Smithsonian's Bulletin of the Global Volcanism Network and USGS/SI Weekly Activity Reports (and their predecessors), published research, and other varied sources has expanded the database significantly over the years. Three editions of the VOTW were published in book form, creating a catalog with new ways to display data that included regional directories, a gazetteer, and a 10,000-year chronology of eruptions. The widespread dissemination of the data in electronic media since the first GVP website in 1995 has created new challenges and opportunities for this unique collection of information. To better meet current and future goals and expectations, we have recently transitioned VOTW into a SQL Server database. This process included significant schema changes to the previous relational database, data auditing, and content review. We replaced a disparate, confusing, and changeable volcano numbering system with unique and permanent volcano numbers. We reconfigured the structures for recording eruption data to allow greater flexibility in describing the complexity of observed activity, adding the ability to distinguish episodes within eruptions (in time and space) and events (including dates) that take place during an episode. We have added a reference link field in multiple tables to enable attribution of sources at finer levels of detail. 
We now store and connect synonyms and feature names in a more consistent manner, which will allow for morphological features to be given unique numbers and linked to specific eruptions or samples; if the designated overall volcano name is also a morphological feature, it is then also listed and described as that feature. One especially significant audit involved re-evaluating the categories of evidence used to include a volcano in the Holocene list, and reviewing in detail the entries in low-certainty categories. Concurrently, we developed a new data entry system that may in the future allow trusted users outside of Smithsonian to input data into VOTW. A redesigned website now provides new search tools and data download options. We are collaborating with organizations that manage volcano and eruption databases, physical sample databases, and geochemical databases to allow real-time connections and complex queries. VOTW serves the volcanological community by providing a clear and consistent core database of distinctly identified volcanoes and eruptions to advance goals in research, civil defense, and public outreach.
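The volcano, eruption, episode, event hierarchy described above can be sketched as a small relational schema. This is an assumed illustration, not the actual VOTW schema; the volcano number, names, and dates below are fictional placeholders.

```python
# Illustrative relational schema for a volcano > eruption > episode > event
# hierarchy with permanent volcano numbers. Not the actual VOTW schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE volcano (
    volcano_number INTEGER PRIMARY KEY,  -- unique and permanent
    name TEXT NOT NULL
);
CREATE TABLE eruption (
    eruption_id INTEGER PRIMARY KEY,
    volcano_number INTEGER REFERENCES volcano(volcano_number)
);
CREATE TABLE episode (
    episode_id INTEGER PRIMARY KEY,
    eruption_id INTEGER REFERENCES eruption(eruption_id),
    vent TEXT                            -- episodes distinguished in space
);
CREATE TABLE event (
    event_id INTEGER PRIMARY KEY,
    episode_id INTEGER REFERENCES episode(episode_id),
    event_type TEXT,
    event_date TEXT                      -- events carry their own dates
);
""")
db.execute("INSERT INTO volcano VALUES (123456, 'Sample Volcano')")
db.execute("INSERT INTO eruption VALUES (1, 123456)")
db.execute("INSERT INTO episode VALUES (1, 1, 'summit crater')")
db.execute("INSERT INTO event VALUES (1, 1, 'ash plume', '2015-04-01')")

# Join down the hierarchy: all dated events for one volcano number.
rows = db.execute("""
    SELECT v.name, ev.event_type, ev.event_date
    FROM volcano v
    JOIN eruption er ON er.volcano_number = v.volcano_number
    JOIN episode ep ON ep.eruption_id = er.eruption_id
    JOIN event ev ON ev.episode_id = ep.episode_id
    WHERE v.volcano_number = 123456
""").fetchall()
print(rows)
```

Keying everything off a permanent `volcano_number` is what makes cross-database links (to sample or geochemical databases) stable even when volcano names or classifications change.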

  16. Parsing trait and state effects of depression severity on neurocognition: Evidence from a 26-year longitudinal study.

    PubMed

    Sarapas, Casey; Shankman, Stewart A; Harrow, Martin; Goldberg, Joseph F

    2012-11-01

    Cognitive dysfunction in mood disorders falls along a continuum, such that more severe current depression is associated with greater cognitive impairment. It is not clear whether this association reflects transient state effects of current symptoms on cognitive performance, or persistent, trait-like differences in cognition that are related to overall disorder severity. We addressed this question in 42 unipolar and 47 bipolar participants drawn from a 26-year longitudinal study of psychopathology, using measures of attention/psychomotor processing speed, cognitive flexibility, verbal fluency, and verbal memory. We assessed (a) the extent to which current symptom severity and past average disorder severity predicted unique variance in cognitive performance; (b) whether cognitive performance covaried with within-individual changes in symptom severity; and (c) the stability of neurocognitive measures over six years. We also tested for differences among unipolar and bipolar groups and published norms. Past average depression severity predicted performance on attention/psychomotor processing speed in both groups, and in cognitive flexibility among unipolar participants, even after controlling for current symptom severity, which did not independently predict cognition. Within-participant state changes in depressive symptoms did not predict change in any cognitive domain. All domains were stable over the course of six years. Both groups showed generalized impairment relative to published norms, and bipolar participants performed more poorly than unipolar participants on attention/psychomotor processing speed. The results suggest a stable relationship between mood disorder severity and cognitive deficits. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  17. IAIMS Architecture

    PubMed Central

    Hripcsak, George

    1997-01-01

An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884

  18. Multidisciplinary analysis of actively controlled large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Cooper, Paul A.; Young, John W.; Sutter, Thomas R.

    1986-01-01

The Control of Flexible Structures (COFS) program has supported the development of an analysis capability at the Langley Research Center called the Integrated Multidisciplinary Analysis Tool (IMAT), which provides an efficient data storage and transfer capability among commercial computer codes to aid in the dynamic analysis of actively controlled structures. IMAT is a system of computer programs which transfers Computer-Aided-Design (CAD) configurations, structural finite element models, material property and stress information, structural and rigid-body dynamic model information, and linear system matrices for control law formulation among various commercial applications programs through a common database. Although general in its formulation, IMAT was developed specifically to aid in the evaluation of actively controlled flexible structures. A description of the IMAT system and results of an application of the system are given.

  19. Competing while cooperating with the same others: The consequences of conflicting demands in co-opetition.

    PubMed

    Landkammer, Florian; Sassenberg, Kai

    2016-12-01

    Numerous studies comparing the effects of competition and cooperation demonstrated that competition is detrimental on the social level. However, instead of purely competing, many social contexts require competing while cooperating with the same social target. The current work examined the consequences of such "co-opetition" situations between individuals. Because having to compete and to cooperate with the same social target constitutes conflicting demands, co-opetition should lead to more flexibility, such as (a) less rigid transfer effects of competitive behavior and (b) less rigidity/more flexibility in general. Supporting these predictions, Studies 1a and 1b demonstrated that co-opetition did not elicit competitive behavior in a subsequent task (here: enhanced deceiving of uninvolved others). Study 2 showed that adding conflicting demands (independent of social interdependence) to competition likewise elicits less competitive transfer than competition without such conflicting demands. Beyond that, co-opetition reduced rigid response tendencies during a classification task in Studies 3a and 3b and enhanced flexibility during brainstorming in Study 4, compared with other forms of interdependence. Together, these results suggest that co-opetition leads to more flexible behavior when individuals have to reconcile conflicting demands. Implications for research on social priming, interdependence and competition in everyday life are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. The microcomputer scientific software series 9: user's guide to Geo-CLM: geostatistical interpolation of the historical climatic record in the Lake States.

    Treesearch

    Margaret R. Holdaway

    1994-01-01

Describes Geo-CLM, a computer application (for Mac or DOS) whose primary aim is to perform multiple kriging runs to interpolate the historic climatic record at research plots in the Lake States. It is an exploration and analysis tool. Additional capabilities include climatic databases, a flexible test mode, cross validation, lat/long conversion, English/metric units,...

  1. [The Effectiveness of a Strategy for the Flexible Management of Nursing Human Resources: A Pilot Study].

    PubMed

    Huang, Chung-I; Lu, Meei-Shiow

    2017-12-01

The flexibility of a hospital's nursing-related human resource management policies affects the working willingness and retention of nurses. To explore the effectiveness of a flexible nursing-related human resource management strategy, this quasi-experimental research used a one-group pretest-posttest design. Supervisors at participating hospitals attended the "Application of Flexible Nursing Human Resources Management Strategies" workshop, which introduced the related measures and assessed nurses' pretest satisfaction. After these measures were implemented at the participating hospitals, implementation-related problems were investigated and appropriate consultation was provided. The posttest was implemented after the end of the project. Data were collected from nurses at the participating hospitals who had served in their present hospital for more than three months. The participating hospitals were all nationally certified healthcare providers, including 13 medical centers, 17 regional hospitals, and 3 district hospitals. A total of 2,810 nurses took the pretest and 2,437 took the posttest. The research instruments included the "Satisfaction with working conditions and system flexibility" scale and the "Flexible nursing human resource management strategies". The effectiveness of the implemented strategy was assessed using independent-samples t tests and analysis of variance. The results show that mean pretest satisfaction (on a 5-point Likert scale) was 3.47 (SD = 0.65) and posttest satisfaction was 3.52 (SD = 0.65), with statistically significant differences in task, numerical, divisional, and leading flexibility. Given this implementation effectiveness, the authors strongly suggest that all of the participating hospitals continue to apply this strategic model to move toward a more flexible nursing system and workplace.

  2. Identifying work-related motor vehicle crashes in multiple databases.

    PubMed

    Thomas, Andrea M; Thygerson, Steven M; Merrill, Ray M; Cook, Lawrence J

    2012-01-01

To compare and estimate the magnitude of work-related motor vehicle crashes in Utah using 2 probabilistically linked statewide databases. Data from 2006 and 2007 motor vehicle crash and hospital databases were joined through probabilistic linkage. Summary statistics and capture-recapture were used to describe occupants injured in work-related motor vehicle crashes and estimate the size of this population. There were 1597 occupants in the motor vehicle crash database and 1673 patients in the hospital database identified as being in a work-related motor vehicle crash. We identified 1443 occupants with at least one record from either the motor vehicle crash or hospital database indicating work-relatedness that linked to any record in the opposing database. We found that 38.7 percent of occupants injured in work-related motor vehicle crashes identified in the motor vehicle crash database did not have a primary payer code of workers' compensation in the hospital database, and 40.0 percent of patients injured in work-related motor vehicle crashes identified in the hospital database did not meet our definition of a work-related motor vehicle crash in the motor vehicle crash database. Depending on how occupants injured in work-related motor vehicle crashes are identified, we estimate the population to be between 1852 and 8492 in Utah for the years 2006 and 2007. Research on single databases may lead to biased interpretations of work-related motor vehicle crashes. Combining 2 population-based databases may still result in an underestimate of the magnitude of work-related motor vehicle crashes. Improved coding of work-related incidents is needed in current databases.
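The two-source capture-recapture idea behind these estimates can be illustrated with the Lincoln-Petersen estimator, using the counts reported in the abstract (1,597 crash-database occupants, 1,673 hospital patients, 1,443 linked). This is a standard estimator sketch, not necessarily the exact method the authors used.

```python
# Two-source capture-recapture (Lincoln-Petersen) population estimate.
def lincoln_petersen(n1, n2, m):
    """Estimate total population size from two overlapping samples.

    n1, n2: counts identified in each source; m: count found in both.
    """
    if m == 0:
        raise ValueError("need at least one record linked in both sources")
    return n1 * n2 / m

estimate = lincoln_petersen(1597, 1673, 1443)
print(round(estimate))  # ~1852, the lower bound reported in the abstract
```

Note that the estimator assumes independent sources with homogeneous capture probability; violations of either assumption (likely with administrative databases) are one reason the abstract reports such a wide range.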

  3. The Flexibility Scale: Development and Preliminary Validation of a Cognitive Flexibility Measure in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Strang, John F.; Anthony, Laura G.; Yerys, Benjamin E.; Hardy, Kristina K.; Wallace, Gregory L.; Armour, Anna C.; Dudley, Katerina; Kenworthy, Lauren

    2017-01-01

    Flexibility is a key component of executive function, and is related to everyday functioning and adult outcomes. However, existing informant reports do not densely sample cognitive aspects of flexibility; the Flexibility Scale (FS) was developed to address this gap. This study investigates the validity of the FS in 221 youth with ASD and 57…

  4. Drug2Gene: an exhaustive resource to explore effectively the drug-target relation network.

    PubMed

    Roider, Helge G; Pavlova, Nadia; Kirov, Ivaylo; Slavov, Stoyan; Slavov, Todor; Uzunov, Zlatyo; Weiss, Bertram

    2014-03-11

Information about drug-target relations is at the heart of drug discovery. There are now dozens of databases providing drug-target interaction data with varying scope and focus. Because of this varying scope and the large chemical space, the overlap of the different data sets is surprisingly small. As searching through these sources manually is cumbersome, time-consuming and error-prone, integrating all the data is highly desirable. Despite a few attempts, integration has been hampered by the diversity of descriptions of compounds, and by the fact that the reported activity values, coming from different data sets, are not always directly comparable due to usage of different metrics or data formats. We have built Drug2Gene, a knowledge base which combines the compound/drug-gene/protein information from 19 publicly available databases. A key feature is our rigorous unification and standardization process, which makes the data truly comparable on a large scale, allowing for the first time effective data mining in such a large knowledge corpus. As of version 3.2, Drug2Gene contains 4,372,290 unified relations between compounds and their targets, most of which include reported bioactivity data. We extend this set with putative (i.e. homology-inferred) relations where sufficient sequence homology between proteins suggests they may bind to similar compounds. Drug2Gene provides powerful search functionalities, very flexible export procedures, and a user-friendly web interface. Drug2Gene v3.2 has become a mature and comprehensive knowledge base providing unified, standardized drug-target related information gathered from publicly available data sources. It can be used to integrate proprietary data sets with publicly available data sets. Its main goal is to be a 'one-stop shop' to identify tool compounds targeting a given gene product or for finding all known targets of a drug. 
Drug2Gene with its integrated data set of public compound-target relations is freely accessible without restrictions at http://www.drug2gene.com.

  5. The EPA Comptox Chemistry Dashboard: A Web-Based Data ...

    EPA Pesticide Factsheets

The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data-driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts, the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences, including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data, but recent developments have focused on a new software architecture that assembles the resources into a single platform. A new web application, the CompTox Chemistry Dashboard, provides access to data associated with ~720,000 chemical substances. These data include experimental and predicted physicochemical property data, bioassay screening data associated with the ToxCast program, product and functional use information, and a myriad of related data of value to environmental scientists. The dashboard provides chemical-based searching based on chemical names, synonyms and CAS Registry Numbers. Flexible search capabilities allow for chemical identification…

  6. The Royal Society of Chemistry and the delivery of chemistry data repositories for the community.

    PubMed

    Williams, Antony; Tkachenko, Valery

    2014-10-01

Since 2009 the Royal Society of Chemistry (RSC) has been delivering access to chemistry data and cheminformatics tools via the ChemSpider database and has garnered a significant community following in terms of usage and contribution to the platform. ChemSpider has focused only on those chemical entities that can be represented as molecular connection tables or, to be more specific, for which an InChI can be generated from the input structure. As a structure-centric hub, ChemSpider is built around the molecular structure, with other data and links being associated with this structure. As a result the platform has been limited in the types of data that can be managed and the flexibility of its searches, and it is constrained by the data model. New technologies and approaches, specifically a shift from relational to NoSQL databases and the growing importance of the semantic web, have motivated RSC to rearchitect and create a more generic data repository utilizing these new technologies. This article will provide an overview of our activities in delivering data sharing platforms for the chemistry community, including the development of the new data repository expanding into more extensive domains of chemistry data.

  7. The Royal Society of Chemistry and the delivery of chemistry data repositories for the community

    NASA Astrophysics Data System (ADS)

    Williams, Antony; Tkachenko, Valery

    2014-10-01

Since 2009 the Royal Society of Chemistry (RSC) has been delivering access to chemistry data and cheminformatics tools via the ChemSpider database and has garnered a significant community following in terms of usage and contribution to the platform. ChemSpider has focused only on those chemical entities that can be represented as molecular connection tables or, to be more specific, for which an InChI can be generated from the input structure. As a structure-centric hub, ChemSpider is built around the molecular structure, with other data and links being associated with this structure. As a result the platform has been limited in the types of data that can be managed and the flexibility of its searches, and it is constrained by the data model. New technologies and approaches, specifically a shift from relational to NoSQL databases and the growing importance of the semantic web, have motivated RSC to rearchitect and create a more generic data repository utilizing these new technologies. This article will provide an overview of our activities in delivering data sharing platforms for the chemistry community, including the development of the new data repository expanding into more extensive domains of chemistry data.

  8. Self-discrepancies in work-related upper extremity pain: relation to emotions and flexible-goal adjustment.

    PubMed

    Goossens, Mariëlle E; Kindermans, Hanne P; Morley, Stephen J; Roelofs, Jeffrey; Verbunt, Jeanine; Vlaeyen, Johan W

    2010-08-01

Recurrent pain not only has an impact on disability, but in the long term it may become a threat to one's sense of self. This paper presents a cross-sectional study of patients with work-related upper extremity pain and focuses on: (1) the role of self-discrepancies in this group; (2) the associations between self-discrepancies, pain, and emotions; and (3) the interaction between self-discrepancies and flexible-goal adjustment. Eighty-nine participants completed standardized self-report measures of pain intensity, pain duration, anxiety, depression and flexible-goal adjustment. A Selves Questionnaire was used to generate self-discrepancies. A series of hierarchical regression analyses showed relationships between actual-ought other, actual-ought self, and actual-feared self-discrepancies and depression, as well as a significant association between actual-ought other self-discrepancy and anxiety. Furthermore, significant interactions were found between actual-ought other self-discrepancies and flexibility, indicating that less flexible participants with large self-discrepancies score higher on depression. This study showed that self-discrepancies are related to negative emotions and that flexible-goal adjustment served as a moderator in this relationship. The view of self in pain and flexible-goal adjustment should be considered as important variables in the process of chronic pain. Copyright (c) 2009 European Federation of International Association for the Study of Pain Chapters. Published by Elsevier Ltd. All rights reserved.

  9. Performance assessment of EMR systems based on post-relational database.

    PubMed

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data, with fast response times, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results showed that the post-relational database Caché works faster than the relational database Oracle and performs excellently in the real-time EMR system.
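Response-time comparisons like the one described are usually built on a small timing harness that averages repeated query executions. A generic sketch, using SQLite as a stand-in back end rather than the Caché/Oracle setup from the study:

```python
# Generic query-timing harness of the kind used to compare database back
# ends. SQLite is a stand-in here; the real study used Caché and Oracle.
import sqlite3
import time

def time_query(conn, sql, params=(), repeats=100):
    """Return mean wall-clock seconds per execution of one query."""
    start = time.perf_counter()
    for _ in range(repeats):
        conn.execute(sql, params).fetchall()
    return (time.perf_counter() - start) / repeats

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE record (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO record VALUES (?, ?)",
                 [(i, f"payload-{i}") for i in range(10_000)])

mean_s = time_query(conn, "SELECT payload FROM record WHERE id = ?", (4242,))
print(f"mean query time: {mean_s * 1e6:.1f} microseconds")
```

Averaging over many repetitions with `time.perf_counter` smooths out cache and scheduling noise; a fair cross-database comparison would additionally run each back end on identical data volumes and hardware, as the study's three database sizes suggest.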

  10. The Role of Data Archives in Synoptic Solar Physics

    NASA Astrophysics Data System (ADS)

    Reardon, Kevin

    The detailed study of solar cycle variations requires analysis of recorded datasets spanning many years of observations, that is, a data archive. The use of digital data, combined with powerful database server software, gives such archives new capabilities to provide, quickly and flexibly, selected pieces of information to scientists. Use of standardized protocols will allow multiple databases, independently maintained, to be seamlessly joined, allowing complex searches spanning multiple archives. These data archives also benefit from being developed in parallel with the telescope itself, which helps to assure data integrity and to provide close integration between the telescope and archive. Development of archives that can guarantee long-term data availability and strong compatibility with other projects makes solar-cycle studies easier to plan and realize.

  11. Telecommunications issues of intelligent database management for ground processing systems in the EOS era

    NASA Technical Reports Server (NTRS)

    Touch, Joseph D.

    1994-01-01

Future NASA earth science missions, including the Earth Observing System (EOS), will be generating vast amounts of data that must be processed and stored at various locations around the world. Here we present a stepwise refinement of the intelligent database management (IDM) architecture of the distributed active archive center (DAAC, one of seven regionally located EOSDIS archive sites), to showcase the telecommunications issues involved. We develop this architecture into a general overall design. We show that the current evolution of protocols is sufficient to support IDM at Gbps rates over large distances. We also show that the network design can accommodate a flexible data ingestion storage pipeline and a user extraction and visualization engine, without interference between the two.

  12. Open source hardware and software platform for robotics and artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system, with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of open source hardware and software created a flexible and expandable platform for further development in both the software and hardware areas, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware such as legged or humanoid robots.

  13. GIS based solid waste management information system for Nagpur, India.

    PubMed

    Vijay, Ritesh; Jain, Preeti; Sharma, N; Bhattacharyya, J K; Vaidya, A N; Sohony, R A

    2013-01-01

    Solid waste management is one of the major problems of today's world and needs to be addressed by proper utilization of technologies and the design of an effective, flexible and structured information system. Therefore, the objective of this paper was to design and develop a GIS-based solid waste management information system as a decision-making and planning tool for regulatory and municipal authorities. The system integrates geo-spatial features of the city with a database of existing solid waste management. The GIS-based information system provides modules for visualization, query interface, statistical analysis, report generation and database modification, as well as modules covering solid waste estimation, collection, transportation and disposal details. The information system is user-friendly, standalone and platform independent.

  14. Disordered Eating-Related Cognition and Psychological Flexibility as Predictors of Psychological Health among College Students

    ERIC Educational Resources Information Center

    Masuda, Akihiko; Price, Matthew; Anderson, Page L.; Wendell, Johanna W.

    2010-01-01

    The present cross-sectional study investigated the relation among disordered eating-related cognition, psychological flexibility, and poor psychological outcomes among a nonclinical college sample. As predicted, conviction of disordered eating-related cognitions was positively associated with general psychological ill-health and emotional distress…

  15. "TPSX: Thermal Protection System Expert and Material Property Database"

    NASA Technical Reports Server (NTRS)

    Squire, Thomas H.; Milos, Frank S.; Rasky, Daniel J. (Technical Monitor)

    1997-01-01

    The Thermal Protection Branch at NASA Ames Research Center has developed a computer program for storing, organizing, and accessing information about thermal protection materials. The program, called Thermal Protection Systems Expert and Material Property Database, or TPSX, is available for the Microsoft Windows operating system. An "on-line" version is also accessible on the World Wide Web. TPSX is designed to be a high-quality source for TPS material properties presented in a convenient, easily accessible form for use by engineers and researchers in the field of high-speed vehicle design. Data can be displayed and printed in several formats. An information window displays a brief description of the material with properties at standard pressure and temperature. A spreadsheet window displays complete, detailed property information. Properties which are a function of temperature and/or pressure can be displayed as graphs. In any display the data can be converted from English to SI units with the click of a button. Two material databases included with TPSX are: 1) materials used and/or developed by the Thermal Protection Branch at NASA Ames Research Center, and 2) a database compiled by NASA Johnson Space Center (JSC). The Ames database contains over 60 advanced TPS materials including flexible blankets, rigid ceramic tiles, and ultra-high temperature ceramics. The JSC database contains over 130 insulative and structural materials. The Ames database is periodically updated and expanded as required to include newly developed materials and material property refinements.

  16. The Web Based Monitoring Project at the CMS Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Perez, Juan Antonio; Badgett, William; Behrens, Ulf

    The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and how the system has been used from the beginning of data taking until now (Run 1 and Run 2).

  17. Application driven interface generation for EASIE. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kao, Ya-Chen

    1992-01-01

    The Environment for Application Software Integration and Execution (EASIE) provides a user interface and a set of utility programs which support the rapid integration and execution of analysis programs about a central relational database. EASIE provides users with two basic modes of execution. One of them is a menu-driven execution mode, called Application-Driven Execution (ADE), which provides sufficient guidance to review data, select a menu action item, and execute an application program. The other mode of execution, called Complete Control Execution (CCE), provides an extended executive interface which allows in-depth control of the design process. Currently, the EASIE system is based on alphanumeric techniques only. It is the purpose of this project to extend the flexibility of the EASIE system in the ADE mode by implementing it in a window system. Secondly, a set of utilities will be developed to assist the experienced engineer in the generation of an ADE application.

  18. Thriving on Chaos: The Development of a Surgical Information System

    PubMed Central

    Olund, Steven R.

    1988-01-01

    Hospitals present unique challenges to the computer industry, generating a greater quantity and variety of data than nearly any other enterprise. This is complicated by the fact that a hospital is not one homogeneous organization, but a bundle of semi-independent groups with unique data requirements. Therefore hospital information systems must be fast, flexible, reliable, easy to use and maintain, and cost-effective. The Surgical Information System at Rush Presbyterian-St. Luke's Medical Center, Chicago, is such a system. It uses a Sequent Balance 21000 multi-processor superminicomputer, running industry standard tools such as the Unix operating system, a 4th generation programming language (4GL), and Structured Query Language (SQL) relational database management software. This treatise illustrates a comprehensive yet generic approach which can be applied to almost any clinical situation where access to patient data is required by a variety of medical professionals.

  19. The web based monitoring project at the CMS experiment

    NASA Astrophysics Data System (ADS)

    Lopez-Perez, Juan Antonio; Badgett, William; Behrens, Ulf; Chakaberia, Irakli; Jo, Youngkwon; Maeshima, Kaori; Maruyama, Sho; Patrick, James; Rapsevicius, Valdas; Soha, Aron; Stankevicius, Mantas; Sulmanas, Balys; Toda, Sachiko; Wan, Zongru

    2017-10-01

    The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and describes how the system has been used from the beginning of data taking until now (Run 1 and Run 2).

  20. Identification and Analysis of Critical Gaps in Nuclear Fuel Cycle Codes Required by the SINEMA Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adrian Miron; Joshua Valentine; John Christenson

    2009-10-01

    The current state of the art in nuclear fuel cycle (NFC) modeling is an eclectic mixture of codes with various levels of applicability, flexibility, and availability. In support of advanced fuel cycle systems analyses, especially those by the Advanced Fuel Cycle Initiative (AFCI), the University of Cincinnati in collaboration with Idaho State University carried out a detailed review of the existing codes describing various aspects of the nuclear fuel cycle and identified the research and development needs required for a comprehensive model of the global nuclear energy infrastructure and the associated nuclear fuel cycles. Relevant information obtained on the NFC codes was compiled into a relational database that allows easy access to the various codes' properties. Additionally, the research analyzed the gaps in the NFC computer codes with respect to their potential integration into programs that perform comprehensive NFC analysis.

  1. Factors influencing community nursing roles and health service provision in rural areas: a review of literature.

    PubMed

    Barrett, Annette; Terry, Daniel R; Lê, Quynh; Hoang, Ha

    2016-02-01

    This review sought to better understand the issues and challenges experienced by community nurses working in rural areas and how these factors shape their role. Databases were searched to identify relevant studies, published between 1990 and 2015, that focussed on issues and challenges experienced by rural community nurses. Generic and grey literature relating to the subject was also searched. The search was systematically conducted multiple times to assure accuracy. A total of 14 articles met the inclusion criteria. This critical review identified common issues impacting community nursing and included role definition, organisational change, human resource, workplace and geographic challenges. Community nurses are flexible, autonomous, able to adapt care to the service delivery setting, and have a diversity of knowledge and skills. Considerably more research is essential to identify factors that impact rural community nursing practice. In addition, greater advocacy is required to develop the role.

  2. BioInt: an integrative biological object-oriented application framework and interpreter.

    PubMed

    Desai, Sanket; Burra, Prasad

    2015-01-01

    BioInt, a biological programming application framework and interpreter, is an attempt to equip the researchers with seamless integration, efficient extraction and effortless analysis of the data from various biological databases and algorithms. Based on the type of biological data, algorithms and related functionalities, a biology-specific framework was developed which has nine modules. The modules are a compilation of numerous reusable BioADTs. This software ecosystem containing more than 450 biological objects underneath the interpreter makes it flexible, integrative and comprehensive. Similar to Python, BioInt eliminates the compilation and linking steps cutting the time significantly. The researcher can write the scripts using available BioADTs (following C++ syntax) and execute them interactively or use as a command line application. It has features that enable automation, extension of the framework with new/external BioADTs/libraries and deployment of complex work flows.

  3. Medical record management systems: criticisms and new perspectives.

    PubMed

    Frénot, S; Laforest, F

    1999-06-01

    The first generation of computerized medical records stored the data as text, but these records did not bring any improvement in information manipulation. The use of a relational database management system (DBMS) has largely solved this problem as it allows for data requests by using SQL. However, this requires data structuring which is not very appropriate to medicine. Moreover, the use of templates and icon user interfaces has introduced a deviation from the paper-based record (still existing). The arrival of hypertext user interfaces has proven to be of interest to fill the gap between the paper-based medical record and its electronic version. We think that further improvement can be accomplished by using a fully document-based system. We present the architecture, advantages and disadvantages of classical DBMS-based and Web/DBMS-based solutions. We also present a document-based solution and explain its advantages, which include communication, security, flexibility and genericity.
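
    The tradeoff the authors describe, rigid data structuring in exchange for SQL-based data requests, can be sketched with a toy relational schema. This is a minimal illustration only; the table and column names are hypothetical and not taken from the paper:

    ```python
    import sqlite3

    # Toy schema illustrating the structuring a relational DBMS imposes
    # (table and column names are hypothetical, not from the paper).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("CREATE TABLE observation (patient_id INTEGER, code TEXT, value REAL)")
    conn.execute("INSERT INTO patient VALUES (1, 'Doe')")
    conn.executemany("INSERT INTO observation VALUES (?, ?, ?)",
                     [(1, "glucose", 5.0), (1, "glucose", 7.0)])

    # Once the data fits the schema, SQL makes aggregate requests one-liners.
    rows = conn.execute(
        "SELECT p.name, AVG(o.value) FROM patient p"
        " JOIN observation o ON o.patient_id = p.id"
        " WHERE o.code = 'glucose' GROUP BY p.id").fetchall()
    print(rows)  # [('Doe', 6.0)]
    ```

    The same rigidity is the paper's criticism: free-text clinical narrative does not decompose cleanly into such tables, which motivates the document-based alternative.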

  4. DTS: Building custom, intelligent schedulers

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1994-01-01

    DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.

  5. Web Based Monitoring in the CMS Experiment at CERN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badgett, William; Borrello, Laura; Chakaberia, Irakli

    2014-09-03

    The Compact Muon Solenoid (CMS) is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To this end the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user side. This paper describes the WBM system architecture and describes how the system was used during the first major data-taking run of the LHC.

  6. On the origin of fluorescence in bacteriophytochrome infrared fluorescent proteins

    PubMed Central

    Samma, Alex A.; Johnson, Chelsea K.; Song, Shuang; Alvarez, Samuel

    2010-01-01

    Tsien (Science, 2009, 324, 804-807) recently reported the creation of the first infrared fluorescent protein (IFP). It was engineered from a bacterial phytochrome by removing the PHY and histidine kinase-related domains, by optimizing the protein to prevent dimerization, and by limiting the biliverdin's conformational freedom, especially around its D ring. We have used database analyses and molecular dynamics simulations with freely rotating chromophoric dihedrals to model the dihedral freedom available to the biliverdin D ring in the excited state, and to show that the tetrapyrrole ligands in phytochromes are flexible and can adopt many conformations; however, their conformational space is limited and defined by the chemospatial characteristics of the protein cavity. Our simulations confirm that reduced accessibility to conformations geared toward excited-state proton transfer may be responsible for the fluorescence in IFP, just as Kennis (PNAS, 2010, 107, 9170-9175) has suggested for fluorescent bacteriophytochrome from Rhodopseudomonas palustris. PMID:21047084

  7. 'Am I covered?': an analysis of a national enquiry database on scope of practice.

    PubMed

    Brady, Anne-Marie; Fealy, Gerard; Casey, Mary; Hegarty, Josephine; Kennedy, Catriona; McNamara, Martin; O'Reilly, Pauline; Prizeman, Geraldine; Rohde, Daniela

    2015-10-01

    Analysis of a national database of enquiries to a professional body pertaining to the scope of nursing and midwifery practice. Against a backdrop of healthcare reform is a demand for flexibility in nursing and midwifery roles with unprecedented redefinition of role boundaries and/or expansion. Guidance from professional regulatory bodies is being sought around issues of concern that are arising in practice. Qualitative thematic analysis. The database of telephone enquiries (n = 9818) made by Registered Nurses and midwives to a national regulatory body (2001-2013) was subjected to a cleaning process and examined to detect those concerns that pertained to scope of practice. A total of 978 enquiries were subjected to thematic analysis. Enquiries were concerned with three main areas: medication management, changing and evolving scope of practice and professional role boundaries. The context was service developments, staff shortages and uncertainty about role expansion and professional accountability. Other concerns related to expectations around responsibility and accountability for other support staff. Efforts by employers to maximize the skill mix of their staff and optimally deploy staff to meet service needs and/or address gaps in service represented the primary service context from which many enquiries arose. The greatest concern for nurses arises around medication management but innovation in healthcare delivery and the demands of service are also creating challenges for nurses and midwives. Maintaining and developing competence is a concern among nurses and midwives particularly in an environment of limited resources and where re-deployment is common. © 2015 John Wiley & Sons Ltd.

  8. Time limits during visual foraging reveal flexible working memory templates.

    PubMed

    Kristjánsson, Tómas; Thornton, Ian M; Kristjánsson, Árni

    2018-06-01

    During difficult foraging tasks, humans rarely switch between target categories, but switch frequently during easier foraging. Does this reflect fundamental limits on visual working memory (VWM) capacity or simply strategic choice due to effort? Our participants performed time-limited or unlimited foraging tasks where they tapped stimuli from 2 target categories while avoiding items from 2 distractor categories. These time limits should have no effect if capacity imposes limits on VWM representations but more flexible VWM could allow observers to use VWM according to task demands in each case. We found that with time limits, participants switched more frequently and switch-costs became much smaller than during unlimited foraging. Observers can therefore switch between complex (conjunction) target categories when needed. We propose that while maintaining many complex templates in working memory is effortful and observers avoid this, they can do so if this fits task demands, showing the flexibility of working memory representations used for visual exploration. This is in contrast with recent proposals, and we discuss the implications of these findings for theoretical accounts of working memory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Mental traveling along psychological distances: The effects of cultural syndromes, perspective flexibility, and construal level.

    PubMed

    Wong, Vincent Chi; Wyer, Robert S

    2016-07-01

    Individuals' psychological distance from the stimuli they encounter in daily life can influence the abstractness or generality of the mental representations they form of these stimuli. However, these representations can also depend on the perspective from which the stimuli are construed. When individuals have either an individualistic social orientation or a short-term temporal orientation, they construe psychologically distal events more globally than they construe proximal ones, as implied by construal level theory (Trope & Liberman, 2010). When they have either a collectivistic social orientation or a long-term temporal orientation, however, they not only construe the implications of distal events more concretely than individuals with an egocentric perspective but also construe the implications of proximal events in more abstract terms. These effects are mediated by the flexibility of the perspectives that people take when they make judgments. Differences in perspective flexibility account for the impact of both situationally induced differences in social and temporal orientation and more chronic cultural differences in these orientations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Development of flexible Ni80Fe20 magnetic nano-thin films

    NASA Astrophysics Data System (ADS)

    Vopson, M. M.; Naylor, J.; Saengow, T.; Rogers, E. G.; Lepadatu, S.; Fetisov, Y. K.

    2017-11-01

    Flexible magnetic Ni80Fe20 thin films with excellent adhesion, mechanical and magnetic properties have been fabricated using magnetron plasma deposition. We demonstrate that flexible Ni80Fe20 thin films maintain their non-flexible magnetic properties when the films are over 60 nm thick. However, when their thickness is reduced, the flexible thin films display a significant increase in their magnetic coercive field compared to identical films coated on a solid silicon substrate. For a 15 nm flexible Ni80Fe20 film coated onto a 110 μm polyvinylidene fluoride polymer substrate, we achieved a remarkable 355% increase in the magnetic coercive field relative to the same film deposited onto a Si substrate. Experimental evidence, backed by micro-magnetic modelling, indicates that the increase in the coercive fields is related to the larger roughness texture of the flexible substrates. This effect essentially transforms soft Ni80Fe20 permalloy thin films into medium/hard magnetic films, allowing not only mechanical flexibility of the structure but also fine tuning of their magnetic properties.

  11. A Visual Interface for Querying Heterogeneous Phylogenetic Databases.

    PubMed

    Jamil, Hasan M

    2017-01-01

    Despite the recent growth in the number of phylogenetic databases, access to this wealth of resources remains largely driven by tool- or form-based interfaces. It is our thesis that the flexibility afforded by declarative query languages offers a better way to access these repositories and to pose truly powerful queries in unprecedented ways. In this paper, we propose a substantially enhanced closed visual query language, called PhyQL, that can be used to query phylogenetic databases represented in a canonical form. The canonical representation presented captures most phylogenetic tree formats in a convenient way, and is used as the storage model for our PhyloBase database, for which PhyQL serves as the query language. We have implemented a visual interface that lets end users pose PhyQL queries using visual icons and drag-and-drop operations defined over them. Once a query is posed, the interface translates the visual query into a Datalog query for execution over the canonical database. Responses are returned as hyperlinks to phylogenies that can be viewed in several formats using the tree viewers supported by PhyloBase. Results cached in the PhyQL buffer allow secondary querying on the computed results, making this a truly powerful querying architecture.
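
    The abstract does not give the canonical form itself, but a common canonical encoding of a phylogeny is a set of parent-child edge facts, over which an ancestor query is a transitive closure, exactly the shape of a recursive Datalog rule. A minimal sketch under that assumption (taxon names are invented):

    ```python
    # Hypothetical canonical form: a phylogeny stored as parent->child edge
    # facts, analogous to the relations a visual PhyQL query would be
    # translated into Datalog over.
    edges = {("root", "cladeA"), ("root", "cladeB"),
             ("cladeA", "human"), ("cladeA", "chimp"), ("cladeB", "mouse")}

    def ancestors(node):
        """Transitive closure over edges, mirroring the Datalog rules:
        ancestor(X,Y) :- edge(X,Y).
        ancestor(X,Y) :- edge(X,Z), ancestor(Z,Y)."""
        result, frontier = set(), {node}
        while frontier:
            frontier = {p for (p, c) in edges if c in frontier} - result
            result |= frontier
        return result

    print(ancestors("human"))  # == {"cladeA", "root"}
    ```

    A closed query language over such facts stays within relations, so results (here, a set of taxa) can themselves be queried again, which is what the cached-buffer secondary querying relies on.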

  12. Palmprint and face score level fusion: hardware implementation of a contactless small sample biometric system

    NASA Astrophysics Data System (ADS)

    Poinsot, Audrey; Yang, Fan; Brost, Vincent

    2011-02-01

    Including multiple sources of information in personal identity recognition and verification gives the opportunity to greatly improve performance. We propose a contactless biometric system that combines two modalities: palmprint and face. Hardware implementations are proposed on the Texas Instruments Digital Signal Processor and Xilinx Field-Programmable Gate Array (FPGA) platforms. The algorithmic chain consists of preprocessing (which includes palm extraction from hand images), Gabor feature extraction, comparison by Hamming distance, and score fusion. Fusion possibilities are discussed and tested first using a bimodal database of 130 subjects that we designed (uB database), and then two common public biometric databases (AR for face and PolyU for palmprint). High performance has been obtained for both recognition and verification: a recognition rate of 97.49% with the AR-PolyU database and an equal error rate of 1.10% on the uB database using only two training samples per subject. Hardware results demonstrate that preprocessing can easily be performed during the acquisition phase, and multimodal biometric recognition can be treated almost instantly (0.4 ms on FPGA). We show the feasibility of a robust and efficient multimodal hardware biometric system that offers several advantages, such as user-friendliness and flexibility.
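
    The matching chain the abstract names, binary Gabor-derived codes compared by Hamming distance followed by score fusion, can be sketched in a few lines. The bit strings and the fusion weight below are illustrative assumptions, not the paper's actual features or parameters:

    ```python
    # Sketch of Hamming-distance matching plus weighted-sum score fusion
    # (toy bit strings and weight; the paper's real codes come from Gabor
    # feature extraction on palm and face images).

    def hamming(a, b):
        """Normalized Hamming distance between equal-length bit strings."""
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b)) / len(a)

    def fuse(score_palm, score_face, w=0.5):
        """Weighted-sum score fusion; w=0.5 is an assumed weight."""
        return w * score_palm + (1 - w) * score_face

    palm_dist = hamming("10110100", "10010101")  # 2 of 8 bits differ -> 0.25
    face_dist = hamming("1100", "1000")          # 1 of 4 bits differ -> 0.25
    print(fuse(palm_dist, face_dist))            # 0.25
    ```

    Score-level fusion like this is attractive in hardware because each modality's matcher stays independent and only two scalars need combining at the end.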

  13. Pseudomonas Genome Database: facilitating user-friendly, comprehensive comparisons of microbial genomes.

    PubMed

    Winsor, Geoffrey L; Van Rossum, Thea; Lo, Raymond; Khaira, Bhavjinder; Whiteside, Matthew D; Hancock, Robert E W; Brinkman, Fiona S L

    2009-01-01

    Pseudomonas aeruginosa is a well-studied opportunistic pathogen that is particularly known for its intrinsic antimicrobial resistance, diverse metabolic capacity, and its ability to cause life threatening infections in cystic fibrosis patients. The Pseudomonas Genome Database (http://www.pseudomonas.com) was originally developed as a resource for peer-reviewed, continually updated annotation for the Pseudomonas aeruginosa PAO1 reference strain genome. In order to facilitate cross-strain and cross-species genome comparisons with other Pseudomonas species of importance, we have now expanded the database capabilities to include all Pseudomonas species, and have developed or incorporated methods to facilitate high quality comparative genomics. The database contains robust assessment of orthologs, a novel ortholog clustering method, and incorporates five views of the data at the sequence and annotation levels (Gbrowse, Mauve and custom views) to facilitate genome comparisons. A choice of simple and more flexible user-friendly Boolean search features allows researchers to search and compare annotations or sequences within or between genomes. Other features include more accurate protein subcellular localization predictions and a user-friendly, Boolean searchable log file of updates for the reference strain PAO1. This database aims to continue to provide a high quality, annotated genome resource for the research community and is available under an open source license.

  14. An online database for IHN virus in Pacific Salmonid fish: MEAP-IHNV

    USGS Publications Warehouse

    Kurath, Gael

    2012-01-01

    The MEAP-IHNV database provides access to detailed data for anyone interested in IHNV molecular epidemiology, such as fish health professionals, fish culture facility managers, and academic researchers. The flexible search capabilities enable the user to generate various output formats, including tables and maps, which should assist users in developing and testing hypotheses about how IHNV moves across landscapes and changes over time. The MEAP-IHNV database is available online at http://gis.nacse.org/ihnv/ (fig. 1). The database contains records that provide background information and genetic sequencing data for more than 1,000 individual field isolates of the fish virus Infectious hematopoietic necrosis virus (IHNV), and is updated approximately annually. It focuses on IHNV isolates collected throughout western North America from 1966 to the present. The database also includes a small number of IHNV isolates from Eastern Russia. By engaging the expertise of the broader community of colleagues interested in IHNV, our goal is to enhance the overall understanding of IHNV epidemiology, including defining sources of disease outbreaks and viral emergence events, identifying virus traffic patterns and potential reservoirs, and understanding how human management of salmonid fish culture affects disease. Ultimately, this knowledge can be used to develop new strategies to reduce the effect of IHN disease in cultured and wild fish.

  15. "Gone are the days of mass-media marketing plans and short term customer relationships": tobacco industry direct mail and database marketing strategies.

    PubMed

    Lewis, M Jane; Ling, Pamela M

    2016-07-01

    As limitations on traditional marketing tactics and scrutiny by tobacco control have increased, the tobacco industry has benefited from direct mail marketing which transmits marketing messages directly to carefully targeted consumers utilising extensive custom consumer databases. However, research in these areas has been limited. This is the first study to examine the development, purposes and extent of direct mail and customer databases. We examined direct mail and database marketing by RJ Reynolds and Philip Morris utilising internal tobacco industry documents from the Legacy Tobacco Document Library employing standard document research techniques. Direct mail marketing utilising industry databases began in the 1970s and grew from the need for a promotional strategy to deal with declining smoking rates, growing numbers of products and a cluttered media landscape. Both RJ Reynolds and Philip Morris started with existing commercial consumer mailing lists, but subsequently decided to build their own databases of smokers' names, addresses, brand preferences, purchase patterns, interests and activities. By the mid-1990s both RJ Reynolds and Philip Morris databases contained at least 30 million smokers' names each. These companies valued direct mail/database marketing's flexibility, efficiency and unique ability to deliver specific messages to particular groups as well as direct mail's limited visibility to tobacco control, public health and regulators. Database marketing is an important and increasingly sophisticated tobacco marketing strategy. Additional research is needed on the prevalence of receipt and exposure to direct mail items and their influence on receivers' perceptions and smoking behaviours. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. Fast and Flexible Multivariate Time Series Subsequence Search

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Oza, Nikunj C.; Zhu, Qiang; Srivastava, Ashok N.

    2010-01-01

    Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical monitoring, and financial systems. Domain experts are often interested in searching for interesting multivariate patterns in these MTS databases, which often contain several gigabytes of data. Surprisingly, research on MTS search is very limited. Most of the existing work supports only queries of the same length as the indexed data, or queries on a fixed set of variables. In this paper, we propose an efficient and flexible subsequence search framework for massive MTS databases that, for the first time, enables querying on any subset of variables with arbitrary time delays between them. We propose two algorithms to solve this problem: (1) a List Based Search (LBS) algorithm which uses sorted lists for indexing, and (2) an R*-tree Based Search (RBS) which uses Minimum Bounding Rectangles (MBR) to organize the subsequences. Both algorithms guarantee that all matching patterns within the specified thresholds will be returned (no false dismissals). The very few false alarms can be removed by a post-processing step. Since our framework is also capable of Univariate Time-Series (UTS) subsequence search, we first demonstrate the efficiency of our algorithms on several UTS datasets previously used in the literature. We follow this up with experiments using two large MTS databases from the aviation domain, each containing several millions of observations. Both these tests show that our algorithms have very high prune rates (>99%), thus needing actual disk access for less than 1% of the observations. To the best of our knowledge, MTS subsequence search has never been attempted on datasets of the size we have used in this paper.
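
    The no-false-dismissals guarantee is easiest to see against the exhaustive baseline: scan every window and keep those within the distance threshold. A toy univariate version (Euclidean distance; the series values are made up) is below; LBS and RBS add index structures so that most windows are pruned without this full scan:

    ```python
    import math

    # Naive threshold subsequence search: examine every window of length
    # len(query) and keep those within the distance threshold. Exhaustive,
    # so no false dismissals by construction.
    def subsequence_search(series, query, threshold):
        m = len(query)
        hits = []
        for i in range(len(series) - m + 1):
            window = series[i:i + m]
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(window, query)))
            if dist <= threshold:
                hits.append((i, dist))
        return hits

    series = [0.0, 1.0, 2.0, 1.0, 0.0, 1.1, 2.1, 1.0]
    print(subsequence_search(series, [1.0, 2.0, 1.0], 0.2))
    # matches at offsets 1 (exact) and 5 (within threshold)
    ```

    An index-based method is allowed to over-approximate (false alarms, removed by post-processing) but never to miss a window this scan would report.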

  17. Effect of the Pilates method on physical conditioning of healthy subjects: a systematic review and meta-analysis.

    PubMed

    Campos, Renata R; Dias, Josilainne M; Pereira, Ligia M; Obara, Karen; Barreto, Maria S; Silva, Mariana F; Mazuquin, Bruno F; Christofaro, Diego G; Fernandes, Romulo A; Iversen, Maura D; Cardoso, Jefferson R

    2016-01-01

    Physical conditioning consists of a variety of health-related attributes, and Pilates exercises are described as a form of this conditioning. The objective of this systematic review was to determine the effect of the Pilates method on health- and ability-related outcomes of physical conditioning in healthy individuals. The search was performed in the following databases (1950-2014): Medline, Cinahl, Embase, Lilacs, Scielo, Web of Science, PEDro, Cochrane Controlled Trials Register Library, Scopus, Science Direct and Google Scholar. Included studies were randomized controlled trials (RCTs) that assessed the effects of the Pilates method on healthy subjects. Nine RCTs met the inclusion criteria. Pilates improved abdominal muscular endurance when compared with no exercise (mean difference [MD]=9.53%; 95% CI: 2.41, 16.43; P=0.009); however, there was no difference in flexibility (MD=4.97; 95% CI: -0.53, 10.47; P=0.08). Some positive effects (up to 6 months) of Pilates practice were found in some RCTs, including improvement of dynamic balance, quality of life and back muscle flexibility. The results indicate that Pilates exercises performed on the mat or apparatus 2 to 3 times a week, for 5 to 12 weeks, improve abdominal muscular endurance (on average, 10 more abdominal curls in a 1-minute sit-up test) for both genders, when compared to no exercise.

  18. An ethogram for Benthic Octopods (Cephalopoda: Octopodidae).

    PubMed

    Mather, Jennifer A; Alupay, Jean S

    2016-05-01

    The present paper constructs a general ethogram for the actions of the flexible body as well as the skin displays of octopuses in the family Octopodidae. The actions of 6 sets of structures (mantle-funnel, arms, sucker-stalk, skin-web, head, and mouth) combine to produce behavioral units that involve positioning of parts leading to postures such as the flamboyant, movements of parts of the animal with relation to itself including head bob and grooming, and movements of the whole animal by both jetting in the water and crawling along the substrate. Muscular actions result in 4 key changes in skin display: (a) chromatophore expansion, (b) chromatophore contraction resulting in appearance of reflective colors such as iridophores and leucophores, (c) erection of papillae on the skin, and (d) overall postures of arms and mantle controlled by actions of the octopus muscular hydrostat. They produce appearances, including excellent camouflage, moving passing cloud and iridescent blue rings, with only a few known species-specific male visual sexual displays. Commonalities across the family suggest that, despite having flexible muscular hydrostat movement systems producing several behavioral units, simplicity of production may underlie the complexity of movement and appearance. This systematic framework allows researchers to take the next step in modeling how such diversity can be a combination of just a few variables. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Comparative transcriptome analysis between planarian Dugesia japonica and other platyhelminth species.

    PubMed

    Nishimura, Osamu; Hirao, Yukako; Tarui, Hiroshi; Agata, Kiyokazu

    2012-06-29

    Planarians are considered to be among the extant animals close to one of the earliest groups of organisms that acquired a central nervous system (CNS) during evolution. Planarians have a bilobed brain with nine lateral branches from which a variety of external signals are projected into different portions of the main lobes. Various interneurons process different signals to regulate behavior and learning/memory. Furthermore, planarians have robust regenerative ability and are attracting attention as a new model organism for the study of regeneration. Here we conducted large-scale EST analysis of the head region of the planarian Dugesia japonica to construct a database of the head-region transcriptome, and then performed comparative analyses among related species. A total of 54,752 high-quality EST reads were obtained from a head library of the planarian Dugesia japonica, and 13,167 unigene sequences were produced by de novo assembly. A new method devised here revealed that proteins related to metabolism and defense mechanisms have high flexibility of amino-acid substitutions within the planarian family. Eighty-two CNS-development genes were found in the planarian (cf. C. elegans 3; chicken 129). Comparative analysis revealed that 91% of the planarian CNS-development genes could be mapped onto the schistosome genome, but one-third of these shared genes were not expressed in the schistosome. We constructed a database that is a useful resource for comparative planarian transcriptome studies. Analysis comparing homologous genes between two planarian species showed that the potential of genes is important for accumulation of amino-acid substitutions. The presence of many CNS-development genes in our database supports the notion that the planarian has a fundamental brain with regard to evolution and development at not only the morphological/functional, but also the genomic, level. In addition, our results indicate that the planarian CNS-development genes already existed before the divergence of planarians and schistosomes from their common ancestor.

  20. Short Fiction on Film: A Relational DataBase.

    ERIC Educational Resources Information Center

    May, Charles

    Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…

  1. Novel Hybrid Virtual Screening Protocol Based on Molecular Docking and Structure-Based Pharmacophore for Discovery of Methionyl-tRNA Synthetase Inhibitors as Antibacterial Agents

    PubMed Central

    Liu, Chi; He, Gu; Jiang, Qinglin; Han, Bo; Peng, Cheng

    2013-01-01

    Methionyl-tRNA synthetase (MetRS) is an essential enzyme involved in protein biosynthesis in all living organisms and is a potential antibacterial target. In the current study, a structure-based pharmacophore (SBP)-guided method is suggested to generate a comprehensive pharmacophore of MetRS based on fourteen crystal structures of MetRS-inhibitor complexes. In this investigation, a hybrid virtual screening protocol, comprising pharmacophore model-based virtual screening (PBVS) and rigid and flexible docking-based virtual screenings (DBVS), is used for retrieving new MetRS inhibitors from commercially available chemical databases. This hybrid virtual screening approach was then applied to screen the Specs database (202,408 compounds), a structurally diverse chemical database. Fifteen hit compounds were selected from the final hits and advanced to experimental studies. These results may provide important information for further research on novel MetRS inhibitors as antibacterial agents. PMID:23839093

  2. Development of a paperless, Y2K compliant exposure tracking database at Los Alamos National Laboratory.

    PubMed

    Conwell, J L; Creek, K L; Pozzi, A R; Whyte, H M

    2001-02-01

    The Industrial Hygiene and Safety Group at Los Alamos National Laboratory (LANL) developed a database application known as IH DataView, which manages industrial hygiene monitoring data. IH DataView replaces a LANL legacy system, IHSD, that restricted user access to a single point of data entry, needed enhancements to support new operational requirements, and was not Year 2000 (Y2K) compliant. IH DataView features a comprehensive suite of data collection and tracking capabilities. Through the use of Oracle database management and application development tools, the system is Y2K compliant and Web enabled for easy deployment and user access via the Internet. System accessibility is particularly important because LANL operations are spread over 43 square miles, and industrial hygienists (IHs) located across the laboratory will use the system. IH DataView eliminates these problems and shows promise for continued use: it has a flexible architecture and a sophisticated capability to collect, track, and analyze data in an easy-to-use form.

  3. Concept-oriented indexing of video databases: toward semantic sensitive retrieval and browsing.

    PubMed

    Fan, Jianping; Luo, Hangzai; Elmagarmid, Ahmed K

    2004-07-01

    Digital video now plays an important role in medical education, health care, telemedicine and other medical applications. Several content-based video retrieval (CBVR) systems have been proposed in the past, but they still suffer from the following challenging problems: semantic gap, semantic video concept modeling, semantic video classification, and concept-oriented video database indexing and access. In this paper, we propose a novel framework to make some advances toward the final goal to solve these problems. Specifically, the framework includes: 1) a semantic-sensitive video content representation framework by using principal video shots to enhance the quality of features; 2) semantic video concept interpretation by using flexible mixture model to bridge the semantic gap; 3) a novel semantic video-classifier training framework by integrating feature selection, parameter estimation, and model selection seamlessly in a single algorithm; and 4) a concept-oriented video database organization technique through a certain domain-dependent concept hierarchy to enable semantic-sensitive video retrieval and browsing.

  4. Interactive and Versatile Navigation of Structural Databases.

    PubMed

    Korb, Oliver; Kuhn, Bernd; Hert, Jérôme; Taylor, Neil; Cole, Jason; Groom, Colin; Stahl, Martin

    2016-05-12

    We present CSD-CrossMiner, a novel tool for pharmacophore-based searches in crystal structure databases. Intuitive pharmacophore queries describing, among others, protein-ligand interaction patterns, ligand scaffolds, or protein environments can be built and modified interactively. Matching crystal structures are overlaid onto the query and visualized as soon as they are available, enabling the researcher to quickly modify a hypothesis on the fly. We exemplify the utility of the approach by showing applications relevant to real-world drug discovery projects, including the identification of novel fragments for a specific protein environment or scaffold hopping. The ability to concurrently search protein-ligand binding sites extracted from the Protein Data Bank (PDB) and small organic molecules from the Cambridge Structural Database (CSD) using the same pharmacophore query further emphasizes the flexibility of CSD-CrossMiner. We believe that CSD-CrossMiner closes an important gap in mining structural data and will allow users to extract more value from the growing number of available crystal structures.

  5. SiC: An Agent Based Architecture for Preventing and Detecting Attacks to Ubiquitous Databases

    NASA Astrophysics Data System (ADS)

    Pinzón, Cristian; de Paz, Yanira; Bajo, Javier; Abraham, Ajith; Corchado, Juan M.

    One of the main attacks on ubiquitous databases is the structured query language (SQL) injection attack, which causes severe damage both in the commercial aspect and in the user’s confidence. This chapter proposes the SiC architecture as a solution to the SQL injection attack problem. This is a hierarchical distributed multiagent architecture, which involves an entirely new approach with respect to existing architectures for the prevention and detection of SQL injections. SiC incorporates a kind of intelligent agent, which integrates a case-based reasoning system. This agent, which is the core of the architecture, allows the application of detection techniques based on anomalies as well as those based on patterns, providing a great degree of autonomy, flexibility, robustness and dynamic scalability. The characteristics of the multiagent system allow the architecture to detect attacks from different types of devices, regardless of their physical location. The architecture has been tested on a medical database, guaranteeing safe access from various devices such as PDAs and notebook computers.
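
    The pattern-based side of such detection can be sketched with a few signature rules (a minimal illustration; the signature list and function name are assumptions, and SiC additionally applies case-based anomaly detection, which is not shown here):

```python
import re

# Hypothetical signature list covering a few classic SQL injection shapes.
SIGNATURES = [
    r"(?i)\bunion\b.+\bselect\b",   # UNION-based injection
    r"(?i)\bor\b\s+\d+\s*=\s*\d+",  # tautology such as OR 1=1
    r"--",                          # trailing comment to cut off the query
    r"(?i);\s*drop\b",              # piggy-backed statement
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input matching any known injection signature."""
    return any(re.search(p, user_input) for p in SIGNATURES)

print(looks_like_injection("' OR 1=1 --"))  # True
print(looks_like_injection("O'Brien"))      # False
```

    A purely signature-based check misses novel attacks, which is why the architecture above pairs it with anomaly-based detection.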

  6. A Priority Fuzzy Logic Extension of the XQuery Language

    NASA Astrophysics Data System (ADS)

    Škrbić, Srdjan; Wettayaprasit, Wiphada; Saeueng, Pannipa

    2011-09-01

    In recent years there have been significant research findings on flexible XML querying techniques using fuzzy set theory, and many types of fuzzy extensions to the XML data model and to XML query languages have been proposed. In this paper, we introduce priority fuzzy logic extensions to the XQuery language; these extensions define a new query language. Moreover, we describe a way to implement an interpreter for this language using an existing native XML database.
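
    The flavor of a fuzzy XML query can be sketched outside XQuery as follows (a toy membership function evaluated over an XML document; the element names and the `cheap` predicate are illustrative, not the proposed syntax):

```python
import xml.etree.ElementTree as ET

def cheap(price, ideal=100.0, limit=300.0):
    """Membership degree of 'cheap': 1 at or below `ideal`,
    0 at or above `limit`, linear in between."""
    if price <= ideal:
        return 1.0
    if price >= limit:
        return 0.0
    return (limit - price) / (limit - ideal)

doc = ET.fromstring(
    "<hotels>"
    "<hotel name='A' price='90'/>"
    "<hotel name='B' price='200'/>"
    "<hotel name='C' price='350'/>"
    "</hotels>")

# Rank elements by membership degree instead of filtering them out.
ranked = sorted(((h.get("name"), cheap(float(h.get("price"))))
                 for h in doc), key=lambda x: -x[1])
print(ranked)  # [('A', 1.0), ('B', 0.5), ('C', 0.0)]
```

    A fuzzy query language returns such graded results; a priority extension would additionally weight the degrees of several predicates before combining them.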

  7. Class dependency of fuzzy relational database using relational calculus and conditional probability

    NASA Astrophysics Data System (ADS)

    Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya

    2018-03-01

    In this paper, we propose a design of a fuzzy relational database that handles conditional probability relations using fuzzy relational calculus. Previous research has studied equivalence classes in fuzzy databases using similarity or approximate relations, and investigating fuzzy dependency through equivalence classes is an interesting topic. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for database operations such as 'projection', 'selection', 'injection' and 'natural join'. Using the fuzzy relational calculus and conditional probabilities, we introduce notions of equivalence class, redundancy, and dependency in the theory of fuzzy relational databases.
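
    The fuzzy database operations named above can be illustrated with a minimal sketch in which a fuzzy relation maps each tuple to a membership degree in [0, 1] (the max/min semantics used here are a common convention and an assumption, not necessarily the paper's exact calculus):

```python
def select(rel, pred):
    """Keep tuples satisfying a crisp predicate, preserving degrees."""
    return {t: d for t, d in rel.items() if pred(t)}

def project(rel, indices):
    """Collapse tuples to the given attribute positions,
    taking the max degree over collapsed tuples."""
    out = {}
    for t, d in rel.items():
        key = tuple(t[i] for i in indices)
        out[key] = max(out.get(key, 0.0), d)
    return out

def natural_join(r, s):
    """Join on the last attribute of r and the first attribute of s,
    combining degrees with min."""
    out = {}
    for t, d1 in r.items():
        for u, d2 in s.items():
            if t[-1] == u[0]:
                out[t + u[1:]] = min(d1, d2)
    return out

employees = {("alice", "db"): 0.9, ("bob", "db"): 0.6, ("carol", "ml"): 0.8}
groups = {("db", "infra"): 0.7}
print(natural_join(employees, groups))
# {('alice', 'db', 'infra'): 0.7, ('bob', 'db', 'infra'): 0.6}
```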

  8. Aerodynamic effects of flexibility in flapping wings.

    PubMed

    Zhao, Liang; Huang, Qingfeng; Deng, Xinyan; Sane, Sanjay P

    2010-03-06

    Recent work on the aerodynamics of flapping flight reveals fundamental differences in the mechanisms of aerodynamic force generation between fixed and flapping wings. When fixed wings translate at high angles of attack, they periodically generate and shed leading and trailing edge vortices as reflected in their fluctuating aerodynamic force traces and associated flow visualization. In contrast, wings flapping at high angles of attack generate stable leading edge vorticity, which persists throughout the duration of the stroke and enhances mean aerodynamic forces. Here, we show that aerodynamic forces can be controlled by altering the trailing edge flexibility of a flapping wing. We used a dynamically scaled mechanical model of flapping flight (Re approximately 2000) to measure the aerodynamic forces on flapping wings of variable flexural stiffness (EI). For low to medium angles of attack, as flexibility of the wing increases, its ability to generate aerodynamic forces decreases monotonically but its lift-to-drag ratios remain approximately constant. The instantaneous force traces reveal no major differences in the underlying modes of force generation for flexible and rigid wings, but the magnitude of force, the angle of net force vector and centre of pressure all vary systematically with wing flexibility. Even a rudimentary framework of wing veins is sufficient to restore the ability of flexible wings to generate forces at near-rigid values. Thus, the magnitude of force generation can be controlled by modulating the trailing edge flexibility and thereby controlling the magnitude of the leading edge vorticity. To characterize this, we have generated a detailed database of aerodynamic forces as a function of several variables including material properties, kinematics, aerodynamic forces and centre of pressure, which can also be used to help validate computational models of aeroelastic flapping wings. 
These experiments will also be useful for wing design for small robotic insects and, to a limited extent, in understanding the aerodynamics of flapping insect wings.

  9. Foot orthoses for adults with flexible pes planus: a systematic review.

    PubMed

    Banwell, Helen A; Mackintosh, Shylie; Thewlis, Dominic

    2014-04-05

    Foot orthoses are widely used in the management of flexible pes planus, yet the evidence to support this intervention has not been clearly defined. This systematic review aimed to critically appraise the evidence for the use of foot orthoses for flexible pes planus in adults. Electronic databases (Medline, CINAHL, Cochrane, Web of science, SportDiscus, Embase) were systematically searched in June 2013 for randomised controlled, controlled clinical and repeated measure trials in which participants had flexible pes planus identified using a validated and reliable measure of pes planus, and the intervention was a rigid or semi-rigid orthosis, with the comparison being a no-orthosis (shoes alone or flat non-posted insert) condition. Outcomes of interest were foot pain, rearfoot kinematics, foot kinetics and physical function. Of the 2,211 articles identified by the searches, 13 studies met the inclusion criteria; two were randomised controlled trials, one was a controlled trial and 10 were repeated measure studies. Across the included studies, 59 relevant outcome measures were reported, with 17 calculated as statistically significant large or medium effects observed with use of foot orthoses compared to the no-orthosis condition (SMD range 1.13 to -4.11). No high level evidence supported the use of foot orthoses for flexible pes planus. There is good to moderate level evidence that foot orthoses improve physical function (medial-lateral sway in standing (level II) and energy cost during walking (level III)). There is low level evidence (level IV) that foot orthoses improve pain, reduce rearfoot eversion, alter loading and impact forces; and reduce rearfoot inversion and eversion moments in flexible pes planus. Well-designed randomised controlled trials that include appropriate sample sizes, clinical cohorts and involve a measure of symptom change are required to determine the efficacy of foot orthoses to manage adult flexible pes planus.

  10. Aerodynamic effects of flexibility in flapping wings

    PubMed Central

    Zhao, Liang; Huang, Qingfeng; Deng, Xinyan; Sane, Sanjay P.

    2010-01-01

    Recent work on the aerodynamics of flapping flight reveals fundamental differences in the mechanisms of aerodynamic force generation between fixed and flapping wings. When fixed wings translate at high angles of attack, they periodically generate and shed leading and trailing edge vortices as reflected in their fluctuating aerodynamic force traces and associated flow visualization. In contrast, wings flapping at high angles of attack generate stable leading edge vorticity, which persists throughout the duration of the stroke and enhances mean aerodynamic forces. Here, we show that aerodynamic forces can be controlled by altering the trailing edge flexibility of a flapping wing. We used a dynamically scaled mechanical model of flapping flight (Re ≈ 2000) to measure the aerodynamic forces on flapping wings of variable flexural stiffness (EI). For low to medium angles of attack, as flexibility of the wing increases, its ability to generate aerodynamic forces decreases monotonically but its lift-to-drag ratios remain approximately constant. The instantaneous force traces reveal no major differences in the underlying modes of force generation for flexible and rigid wings, but the magnitude of force, the angle of net force vector and centre of pressure all vary systematically with wing flexibility. Even a rudimentary framework of wing veins is sufficient to restore the ability of flexible wings to generate forces at near-rigid values. Thus, the magnitude of force generation can be controlled by modulating the trailing edge flexibility and thereby controlling the magnitude of the leading edge vorticity. To characterize this, we have generated a detailed database of aerodynamic forces as a function of several variables including material properties, kinematics, aerodynamic forces and centre of pressure, which can also be used to help validate computational models of aeroelastic flapping wings. 
These experiments will also be useful for wing design for small robotic insects and, to a limited extent, in understanding the aerodynamics of flapping insect wings. PMID:19692394

  11. Foot orthoses for adults with flexible pes planus: a systematic review

    PubMed Central

    2014-01-01

    Background Foot orthoses are widely used in the management of flexible pes planus, yet the evidence to support this intervention has not been clearly defined. This systematic review aimed to critically appraise the evidence for the use of foot orthoses for flexible pes planus in adults. Methods Electronic databases (Medline, CINAHL, Cochrane, Web of science, SportDiscus, Embase) were systematically searched in June 2013 for randomised controlled, controlled clinical and repeated measure trials in which participants had flexible pes planus identified using a validated and reliable measure of pes planus, and the intervention was a rigid or semi-rigid orthosis, with the comparison being a no-orthosis (shoes alone or flat non-posted insert) condition. Outcomes of interest were foot pain, rearfoot kinematics, foot kinetics and physical function. Results Of the 2,211 articles identified by the searches, 13 studies met the inclusion criteria; two were randomised controlled trials, one was a controlled trial and 10 were repeated measure studies. Across the included studies, 59 relevant outcome measures were reported, with 17 calculated as statistically significant large or medium effects observed with use of foot orthoses compared to the no-orthosis condition (SMD range 1.13 to -4.11). Conclusions No high level evidence supported the use of foot orthoses for flexible pes planus. There is good to moderate level evidence that foot orthoses improve physical function (medial-lateral sway in standing (level II) and energy cost during walking (level III)). There is low level evidence (level IV) that foot orthoses improve pain, reduce rearfoot eversion, alter loading and impact forces; and reduce rearfoot inversion and eversion moments in flexible pes planus. Well-designed randomised controlled trials that include appropriate sample sizes, clinical cohorts and involve a measure of symptom change are required to determine the efficacy of foot orthoses to manage adult flexible pes planus. PMID:24708560

  12. The effectiveness of non-surgical intervention (Foot Orthoses) for paediatric flexible pes planus: A systematic review: Update

    PubMed Central

    Uden, Hayley; Banwell, Helen A.; Kumar, Saravana

    2018-01-01

    Background Flexible pes planus (flat feet) in children is a common presenting condition in clinical practice due to concerns amongst parents and caregivers. While Foot Orthoses (FOs) are a popular intervention, their effectiveness remains unclear. Thus, the aim of this systematic review was to update the current evidence base for the effectiveness of FOs for paediatric flexible pes planus. Methods A systematic search of electronic databases (Cochrane, Medline, AMED, EMBASE, CINHAL, SportDiscus, Scopus and PEDro) was conducted from January 2011 to July 2017. Studies of children (0–18 years) diagnosed with flexible pes planus and intervention to be any type of Foot Orthoses (FOs) were included. This review was conducted and reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. McMaster critical review form for quantitative studies, was used to assess the methodological quality of the included studies. Given the heterogeneity of the included studies, a descriptive synthesis of the included studies was undertaken. Results Out of 606 articles identified, 11 studies (three RCTs; two case-controls; five case-series and one single case study) met the inclusion criteria. A diverse range of pre-fabricated and customised FOs were utilised and effectiveness measured through a plethora of outcomes. Summarised findings from the heterogeneous evidence base indicated that FOs may have a positive impact across a range of outcomes including pain, foot posture, gait, function and structural and kinetic measures. Despite these consistent positive outcomes reported in several studies, the current evidence base lacks clarity and uniformity in terms of diagnostic criteria, interventions delivered and outcomes measured for paediatric flexible pes planus. Conclusion There continues to remain uncertainty on the effectiveness of FOs for paediatric flexible pes planus. 
Despite a number of methodological limitations, FOs show potential as a treatment method for children with flexible pes planus. PROSPERO registration number CRD42017057310. PMID:29451921

  13. A psychological flexibility conceptualisation of the experience of injustice among individuals with chronic pain

    PubMed Central

    McCracken, Lance M; Trost, Zina

    2014-01-01

    Accumulating evidence suggests that the experience of injustice in patients with chronic pain is associated with poorer pain-related outcomes. Despite this evidence, a theoretical framework to understand this relationship is presently lacking. This review is the first to propose that the psychological flexibility model underlying Acceptance and Commitment Therapy (ACT) may provide a clinically useful conceptual framework to understand the association between the experience of injustice and chronic pain outcomes. A literature review was conducted to identify research and theory on the injustice experience in chronic pain, chronic pain acceptance, and ACT. Research relating injustice to chronic pain outcomes is summarised, the relevance of psychological flexibility to the injustice experience is discussed, and the subprocesses of psychological flexibility are proposed as potential mediating factors in the relationship between injustice and pain outcomes. Application of the psychological flexibility model to the experience of pain-related injustice may provide new avenues for future research and clinical interventions for patients with pain. Summary points • Emerging research links the experience of pain-related injustice to problematic pain outcomes. • A clinically relevant theoretical framework is currently lacking to guide future research and intervention on pain-related injustice. • The psychological flexibility model would suggest that the overarching process of psychological inflexibility mediates between the experience of injustice and adverse chronic pain outcomes. • Insofar as the processes of psychological inflexibility account for the association between injustice experiences and pain outcomes, methods of Acceptance and Commitment Therapy (ACT) may reduce the impact of injustice on pain outcomes. • Future research is needed to empirically test the proposed associations between the experience of pain-related injustice, psychological flexibility and pain outcomes, and whether ACT interventions mitigate the impact of pain-related injustice on pain outcomes. PMID:26516537

  14. Type-Based Access Control in Data-Centric Systems

    NASA Astrophysics Data System (ADS)

    Caires, Luís; Pérez, Jorge A.; Seco, João Costa; Vieira, Hugo Torres; Ferrão, Lúcio

    Data-centric multi-user systems, such as web applications, require flexible yet fine-grained data security mechanisms. Such mechanisms are usually enforced by a specially crafted security layer, which adds extra complexity and often leads to error prone coding, easily causing severe security breaches. In this paper, we introduce a programming language approach for enforcing access control policies to data in data-centric programs by static typing. Our development is based on the general concept of refinement type, but extended so as to address realistic and challenging scenarios of permission-based data security, in which policies dynamically depend on the database state, and flexible combinations of column- and row-level protection of data are necessary. We state and prove soundness and safety of our type system, stating that well-typed programs never break the declared data access control policies.
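
    The combination of column- and row-level protection described above can be sketched as a runtime check (a simplification: the paper enforces such policies statically through refinement types, and the Policy/read names here are illustrative):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    readable_columns: set          # column-level protection
    row_filter: Callable           # state-dependent row-level predicate

def read(rows, user, policy):
    """Return only the rows the user may see, restricted to
    the columns the policy exposes."""
    visible = []
    for row in rows:
        if policy.row_filter(user, row):
            visible.append({c: v for c, v in row.items()
                            if c in policy.readable_columns})
    return visible

rows = [{"id": 1, "owner": "alice", "salary": 100},
        {"id": 2, "owner": "bob", "salary": 200}]
policy = Policy(readable_columns={"id", "owner"},
                row_filter=lambda user, row: row["owner"] == user)
print(read(rows, "alice", policy))  # [{'id': 1, 'owner': 'alice'}]
```

    A type-based approach moves this check to compile time, rejecting programs that could ever return a row or column the policy forbids.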

  15. Coactivation of cognitive control networks during task switching.

    PubMed

    Yin, Shouhang; Deák, Gedeon; Chen, Antao

    2018-01-01

    The ability to flexibly switch between tasks is considered an important component of cognitive control that involves frontal and parietal cortical areas. The present study was designed to characterize network dynamics across multiple brain regions during task switching. Functional magnetic resonance images (fMRI) were captured during a standard rule-switching task to identify switching-related brain regions. Multiregional psychophysiological interaction (PPI) analysis was used to examine effective connectivity between these regions. During switching trials, behavioral performance declined and activation of a generic cognitive control network increased. Concurrently, task-related connectivity increased within and between cingulo-opercular and fronto-parietal cognitive control networks. Notably, the left inferior frontal junction (IFJ) was most consistently coactivated with the 2 cognitive control networks. Furthermore, switching-dependent effective connectivity was negatively correlated with behavioral switch costs. The strength of effective connectivity between left IFJ and other regions in the networks predicted individual differences in switch costs. Task switching was supported by coactivated connections within cognitive control networks, with left IFJ potentially acting as a key hub between the fronto-parietal and cingulo-opercular networks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. A natural language interface plug-in for cooperative query answering in biological databases.

    PubMed

    Jamil, Hasan M

    2012-06-11

    One of the many unique features of biological databases is that the mere existence of a ground data item is not always a precondition for a query response. It may be argued that from a biologist's standpoint, queries are not always best posed using a structured language. By this we mean that approximate and flexible responses to natural language like queries are well suited for this domain. This is partly due to biologists' tendency to seek simpler interfaces and partly due to the fact that questions in biology involve high level concepts that are open to interpretations computed using sophisticated tools. In such highly interpretive environments, rigidly structured databases do not always perform well. In this paper, our goal is to propose a semantic correspondence plug-in to aid natural language query processing over arbitrary biological database schema with an aim to providing cooperative responses to queries tailored to users' interpretations. Natural language interfaces for databases are generally effective when they are tuned to the underlying database schema and its semantics. Therefore, changes in database schema become impossible to support, or a substantial reorganization cost must be absorbed to reflect any change. We leverage developments in natural language parsing, rule languages and ontologies, and data integration technologies to assemble a prototype query processor that is able to transform a natural language query into a semantically equivalent structured query over the database. We allow knowledge rules and their frequent modifications as part of the underlying database schema. The approach we adopt in our plug-in overcomes some of the serious limitations of many contemporary natural language interfaces, including support for schema modifications and independence from underlying database schema. 
The plug-in introduced in this paper is generic and facilitates connecting user selected natural language interfaces to arbitrary databases using a semantic description of the intended application. We demonstrate the feasibility of our approach with a practical example.
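
The rule-driven translation of a natural language question into a structured query can be sketched roughly as follows. This is a minimal illustration only; the patterns, table names and columns are invented, not the plug-in's actual rule language.

```python
import re

# Toy rule table in the spirit of the plug-in described above: each rule pairs
# a natural-language pattern with an SQL template. All names are hypothetical.
RULES = [
    (re.compile(r"which genes regulate (\w+)", re.I),
     "SELECT regulator FROM regulation WHERE target = '{0}'"),
    (re.compile(r"what is the function of (\w+)", re.I),
     "SELECT function FROM annotation WHERE gene = '{0}'"),
]

def nl_to_sql(question):
    """Translate a natural-language question into SQL via the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(question)
        if match:
            return template.format(*match.groups())
    return None  # no rule matched; a real system would fall back to deeper parsing
```

Because the rules live outside the query processor, a schema change means editing the rule table rather than retuning the interface, which is the flexibility the abstract argues for.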

  17. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

Imaging of physically inaccessible parts of the body, such as the colon, at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on imaging fiber bundles are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This prevents the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past to develop and implement pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited the wider use and adaptability of their work. A database of pixelated images is currently required to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help for researchers working on comb-structure removal algorithms.

  18. SuperNatural: a searchable database of available natural compounds.

    PubMed

    Dunkel, Mathias; Fullbeck, Melanie; Neumann, Stefanie; Preissner, Robert

    2006-01-01

Although tremendous effort has been put into synthetic libraries, most drugs on the market are still natural compounds or derivatives thereof. There are encyclopaedias of natural compounds, but the availability of these compounds is often unclear and catalogues from numerous suppliers have to be checked. To overcome these problems we have compiled a database of approximately 50,000 natural compounds from different suppliers. To enable efficient identification of the desired compounds, we have implemented substructure searches with typical templates. Starting points for in silico screenings are about 2500 well-known and classified natural compounds from a compendium that we have added. Possible medical applications can be ascertained via automatic searches for similar drugs in a free conformational drug database containing WHO indications. Furthermore, we have computed about three million conformers, which are deployed to account for the flexibility of the compounds when the 3D superposition algorithm that we have developed is used. The SuperNatural Database is publicly available at http://bioinformatics.charite.de/supernatural. Viewing requires the free Chime plug-in from MDL (Chime) or the Java 2 Runtime Environment (MView), which is also necessary for using the Marvin application for chemical drawing.

  19. NSDNA: a manually curated database of experimentally supported ncRNAs associated with nervous system diseases

    PubMed Central

    Wang, Jianjian; Cao, Yuze; Zhang, Huixue; Wang, Tianfeng; Tian, Qinghua; Lu, Xiaoyu; Lu, Xiaoyan; Kong, Xiaotong; Liu, Zhaojun; Wang, Ning; Zhang, Shuai; Ma, Heping; Ning, Shangwei; Wang, Lihua

    2017-01-01

    The Nervous System Disease NcRNAome Atlas (NSDNA) (http://www.bio-bigdata.net/nsdna/) is a manually curated database that provides comprehensive experimentally supported associations about nervous system diseases (NSDs) and noncoding RNAs (ncRNAs). NSDs represent a common group of disorders, some of which are characterized by high morbidity and disabilities. The pathogenesis of NSDs at the molecular level remains poorly understood. ncRNAs are a large family of functionally important RNA molecules. Increasing evidence shows that diverse ncRNAs play a critical role in various NSDs. Mining and summarizing NSD–ncRNA association data can help researchers discover useful information. Hence, we developed an NSDNA database that documents 24 713 associations between 142 NSDs and 8593 ncRNAs in 11 species, curated from more than 1300 articles. This database provides a user-friendly interface for browsing and searching and allows for data downloading flexibility. In addition, NSDNA offers a submission page for researchers to submit novel NSD–ncRNA associations. It represents an extremely useful and valuable resource for researchers who seek to understand the functions and molecular mechanisms of ncRNA involved in NSDs. PMID:27899613

  20. Workplace flexibility, work hours, and work-life conflict: finding an extra day or two.

    PubMed

    Hill, E Jeffrey; Erickson, Jenet Jacob; Holmes, Erin K; Ferris, Maria

    2010-06-01

    This study explores the influence of workplace flexibility on work-life conflict for a global sample of workers from four groups of countries. Data are from the 2007 International Business Machines Global Work and Life Issues Survey administered in 75 countries (N = 24,436). We specifically examine flexibility in where (work-at-home) and when (perceived schedule flexibility) workers engage in work-related tasks. Multivariate results indicate that work-at-home and perceived schedule flexibility are generally related to less work-life conflict. Break point analyses of sub-groups reveal that employees with workplace flexibility are able to work longer hours (often equivalent to one or two 8-hr days more per week) before reporting work-life conflict. The benefit of work-at-home is increased when combined with schedule flexibility. These findings were generally consistent across all four groups of countries, supporting the case that workplace flexibility is beneficial both to individuals (in the form of reduced work-life conflict) and to businesses (in the form of capacity for longer work hours). However, work-at-home appears less beneficial in countries with collectivist cultures. (c) 2010 APA, all rights reserved.

  1. Reactome Pengine: A web-logic API to the homo sapiens reactome.

    PubMed

    Neaves, Samuel R; Tsoka, Sophia; Millard, Louise A C

    2018-03-30

    Existing ways of accessing data from the Reactome database are limited. Either a researcher is restricted to particular queries defined by a web application programming interface (API), or they have to download the whole database. Reactome Pengine is a web service providing a logic programming based API to the human reactome. This gives researchers greater flexibility in data access than existing APIs, as users can send their own small programs (alongside queries) to Reactome Pengine. The server and an example notebook can be found at https://apps.nms.kcl.ac.uk/reactome-pengine. Source code is available at https://github.com/samwalrus/reactome-pengine and a Docker image is available at https://hub.docker.com/r/samneaves/rp4/ . samuel.neaves@kcl.ac.uk. Supplementary data are available at Bioinformatics online.

  2. [Current developments in the German Perinatal Survey. Modular analysis tools operating on a database platform].

    PubMed

    Lack, N

    2001-08-01

The introduction of the modified data set for quality assurance in obstetrics (formerly the perinatal survey) in Lower Saxony and Bavaria as early as 1999 created an urgent need for a correspondingly new statistical analysis of the revised data. The general outline of a new data reporting concept was originally presented by the Bavarian Commission for Perinatology and Neonatology at the Munich Perinatal Conference in November 1997. These ideas shaped the content and layout of the new quality report for obstetrics, currently in its nationwide harmonisation phase coordinated by the federal office for quality assurance in hospital care. A flexible and modular database-oriented analysis tool developed in Bavaria is now in its second year of successful operation. The functionalities of this system are described in detail.

  3. New methods in iris recognition.

    PubMed

    Daugman, John

    2007-10-01

    This paper presents the following four advances in iris recognition: 1) more disciplined methods for detecting and faithfully modeling the iris inner and outer boundaries with active contours, leading to more flexible embedded coordinate systems; 2) Fourier-based methods for solving problems in iris trigonometry and projective geometry, allowing off-axis gaze to be handled by detecting it and "rotating" the eye into orthographic perspective; 3) statistical inference methods for detecting and excluding eyelashes; and 4) exploration of score normalizations, depending on the amount of iris data that is available in images and the required scale of database search. Statistical results are presented based on 200 billion iris cross-comparisons that were generated from 632500 irises in the United Arab Emirates database to analyze the normalization issues raised in different regions of receiver operating characteristic curves.

  4. [Design and development of an online system of parasite's images for training and evaluation].

    PubMed

    Yuan-Chun, Mao; Sui, Xu; Jie, Wang; Hua-Yun, Zhou; Jun, Cao

    2017-08-08

To design and develop an online training and evaluation system for parasitic pathogen recognition. The system was based on the Parasitic Diseases Specimen Image Digitization Construction Database, using MySQL 5.0 as the database development software and PHP 5 as the interface development language. It was mainly used for online training and evaluation of parasitic pathology diagnostic techniques. The system interface was designed to be simple, flexible, and easy for medical staff to operate. It enabled round-the-clock access to online training and evaluation, and thus broke the time and space constraints of traditional training models. The system provides a shared platform for professional training in parasitic diseases, and a reference for other training tasks.

  5. Flexible carbon-based ohmic contacts for organic transistors

    NASA Technical Reports Server (NTRS)

    Brandon, Erik (Inventor)

    2007-01-01

    The present invention relates to a system and method of organic thin-film transistors (OTFTs). More specifically, the present invention relates to employing a flexible, conductive particle-polymer composite material for ohmic contacts (i.e. drain and source).

  6. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    USGS Publications Warehouse

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  7. A Flexible Online Metadata Editing and Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilar, Raul; Pan, Jerry Yun; Gries, Corinna

    2010-01-01

A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility of adding more functionality at a later stage. The system consists of a desktop design tool, or schema walker, used to generate code for the actual online editor; a native XML database; and an online user access management application. The design tool is a Java Swing application that reads an XML schema and provides the designer with options to combine input fields into online forms and give the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The generated code is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time; and second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. However, the customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button, independently of validity. Currently the system uses the open-source XML database eXist for storage and management, which comes with third-party online and desktop management tools.
However, access to metadata files in the application introduced here is managed in a custom online module, using a MySQL backend accessed by a simple Java Server Faces front end. A flexible system with three grouping options, organization, group and single editing access, is provided. Three levels were chosen to distribute administrative responsibilities and handle the common situation of an information manager entering the bulk of the metadata but leaving specifics to the actual data provider.

  8. Development of the Global Earthquake Model’s neotectonic fault database

    USGS Publications Warehouse

    Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert

    2015-01-01

The Global Earthquake Model (GEM) aims to develop uniform, openly available standards, datasets and tools for worldwide seismic risk assessment through global collaboration, transparent communication and adapting state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM’s global hazard module projects. This paper describes GFE’s development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic fault and fold observations, and the other to calculate potential earthquake fault sources from the observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world’s approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose that the database structure be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.

  9. RegNetwork: an integrated database of transcriptional and post-transcriptional regulatory networks in human and mouse

    PubMed Central

    Liu, Zhi-Ping; Wu, Canglin; Miao, Hongyu; Wu, Hulin

    2015-01-01

Transcriptional and post-transcriptional regulation of gene expression is of fundamental importance to numerous biological processes. Nowadays, an increasing amount of gene regulatory relationships have been documented in various databases and literature. However, to more efficiently exploit such knowledge for biomedical research and applications, it is necessary to construct a genome-wide regulatory network database to integrate the information on gene regulatory relationships that are widely scattered in many different places. Therefore, in this work, we build a knowledge-based database, named ‘RegNetwork’, of gene regulatory networks for human and mouse by collecting and integrating the documented regulatory interactions among transcription factors (TFs), microRNAs (miRNAs) and target genes from 25 selected databases. Moreover, we also inferred and incorporated potential regulatory relationships based on transcription factor binding site (TFBS) motifs into RegNetwork. As a result, RegNetwork contains a comprehensive set of experimentally observed or predicted transcriptional and post-transcriptional regulatory relationships, and the database framework is flexibly designed for potential extensions to include gene regulatory networks for other organisms in the future. Based on RegNetwork, we characterized the statistical and topological properties of genome-wide regulatory networks for human and mouse, and we also extracted and interpreted simple yet important network motifs that involve the interplay between TFs, miRNAs and their targets. In summary, RegNetwork provides an integrated resource on the prior information for gene regulatory relationships, and it enables us to further investigate context-specific transcriptional and post-transcriptional regulatory interactions based on domain-specific experimental data. Database URL: http://www.regnetworkweb.org PMID:26424082

  10. Analyses of balance and flexibility of obese patients undergoing bariatric surgery

    PubMed Central

    Benetti, Fernanda Antico; Bacha, Ivan Leo; Junior, Arthur Belarmino Garrido; Greve, Júlia Maria D'Andréa

    2016-01-01

OBJECTIVE: To assess the postural control and flexibility of obese subjects before and both six and 12 months after bariatric surgery, and to verify whether postural control is related to flexibility following the weight reduction resulting from bariatric surgery. METHODS: The sample consisted of 16 subjects who had undergone bariatric surgery. All assessments were performed before and six and 12 months after bariatric surgery. Postural balance was assessed using an Accusuway® portable force platform, and flexibility was assessed using a standard chair sit-and-reach test (Wells' chair). RESULTS: With the force platform, no differences were observed in the displacement area or velocity of the center of pressure in the mediolateral and anteroposterior directions. The displacement velocity of the center of pressure was decreased at six months after surgery but was unchanged from baseline at 12 months post-surgery. Flexibility increased over time according to the three measurements tested. CONCLUSIONS: Static postural balance did not change. The velocity of postural adjustment responses was increased at six months after surgery. Therefore, weight loss promotes increased flexibility. Yet, improvements in flexibility are not related to improvements in balance. PMID:26934236

  11. Migration from relational to NoSQL database

    NASA Astrophysics Data System (ADS)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

Data generated by various real-time applications, social networking sites and sensor devices is huge in volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a very precious component of any application and needs to be analysed after being arranged in some structure. Relational databases are only able to deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow data to be stored in NoSQL databases to handle unstructured data. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have proposed mechanisms for the co-existence of NoSQL and relational databases. This paper provides a summary of mechanisms which can be used for mapping data stored in relational databases to NoSQL databases. Various techniques for data transformation and middle-layer solutions are summarised in the paper.
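
One mapping that migration frameworks of this kind commonly apply is denormalization: parent rows and their foreign-key children become a single nested document. A minimal sketch, with invented table names and rows:

```python
# Illustrative relational rows; the schema and names are invented for the sketch.
customers = [{"id": 1, "name": "Ada"}]
orders = [
    {"id": 10, "customer_id": 1, "total": 25.0},
    {"id": 11, "customer_id": 1, "total": 40.0},
]

def to_documents(parents, children, fk, embed_key):
    """Denormalize: embed each parent's child rows as a nested list,
    the way a document store such as MongoDB would hold them."""
    docs = []
    for p in parents:
        doc = dict(p)
        doc[embed_key] = [c for c in children if c[fk] == p["id"]]
        docs.append(doc)
    return docs

docs = to_documents(customers, orders, "customer_id", "orders")
```

The join that the relational model performs at query time is paid once at migration time instead, which is the usual trade-off these mapping mechanisms make.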

  12. Automating Relational Database Design for Microcomputer Users.

    ERIC Educational Resources Information Center

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  13. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    PubMed

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, in particular in life-science databases, is a valuable major resource. In order to bring it to the scientist's desktop, it is essential to have well-performing search engines. Here, neither the response time nor the number of results is the decisive factor; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observing user behavior during the inspection of search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists, who briefly screen database entries for potential relevance. The features are both sufficient to estimate the potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks that were trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
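
The shape of such a relevance prediction function can be illustrated with a single logistic unit: feature values in, a score in (0, 1) out. This is a hedged stand-in for the trained network; the three features and the weights below are invented, not the nine LAILAPS features.

```python
import math

def relevance(features, weights, bias=0.0):
    """Map feature values to a relevance score in (0, 1) via a logistic unit.
    A trained neural network generalizes this with hidden layers."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features: (query term in title, normalized term frequency,
# entry-length penalty), with made-up weights standing in for trained ones.
score = relevance([1.0, 0.6, 0.2], [2.0, 1.5, -0.5])
```

Training then amounts to fitting the weights so that entries the reference set marks as relevant receive scores near 1, a standard regression setup.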

  14. Potential energy hypersurface and molecular flexibility

    NASA Astrophysics Data System (ADS)

    Koča, Jaroslav

    1993-02-01

The molecular flexibility phenomenon is discussed from the point of view of the conformational potential energy (hyper)surface (PES). Flexibility is considered as a product of three terms: thermodynamic, kinetic and geometrical. Several expressions characterizing absolute and relative molecular flexibility are introduced, depending on the subspace of the entire conformational space studied, the energy level E of the PES, and the absolute temperature. Results obtained by the programs DAISY, CICADA and PANIC in conjunction with the molecular mechanics program MMX for the flexibility analysis of the isopentane, 2,2-dimethylpentane and isohexane molecules are presented.

  15. A simple versatile solution for collecting multidimensional clinical data based on the CakePHP web application framework.

    PubMed

    Biermann, Martin

    2014-04-01

Clinical trials aiming for regulatory approval of a therapeutic agent must be conducted according to Good Clinical Practice (GCP). Clinical Data Management Systems (CDMS) are specialized software solutions geared toward GCP trials. They are, however, less suited for data management in small non-GCP research projects. For use in researcher-initiated non-GCP studies, we developed a client-server database application based on the public-domain CakePHP framework. The underlying MySQL database uses a simple data model based on only five data tables. The graphical user interface can be run in any web browser inside the hospital network. Data are validated upon entry. Data contained in external database systems can be imported interactively. Data are automatically anonymized on import, with the key lists identifying the subjects logged to a restricted part of the database. Data analysis is performed by separate statistics and analysis software connecting to the database via a generic Open Database Connectivity (ODBC) interface. Since its first pilot implementation in 2011, the solution has been applied to seven different clinical research projects covering different clinical problems in different organ systems, such as cancer of the thyroid and prostate glands. This paper shows how the adoption of a generic web application framework is a feasible, flexible, low-cost, and user-friendly way of managing multidimensional research data in researcher-initiated non-GCP clinical projects. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
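
The anonymize-on-import step described above can be sketched as follows. This is an illustration only: the field names, salt handling, and record layout are invented, not the paper's PHP implementation.

```python
import hashlib

def anonymize(records, salt="project-secret"):
    """Replace subject identifiers with salted-hash pseudonyms. Returns the
    public records plus the key list that would be kept in the restricted
    part of the database, mirroring the import step described above."""
    key_list, public = {}, []
    for rec in records:
        pseudo = hashlib.sha256((salt + rec["subject_id"]).encode()).hexdigest()[:12]
        key_list[pseudo] = rec["subject_id"]       # restricted: pseudonym -> identity
        public.append({**rec, "subject_id": pseudo})  # clinical data, de-identified
    return public, key_list

# Hypothetical imported record with one clinical value.
public, key_list = anonymize([{"subject_id": "NO-1234", "tsh": 2.1}])
```

Keeping the key list in a separately access-controlled table is what allows later re-identification by authorized staff while the analysis-facing tables stay anonymous.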

  16. ANCAC: amino acid, nucleotide, and codon analysis of COGs--a tool for sequence bias analysis in microbial orthologs.

    PubMed

    Meiler, Arno; Klinger, Claudia; Kaufmann, Michael

    2012-09-08

The COG database is the most popular collection of orthologous proteins from many different completely sequenced microbial genomes. Per definition, a cluster of orthologous groups (COG) within this database exclusively contains proteins that most likely achieve the same cellular function. Recently, the COG database was extended by assigning to every protein both the corresponding amino acid and its encoding nucleotide sequence, resulting in the NUCOCOG database. This extended version of the COG database is a valuable resource connecting sequence features with the functionality of the respective proteins. Here we present ANCAC, a web tool and MySQL database for the analysis of amino acid, nucleotide, and codon frequencies in COGs on the basis of freely definable phylogenetic patterns. We demonstrate the usefulness of ANCAC by analyzing amino acid frequencies, codon usage, and GC-content in a species- or function-specific context. With respect to amino acids we, at least in part, confirm the cognate bias hypothesis by using ANCAC's NUCOCOG dataset as the largest one available for that purpose thus far. Using the NUCOCOG datasets, ANCAC connects taxonomic, amino acid, and nucleotide sequence information with the functional classification via COGs and provides a GUI for flexible mining for sequence bias. Thereby, to our knowledge, it is the only tool for the analysis of sequence composition in the light of physiological roles and phylogenetic context without requiring substantial programming skills.

  17. ANCAC: amino acid, nucleotide, and codon analysis of COGs – a tool for sequence bias analysis in microbial orthologs

    PubMed Central

    2012-01-01

Background The COG database is the most popular collection of orthologous proteins from many different completely sequenced microbial genomes. Per definition, a cluster of orthologous groups (COG) within this database exclusively contains proteins that most likely achieve the same cellular function. Recently, the COG database was extended by assigning to every protein both the corresponding amino acid and its encoding nucleotide sequence, resulting in the NUCOCOG database. This extended version of the COG database is a valuable resource connecting sequence features with the functionality of the respective proteins. Results Here we present ANCAC, a web tool and MySQL database for the analysis of amino acid, nucleotide, and codon frequencies in COGs on the basis of freely definable phylogenetic patterns. We demonstrate the usefulness of ANCAC by analyzing amino acid frequencies, codon usage, and GC-content in a species- or function-specific context. With respect to amino acids we, at least in part, confirm the cognate bias hypothesis by using ANCAC’s NUCOCOG dataset as the largest one available for that purpose thus far. Conclusions Using the NUCOCOG datasets, ANCAC connects taxonomic, amino acid, and nucleotide sequence information with the functional classification via COGs and provides a GUI for flexible mining for sequence bias. Thereby, to our knowledge, it is the only tool for the analysis of sequence composition in the light of physiological roles and phylogenetic context without requiring substantial programming skills. PMID:22958836

  18. A Contextual Behavior Science Framework for Understanding How Behavioral Flexibility Relates to Anxiety.

    PubMed

    Palm Reed, Kathleen M; Cameron, Amy Y; Ameral, Victoria E

    2017-09-01

    There is a growing literature focusing on the emerging idea that behavioral flexibility, rather than particular emotion regulation strategies per se, provides greater promise in predicting and influencing anxiety-related psychopathology. Yet this line of research and theoretical analysis appear to be plagued by its own challenges. For example, middle-level constructs, such as behavioral flexibility, are difficult to define, difficult to measure, and difficult to interpret in relation to clinical interventions. A key point that some researchers have made is that previous studies examining flexible use of emotion regulation strategies (or, more broadly, coping) have failed due to a lack of focus on context. That is, examining strategies in isolation of the context in which they are used provides limited information on the suitability, rigid adherence, or effectiveness of a given strategy in that situation. Several of these researchers have proposed the development of new models to define and measure various types of behavioral flexibility. We would like to suggest that an explanation of the phenomenon already exists and that we can go back to our behavioral roots to understand this phenomenon rather than focusing on defining and capturing a new process. Indeed, thorough contextual behavioral analyses already yield a useful account of what has been observed. We will articulate a model explaining behavioral flexibility using a functional, contextual framework, with anxiety-related disorders as an example.

  19. Experimental Evaluation of Fuzzy Logic Control of a Flexible Arm Manipulator

    DTIC Science & Technology

    1993-12-09

    temperature into a fuzzy context), and humidity is musty, Then air conditioner power is high. The database and knowledge base combine to form the...this case, the output, perhaps air conditioner power, would be medium to a degree of 50%. However, as shown in Table 3.2, there are more possible...

  20. A versatile genome-scale PCR-based pipeline for high-definition DNA FISH.

    PubMed

    Bienko, Magda; Crosetto, Nicola; Teytelman, Leonid; Klemm, Sandy; Itzkovitz, Shalev; van Oudenaarden, Alexander

    2013-02-01

    We developed a cost-effective genome-scale PCR-based method for high-definition DNA FISH (HD-FISH). We visualized gene loci with diffraction-limited resolution, chromosomes as spot clusters and single genes together with transcripts by combining HD-FISH with single-molecule RNA FISH. We provide a database of over 4.3 million primer pairs targeting the human and mouse genomes that is readily usable for rapid and flexible generation of probes.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D. P.

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  2. Research Registries: A Tool to Advance Understanding of Rare Neuro-Ophthalmic Diseases

    PubMed Central

    Blankshain, Kimberly D; Moss, Heather E

    2016-01-01

    Background Medical research registries (MRR) are organized systems used to collect, store and analyze patient information. They are important tools for medical research with particular application to the study of rare diseases, including those seen in neuro-ophthalmic practice. Evidence Acquisition Evidence for this review was gathered from the writers' experiences creating a comprehensive neuro-ophthalmology registry and review of the literature. Results MRR are typically observational and prospective databases of de-identified patient information. The structure is flexible and can accommodate a focus on specific diseases or treatments, surveillance of patient populations, physician quality improvement, or recruitment for future studies. They are particularly useful for the study of rare diseases. They can be integrated into the hierarchy of medical research at many levels provided their construction is well organized and they have several key characteristics, including an easily manipulated database, comprehensive information on carefully selected patients, and compliance with human subjects regulations. MRR pertinent to neuro-ophthalmology include the UIC neuro-ophthalmology registry, Susac Syndrome Registry, and Intracranial Hypertension Registry, as well as larger-scale patient outcome registries being developed by professional societies. Conclusion Medical research registries have a variety of forms and applications. With careful planning and clear goals, they are flexible and powerful research tools that can support multiple different study designs, and through this have the potential to advance understanding and care of neuro-ophthalmic diseases. PMID:27389624

  3. Securely and Flexibly Sharing a Biomedical Data Management System

    PubMed Central

    Wang, Fusheng; Hussels, Phillip; Liu, Peiya

    2011-01-01

    Biomedical database systems need not only to address the issues of managing complex data, but also to provide data security and access control. These include not only system-level security, but also instance-level access control, such as access to documents, schemas, or aggregations of information. The latter is becoming more important as multiple users can share a single scientific data management system to conduct their research, while data have to be protected before they are published or IP-protected. This problem is challenging because users' needs for data security vary dramatically from one application to another, in terms of whom to share with, what resources are to be shared, and at what access level. We develop a comprehensive data access framework for SciPort, a biomedical data management system. SciPort provides fine-grained, multi-level, space-based access control of resources not only at the object level (documents and schemas), but also at the space level (resource sets aggregated in a hierarchical way). Furthermore, to simplify the management of users and privileges, a customizable role-based user model is developed. The access control is implemented efficiently by integrating access privileges into the backend XML database, so that efficient queries are supported. The secure access approach we take makes it possible for multiple users to share the same biomedical data management system with flexible access management and high data security. PMID:21625285
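The space-level access model can be approximated with a toy prefix-walk check. This is a hypothetical sketch under the assumption that spaces form a path hierarchy; the names, levels, and class are invented and are not SciPort's code:

```python
class SpaceACL:
    """Toy hierarchical access control: a grant on a space covers all
    resources nested beneath it (path-prefix semantics)."""

    def __init__(self):
        self.grants = {}                      # (user, space path) -> level

    def grant(self, user, space, level):      # e.g. level 1 = read, 2 = write
        self.grants[(user, space)] = level

    def can(self, user, resource, level):
        parts = resource.split("/")
        for i in range(len(parts), 0, -1):    # walk up toward the root space
            granted = self.grants.get((user, "/".join(parts[:i])))
            if granted is not None:
                return granted >= level       # nearest enclosing grant wins
        return False

acl = SpaceACL()
acl.grant("alice", "projects/liver", 2)
ok = acl.can("alice", "projects/liver/doc1", 1)       # covered by the space grant
denied = acl.can("alice", "projects/brain/doc2", 1)   # no grant on this subtree
```

A role layer, as in the paper, would map roles to such grants rather than assigning them per user.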

  4. Research Registries: A Tool to Advance Understanding of Rare Neuro-Ophthalmic Diseases.

    PubMed

    Blankshain, Kimberly D; Moss, Heather E

    2016-09-01

    Medical research registries (MRR) are organized systems used to collect, store, and analyze patient information. They are important tools for medical research with particular application to the study of rare diseases, including those seen in neuro-ophthalmic practice. Evidence for this review was gathered from the writers' experiences creating a comprehensive neuro-ophthalmology registry and review of the literature. MRR are typically observational and prospective databases of de-identified patient information. The structure is flexible and can accommodate a focus on specific diseases or treatments, surveillance of patient populations, physician quality improvement, or recruitment for future studies. They are particularly useful for the study of rare diseases. They can be integrated into the hierarchy of medical research at many levels provided their construction is well organized and they have several key characteristics, including an easily manipulated database, comprehensive information on carefully selected patients, and compliance with human subjects regulations. MRR pertinent to neuro-ophthalmology include the University of Illinois at Chicago neuro-ophthalmology registry, Susac Syndrome Registry, Intracranial Hypertension Registry, and larger-scale patient outcome registries being developed by professional societies. MRR have a variety of forms and applications. With careful planning and clear goals, they are flexible and powerful research tools that can support multiple different study designs, and through this they have the potential to advance understanding and care of neuro-ophthalmic diseases.

  5. Anisotropy and non-homogeneity of an Allomyrina Dichotoma beetle hind wing membrane.

    PubMed

    Ha, N S; Jin, T L; Goo, N S; Park, H C

    2011-12-01

    Biomimetics is one of the most important paradigms guiding researchers as they seek to improve on engineering designs, yet the systematic observation of insect flight is relatively recent work. Several researchers have tried to address the aerodynamic performance of flapping creatures and other natural properties of insects, although there are still many unsolved questions. In this study, we try to answer questions related to the mechanical properties of a beetle's hind wing, which consists of a stiff vein structure and a flexible membrane. The membrane of a beetle's hind wing is so small and flexible that conventional methods cannot adequately quantify its material properties. The digital image correlation method, a non-contact displacement measurement method, is therefore used along with a specially designed mini-tensile testing system. To reduce end effects, we developed an experimental method that can handle specimens with as high an aspect ratio as possible. Young's modulus varies over the area of the wing, ranging from 2.97 to 4.5 GPa in the chordwise direction and from 1.63 to 2.24 GPa in the spanwise direction. Furthermore, Poisson's ratio in the chordwise direction is 0.63-0.73, approximately twice that in the spanwise direction (0.33-0.39). From these results, we conclude that the membrane of a beetle's hind wing is an anisotropic and non-homogeneous material. Our results will provide a better understanding of the flapping mechanism through the formulation of fluid-structure interaction or aero-elasticity analyses, valuable data for a biomaterial properties database, and a creative design concept for a micro aerial flapper that mimics an insect.

  6. The attitudes of health care staff to information technology: a comprehensive review of the research literature.

    PubMed

    Ward, Rod; Stevens, Christine; Brentnall, Philip; Briddon, Jason

    2008-06-01

    What does the publicly available literature tell us about the attitudes of health care staff to the development of information technology in practice, including the factors which influence them and the factors which may be used to change these attitudes? Twelve databases were searched for literature published between 2000 and 2005 that identified research related to information technology (IT), health professionals and attitude. English-language studies were included which described primary research relating to the attitudes of one or more health care staff groups towards IT. Letters, personal viewpoints, reflections and opinion pieces were not included. Complex factors contribute to the formation of attitudes towards IT. Many of the issues identified concerned the flexibility of the systems and whether they were 'fit for purpose', along with the confidence and experience of the IT users. The literature suggests that practitioners' attitudes are a significant factor in the acceptance and efficient use of IT in practice. The literature also suggested that education and training were factors in encouraging the use of IT systems. A range of key issues, such as the need for flexibility and usability, appropriate education and training, and the need for the software to be 'fit for purpose', showed that organizations need to plan carefully when proposing the introduction of IT-based systems into work practices. The studies reviewed suggested that the attitudes of health care professionals can be a significant factor in the acceptance and efficient use of IT in practice. Further qualitative and quantitative research is needed into the approaches that have most effect on the attitudes of health care staff towards IT.

  7. Application of a flexible lattice Boltzmann method based simulation tool for modelling physico-chemical processes at different scales

    NASA Astrophysics Data System (ADS)

    Patel, Ravi A.; Perko, Janez; Jacques, Diederik

    2017-04-01

    Often, especially in disciplines related to natural porous media, such as vadose zone or aquifer hydrology or contaminant transport, the spatial and temporal scales on which we need to provide information are larger than the scales where the processes actually occur. Usual techniques for dealing with these problems assume the existence of a representative elementary volume (REV). However, in order to understand the behavior on larger scales it is important to downscale the problem onto the relevant scale of the processes. Due to limited resources (time, memory), downscaling can only be carried to a certain lower scale. At this lower scale several scales may still co-exist - the scale which can be explicitly described, and a scale which needs to be conceptualized by effective properties. Hence, models intended to provide effective properties on relevant scales should be flexible enough to represent complex pore structure by explicit geometry on one side and, on the other, processes defined differently (e.g., by effective properties) which emerge on the lower scale. In this work we present a state-of-the-art lattice Boltzmann method based simulation tool applicable to the advection-diffusion equation coupled to geochemical processes. The lattice Boltzmann transport solver can be coupled with an external geochemical solver, which allows a wide range of geochemical reaction networks to be accounted for through thermodynamic databases. Extension to multiphase systems is ongoing. We provide several examples related to the calculation of effective diffusion properties, permeability, and effective reaction rates at the continuum scale based on the pore-scale geometry.
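As a minimal illustration of the method itself (not the authors' tool, and with no geochemical coupling), a D1Q3 lattice Boltzmann scheme for pure diffusion fits in a few lines; the lattice size and relaxation time below are arbitrary assumptions:

```python
import numpy as np

def lbm_diffuse(rho0, tau, steps):
    """D1Q3 lattice Boltzmann for pure diffusion on a periodic 1-D lattice.
    Lattice diffusivity is D = (tau - 0.5) / 3."""
    w = np.array([2/3, 1/6, 1/6])             # weights: rest, +1, -1 directions
    f = rho0[None, :] * w[:, None]            # start from equilibrium
    for _ in range(steps):
        rho = f.sum(axis=0)                   # recover the density field
        feq = rho[None, :] * w[:, None]       # local equilibrium populations
        f += (feq - f) / tau                  # BGK collision step
        f[1] = np.roll(f[1], 1)               # stream the +1 population
        f[2] = np.roll(f[2], -1)              # stream the -1 population
    return f.sum(axis=0)

rho0 = np.zeros(64)
rho0[32] = 1.0                                # point source in the middle
rho = lbm_diffuse(rho0, tau=1.0, steps=50)    # mass-conserving spread
```

Reactive transport couplings of the kind described replace this pure-diffusion equilibrium with advection-diffusion populations and call an external chemistry solver between streaming steps.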

  8. Collaboration between employers and occupational health service providers: a systematic review of key characteristics.

    PubMed

    Halonen, Jaana I; Atkins, Salla; Hakulinen, Hanna; Pesonen, Sanna; Uitti, Jukka

    2017-01-05

    Employees are major contributors to economic development, and occupational health services (OHS) can have an important role in supporting their health. Key to this is collaboration between employers and OHS. We reviewed the evidence regarding the characteristics of good collaboration between employers and OHS providers that are essential for constructing more effective collaboration and services. A systematic review of the factors of good collaboration between employers and OHS providers was conducted. We searched five databases between January 2000 and March 2016 and back-referenced included articles. Two reviewers evaluated 639 titles, 63 abstracts and 20 full articles, and agreed that six articles, all on qualitative studies, met the predetermined relevance and publication criteria and were included. Data were extracted by one reviewer, checked by a second reviewer, and analysed using thematic analysis. Three themes and nine subthemes related to good collaboration were identified. The first theme covered the time, space and contract requirements for effective collaboration, with three subthemes (i.e., key characteristics): flexible OHS/flexible contracts, including tailor-made services accounting for the needs of the employer; geographical proximity of the stakeholders, allowing easy access to services; and long-term contracts, as collaboration develops over time. The second theme related to the characteristics of the dialogue in effective collaboration: shared goals, reciprocity, frequent contact and trust. According to the third theme, the definition of the stakeholders' roles was important; OHS providers should have competence and knowledge about the workplace, become strategic partners with the employers, and provide quality services. Although literature regarding collaboration between employers and OHS providers was limited, we identified several key factors that contribute to effective collaboration. This information is useful in developing indicators of effective collaboration that will enable organisation of more effective OHS practices.

  9. Multi-Sensor Scene Synthesis and Analysis

    DTIC Science & Technology

    1981-09-01

    Quad Trees for Image Representation and Processing ... Databases ... Definitions and Basic Concepts ... Use of Databases in Hierarchical Scene Analysis ... Use of Relational Tables ... Multisensor Image Database Systems (MIDAS) ... Relational Database System for Pictures ... Relational Pictorial Database

  10. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. Documentation on the database may be accessed through programming languages such as C, Java, and IDL, or through ODBC compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
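The cross-shot style of query this enables can be sketched with SQLite; the table name and columns here are invented for illustration and do not reproduce the DIII-D schema:

```python
import sqlite3

# One row of summary physics quantities per discharge, so a single SQL
# statement can mine across many shots at once.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shots (shot INTEGER PRIMARY KEY, ip_max REAL, beta_n REAL)")
con.executemany("INSERT INTO shots VALUES (?, ?, ?)",
                [(100001, 1.2, 2.1), (100002, 1.6, 2.8), (100003, 0.9, 1.5)])

# Data mining across shots: every discharge whose peak plasma current
# exceeded a threshold, with its normalized beta.
rows = con.execute(
    "SELECT shot, beta_n FROM shots WHERE ip_max > ? ORDER BY shot", (1.0,)
).fetchall()
```

The same statement works unchanged whether the table holds three shots or a whole campaign, which is the point of keeping per-discharge summaries in a relational store.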

  11. Induced liquid-crystalline ordering in solutions of stiff and flexible amphiphilic macromolecules: Effect of mixture composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glagolev, Mikhail K.; Vasilevskaya, Valentina V., E-mail: vvvas@polly.phys.msu.ru; Khokhlov, Alexei R.

    Impact of mixture composition on self-organization in concentrated solutions of stiff helical and flexible macromolecules was studied by means of molecular dynamics simulation. The macromolecules were composed of identical amphiphilic monomer units, but a fraction f of the macromolecules had stiff helical backbones while the remaining chains were flexible. In poor solvents the compacted flexible macromolecules coexist with bundles or filament clusters formed from a few intertwined stiff helical macromolecules. Increasing the relative content f of helical macromolecules leads to an increase in the length of the helical clusters, to alignment of clusters with each other, and then to liquid-crystalline-like ordering along a single direction. The formation of filament clusters causes segregation of helical and flexible macromolecules, and the alignment of the filaments induces effective liquid-like ordering of the flexible macromolecules. A visual analysis and calculation of an order parameter reflecting the anisotropy of diffraction lead to the conclusion that the transition from the disordered to the liquid-crystalline state proceeds sharply at a relatively low content of the stiff component.

  12. Neural correlates of reappraisal considering working memory capacity and cognitive flexibility.

    PubMed

    Zaehringer, Jenny; Falquez, Rosalux; Schubert, Anna-Lena; Nees, Frauke; Barnow, Sven

    2018-01-09

    Cognitive reappraisal of emotion is strongly related to long-term mental health. Therefore, the exploration of underlying cognitive and neural mechanisms has become an essential focus of research. Considering that reappraisal and executive functions rely on a similar brain network, the question arises whether behavioral differences in executive functions modulate neural activity during reappraisal. Using functional neuroimaging, the present study aimed to analyze the role of working memory capacity (WMC) and cognitive flexibility in brain activity during down-regulation of negative emotions by reappraisal in N = 20 healthy participants. Results suggest that WMC and cognitive flexibility were negatively correlated with prefrontal activity during the reappraisal condition. Results also revealed a negative correlation between cognitive flexibility and amygdala activation. These findings provide first hints that (1) individuals with lower WMC and lower cognitive flexibility might need more higher-order cognitive neural resources in order to down-regulate negative emotions and (2) cognitive flexibility relates to emotional reactivity during reappraisal.

  13. NALDB: nucleic acid ligand database for small molecules targeting nucleic acid.

    PubMed

    Kumar Mishra, Subodh; Kumar, Amit

    2016-01-01

    The nucleic acid ligand database (NALDB) is a unique database that provides detailed information about the experimental data of small molecules reported to target several types of nucleic acid structures. NALDB is the first ligand database that contains ligand information for all types of nucleic acid. NALDB contains more than 3500 ligand entries with detailed pharmacokinetic and pharmacodynamic information such as target name, target sequence, ligand 2D/3D structure, SMILES, molecular formula, molecular weight, net formal charge, AlogP, number of rings, number of hydrogen bond donors and acceptors, and potential energy, along with their Ki, Kd, and IC50 values. All these details on a single platform will be helpful for the development and improvement of novel ligands targeting nucleic acids, which could serve as potential targets in different diseases including cancers and neurological disorders. With a maximum of 255 conformers for each ligand entry, our database is a multi-conformer database and can facilitate the virtual screening process. NALDB provides powerful web-based search tools that make database searching efficient and simple, with options for text as well as structure queries. NALDB also provides a multi-dimensional advanced search tool that can screen the database molecules on the basis of molecular properties of a ligand provided by database users. A 3D structure visualization tool has also been included for 3D structure representation of ligands. NALDB offers inclusive pharmacological information and a structurally flexible set of small molecules with their three-dimensional conformers that can accelerate virtual screening and other modeling processes and eventually complement nucleic acid-based drug discovery research. NALDB is routinely updated and freely available at bsbe.iiti.ac.in/bsbe/naldb/HOME.php. Database URL: http://bsbe.iiti.ac.in/bsbe/naldb/HOME.php. © The Author(s) 2016. Published by Oxford University Press.
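The multi-dimensional advanced search described above amounts to range filtering over stored descriptors. A hedged sketch with invented entries (these are not NALDB records, and the helper is hypothetical):

```python
# Invented records standing in for NALDB-style ligand entries.
ligands = [
    {"name": "L1", "mw": 320.4, "alogp": 2.1, "hbd": 2, "hba": 5},
    {"name": "L2", "mw": 612.9, "alogp": 5.6, "hbd": 6, "hba": 11},
    {"name": "L3", "mw": 441.2, "alogp": 3.8, "hbd": 3, "hba": 7},
]

def property_filter(entries, **bounds):
    """Keep entries whose descriptors fall inside inclusive (lo, hi) bounds."""
    def inside(entry):
        return all(lo <= entry[key] <= hi for key, (lo, hi) in bounds.items())
    return [entry["name"] for entry in entries if inside(entry)]

# Screen on molecular weight and AlogP simultaneously.
hits = property_filter(ligands, mw=(0, 500), alogp=(0, 5))
```

Adding more keyword bounds (hbd, hba, ring count, and so on) narrows the screen one dimension at a time, which mirrors how such a search form composes its criteria.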

  14. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E. F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and the performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
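Codd's point about flat tables with no physical links is easy to demonstrate: the relationship between two relations exists only at query time, through a join on values (toy schema in SQLite, names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE emp (name TEXT, dept_id INTEGER)")
con.execute("INSERT INTO dept VALUES (1, 'Research'), (2, 'Sales')")
con.execute("INSERT INTO emp VALUES ('Codd', 1), ('Chen', 2)")

# No pointer connects emp to dept; the join condition alone relates them.
rows = con.execute(
    "SELECT e.name, d.name FROM emp e JOIN dept d ON e.dept_id = d.id "
    "ORDER BY e.name"
).fetchall()
```

Contrast this with the hierarchical and network models, where the programmer had to navigate explicit parent-child or set links stored with the data.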

  15. Apparatus And Method Of Using Flexible Printed Circuit Board In Optical Transceiver Device

    DOEpatents

    Anderson, Gene R.; Armendariz, Marcelino G.; Bryan, Robert P.; Carson, Richard F.; Duckett, III, Edwin B.; McCormick, Frederick B.; Peterson, David W.; Peterson, Gary D.; Reysen, Bill H.

    2005-03-15

    This invention relates to a flexible printed circuit board that is used in connection with an optical transmitter, receiver or transceiver module. In one embodiment, the flexible printed circuit board has flexible metal layers in between flexible insulating layers, and the circuit board comprises: (1) a main body region orientated in a first direction having at least one electrical or optoelectronic device; (2) a plurality of electrical contact pads integrated into the main body region, where the electrical contact pads function to connect the flexible printed circuit board to an external environment; (3) a buckle region extending from one end of the main body region; and (4) a head region extending from one end of the buckle region, where the head region is orientated at an angle relative to the direction of the main body region. The electrical contact pads may be ball grid arrays, solder balls or land-grid arrays, and they function to connect the circuit board to an external environment. A driver or amplifier chip may be adapted to the head region of the flexible printed circuit board. In another embodiment, a heat spreader passes along a surface of the head region of the flexible printed circuit board, and a window is formed in the head region of the flexible printed circuit board. Optoelectronic devices are attached to the heat spreader in such a manner that they are accessible through the window in the flexible printed circuit board.

  16. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    NASA Astrophysics Data System (ADS)

    Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal

    2005-11-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.
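An XML round trip of the kind GEMINI uses to pass results between stand-alone modules can be sketched with the Python standard library; the element and attribute names below are invented, not GEMINI's actual schema:

```python
import io
import xml.etree.ElementTree as ET

def save_results(dest, wells):
    """Serialize per-well results so another module can load them later."""
    root = ET.Element("results")
    for well in wells:
        ET.SubElement(root, "well", name=well["name"],
                      porosity=str(well["porosity"]))
    ET.ElementTree(root).write(dest)

def load_results(src):
    """Parse results saved by save_results back into plain records."""
    root = ET.parse(src).getroot()
    return [{"name": w.get("name"), "porosity": float(w.get("porosity"))}
            for w in root.iter("well")]

buf = io.BytesIO()                    # stands in for a file or HTTP response body
save_results(buf, [{"name": "W-1", "porosity": 0.18}])
buf.seek(0)
wells = load_results(buf)
```

Because the intermediate form is plain XML, either side of the exchange can live on the user's PC, on the server, or behind a remote database without the other side changing.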

  17. LincSNP 2.0: an updated database for linking disease-associated SNPs to human long non-coding RNAs and their TFBSs.

    PubMed

    Ning, Shangwei; Yue, Ming; Wang, Peng; Liu, Yue; Zhi, Hui; Zhang, Yan; Zhang, Jizhou; Gao, Yue; Guo, Maoni; Zhou, Dianshuang; Li, Xin; Li, Xia

    2017-01-04

    We describe LincSNP 2.0 (http://bioinfo.hrbmu.edu.cn/LincSNP), an updated database that is used specifically to store and annotate disease-associated single nucleotide polymorphisms (SNPs) in human long non-coding RNAs (lncRNAs) and their transcription factor binding sites (TFBSs). In LincSNP 2.0, we have updated the database with more data and several new features, including (i) expanding disease-associated SNPs in human lncRNAs; (ii) identifying disease-associated SNPs in lncRNA TFBSs; (iii) updating LD-SNPs from the 1000 Genomes Project; and (iv) collecting more experimentally supported SNP-lncRNA-disease associations. Furthermore, we developed three flexible online tools to retrieve and analyze the data. Linc-Mart is a convenient way for users to customize their own data. Linc-Browse is a tool for all data visualization. Linc-Score predicts the associations between lncRNA and disease. In addition, we provided users a newly designed, user-friendly interface to search and download all the data in LincSNP 2.0 and we also provided an interface to submit novel data into the database. LincSNP 2.0 is a continually updated database and will serve as an important resource for investigating the functions and mechanisms of lncRNAs in human diseases. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    USGS Publications Warehouse

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. ?? 2005 Elsevier Ltd. All rights reserved.

  19. Web Proxy Auto Discovery for the WLCG

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. 
The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
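    The geographic ordering described above can be sketched as a great-circle-distance sort. This is an illustrative sketch only: the proxy URLs and coordinates are hypothetical, and it assumes the client's position has already been resolved from an IP geolocation database.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def order_proxies(client, proxies):
    # Sort candidate proxies by distance from the client's geolocated position.
    return sorted(proxies, key=lambda p: haversine_km(client[0], client[1], p["lat"], p["lon"]))

# Hypothetical proxy entries; the real service resolves coordinates from a
# database mapping IP address ranges to longitude and latitude.
proxies = [
    {"url": "http://proxy-eu.example.org:3128", "lat": 46.2, "lon": 6.1},
    {"url": "http://proxy-us.example.org:3128", "lat": 41.8, "lon": -88.3},
]
nearest = order_proxies((48.9, 2.4), proxies)  # a client near Paris
```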

  20. Web Proxy Auto Discovery for the WLCG

    DOE PAGES

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; ...

    2017-11-23

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.

  2. Nanoscale electromechanical parametric amplifier

    DOEpatents

    Aleman, Benjamin Jose; Zettl, Alexander

    2016-09-20

    This disclosure provides systems, methods, and apparatus related to a parametric amplifier. In one aspect, a device includes an electron source electrode, a counter electrode, and a pumping electrode. The electron source electrode may include a conductive base and a flexible conductor. The flexible conductor may have a first end and a second end, with the second end of the flexible conductor being coupled to the conductive base. A cross-sectional dimension of the flexible conductor may be less than about 100 nanometers. The counter electrode may be disposed proximate the first end of the flexible conductor and spaced a first distance from the first end of the flexible conductor. The pumping electrode may be disposed proximate a length of the flexible conductor and spaced a second distance from the flexible conductor.

  3. Dynamics of flexible fibers and vesicles in Poiseuille flow at low Reynolds number.

    PubMed

    Farutin, Alexander; Piasecki, Tomasz; Słowicka, Agnieszka M; Misbah, Chaouqi; Wajnryb, Eligiusz; Ekiel-Jeżewska, Maria L

    2016-09-21

    The dynamics of flexible fibers and vesicles in unbounded planar Poiseuille flow at low Reynolds number is shown to exhibit similar basic features, when their equilibrium (moderate) aspect ratio is the same and vesicle viscosity contrast is relatively high. Tumbling, lateral migration, accumulation and shape evolution of these two types of flexible objects are analyzed numerically. The linear dependence of the accumulation position on relative bending rigidity, and other universal scalings are derived from the local shear flow approximation.

  4. Making geospatial data in ASF archive readily accessible

    NASA Astrophysics Data System (ADS)

    Gens, R.; Hogenson, K.; Wolf, V. G.; Drew, L.; Stern, T.; Stoner, M.; Shapran, M.

    2015-12-01

    The way geospatial data is searched, managed, processed and used has changed significantly in recent years. A data archive such as the one at the Alaska Satellite Facility (ASF), one of NASA's twelve interlinked Distributed Active Archive Centers (DAACs), used to be searched solely via user interfaces that were specifically developed for its particular archive and data sets. ASF then moved to using an application programming interface (API) that defined a set of routines, protocols, and tools for distributing the geospatial information stored in the database in real time. This provided more flexible access to the geospatial data. Yet, it was up to users to develop the tools needed for more tailored access to the data. We present two new approaches for serving data to users. In response to the recent Nepal earthquake we developed a data feed for distributing ESA's Sentinel data. Users can subscribe to the data feed and are provided with the relevant metadata the moment a new data set is available for download. The second approach was an Open Geospatial Consortium (OGC) web feature service (WFS). The WFS hosts the metadata along with a direct link from which the data can be downloaded. It uses the open-source GeoServer software (Youngblood and Iacovella, 2013) and provides an interface to include the geospatial information in the archive directly into the user's geographic information system (GIS) as an additional data layer. Both services run on top of a geospatial PostGIS database, an open-source geographic extension for the PostgreSQL object-relational database (Marquez, 2015). Marquez, A., 2015. PostGIS Essentials. Packt Publishing, 198 p. Youngblood, B. and Iacovella, S., 2013. GeoServer Beginner's Guide. Packt Publishing, 350 p.
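    A WFS query like the one the archive exposes is just a parameterized HTTP request. The sketch below builds a standard OGC WFS 2.0 GetFeature URL; the endpoint and layer name are hypothetical, not ASF's actual service.

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, bbox=None, max_features=100):
    # Assemble a WFS 2.0 GetFeature request as key-value parameters.
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,
        "count": max_features,
    }
    if bbox:  # (min_lat, min_lon, max_lat, max_lon)
        params["bbox"] = ",".join(str(c) for c in bbox)
    return base_url + "?" + urlencode(params)

# Hypothetical GeoServer endpoint and layer name, for illustration only.
url = wfs_getfeature_url("https://example.org/geoserver/wfs", "asf:granules",
                         bbox=(60.0, -150.0, 65.0, -140.0))
```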

  5. Investigation of blended learning video resources to teach health students clinical skills: An integrative review.

    PubMed

    Coyne, Elisabeth; Rands, Hazel; Frommolt, Valda; Kain, Victoria; Plugge, Melanie; Mitchell, Marion

    2018-04-01

    The aim of this review is to inform future educational strategies by synthesising research related to blended learning resources using simulation videos to teach clinical skills for health students. An integrative review methodology was used to allow for the combination of diverse research methods to better understand the research topic. This review was guided by the framework described by Whittemore and Knafl (2005). Data sources: a systematic search of the SCOPUS, MEDLINE, COCHRANE and PsycINFO databases was conducted in consultation with a librarian. Keywords and MeSH terms: clinical skills, nursing, health, student, blended learning, video, simulation and teaching. Data extracted from the studies included author, year, aims, design, sample, skill taught, outcome measures and findings. After screening the articles, extracting project data and completing summary tables, critical appraisal of the projects was completed using the Mixed Methods Appraisal Tool (MMAT). Ten articles met all the inclusion criteria and were included in this review. The MMAT scores varied from 50% to 100%. Thematic analysis was undertaken and we identified the following three themes: linking theory to practice, autonomy of learning and challenges of developing a blended learning model. Blended learning allowed for different student learning styles, repeated viewing, and enabled links between theory and practice. The video presentation needed to be realistic and culturally appropriate and this required both time and resources to create. A blended learning model, which incorporates video-assisted online resources, may be a useful tool to teach clinical skills to health students, including nursing students. Blended learning not only increases students' knowledge and skills, but is often preferred by students due to its flexibility. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Development of a national anthropogenic heating database with an extrapolation for international cities

    NASA Astrophysics Data System (ADS)

    Sailor, David J.; Georgescu, Matei; Milne, Jeffrey M.; Hart, Melissa A.

    2015-10-01

    Given the increasing utility of numerical models to examine urban impacts on meteorology and climate, there exists an urgent need for accurate representation of seasonally and diurnally varying anthropogenic heating data, an important component of the urban energy budget for cities across the world. Incorporation of anthropogenic heating data as inputs to existing climate modeling systems has direct societal implications ranging from improved prediction of energy demand to health assessment, but such data are lacking for most cities. To address this deficiency we have applied a standardized procedure to develop a national database of seasonally and diurnally varying anthropogenic heating profiles for 61 of the largest cities in the United States (U.S.). Recognizing the importance of spatial scale, the anthropogenic heating database developed includes the city scale and the accompanying greater metropolitan area. Our analysis reveals that a single profile function can adequately represent anthropogenic heating during summer, but two profile functions are required in winter, one for warm climate cities and another for cold climate cities. On average, although anthropogenic heating is 40% larger in winter than summer, the electricity sector contribution peaks during summer and is smallest in winter. Because such data are similarly required for international cities where urban climate assessments are also ongoing, we have made a simple adjustment accounting for different international energy consumption rates relative to the U.S. to generate seasonally and diurnally varying anthropogenic heating profiles for a range of global cities. The methodological approach presented here is flexible and straightforwardly applicable to cities not modeled because of presently unavailable data. 
Because of the anticipated increase in global urban populations for many decades to come, characterizing this fundamental aspect of the urban environment - anthropogenic heating - is an essential element toward continued progress in urban climate assessment.
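    The international adjustment described above can be sketched as a simple rescaling of a diurnal profile. The hourly values and the energy-consumption ratio below are illustrative assumptions, not figures from the study.

```python
def scale_profile(us_profile, energy_ratio):
    # Scale a 24-value diurnal anthropogenic heating profile (W m^-2)
    # by the target country's energy-consumption rate relative to the U.S.
    return [round(q * energy_ratio, 2) for q in us_profile]

# Illustrative (not measured) hourly summer values for a U.S. city.
us_summer = [10, 8, 7, 7, 8, 12, 20, 35, 45, 50, 52, 53,
             54, 53, 52, 50, 48, 45, 40, 32, 25, 18, 14, 12]

# Hypothetical per-capita energy-use ratio (target country / U.S.).
scaled = scale_profile(us_summer, 0.5)
```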

  7. Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System

    PubMed Central

    Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail

    1988-01-01

    This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
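    The core of the interface pattern, a host language sending SQL as plain ASCII text to an external RDBMS and receiving ASCII rows back, can be sketched in a few lines. Here sqlite3 stands in for SYBASE, and the table and data are hypothetical.

```python
import sqlite3

def execute_sql_text(conn, sql_text):
    # The host language (MUMPS in the paper) generates SQL as ASCII text;
    # the external RDBMS executes it and returns rows, serialized here as
    # tab-delimited ASCII strings for the caller.
    rows = conn.execute(sql_text).fetchall()
    return ["\t".join(str(col) for col in row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Smith')")
result = execute_sql_text(conn, "SELECT id, name FROM patients")
```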

  8. 34 CFR 300.230 - SEA flexibility.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 2 2014-07-01 2013-07-01 true SEA flexibility. 300.230 Section 300.230 Education... DISABILITIES Local Educational Agency Eligibility § 300.230 SEA flexibility. (a) Adjustment to State fiscal... SEA, notwithstanding §§ 300.162 through 300.163 (related to State-level nonsupplanting and maintenance...

  9. 34 CFR 300.230 - SEA flexibility.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 2 2013-07-01 2013-07-01 false SEA flexibility. 300.230 Section 300.230 Education... DISABILITIES Local Educational Agency Eligibility § 300.230 SEA flexibility. (a) Adjustment to State fiscal... SEA, notwithstanding §§ 300.162 through 300.163 (related to State-level nonsupplanting and maintenance...

  10. 34 CFR 300.230 - SEA flexibility.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 2 2012-07-01 2012-07-01 false SEA flexibility. 300.230 Section 300.230 Education... DISABILITIES Local Educational Agency Eligibility § 300.230 SEA flexibility. (a) Adjustment to State fiscal... SEA, notwithstanding §§ 300.162 through 300.163 (related to State-level nonsupplanting and maintenance...

  11. Analysis of lipid experiments (ALEX): a software framework for analysis of high-resolution shotgun lipidomics data.

    PubMed

    Husen, Peter; Tarasov, Kirill; Katafiasz, Maciej; Sokol, Elena; Vogt, Johannes; Baumgart, Jan; Nitsch, Robert; Ekroos, Kim; Ejsing, Christer S

    2013-01-01

    Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.
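    The "database table format" idea, one row per (sample, lipid species) pair so the lipidome can be navigated by any combination of fields, can be sketched as follows. The rows are hypothetical, not data from the ALEX study.

```python
def filter_lipidome(rows, **criteria):
    # Long-format table: each row carries sample, species and intensity,
    # so any selected feature can be used to slice the dataset.
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

# Hypothetical shotgun-lipidomics rows for illustration.
rows = [
    {"sample": "cerebellum_wt", "species": "PC 34:1", "intensity": 1.8e6},
    {"sample": "cerebellum_ko", "species": "PC 34:1", "intensity": 1.1e6},
    {"sample": "hippocampus_wt", "species": "PE 36:2", "intensity": 9.5e5},
]
hits = filter_lipidome(rows, species="PC 34:1")
```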

  12. Compilation and physicochemical classification analysis of a diverse hERG inhibition database

    NASA Astrophysics Data System (ADS)

    Didziapetris, Remigijus; Lanevskij, Kiril

    2016-12-01

    A large and chemically diverse hERG inhibition data set comprised of 6690 compounds was constructed on the basis of the ChEMBL bioactivity database and original publications dealing with experimental determination of hERG activities using patch-clamp and competitive displacement assays. The collected data were converted to binary format at a 10 µM activity threshold and subjected to gradient boosting machine classification analysis using a minimal set of physicochemical and topological descriptors. The tested parameters involved lipophilicity (log P), ionization (pKa), polar surface area, aromaticity, molecular size and flexibility. The employed approach allowed classifying the compounds with an overall 75-80% accuracy, even though it only accounted for non-specific interactions between hERG and ligand molecules. The observed descriptor-response profiles were consistent with common knowledge about the hERG ligand binding site, but also revealed several important quantitative trends, as well as slight inter-assay variability in hERG inhibition data. The results suggest that even weakly basic groups (pKa < 6) might substantially contribute to hERG inhibition potential, whereas the role of lipophilicity depends on the compound's ionization state, and the influence of log P decreases in the order of bases > zwitterions > neutrals > acids. Given its robust performance and clear physicochemical interpretation, the proposed model may provide valuable information to direct drug discovery efforts towards compounds with reduced risk of hERG-related cardiotoxicity.
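    The binarization step, converting measured activities to labels at the 10 µM threshold, is the simplest part of the pipeline and can be sketched directly. The IC50 values below are hypothetical, not compounds from the ChEMBL-derived set.

```python
def binarize_activity(ic50_um, threshold_um=10.0):
    # Compounds with IC50 at or below the threshold are labelled
    # hERG blockers (1); weaker inhibitors are labelled inactive (0).
    return 1 if ic50_um <= threshold_um else 0

# Hypothetical IC50 values in micromolar.
labels = [binarize_activity(x) for x in (0.5, 9.9, 10.0, 42.0)]
```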

  13. A Flexible Electronic Commerce Recommendation System

    NASA Astrophysics Data System (ADS)

    Gong, Songjie

    Recommendation systems have become very popular on E-commerce websites. Many of the largest commerce websites are already using recommender technologies to help their customers find products to purchase. An electronic commerce recommendation system learns from a customer and recommends products that the customer will find most valuable from among the available products. But most recommendation methods are hard-wired into the system and support only fixed recommendations. This paper presents a framework for a flexible electronic commerce recommendation system. The framework is composed of a user model interface, recommendation engine, recommendation strategy model, recommendation technology group, user interest model and database interface. In the recommendation strategy model, the method can be collaborative filtering, content-based filtering, association rule mining, knowledge-based filtering or a hybrid method. The system maps demand to implementation through the strategy model, and the whole system is designed as standard parts to adapt to changes in the recommendation strategy.
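    The "standard parts" design the abstract describes is essentially the strategy pattern: the engine delegates to a pluggable strategy object. This is a minimal sketch with invented class names and a toy content-based strategy, not the paper's actual framework.

```python
class RecommendationStrategy:
    def recommend(self, user_profile, products):
        raise NotImplementedError

class ContentBasedStrategy(RecommendationStrategy):
    # Rank products by overlap between product tags and user interests.
    def recommend(self, user_profile, products):
        interests = set(user_profile["interests"])
        return sorted(products, key=lambda p: -len(interests & set(p["tags"])))

class RecommenderEngine:
    # The strategy is a swappable "standard part": collaborative filtering,
    # association rules, etc. can replace it without touching the engine.
    def __init__(self, strategy):
        self.strategy = strategy

    def recommend(self, user_profile, products):
        return self.strategy.recommend(user_profile, products)

engine = RecommenderEngine(ContentBasedStrategy())
top = engine.recommend(
    {"interests": ["camera", "travel"]},
    [{"name": "tripod", "tags": ["camera"]}, {"name": "novel", "tags": ["books"]}],
)
```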

  14. Flexible solution for interoperable cloud healthcare systems.

    PubMed

    Vida, Mihaela Marcella; Lupşe, Oana Sorina; Stoicu-Tivadar, Lăcrămioara; Bernad, Elena

    2012-01-01

    It is extremely important for the healthcare domain to have standardized communication, because it will improve the quality of information and, in the end, the resulting benefits will improve the quality of patients' lives. The standards proposed to be used are HL7 CDA and CCD. For better access to the medical data, a solution based on cloud computing (CC) is investigated. CC is a technology that supports flexibility, seamless care, and reduced costs of the medical act. To ensure interoperability between healthcare information systems, a solution creating a Web Custom Control is presented. The control shows the database tables and fields used to configure the two standards. This control will facilitate the work of medical staff and hospital administrators, because they can configure the local system easily and prepare it for communication with other systems. The resulting information will have higher quality and will provide knowledge that supports better patient management and diagnosis.

  15. Rule-based optimization and multicriteria decision support for packaging a truck chassis

    NASA Astrophysics Data System (ADS)

    Berger, Martin; Lindroth, Peter; Welke, Richard

    2017-06-01

    Trucks are highly individualized products where exchangeable parts are flexibly combined to suit different customer requirements, leading to great complexity in product development. Therefore, an optimization approach based on constraint programming is proposed for automatically packaging parts of a truck chassis by following packaging rules expressed as constraints. A multicriteria decision support system is developed in which a database of truck layouts is computed, among which interactive navigation can then be performed. The work has been performed in cooperation with Volvo Group Trucks Technology (GTT), from which specific rules have been used. Several scenarios are described where the methods developed can be successfully applied and lead to less time-consuming manual work, fewer mistakes, and greater flexibility in configuring trucks. A numerical evaluation is also presented showing the efficiency and practical relevance of the methods, which are implemented in a software tool.
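    The core idea, enumerating part-to-slot layouts and keeping only those that satisfy the packaging rules, can be sketched as below. A real constraint-programming solver prunes the search instead of enumerating it, and the parts, slots and rule here are invented for illustration, not Volvo GTT's actual rules.

```python
from itertools import permutations

def package(parts, slots, rules):
    # Try every assignment of parts to distinct slots and keep the layouts
    # that satisfy all packaging rules (constraints over the assignment).
    layouts = []
    for perm in permutations(slots, len(parts)):
        layout = dict(zip(parts, perm))
        if all(rule(layout) for rule in rules):
            layouts.append(layout)
    return layouts

parts = ["fuel_tank", "exhaust", "battery"]
slots = ["left_front", "left_rear", "right_front", "right_rear"]
# Hypothetical rule: fuel tank and exhaust must sit on opposite sides.
rules = [lambda a: a["fuel_tank"].split("_")[0] != a["exhaust"].split("_")[0]]
layouts = package(parts, slots, rules)
```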

  16. PDBFlex: exploring flexibility in protein structures

    PubMed Central

    Hrabe, Thomas; Li, Zhanwen; Sedova, Mayya; Rotkiewicz, Piotr; Jaroszewski, Lukasz; Godzik, Adam

    2016-01-01

    The PDBFlex database, available freely and with no login requirements at http://pdbflex.org, provides information on flexibility of protein structures as revealed by the analysis of variations between depositions of different structural models of the same protein in the Protein Data Bank (PDB). PDBFlex collects information on all instances of such depositions, identifying them by a 95% sequence identity threshold, performs analysis of their structural differences and clusters them according to their structural similarities for easy analysis. The PDBFlex contains tools and viewers enabling in-depth examination of structural variability including: 2D-scaling visualization of RMSD distances between structures of the same protein, graphs of average local RMSD in the aligned structures of protein chains, graphical presentation of differences in secondary structure and observed structural disorder (unresolved residues), difference distance maps between all sets of coordinates and 3D views of individual structures and simulated transitions between different conformations, the latter displayed using JSMol visualization software. PMID:26615193
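    The basic quantity behind the flexibility graphs, RMSD between two depositions of the same chain, is straightforward once the structures are superposed. The three-atom coordinates below are hypothetical, and the alignment step (which PDBFlex performs before comparison) is assumed already done.

```python
from math import sqrt

def rmsd(coords_a, coords_b):
    # Root-mean-square deviation between two equal-length sets of 3D
    # coordinates, assumed already superposed.
    assert len(coords_a) == len(coords_b)
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return sqrt(total / len(coords_a))

# Two hypothetical three-atom conformations of the same chain.
a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.0), (1.5, 0.5, 0.0), (3.0, 1.0, 0.0)]
value = rmsd(a, b)
```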

  17. Flexible querying of Web data to simulate bacterial growth in food.

    PubMed

    Buche, Patrice; Couvert, Olivier; Dibie-Barthélemy, Juliette; Hignette, Gaëlle; Mettler, Eric; Soler, Lydie

    2011-06-01

    A preliminary step in microbial risk assessment in foods is the gathering of experimental data. In the framework of the Sym'Previus project, we have designed a complete data integration system opened on the Web which allows a local database to be complemented by data extracted from the Web and annotated using a domain ontology. We focus on the Web data tables as they contain, in general, a synthesis of data published in the documents. We propose in this paper a flexible querying system using the domain ontology to scan simultaneously local and Web data, this in order to feed the predictive modeling tools available on the Sym'Previus platform. Special attention is paid on the way fuzzy annotations associated with Web data are taken into account in the querying process, which is an important and original contribution of the proposed system. Copyright © 2010 Elsevier Ltd. All rights reserved.
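    Flexible querying of this kind typically scores records against a fuzzy set rather than a crisp predicate. The sketch below uses a trapezoidal membership function for a query like "growth temperature around 30 °C"; the records and set parameters are hypothetical, not Sym'Previus data.

```python
def trapezoid(x, a, b, c, d):
    # Membership of x in a trapezoidal fuzzy set with support [a, d]
    # and core [b, c]: 0 outside, 1 in the core, linear on the slopes.
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical experimental records: (name, growth temperature in deg C).
records = [("exp1", 25.0), ("exp2", 30.0), ("exp3", 41.0)]
# Fuzzy query "around 30 deg C": support [20, 40], core [28, 32].
scored = [(name, trapezoid(t, 20.0, 28.0, 32.0, 40.0)) for name, t in records]
```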

  18. BioMAJ: a flexible framework for databanks synchronization and processing.

    PubMed

    Filangi, Olivier; Beausse, Yoann; Assi, Anthony; Legrand, Ludovic; Larré, Jean-Marc; Martin, Véronique; Collin, Olivier; Caron, Christophe; Leroy, Hugues; Allouche, David

    2008-08-15

    Large- and medium-scale computational molecular biology projects require accurate bioinformatics software and numerous heterogeneous biological databanks, which are distributed around the world. BioMAJ provides a flexible, robust, fully automated environment for managing such massive amounts of data. The Java application enables automation of the data update cycle process and supervision of the locally mirrored data repository. We have developed workflows that handle some of the most commonly used bioinformatics databases. A set of scripts is also available for post-synchronization data treatment consisting of indexation or format conversion (for NCBI BLAST, SRS, EMBOSS, GCG, etc.). BioMAJ can be easily extended with custom processing scripts. Source history can be kept via HTML reports containing statements of locally managed databanks. http://biomaj.genouest.org. BioMAJ is free, open-source software, available under the CECILL version 2 license.

  19. MILSTAR's flexible substrate solar array: Lessons learned, addendum

    NASA Technical Reports Server (NTRS)

    Gibb, John

    1990-01-01

    MILSTAR's Flexible Substrate Solar Array (FSSA) is an evolutionary development of the lightweight, flexible substrate design pioneered at Lockheed during the seventies. Many of the features of the design are related to the Solar Array Flight Experiment (SAFE), flown on STS-41D in 1984. FSSA development has created a substantial technology base for future flexible substrate solar arrays such as the array for the Space Station Freedom. Lessons learned during the development of the FSSA can and should be applied to the Freedom array and other future flexible substrate designs.

  20. Sensor system for web inspection

    DOEpatents

    Sleefe, Gerard E.; Rudnick, Thomas J.; Novak, James L.

    2002-01-01

    A system for electrically measuring variations over a flexible web has a capacitive sensor including spaced electrically conductive, transmit and receive electrodes mounted on a flexible substrate. The sensor is held against a flexible web with sufficient force to deflect the path of the web, which moves relative to the sensor.

  1. Flexible Learning in Perspective.

    ERIC Educational Resources Information Center

    Further Education Unit, London (England).

    Those responsible for coordinating flexible and open learning in colleges should consider whether the time may be right to seek common ground on the issues and principles shared by the many related initiatives in Great Britain during the last two decades. Two principles that are important to flexible learning are that education's effectiveness is…

  2. 76 FR 63817 - Disclosure of Information; Privacy Act Regulations; Notice and Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-14

    ..., paper, reports of examination, work papers, and correspondence relating to such reports, to the.... Regulatory Flexibility Act The Regulatory Flexibility Act, 5 U.S.C. 601, et seq., (RFA) applies only to rules... and comment requirements of the APA, the requirement to prepare a final regulatory flexibility...

  3. Relations among conceptual knowledge, procedural knowledge, and procedural flexibility in two samples differing in prior knowledge.

    PubMed

    Schneider, Michael; Rittle-Johnson, Bethany; Star, Jon R

    2011-11-01

    Competence in many domains rests on children developing conceptual and procedural knowledge, as well as procedural flexibility. However, research on the developmental relations between these different types of knowledge has yielded unclear results, in part because little attention has been paid to the validity of the measures or to the effects of prior knowledge on the relations. To overcome these problems, we modeled the three constructs in the domain of equation solving as latent factors and tested (a) whether the predictive relations between conceptual and procedural knowledge were bidirectional, (b) whether these interrelations were moderated by prior knowledge, and (c) how both constructs contributed to procedural flexibility. We analyzed data from 2 measurement points each from two samples (Ns = 228 and 304) of middle school students who differed in prior knowledge. Conceptual and procedural knowledge had stable bidirectional relations that were not moderated by prior knowledge. Both kinds of knowledge contributed independently to procedural flexibility. The results demonstrate how changes in complex knowledge structures contribute to competence development.

  4. The paradox of cognitive flexibility in autism

    PubMed Central

    Geurts, Hilde M.; Corbett, Blythe; Solomon, Marjorie

    2017-01-01

    We present an overview of current literature addressing cognitive flexibility in autism spectrum disorders. Based on recent studies at multiple sites, using diverse methods and participants of different autism subtypes, ages and cognitive levels, no consistent evidence for cognitive flexibility deficits was found. Researchers and clinicians assume that inflexible everyday behaviors in autism are directly related to cognitive flexibility deficits as assessed by clinical and experimental measures. However, there is a large gap between the day-to-day behavioral flexibility and that measured with these cognitive flexibility tasks. To advance the field, experimental measures must evolve to reflect mechanistic models of flexibility deficits. Moreover, ecologically valid measures are required to be able to resolve the paradox between cognitive and behavioral inflexibility. PMID:19138551

  5. Flexible work arrangements, job satisfaction, and turnover intentions: the mediating role of work-to-family enrichment.

    PubMed

    McNall, Laurel A; Masuda, Aline D; Nicklin, Jessica M

    2010-01-01

    The authors examined the relation between the availability of 2 popular types of flexible work arrangements (i.e., flextime and compressed workweek) and work-to-family enrichment and, in turn, the relation between work-to-family enrichment and (a) job satisfaction and (b) turnover intentions. In a sample of 220 employed working adults, hierarchical regression analyses showed that work-to-family enrichment mediated the relation between flexible work arrangements and both job satisfaction and turnover intentions, even after controlling for gender, age, marital status, education, number of children, and hours worked. Thus, the availability of flexible work arrangements such as flextime and compressed workweek seems to help employees experience greater enrichment from work to home, which, in turn, is associated with higher job satisfaction and lower turnover intentions. The authors discuss the implications for research and practice.

  6. Ingredients for an Integrated Dinner: Parsley, Sage, Rosemary and Thyme

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    In 1966, Simon and Garfunkel combined the traditional English ballad "Scarborough Fair" with a counter-melody. This is one of the manifold techniques of counterpoint (Kontrapunktik) described by Bach around 1745 in "The Art of the Fugue": combining completely different and seemingly independent melodies (or motifs) into a coherent piece of music that is pleasant for the audience. This achievement, transposed into computer science, could be of great benefit for geo services, given the currently disparate situation: on the one hand, we have metadata - traditionally understood as small in volume but rich in content and semantics, and flexibly queryable through the rich body of technologies established over several decades of database research, centering on query languages like SQL. On the other hand, we have the data themselves, such as remote sensing and other measured and observed data sets - considered difficult to interpret, semantically poor, and available only for clumsy download, forming the main constituent of what we today call Big Data. Here the traditional advantages of databases, such as information integration, query flexibility, and scalability, seem to be unavailable. These are the melodies that require a contrapuntal harmonization, leading to a Holy Grail where different information categories enjoy individually tailored support while an overall integrating framework allows seamless and convenient access and processing by the user. Most of the data categories to be integrated are in fact well known: ontologies, geospatial meshes, spatiotemporal arrays, and free text constitute major ingredients in this orchestration. For many of them, isolated solutions have been presented, and for some (like ontologies and text) integration has already been achieved; a complete harmonic integration, though, is still lacking as of today. In our talk, we detail our vision of such an integration through query models and languages that merge established concepts and novel paradigms in a harmonic way. We present the EarthServer initiative, which has set out to demonstrate flexible ad-hoc processing and filtering on massive Earth data sets.

  7. JANIS-2: An Improved Version of the NEA Java-based Nuclear Data Information System

    NASA Astrophysics Data System (ADS)

    Soppera, N.; Henriksson, H.; Nouri, A.; Nagel, P.; Dupont, E.

    2005-05-01

    JANIS (JAva-based Nuclear Information Software) is a display program designed to facilitate the visualisation and manipulation of nuclear data. Its objective is to allow the user of nuclear data to access numerical and graphical representations without prior knowledge of the storage format. It offers maximum flexibility for the comparison of different nuclear data sets. Features included in the latest release are described, such as direct access to centralised databases through Java Servlet technology.

  8. JANIS-2: An Improved Version of the NEA Java-based Nuclear Data Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soppera, N.; Henriksson, H.; Nagel, P.

    2005-05-24

    JANIS (JAva-based Nuclear Information Software) is a display program designed to facilitate the visualisation and manipulation of nuclear data. Its objective is to allow the user of nuclear data to access numerical and graphical representations without prior knowledge of the storage format. It offers maximum flexibility for the comparison of different nuclear data sets. Features included in the latest release are described, such as direct access to centralised databases through Java Servlet technology.

  9. On an Allan variance approach to classify VLBI radio-sources on the basis of their astrometric stability

    NASA Astrophysics Data System (ADS)

    Gattano, C.; Lambert, S.; Bizouard, C.

    2017-12-01

    In the context of selecting sources to define the celestial reference frame, we compute astrometric time series of all VLBI radio sources from observations in the International VLBI Service database. The time series are then analyzed with the Allan variance in order to estimate astrometric stability. From these results, we establish a new classification that takes into account information across all time scales. The algorithm is flexible in its definition of a "stable source" through an adjustable threshold.
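The two-step scheme the abstract describes (an Allan variance per averaging window, then a threshold-based stability label) can be sketched roughly as follows; the function names, the evenly-sampled-series assumption, and the all-scales threshold rule are illustrative guesses, not the authors' implementation:

```python
import numpy as np

def allan_variance(series, tau):
    # Allan variance of a coordinate time series for averaging window tau:
    # bin the series into consecutive blocks of length tau, average each
    # block, and take half the mean squared difference of successive
    # block averages.
    n = len(series) // tau
    if n < 2:
        raise ValueError("series too short for this tau")
    block_means = np.asarray(series[: n * tau], dtype=float).reshape(n, tau).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

def classify_source(series, taus, threshold):
    # Label a source "stable" only if its Allan variance stays below an
    # adjustable threshold at every probed time scale (hypothetical rule).
    if all(allan_variance(series, t) < threshold for t in taus):
        return "stable"
    return "unstable"
```

A perfectly constant coordinate series yields an Allan variance of zero at every scale, while a drifting source accumulates variance with tau, which is what makes the multi-scale view informative for source classification.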

  10. Trends in the Evolution of the Public Web, 1998-2002; The Fedora Project: An Open-source Digital Object Repository Management System; State of the Dublin Core Metadata Initiative, April 2003; Preservation Metadata; How Many People Search the ERIC Database Each Day?

    ERIC Educational Resources Information Center

    O'Neill, Edward T.; Lavoie, Brian F.; Bennett, Rick; Staples, Thornton; Wayland, Ross; Payette, Sandra; Dekkers, Makx; Weibel, Stuart; Searle, Sam; Thompson, Dave; Rudner, Lawrence M.

    2003-01-01

    Includes five articles that examine key trends in the development of the public Web: size and growth, internationalization, and metadata usage; Flexible Extensible Digital Object and Repository Architecture (Fedora) for use in digital libraries; developments in the Dublin Core Metadata Initiative (DCMI); the National Library of New Zealand Te Puna…

  11. Collaborative Data Publication Utilizing the Open Data Repository's (ODR) Data Publisher

    NASA Technical Reports Server (NTRS)

    Stone, N.; Lafuente, B.; Bristow, T.; Keller, R. M.; Downs, R. T.; Blake, D.; Fonda, M.; Dateo, C.; Pires, A.

    2017-01-01

    Introduction: For small communities in diverse fields such as astrobiology, publishing and sharing data can be a difficult challenge. While large, homogeneous fields often have repositories and existing data standards, small groups of independent researchers have few options for publishing standards and data that can be utilized within their community. In conjunction with teams at NASA Ames and the University of Arizona, the Open Data Repository's (ODR) Data Publisher has been conducting ongoing pilots to assess the needs of diverse research groups and to develop software that allows them to publish and share their data collaboratively. Objectives: The ODR's Data Publisher aims to provide an easy-to-use, easy-to-implement software tool that allows researchers to create and publish database templates and related data. The end product will support both human-readable interfaces (web-based, with embedded images, files, and charts) and machine-readable interfaces utilizing semantic standards. Characteristics: The Data Publisher software runs on the standard LAMP (Linux, Apache, MySQL, PHP) stack to provide the widest server base available. The software is based on Symfony (www.symfony.com), which provides a robust framework for creating extensible, object-oriented software in PHP. The software interface consists of a template designer in which individual or master database templates can be created. A master database template can be shared by many researchers to provide a common metadata standard that sets a compatibility baseline for all derivative databases. Individual researchers can then extend their instance of the template with custom fields, file storage, or visualizations that may be unique to their studies. This allows groups to create compatible databases for data discovery and sharing purposes while still providing the flexibility needed to meet the needs of scientists in rapidly evolving areas of research. Research: As part of this effort, a number of pilot and test projects are currently in progress. The Astrobiology Habitable Environments Database Working Group is developing a shared database standard using the ODR's Data Publisher and has a number of example databases where astrobiology data are shared. Soon these databases will be integrated via the template-based standard. Work with this group helps determine what data researchers in these diverse fields need to share and archive. Additionally, this pilot helps determine which standards are viable for sharing these types of data, from internally developed standards to existing open standards such as the Dublin Core (http://dublincore.org) and Darwin Core (http://rs.twdg.org) metadata standards. Further studies are ongoing with the University of Arizona Department of Geosciences, where a number of mineralogy databases are being constructed within the ODR Data Publisher system. Conclusions: Through the ongoing pilots and discussions with individual researchers and small research teams, a definition of the tools desired by these groups is coming into focus. As the software development moves forward, the goal is to meet the publication and collaboration needs of these scientists in an unobtrusive and functional way.
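The master-template idea described above, a shared field set guaranteeing compatibility with per-researcher extensions, can be illustrated with a toy sketch; all field names and the no-shadowing rule are invented for illustration and are not ODR's actual schema:

```python
# Shared "master" template: every derivative database carries these fields,
# which is what keeps the community's databases mutually discoverable.
# Field names here are hypothetical.
MASTER_TEMPLATE = {"sample_id": "text", "mineral_name": "text", "collection_date": "date"}

def extend_template(master, custom_fields):
    # Derive an instance template: all master fields plus custom ones.
    # Custom fields may not override master fields, so every derivative
    # database stays compatible with the shared metadata standard.
    clash = master.keys() & custom_fields.keys()
    if clash:
        raise ValueError(f"custom fields shadow master fields: {clash}")
    return {**master, **custom_fields}

# One lab extends the shared standard with a field unique to its studies.
lab_template = extend_template(MASTER_TEMPLATE, {"raman_spectrum": "file"})
```

The design choice worth noting is the asymmetry: extensions are free-form, but the shared core is immutable, which is how template-derived databases can be searched jointly later.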

  12. POLLUX: a program for simulated cloning, mutagenesis and database searching of DNA constructs.

    PubMed

    Dayringer, H E; Sammons, S A

    1991-04-01

    Computer support for research in biotechnology has developed rapidly and has provided several tools to aid the researcher. This report describes the capabilities of new computer software developed in this laboratory to aid in the documentation and planning of experiments in molecular biology. The program, POLLUX, provides a graphical medium for the entry, edit and manipulation of DNA constructs and a textual format for display and edit of construct descriptive data. Program operation and procedures are designed to mimic the actual laboratory experiments with respect to capability and the order in which they are performed. Flexible control over the content of the computer-generated displays and program facilities is provided by a mouse-driven menu interface. Programmed facilities for mutagenesis, simulated cloning and searching of the database from networked workstations are described.

  13. Aging Affects Dopaminergic Neural Mechanisms of Cognitive Flexibility

    DOE PAGES

    Berry, Anne S.; Shah, Vyoma D.; Baker, Suzanne L.; ...

    2016-12-14

    Aging is accompanied by profound changes in the brain’s dopamine system that affect cognitive function. Evidence of powerful individual differences in cognitive aging has sharpened focus on identifying biological factors underlying relative preservation versus vulnerability to decline. Dopamine represents a key target in these efforts. Alterations of dopamine receptors and dopamine synthesis are seen in aging, with receptors generally showing reduction and synthesis demonstrating increases. Using the PET tracer 6-[18F]fluoro-L-m-tyrosine, we found strong support for upregulated striatal dopamine synthesis capacity in healthy older adult humans free of amyloid pathology, relative to young people. We next used fMRI to define the functional impact of elevated synthesis capacity on cognitive flexibility, a core component of executive function. We found clear evidence in young adults that low levels of synthesis capacity were suboptimal, associated with diminished cognitive flexibility and altered frontoparietal activation relative to young adults with the highest synthesis values. Critically, these relationships between dopamine, performance, and activation were transformed in older adults with higher synthesis capacity. Variability in synthesis capacity was related to intrinsic frontoparietal functional connectivity across groups, suggesting that striatal dopamine synthesis influences the tuning of networks underlying cognitive flexibility. Altogether, these findings define striatal dopamine’s association with cognitive flexibility and its neural underpinnings in young adults, and reveal the alteration of dopamine-related neural processes in aging.

  14. Aging Affects Dopaminergic Neural Mechanisms of Cognitive Flexibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Anne S.; Shah, Vyoma D.; Baker, Suzanne L.

    Aging is accompanied by profound changes in the brain’s dopamine system that affect cognitive function. Evidence of powerful individual differences in cognitive aging has sharpened focus on identifying biological factors underlying relative preservation versus vulnerability to decline. Dopamine represents a key target in these efforts. Alterations of dopamine receptors and dopamine synthesis are seen in aging, with receptors generally showing reduction and synthesis demonstrating increases. Using the PET tracer 6-[18F]fluoro-L-m-tyrosine, we found strong support for upregulated striatal dopamine synthesis capacity in healthy older adult humans free of amyloid pathology, relative to young people. We next used fMRI to define the functional impact of elevated synthesis capacity on cognitive flexibility, a core component of executive function. We found clear evidence in young adults that low levels of synthesis capacity were suboptimal, associated with diminished cognitive flexibility and altered frontoparietal activation relative to young adults with the highest synthesis values. Critically, these relationships between dopamine, performance, and activation were transformed in older adults with higher synthesis capacity. Variability in synthesis capacity was related to intrinsic frontoparietal functional connectivity across groups, suggesting that striatal dopamine synthesis influences the tuning of networks underlying cognitive flexibility. Altogether, these findings define striatal dopamine’s association with cognitive flexibility and its neural underpinnings in young adults, and reveal the alteration of dopamine-related neural processes in aging.

  15. Memory for reputational trait information: is social-emotional information processing less flexible in old age?

    PubMed

    Bell, Raoul; Giang, Trang; Mund, Iris; Buchner, Axel

    2013-12-01

    How do younger and older adults remember reputational trait information about other people? In the present study, trustworthy-looking and untrustworthy-looking faces were paired with cooperation or cheating in a cooperation game. In a surprise source-memory test, participants were asked to rate the likability of the faces, and were required to remember whether the faces were associated with negative or positive outcomes. The social expectations of younger and older adults were clearly affected by a priori facial trustworthiness. Facial trustworthiness was associated with high cooperation-game investments, high likability ratings, and a tendency toward guessing that a face belonged to a cooperator instead of a cheater in both age groups. Consistent with previous results showing that emotional memory is spared from age-related decline, memory for the association between faces and emotional reputational information was well preserved in older adults. However, younger adults used a flexible encoding strategy to remember the social interaction partners. Source memory was best for information that violated their (positive) expectations. Older adults, in contrast, showed a uniform memory bias for negative social information; their memory performance was not modulated by their expectations. This finding suggests that older adults are less likely to adjust their encoding strategies to their social expectations than younger adults. This may be in line with older adults' motivational goals to avoid risks in social interactions. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  16. Comparison of shockwave lithotripsy and flexible ureteroscopy for the treatment of kidney stones in patients with a solitary kidney.

    PubMed

    Yuruk, Emrah; Binbay, Murat; Ozgor, Faruk; Sekerel, Levent; Berberoglu, Yalcin; Muslumanoglu, Ahmet Yaser

    2015-04-01

    To compare the outcomes of shock wave lithotripsy and flexible ureteroscopy for the treatment of kidney stones in patients with a solitary kidney. The database of our institution was retrospectively reviewed, and the medical records of urolithiasis patients with a solitary kidney who underwent flexible ureteroscopy (F-URS) or extracorporeal shock wave lithotripsy (SWL) between January 2009 and December 2012 were examined. Retreatment rates, complications, changes in estimated glomerular filtration rate (eGFR), chronic kidney disease (CKD) stage, and stone-free rates were compared between the two groups. Stones of 48 patients (mean age: 48.8±15.4 years, range: 14-76) with solitary kidneys were treated with SWL (n=30, 62.5%) or F-URS (n=18, 37.5%). Patient demographics and stone-related parameters were similar. The most common stone location was the pelvis in the SWL group (36.6%), whereas it was the pelvis and a calix in the F-URS group (38.8%). Complications and success rates were similar in both groups; however, patients in the SWL group needed more sessions to achieve stone clearance (2.2±0.89 vs 1.06±0.24, p=0.0001). Preoperative and postoperative eGFR and CKD stage changes were also similar. Both SWL and F-URS are effective and safe techniques that can be used for the treatment of stones in patients with solitary kidneys. However, patients treated with SWL need more sessions to achieve stone clearance.

  17. Development of Endoscopic Diagnosis and Treatment for Chronic Unilateral Hematuria: 35 Years Experience.

    PubMed

    Tanimoto, Ryuta; Kumon, Hiromi; Bagley, Demetrius H

    2017-04-01

    Chronic unilateral hematuria (CUH), also called lateralizing essential hematuria, benign essential hematuria, and benign lateralizing hematuria, is defined as intermittent or continuous gross hematuria that cannot be diagnosed with standard radiology and hematology studies, together with unilateral bloody efflux on cystoscopy. CUH is rare but sometimes confused with malignancy or life-threatening hemorrhage; it can therefore cause considerable anxiety not only to patients but also to urologists. For this study, we summarized articles about the endoscopic diagnosis and treatment of CUH and discussed the development of endourology for CUH. We searched articles related to CUH that were indexed in the PubMed database and published in English. Key terms used were "unilateral," "lateralizing," "chronic," "benign," and "idiopathic" hematuria. We found 15 pertinent articles reporting CUH. Endoscopically, CUH can be classified into three categories: discrete lesion, diffuse lesion, or no (unidentified) lesion. Currently, endoscopic techniques for CUH are similar to those for upper tract urothelial carcinoma, using semi-rigid and flexible ureteroscopes with diathermy fulguration or laser ablation for treatment. The overall success rate of endoscopic treatment for CUH, defined as resolution of gross hematuria after treatment, was 93% (190/205). The recurrence rate, defined as recurrent gross hematuria after treatment, was 10% (19/189). Advancements in endoscopic devices and techniques have enabled more accurate and less invasive diagnosis and treatment of CUH. Once CUH is defined, flexible ureteroscopy is the diagnostic and therapeutic technique of choice.

  18. Cognitive effort is modulated outside of the explicit awareness of conflict frequency: Evidence from pupillometry.

    PubMed

    Diede, Nathaniel T; Bugg, Julie M

    2017-05-01

    Classic theories of cognitive control conceptualized controlled processes as slow, strategic, and willful, with automatic processes being fast and effortless. The context-specific proportion compatibility (CSPC) effect, the reduction in the compatibility effect in a context (e.g., location) associated with a high relative to low likelihood of conflict, challenged classic theories by demonstrating fast and flexible control that appears to operate outside of conscious awareness. Two theoretical questions yet to be addressed are whether the CSPC effect is accompanied by context-dependent variation in effort, and whether the exertion of effort depends on explicit awareness of context-specific task demands. To address these questions, pupil diameter was measured during a CSPC paradigm. Stimuli were randomly presented in either a mostly compatible location or a mostly incompatible location. Replicating prior research, the CSPC effect was found. The novel finding was that pupil diameter was greater in the mostly incompatible location compared to the mostly compatible location, despite participants' lack of awareness of context-specific task demands. Additionally, this difference occurred regardless of trial type or a preceding switch in location. These patterns support the view that context (location) dictates selection of optimal attentional settings in the CSPC paradigm, and varying levels of effort and performance accompany these settings. Theoretically, these patterns imply that cognitive control may operate fast, flexibly, and outside of awareness, but not effortlessly. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. [Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].

    PubMed

    Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu

    2015-09-01

    To build, by collecting and analyzing laryngeal cancer related genes and miRNAs, a comprehensive laryngeal cancer-related gene database that differs from current biological information databases with their complex and unwieldy structures, focuses on the theme of genes and miRNAs, and makes research and teaching more convenient and efficient. Based on the B/S architecture, with Apache as the Web server, MySQL for the database design and PHP for the web design, a comprehensive database for laryngeal cancer-related genes was established, providing gene tables, protein tables, miRNA tables, and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer related genes, 243 proteins, and 26 miRNAs, together with their particulars such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer relevant molecules. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed, and it is maintained and updated regularly. The database for laryngeal cancer related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
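As a rough illustration of the four table types the abstract names (gene, protein, miRNA, clinical) and how they can be joined for retrieval, here is a minimal relational sketch; it uses Python's sqlite3 as a stand-in for the MySQL/PHP stack, and every column name is an assumption, not the published schema:

```python
import sqlite3

# In-memory sketch of the four table types; sqlite3 stands in for MySQL,
# and all columns are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gene     (gene_id INTEGER PRIMARY KEY, symbol TEXT, mutation TEXT, methylation TEXT);
CREATE TABLE protein  (protein_id INTEGER PRIMARY KEY, gene_id INTEGER REFERENCES gene, name TEXT);
CREATE TABLE mirna    (mirna_id INTEGER PRIMARY KEY, name TEXT, target_gene_id INTEGER REFERENCES gene);
CREATE TABLE clinical (patient_id INTEGER PRIMARY KEY, stage TEXT, gene_id INTEGER REFERENCES gene);
""")
conn.execute("INSERT INTO gene VALUES (1, 'TP53', 'missense', 'hyper')")
conn.execute("INSERT INTO mirna VALUES (1, 'miR-21', 1)")

# A theme-centered query: which miRNAs target which genes.
row = conn.execute(
    "SELECT g.symbol, m.name FROM gene g JOIN mirna m ON m.target_gene_id = g.gene_id"
).fetchone()
print(row)  # ('TP53', 'miR-21')
```

Keeping genes, proteins, miRNAs, and clinical records in separate tables linked by gene id is what lets such a database stay focused on the gene/miRNA theme while still supporting cross-table retrieval.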

  20. Palm-Vein Classification Based on Principal Orientation Features

    PubMed Central

    Zhou, Yujia; Liu, Yaqin; Feng, Qianjin; Yang, Feng; Huang, Jing; Nie, Yixiao

    2014-01-01

    Personal recognition using palm-vein patterns has emerged as a promising alternative for human recognition because of their uniqueness, stability, live-body identification, flexibility, and difficulty to counterfeit. With the expanding application of palm-vein pattern recognition, the corresponding growth of databases has resulted in long response times. To shorten the response time of identification, this paper proposes a simple and useful classification for palm-vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix and then compute the principal direction of a palm-vein image based on the orientation matrix. The database can be classified into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed within that bin. To improve recognition efficiency while maintaining recognition accuracy, the two neighboring bins of the corresponding bin are also searched to identify the input palm-vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range of one test sample in the PolyU, CASIA, and our database can be reduced by the proposed method to 14.29%, 14.50%, and 14.28%, with retrieval accuracies of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process by the traditional method is 18.56 s, while that by the proposed approach is 3.16 s. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database. PMID:25383715
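The bin-plus-neighbors search strategy can be sketched as follows; the 180-degree direction range, the equal-width bin boundaries, and the dictionary layout of the enrolled database are assumptions for illustration, not the paper's exact scheme:

```python
NUM_BINS = 6  # the paper classifies the database into six bins

def direction_bin(principal_direction_deg):
    # Quantize a principal direction in [0, 180) degrees into one of six
    # equal-width bins (bin boundaries are an assumption).
    return int(principal_direction_deg // (180 / NUM_BINS)) % NUM_BINS

def candidate_bins(bin_index):
    # The matching bin plus its two neighbors, with wrap-around, mirroring
    # the neighborhood search that trades a wider scope for accuracy.
    return {(bin_index - 1) % NUM_BINS, bin_index, (bin_index + 1) % NUM_BINS}

def search_scope(database, query_direction_deg):
    # Return only the enrolled templates whose bin falls in the candidate
    # set; `database` maps template id -> principal direction in degrees
    # (a hypothetical layout). One-by-one matching then runs on this subset.
    bins = candidate_bins(direction_bin(query_direction_deg))
    return [tid for tid, d in database.items() if direction_bin(d) in bins]
```

Searching 3 of 6 bins explains roughly how the matching scope shrinks toward the reported ~14% of the database once samples spread unevenly across bins.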

Top