Science.gov

Sample records for model organism database

  1. dictyBase, the model organism database for Dictyostelium discoideum.

    PubMed

    Chisholm, Rex L; Gaudet, Pascale; Just, Eric M; Pilcher, Karen E; Fey, Petra; Merchant, Sohel N; Kibbe, Warren A

    2006-01-01

dictyBase (http://dictybase.org) is the model organism database (MOD) for the social amoeba Dictyostelium discoideum. The unique biology and phylogenetic position of Dictyostelium offer a great opportunity to gain knowledge of processes not characterized in other organisms. The recent completion of the 34 Mb genome sequence, together with the sizable scientific literature using Dictyostelium as a research organism, provided the necessary tools to create a well-annotated genome. dictyBase has leveraged software developed by the Saccharomyces Genome Database and the Generic Model Organism Database project. This has reduced the time required to develop a full-featured MOD and greatly facilitated our ability to focus on annotation and providing new functionality. We hope that manual curation of the Dictyostelium genome will facilitate the annotation of other genomes.

  2. Xanthusbase: adapting wikipedia principles to a model organism database.

    PubMed

    Arshinoff, Bradley I; Suen, Garret; Just, Eric M; Merchant, Sohel M; Kibbe, Warren A; Chisholm, Rex L; Welch, Roy D

    2007-01-01

    xanthusBase (http://www.xanthusbase.org) is the official model organism database (MOD) for the social bacterium Myxococcus xanthus. In many respects, M. xanthus represents the pioneer model organism (MO) for studying the genetic, biochemical, and mechanistic basis of prokaryotic multicellularity, a topic that has garnered considerable attention due to the significance of biofilms in both basic and applied microbiology research. To facilitate its utility, the design of xanthusBase incorporates open-source software, leveraging the cumulative experience made available through the Generic Model Organism Database (GMOD) project, MediaWiki (http://www.mediawiki.org), and dictyBase (http://www.dictybase.org), to create a MOD that is both highly useful and easily navigable. In addition, we have incorporated a unique Wikipedia-style curation model which exploits the internet's inherent interactivity, thus enabling M. xanthus and other myxobacterial researchers to contribute directly toward the ongoing genome annotation.

  3. Choosing a Genome Browser for a Model Organism Database (MOD): Surveying the Maize Community

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As the maize genome sequencing is nearing its completion, the Maize Genetics and Genomics Database (MaizeGDB), the Model Organism Database for maize, integrated a genome browser to its already existing Web interface and database. The addition of the MaizeGDB Genome Browser to MaizeGDB will allow it ...

  4. Model organism databases: essential resources that need the support of both funders and users.

    PubMed

    Oliver, Stephen G; Lock, Antonia; Harris, Midori A; Nurse, Paul; Wood, Valerie

    2016-06-22

    Modern biomedical research depends critically on access to databases that house and disseminate genetic, genomic, molecular, and cell biological knowledge. Even as the explosion of available genome sequences and associated genome-scale data continues apace, the sustainability of professionally maintained biological databases is under threat due to policy changes by major funding agencies. Here, we focus on model organism databases to demonstrate the myriad ways in which biological databases not only act as repositories but actively facilitate advances in research. We present data that show that reducing financial support to model organism databases could prove to be not just scientifically, but also economically, unsound.

  5. MaizeGDB update: New tools, data, and interface for the maize model organism database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, ...

  6. MaizeGDB, the maize model organism database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    MaizeGDB is the maize research community's database for maize genetic and genomic information. In this seminar I will outline our current endeavors including a full website redesign, the status of maize genome assembly and annotation projects, and work toward genome functional annotation. Mechanis...

  7. Using semantic data modeling techniques to organize an object-oriented database for extending the mass storage model

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik

    1991-01-01

    A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.
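The core idea, metadata records organized in a semantic hierarchy with each object mapped to a storage tier, can be illustrated with a toy sketch. The class, tier names, and dataset names below are invented for illustration and are not NASA's actual schema:

```python
# Toy sketch: a semantic hierarchy of metadata objects, each mapped to a
# storage tier. Tier and dataset names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    tier: str                      # e.g. "disk", "optical", "tape"
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, name):
        """Depth-first search of the semantic hierarchy."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit:
                return hit
        return None

# Frequently accessed metadata stays on fast storage; bulk data can live
# on slower tiers further down the hierarchy.
root = Node("missions", "disk")
landsat = root.add(Node("landsat", "optical"))
landsat.add(Node("scene_0042_metadata", "disk"))
```

Navigating the hierarchy by name then yields both the object and the storage tier where it resides.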

  8. Integrated interactions database: tissue-specific view of the human and model organism interactomes.

    PubMed

    Kotlyar, Max; Pastrello, Chiara; Sheahan, Nicholas; Jurisica, Igor

    2016-01-04

    IID (Integrated Interactions Database) is the first database providing tissue-specific protein-protein interactions (PPIs) for model organisms and human. IID covers six species (S. cerevisiae (yeast), C. elegans (worm), D. melanogaster (fly), R. norvegicus (rat), M. musculus (mouse) and H. sapiens (human)) and up to 30 tissues per species. Users query IID by providing a set of proteins or PPIs from any of these organisms, and specifying species and tissues where IID should search for interactions. If query proteins are not from the selected species, IID enables searches across species and tissues automatically by using their orthologs; for example, retrieving interactions in a given tissue, conserved in human and mouse. Interaction data in IID comprise three types of PPI networks: experimentally detected PPIs from major databases, orthologous PPIs and high-confidence computationally predicted PPIs. Interactions are assigned to tissues where their protein pairs or encoding genes are expressed. IID is a major replacement of the I2D interaction database, with larger PPI networks (a total of 1,566,043 PPIs among 68,831 proteins), tissue annotations for interactions, and new query, analysis and data visualization capabilities. IID is available at http://ophid.utoronto.ca/iid.
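The cross-species query behaviour described above can be sketched in a few lines. Everything below is an invented illustration of the idea, not IID's actual API: the data structures, gene symbols, and function names are assumptions.

```python
# Hypothetical sketch of an IID-style tissue-specific PPI lookup with
# ortholog mapping. All data and names are illustrative only.

# PPIs per species, annotated with tissues where both partners are expressed.
PPIS = {
    "human": [("TP53", "MDM2", {"brain", "liver"}),
              ("INS", "INSR", {"pancreas"})],
    "mouse": [("Trp53", "Mdm2", {"brain"})],
}

# Ortholog map: (source species, gene) -> gene symbol in the target species.
ORTHOLOGS = {("mouse", "Trp53"): "TP53", ("mouse", "Mdm2"): "MDM2"}

def query_ppis(genes, species, tissue, query_species=None):
    """Return PPIs in `species`/`tissue` involving `genes`.

    If the query genes come from another species, map them to their
    orthologs first, mirroring the cross-species search described above.
    """
    if query_species and query_species != species:
        genes = {ORTHOLOGS.get((query_species, g)) for g in genes}
    return [(a, b) for a, b, tissues in PPIS[species]
            if tissue in tissues and (a in genes or b in genes)]

# Query with a mouse gene; retrieve the conserved human interaction in brain.
hits = query_ppis({"Trp53"}, species="human", tissue="brain",
                  query_species="mouse")
```

The same pattern (map orthologs, then filter by tissue co-expression) generalizes to any pair of the covered species.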

  9. Protein Model Database

    SciTech Connect

    Fidelis, K; Adzhubej, A; Kryshtafovych, A; Daniluk, P

    2005-02-23

    The phenomenal success of the genome sequencing projects reveals the power of completeness in revolutionizing biological science. Currently it is possible to sequence entire organisms at a time, allowing for a systemic rather than fractional view of their organization and the various genome-encoded functions. There is an international plan to move towards a similar goal in the area of protein structure. This will not be achieved by experiment alone, but rather by a combination of efforts in crystallography, NMR spectroscopy, and computational modeling. Only a small fraction of structures are expected to be determined experimentally; the remainder will be modeled. Presently there is no organized infrastructure to critically evaluate and present these data to the biological community. The goal of the Protein Model Database project is to create such infrastructure, including (1) a public database of theoretically derived protein structures; (2) reliable annotation of protein model quality; (3) novel structure analysis tools; and (4) access to the highest quality modeling techniques available.

  10. MyMpn: a database for the systems biology model organism Mycoplasma pneumoniae.

    PubMed

    Wodke, Judith A H; Alibés, Andreu; Cozzuto, Luca; Hermoso, Antonio; Yus, Eva; Lluch-Senar, Maria; Serrano, Luis; Roma, Guglielmo

    2015-01-01

    MyMpn (http://mympn.crg.eu) is an online resource devoted to studying the human pathogen Mycoplasma pneumoniae, a minimal bacterium causing lower respiratory tract infections. Due to its small size, its ability to grow in vitro, and the amount of data produced over the past decades, M. pneumoniae is an interesting model organism for the development of systems biology approaches for unicellular organisms. Our database hosts a wealth of omics-scale datasets generated by hundreds of experimental and computational analyses. These include data obtained from gene expression profiling experiments, gene essentiality studies, protein abundance profiling, protein complex analysis, metabolic reactions and network modeling, cell growth experiments, comparative genomics and 3D tomography. In addition, the intuitive web interface provides access to several visualization and analysis tools as well as to different data search options. The availability and, even more relevantly, the accessibility of properly structured and organized data are of utmost importance when aiming to understand the biology of an organism on a global scale. Therefore, MyMpn constitutes a unique and valuable new resource for the large systems biology and microbiology community.

  11. SubtiWiki 2.0--an integrated database for the model organism Bacillus subtilis.

    PubMed

    Michna, Raphael H; Zhu, Bingyao; Mäder, Ulrike; Stülke, Jörg

    2016-01-04

    To understand living cells, we need knowledge of each of their parts as well as about the interactions of these parts. To gain rapid and comprehensive access to this information, annotation databases are required. Here, we present SubtiWiki 2.0, the integrated database for the model bacterium Bacillus subtilis (http://subtiwiki.uni-goettingen.de/). SubtiWiki provides text-based access to published information about the genes and proteins of B. subtilis as well as presentations of metabolic and regulatory pathways. Moreover, manually curated protein-protein interaction diagrams are linked to the protein pages. Finally, expression data are shown with respect to gene expression under 104 different conditions as well as absolute protein quantification for cytoplasmic proteins. To facilitate the mobile use of SubtiWiki, we have now expanded it by Apps that are available for iOS and Android devices. Importantly, the App allows users to link private notes and pictures to the gene/protein pages. Today, SubtiWiki has become one of the most complete collections of knowledge on a living organism in one single resource.

  12. The Zebrafish Model Organism Database: new support for human disease models, mutation details, gene expression phenotypes and searching.

    PubMed

    Howe, Douglas G; Bradford, Yvonne M; Eagle, Anne; Fashena, David; Frazer, Ken; Kalita, Patrick; Mani, Prita; Martin, Ryan; Moxon, Sierra Taylor; Paddock, Holly; Pich, Christian; Ramachandran, Sridhar; Ruzicka, Leyla; Schaper, Kevin; Shao, Xiang; Singer, Amy; Toro, Sabrina; Van Slyke, Ceri; Westerfield, Monte

    2017-01-04

    The Zebrafish Model Organism Database (ZFIN; http://zfin.org) is the central resource for zebrafish (Danio rerio) genetic, genomic, phenotypic and developmental data. ZFIN curators provide expert manual curation and integration of comprehensive data involving zebrafish genes, mutants, transgenic constructs and lines, phenotypes, genotypes, gene expression, morpholinos, TALENs, CRISPRs, antibodies, anatomical structures, models of human disease and publications. We integrate curated, directly submitted, and collaboratively generated data, making these available to the zebrafish research community. Among the vertebrate model organisms, zebrafish are superbly suited for rapid generation of sequence-targeted mutant lines, characterization of phenotypes including gene expression patterns, and generation of human disease models. The recent rapid adoption of zebrafish as human disease models is making management of these data particularly important to both the research and clinical communities. Here, we describe recent enhancements to ZFIN including use of the zebrafish experimental conditions ontology, 'Fish' records in the ZFIN database, support for gene expression phenotypes, models of human disease, mutation details at the DNA, RNA and protein levels, and updates to the ZFIN single box search.

  14. Xenbase, the Xenopus model organism database; new virtualized system, data types and genomes

    PubMed Central

    Karpinka, J. Brad; Fortriede, Joshua D.; Burns, Kevin A.; James-Zorn, Christina; Ponferrada, Virgilio G.; Lee, Jacqueline; Karimi, Kamran; Zorn, Aaron M.; Vize, Peter D.

    2015-01-01

    Xenbase (http://www.xenbase.org), the Xenopus frog model organism database, integrates a wide variety of data from this biomedical model genus. Two closely related species are represented: the allotetraploid Xenopus laevis, which is widely used for microinjection and tissue explant-based protocols, and the diploid Xenopus tropicalis, which is used for genetics and gene targeting. The two species are extremely similar, and protocols, reagents and results from each species are often interchangeable. Xenbase imports, indexes, curates and manages data from both species; all of which are mapped via unique IDs and can be queried in either a species-specific or species-agnostic manner. All our services have now migrated to a private cloud to achieve better performance and reliability. We have added new content, including full support for morpholino reagents, which are used to inhibit mRNA translation or splicing or to block binding of regulatory microRNAs. New genomes assembled by the JGI for both species are displayed in GBrowse and are also available for searches using BLAST. Researchers can easily navigate from genome content to gene page reports, literature, experimental reagents and many other features using hyperlinks. Xenbase has also greatly expanded image content for figures published in papers describing Xenopus research via PubMedCentral. PMID:25313157

  15. Design, Implementation and Maintenance of a Model Organism Database for Arabidopsis thaliana

    PubMed Central

    Weems, Danforth; Miller, Neil; Garcia-Hernandez, Margarita; Huala, Eva

    2004-01-01

    The Arabidopsis Information Resource (TAIR) is a web-based community database for the model plant Arabidopsis thaliana. It provides an integrated view of genes, sequences, proteins, germplasms, clones, metabolic pathways, gene expression, ecotypes, polymorphisms, publications, maps and community information. TAIR is developed and maintained by collaboration between software developers and biologists. Biologists provide specification and use cases for the system, acquire, analyse and curate data, interact with users and test the software. Software developers design, implement and test the database and software. In this review, we briefly describe how TAIR was built and is being maintained. PMID:18629167

  17. Developing a biocuration workflow for AgBase, a non-model organism database.

    PubMed

    Pillai, Lakshmi; Chouvarine, Philippe; Tudor, Catalina O; Schmidt, Carl J; Vijay-Shanker, K; McCarthy, Fiona M

    2012-01-01

    AgBase provides annotation for agricultural gene products using the Gene Ontology (GO) and Plant Ontology, as appropriate. Unlike model organism species, agricultural species have a body of literature that does not just focus on gene function; to improve efficiency, we use text mining to identify literature for curation. The first component of our annotation interface is the gene prioritization interface that ranks gene products for annotation. Biocurators select the top-ranked gene and mark annotation for these genes as 'in progress' or 'completed'; links enable biocurators to move directly to our biocuration interface (BI). Our BI includes all current GO annotation for gene products and is the main interface to add/modify AgBase curation data. The BI also displays Extracting Genic Information from Text (eGIFT) results for each gene product. eGIFT is a web-based, text-mining tool that associates ranked, informative terms (iTerms), and the articles and sentences containing them, with genes. Moreover, iTerms are linked to GO terms, where they match either a GO term name or a synonym. This enables AgBase biocurators to rapidly identify literature for further curation based on possible GO terms. Because most agricultural species do not have standardized literature, eGIFT searches all gene names and synonyms to associate articles with genes. As many of the gene names can be ambiguous, eGIFT applies a disambiguation step to remove matches that do not correspond to the gene in question, and filtering is applied to remove abstracts that mention a gene in passing. The BI is linked to our Journal Database (JDB) where corresponding journal citations are stored. Just as importantly, biocurators also add to the JDB citations that have no GO annotation. The AgBase BI also supports bulk annotation upload to facilitate our 'Inferred from Electronic Annotation' (IEA) of agricultural gene products. All annotations must pass standard GO Consortium quality checking before release in AgBase.
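The iTerm-to-GO matching step described above (an informative term links to a GO term when it matches the term name or a synonym) can be sketched as follows. The GO IDs and synonym lists are made up for the example; real matching runs against the full ontology.

```python
# Illustrative sketch of iTerm-to-GO matching: a term links to a GO
# record when it equals the GO term name or one of its synonyms.
# GO IDs and synonym sets here are invented for the example.

GO_TERMS = {
    "GO:0006412": {"name": "translation", "synonyms": {"protein synthesis"}},
    "GO:0008152": {"name": "metabolic process", "synonyms": {"metabolism"}},
}

def match_iterm_to_go(iterm):
    """Return GO IDs whose name or synonyms match the iTerm (case-insensitive)."""
    needle = iterm.lower()
    return sorted(
        go_id for go_id, rec in GO_TERMS.items()
        if needle == rec["name"].lower()
        or needle in {s.lower() for s in rec["synonyms"]}
    )

match_iterm_to_go("protein synthesis")  # matches GO:0006412 via a synonym
```

A curator could then follow each matched GO ID back to the sentences that contained the iTerm, which is what makes the ranked terms useful for triaging literature.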

  18. Pancreatic Expression database: a generic model for the organization, integration and mining of complex cancer datasets

    PubMed Central

    Chelala, Claude; Hahn, Stephan A; Whiteman, Hannah J; Barry, Sayka; Hariharan, Deepak; Radon, Tomasz P; Lemoine, Nicholas R; Crnogorac-Jurcevic, Tatjana

    2007-01-01

    …the progression of cancer, cross-platform meta-analysis, SNP selection for pancreatic cancer association studies, cancer gene promoter analysis, as well as mining cancer ontology information. The data model is generic and can be easily extended and applied to other types of cancer. The database is available online with no restrictions for the scientific community at . PMID:18045474

  19. Combining next-generation sequencing and online databases for microsatellite development in non-model organisms

    PubMed Central

    Rico, Ciro; Normandeau, Eric; Dion-Côté, Anne-Marie; Rico, María Inés; Côté, Guillaume; Bernatchez, Louis

    2013-01-01

    Next-generation sequencing (NGS) is revolutionising marker development and the rapidly increasing amount of transcriptomes published across a wide variety of taxa is providing valuable sequence databases for the identification of genetic markers without the need to generate new sequences. Microsatellites are still the most important source of polymorphic markers in ecology and evolution. Motivated by our long-term interest in the adaptive radiation of a non-model species complex of whitefishes (Coregonus spp.), in this study, we focus on microsatellite characterisation and multiplex optimisation using transcriptome sequences generated by Illumina® and Roche-454, as well as online databases of Expressed Sequence Tags (EST) for the study of whitefish evolution and demographic history. We identified and optimised 40 polymorphic loci in multiplex PCR reactions and validated the robustness of our analyses by testing several population genetics and phylogeographic predictions using 494 fish from five lakes and 2 distinct ecotypes. PMID:24296905

  20. Immediate dissemination of student discoveries to a model organism database enhances classroom-based research experiences.

    PubMed

    Wiley, Emily A; Stover, Nicholas A

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately "publish" their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students' efforts to engage the scientific process and pursue additional research opportunities beyond the course.

  1. MaizeGDB: The Maize Model Organism Database for Basic, Translational, and Applied Research

    PubMed Central

    Lawrence, Carolyn J.; Harper, Lisa C.; Schaeffer, Mary L.; Sen, Taner Z.; Seigfried, Trent E.; Campbell, Darwin A.

    2008-01-01

    In 2001 maize became the number one production crop in the world with the Food and Agriculture Organization of the United Nations reporting over 614 million tonnes produced. Its success is due to the high productivity per acre in tandem with a wide variety of commercial uses. Not only is maize an excellent source of food, feed, and fuel, but also its by-products are used in the production of various commercial products. Maize's unparalleled success in agriculture stems from basic research, the outcomes of which drive breeding and product development. In order for basic, translational, and applied researchers to benefit from others' investigations, newly generated data must be made freely and easily accessible. MaizeGDB is the maize research community's central repository for genetics and genomics information. The overall goals of MaizeGDB are to facilitate access to the outcomes of maize research by integrating new maize data into the database and to support the maize research community by coordinating group activities. PMID:18769488

  2. Use of model organism and disease databases to support matchmaking for human disease gene discovery.

    PubMed

    Mungall, Christopher J; Washington, Nicole L; Nguyen-Xuan, Jeremy; Condit, Christopher; Smedley, Damian; Köhler, Sebastian; Groza, Tudor; Shefchek, Kent; Hochheiser, Harry; Robinson, Peter N; Lewis, Suzanna E; Haendel, Melissa A

    2015-10-01

    The Matchmaker Exchange application programming interface (API) allows searching a patient's genotypic or phenotypic profiles across clinical sites, for the purposes of cohort discovery and variant disease causal validation. This API can be used not only to search for matching patients, but also to match against public disease and model organism data. This public disease data enable matching known diseases and variant-phenotype associations using phenotype semantic similarity algorithms developed by the Monarch Initiative. The model data can provide additional evidence to aid diagnosis, suggest relevant models for disease mechanism and treatment exploration, and identify collaborators across the translational divide. The Monarch Initiative provides an implementation of this API for searching multiple integrated sources of data that contextualize the knowledge about any given patient or patient family into the greater biomedical knowledge landscape. While this corpus of data can aid diagnosis, it is also the beginning of research to improve understanding of rare human diseases.
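The Monarch Initiative's matching relies on phenotype semantic similarity over ontology terms. A minimal stand-in for the idea is plain set overlap (Jaccard similarity) between phenotype profiles; the real algorithms are ontology-aware (shared ancestors, information content), and the HPO-style term IDs and disease names below are hypothetical.

```python
def jaccard(profile_a, profile_b):
    """Jaccard similarity between two sets of phenotype term IDs.

    Simplified stand-in: real matchmaking uses ontology-aware measures
    (shared ancestors, information content), not plain set overlap.
    """
    a, b = set(profile_a), set(profile_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_matches(patient, disease_profiles, top=3):
    """Rank known disease profiles by similarity to a patient profile."""
    scored = [(jaccard(patient, terms), disease)
              for disease, terms in disease_profiles.items()]
    return sorted(scored, reverse=True)[:top]

# Hypothetical HPO-style annotations.
patient = {"HP:0001250", "HP:0001263"}          # seizures, developmental delay
diseases = {"disease_A": {"HP:0001250", "HP:0001263", "HP:0004322"},
            "disease_B": {"HP:0000252"}}
```

Running `best_matches(patient, diseases)` ranks disease_A first, illustrating how a patient profile can be matched against public disease data rather than only against other patients.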

  4. A Database for Propagation Models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Rucker, James

    1997-01-01

    The Propagation Models Database is designed to allow scientists and experimenters in the propagation field to process their data through many known and accepted propagation models. The database is an Excel 5.0-based software package that houses user-callable models of propagation phenomena; it does not contain a database of propagation data generated by the experiments. The database not only provides a powerful software tool to process the data generated by the experiments, but is also a time- and energy-saving tool for plotting results, generating tables, and producing impressive and crisp hard copy for presentation and filing.

  5. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    NASA Technical Reports Server (NTRS)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The IMM database is a relational database accessible to many people. It quantifies the model inputs with a Level of Evidence (LOE) ranking, based on the highest value of the data, and a Quality of Evidence (QOE) score that assesses the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers and for other uses.

  6. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does nothing to link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators; a database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate to how the data are used in the model, including stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
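The linking idea, a shared metadata key tying each Basic Event to its data source and derived estimate, can be sketched with dictionaries standing in for the spreadsheet tables. The key names, prior parameters, and observation counts are illustrative; the Bayesian update shown is the standard Gamma-Poisson conjugate form, one common way such updating is done.

```python
# Sketch: a shared metadata key ("VLV-001") links a Basic Event row to
# its data-source row. All field names and numbers are hypothetical.

DATA_SOURCES = {
    "VLV-001": {"source": "generic valve handbook", "alpha": 1.0, "beta": 2.0e5},
}

BASIC_EVENTS = [
    {"event": "valve_fails_to_open", "key": "VLV-001",
     "failures_observed": 2, "hours_observed": 5.0e4},
]

def posterior_rate(key, failures, hours):
    """Gamma-Poisson Bayesian update: the prior (alpha, beta) comes from
    the linked data source; observed failures over observed hours update it."""
    prior = DATA_SOURCES[key]
    return (prior["alpha"] + failures) / (prior["beta"] + hours)

# Each Basic Event's final rate is traceable back through its key.
for ev in BASIC_EVENTS:
    ev["rate"] = posterior_rate(ev["key"], ev["failures_observed"],
                                ev["hours_observed"])
```

Because the key appears in both tables, every computed rate remains traceable to its source, which is the credibility argument the abstract makes.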

  7. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani S.

    1992-01-01

    In June 1991, a paper at the fifteenth NASA Propagation Experimenters Meeting (NAPEX 15) was presented outlining the development of a database for propagation models. The database is designed to allow the scientists and experimenters in the propagation field to process their data through any known and accepted propagation model. The architecture of the database also incorporates the possibility of changing the standard models in the database to fit the scientist's or the experimenter's needs. The database not only provides powerful software to process the data generated by the experiments, but is also a time- and energy-saving tool for plotting results, generating tables, and producing impressive and crisp hard copy for presentation and filing.

  8. Organizing a breast cancer database: data management.

    PubMed

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as a valuable research tool for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Ensuring that the data collection process does not introduce inaccuracies helps assure the overall quality of subsequent analyses. Data management involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting the data through high levels of security. A properly designed database provides access to up-to-date, accurate information. Database design is an important component of application design: if you take the time to design your databases properly, you will be rewarded with a solid foundation on which to build the rest of your application.

  9. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Choung

    1994-01-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC running Excel under Windows or a Macintosh running Excel for Macintosh. All the steps necessary to use the software are easy and often self-explanatory; however, a sample run of the CCIR rain attenuation model is presented.

  10. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Chuong

    1995-01-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC running Excel under Windows or a Macintosh running Excel for Macintosh. All the steps necessary to use the software are easy and often self-explanatory.

  11. A database for propagation models

    NASA Astrophysics Data System (ADS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Chuong

    1995-08-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC running Excel under Windows or a Macintosh running Excel for Macintosh. All the steps necessary to use the software are easy and often self-explanatory.

  12. Software Engineering Laboratory (SEL) database organization and user's guide

    NASA Technical Reports Server (NTRS)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  13. Community Organizing for Database Trial Buy-In by Patrons

    ERIC Educational Resources Information Center

    Pionke, J. J.

    2015-01-01

    Database trials do not often garner a lot of feedback. Using community-organizing techniques can not only potentially increase the amount of feedback received but also deepen the relationship between the librarian and his or her constituent group. This is a case study of the use of community-organizing techniques in a series of database trials for…

  14. DEPOT: A Database of Environmental Parameters, Organizations and Tools

    SciTech Connect

    CARSON,SUSAN D.; HUNTER,REGINA LEE; MALCZYNSKI,LEONARD A.; POHL,PHILLIP I.; QUINTANA,ENRICO; SOUZA,CAROLINE A.; HIGLEY,KATHRYN; MURPHIE,WILLIAM

    2000-12-19

    The Database of Environmental Parameters, Organizations, and Tools (DEPOT) has been developed by the Department of Energy (DOE) as a central warehouse for access to data essential for environmental risk assessment analyses. Initial efforts have concentrated on groundwater and vadose zone transport data and bioaccumulation factors. DEPOT seeks to provide a source of referenced data that, wherever possible, includes the level of uncertainty associated with these parameters. Based on the amount of data available for a particular parameter, uncertainty is expressed as a standard deviation or a distribution function. DEPOT also provides DOE site-specific performance assessment data, pathway-specific transport data, and links to environmental regulations, disposal site waste acceptance criteria, other environmental parameter databases, and environmental risk assessment models.

  15. Conceptual and logical level of database modeling

    NASA Astrophysics Data System (ADS)

    Hunka, Frantisek; Matula, Jiri

    2016-06-01

    Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of the database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.

  16. Organ system heterogeneity DB: a database for the visualization of phenotypes at the organ system level.

    PubMed

    Mannil, Deepthi; Vogt, Ingo; Prinz, Jeanette; Campillos, Monica

    2015-01-01

    Perturbations of mammalian organisms including diseases, drug treatments and gene perturbations in mice affect organ systems differently. Some perturbations impair relatively few organ systems while others lead to highly heterogeneous or systemic effects. Organ System Heterogeneity DB (http://mips.helmholtz-muenchen.de/Organ_System_Heterogeneity/) provides information on the phenotypic effects of 4865 human diseases, 1667 drugs and 5361 genetically modified mouse models on 26 different organ systems. Disease symptoms, drug side effects and mouse phenotypes are mapped to the System Organ Class (SOC) level of the Medical Dictionary for Regulatory Activities (MedDRA). Then, the organ system heterogeneity value, a measure of the systemic impact of a perturbation, is calculated from the relative frequency of phenotypic features across all SOCs. For perturbations of interest, the database displays the distribution of phenotypic effects across organ systems along with the heterogeneity value and the distance between organ system distributions. In this way, it allows, in an easy and comprehensible fashion, the comparison of the phenotypic organ system distributions of diseases, drugs and their corresponding genetically modified mouse models of associated disease genes and drug targets. The Organ System Heterogeneity DB is thus a platform for the visualization and comparison of organ system level phenotypic effects of drugs, diseases and genes.

  17. Organ system heterogeneity DB: a database for the visualization of phenotypes at the organ system level

    PubMed Central

    Mannil, Deepthi; Vogt, Ingo; Prinz, Jeanette; Campillos, Monica

    2015-01-01

    Perturbations of mammalian organisms including diseases, drug treatments and gene perturbations in mice affect organ systems differently. Some perturbations impair relatively few organ systems while others lead to highly heterogeneous or systemic effects. Organ System Heterogeneity DB (http://mips.helmholtz-muenchen.de/Organ_System_Heterogeneity/) provides information on the phenotypic effects of 4865 human diseases, 1667 drugs and 5361 genetically modified mouse models on 26 different organ systems. Disease symptoms, drug side effects and mouse phenotypes are mapped to the System Organ Class (SOC) level of the Medical Dictionary for Regulatory Activities (MedDRA). Then, the organ system heterogeneity value, a measure of the systemic impact of a perturbation, is calculated from the relative frequency of phenotypic features across all SOCs. For perturbations of interest, the database displays the distribution of phenotypic effects across organ systems along with the heterogeneity value and the distance between organ system distributions. In this way, it allows, in an easy and comprehensible fashion, the comparison of the phenotypic organ system distributions of diseases, drugs and their corresponding genetically modified mouse models of associated disease genes and drug targets. The Organ System Heterogeneity DB is thus a platform for the visualization and comparison of organ system level phenotypic effects of drugs, diseases and genes. PMID:25313158

  18. Dynamic publication model for neurophysiology databases.

    PubMed

    Gardner, D; Abato, M; Knuth, K H; DeBellis, R; Erde, S M

    2001-08-29

    We have implemented a pair of database projects, one serving cortical electrophysiology and the other invertebrate neurones and recordings. The design for each combines aspects of two proven schemes for information interchange. The journal article metaphor determined the type, scope, organization and quantity of data to comprise each submission. Sequence databases encouraged intuitive tools for data viewing, capture, and direct submission by authors. Neurophysiology required transcending these models with new datatypes. Time-series, histogram and bivariate datatypes, including illustration-like wrappers, were selected by their utility to the community of investigators. As interpretation of neurophysiological recordings depends on context supplied by metadata attributes, searches are via visual interfaces to sets of controlled-vocabulary metadata trees. Neurones, for example, can be specified by metadata describing functional and anatomical characteristics. Permanence is advanced by data model and data formats largely independent of contemporary technology or implementation, including Java and the XML standard. All user tools, including dynamic data viewers that serve as a virtual oscilloscope, are Java-based, free, multiplatform, and distributed by our application servers to any contemporary networked computer. Copyright is retained by submitters; viewer displays are dynamic and do not violate copyright of related journal figures. Panels of neurophysiologists view and test schemas and tools, enhancing community support.

  19. Database design using NIAM (Nijssen Information Analysis Method) modeling

    SciTech Connect

    Stevens, N.H.

    1989-01-01

    The Nijssen Information Analysis Method (NIAM) is an information modeling technique based on semantics and founded in set theory. A NIAM information model is a graphical representation of the information requirements for some universe of discourse. Information models facilitate data integration and communication within an organization about data semantics. An information model is sometimes referred to as the semantic model or the conceptual schema. It helps in the logical and physical design and implementation of databases. NIAM information modeling is used at Sandia National Laboratories to design and implement relational databases containing engineering information that meets the users' information requirements. The paper focuses on the design of one database that satisfied the data needs of four disjoint but closely related applications. As they existed before, the applications did not communicate with each other, even though they redundantly stored much of the same data. NIAM was used to determine the information requirements and design the integrated database. 6 refs., 7 figs.

  20. Data-based mechanistic modeling of dissolved organic carbon load through storms using continuous 15-minute resolution observations within UK upland watersheds

    NASA Astrophysics Data System (ADS)

    Jones, T.; Chappell, N. A.

    2013-12-01

    Few watershed modeling studies have addressed DOC dynamics through storm hydrographs (notable exceptions include Boyer et al., 1997 Hydrol Process; Jutras et al., 2011 Ecol Model; Xu et al., 2012 Water Resour Res). In part this has been a consequence of an incomplete understanding of the biogeochemical processes leading to DOC export to streams (Neff & Asner, 2001, Ecosystems) & an insufficient frequency of DOC monitoring to capture sometimes complex time-varying relationships between DOC & storm hydrographs (Kirchner et al., 2004, Hydrol Process). We present the results of a new & ongoing UK study that integrates two components - 1/ New observations of DOC concentrations (& derived load) continuously monitored at 15 minute intervals through multiple seasons for replicated watersheds; & 2/ A dynamic modeling technique that is able to quantify storage-decay effects, plus hysteretic, nonlinear, lagged & non-stationary relationships between DOC & controlling variables (including rainfall, streamflow, temperature & specific biogeochemical variables e.g., pH, nitrate). DOC concentration is being monitored continuously using the latest generation of UV spectrophotometers (i.e. S::CAN spectro::lysers) with in situ calibrations to laboratory analyzed DOC. The controlling variables are recorded simultaneously at the same stream stations. The watersheds selected for study are among the most intensively studied basins in the UK uplands, namely the Plynlimon & Llyn Brianne experimental basins. All contain areas of organic soils, with three having improved grasslands & three conifer afforested. The dynamic response characteristics (DRCs) that describe detailed DOC behaviour through sequences of storms are simulated using the latest identification routines for continuous time transfer function (CT-TF) models within the Matlab-based CAPTAIN toolbox (some incorporating nonlinear components). To our knowledge this is the first application of CT-TFs to modelling DOC processes

  1. The methodology of database design in organization management systems

    NASA Astrophysics Data System (ADS)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to designing the conceptual information model, the main principles of developing relational databases are provided and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes the process of presenting the results of analyzing users' information needs and the rationale for the use of classifiers.

  2. Spatial Database Organization for Multi-attribute Sensor Data Representation

    NASA Astrophysics Data System (ADS)

    Gouveia, Feliz R.; Barthes, Jean-Paul A.

    1990-03-01

    This paper surveys spatial database organization and modelling, which is becoming a crucial issue for an ever-increasing number of geometric data manipulation systems. We are interested in efficient representation and storage structures for rapid processing of large sets of geometric data, as required by robotics applications, Very Large Scale Integration (VLSI) layout design, cartography, Computer Aided Design (CAD), and geographic information systems (GIS), where frequent operations involve spatial reasoning over the data. Existing database systems lack the expressiveness to store some kinds of information that are inherently present in a geometric reasoning process, such as metric information, e.g. proximity or parallelism, and topological information, e.g. inclusion, intersection, contiguity, or crossing. Geometric databases (GDB) alleviate this problem by providing an explicit representation of the spatial layout of the world in terms of empty and occupied space, together with a complete description of each object in it. Access to the data is done in an associative manner, that is, by specifying values over some usually small (sub)set of attributes, e.g. the coordinates of physical space. Manipulating data in GDB systems often involves spatially localized operations: locations, and consequently objects, that are accessed in the present are likely to be accessed again in the near future. This locality of reference, which Hegron [24] calls temporal coherence, is due mainly to real-world physical constraints. Indeed, if accesses are made, for example, by a sensor module inspecting its surroundings, it is reasonable to suppose that successively scanned territories are not very far apart.

  3. Hierarchical clustering techniques for image database organization and summarization

    NASA Astrophysics Data System (ADS)

    Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    This paper investigates clustering techniques as a method of organizing image databases to support popular visual management functions such as searching, browsing and navigation. Different types of hierarchical agglomerative clustering techniques are studied as a method of organizing feature space as well as summarizing image groups through the selection of a few appropriate representatives. Retrieval performance using both single- and multiple-level hierarchies is evaluated, and the algorithms show an interesting relationship between the top-k correct retrievals and the number of comparisons required. Some arguments are given to support the use of such cluster-based techniques for managing distributed image databases.

  4. Web resources for model organism studies.

    PubMed

    Tang, Bixia; Wang, Yanqing; Zhu, Junwei; Zhao, Wenming

    2015-02-01

    An ever-growing number of resources on model organisms have emerged with the continued development of sequencing technologies. In this paper, we review 13 model organism databases, most of which are listed by the National Institutes of Health of the United States (NIH; http://www.nih.gov/science/models/). We provide a brief description of each database, detail its data sources and types, functions, tools, and availability of access, and offer a quality assessment of these databases. Significantly, the organism databases instituted in the early 1990s, such as the Mouse Genome Database (MGD), Saccharomyces Genome Database (SGD), and FlyBase, have developed into what are now comprehensive, core authority resources. Furthermore, all of the databases mentioned here are updated continually according to user feedback and advancing technologies.

  5. Organic materials database: An open-access online database for data mining

    PubMed Central

    Geilhufe, R. Matthias; Balatsky, Alexander V.

    2017-01-01

    We present an organic materials database (OMDB) hosting thousands of Kohn-Sham electronic band structures, which is freely accessible online at http://omdb.diracmaterials.org. The OMDB focus lies on electronic structure, density of states and other properties for purely organic and organometallic compounds that are known to date. The electronic band structures are calculated using density functional theory for the crystal structures contained in the Crystallography Open Database. The OMDB web interface allows users to retrieve materials with specified target properties using non-trivial queries about their electronic structure. We illustrate the use of the OMDB and how it can become an organic part of search and prediction of novel functional materials via data mining techniques. As a specific example, we provide data mining results for metals and semiconductors, which are known to be rare in the class of organic materials. PMID:28182744

  6. Organic materials database: An open-access online database for data mining.

    PubMed

    Borysov, Stanislav S; Geilhufe, R Matthias; Balatsky, Alexander V

    2017-01-01

    We present an organic materials database (OMDB) hosting thousands of Kohn-Sham electronic band structures, which is freely accessible online at http://omdb.diracmaterials.org. The OMDB focus lies on electronic structure, density of states and other properties for purely organic and organometallic compounds that are known to date. The electronic band structures are calculated using density functional theory for the crystal structures contained in the Crystallography Open Database. The OMDB web interface allows users to retrieve materials with specified target properties using non-trivial queries about their electronic structure. We illustrate the use of the OMDB and how it can become an organic part of search and prediction of novel functional materials via data mining techniques. As a specific example, we provide data mining results for metals and semiconductors, which are known to be rare in the class of organic materials.

  7. MODBASE, a database of annotated comparative protein structure models.

    PubMed

    Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C; Ilyin, Valentin A; Sali, Andrej

    2002-01-01

    MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10(-4)) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server.

  8. Development and Mining of a Volatile Organic Compound Database

    PubMed Central

    Abdullah, Azian Azamimi; Altaf-Ul-Amin, Md.; Ono, Naoaki; Sato, Tetsuo; Sugiura, Tadao; Morita, Aki Hirai; Katsuragi, Tetsuo; Muto, Ai; Nishioka, Takaaki; Kanaya, Shigehiko

    2015-01-01

    Volatile organic compounds (VOCs) are small molecules that exhibit high vapor pressure under ambient conditions and have low boiling points. Although VOCs contribute only a small proportion of the total metabolites produced by living organisms, they play an important role in chemical ecology, specifically in the biological interactions between organisms and ecosystems. VOCs are also important in the health care field, as they are presently used as biomarkers to detect various human diseases. Until now, information on VOCs has been scattered across the literature, and no database describing VOCs and their biological activities has been available. To this end, we have developed the KNApSAcK Metabolite Ecology Database, which contains information on the relationships between VOCs and their emitting organisms. The KNApSAcK Metabolite Ecology Database is also linked with the KNApSAcK Core and KNApSAcK Metabolite Activity databases to provide further information on the metabolites and their biological activities. The VOC database can be accessed online. PMID:26495281

  9. The Arabidopsis Information Resource (TAIR): a model organism database providing a centralized, curated gateway to Arabidopsis biology, research materials and community.

    PubMed

    Rhee, Seung Yon; Beavis, William; Berardini, Tanya Z; Chen, Guanghong; Dixon, David; Doyle, Aisling; Garcia-Hernandez, Margarita; Huala, Eva; Lander, Gabriel; Montoya, Mary; Miller, Neil; Mueller, Lukas A; Mundodi, Suparna; Reiser, Leonore; Tacklind, Julie; Weems, Dan C; Wu, Yihe; Xu, Iris; Yoo, Daniel; Yoon, Jungwon; Zhang, Peifen

    2003-01-01

    Arabidopsis thaliana is the most widely studied plant today. The concerted efforts of over 11 000 researchers and 4000 organizations around the world are generating a rich diversity and quantity of information and materials. This information is made available through a comprehensive on-line resource called the Arabidopsis Information Resource (TAIR) (http://arabidopsis.org), which is accessible via commonly used web browsers and can be searched and downloaded in a number of ways. In the last two years, efforts have been focused on increasing data content and diversity, functionally annotating genes and gene products with controlled vocabularies, and improving data retrieval, analysis and visualization tools. New information includes sequence polymorphisms including alleles, germplasms and phenotypes, Gene Ontology annotations, gene families, protein information, metabolic pathways, gene expression data from microarray experiments, and seed and DNA stocks. New data visualization and analysis tools include SeqViewer, which interactively displays the genome from the whole chromosome down to 10 kb of nucleotide sequence, and AraCyc, a metabolic pathway database and map tool that allows overlaying expression data onto pathway diagrams. Finally, we have recently incorporated seed and DNA stock information from the Arabidopsis Biological Resource Center (ABRC) and implemented a shopping-cart style on-line ordering system.

  10. An information model based weld schedule database

    SciTech Connect

    Kleban, S.D.; Knorovsky, G.A.; Hicken, G.K.; Gershanok, G.A.

    1997-08-01

    As part of a computerized system (SmartWeld) developed at Sandia National Laboratories to facilitate agile manufacturing of welded assemblies, a weld schedule database (WSDB) was also developed. SmartWeld's overall goals are to shorten the design-to-product time frame and to promote right-the-first-time weldment design and manufacture by providing welding process selection guidance to component designers. The associated WSDB evolved into a substantial subproject by itself. At first, it was thought that the database would store perhaps 50 parameters about a weld schedule. This was a woeful underestimate: the current WSDB has over 500 parameters defined in 73 tables. This includes data about the weld, the piece parts involved, the piece part geometry, and great detail about the schedule and intervals involved in performing the weld. This complex database was built using information modeling techniques. Information modeling is a process that creates a model of objects and their roles for a given domain (i.e. welding). The Natural-Language Information Analysis Methodology (NIAM) technique was used, which is characterized by: (1) elementary facts being stated in natural language by the welding expert, (2) determinism (the resulting model is provably repeatable, i.e. it gives the same answer every time), and (3) extensibility (the model can be added to without changing existing structure). The information model produced a highly normalized relational schema that was translated to the Oracle relational database management system for implementation.
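
    As a rough sketch of what a normalized relational schema of this kind looks like in practice (the tables and columns below are invented for illustration; they are not Sandia's actual 73-table design), SQLite can model piece parts, welds, and schedule intervals as separate linked tables:

    ```python
    # Hypothetical, heavily simplified weld-schedule schema: three normalized
    # tables instead of 73, joined back together by foreign keys.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE piece_part (
        id INTEGER PRIMARY KEY, material TEXT, thickness_mm REAL);
    CREATE TABLE weld (
        id INTEGER PRIMARY KEY,
        part_a INTEGER REFERENCES piece_part(id),
        part_b INTEGER REFERENCES piece_part(id),
        process TEXT);
    CREATE TABLE schedule_interval (
        weld_id INTEGER REFERENCES weld(id),
        step INTEGER, current_a REAL, duration_ms REAL);
    """)
    con.execute("INSERT INTO piece_part VALUES (1, '304L SS', 1.5), (2, '304L SS', 2.0)")
    con.execute("INSERT INTO weld VALUES (1, 1, 2, 'GTAW')")
    con.executemany("INSERT INTO schedule_interval VALUES (?, ?, ?, ?)",
                    [(1, 1, 80.0, 250.0), (1, 2, 55.0, 400.0)])

    # Joins reassemble a complete schedule from the normalized tables
    rows = con.execute("""
        SELECT w.process, p.material, s.step, s.current_a
        FROM weld w
        JOIN piece_part p ON p.id = w.part_a
        JOIN schedule_interval s ON s.weld_id = w.id
        ORDER BY s.step
    """).fetchall()
    ```

    Normalization keeps each fact (a part's material, a weld's process, an interval's current) in exactly one place, which is what makes a 500-parameter schedule maintainable.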

  11. The database for reaching experiments and models.

    PubMed

    Walker, Ben; Kording, Konrad

    2013-01-01

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  12. Hydroacoustic forcing function modeling using DNS database

    NASA Technical Reports Server (NTRS)

    Zawadzki, I.; Gershfield, J. L.; Na, Y.; Wang, M.

    1996-01-01

    A wall pressure frequency spectrum model (Blake 1971) has been evaluated using databases from Direct Numerical Simulations (DNS) of a turbulent boundary layer (Na & Moin 1996). Good agreement is found for moderate to strong adverse pressure gradient flows in the absence of separation. In the separated flow region, the model underpredicts the directly calculated spectra by an order of magnitude. The discrepancy is attributed to the violation of the model assumptions in that part of the flow domain. DNS computed coherence length scales and the normalized wall pressure cross-spectra are compared with experimental data. The DNS results are consistent with experimental observations.

  13. Combining Soil Databases for Topsoil Organic Carbon Mapping in Europe

    PubMed Central

    Aksoy, Ece

    2016-01-01

    Accuracy in assessing the distribution of soil organic carbon (SOC) is an important issue because SOC plays key roles in the functions of both natural ecosystems and agricultural systems. Several studies in the literature have aimed to find the best method to assess and map the distribution of SOC content for Europe. This study therefore examines another aspect of the issue: the performance of models built from aggregated soil samples coming from different studies and land uses. The total number of soil samples in this study was 23,835, collected from the “Land Use/Cover Area frame Statistical Survey” (LUCAS) Project (samples from agricultural soil), the BioSoil Project (samples from forest soil), and the “Soil Transformations in European Catchments” (SoilTrEC) Project (local soil data from six different critical zone observatories (CZOs) in Europe). Moreover, 15 spatial indicators (slope, aspect, elevation, compound topographic index (CTI), CORINE land-cover classification, parent material, texture, world reference base (WRB) soil classification, geological formations, annual average temperature, min-max temperature, and total and average precipitation (for 1960–1990 and 2000–2010)) were used as auxiliary variables in the prediction. One of the most popular geostatistical techniques, Regression-Kriging (RK), was applied to build the model and assess the distribution of SOC. This study showed that, even though the RK method was appropriate for successful SOC mapping, using combined databases did not increase the statistical significance of the results for assessing the SOC distribution. According to our results, SOC variation was mainly affected by the elevation, slope, CTI, average temperature, average and total precipitation, texture, WRB and CORINE variables at the European scale in our model. Moreover, the highest average SOC contents were found in the wetland areas

  14. Combining Soil Databases for Topsoil Organic Carbon Mapping in Europe.

    PubMed

    Aksoy, Ece; Yigini, Yusuf; Montanarella, Luca

    2016-01-01

Accurate assessment of the distribution of soil organic carbon (SOC) is an important issue because SOC plays key roles in the functioning of both natural ecosystems and agricultural systems. Several studies in the literature aim to find the best method to assess and map the distribution of SOC content for Europe. This study therefore examines another aspect of the issue by evaluating the performance of aggregated soil samples drawn from different studies and land uses. The 23,835 soil samples used in this study were collected from the "Land Use/Cover Area frame Statistical Survey" (LUCAS) Project (agricultural soils), the BioSoil Project (forest soils), and the "Soil Transformations in European Catchments" (SoilTrEC) Project (local soil data from six critical zone observatories (CZOs) in Europe). In addition, 15 spatial indicators (slope, aspect, elevation, compound topographic index (CTI), CORINE land-cover classification, parent material, texture, World Reference Base (WRB) soil classification, geological formations, annual average temperature, min-max temperature, and total and average precipitation (for 1960-1990 and 2000-2010)) were used as auxiliary variables in the prediction. Regression-kriging (RK), one of the most popular geostatistical techniques, was applied to build the model and assess the distribution of SOC. This study showed that, although the RK method was appropriate for successful SOC mapping, using combined databases did not increase the statistical significance of the results. According to our results, SOC variation at the European scale was mainly affected by the elevation, slope, CTI, average temperature, average and total precipitation, texture, WRB, and CORINE variables in our model. Moreover, the highest average SOC contents were found in the wetland areas; agricultural
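The regression-kriging approach named in this abstract combines a regression on covariates with kriging of the regression residuals. A minimal sketch of that idea on synthetic stand-in data, assuming an exponential residual covariance; all variable names, parameter values, and data here are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: sample coordinates, two covariates (think
# elevation and slope), and SOC values with a linear trend plus noise
n = 60
coords = rng.uniform(0, 100, size=(n, 2))
X = rng.normal(size=(n, 2))
soc = 30 + 4*X[:, 0] - 2*X[:, 1] + rng.normal(scale=1.0, size=n)

# Step 1: regression component (ordinary least squares on the covariates)
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, soc, rcond=None)
resid = soc - A @ beta

# Step 2: simple kriging of the residuals with an assumed exponential
# covariance model (sill, range, nugget are illustrative choices)
sill, corr_range, nugget = resid.var(), 30.0, 0.1

def cov(h):
    return sill * np.exp(-h / corr_range)

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = cov(d) + nugget * np.eye(n)

def rk_predict(xy, x_cov):
    """Regression-kriging prediction at location xy with covariates x_cov."""
    c0 = cov(np.linalg.norm(coords - xy, axis=1))
    w = np.linalg.solve(C, c0)                  # kriging weights
    trend = np.concatenate([[1.0], x_cov]) @ beta
    return trend + w @ resid

pred = rk_predict(np.array([50.0, 50.0]), np.array([0.5, -0.5]))
print(round(float(pred), 2))
```

The prediction is the fitted trend at the target location plus the kriged residual, which is the essence of RK regardless of which covariates or variogram model a real study would use.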

  15. Assessment of the SFC database for analysis and modeling

    NASA Technical Reports Server (NTRS)

    Centeno, Martha A.

    1994-01-01

SFC is one of the four clusters that make up the Integrated Work Control System (IWCS), which will integrate the shuttle processing databases at Kennedy Space Center (KSC). The IWCS framework will enable communication among the four clusters and add new data collection protocols. The Shop Floor Control (SFC) module has been operational for two and a half years; however, at this stage, automatic links to the other three modules have not yet been implemented, except for a partial link to IOS (CASPR). SFC revolves around a DB/2 database with PFORMS acting as the database management system (DBMS). PFORMS is an off-the-shelf DB/2 application that provides a set of data entry screens and query forms. The main dynamic entity in the SFC and IOS databases is a task; thus, the physical storage location and update privileges are driven by the status of the WAD. As we explored the SFC values, we realized that there was much to do before actually engaging in continuous analysis of the SFC data. Halfway into this effort, it was realized that full-scale analysis would have to be a future third phase of this effort, so we concentrated on getting to know the contents of the database and on establishing an initial set of tools to start the continuous analysis process. Specifically, we set out to: (1) provide specific procedures for statistical models, so as to enhance the TP-OAO office's analysis and modeling capabilities; (2) design a data exchange interface; (3) prototype the interface to provide inputs to SCRAM; and (4) design a modeling database. These objectives were set with the expectation that, if met, they would provide former TP-OAO engineers with tools that would help them demonstrate the importance of process-based analyses. The latter, in turn, will help them obtain the cooperation of various organizations in charting out their individual processes.

  16. Spatial Database Modeling for Indoor Navigation Systems

    NASA Astrophysics Data System (ADS)

    Gotlib, Dariusz; Gnat, Miłosz

    2013-12-01

For many years, cartographers have been involved in designing GIS and navigation systems. Most GIS applications use outdoor data; increasingly, however, similar applications are used inside buildings, so it is important to find a proper model for an indoor spatial database. The development of indoor navigation systems should draw on advanced teleinformatic, geoinformatic, geodetic, and cartographic knowledge. The authors present the fundamental requirements for an indoor data model for navigation purposes. Reviewing some of the solutions adopted around the world, they emphasize that navigation applications require specific data to present navigation routes in the right way. An original indoor data model, created by the authors on the basis of the BISDM model, is presented. Its purpose is to expand the opportunities for use in indoor navigation.

  17. Asteroid models from the Lowell photometric database

    NASA Astrophysics Data System (ADS)

    Ďurech, J.; Hanuš, J.; Oszkiewicz, D.; Vančo, R.

    2016-03-01

Context. Information about the shapes and spin states of individual asteroids is important for the study of the asteroid population as a whole. For asteroids from the main belt, most of the shape models available now have been reconstructed from disk-integrated photometry by the lightcurve inversion method. Aims: We want to significantly enlarge the current sample (~350) of available asteroid models. Methods: We use the lightcurve inversion method to derive new shape models and spin states of asteroids from the sparse-in-time photometry compiled in the Lowell Photometric Database. To speed up the time-consuming process of scanning the period parameter space with convex shape models, we use the distributed computing project Asteroids@home, running on the Berkeley Open Infrastructure for Network Computing (BOINC) platform. This way, the period-search interval is divided into hundreds of smaller intervals. These intervals are scanned separately by different volunteers and then joined together. We also use an alternative, faster approach that searches for the best-fit period using a triaxial ellipsoid model. With this, we can independently confirm periods found with convex models and also find rotation periods for some of those asteroids for which the convex-model approach gives too many solutions. Results: From the analysis of Lowell photometric data of the first 100,000 numbered asteroids, we derived 328 new models. This almost doubles the number of available models. We tested the reliability of our results by comparing models derived purely from Lowell data with those based on dense lightcurves, and we found that the rate of false-positive solutions is very low. We also present updated plots of the distribution of spin obliquities and pole ecliptic longitudes that confirm previous findings about a non-uniform distribution of spin axes. 
However, the models reconstructed from noisy sparse data are heavily biased towards more elongated bodies with high
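The period search described above, in which the trial-period range is divided into smaller intervals that are scanned separately and then joined, can be illustrated on synthetic photometry. This sketch uses a simple two-harmonic Fourier fit at each trial period as a stand-in for the full inversion chi-square; the data, period range, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sparse-in-time photometry of a hypothetical asteroid
true_period = 6.5                                   # rotation period, hours
t = np.sort(rng.uniform(0, 300, 400))               # observation epochs, hours
mag = (0.3*np.sin(4*np.pi*t/true_period)            # dominant second harmonic
       + 0.1*np.sin(2*np.pi*t/true_period + 1.0)    # weaker first harmonic
       + rng.normal(scale=0.05, size=t.size))

def best_fit_in_interval(p_lo, p_hi, n_trial=400):
    """Scan one period sub-interval; return (chi2, period) of the best fit."""
    best = (np.inf, None)
    for p in np.linspace(p_lo, p_hi, n_trial):
        w = 2*np.pi/p
        # Two-harmonic Fourier series at trial period p (linear least squares)
        A = np.column_stack([np.ones_like(t),
                             np.sin(w*t), np.cos(w*t),
                             np.sin(2*w*t), np.cos(2*w*t)])
        coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
        chi2 = np.sum((mag - A @ coef)**2)
        if chi2 < best[0]:
            best = (chi2, p)
    return best

# Divide the search range into sub-intervals (a stand-in for the distributed
# work units), scan each separately, then join the results
edges = np.linspace(2.0, 12.0, 11)
results = [best_fit_in_interval(a, b) for a, b in zip(edges[:-1], edges[1:])]
chi2, period = min(results)
print(round(period, 3))
```

Because each sub-interval is scanned independently, the work parallelizes trivially; joining the results is just taking the global chi-square minimum, which is what makes the BOINC-style distribution of the search straightforward.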

  18. A Model Based Mars Climate Database for the Mission Design

    NASA Technical Reports Server (NTRS)

    2005-01-01

A viewgraph presentation on a model-based climate database is shown. The topics include: 1) Why a model-based climate database?; 2) Mars Climate Database v3.1: who uses it? (approx. 60 users!); 3) The new Mars Climate Database, MCD v4.0; 4) MCD v4.0: what's new?; 5) Simulation of water ice clouds; 6) Simulation of the water ice cycle; 7) A new tool for surface pressure prediction; 8) Access to the MCD 4.0 database; 9) How to access the database; and 10) New web access.

  19. Modeling and Simulation Terrain Database Management

    DTIC Science & Technology

    2005-07-01

data for use in combat modeling. The SVDR is still under development by the Science Applications International Corporation (SAIC) in conjunction with...DOD organization for communication between unmanned systems, but how to communicate terrain between systems has not yet been addressed.

  20. Integrated Space Asset Management Database and Modeling

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of many types of data related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for
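The abstract notes that SAM-D keys records on the NORAD catalog number and derives orbital characteristics from two-line element (TLE) data. A self-contained sketch of that step, parsing a TLE and deriving the semi-major axis from the mean motion under a two-body approximation; the element set below is illustrative (the epoch and checksums are made up), and this is not SAM-D's actual ingest code:

```python
import math

# Illustrative two-line element set for the ISS (NORAD 25544); the epoch and
# checksums here are invented for the example, not a live element set.
line1 = "1 25544U 98067A   20062.59097222  .00016717  00000-0  10270-3 0  9000"
line2 = "2 25544  51.6442 147.3515 0004607  95.6521 264.4940 15.49249062215106"

def parse_tle(l1, l2):
    """Extract the NORAD id and basic orbital characteristics from a TLE."""
    fields = l2.split()
    norad = int(l1[2:7])                   # catalog number, columns 3-7
    incl_deg = float(fields[2])            # inclination, degrees
    ecc = float("0." + fields[4])          # eccentricity, leading "0." implied
    mean_motion = float(fields[7][:11])    # revolutions per day (11 chars)
    # Semi-major axis from mean motion, two-body approximation
    mu = 398600.4418                       # Earth GM, km^3/s^2
    n = mean_motion * 2*math.pi / 86400.0  # mean motion in rad/s
    a_km = (mu / n**2) ** (1.0/3.0)
    return {"norad": norad, "incl_deg": incl_deg, "ecc": ecc, "a_km": a_km}

rec = parse_tle(line1, line2)
print(rec["norad"], rec["incl_deg"], round(rec["a_km"], 1))
```

The NORAD number extracted here is the unique key against which a catalog like SAM-D could cross-reference metadata such as functional status and country of ownership.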

  1. A computational framework for a database of terrestrial biosphere models

    NASA Astrophysics Data System (ADS)

    Metzler, Holger; Müller, Markus; Ceballos-Núñez, Verónika; Sierra, Carlos A.

    2016-04-01

Most terrestrial biosphere models consist of a set of coupled first-order ordinary differential equations. Each equation represents a pool containing carbon with a certain turnover rate. Although such models share some basic mathematical structure, they can have very different properties, such as the number of pools, cycling rates, and internal fluxes. We present a computational framework that helps analyze the structure and behavior of terrestrial biosphere models, using the process of soil organic matter decomposition as an example. The same framework can also be used for other sub-processes such as carbon fixation or allocation. First, the models have to be fed into a database consisting of simple text files with a common structure. Then they are read in using Python and transformed into an internal 'Model Class' that can be used to automatically create an overview stating the model's structure, state variables, and internal and external fluxes. SymPy, a Python library for symbolic mathematics, also helps calculate the Jacobian matrix, at given steady states where available, and the eigenvalues of this matrix. If complete parameter sets are available, the model can also be run using R to simulate its behavior under certain conditions and to support a deeper stability analysis. In this case, the framework is also able to provide phase-plane plots where appropriate. Furthermore, an overview of all the models in the database can be given to help identify their similarities and differences.
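The symbolic analysis the abstract describes, such as computing the Jacobian and its eigenvalues, can be sketched with SymPy on a hypothetical two-pool carbon model; the pool structure and parameter names here are invented for illustration, not taken from the database:

```python
import sympy as sp

# Hypothetical two-pool soil-carbon model dC/dt = f(C): pool 1 receives
# input u and decays at rate k1; a fraction a21 of its outflow enters pool 2.
C1, C2 = sp.symbols("C1 C2", positive=True)
k1, k2, a21, u = sp.symbols("k1 k2 a21 u", positive=True)

f = sp.Matrix([u - k1*C1,
               a21*k1*C1 - k2*C2])

J = f.jacobian(sp.Matrix([C1, C2]))                 # Jacobian of the system
steady = sp.solve([f[0], f[1]], [C1, C2], dict=True)[0]
eigs = list(J.eigenvals().keys())                   # eigenvalues of J
print(J)
print(steady)
print(eigs)
```

For this linear model the eigenvalues are simply the negative turnover rates, so the steady state is stable; for nonlinear models, evaluating the Jacobian at the solved steady state gives the same kind of stability information.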

  2. Synthesized Population Databases: A US Geospatial Database for Agent-Based Models.

    PubMed

    Wheaton, William D; Cajka, James C; Chasteen, Bernadette M; Wagener, Diane K; Cooley, Philip C; Ganapathi, Laxminarayana; Roberts, Douglas J; Allpress, Justine L

    2009-05-01

Agent-based models simulate large-scale social systems. They assign behaviors and activities to "agents" (individuals) within the population being modeled and then allow the agents to interact with the environment and each other in complex simulations. Agent-based models are frequently used to simulate infectious disease outbreaks, among other uses. RTI used and extended an iterative proportional fitting method to generate a synthesized, geospatially explicit, human agent database that represents the US population in the 50 states and the District of Columbia in the year 2000. Each agent is assigned to a household; other agents make up the household occupants. For this database, RTI developed the methods for: generating synthesized households and persons; assigning agents to schools and workplaces so that complex interactions among agents as they go about their daily activities can be taken into account; and generating synthesized human agents who occupy group quarters (military bases, college dormitories, prisons, nursing homes). In this report, we describe both the methods used to generate the synthesized population database and the final data structure and data content of the database. This information will provide researchers with the information they need to use the database in developing agent-based models. Portions of the synthesized agent database are available to any user upon request. RTI will extract a portion (a county, region, or state) of the database for users who wish to use this database in their own agent-based models.
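Iterative proportional fitting, the method RTI extended, repeatedly rescales a seed contingency table until its margins match known totals. A toy sketch with invented counts and targets (not the RTI data):

```python
import numpy as np

# Toy iterative proportional fitting (IPF): scale a seed table of household
# counts until its margins match target totals. All numbers are invented.
seed = np.array([[40.0, 30.0, 10.0],
                 [20.0, 25.0, 15.0]])        # e.g. region x household size
row_targets = np.array([90.0, 70.0])         # target households per region
col_targets = np.array([70.0, 60.0, 30.0])   # target households per size class

table = seed.copy()
for _ in range(100):
    table *= (row_targets / table.sum(axis=1))[:, None]  # fit row margins
    table *= col_targets / table.sum(axis=0)             # fit column margins

print(np.round(table, 2))
```

The fitted table preserves the interaction structure of the seed while matching the marginal totals, which is why IPF is a natural tool for synthesizing populations from sample data plus census margins.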

  3. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    SciTech Connect

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  4. Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms

    PubMed Central

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S.

    2016-01-01

Voluminous information is available on karyological studies of fishes; however, only limited efforts have been made to compile and curate the available karyological data in digital form. The 'Fish Karyome' database was a preliminary attempt to compile and digitize the available karyological information on finfishes of the Indian subcontinent. However, the database was limited, covering only Indian finfishes with limited search options. In response to user feedback and the database's utility in fish cytogenetic studies, Fish Karyome was upgraded using Linux, Apache, MySQL and PHP (Hypertext Preprocessor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the available chromosomal information from across the globe on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is being updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information based on habitat, family, conservation status and chromosome number. The system also displays chromosome numbers in model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility has also been provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome PMID:26980518

  5. Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms.

    PubMed

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S

    2016-01-01

Voluminous information is available on karyological studies of fishes; however, only limited efforts have been made to compile and curate the available karyological data in digital form. The 'Fish Karyome' database was a preliminary attempt to compile and digitize the available karyological information on finfishes of the Indian subcontinent. However, the database was limited, covering only Indian finfishes with limited search options. In response to user feedback and the database's utility in fish cytogenetic studies, Fish Karyome was upgraded using Linux, Apache, MySQL and PHP (Hypertext Preprocessor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the available chromosomal information from across the globe on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is being updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information based on habitat, family, conservation status and chromosome number. The system also displays chromosome numbers in model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility has also been provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome.

  6. Sequence modelling and an extensible data model for genomic database

    SciTech Connect

Li, Peter Wei-Der

    1992-01-01

The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that can incorporate the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  7. Sequence modelling and an extensible data model for genomic database

    SciTech Connect

Li, Peter Wei-Der (Lawrence Berkeley Lab., CA)

    1992-01-01

The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that can incorporate the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  8. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    NASA Technical Reports Server (NTRS)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  9. Solid Waste Projection Model: Database (Version 1. 3)

    SciTech Connect

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement.

  10. Object-Oriented Geographical Database Model

    NASA Technical Reports Server (NTRS)

    Johnson, M. L.; Bryant, N.; Sapounas, D.

    1996-01-01

Terbase is an Object-Oriented database system under development at the Jet Propulsion Laboratory (JPL). Terbase is designed for flexibility, reusability, ease of maintenance, multi-user collaboration and independence, and efficiency. This paper details the design and development of Terbase as a geographic data server...

  11. EPA's Drinking Water Treatability Database and Treatment Cost Models

    EPA Science Inventory

The USEPA Drinking Water Treatability Database and Drinking Water Treatment Cost Models are valuable tools for determining the effectiveness and cost of treatment for contaminants of emerging concern. The models will be introduced, explained, and demonstrated.

  12. Integrated Space Asset Management Database and Modeling

    NASA Astrophysics Data System (ADS)

    Gagliano, L.; MacLeod, T.; Mason, S.; Percy, T.; Prescott, J.

The Space Asset Management Database (SAM-D) was implemented in order to effectively track known objects in space by ingesting information from a variety of databases and performing calculations to determine the expected position of an object at a specified time. While SAM-D performs this task very well, it is limited by technology and is not available outside of the local user base. Modeling and simulation can be powerful tools to exploit the information contained in SAM-D. However, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. A more capable data management infrastructure would extend SAM-D to support the larger data sets to be generated by the COI. A service-oriented architecture model will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques, and a user interface for visualizations. Based on a web-centric approach, the entire COI will be able to access the data and related analytics. In addition, tight control of information-sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. SIMON is a government off-the-shelf information sharing platform in use throughout the DoD and DHS information sharing and situation awareness communities. SIMON provides fine-grained control to data owners, allowing them to determine exactly how and when their data are shared. SIMON supports a micro-service approach to system development, meaning M&S and analytic services can be easily built or adapted. It is uniquely positioned to fill this need as an information-sharing platform with a proven track record of successful situational awareness system deployments. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust SA environment for the NASA SA COI that can be extended and expanded indefinitely. 

  13. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to the modern life sciences. It records, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of keywords that covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object-relational database that structures the metadata and facilitates the provision of a complete service. The BioImage database can be accessed through the Internet.

  14. FEMA Database Requirements Assessment and Resource Directory Model.

    DTIC Science & Technology

    1982-05-01

Appropriate databases, publicly available online (chosen for online testing): 1. Agricultural Online Access (AGRICOLA) -- worldwide coverage of the... ...the synthesis and application of organic chemical compounds. * Mechanical Engineering (MechEn) -- covers the literature of machinery, mechanical... (The snippet ends with fragments of a results table of tested citations and false-drop rates by database and vendor, e.g. AGRICOLA via BRS: 545 citations tested, 22.06% drops.)

  15. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    SciTech Connect

    Ferragut, Erik M; Laska, Jason A

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
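One concrete way to compute "the probability that a single model could have generated both fields" for categorical field values is to compare Dirichlet-multinomial marginal likelihoods of the pooled counts against the two fields modeled separately. The sketch below illustrates that idea under a symmetric Dirichlet prior and a 50/50 prior on the two hypotheses; it is an illustration of the general approach, not the authors' actual models:

```python
from collections import Counter
from math import lgamma, exp

def log_evidence(counts, alpha=1.0, categories=None):
    """Dirichlet-multinomial marginal likelihood of categorical counts."""
    cats = categories if categories is not None else set(counts)
    K, N = len(cats), sum(counts.values())
    out = lgamma(K*alpha) - lgamma(K*alpha + N)
    for c in cats:
        out += lgamma(alpha + counts.get(c, 0)) - lgamma(alpha)
    return out

def prob_same_source(field_a, field_b, alpha=1.0):
    """P(one model generated both fields) under a 50/50 prior."""
    ca, cb = Counter(field_a), Counter(field_b)
    cats = set(ca) | set(cb)
    same = log_evidence(ca + cb, alpha, cats)            # one shared model
    diff = (log_evidence(ca, alpha, cats)                # two separate models
            + log_evidence(cb, alpha, cats))
    return 1.0 / (1.0 + exp(diff - same))

# Hypothetical field samples from two tables being matched
states = ["NY", "CA", "TX", "NY", "CA", "NY"]
zips = ["10001", "94105", "73301", "10002", "94110", "10003"]
print(prob_same_source(states, states), prob_same_source(states, zips))
```

Fields with similar value distributions score a high probability of sharing a generating model, while dissimilar fields score low, which is the comparison a schema matcher can use to rank candidate field pairs.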

  16. Expanding on Successful Concepts, Models, and Organization

    EPA Science Inventory

If the goal of the AEP framework were to replace existing exposure models or databases for organizing exposure data with a concept, we would share Dr. von Göetz's concerns. Instead, the outcome we promote is broader use of an organizational framework for exposure science. The f...

  17. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. To simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. This study examines the effect of such modeling assumptions on the evaluation of one measure, the number of transaction rollbacks, in a partitioned distributed database system. Six probabilistic models are developed, along with expressions for the number of rollbacks under each model; essentially, the models differ in the amount of system information they assume to be available. The analytical results are compared to results from simulation, and it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
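A rollback count of the kind evaluated above can also be estimated by direct simulation. The toy Monte Carlo below rolls back any transaction whose item set conflicts with an earlier one in the same concurrent batch (first-committer-wins); all parameters are hypothetical and the conflict rule is far simpler than the paper's six models.

```python
import random

def estimate_rollbacks(n_txns=1000, n_items=100, items_per_txn=3,
                       concurrency=10, seed=42):
    """Monte Carlo estimate of rollbacks: transactions run in batches of
    `concurrency`; within a batch, a transaction whose item set overlaps
    an earlier transaction's set is rolled back."""
    rng = random.Random(seed)
    rollbacks = 0
    for start in range(0, n_txns, concurrency):
        locked = set()  # items touched so far in this batch
        for _ in range(min(concurrency, n_txns - start)):
            items = set(rng.sample(range(n_items), items_per_txn))
            if items & locked:
                rollbacks += 1
            else:
                locked |= items
    return rollbacks
```

With no concurrency there are no conflicts, so the estimate drops to zero; raising `concurrency` or `items_per_txn` drives the count up, which is the behavior an analytical model would be checked against.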

  18. GIS-based Conceptual Database Model for Planetary Geoscientific Mapping

    NASA Astrophysics Data System (ADS)

    van Gasselt, Stephan; Nass, Andrea; Neukum, Gerhard

    2010-05-01

    concerning, e.g., map products (product and cartographic representation), sensor-data products, stratigraphy definitions for each planet (facies, formation, ...), and mapping units. Domains and subtypes, as well as a set of two dozen relationships, define their interaction and allow a high level of constraints that help limit errors through domain and topologic boundary conditions without limiting the ability of the mapper to perform his or her task. The geodatabase model is part of a data model currently under design and development in the context of providing tools and definitions for mapping, cartographic representation, and data exploitation. The database model, as an integral part, is designed for portability with respect to geoscientific mapping tasks in general and can be applied to any GIS project dealing with terrestrial planetary objects. It will be accompanied by definitions and representations at the cartographic level, as well as tools and utilities providing easily accessible workflows focusing on the query, organization, maintenance, and integration of planetary data and meta-information. The data model's layout is modularized, with individual components dealing with symbol representation (geology and geomorphology), metadata accessibility and modification, definition of stratigraphic entities and their relationships as well as attribute domains, extensions for planetary mapping and analysis tasks, and integration of data information at the level of vector representations for easily accessible querying and data processing in connection with ISIS/GDAL.

  19. Imprecision and Uncertainty in the UFO Database Model.

    ERIC Educational Resources Information Center

    Van Gyseghem, Nancy; De Caluwe, Rita

    1998-01-01

    Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…

  20. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The Human Thermal Database project is intended to do just that: to collect relevant data from the literature and from experimentation, and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, which has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey spanning 1953 to 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.

  1. Flood forecasting for River Mekong with data-based models

    NASA Astrophysics Data System (ADS)

    Shahzad, Khurram M.; Plate, Erich J.

    2014-09-01

    In many regions of the world, the task of flood forecasting is made difficult because only a limited database is available for generating a suitable forecast model. This paper demonstrates that in such cases parsimonious data-based hydrological models for flood forecasting can be developed if the special conditions of climate and topography are used to advantage. As an example, the middle reach of River Mekong in South East Asia is considered, where a database of discharges from seven gaging stations on the river and 31 rainfall stations on the subcatchments between gaging stations is available for model calibration. Special conditions existing for River Mekong are identified and used in developing first a network connecting all discharge gages and then models for forecasting discharge increments between gaging stations. Our final forecast model (Model 3) is a linear combination of two structurally different basic models: a model (Model 1) using linear regressions for forecasting discharge increments, and a model (Model 2) using rainfall-runoff models. Although the model based on linear regressions works reasonably well for short times, better results are obtained with rainfall-runoff modeling. However, forecast accuracy of Model 2 is limited by the quality of rainfall forecasts. For best results, both models are combined by taking weighted averages to form Model 3. Model quality is assessed by means of both persistence index PI and standard deviation of forecast error.
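The combination step and the persistence-index assessment described above can be sketched in a few lines. PI compares a model's mean squared error against a naive persistence forecast (tomorrow equals today); this is a generic illustration of the metric and the weighted average, not the authors' code, and the weight choice is an assumption.

```python
def persistence_index(obs, pred, lead):
    """PI = 1 - MSE(model) / MSE(persistence), where persistence
    forecasts obs[t] = obs[t - lead]. PI = 1 is a perfect forecast;
    PI <= 0 means no better than persistence."""
    model_se = [(o - p) ** 2 for o, p in zip(obs[lead:], pred[lead:])]
    persist_se = [(obs[t] - obs[t - lead]) ** 2
                  for t in range(lead, len(obs))]
    return 1 - sum(model_se) / sum(persist_se)

def combine(pred1, pred2, w):
    """Model-3-style linear combination of two forecasts."""
    return [w * a + (1 - w) * b for a, b in zip(pred1, pred2)]
```

In practice the weight `w` would itself be calibrated, e.g. to minimize forecast error on a validation period.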

  2. Performance modeling for large database systems

    NASA Astrophysics Data System (ADS)

    Schaar, Stephen; Hum, Frank; Romano, Joe

    1997-02-01

    One of the unique approaches Science Applications International Corporation took to meet performance requirements was to start the modeling effort during the proposal phase of the Interstate Identification Index/Federal Bureau of Investigations (III/FBI) project. The III/FBI Performance Model uses analytical modeling techniques to represent the III/FBI system. Inputs to the model include workloads for each transaction type, record size for each record type, number of records for each file, hardware envelope characteristics, engineering margins and estimates for software instructions, memory, and I/O for each transaction type. The model uses queuing theory to calculate the average transaction queue length. The model calculates a response time and the resources needed for each transaction type. Outputs of the model include the total resources needed for the system, a hardware configuration, and projected inherent and operational availability. The III/FBI Performance Model is used to evaluate what-if scenarios and allows a rapid response to engineering change proposals and technical enhancements.
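The queue-length and response-time calculation mentioned above typically rests on standard queuing results. As a minimal illustration (an assumption about the model's internals, not the III/FBI model itself), the M/M/1 formulas give utilization, mean number in system, and mean response time from arrival and service rates:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 results: utilization rho, mean number in
    system L = rho/(1 - rho), and mean response time
    W = 1/(mu - lambda) (Little's law: L = lambda * W)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable system: rho >= 1")
    rho = arrival_rate / service_rate
    L = rho / (1 - rho)
    W = 1 / (service_rate - arrival_rate)
    return rho, L, W
```

For example, 5 transactions/s against a 10 transactions/s server gives 50% utilization, one transaction in the system on average, and a 0.2 s mean response time.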

  3. Examining the Factors That Contribute to Successful Database Application Implementation Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Nworji, Alexander O.

    2013-01-01

    Most organizations spend millions of dollars due to the impact of improperly implemented database application systems as evidenced by poor data quality problems. The purpose of this quantitative study was to use, and extend, the technology acceptance model (TAM) to assess the impact of information quality and technical quality factors on database…

  4. A Database Model for Medical Consultation.

    ERIC Educational Resources Information Center

    Anvari, Morteza

    1991-01-01

    Describes a relational data model that can be used for knowledge representation and manipulation in rule-based medical consultation systems. Fuzzy queries or attribute values and fuzzy set theory are discussed, functional dependencies are described, and an example is presented of a system for diagnosing causes of eye inflammation. (15 references)…

  5. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Cognata, T.; Bue, G.; Makinen, J.

    2011-01-01

    The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response in a variety of situations, including those that might otherwise prove too dangerous for actual testing, such as extreme hot or cold splashdown conditions. The set includes the Wissler human thermal model, which has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects primarily in air, drawn from a literature survey spanning 1953 to 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models. Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The Human Thermal Database project is intended to do just that: to collect relevant data from the literature and from experimentation, and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, to identify model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality.

  6. Spatial-temporal database model based on geodatabase

    NASA Astrophysics Data System (ADS)

    Zhu, Hongmei; Luo, Yu

    2009-10-01

    Entities in the real world have non-spatial attributes as well as spatial and temporal features. A spatial-temporal data model aims to describe these intrinsic characteristics appropriately and to model them on a conceptual level, so that the model can present both static information and dynamic information that changes over time. In this paper, we devise a novel spatial-temporal data model based on Geodatabase. The model employs an object-oriented analysis method, combining the object concept with events. Each entity is defined as a feature class encapsulating attributes and operations; the operations detect changes and store them automatically in a historical database within the Geodatabase. Furthermore, the model takes advantage of the existing strengths of the relational database at the bottom level of the Geodatabase, such as triggers and constraints, to monitor events on attributes or locations and respond to them correctly. A geographic database for the Kunming municipal sewerage geographic information system was implemented using the model. The database shows excellent performance in managing data and tracking the details of change, providing a solid data platform for querying, reviewing history, and predicting future trends. The instance demonstrates that the spatial-temporal data model is efficient and practicable.
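The trigger-based change tracking described above can be sketched at the relational level with SQLite (a stand-in here; the paper uses Geodatabase's underlying RDBMS, and the table and column names below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pipe (id INTEGER PRIMARY KEY, status TEXT);
CREATE TABLE pipe_history (
    pipe_id INTEGER, old_status TEXT, new_status TEXT,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP);
-- the trigger stores every attribute change automatically,
-- mirroring the model's historical database
CREATE TRIGGER pipe_track AFTER UPDATE OF status ON pipe
BEGIN
    INSERT INTO pipe_history (pipe_id, old_status, new_status)
    VALUES (OLD.id, OLD.status, NEW.status);
END;
""")
conn.execute("INSERT INTO pipe VALUES (1, 'ok')")
conn.execute("UPDATE pipe SET status = 'leaking' WHERE id = 1")
history = conn.execute(
    "SELECT old_status, new_status FROM pipe_history").fetchall()
```

Every update to `pipe.status` lands a row in `pipe_history`, so querying past states or replaying the change history needs no application-side bookkeeping.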

  7. BioProject and BioSample databases at NCBI: facilitating capture and organization of metadata.

    PubMed

    Barrett, Tanya; Clark, Karen; Gevorgyan, Robert; Gorelenkov, Vyacheslav; Gribov, Eugene; Karsch-Mizrachi, Ilene; Kimelman, Michael; Pruitt, Kim D; Resenchuk, Sergei; Tatusova, Tatiana; Yaschenko, Eugene; Ostell, James

    2012-01-01

    As the volume and complexity of data sets archived at NCBI grow rapidly, so does the need to gather and organize the associated metadata. Although metadata has been collected for some archival databases, previously, there was no centralized approach at NCBI for collecting this information and using it across databases. The BioProject database was recently established to facilitate organization and classification of project data submitted to NCBI, EBI and DDBJ databases. It captures descriptive information about research projects that result in high volume submissions to archival databases, ties together related data across multiple archives and serves as a central portal by which to inform users of data availability. Concomitantly, the BioSample database is being developed to capture descriptive information about the biological samples investigated in projects. BioProject and BioSample records link to corresponding data stored in archival repositories. Submissions are supported by a web-based Submission Portal that guides users through a series of forms for input of rich metadata describing their projects and samples. Together, these databases offer improved ways for users to query, locate, integrate and interpret the masses of data held in NCBI's archival repositories. The BioProject and BioSample databases are available at http://www.ncbi.nlm.nih.gov/bioproject and http://www.ncbi.nlm.nih.gov/biosample, respectively.

  8. Content-Based Search on a Database of Geometric Models: Identifying Objects of Similar Shape

    SciTech Connect

    XAVIER, PATRICK G.; HENRY, TYSON R.; LAFARGE, ROBERT A.; MEIRANS, LILITA; RAY, LAWRENCE P.

    2001-11-01

    The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database maybe searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally from CAD models while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.

  9. 3MdB: the Mexican Million Models database

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.

    2014-10-01

    The 3MdB is an original effort to construct a large multipurpose database of photoionization models. It is a more modern version of a previous attempt based on Cloudy3D and IDL tools, and is accessed via MySQL requests. The models are obtained using the well-known and widely used Cloudy photoionization code (Ferland et al., 2013). The database is designed to host grids of models, with distinct references identifying each project and facilitating the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can ask for a grid to be run and stored in 3MdB, to increase the visibility of the grid and its potential side applications.

  10. Visual Analysis of Residuals from Data-Based Models in Complex Industrial Processes

    NASA Astrophysics Data System (ADS)

    Ordoñez, Daniel G.; Cuadrado, Abel A.; Díaz, Ignacio; García, Francisco J.; Díez, Alberto B.; Fuertes, Juan J.

    2012-10-01

    The use of data-based models for visualization purposes in an industrial background is discussed. Results using Self-Organizing Maps (SOM) show how through a good design of the model and a proper visualization of the residuals generated by the model itself, the behavior of essential parameters of the process can be easily tracked in a visual way. Real data from a cold rolling facility have been used to prove the advantages of these techniques.

  11. Materials Database Development for Ballistic Impact Modeling

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael

    2007-01-01

    A set of experimental data is being generated under the Fundamental Aeronautics Program Supersonics project to help create and validate accurate computational impact models of jet engine impact events. The data generated will include material property data generated at a range of different strain rates, from 1×10^-4/s to 5×10^4/s, over a range of temperatures. In addition, carefully instrumented ballistic impact tests will be conducted on flat plates and curved structures to provide material and structural response information to help validate the computational models. The material property data and the ballistic impact data will be generated using materials from the same lot, as far as possible. It was found in preliminary testing that the surface finish of test specimens has an effect on measured high strain rate tension response of AL2024. Both the maximum stress and maximum elongation are greater on specimens with a smoother finish. This report gives an overview of the testing that is being conducted and presents results of preliminary testing of the surface finish study.

  12. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems are networks. When estimating the network structure from measured signals, typically several assumptions such as stationarity are made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments of transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understand and diagnose numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.

  13. VIDA: a virus database system for the organization of animal virus genome open reading frames.

    PubMed

    Albà, M M; Lee, D; Pearl, F M; Shepherd, A J; Martin, N; Orengo, C A; Kellam, P

    2001-01-01

    VIDA is a new virus database that organizes open reading frames (ORFs) from partial and complete genomic sequences from animal viruses. Currently VIDA includes all sequences from GenBank for Herpesviridae, Coronaviridae and Arteriviridae. The ORFs are organized into homologous protein families, which are identified on the basis of sequence similarity relationships. Conserved sequence regions of potential functional importance are identified and can be retrieved as sequence alignments. We use a controlled taxonomical and functional classification for all the proteins and protein families in the database. When available, protein structures that are related to the families have also been included. The database is available for online search and sequence information retrieval at http://www.biochem.ucl.ac.uk/bsm/virus_database/VIDA.html.

  14. VIDA: a virus database system for the organization of animal virus genome open reading frames

    PubMed Central

    Albà, M. Mar; Lee, David; Pearl, Frances M. G.; Shepherd, Adrian J.; Martin, Nigel; Orengo, Christine A.; Kellam, Paul

    2001-01-01

    VIDA is a new virus database that organizes open reading frames (ORFs) from partial and complete genomic sequences from animal viruses. Currently VIDA includes all sequences from GenBank for Herpesviridae, Coronaviridae and Arteriviridae. The ORFs are organized into homologous protein families, which are identified on the basis of sequence similarity relationships. Conserved sequence regions of potential functional importance are identified and can be retrieved as sequence alignments. We use a controlled taxonomical and functional classification for all the proteins and protein families in the database. When available, protein structures that are related to the families have also been included. The database is available for online search and sequence information retrieval at http://www.biochem.ucl.ac.uk/bsm/virus_database/VIDA.html. PMID:11125070

  15. Database integration in a multimedia-modeling environment

    SciTech Connect

    Dorow, Kevin E.

    2002-09-02

    Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that have to be transferred is kept to a minimum (only the data that fulfill a specific request are provided, as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge: the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.

  16. An Approach to Query Cost Modelling in Numeric Databases.

    ERIC Educational Resources Information Center

    Jarvelin, Kalervo

    1989-01-01

    Examines factors that determine user charges based on query processing costs in numeric databases, and analyzes the problem of estimating such charges in advance. An approach to query cost estimation is presented which is based on the relational data model and the query optimization, cardinality estimation, and file design techniques developed in…

  17. Technical Work Plan for: Thermodynamic Database for Chemical Modeling

    SciTech Connect

    C.F. Jovecolon

    2006-09-07

    The objective of the work scope covered by this Technical Work Plan (TWP) is to correct and improve the Yucca Mountain Project (YMP) thermodynamic databases, to update their documentation, and to ensure reasonable consistency among them. In addition, the work scope will continue to generate database revisions, which are organized and named so as to be transparent to internal and external users and reviewers. Regarding consistency among databases, it is noted that aqueous speciation and mineral solubility data for a given system may differ according to how solubility was determined, and the method used for subsequent retrieval of thermodynamic parameter values from measured data. Of particular concern are the details of the determination of "infinite dilution" constants, which involve the use of specific methods for activity coefficient corrections. That is, equilibrium constants developed for a given system for one set of conditions may not be consistent with constants developed for other conditions, depending on the species considered in the chemical reactions and the methods used in the reported studies. Hence, there will be some differences (for example in log K values) between the Pitzer and "B-dot" database parameters for the same reactions or species.

  18. NGNP Risk Management Database: A Model for Managing Risk

    SciTech Connect

    John Collins

    2009-09-01

    To facilitate the implementation of the Risk Management Plan, the Next Generation Nuclear Plant (NGNP) Project has developed and employed an analytical software tool called the NGNP Risk Management System (RMS). A relational database developed in Microsoft® Access, the RMS provides conventional database utility including data maintenance, archiving, configuration control, and query ability. Additionally, the tool’s design provides a number of unique capabilities specifically designed to facilitate the development and execution of activities outlined in the Risk Management Plan. Specifically, the RMS provides the capability to establish the risk baseline, document and analyze the risk reduction plan, track the current risk reduction status, organize risks by reference configuration system, subsystem, and component (SSC) and Area, and increase the level of NGNP decision making.

  19. Comparing global soil models to soil carbon profile databases

    NASA Astrophysics Data System (ADS)

    Koven, C. D.; Harden, J. W.; He, Y.; Lawrence, D. M.; Nave, L. E.; O'Donnell, J. A.; Treat, C.; Sulman, B. N.; Kane, E. S.

    2015-12-01

    As global soil models begin to consider the dynamics of carbon below the surface layers, it is crucial to assess the realism of these models. We focus on the vertical profiles of soil C predicted across multiple biomes from the Community Land Model (CLM4.5), using different values for a parameter that controls the rate of decomposition at depth versus at the surface, and compare these to observationally derived diagnostics from the International Soil Carbon Network (ISCN) database to assess the realism of model predictions of carbon depth attenuation, and the ability of observations to constrain rates of decomposition at depth.
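A depth-attenuation parameter of the kind varied above is often implemented as an exponential scaling of the surface decomposition rate. The sketch below is a generic illustration under that assumption (the functional form and names are not taken from CLM4.5):

```python
import math

def decomposition_rate(k0, z, z_tau):
    """Depth-attenuated decomposition rate k(z) = k0 * exp(-z / z_tau).
    z_tau (m) is the tunable e-folding depth: small z_tau means
    decomposition shuts down quickly below the surface."""
    return k0 * math.exp(-z / z_tau)

def steady_state_carbon(inputs, k0, depths, z_tau):
    """Steady-state carbon stock at each depth, C(z) = I(z) / k(z):
    slower decomposition at depth implies larger stocks there."""
    return [i / decomposition_rate(k0, z, z_tau)
            for i, z in zip(inputs, depths)]
```

Comparing such modeled profiles against observed ones (e.g. ISCN-style depth-resolved stocks) for different `z_tau` values is one way observations constrain decomposition rates at depth.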

  20. Artificial intelligence techniques for modeling database user behavior

    NASA Technical Reports Server (NTRS)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  1. Accelerating Information Retrieval from Profile Hidden Markov Model Databases

    PubMed Central

    Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

    Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to representing protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is increasing interest in improving the efficiency of searching profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have focused on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for batch query searching, are strong motivations for further enhancing information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than by modifying the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities, and a representative was assigned for each cluster. To enhance sensitivity, we proposed an extended step that allows overlap among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected-component clustering followed by the extended overlapping step, we obtained an average reduction in time of 41% and an average recall of 96%. Our results demonstrate that representing profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases. PMID:27875548

  2. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    PubMed

    Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

    Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to representing protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is increasing interest in improving the efficiency of searching profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have focused on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for batch query searching, are strong motivations for further enhancing information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than by modifying the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities, and a representative was assigned for each cluster. To enhance sensitivity, we proposed an extended step that allows overlap among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected-component clustering followed by the extended overlapping step, we obtained an average reduction in time of 41% and an average recall of 96%. Our results demonstrate that representing profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
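The cluster-based search-space reduction described above follows a two-stage pattern: compare the query against cluster representatives first, then scan only the members of the closest cluster(s). The sketch below illustrates this with plain vectors and Euclidean distance standing in for profile-HMM comparison scores; it is a schematic of the idea, not the authors' code.

```python
def euclid(a, b):
    """Euclidean distance, standing in for a profile comparison score."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def two_stage_search(query, clusters, dist, top=1):
    """Rank clusters by distance from query to each representative,
    then scan only the members of the `top` closest clusters.
    `clusters` maps a representative profile to its member list."""
    ranked = sorted(clusters, key=lambda rep: dist(query, rep))
    best, best_d = None, float("inf")
    for rep in ranked[:top]:
        for member in clusters[rep]:
            d = dist(query, member)
            if d < best_d:
                best, best_d = member, d
    return best
```

Raising `top`, or letting profiles belong to several clusters (the paper's extended overlapping step), trades back some speed for recall when the true hit sits near a cluster boundary.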

  3. INTERCOMPARISON OF ALTERNATIVE VEGETATION DATABASES FOR REGIONAL AIR QUALITY MODELING

    EPA Science Inventory

    Vegetation cover data are used to characterize several regional air quality modeling processes, including the calculation of heat, moisture, and momentum fluxes with the Mesoscale Meteorological Model (MM5) and the estimate of biogenic volatile organic compound and nitric oxide...

  4. SAPling: a Scan-Add-Print barcoding database system to label and track asexual organisms

    PubMed Central

    Thomas, Michael A.; Schötz, Eva-Maria

    2011-01-01

    SUMMARY We have developed a ‘Scan-Add-Print’ database system, SAPling, to track and monitor asexually reproducing organisms. Using barcodes to uniquely identify each animal, we can record information on the life of the individual in a computerized database containing its entire family tree. SAPling has enabled us to carry out large-scale population dynamics experiments with thousands of planarians and keep track of each individual. The database stores information such as family connections, birth date, division date and generation. We show that SAPling can be easily adapted to other asexually reproducing organisms and has a strong potential for use in large-scale and/or long-term population and senescence studies as well as studies of clonal diversity. The software is platform-independent, designed for reliability and ease of use, and provided open source from our webpage to allow project-specific customization. PMID:21993779
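The core of a barcode-keyed lineage database like the one described can be sketched simply: each record stores its parent barcode, and the family tree is recovered by walking parent links. The class and field names below are illustrative, not SAPling's actual schema.

```python
# Minimal sketch of a barcode-keyed lineage database in the spirit of
# SAPling (field names are illustrative, not SAPling's actual schema).
import datetime

class LineageDB:
    def __init__(self):
        self.records = {}  # barcode -> record dict

    def add(self, barcode, parent=None, birth=None):
        """Register an animal; generation is derived from its parent."""
        gen = 0 if parent is None else self.records[parent]["generation"] + 1
        self.records[barcode] = {
            "parent": parent,
            "birth": birth or datetime.date.today().isoformat(),
            "generation": gen,
        }

    def ancestry(self, barcode):
        """Walk parent links back to the founder of the clonal line."""
        line = [barcode]
        while self.records[line[-1]]["parent"] is not None:
            line.append(self.records[line[-1]]["parent"])
        return line

db = LineageDB()
db.add("P001", birth="2011-01-01")   # founder animal
db.add("P002", parent="P001")        # product of a division
db.add("P003", parent="P002")
```

Because every individual carries a pointer to its parent, queries such as clonal diversity or per-generation division rates reduce to simple traversals of this structure.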

  5. Using the Cambridge Structural Database to Teach Molecular Geometry Concepts in Organic Chemistry

    ERIC Educational Resources Information Center

    Wackerly, Jay Wm.; Janowicz, Philip A.; Ritchey, Joshua A.; Caruso, Mary M.; Elliott, Erin L.; Moore, Jeffrey S.

    2009-01-01

    This article reports a set of two homework assignments that can be used in a second-year undergraduate organic chemistry class. These assignments were designed to help reinforce concepts of molecular geometry and to give students the opportunity to use a technological database and data mining to analyze experimentally determined chemical…

  6. ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database

    SciTech Connect

    Parisien, Lia

    2016-01-31

    This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.

  7. Fitting the Balding-Nichols model to forensic databases.

    PubMed

    Rohlfs, Rori V; Aguiar, Vitor R C; Lohmueller, Kirk E; Castro, Amanda M; Ferreira, Alessandro C S; Almeida, Vanessa C O; Louro, Iuri D; Nielsen, Rasmus

    2015-11-01

    Large forensic databases provide an opportunity to compare observed empirical rates of genotype matching with those expected under forensic genetic models. A number of researchers have taken advantage of this opportunity to validate some forensic genetic approaches, particularly to ensure that estimated rates of genotype matching between unrelated individuals are indeed slight overestimates of those observed. However, these studies have also revealed systematic error trends in genotype probability estimates. In this analysis, we investigate these error trends and show how they result from inappropriate implementation of the Balding-Nichols model in the context of database-wide matching. Specifically, we show that in addition to accounting for increased allelic matching between individuals with recent shared ancestry, studies must account for relatively decreased allelic matching between individuals with more ancient shared ancestry.
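The Balding-Nichols correction referred to here is the standard set of theta-adjusted conditional genotype probabilities (NRC II recommendations 4.10a/4.10b): the probability that a second individual shares a genotype already observed, given a coancestry coefficient theta.

```python
# Match probabilities under the Balding-Nichols model: the standard
# theta-corrected conditional genotype probabilities (NRC II 4.10a/4.10b).

def homozygote_match(p, theta):
    """P(AA | AA observed) for an allele of frequency p."""
    return ((2 * theta + (1 - theta) * p) *
            (3 * theta + (1 - theta) * p)) / ((1 + theta) * (1 + 2 * theta))

def heterozygote_match(p, q, theta):
    """P(AB | AB observed) for alleles of frequencies p and q."""
    return (2 * (theta + (1 - theta) * p) *
            (theta + (1 - theta) * q)) / ((1 + theta) * (1 + 2 * theta))

# With theta = 0 these reduce to the Hardy-Weinberg values p**2 and 2*p*q;
# a positive theta inflates them, reflecting recent shared ancestry.
hw = homozygote_match(0.1, 0.0)         # Hardy-Weinberg baseline, 0.01
corrected = homozygote_match(0.1, 0.03)  # larger than the baseline
```

The error trend the authors describe arises because database-wide matching compares all pairs at once: the same theta that correctly inflates match probabilities for recently related pairs can misstate them for pairs whose shared ancestry is more ancient.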

  8. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    SciTech Connect

    Hagan, Ross F.

    2016-08-29

This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open-source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight into the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  9. ASGARD: an open-access database of annotated transcriptomes for emerging model arthropod species.

    PubMed

    Zeng, Victor; Extavour, Cassandra G

    2012-01-01

The increased throughput and decreased cost of next-generation sequencing (NGS) have shifted the bottleneck in genomic research from sequencing to annotation, analysis, and accessibility. This is particularly challenging for research communities working on organisms that lack the basic infrastructure of a sequenced genome, or an efficient way to utilize whatever sequence data may be available. Here we present a new database, the Assembled Searchable Giant Arthropod Read Database (ASGARD). This database is a repository and search engine for transcriptomic data from arthropods that are of high interest to multiple research communities but currently lack sequenced genomes. We demonstrate the functionality and utility of ASGARD using de novo assembled transcriptomes from the milkweed bug Oncopeltus fasciatus, the cricket Gryllus bimaculatus and the amphipod crustacean Parhyale hawaiensis. We have annotated these transcriptomes to assign putative orthology, coding region determination, protein domain identification and Gene Ontology (GO) term annotation to all possible assembly products. ASGARD allows users to search all assemblies by orthology annotation, GO term annotation or Basic Local Alignment Search Tool. User-friendly features of ASGARD include search term auto-completion suggestions based on database content, the ability to download assembly product sequences in FASTA format, direct links to NCBI data for predicted orthologs and graphical representation of the location of protein domains and matches to similar sequences from the NCBI non-redundant database. ASGARD will be a useful repository for transcriptome data from future NGS studies on these and other emerging model arthropods, regardless of sequencing platform, assembly or annotation status. This database thus provides easy, one-stop access to multi-species annotated transcriptome information.
We anticipate that this database will be useful for members of multiple research communities, including developmental

  10. MtDB: a database for personalized data mining of the model legume Medicago truncatula transcriptome.

    PubMed

    Lamblin, Anne-Françoise J; Crow, John A; Johnson, James E; Silverstein, Kevin A T; Kunau, Timothy M; Kilian, Alan; Benz, Diane; Stromvik, Martina; Endré, Gabriella; VandenBosch, Kathryn A; Cook, Douglas R; Young, Nevin D; Retzel, Ernest F

    2003-01-01

    In order to identify the genes and gene functions that underlie key aspects of legume biology, researchers have selected the cool season legume Medicago truncatula (Mt) as a model system for legume research. A set of >170 000 Mt ESTs has been assembled based on in-depth sampling from various developmental stages and pathogen-challenged tissues. MtDB is a relational database that integrates Mt transcriptome data and provides a wide range of user-defined data mining options. The database is interrogated through a series of interfaces with 58 options grouped into two filters. In addition, the user can select and compare unigene sets generated by different assemblers: Phrap, Cap3 and Cap4. Sequence identifiers from all public Mt sites (e.g. IDs from GenBank, CCGB, TIGR, NCGR, INRA) are fully cross-referenced to facilitate comparisons between different sites, and hypertext links to the appropriate database records are provided for all queries' results. MtDB's goal is to provide researchers with the means to quickly and independently identify sequences that match specific research interests based on user-defined criteria. The underlying database and query software have been designed for ease of updates and portability to other model organisms. Public access to the database is at http://www.medicago.org/MtDB.

  11. Organizing the Extremely Large LSST Database for Real-Time Astronomical Processing

    SciTech Connect

    Becla, Jacek; Lim, Kian-Tat; Monkewitz, Serge; Nieto-Santisteban, Maria; Thakar, Ani; /Johns Hopkins U.

    2007-11-07

    The Large Synoptic Survey Telescope (LSST) will catalog billions of astronomical objects and trillions of sources, all of which will be stored and managed by a database management system. One of the main challenges is real-time alert generation. To generate alerts, up to 100K new difference detections have to be cross-correlated with the huge historical catalogs, and then further processed to prune false alerts. This paper explains the challenges, the implementation of the LSST Association Pipeline and the database organization strategies we are planning to use to meet the real-time requirements, including data partitioning, parallelization, and pre-loading.
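One common way to make such cross-correlation feasible in real time is the partitioning the abstract alludes to: bucket the historical catalog into declination zones so each new detection only scans a few neighboring zones instead of the whole sky. The sketch below is a generic illustration of that strategy; the zone height, data layout, and flat-sky distance test are simplifying assumptions, not the LSST pipeline's actual implementation.

```python
# Sketch of zone-based spatial partitioning for catalog cross-matching
# (zone height and data layout here are illustrative assumptions).

def build_zones(catalog, zone_height_deg=0.5):
    """Index historical objects into declination zones so that a new
    detection only needs to scan two or three neighboring zones."""
    zones = {}
    for obj_id, ra, dec in catalog:
        zones.setdefault(int(dec // zone_height_deg), []).append((obj_id, ra, dec))
    return zones

def associate(det, zones, radius_deg, zone_height_deg=0.5):
    """Return candidate matches for a detection within radius_deg,
    using a simple flat-sky distance (adequate for small radii)."""
    ra, dec = det
    z = int(dec // zone_height_deg)
    candidates = []
    for zi in (z - 1, z, z + 1):          # only adjacent zones are scanned
        for obj_id, ora, odec in zones.get(zi, []):
            if (ra - ora) ** 2 + (dec - odec) ** 2 <= radius_deg ** 2:
                candidates.append(obj_id)
    return candidates

catalog = [("obj1", 10.000, 20.000), ("obj2", 10.3, 20.4), ("obj3", 180.0, -45.0)]
zones = build_zones(catalog)
matches = associate((10.001, 20.001), zones, radius_deg=0.01)
```

Partitioning like this also parallelizes naturally: disjoint zone ranges can be assigned to different database nodes, which is the essence of the data-partitioning and parallelization strategies the paper discusses.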

  12. GADB: A database facility for modelling naturally occurring geophysical fields

    NASA Technical Reports Server (NTRS)

    Dampney, C. N. G.

    1983-01-01

In certain kinds of geophysical surveys, the fields are continua, but they are measured at discrete points referenced by their position or time of measurement. Systems of this kind are better modelled by databases built from basic data structures attuned to representing traverses across continua that are not of pre-defined fixed length. The General Array DataBase (GADB) is built on arrays (ordered sequences of data), with each array holding data elements of one type. The arrays each occupy their own physical data set, in turn inter-related by a hierarchy to other arrays over the same space/time reference points. The GADB illustrates the principle that a database facility should reflect the fundamental properties of its data and support retrieval based on the application's view. The GADB is being tested by its use in NASA's project MAGSAT.
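The array-per-type, hierarchy-over-shared-reference-points structure can be sketched as follows; class and field names are illustrative inventions, not GADB's actual interface.

```python
# Sketch of the GADB idea: one typed array per physical data set,
# grouped by a traverse that fixes the shared space/time reference points
# (class and field names are illustrative, not GADB's actual interface).

class TypedArray:
    """An ordered sequence of values of a single type, open-ended in length."""
    def __init__(self, name, dtype):
        self.name, self.dtype, self.values = name, dtype, []

    def append(self, v):
        if not isinstance(v, self.dtype):
            raise TypeError(f"{self.name} stores {self.dtype.__name__} only")
        self.values.append(v)

class Traverse:
    """A group of typed arrays sharing the same space/time reference points."""
    def __init__(self, reference_points):
        self.reference_points = reference_points
        self.arrays = {}

    def add_array(self, name, dtype, values):
        arr = TypedArray(name, dtype)
        for v in values:
            arr.append(v)
        self.arrays[name] = arr
        return arr

# A magnetometer traverse: field readings keyed to measurement times.
trav = Traverse(reference_points=[0.0, 1.0, 2.0])   # seconds along track
trav.add_array("field_nT", float, [48012.5, 48013.1, 48011.9])
```

Because every array in a traverse is indexed by the same reference points, retrieval "based on the application's view" amounts to slicing all arrays by a range of positions or times at once.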

  13. NGNP Risk Management Database: A Model for Managing Risk

    SciTech Connect

    John Collins; John M. Beck

    2011-11-01

The Next Generation Nuclear Plant (NGNP) Risk Management System (RMS) is a database used to maintain the project risk register. The RMS also maps risk reduction activities to specific identified risks. Further functionality of the RMS includes mapping reactor suppliers' Design Data Needs (DDNs) to risk reduction tasks and mapping Phenomena Identification and Ranking Tables (PIRTs) to associated risks. This document outlines the basic instructions on how to use the RMS. This document constitutes Revision 1 of the NGNP Risk Management Database: A Model for Managing Risk. It incorporates the latest enhancements to the RMS. The enhancements include six new custom views of risk data: Impact/Consequence, Tasks by Project Phase, Tasks by Status, Tasks by Project Phase/Status, Tasks by Impact/WBS, and Tasks by Phase/Impact/WBS.

  14. Organization's Orderly Interest Exploration: Inception, Development and Insights of AIAA's Topics Database

    NASA Technical Reports Server (NTRS)

    Marshall, Joseph R.; Morris, Allan T.

    2007-01-01

Since 2003, AIAA's Computer Systems and Software Systems Technical Committees (TCs) have developed a database that helps technical committee management map technical topics to their members. This Topics/Interest (T/I) database grew out of a collection of charts and spreadsheets maintained by the TCs. Since its inception, the tool has evolved into a multi-dimensional database whose dimensions include the importance, interest, and expertise of TC members and whether or not a member and/or a TC is actively involved with a topic. In 2005, the database was expanded to include the TCs in AIAA's Information Systems Group and then expanded further to include all AIAA TCs. It was field tested at an AIAA Technical Activities Committee (TAC) Workshop in early 2006 through live access by over 80 users. Through the use of the topics database, TC and program committee (PC) members can accomplish relevant tasks such as identifying topic experts (for Aerospace America articles or external contacts), determining the interests of members, identifying overlapping topics between diverse TCs and PCs, guiding new member drives, and revealing emerging topics. This paper describes the origins, inception, initial development, field test, and current version of the tool, and elucidates the benefits and insights gained by using the database to aid the management of various TC functions. Suggestions are provided to guide future development of the database toward dynamic, system-level benefits to AIAA that do not currently exist in any technical organization.

  15. Teaching biology with model organisms

    NASA Astrophysics Data System (ADS)

    Keeley, Dolores A.

The purpose of this study is to identify and use model organisms that represent each of the kingdoms biologists use to classify organisms, while experiencing the process of science through guided inquiry. The model organisms are the basis for studying the four high school life science core ideas identified by the Next Generation Science Standards (NGSS): LS1-From Molecules to Organisms, LS2-Ecosystems, LS3-Heredity, and LS4-Biological Evolution. The NGSS also identify categories of science and engineering practices, which include developing and using models and planning and carrying out investigations. The living organisms are utilized to increase student interest and knowledge within the discipline of biology. Pre-test and post-test analysis using Student's t-test supported the hypothesis. This study shows increased student learning as a result of using living organisms as models for classification and working in an inquiry-based learning environment.

  16. Post-transplant lymphoproliferative disorder after pancreas transplantation: a United Network for Organ Sharing database analysis.

    PubMed

    Jackson, K; Ruppert, K; Shapiro, R

    2013-01-01

There is not a great deal of data on post-transplant lymphoproliferative disorder (PTLD) following pancreas transplantation. We analyzed the United Network for Organ Sharing national database of pancreas transplants to identify predictors of PTLD development. A univariate Cox model was generated for each potential predictor, and those at least marginally associated (p < 0.15) with PTLD were entered into a multivariable Cox model. PTLD developed in 43 patients (1.0%) of 4205 pancreas transplants. Mean follow-up time was 4.9 ± 2.2 yr. In the multivariable Cox model, recipient EBV seronegativity (HR 5.52, 95% CI: 2.99-10.19, p < 0.001), not having tacrolimus in the immunosuppressive regimen (HR 6.02, 95% CI: 2.74-13.19, p < 0.001), recipient age (HR 0.96, 95% CI: 0.92-0.99, p = 0.02), non-white ethnicity (HR 0.11, 95% CI: 0.02-0.84, p = 0.03), and HLA mismatching (HR 0.80, 95% CI: 0.67-0.97, p = 0.02) were significantly associated with the development of PTLD. Patient survival was significantly decreased in patients with PTLD, with one-, three-, and five-yr survival of 91%, 76%, and 70%, compared with 97%, 93%, and 88% in patients without PTLD (p < 0.001). PTLD is an uncommon but potentially lethal complication following pancreas transplantation. Patients with the risk factors identified should be monitored closely for the development of PTLD.
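Survival percentages like the one-, three-, and five-year figures quoted above are typically product-limit (Kaplan-Meier) estimates. A minimal pure-Python version of the standard estimator, on a toy cohort rather than the study's data:

```python
# Minimal Kaplan-Meier (product-limit) estimator, the standard method
# behind survival percentages like those quoted in the abstract.

def kaplan_meier(times, events):
    """times: follow-up time per patient; events: 1 = death, 0 = censored.
    Returns the survival curve as a list of (time, survival) steps."""
    at_risk = len(times)
    surv, steps = 1.0, []
    for t, e in sorted(zip(times, events)):
        if e:                      # a death at time t steps the curve down
            surv *= (at_risk - 1) / at_risk
            steps.append((t, surv))
        at_risk -= 1               # deaths and censored cases both leave the risk set
    return steps

# Toy cohort: deaths at years 1 and 3, censoring at years 2 and 5.
steps = kaplan_meier([1, 2, 3, 5], [1, 0, 1, 0])
```

Censored patients contribute person-time up to their last follow-up without forcing the curve down, which is why the estimator, rather than a raw proportion, is used when follow-up is incomplete.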

  17. An atmospheric tritium release database for model comparisons. Revision 1

    SciTech Connect

    Murphy, C.E. Jr.; Wortham, G.R.

    1995-01-01

A database of vegetation, soil, and air tritium concentrations at gridded coordinate locations following nine accidental atmospheric releases is described. While none of the releases caused a significant dose to the public, the data collected are valuable for comparison with the results of tritium transport models used for risk assessment. The largest potential individual off-site dose from any of the releases was calculated to be 1.6 mrem. The population dose from this same release was 46 person-rem, which represents 0.04% of the natural background radiation dose to the population in the path of the release.

  18. Database and Interim Glass Property Models for Hanford HLW Glasses

    SciTech Connect

    Hrma, Pavel R.; Piepel, Gregory F.; Vienna, John D.; Cooley, Scott K.; Kim, Dong-Sang; Russell, Renee L.

    2001-07-24

The purpose of this report is to provide a methodology for an increase in the efficiency and a decrease in the cost of vitrifying high-level waste (HLW) by optimizing HLW glass formulation. This methodology consists of collecting and generating a database of glass properties that determine HLW glass processability and acceptability, and relating these properties to glass composition. The report explains how the property-composition models are developed, fitted to data, used for glass formulation optimization, and continuously updated in response to changes in HLW composition estimates and changes in glass processing technology. Further, the report reviews the glass property-composition literature data and presents their preliminary critical evaluation and screening. Finally, the report provides interim property-composition models for melt viscosity, for liquidus temperature (with spinel and zircon primary crystalline phases), and for the product consistency test normalized releases of B, Na, and Li. Models were fitted to a subset of the screened database deemed most relevant for the current HLW composition region.

  19. On the Perceptual Organization of Image Databases Using Cognitive Discriminative Biplots

    NASA Astrophysics Data System (ADS)

    Theoharatos, Christos; Laskaris, Nikolaos A.; Economou, George; Fotopoulos, Spiros

    2006-12-01

A human-centered approach to image database organization is presented in this study. The management of a generic image database is pursued using a standard psychophysical experimental procedure followed by a well-suited data analysis methodology that is based on simple geometrical concepts. The end result is a cognitive discriminative biplot, which is a visualization of the intrinsic organization of the image database that best reflects the user's perception. The discriminating power of the introduced cognitive biplot constitutes an appealing tool for image retrieval and a flexible interface for visual data mining tasks. These ideas were evaluated in two ways. First, the separability of semantically distinct image classes was measured according to their reduced representations on the biplot. Then, a nearest-neighbor retrieval scheme was run on the emerged low-dimensional terrain to measure the suitability of the biplot for performing content-based image retrieval (CBIR). The achieved organization performance, when compared with that of a contemporary system, was found to be superior. This prompted further discussion of packing these ideas into a realizable algorithmic procedure for an efficient and effective personalized CBIR system.

  20. Developing High-resolution Soil Database for Regional Crop Modeling in East Africa

    NASA Astrophysics Data System (ADS)

    Han, E.; Ines, A. V. M.

    2014-12-01

The most readily available soil data for regional crop modeling in Africa is the World Inventory of Soil Emission potentials (WISE) dataset, which has 1125 soil profiles for the world but does not extensively cover Ethiopia, Kenya, Uganda, and Tanzania in East Africa. Another available dataset is HC27 (Harvest Choice, by IFPRI), in a gridded format (10 km) but composed of generic soil profiles based on only three criteria (texture, rooting depth, and organic carbon content). In this paper, we present the development and application of a high-resolution (1 km), gridded soil database for regional crop modeling in East Africa. Basic soil information is extracted from the Africa Soil Information Service (AfSIS), which provides essential soil properties (bulk density, soil organic carbon, soil pH, and percentages of sand, silt, and clay) for six standardized soil layers (5, 15, 30, 60, 100, and 200 cm) at 1 km resolution. Soil hydraulic properties (e.g., field capacity and wilting point) are derived from the AfSIS soil dataset using well-proven pedo-transfer functions and are customized for DSSAT-CSM soil data requirements. The crop model is used to evaluate crop yield forecasts using the new high-resolution soil database and compared with WISE and HC27. We will also present results of DSSAT loosely coupled with a hydrologic model (VIC) to assimilate root-zone soil moisture. Creating a grid-based soil database that provides a consistent soil input for two different models (DSSAT and VIC) is a critical part of this work. The created soil database is expected to contribute to future applications of DSSAT crop simulation in East Africa, where food security is highly vulnerable.
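A pedo-transfer function of the kind mentioned maps texture and organic matter to hydraulic properties. The linear form below is typical of published PTFs, but the coefficients are placeholders chosen purely for illustration, not a published calibration; real work should use an established PTF such as Saxton-Rawls.

```python
# Illustrative pedo-transfer function: field capacity and wilting point
# from texture and organic carbon. The coefficients are PLACEHOLDERS for
# illustration only, not a published calibration.

def field_capacity(sand, clay, oc):
    """Volumetric water content (cm3/cm3) at -33 kPa. Inputs are mass
    fractions (0-1) of sand and clay and percent organic carbon."""
    return 0.30 - 0.19 * sand + 0.21 * clay + 0.01 * oc   # placeholder fit

def wilting_point(sand, clay, oc):
    """Volumetric water content (cm3/cm3) at -1500 kPa."""
    return 0.09 - 0.05 * sand + 0.25 * clay + 0.005 * oc  # placeholder fit

def available_water(sand, clay, oc):
    """Plant-available water capacity: the gap between the two limits."""
    return field_capacity(sand, clay, oc) - wilting_point(sand, clay, oc)

# Illustrative contrast between a clay-rich and a sandy soil:
awc_clay = available_water(sand=0.2, clay=0.5, oc=1.5)
awc_sand = available_water(sand=0.8, clay=0.1, oc=0.5)
```

Applied gridcell-by-gridcell to the AfSIS layers, functions of this shape turn the basic soil properties into the DSSAT-CSM hydraulic inputs the paragraph describes.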

  1. Human Exposure Modeling - Databases to Support Exposure Modeling

    EPA Pesticide Factsheets

    Human exposure modeling relates pollutant concentrations in the larger environmental media to pollutant concentrations in the immediate exposure media. The models described here are available on other EPA websites.

  2. Modeling, Measurements, and Fundamental Database Development for Nonequilibrium Hypersonic Aerothermodynamics

    NASA Technical Reports Server (NTRS)

    Bose, Deepak

    2012-01-01

The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas-phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in modeling nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation against relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.

  3. A Thermal Model Preprocessor For Graphics And Material Database Generation

    NASA Astrophysics Data System (ADS)

    Jones, Jack C.; Gonda, Teresa G.

    1989-08-01

    The process of developing a physical description of a target for thermal models is a time consuming and tedious task. The problem is one of data collection, data manipulation, and data storage. Information on targets can come from many sources and therefore could be in any form (2-D drawings, 3-D wireframe or solid model representations, etc.). TACOM has developed a preprocessor that decreases the time involved in creating a faceted target representation. This program allows the user to create the graphics for the vehicle and to assign the material properties to the graphics. The vehicle description file is then automatically generated by the preprocessor. By containing all the information in one database, the modeling process is made more accurate and data tracing can be done easily. A bridge to convert other graphics packages (such as BRL-CAD) to a faceted representation is being developed. When the bridge is finished, this preprocessor will be used to manipulate the converted data.

  4. The OTP-model applied to the Aklim site database

    NASA Astrophysics Data System (ADS)

    Mraini, Kamilia; Jabiri, Abdelhadi; Benkhaldoun, Zouhair; Bounhir, Aziza; Hach, Youssef; Sabil, Mohammed; Habib, Abdelfettah

    2014-08-01

Within the framework of site prospection for the future European Extremely Large Telescope (E-ELT), a wide site-characterization campaign was carried out. Aklim site, located at an altitude of 2350 m at the geographical coordinates lat. = 30°07'38" N, long. = 8°18'31" W in the Moroccan Middle Atlas Mountains, was one of the candidate sites chosen by the Framework Programme VI (FP6) of the European Union. To complement that study ([19]; [21]), we have used ModelOTP (model of optical turbulence profiles), established by [15] and improved by [6]. This model provides profiles of the optical turbulence under various conditions. In this paper, we present an overview of the Aklim database results, in the boundary layer and in the free atmosphere separately, and we make a comparison with the Cerro Pachon result [15].

  5. MetRxn: a knowledgebase of metabolites and reactions spanning metabolic models and databases

    PubMed Central

    2012-01-01

    Background Increasingly, metabolite and reaction information is organized in the form of genome-scale metabolic reconstructions that describe the reaction stoichiometry, directionality, and gene to protein to reaction associations. A key bottleneck in the pace of reconstruction of new, high-quality metabolic models is the inability to directly make use of metabolite/reaction information from biological databases or other models due to incompatibilities in content representation (i.e., metabolites with multiple names across databases and models), stoichiometric errors such as elemental or charge imbalances, and incomplete atomistic detail (e.g., use of generic R-group or non-explicit specification of stereo-specificity). Description MetRxn is a knowledgebase that includes standardized metabolite and reaction descriptions by integrating information from BRENDA, KEGG, MetaCyc, Reactome.org and 44 metabolic models into a single unified data set. All metabolite entries have matched synonyms, resolved protonation states, and are linked to unique structures. All reaction entries are elementally and charge balanced. This is accomplished through the use of a workflow of lexicographic, phonetic, and structural comparison algorithms. MetRxn allows for the download of standardized versions of existing genome-scale metabolic models and the use of metabolic information for the rapid reconstruction of new ones. Conclusions The standardization in description allows for the direct comparison of the metabolite and reaction content between metabolic models and databases and the exhaustive prospecting of pathways for biotechnological production. This ever-growing dataset currently consists of over 76,000 metabolites participating in more than 72,000 reactions (including unresolved entries). MetRxn is hosted on a web-based platform that uses relational database models (MySQL). PMID:22233419
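The lexicographic and phonetic comparison step described above can be illustrated with a small sketch. The normalization rules and the tiny phonetic key below are illustrative stand-ins, not MetRxn's actual algorithms (which also include structural comparison).

```python
# Sketch of lexicographic + phonetic matching for reconciling metabolite
# synonyms across databases (rules here are illustrative, not MetRxn's).
import re

def lexical_key(name):
    """Case-, punctuation-, and whitespace-insensitive form of a name."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

def phonetic_key(name):
    """Crude phonetic key: drop vowels after the first letter and collapse
    repeats, so that small spelling variants still collide."""
    s = lexical_key(name)
    if not s:
        return ""
    out = [s[0]]
    for ch in s[1:]:
        if ch in "aeiou":
            continue
        if ch != out[-1]:
            out.append(ch)
    return "".join(out)

def same_metabolite(a, b):
    """Two names are merged if either key matches."""
    return lexical_key(a) == lexical_key(b) or phonetic_key(a) == phonetic_key(b)

# "D-Glucose" vs "d glucose": lexical match; "glycerol" vs "glycerole": phonetic.
m1 = same_metabolite("D-Glucose", "d glucose")
m2 = same_metabolite("glycerol", "glycerole")
```

In a full pipeline, name-level matches like these would only be provisional: the structural comparison stage then confirms that the merged entries really share one chemical structure, protonation state, and charge.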

  6. MOSAIC: An organic geochemical and sedimentological database for marine surface sediments

    NASA Astrophysics Data System (ADS)

    Tavagna, Maria Luisa; Usman, Muhammed; De Avelar, Silvania; Eglinton, Timothy

    2015-04-01

Modern ocean sediments serve as the interface between the biosphere and the geosphere, play a key role in biogeochemical cycles and provide a window on how contemporary processes are written into the sedimentary record. Research over past decades has resulted in a wealth of information on the content and composition of organic matter in marine sediments, with ever-more sophisticated techniques continuing to yield information of greater detail at an accelerating pace. However, there has been no attempt to synthesize this wealth of information. We are establishing a new database that incorporates information relevant to local, regional and global-scale assessment of the content, source and fate of organic materials accumulating in contemporary marine sediments. In the MOSAIC (Modern Ocean Sediment Archive and Inventory of Carbon) database, particular emphasis is placed on molecular and isotopic information, coupled with relevant contextual information (e.g., sedimentological properties) relevant to elucidating factors that influence the efficiency and nature of organic matter burial. The main features of MOSAIC include: (i) Emphasis on continental margin sediments as major loci of carbon burial, and as the interface between terrestrial and oceanic realms; (ii) Bulk to molecular-level organic geochemical properties and parameters, including concentration and isotopic compositions; (iii) Inclusion of extensive contextual data regarding the depositional setting, in particular with respect to sedimentological and redox characteristics. The ultimate goal is to create an open-access instrument, available on the web, to be utilized for research and education by the international community, who can both contribute to and interrogate the database. The submission will be accomplished by means of a pre-configured table available on the MOSAIC webpage.
The information on the filled tables will be checked and eventually imported, via the Structured Query Language (SQL), into
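The SQL import step described above might look like the following sketch, using an in-memory SQLite store. The schema and column names are illustrative inventions, not MOSAIC's actual schema.

```python
# Sketch of importing a submission table into a relational store
# (schema and column names are illustrative, not MOSAIC's actual schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sediment_samples (
        sample_id     TEXT PRIMARY KEY,
        latitude      REAL,
        longitude     REAL,
        water_depth_m REAL,
        toc_percent   REAL,   -- total organic carbon
        d13c_permil   REAL    -- bulk carbon isotope composition
    )""")

# Rows as they might arrive from a checked, pre-configured submission table.
submission = [
    ("CM-001", 36.5, -122.3, 950.0, 1.8, -21.4),
    ("CM-002", 36.6, -122.1, 1200.0, 2.1, -22.0),
]
conn.executemany("INSERT INTO sediment_samples VALUES (?,?,?,?,?,?)", submission)
conn.commit()

# Example interrogation: find organic-rich margin sediments.
rows = conn.execute(
    "SELECT sample_id FROM sediment_samples WHERE toc_percent > 2.0").fetchall()
```

Keeping contextual fields (position, water depth, redox proxies) in the same relational schema as the geochemical measurements is what makes the local-to-global queries the abstract envisions possible with plain SQL.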

  7. Mouse Genome Database: From sequence to phenotypes and disease models

    PubMed Central

    Richardson, Joel E.; Kadin, James A.; Smith, Cynthia L.; Blake, Judith A.; Bult, Carol J.

    2015-01-01

    Summary The Mouse Genome Database (MGD, www.informatics.jax.org) is the international scientific database for genetic, genomic, and biological data on the laboratory mouse to support the research requirements of the biomedical community. To accomplish this goal, MGD provides broad data coverage, serves as the authoritative standard for mouse nomenclature for genes, mutants, and strains, and curates and integrates many types of data from literature and electronic sources. Among the key data sets MGD supports are: the complete catalog of mouse genes and genome features, comparative homology data for mouse and vertebrate genes, the authoritative set of Gene Ontology (GO) annotations for mouse gene functions, a comprehensive catalog of mouse mutations and their phenotypes, and a curated compendium of mouse models of human diseases. Here, we describe the data acquisition process, specifics about MGD's key data areas, methods to access and query MGD data, and outreach and user help facilities. genesis 53:458–473, 2015. © 2015 The Authors. Genesis Published by Wiley Periodicals, Inc. PMID:26150326

  8. The eukaryotic promoter database in its 30th year: focus on non-vertebrate organisms

    PubMed Central

    Dreos, René; Ambrosini, Giovanna; Groux, Romain; Cavin Périer, Rouaïda; Bucher, Philipp

    2017-01-01

We present an update of the Eukaryotic Promoter Database EPD (http://epd.vital-it.ch), more specifically on the EPDnew division, which contains comprehensive organism-specific transcription start site (TSS) collections automatically derived from next generation sequencing (NGS) data. Thanks to the abundant release of new high-throughput transcript mapping data (CAGE, TSS-seq, GRO-cap) the database could be extended to plant and fungal species. We further report on the expansion of the mass genome annotation (MGA) repository containing promoter-relevant chromatin profiling data and on improvements to the EPD entry viewers. Finally, we present a new data access tool, ChIP-Extract, which enables computational biologists to extract diverse types of promoter-associated data in numerical table formats that are readily imported into statistical analysis platforms such as R. PMID:27899657

  9. A future of the model organism model

    PubMed Central

    Rine, Jasper

    2014-01-01

    Changes in technology are fundamentally reframing our concept of what constitutes a model organism. Nevertheless, research advances in the more traditional model organisms have enabled fresh and exciting opportunities for young scientists to establish new careers and offer the hope of comprehensive understanding of fundamental processes in life. New advances in translational research can be expected to heighten the importance of basic research in model organisms and expand opportunities. However, researchers must take special care and implement new resources to enable the newest members of the community to engage fully with the remarkable legacy of information in these fields. PMID:24577733

  10. LANL High-Level Model (HLM) database development letter report

    SciTech Connect

    1995-10-01

    Traditional methods of evaluating munitions have been able to successfully compare like munitions' capabilities. On the modern battlefield, however, many different types of munitions compete for the same set of targets. Assessing the overall stockpile capability and proper mix of these weapons is not a simple task, as their use depends upon the specific geographic region of the world, the threat capabilities, the tactics and operational strategy used by both the US and Threat commanders, and of course the type and quantity of munitions available to the CINC. To sort out these types of issues, a hierarchical set of dynamic, two-sided combat simulations is generally used. The DoD has numerous suitable models for this purpose, but rarely are the models focused on munitions expenditures. Rather, they are designed to perform overall platform assessments and force mix evaluations. However, in some cases, the models could be easily adapted to provide this information, since it is resident in the model's database. Unfortunately, these simulations' complexity (their greatest strength) precludes quick-turnaround assessments of the type and scope required by senior decision-makers.

  11. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database.

    SciTech Connect

    Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division; Purdue Univ.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  12. An atmospheric tritium release database for model comparisons

    SciTech Connect

    Murphy, C.E. Jr.; Wortham, G.R.

    1997-10-13

    A database of vegetation, soil, and air tritium concentrations at gridded coordinate locations following nine accidental atmospheric releases is described. The concentration data is supported by climatological data taken during and immediately after the releases. In six cases, the release data is supplemented with meteorological data taken at seven towers scattered throughout the immediate area of the releases and data from a single television tower instrumented at eight heights. While none of the releases caused a significant dose to the public, the data collected is valuable for comparison with the results of tritium transport models used for risk assessment. The largest, potential off-site dose from any of the releases was calculated to be 1.6 mrem. The population dose from this same release was 46 person-rem which represents 0.04 percent of the natural background dose to the population in the path of the release.

  13. What makes a model organism?

    PubMed

    Leonelli, Sabina; Ankeny, Rachel A

    2013-12-01

    This article explains the key role of model organisms within contemporary research, while at the same time acknowledging their limitations as biological models. We analyse the epistemic and social characteristics of model organism biology as a form of "big science", which includes the development of large, centralised infrastructures, a shared ethos and a specific long-term vision about the "right way" to do research. In order to make wise use of existing resources, researchers now find themselves committed to carrying out this vision with its accompanying assumptions. By clarifying the specific characteristics of model organism work, we aim to provide a framework to assess how much funding should be allocated to such research. On the one hand, it is imperative to exploit the resources and knowledge accumulated using these models to study more diverse groups of organisms. On the other hand, this type of research may be inappropriate for research programmes where the processes of interest are much more delimited, can be usefully studied in isolation and/or are simply not captured by model organism biology.

  14. dSED: A database tool for modeling sediment early diagenesis

    NASA Astrophysics Data System (ADS)

    Katsev, S.; Rancourt, D. G.; L'Heureux, I.

    2003-04-01

    Sediment early diagenesis reaction transport models (RTMs) are becoming powerful tools in providing kinetic descriptions of the metal and nutrient diagenetic cycling in marine, lacustrine, estuarine, and other aquatic sediments, as well as of exchanges with the water column. Whereas there exist several good database/program combinations for thermodynamic equilibrium calculations in aqueous systems, at present there exist no database tools for classification and analysis of the kinetic data essential to RTM development. We present a database tool that is intended to serve as an online resource for information about chemical reactions, solid phase and solute reactants, sorption reactions, transport mechanisms, and kinetic and equilibrium parameters that are relevant to sediment diagenesis processes. The list of reactive substances includes but is not limited to organic matter, Fe and Mn oxides and oxyhydroxides, sulfides and sulfates, calcium, iron, and manganese carbonates, phosphorus-bearing minerals, and silicates. Aqueous phases include dissolved carbon dioxide, oxygen, methane, hydrogen sulfide, sulfate, nitrate, phosphate, some organic compounds, and dissolved metal species. A number of filters allow extracting information according to user-specified criteria, e.g., about a class of substances contributing to the cycling of iron. The database also includes bibliographic information about published diagenetic models and the reactions and processes that they consider. At the time of preparing this abstract, dSED contained 128 reactions and 12 pre-defined filters. dSED is maintained by the Lake Sediment Structure and Evolution (LSSE) group at the University of Ottawa (www.science.uottawa.ca/LSSE/dSED) and we invite input from the geochemical community.
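The filtering capability described above (e.g. extracting all reactions that contribute to the cycling of iron) can be sketched in a few lines. The records and field names below are invented for illustration and do not reflect dSED's actual schema.

```python
# Hypothetical sketch of a dSED-style element filter over reaction records.
# The reaction entries below are illustrative, not taken from the database.
reactions = [
    {"name": "organic matter oxidation by Fe(III)", "elements": {"C", "Fe"}},
    {"name": "pyrite precipitation",                "elements": {"Fe", "S"}},
    {"name": "calcite dissolution",                 "elements": {"Ca", "C"}},
    {"name": "Mn(IV) reduction",                    "elements": {"Mn", "C"}},
]

def filter_by_element(records, element):
    """Return the names of reactions involving a given element."""
    return [r["name"] for r in records if element in r["elements"]]

print(filter_by_element(reactions, "Fe"))
```

A user-specified criterion thus reduces to a predicate over record attributes, which is how pre-defined filters like the twelve mentioned in the abstract could be composed.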

  15. Determination of urban volatile organic compound emission ratios and comparison with an emissions database

    NASA Astrophysics Data System (ADS)

    Warneke, C.; McKeen, S. A.; de Gouw, J. A.; Goldan, P. D.; Kuster, W. C.; Holloway, J. S.; Williams, E. J.; Lerner, B. M.; Parrish, D. D.; Trainer, M.; Fehsenfeld, F. C.; Kato, S.; Atlas, E. L.; Baker, A.; Blake, D. R.

    2007-05-01

    During the NEAQS-ITCT2k4 campaign in New England, anthropogenic VOCs and CO were measured downwind from New York City and Boston. The emission ratios of VOCs relative to CO and acetylene were calculated using a method in which the ratio of a VOC to acetylene is plotted versus the photochemical age; the intercept at a photochemical age of zero gives the emission ratio. The emission ratios determined in this way were compared to other measurement sets, including data from the same location in 2002, canister samples collected inside New York City and Boston, aircraft measurements from Los Angeles in 2002, and the average urban composition of 39 U.S. cities. All the measurements generally agree within a factor of two. The measured emission ratios also agree for most compounds within a factor of two with vehicle exhaust data, indicating that a major source of VOCs in urban areas is automobiles. A comparison with an anthropogenic emission database shows less agreement. Especially large discrepancies were found for the C2-C4 alkanes and most oxygenated species. As an example, the database overestimated toluene by almost a factor of three, which caused an air quality forecast model (WRF-CHEM) using this database to overpredict the toluene mixing ratio by a factor of 2.5 as well. On the other hand, the overall reactivity of the measured species and the reactivity of the same compounds in the emission database were found to agree within 30%.
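The intercept method described above can be illustrated numerically: because a VOC and acetylene are removed by OH at different rates, ln(VOC/acetylene) declines roughly linearly with photochemical age, and extrapolating the fit back to age zero recovers the emission ratio. The sketch below uses synthetic data with an assumed rate-constant difference, not the campaign's actual values.

```python
import numpy as np

# Synthetic illustration of the intercept method (values are invented).
rng = np.random.default_rng(0)
age = rng.uniform(0, 30, 200)             # photochemical age (h)
true_emission_ratio = 2.0                 # VOC/acetylene at the source
k_diff = 0.05                             # assumed OH rate-constant difference (1/h)
ratio = true_emission_ratio * np.exp(-k_diff * age)
ratio *= np.exp(rng.normal(0, 0.05, age.size))  # multiplicative measurement noise

# Linear fit of ln(ratio) vs. age; the intercept corresponds to age zero.
slope, intercept = np.polyfit(age, np.log(ratio), 1)
emission_ratio = np.exp(intercept)
print(round(emission_ratio, 2))
```

With noise-free data the fit would return the source ratio exactly; with realistic scatter the intercept still clusters tightly around it, which is what makes the method robust downwind of a city.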

  16. Podiform chromite deposits--database and grade and tonnage models

    USGS Publications Warehouse

    Mosier, Dan L.; Singer, Donald A.; Moring, Barry C.; Galloway, John P.

    2012-01-01

    Chromite ((Mg,Fe²⁺)(Cr,Al,Fe³⁺)₂O₄) is the only source for the metallic element chromium, which is used in the metallurgical, chemical, and refractory industries. Podiform chromite deposits are small magmatic chromite bodies formed in the ultramafic section of an ophiolite complex in the oceanic crust. These deposits have been found in midoceanic ridge, off-ridge, and suprasubduction tectonic settings. Most podiform chromite deposits are found in dunite or peridotite near the contact of the cumulate and tectonite zones in ophiolites. We have identified 1,124 individual podiform chromite deposits, based on a 100-meter spatial rule, and have compiled them in a database. Of these, 619 deposits have been used to create three new grade and tonnage models for podiform chromite deposits. The major podiform chromite model has a median tonnage of 11,000 metric tons and a mean grade of 45 percent Cr₂O₃. The minor podiform chromite model has a median tonnage of 100 metric tons and a mean grade of 43 percent Cr₂O₃. The banded podiform chromite model has a median tonnage of 650 metric tons and a mean grade of 42 percent Cr₂O₃. Observed frequency distributions are also given for grades of rhodium, iridium, ruthenium, palladium, and platinum. In resource assessment applications, both major and minor podiform chromite models may be used for any ophiolite complex regardless of its tectonic setting or ophiolite zone. Expected sizes of undiscovered podiform chromite deposits, with respect to degree of deformation or ore-forming process, may determine which model is appropriate. The banded podiform chromite model may be applicable for ophiolites in both suprasubduction and midoceanic ridge settings.
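Grade and tonnage models of the kind described above are typically summarized by a median tonnage (deposit sizes are roughly lognormal, so the median is the robust central measure) and a mean grade. A minimal sketch with a synthetic lognormal population, not the USGS deposit data:

```python
import numpy as np

# Synthetic grade-tonnage summary in the style of the abstract above.
# The population parameters are assumed for illustration only.
rng = np.random.default_rng(2)
tonnage = rng.lognormal(mean=np.log(11_000), sigma=2.0, size=619)  # metric tons
grade = rng.normal(45.0, 5.0, size=619)                            # percent Cr2O3

median_tonnage = np.median(tonnage)   # clusters near the assumed 11,000 t median
mean_grade = grade.mean()             # clusters near the assumed 45 percent
print(int(median_tonnage), round(mean_grade, 1))
```

The large sigma reflects the heavy right tail typical of deposit-size distributions, which is why the mean tonnage of such a population would be far above its median.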

  17. Cascade fuzzy ART: a new extensible database for model-based object recognition

    NASA Astrophysics Data System (ADS)

    Hung, Hai-Lung; Liao, Hong-Yuan M.; Lin, Shing-Jong; Lin, Wei-Chung; Fan, Kuo-Chin

    1996-02-01

    In this paper, we propose a cascade fuzzy ART (CFART) neural network which can be used as an extensible database in a model-based object recognition system. The proposed CFART network accepts both binary and continuous inputs. It preserves the prominent characteristics of a fuzzy ART network while extending the fuzzy ART's capability toward a hierarchical class representation of input patterns. The learning processes of the proposed network are unsupervised and self-organizing, and include coupled top-down searching and bottom-up learning processes. In addition, a global searching tree is built to speed up the learning and recognition processes.

  18. Generic models of deep formation water calculated with PHREEQC using the "gebo"-database

    NASA Astrophysics Data System (ADS)

    Bozau, E.; van Berk, W.

    2012-04-01

    To identify processes during the use of formation waters for geothermal energy production, an extended hydrogeochemical thermodynamic database (named "gebo"-database) for the well-known and commonly used software PHREEQC has been developed by collecting and inserting data from the literature. The following solution master species are added to the database "pitzer.dat", which is provided with the code PHREEQC: Fe(+2), Fe(+3), S(-2), C(-4), Si, Zn, Pb, and Al. According to the solution master species, the necessary solution species and phases (solid phases and gases) are implemented. Furthermore, temperature and pressure adaptations of the mass action law constants, Pitzer parameters for the calculation of activity coefficients in waters of high ionic strength, and solubility equilibria among gaseous and aqueous species of CO2, methane, and hydrogen sulphide are implemented into the "gebo"-database. Combined with the "gebo"-database, the code PHREEQC can be used to test the behaviour of highly concentrated solutions (e.g. formation waters, brines). Chemical changes caused by temperature and pressure gradients, as well as the exposure of the water to the atmosphere and technical equipment, can be modelled. To check the plausibility of additional and adapted data/parameters, experimental solubility data from the literature (e.g. sulfate and carbonate minerals) are compared to modelled mineral solubilities at elevated levels of Total Dissolved Solids (TDS), temperature, and pressure. First results show good matches between modelled and experimental mineral solubility for barite, celestite, anhydrite, and calcite in high-TDS waters, indicating the plausibility of the additional and adapted data and parameters. Furthermore, chemical parameters of geothermal wells in the North German Basin are used to test the "gebo"-database.

  19. Share and enjoy: anatomical models database--generating and sharing cardiovascular model data using web services.

    PubMed

    Kerfoot, Eric; Lamata, Pablo; Niederer, Steve; Hose, Rod; Spaan, Jos; Smith, Nic

    2013-11-01

    Sharing data between scientists and with clinicians in cardiac research has been facilitated significantly by the use of web technologies. The potential of this technology has meant that information sharing has been routinely promoted through databases that have encouraged stakeholder participation in communities around these services. In this paper we discuss the Anatomical Model Database (AMDB) (Gianni et al. Functional imaging and modeling of the heart. Springer, Heidelberg, 2009; Gianni et al. Phil Trans Ser A Math Phys Eng Sci 368:3039-3056, 2010), which both facilitates a database-centric approach to collaboration and extends this framework with new capabilities for creating new mesh data. AMDB currently stores cardiac geometric models described in Gianni et al. (Functional imaging and modelling of the heart. Springer, Heidelberg, 2009), a number of additional cardiac models describing geometry and functional properties, and most recently models generated using a web service. The functional models represent data from simulations in geometric form, such as electrophysiology or mechanics, many of which are present in AMDB as part of a benchmark study. Finally, the heartgen service has been added for producing left- or bi-ventricle models derived from binary image data using the methods described in Lamata et al. (Med Image Anal 15:801-813, 2011). The results can optionally be hosted on AMDB alongside other community-provided anatomical models. AMDB is, therefore, a unique database storing geometric data (rather than abstract models or image data) combined with a powerful web service for generating new geometric models.

  20. Global and Regional Ecosystem Modeling: Databases of Model Drivers and Validation Measurements

    SciTech Connect

    Olson, R.J.

    2002-03-19

    …grid cells for which inventory, modeling, or remote-sensing tools were used to scale up the point measurements. Documentation of the content and organization of the EMDI databases is provided.

  1. An Access Path Model for Physical Database Design.

    DTIC Science & Technology

    1979-12-28

    target system. 4.1 Algebraic Structure for Physical Design For the purposes of implementation-oriented design, we shall use the logical access paths...subsection, we present an algorithm for generating a maximal labelling that specifies superior support for the access paths most heavily travelled. Assume...A.C.M. SIGMOD Conf., (May 79). [CARD73] Cardenas, A. F., "Evaluation and Selection of File Organization - A Model and a System," Comm. A.C.M., V 16, N

  2. Data model and relational database design for the New England Water-Use Data System (NEWUDS)

    USGS Publications Warehouse

    Tessler, Steven

    2001-01-01

    The New England Water-Use Data System (NEWUDS) is a database for the storage and retrieval of water-use data. NEWUDS can handle data covering many facets of water use, including (1) tracking various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the description, classification and location of places and organizations involved in water-use activities; (3) details about measured or estimated volumes of water associated with water-use activities; and (4) information about data sources and water resources associated with water use. In NEWUDS, each water transaction occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NEWUDS model are site, conveyance, transaction/rate, location, and owner. Other important entities include water resources (used for withdrawals and returns), data sources, and aliases. Multiple water-exchange estimates can be stored for individual transactions based on different methods or data sources. Storage of user-defined details is accommodated for several of the main entities. Numerous tables containing classification terms facilitate detailed descriptions of data items and can be used for routine or custom data summarization. NEWUDS handles single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database structure. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
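The core entities named above (site, conveyance, transaction/rate) and the rule that each transaction occurs unidirectionally between two sites can be sketched as a small relational schema. The table and column names below are illustrative, not the actual USGS schema, which is far more detailed.

```python
import sqlite3

# Minimal sketch of NEWUDS-style core entities (names are hypothetical).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site (
    site_id   INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    site_type TEXT                 -- e.g. well, treatment plant, outfall
);
CREATE TABLE conveyance (          -- directed edge in the water network
    conveyance_id INTEGER PRIMARY KEY,
    from_site     INTEGER NOT NULL REFERENCES site(site_id),
    to_site       INTEGER NOT NULL REFERENCES site(site_id)
);
CREATE TABLE water_transaction (   -- a volume moved along one conveyance
    transaction_id INTEGER PRIMARY KEY,
    conveyance_id  INTEGER NOT NULL REFERENCES conveyance(conveyance_id),
    volume_mgd     REAL,           -- million gallons per day
    method         TEXT            -- measured or estimated
);
""")
con.execute("INSERT INTO site VALUES (1, 'Well A', 'well')")
con.execute("INSERT INTO site VALUES (2, 'Plant B', 'treatment')")
con.execute("INSERT INTO conveyance VALUES (1, 1, 2)")
con.execute("INSERT INTO water_transaction VALUES (1, 1, 0.75, 'measured')")

# Reconstruct the directed flow: which site sends water to which, and how much.
row = con.execute(
    "SELECT s1.name, s2.name, t.volume_mgd "
    "FROM water_transaction t "
    "JOIN conveyance c ON c.conveyance_id = t.conveyance_id "
    "JOIN site s1 ON s1.site_id = c.from_site "
    "JOIN site s2 ON s2.site_id = c.to_site").fetchone()
print(row)
```

Because conveyances are directed, multiple estimates for the same transaction (different methods or sources, as the abstract notes) would simply be additional rows referencing the same conveyance.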

  3. Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in Malaysia, building construction has become more complex, and a strata objects database has become more important for registering the real world, as people now own and use multiple levels of space. Strata titles are therefore increasingly important and need to be well managed. LADM (ISO 19152) is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. The paper reviews the current 2D geospatial database, including how the current Malaysian cadastre system handles strata titles, lists its problems, and discusses the need for a 3D geospatial database in the future. It then develops a strata objects database using the LADM data model and analyzes the result. The design process comprises conceptual, logical, and physical database design. The resulting database provides both spatial and non-spatial strata title information, showing the location of each strata unit, and may help in managing strata titles and related information.

  4. GIS-based hydrogeological databases and groundwater modelling

    NASA Astrophysics Data System (ADS)

    Gogu, Radu Constantin; Carabin, Guy; Hallet, Vincent; Peters, Valerie; Dassargues, Alain

    2001-12-01

    Reliability and validity of groundwater analysis strongly depend on the availability of large volumes of high-quality data. Putting all data into a coherent and logical structure supported by a computing environment helps ensure validity and availability and provides a powerful tool for hydrogeological studies. A hydrogeological geographic information system (GIS) database that offers facilities for groundwater-vulnerability analysis and hydrogeological modelling has been designed in Belgium for the Walloon region. Data from five river basins, chosen for their contrasting hydrogeological characteristics, have been included in the database, and a set of applications that have been developed now allow further advances. Interest is growing in the potential for integrating GIS technology and groundwater simulation models. A "loose-coupling" tool was created between the spatial-database scheme and the groundwater numerical model interface GMS (Groundwater Modelling System). Following time and spatial queries, the hydrogeological data stored in the database can be easily used within different groundwater numerical models.

  5. Incorporation of the CrossFire Beilstein Database into the Organic Chemistry Curriculum at the Royal Danish School of Pharmacy

    NASA Astrophysics Data System (ADS)

    Brøgger Christensen, S.; Franzyk, Henrik; Frølund, Bente; Jaroszewski, Jerzy W.; Stærk, Dan; Vedsø, Per

    2002-06-01

    The CrossFire Beilstein database has been incorporated into the organic chemistry curriculum at the Royal Danish School of Pharmacy as a powerful pedagogic tool. During a laboratory course in organic synthesis the database enables the students to get comprehensive overviews of known synthetic methods for a given compound. During a laboratory course in identification and as a part of an applied course in organic spectroscopy the students use the database for obtaining lists of all recorded isomeric compounds, facilitating an exhaustive identification. The main entrances for identification purposes are molecular formulas deduced either from titrations or from mass spectra combined with partial structures identified by chemical tests, or by interpretation of spectra. Thus, identifications made using the CrossFire Beilstein database will exclude some possibilities and point to correct structures from a selection of existing compounds. This appears to help the learning process considerably.

  6. Filling a missing link between biogeochemical, climate and ecosystem studies: a global database of atmospheric water-soluble organic nitrogen

    NASA Astrophysics Data System (ADS)

    Cornell, Sarah

    2015-04-01

    It is time to collate a global community database of atmospheric water-soluble organic nitrogen deposition. Organic nitrogen (ON) has long been known to be globally ubiquitous in atmospheric aerosol and precipitation, with implications for air and water quality, climate, biogeochemical cycles, ecosystems and human health. The number of studies of atmospheric ON deposition has increased steadily in recent years, but to date there is no accessible global dataset, for either bulk ON or its major components. Improved qualitative and quantitative understanding of the organic nitrogen component is needed to complement the well-established knowledge base pertaining to other components of atmospheric deposition (cf. Vet et al 2014). Without this basic information, we are increasingly constrained in addressing the current dynamics and potential interactions of atmospheric chemistry, climate and ecosystem change. To see the full picture we need global data synthesis, more targeted data gathering, and models that let us explore questions about the natural and anthropogenic dynamics of atmospheric ON. Collectively, our research community already has a substantial amount of atmospheric ON data. Published reports extend back over a century and now have near-global coverage. However, datasets available from the literature are very piecemeal and too often lack crucially important information that would enable aggregation or re-use. I am initiating an open collaborative process to construct a community database, so we can begin to systematically synthesize these datasets (generally from individual studies at a local and temporally limited scale) to increase their scientific usability and statistical power for studies of global change and anthropogenic perturbation. In drawing together our disparate knowledge, we must address various challenges and concerns, not least about the comparability of analysis and sampling methodologies, and the known complexity of composition of ON.

  7. The mouse genome database: genotypes, phenotypes, and models of human disease.

    PubMed

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2013-01-01

    The laboratory mouse is the premier animal model for studying human biology because all life stages can be accessed experimentally, a completely sequenced reference genome is publicly available and there exists a myriad of genomic tools for comparative and experimental research. In the current era of genome-scale, data-driven biomedical research, the integration of genetic, genomic and biological data is essential for realizing the full potential of the mouse as an experimental model. The Mouse Genome Database (MGD; http://www.informatics.jax.org), the community model organism database for the laboratory mouse, is designed to facilitate the use of the laboratory mouse as a model system for understanding human biology and disease. To achieve this goal, MGD integrates genetic and genomic data related to the functional and phenotypic characterization of mouse genes and alleles and serves as a comprehensive catalog for mouse models of human disease. Recent enhancements to MGD include the addition of human ortholog details to mouse Gene Detail pages, the inclusion of microRNA knockouts to MGD's catalog of alleles and phenotypes, the addition of video clips to phenotype images, providing access to genotype and phenotype data associated with quantitative trait loci (QTL) and improvements to the layout and display of Gene Ontology annotations.

  8. A database for estimating organ dose for coronary angiography and brain perfusion CT scans for arbitrary spectra and angular tube current modulation

    SciTech Connect

    Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos; Schmidt, Taly Gilat

    2012-09-15

    Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDIvol and multiplying by a physical CTDIvol measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate
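The table-lookup estimate described above (multiply the normalized dose table by the incident spectrum and per-angle photon count, then sum over energies and angles) reduces to a broadcasted weighted sum. The array names, shapes, and values below are illustrative placeholders, not the paper's actual database layout.

```python
import numpy as np

# Schematic organ-dose estimate from a precomputed dose table.
# Shapes and values are assumed for illustration only.
n_energies, n_angles = 146, 1000          # 5-150 keV bins, 1000 projections

rng = np.random.default_rng(1)
# normalized dose per emitted photon for one organ, indexed [energy, angle]
dose_table = rng.uniform(1e-17, 5e-17, (n_energies, n_angles))
# incident spectrum: photons emitted per energy bin
spectrum = rng.uniform(0, 1e9, n_energies)
# angular tube current modulation: relative output per projection angle
modulation = 0.5 + 0.5 * np.abs(np.sin(np.linspace(0, np.pi, n_angles)))

# photons[e, a] = photons emitted at energy e toward projection angle a
photons = spectrum[:, None] * modulation[None, :]
# total organ dose = sum over all energies and angles of photons x dose/photon
organ_dose = np.sum(photons * dose_table)
print(organ_dose > 0)
```

Changing the acquisition (a different kVp spectrum, a different modulation scheme, or a partial-angle scan that zeroes some projections) only changes the weights, which is why no new Monte Carlo run is needed per protocol.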

  9. Modeling of heavy organic deposition

    SciTech Connect

    Chung, F.T.H.

    1992-01-01

    Organic deposition is often a major problem in petroleum production and processing. This problem is manifested by current activities in gas flooding and heavy oil production. The need for understanding the nature of asphaltenes and asphaltics and developing solutions to the deposition problem is well recognized. Prediction techniques are crucial to solution development. In the past 5 years, some progress in modeling organic deposition has been made. A state-of-the-art review of methods for modeling organic deposition is presented in this report. Two new models were developed in this work; one based on a thermodynamic equilibrium principle and the other on colloidal stability theory. These two models are more general and realistic than others previously reported. Because experimental results on the characteristics of asphaltene are inconclusive, it is still not well known whether the asphaltenes in crude oil exist as a true solution or as a colloidal suspension. Further laboratory work, which is designed to study the solubility properties of asphaltenes and to provide additional information for model development, is proposed. Some experimental tests have been conducted to study the mechanisms of CO₂-induced asphaltene precipitation. Coreflooding experiments show that asphaltene precipitation occurs after gas breakthrough. The mechanism of CO₂-induced asphaltene precipitation is believed to occur by hydrocarbon extraction, which causes a change in oil composition. Oil swelling due to CO₂ solubilization does not induce asphaltene precipitation.

  10. MOAtox: A comprehensive mode of action and acute aquatic toxicity database for predictive model development.

    PubMed

    Barron, M G; Lilavois, C R; Martin, T M

    2015-04-01

    The mode of toxic action (MOA) has been recognized as a key determinant of chemical toxicity and as an alternative to chemical class-based predictive toxicity modeling. However, the development of quantitative structure-activity relationship (QSAR) and other models has been limited by the availability of comprehensive high-quality MOA and toxicity databases. The current study developed a dataset of MOA assignments for 1213 chemicals that included a diversity of metals, pesticides, and other organic compounds encompassing six broad and 31 specific MOAs. MOA assignments were made using a combination of high-confidence approaches that included international consensus classifications, QSAR predictions, and weight-of-evidence professional judgment based on an assessment of structure and literature information. A toxicity database of 674 acute values linked to chemical MOA was developed for fish and invertebrates. Additionally, species-specific measured or high-confidence estimated acute values were developed for the four aquatic species with the most reported toxicity values: rainbow trout (Oncorhynchus mykiss), fathead minnow (Pimephales promelas), bluegill (Lepomis macrochirus), and the cladoceran (Daphnia magna). Measured acute toxicity values met strict standardization and quality assurance requirements. Toxicity values for chemicals with missing species-specific data were estimated using established interspecies correlation models and procedures (Web-ICE; http://epa.gov/ceampubl/fchain/webice/), with the highest confidence values selected. The resulting dataset of MOA assignments and paired toxicity values is provided in spreadsheet format as a comprehensive standardized dataset available for predictive aquatic toxicology model development.

  11. Avibase – a database system for managing and organizing taxonomic concepts

    PubMed Central

    Lepage, Denis; Vaidya, Gaurav; Guralnick, Robert

    2014-01-01

    Abstract Scientific names of biological entities offer an imperfect resolution of the concepts that they are intended to represent. Often they are labels applied to entities ranging from entire populations to individual specimens representing those populations, even though such names only unambiguously identify the type specimen to which they were originally attached. Thus the real-life referents of names are constantly changing as biological circumscriptions are redefined and thereby alter the sets of individuals bearing those names. This problem is compounded by other characteristics of names that make them ambiguous identifiers of biological concepts, including emendations, homonymy and synonymy. Taxonomic concepts have been proposed as a way to address issues related to scientific names, but they have yet to receive broad recognition or implementation. Some efforts have been made towards building systems that address these issues by cataloguing and organizing taxonomic concepts, but most are still in conceptual or proof-of-concept stage. We present the on-line database Avibase as one possible approach to organizing taxonomic concepts. Avibase has been successfully used to describe and organize 844,000 species-level and 705,000 subspecies-level taxonomic concepts across every major bird taxonomic checklist of the last 125 years. The use of taxonomic concepts in place of scientific names, coupled with efficient resolution services, is a major step toward addressing some of the main deficiencies in the current practices of scientific name dissemination and use. PMID:25061375

  12. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  13. FOAM (Functional Ontology Assignments for Metagenomes): A Hidden Markov Model (HMM) database with environmental focus

    SciTech Connect

    Prestat, Emmanuel; David, Maude M.; Hultman, Jenni; Taş, Neslihan; Lamendella, Regina; Dvornik, Jill; Mackelprang, Rachel; Myrold, David D.; Jumpponen, Ari; Tringe, Susannah G.; Holman, Elizabeth; Mavromatis, Konstantinos; Jansson, Janet K.

    2014-09-26

    A new functional gene database, FOAM (Functional Ontology Assignments for Metagenomes), was developed to screen environmental metagenomic sequence datasets. FOAM provides a new functional ontology dedicated to classifying gene functions relevant to environmental microorganisms based on Hidden Markov Models (HMMs). Sets of aligned protein sequences (i.e. ‘profiles’) were tailored to a large group of target KEGG Orthologs (KOs) from which HMMs were trained. The alignments were checked and curated to make them specific to the targeted KO. Within this process, sequence profiles were enriched with the most abundant sequences available to maximize the yield of accurate classifier models. An associated functional ontology was built to describe the functional groups and hierarchy. FOAM allows the user to select the target search space before HMM-based comparison steps and to easily organize the results into different functional categories and subcategories. FOAM is publicly available at http://portal.nersc.gov/project/m1317/FOAM/.

  14. Solid Waste Projection Model: Database (Version 1.3). Technical reference manual

    SciTech Connect

    Blackburn, C.L.

    1991-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.3 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement.

  15. Expanding on Successful Concepts, Models, and Organization

    SciTech Connect

    Teeguarden, Justin G.; Tan, Yu-Mei; Edwards, Stephen W.; Leonard, Jeremy A.; Anderson, Kim A.; Corley, Richard A.; Kile, Molly L.; L. Massey Simonich, Staci; Stone, David; Tanguay, Robert L.; Waters, Katrina M.; Harper, Stacey L.; Williams, David E.

    2016-09-06

    In her letter to the editor1 regarding our recent Feature Article “Completing the Link between Exposure Science and Toxicology for Improved Environmental Health Decision Making: The Aggregate Exposure Pathway Framework” 2, Dr. von Göetz expressed several concerns about terminology, and the perception that we propose the replacement of successful approaches and models for exposure assessment with a concept. We are glad to have the opportunity to address these issues here. If the goal of the AEP framework were to replace existing exposure models or databases for organizing exposure data with a concept, we would share Dr. von Göetz's concerns. Instead, the outcome we promote is broader use of an organizational framework for exposure science. The framework would support improved generation, organization, and interpretation of data as well as modeling and prediction, not replacement of models. The field of toxicology has seen the benefits of wide use of one or more organizational frameworks (e.g., mode and mechanism of action, adverse outcome pathway). These frameworks influence how experiments are designed; how data are collected, curated, stored, and interpreted; and ultimately how data are used in risk assessment. Exposure science is poised to benefit similarly from broader use of a parallel organizational framework, which, as Dr. von Göetz correctly points out, is currently used in the exposure modeling community. In our view, the concepts used so effectively in the exposure modeling community, expanded upon in the AEP framework, could see wider adoption by the field as a whole. The value of such a framework was recognized by the National Academy of Sciences.3 Replacement of models, databases, or any application with the AEP framework was not proposed in our article. The positive role that broader, more consistent use of such a framework might have in enabling and advancing “general activities such as data acquisition, organization…,” and exposure modeling was discussed

  16. Inorganic bromine in organic molecular crystals: Database survey and four case studies

    NASA Astrophysics Data System (ADS)

    Nemec, Vinko; Lisac, Katarina; Stilinović, Vladimir; Cinčić, Dominik

    2017-01-01

    We present a Cambridge Structural Database and experimental study of multicomponent molecular crystals containing bromine. The CSD study covers the supramolecular behaviour of bromide and tribromide anions as well as halogen-bonded dibromine molecules in crystal structures of organic salts and cocrystals, and a study of the geometries and complexities in polybromide anion systems. In addition, we present four case studies of organic structures with bromide, tribromide and polybromide anions as well as the neutral dibromine molecule. These include the first observed crystal with diprotonated phenazine, a double salt of phenazinium bromide and tribromide, a cocrystal of 4-methoxypyridine with the neutral dibromine molecule as a halogen bond donor, as well as bis(4-methoxypyridine)bromonium polybromide. Structural features of the four case studies are for the most part consistent with the statistically prevalent behaviour indicated by the CSD study for the given bromine species, although they do exhibit some unorthodox structural features and thereby indicate possible supramolecular causes of aberrations from the statistically most abundant (and presumably most favourable) geometries.

  17. Solid Waste Projection Model: Database (Version 1.4). Technical reference manual

    SciTech Connect

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193).

  18. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    SciTech Connect

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L.; Loftis, J.P.; Shipe, P.C.; Truett, L.F.

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  19. PCoM-DB Update: A Protein Co-Migration Database for Photosynthetic Organisms.

    PubMed

    Takabayashi, Atsushi; Takabayashi, Saeka; Takahashi, Kaori; Watanabe, Mai; Uchida, Hiroko; Murakami, Akio; Fujita, Tomomichi; Ikeuchi, Masahiko; Tanaka, Ayumi

    2016-12-22

    The identification of protein complexes is important for the understanding of protein structure and function and the regulation of cellular processes. We used blue-native PAGE and tandem mass spectrometry to identify protein complexes systematically, and built a web database, the protein co-migration database (PCoM-DB, http://pcomdb.lowtem.hokudai.ac.jp/proteins/top), to provide prediction tools for protein complexes. PCoM-DB provides migration profiles for any given protein of interest, and allows users to compare them with migration profiles of other proteins, showing the oligomeric states of proteins and thus identifying potential interaction partners. The initial version of PCoM-DB (launched in January 2013) included protein complex data for Synechocystis whole cells and Arabidopsis thaliana thylakoid membranes. Here we report PCoM-DB version 2.0, which includes new data sets and analytical tools. Additional data are included from whole cells of the pelagic marine picocyanobacterium Prochlorococcus marinus, the thermophilic cyanobacterium Thermosynechococcus elongatus, the unicellular green alga Chlamydomonas reinhardtii and the bryophyte Physcomitrella patens. The Arabidopsis protein data now include data for intact mitochondria, intact chloroplasts, chloroplast stroma and chloroplast envelopes. The new tools comprise a multiple-protein search form and a heat map viewer for protein migration profiles. Users can compare migration profiles of a protein of interest among different organelles or compare migration profiles among different proteins within the same sample. For Arabidopsis proteins, users can compare migration profiles of a protein of interest with putative homologous proteins from non-Arabidopsis organisms. The updated PCoM-DB will help researchers find novel protein complexes and estimate their evolutionary changes in the green lineage.
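The co-migration idea behind a database like this can be illustrated with a small sketch: proteins in the same complex migrate together on a blue-native gel, so their abundance profiles across gel slices correlate strongly. The profiles and protein names below are invented for illustration, and Pearson correlation is just one plausible similarity measure, not necessarily the one PCoM-DB uses:

```python
import numpy as np

# Hypothetical migration profiles: relative abundance across six gel
# slices from a blue-native PAGE separation.
profiles = {
    "PsbA": np.array([0.0, 0.1, 0.8, 1.0, 0.3, 0.0]),
    "PsbD": np.array([0.0, 0.2, 0.7, 1.0, 0.2, 0.0]),  # co-migrates with PsbA
    "RbcL": np.array([1.0, 0.6, 0.1, 0.0, 0.0, 0.0]),  # migrates elsewhere
}

def comigration_score(a, b):
    """Pearson correlation between two migration profiles."""
    return float(np.corrcoef(a, b)[0, 1])

score_same = comigration_score(profiles["PsbA"], profiles["PsbD"])
score_diff = comigration_score(profiles["PsbA"], profiles["RbcL"])
```

A high score flags the pair as potential interaction partners; comparing a protein's profile across organelles or organisms works the same way, profile against profile.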

  20. Functional Analysis and Discovery of Microbial Genes Transforming Metallic and Organic Pollutants: Database and Experimental Tools

    SciTech Connect

    Lawrence P. Wackett; Lynda B.M. Ellis

    2004-12-09

    Microbial functional genomics is faced with a burgeoning list of genes which are denoted as unknown or hypothetical for lack of any knowledge about their function. The majority of microbial genes encode enzymes. Enzymes are the catalysts of metabolism: catabolism, anabolism, stress responses, and many other cell functions. A major problem facing microbial functional genomics is proposed here to derive from the breadth of microbial metabolism, much of which remains undiscovered. The breadth of microbial metabolism has been surveyed by the PIs and represented according to reaction types on the University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD): http://umbbd.ahc.umn.edu/search/FuncGrps.html. The database depicts metabolism of 49 chemical functional groups, representing most of current knowledge. Twice that number of chemical groups are proposed here to be metabolized by microbes. Thus, at least 50% of the unique biochemical reactions catalyzed by microbes remain undiscovered. This further suggests that many unknown and hypothetical genes encode functions yet undiscovered. This gap will be partly filled by the current proposal. The UM-BBD will be greatly expanded as a resource for microbial functional genomics. Computational methods will be developed to predict microbial metabolism which is not yet discovered. Moreover, a concentrated effort to discover new microbial metabolism will be conducted. The research will focus on metabolism of direct interest to DOE, dealing with the transformation of metals, metalloids, organometallics and toxic organics. This is precisely the type of metabolism which has been characterized most poorly to date. Moreover, these studies will directly impact functional genomic analysis of DOE-relevant genomes.

  1. Bayesian statistical modeling of disinfection byproduct (DBP) bromine incorporation in the ICR database.

    PubMed

    Francis, Royce A; Vanbriesen, Jeanne M; Small, Mitchell J

    2010-02-15

    Statistical models are developed for bromine incorporation in the trihalomethane (THM), trihaloacetic acid (THAA), dihaloacetic acid (DHAA), and dihaloacetonitrile (DHAN) subclasses of disinfection byproducts (DBPs) using distribution system samples from plants applying only free chlorine as a primary or residual disinfectant in the Information Collection Rule (ICR) database. The objective of this study is to characterize the effect of water quality conditions before, during, and post-treatment on distribution system bromine incorporation into DBP mixtures. Bayesian Markov Chain Monte Carlo (MCMC) methods are used to model individual DBP concentrations and estimate the coefficients of the linear models used to predict the bromine incorporation fraction for distribution system DBP mixtures in each of the four priority DBP classes. The bromine incorporation models achieve good agreement with the data. The most important predictors of bromine incorporation fraction across DBP classes are alkalinity, specific UV absorption (SUVA), and the bromide to total organic carbon ratio (Br:TOC) at the first point of chlorine addition. Free chlorine residual in the distribution system, distribution system residence time, distribution system pH, turbidity, and temperature only slightly influence bromine incorporation. The bromide to applied chlorine (Br:Cl) ratio is not a significant predictor of the bromine incorporation fraction (BIF) in any of the four classes studied. These results indicate that removal of natural organic matter and the location of chlorine addition are important treatment decisions that have substantial implications for bromine incorporation into disinfection byproducts in drinking water.
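The general shape of the approach (MCMC estimation of linear-model coefficients) can be sketched with a generic random-walk Metropolis sampler on synthetic data. This is not the authors' model: the predictors, priors, and sampler here are simplified stand-ins for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for two predictors (think SUVA and the Br:TOC
# ratio) and a bromine incorporation fraction generated from known
# coefficients plus Gaussian noise.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([0.3, -0.1, 0.2])
y = X @ beta_true + rng.normal(scale=0.05, size=n)

def log_posterior(beta, sigma=0.05):
    # Gaussian likelihood with a flat prior on the coefficients.
    resid = y - X @ beta
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis: propose a small step, accept with
# probability min(1, posterior ratio).
beta = np.zeros(3)
lp = log_posterior(beta)
samples = []
for step in range(20000):
    prop = beta + rng.normal(scale=0.01, size=3)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    if step >= 5000:                 # discard burn-in
        samples.append(beta)
post_mean = np.mean(samples, axis=0)
```

The posterior means recover the generating coefficients, which is the sense in which an MCMC fit "achieves good agreement with the data" for a well-specified linear model.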

  2. Modeling BVOC isoprene emissions based on a GIS and remote sensing database

    NASA Astrophysics Data System (ADS)

    Wong, Man Sing; Sarker, Md. Latifur Rahman; Nichol, Janet; Lee, Shun-cheng; Chen, Hongwei; Wan, Yiliang; Chan, P. W.

    2013-04-01

    This paper presents a geographic information systems (GIS) model relating biogenic volatile organic compound (BVOC) isoprene emissions to ecosystem type, as well as to environmental drivers such as light intensity, temperature, landscape factor, and foliar density. Data and techniques have recently become available that permit new, improved estimates of isoprene emissions over Hong Kong. The techniques are based on Guenther et al.'s (1993, 1999) model. Spatially detailed maps of isoprene emissions over Hong Kong were produced at a resolution of 100 m, and a database was constructed for retrieval of the isoprene maps from February 2007 to January 2008. This approach assigns emission rates directly to ecosystem types rather than to individual species since, unlike in temperate regions where one or two species may dominate over large areas, Hong Kong's vegetation is extremely diverse, with up to 300 different species in 1 ha. Field measurements of emissions by canister sampling obtained a range of ambient emissions under different climatic conditions for Hong Kong's main ecosystem types in both urban and rural areas, and these were used for model validation. Results show the model-derived isoprene flux to have high to moderate correlations with field observations (r2 = 0.77, r2 = 0.63, and r2 = 0.37 for all 24 field measurements, the summer subset, and the winter data, respectively), which indicates the robustness of the approach when applied to tropical forests at a detailed level, as well as the promising role of remote sensing in isoprene mapping. The GIS model and raster database provide a simple and low-cost estimate of BVOC isoprene in Hong Kong at a detailed level. City planners and environmental authorities may use the derived models for estimating isoprene transport and its interaction with anthropogenic pollutants in urban areas.
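The core of the cited Guenther et al. (1993) algorithm is a standard emission rate scaled by a light factor and a temperature factor. A sketch using the published constants follows; the function names and the example standard rate are illustrative, not taken from the paper:

```python
import math

# Guenther et al. (1993) light and temperature corrections for isoprene.
# Constants are the published values; emission at standard conditions
# (leaf temperature 303 K, PAR 1000 umol m-2 s-1) gives factors near 1.
R = 8.314                         # J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066
C_T1, C_T2 = 95000.0, 230000.0    # J mol-1
T_S, T_M = 303.0, 314.0           # K

def light_factor(par):
    """C_L: saturating response to photosynthetically active radiation."""
    return ALPHA * C_L1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

def temperature_factor(t):
    """C_T: peaked response to leaf temperature t (K), maximal near T_M."""
    num = math.exp(C_T1 * (t - T_S) / (R * T_S * t))
    den = 1.0 + math.exp(C_T2 * (t - T_M) / (R * T_S * t))
    return num / den

def isoprene_emission(e_std, par, t):
    # e_std: ecosystem-specific standard emission rate (e.g. ug C m-2 h-1),
    # which is what the paper assigns per ecosystem type rather than per species.
    return e_std * light_factor(par) * temperature_factor(t)
```

Mapping emissions then reduces to evaluating this per 100 m cell with the cell's ecosystem-specific standard rate, remotely sensed foliar density, and meteorological drivers.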

  3. Generating a mortality model from a pediatric ICU (PICU) database utilizing knowledge discovery.

    PubMed Central

    Kennedy, Curtis E.; Aoki, Noriaki

    2002-01-01

    Current models for predicting outcomes are limited by biases inherent in a priori hypothesis generation. Knowledge discovery algorithms generate models directly from databases, minimizing such limitations. Our objective was to generate a mortality model from a PICU database utilizing knowledge discovery techniques. The database contained 5067 records with 192 clinically relevant variables. It was randomly split into training (75%) and validation (25%) groups. We used decision tree induction to generate a mortality model from the training data, and validated its performance on the validation data. The original PRISM algorithm was used for comparison. The decision tree model contained 25 variables and predicted 53/88 deaths; 29 correctly (Sens:33%, Spec:98%, PPV:54%). PRISM predicted 27/88 deaths correctly (Sens:30%, Spec:98%, PPV:51%). Performance difference between models was not significant. We conclude that knowledge discovery algorithms can generate a mortality model from a PICU database, helping establish validity of such tools in the clinical medical domain. PMID:12463850
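The quoted performance figures follow directly from the reported counts. A quick check, assuming the validation split is roughly 25% of the 5067 records (the exact split size is our assumption, not stated in the abstract):

```python
# The abstract reports 53/88 deaths predicted, 29 correctly, on a
# validation set of ~25% of 5067 records. Derive Sens/Spec/PPV from
# the implied confusion matrix.
n_validation = round(0.25 * 5067)     # ~1267 records (assumed split size)
deaths, predicted, correct = 88, 53, 29

tp = correct                          # deaths predicted and observed
fp = predicted - correct              # predicted deaths who survived
fn = deaths - correct                 # deaths the model missed
tn = n_validation - deaths - fp       # survivors correctly predicted

sensitivity = tp / (tp + fn)          # recall on deaths
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value
```

The arithmetic reproduces the reported Sens 33%, Spec 98%, and PPV ~54%, which also shows why PPV stays modest despite high specificity: deaths are rare, so even a few false positives dilute the positive predictions.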

  4. Beyond emotion archetypes: databases for emotion modelling using neural networks.

    PubMed

    Cowie, Roddy; Douglas-Cowie, Ellen; Cox, Cate

    2005-05-01

    There has been rapid development in conceptions of the kind of database that is needed for emotion research. Familiar archetypes are still influential, but the state of the art has moved beyond them. There is concern to capture emotion as it occurs in action and interaction ('pervasive emotion') as well as in short episodes dominated by emotion, and therefore in a range of contexts, which shape the way it is expressed. Context links to modality: different contexts favour different modalities. The strategy of using acted data is not suited to those aims, and has been supplemented by work on both fully natural emotion and emotion induced by various techniques that allow more controlled records. Applications for that kind of work go far beyond the 'trouble shooting' that has been the focus for application: 'really natural language processing' is a key goal. The descriptions included in such a database ideally cover quality, emotional content, emotion-related signals and signs, and context. Several schemes are emerging as candidates for describing pervasive emotion. The major contemporary databases are listed, emphasising those which are naturalistic or induced, multimodal, and influential.

  5. DSSTOX WEBSITE LAUNCH: IMPROVING PUBLIC ACCESS TO DATABASES FOR BUILDING STRUCTURE-TOXICITY PREDICTION MODELS

    EPA Science Inventory

    DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
    Ann M. Richard
    US Environmental Protection Agency, Research Triangle Park, NC, USA

    Distributed: Decentralized set of standardized, field-delimited databases,...

  6. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, use of automatic information extraction tools, web technology, and databases. To appear in the Journal of Database Management.

  7. Teaching Database Modeling and Design: Areas of Confusion and Helpful Hints

    ERIC Educational Resources Information Center

    Philip, George C.

    2007-01-01

    This paper identifies several areas of database modeling and design that have been problematic for students and are even likely to confuse faculty. Major contributing factors are the lack of clarity and inaccuracies that persist in the presentation of some basic database concepts in textbooks. The paper analyzes the problems and discusses ways to…

  8. A Database for Propagation Models and Conversion to C++ Programming Language

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Angkasa, Krisjani; Rucker, James

    1996-01-01

    In the past few years, a computer program was produced to contain propagation models and the necessary prediction methods for most propagation phenomena. The propagation model database described here creates a user-friendly environment that makes using the database easy for experienced users and novices alike. The database is designed to pass data through the desired models easily and generate relevant results quickly. The database already contains many of the propagation phenomena models accepted by the propagation community, and new models are added every year. The major sources of the included models are the NASA Propagation Effects Handbook, the International Radio Consultative Committee (CCIR), and publications of the Institute of Electrical and Electronics Engineers (IEEE).

  9. Developing a comprehensive database management system for organization and evaluation of mammography datasets.

    PubMed

    Wu, Yirong; Rubin, Daniel L; Woods, Ryan W; Elezaby, Mai; Burnside, Elizabeth S

    2014-01-01

    We aimed to design and develop a comprehensive mammography database system (CMDB) to collect clinical datasets for outcome assessment and development of decision support tools. A Health Insurance Portability and Accountability Act (HIPAA) compliant CMDB was created to store multi-relational datasets of demographic risk factors and mammogram results using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. The CMDB collected both biopsy pathology outcomes, in a breast pathology lexicon compiled by extending BI-RADS, and our institutional breast cancer registry. The audit results derived from the CMDB were in accordance with Mammography Quality Standards Act (MQSA) audits and national benchmarks. The CMDB has managed the challenges of multi-level organization demanded by the complexity of mammography practice and lexicon development in pathology. We foresee that the CMDB will be useful for efficient quality assurance audits and development of decision support tools to improve breast cancer diagnosis. Our procedure of developing the CMDB provides a framework to build a detailed data repository for breast imaging quality control and research, which has the potential to augment existing resources.

  10. Developing a Comprehensive Database Management System for Organization and Evaluation of Mammography Datasets

    PubMed Central

    Wu, Yirong; Rubin, Daniel L; Woods, Ryan W; Elezaby, Mai; Burnside, Elizabeth S

    2014-01-01

    We aimed to design and develop a comprehensive mammography database system (CMDB) to collect clinical datasets for outcome assessment and development of decision support tools. A Health Insurance Portability and Accountability Act (HIPAA) compliant CMDB was created to store multi-relational datasets of demographic risk factors and mammogram results using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. The CMDB collected both biopsy pathology outcomes, in a breast pathology lexicon compiled by extending BI-RADS, and our institutional breast cancer registry. The audit results derived from the CMDB were in accordance with Mammography Quality Standards Act (MQSA) audits and national benchmarks. The CMDB has managed the challenges of multi-level organization demanded by the complexity of mammography practice and lexicon development in pathology. We foresee that the CMDB will be useful for efficient quality assurance audits and development of decision support tools to improve breast cancer diagnosis. Our procedure of developing the CMDB provides a framework to build a detailed data repository for breast imaging quality control and research, which has the potential to augment existing resources. PMID:25368510

  11. Knowledge discovery in clinical databases based on variable precision rough set model.

    PubMed Central

    Tsumoto, S.; Ziarko, W.; Shan, N.; Tanaka, H.

    1995-01-01

    Since a large amount of clinical data are being stored electronically, discovery of knowledge from such clinical databases is one of the important growing research areas in medical informatics. For this purpose, we develop KDD-R (a system for Knowledge Discovery in Databases using Rough sets), an experimental system for knowledge discovery and machine learning research using the variable precision rough set (VPRS) model, which is an extension of the original rough set model. This system works in the following steps. First, it preprocesses databases and translates continuous data into discretized data. Second, KDD-R checks dependencies between attributes and reduces spurious data. Third, the system computes rules from the reduced databases. Fourth, it evaluates decision making. For evaluation, this system is applied to a clinical database of meningoencephalitis; the computational results show that several new findings are obtained. PMID:8563283
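The key relaxation the VPRS model introduces can be shown in a few lines: an equivalence class (records identical on the condition attributes) joins the β-positive region of a decision class when at least a fraction β of its records carry that decision, instead of the classical requirement of 100%. The toy clinical table below is invented for illustration:

```python
from collections import defaultdict

# Toy decision table: (condition attributes, decision). The third record
# is inconsistent with the first two, as real clinical data often are.
records = [
    (("fever", "high_wbc"), "bacterial"),
    (("fever", "high_wbc"), "bacterial"),
    (("fever", "high_wbc"), "viral"),
    (("no_fever", "normal_wbc"), "viral"),
    (("no_fever", "normal_wbc"), "viral"),
]

def beta_positive_region(records, decision, beta):
    """Equivalence classes whose majority proportion for `decision` is >= beta."""
    classes = defaultdict(list)
    for cond, dec in records:
        classes[cond].append(dec)
    return [cond for cond, decs in classes.items()
            if decs.count(decision) / len(decs) >= beta]

# Classical rough sets (beta = 1.0) reject the noisy class; VPRS keeps it.
strict = beta_positive_region(records, "bacterial", 1.0)
relaxed = beta_positive_region(records, "bacterial", 0.6)
```

Rules are then induced only from classes inside the β-positive region, which is how the model tolerates the noise and inconsistency typical of clinical databases.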

  12. Organ Impairment—Drug–Drug Interaction Database: A Tool for Evaluating the Impact of Renal or Hepatic Impairment and Pharmacologic Inhibition on the Systemic Exposure of Drugs

    PubMed Central

    Yeung, CK; Yoshida, K; Kusama, M; Zhang, H; Ragueneau-Majlessi, I; Argon, S; Li, L; Chang, P; Le, CD; Zhao, P; Zhang, L; Sugiyama, Y; Huang, S-M

    2015-01-01

    The organ impairment and drug–drug interaction (OI-DDI) database is the first rigorously assembled database of pharmacokinetic drug exposure data from publicly available renal and hepatic impairment studies presented together with the maximum change in drug exposure from drug interaction inhibition studies. The database was used to conduct a systematic comparison of the effect of renal/hepatic impairment and pharmacologic inhibition on drug exposure. Additional applications are feasible with the public availability of this database. PMID:26380158

  13. Solid Waste Projection Model: Database User's Guide. Version 1.4

    SciTech Connect

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established.

  14. Relational-database model for improving quality assurance and process control in a composite manufacturing environment

    NASA Astrophysics Data System (ADS)

    Gentry, Jeffery D.

    2000-05-01

    A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot number, part type, or individual serial number. Relationships between the manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered, including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining, and web-based client/server architectures are discussed in the context of composite material manufacturing.
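The table organization the abstract describes can be sketched with an in-memory SQLite database: process variables and QA measurements live in separate tables related by lot number, so quality can be joined against process conditions. All table and column names here are illustrative, not taken from the paper:

```python
import sqlite3

# Minimal schema: lots, process variables, and QA measurements,
# related by lot_id so results can be correlated via joins.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE lots (lot_id TEXT PRIMARY KEY, part_type TEXT);
    CREATE TABLE process (lot_id TEXT REFERENCES lots, cure_temp_c REAL);
    CREATE TABLE qa (lot_id TEXT REFERENCES lots, void_content_pct REAL);
""")
con.executemany("INSERT INTO lots VALUES (?, ?)",
                [("L001", "spar"), ("L002", "spar")])
con.executemany("INSERT INTO process VALUES (?, ?)",
                [("L001", 177.0), ("L002", 163.0)])
con.executemany("INSERT INTO qa VALUES (?, ?)",
                [("L001", 0.8), ("L002", 2.4)])

# Correlate a process variable with a QA measurement across lots.
rows = con.execute("""
    SELECT p.cure_temp_c, q.void_content_pct
    FROM process p JOIN qa q USING (lot_id)
    ORDER BY p.cure_temp_c
""").fetchall()
```

Because each measurement type has its own table, new process variables or QA metrics can be added without restructuring existing data, which is what makes the warehousing and mining extensions mentioned at the end practical.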

  15. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis

    PubMed Central

    Henley, B.C.; Shin, D.C.; Zhang, R.; Marmarelis, V.Z.

    2015-01-01

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results. PMID:26900535

  16. SynechoNET: integrated protein-protein interaction database of a model cyanobacterium Synechocystis sp. PCC 6803

    PubMed Central

    Kim, Woo-Yeon; Kang, Sungsoo; Kim, Byoung-Chul; Oh, Jeehyun; Cho, Seongwoong; Bhak, Jong; Choi, Jong-Soon

    2008-01-01

    Background Cyanobacteria are model organisms for studying photosynthesis, carbon and nitrogen assimilation, evolution of plant plastids, and adaptability to environmental stresses. Despite many studies on cyanobacteria, there is no web-based database of their regulatory and signaling protein-protein interaction networks to date. Description We report a database and website SynechoNET that provides predicted protein-protein interactions. SynechoNET shows cyanobacterial domain-domain interactions as well as their protein-level interactions using the model cyanobacterium, Synechocystis sp. PCC 6803. It predicts the protein-protein interactions using public interaction databases that contain mutually complementary and redundant data. Furthermore, SynechoNET provides information on transmembrane topology, signal peptide, and domain structure in order to support the analysis of regulatory membrane proteins. Such biological information can be queried and visualized in user-friendly web interfaces that include the interactive network viewer and search pages by keyword and functional category. Conclusion SynechoNET is an integrated protein-protein interaction database designed to analyze regulatory membrane proteins in cyanobacteria. It provides a platform for biologists to extend the genomic data of cyanobacteria by predicting interaction partners, membrane association, and membrane topology of Synechocystis proteins. SynechoNET is freely available at or directly at . PMID:18315852

  17. Approach for ontological modeling of database schema for the generation of semantic knowledge on the web

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna

    2015-11-01

    Currently, a large quantity of the content on web pages is generated from relational databases. Conceptual domain models provide for the integration of heterogeneous content at the semantic level. Using an ontology as the conceptual model of a relational data source makes it available to web agents and services and allows ontological techniques to be employed for data access, navigation and reasoning. Achieving interoperability between relational databases and ontologies enriches the web with semantic knowledge. Establishing a semantic database conceptual model based on an ontology facilitates the development of data integration systems that use the ontology as a unified global view. An approach for generating an ontologically based conceptual model is presented. The ontology representing the database schema is obtained by matching schema elements to ontology concepts, and an algorithm for the matching process is designed. An infrastructure for mediation between database and ontology, bridging legacy data with formal semantic meaning, is presented. The knowledge modeling approach is demonstrated on a sample database.
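
    The schema-element-to-concept matching step can be sketched as a simple name-normalization match. The sample schema elements and ontology concepts below are invented; a production matcher would add lexical similarity measures and structural heuristics on top of this.

```python
# Sketch of matching database schema elements to ontology concepts by
# normalized name; names here are invented examples.
def match_schema_to_ontology(schema, concepts):
    """Map each table/column name to an ontology concept, if any."""
    normalized = {c.lower().replace("_", ""): c for c in concepts}
    mapping = {}
    for element in schema:
        key = element.lower().replace("_", "")
        if key in normalized:
            mapping[element] = normalized[key]
    return mapping

schema_elements = ["Customer", "order_item", "Address"]
ontology_concepts = ["Customer", "OrderItem", "Invoice"]
mapping = match_schema_to_ontology(schema_elements, ontology_concepts)
# Unmatched elements ("Address") would need manual curation or
# similarity-based matching in a fuller implementation.
```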

  18. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    PubMed

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).
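
    A single entry of the kind described, temporally and spatially explicit and attributed to a contributor, might look like the following sketch. The field names and values are illustrative assumptions, not the actual kelpforest.ucsc.edu schema.

```python
# Illustrative species-interaction record with the attribution fields
# used for quality control (field names and values are invented).
interaction = {
    "consumer": "Enhydra lutris",                  # sea otter
    "resource": "Strongylocentrotus purpuratus",   # purple sea urchin
    "interaction_type": "trophic",
    "site": "Monterey Bay", "lat": 36.6, "lon": -121.9,
    "observed": "2010-07-15",                      # temporally explicit
    "contributor_id": "user-042",                  # unique contributor identity
    "citation": "hypothetical-source-ref",         # source citation (placeholder)
}

def validate(entry, required=("consumer", "resource", "interaction_type",
                              "contributor_id", "citation")):
    """Reject entries lacking the attribution fields used for QC."""
    return all(entry.get(f) for f in required)
```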

  19. An online database for informing ecological network models: http://kelpforest.ucsc.edu

    USGS Publications Warehouse

    Beas-Luna, Rodrigo; Tinker, M. Tim; Novak, Mark; Carr, Mark H.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison C.

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  20. Combining computational models, semantic annotations and simulation experiments in a graph database.

    PubMed

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is based on a graph database, reflects the models' structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/
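
    The core idea, linking model parts to ontology terms so that models can be found via biological facts, can be sketched without an actual graph DBMS. Below, plain (source, relation, target) triples stand in for property-graph edges; the model-part names are invented, and the ChEBI-style identifier is used only as an example annotation target.

```python
# Sketch of annotation edges connecting model parts (from models in
# different formats) to a shared ontology term, then querying by term.
edges = []  # (source, relation, target) triples, as in a property graph

def annotate(model_part, ontology_term):
    edges.append((model_part, "is_annotated_with", ontology_term))

# Two models in different encodings share an annotation target:
annotate("sbml_model_1:species_glucose", "CHEBI:17234")  # ChEBI-style term id
annotate("cellml_model_2:variable_glc", "CHEBI:17234")

def models_annotated_with(term):
    """Query: which model parts link to a given ontology term?"""
    return sorted(s for s, rel, t in edges
                  if rel == "is_annotated_with" and t == term)

hits = models_annotated_with("CHEBI:17234")
```

    In a real graph database this query would be a single edge traversal, which is what makes fact-based search across heterogeneous model formats efficient.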

  1. CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database

    PubMed Central

    Jia, Baofeng; Raphenya, Amogelang R.; Alcock, Brian; Waglechner, Nicholas; Guo, Peiyao; Tsang, Kara K.; Lago, Briony A.; Dave, Biren M.; Pereira, Sheldon; Sharma, Arjun N.; Doshi, Sachin; Courtot, Mélanie; Lo, Raymond; Williams, Laura E.; Frye, Jonathan G.; Elsayegh, Tariq; Sardar, Daim; Westman, Erin L.; Pawlowski, Andrew C.; Johnson, Timothy A.; Brinkman, Fiona S.L.; Wright, Gerard D.; McArthur, Andrew G.

    2017-01-01

    The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins and mutations involved in AMR. CARD is ontologically structured, model centric, and spans the breadth of AMR drug classes and resistance mechanisms, including intrinsic, mutation-driven and acquired resistance. It is built upon the Antibiotic Resistance Ontology (ARO), a custom built, interconnected and hierarchical controlled vocabulary allowing advanced data sharing and organization. Its design allows the development of novel genome analysis tools, such as the Resistance Gene Identifier (RGI) for resistome prediction from raw genome sequence. Recent improvements include extensive curation of additional reference sequences and mutations, development of a unique Model Ontology and accompanying AMR detection models to power sequence analysis, new visualization tools, and expansion of the RGI for detection of emergent AMR threats. CARD curation is updated monthly based on an interplay of manual literature curation, computational text mining, and genome analysis. PMID:27789705

  2. CARD 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database.

    PubMed

    Jia, Baofeng; Raphenya, Amogelang R; Alcock, Brian; Waglechner, Nicholas; Guo, Peiyao; Tsang, Kara K; Lago, Briony A; Dave, Biren M; Pereira, Sheldon; Sharma, Arjun N; Doshi, Sachin; Courtot, Mélanie; Lo, Raymond; Williams, Laura E; Frye, Jonathan G; Elsayegh, Tariq; Sardar, Daim; Westman, Erin L; Pawlowski, Andrew C; Johnson, Timothy A; Brinkman, Fiona S L; Wright, Gerard D; McArthur, Andrew G

    2017-01-04

    The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins and mutations involved in AMR. CARD is ontologically structured, model centric, and spans the breadth of AMR drug classes and resistance mechanisms, including intrinsic, mutation-driven and acquired resistance. It is built upon the Antibiotic Resistance Ontology (ARO), a custom built, interconnected and hierarchical controlled vocabulary allowing advanced data sharing and organization. Its design allows the development of novel genome analysis tools, such as the Resistance Gene Identifier (RGI) for resistome prediction from raw genome sequence. Recent improvements include extensive curation of additional reference sequences and mutations, development of a unique Model Ontology and accompanying AMR detection models to power sequence analysis, new visualization tools, and expansion of the RGI for detection of emergent AMR threats. CARD curation is updated monthly based on an interplay of manual literature curation, computational text mining, and genome analysis.

  3. Publication Trends in Model Organism Research

    PubMed Central

    Dietrich, Michael R.; Ankeny, Rachel A.; Chen, Patrick M.

    2014-01-01

    In 1990, the National Institutes of Health (NIH) gave some organisms special status as designated model organisms. This article documents publication trends for these NIH-designated model organisms over the past 40 years. We find that being designated a model organism by the NIH does not guarantee an increasing publication trend. An analysis of model and nonmodel organisms included in GENETICS since 1960 does reveal a sharp decline in the number of publications using nonmodel organisms yet no decline in the overall species diversity. We suggest that organisms with successful publication records tend to share critical characteristics, such as being well developed as standardized, experimental systems and being used by well-organized communities with good networks of exchange and methods of communication. PMID:25381363

  4. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.
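
    The "one method and material" compilation strategy the authors found best can be sketched as a filter over aggregated records. The records, field names, and activity values below are invented for illustration.

```python
# Sketch: restrict the modeling set to one assay methodology per target
# instead of aggregating by target alone (data are invented examples).
records = [
    {"compound": "cpd1", "target": "HIV-1 RT", "assay": "enzymatic",  "pIC50": 7.1},
    {"compound": "cpd1", "target": "HIV-1 RT", "assay": "cell-based", "pIC50": 5.4},
    {"compound": "cpd2", "target": "HIV-1 RT", "assay": "enzymatic",  "pIC50": 6.8},
]

def modeling_set(records, target, assay):
    """Compile a training set restricted to a single assay type."""
    return [r for r in records if r["target"] == target and r["assay"] == assay]

train = modeling_set(records, "HIV-1 RT", "enzymatic")
# Note cpd1's widely diverging pIC50 across assays (7.1 vs 5.4): exactly
# the inconsistency that aggregating by target alone would introduce.
```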

  5. Database/Template Protocol to Automate Development of Complex Environmental Input Models

    SciTech Connect

    COLLARD, LEONARD

    2004-11-10

    At the U.S. Department of Energy Savannah River Site, complex environmental models were required to analyze the performance of a suite of radionuclides, including decay chains consisting of multiple radionuclides. To facilitate preparation of the model for each radionuclide, a sophisticated protocol was established to link a database containing material information with a template. The protocol consists of data and special commands in the template, control information in the database, and key selection information in the database. A preprocessor program reads a template, incorporates the appropriate information from the database, and generates the final model. In effect, the database/template protocol forms a command language. That command language typically allows the user to perform multiple independent analyses merely by setting environmental variables to identify the nuclides to be analyzed and having the template reference those environmental variables. The environmental variables can be set by a batch or script that serves as a shell to analyze each radionuclide in a separate subdirectory (if desired) and to conduct any preprocessing and postprocessing functions. The user has complete control over generating the database and over how it interacts with the template. This protocol was valuable for analyzing multiple radionuclides for a single disposal unit. It can easily be applied to other disposal units, to uncertainty studies, and to sensitivity studies. The protocol can be applied to any type of model input for any computer program. A primary advantage of this protocol is that it does not require any programming or compiling while providing robust applicability.
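
    The protocol can be sketched in a few lines: an environment variable selects the radionuclide, and a preprocessor substitutes values from the material database into a template. The template syntax, the variable name NUCLIDE, and the Kd values below are assumptions for illustration (the half-lives are standard values).

```python
# Sketch of the database/template protocol: environment variable ->
# database lookup -> template substitution -> final model input.
import os
from string import Template

material_db = {  # stand-in for the material-information database
    "Tc-99": {"half_life_yr": "2.11e5", "kd_ml_g": "0.1"},
    "I-129": {"half_life_yr": "1.57e7", "kd_ml_g": "0.2"},
}

template = Template("nuclide=$name half_life=$half_life_yr Kd=$kd_ml_g")

def generate_input(env=os.environ):
    """Select the nuclide via an environment variable, as the shell
    script driving each per-nuclide subdirectory would."""
    name = env["NUCLIDE"]
    return template.substitute(name=name, **material_db[name])

model_input = generate_input({"NUCLIDE": "Tc-99"})
```

    A driving script would loop over nuclides, setting the variable and invoking the preprocessor once per subdirectory, which is how one template yields many independent analyses.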

  6. Examination of the U.S. EPA's vapor intrusion database based on models.

    PubMed

    Yao, Yijun; Shen, Rui; Pennell, Kelly G; Suuberg, Eric M

    2013-02-05

    In the United States Environmental Protection Agency (U.S. EPA)'s vapor intrusion (VI) database, there appears to be a trend showing an inverse relationship between the indoor air concentration attenuation factor and the subsurface source vapor concentration. This is inconsistent with the physical understanding in current vapor intrusion models. This article explores possible reasons for this apparent discrepancy. Soil vapor transport processes occur independently of the actual building entry process and are consistent with the trends in the database results. A recent EPA technical report provided a list of factors affecting vapor intrusion, and the influence of some of these are explored in the context of the database results.
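
    For readers unfamiliar with the term, the attenuation factor is the ratio of indoor air concentration to subsurface source vapor concentration. The toy numbers below are invented and merely illustrate the inverse trend the database appears to show.

```python
# Sketch of the attenuation-factor definition with invented paired
# observations (units: ug/m^3) showing the apparent inverse trend.
def attenuation_factor(c_indoor, c_source):
    return c_indoor / c_source

observations = [(2.0, 1e3), (3.0, 1e5)]  # (indoor, source) pairs
factors = [attenuation_factor(ci, cs) for ci, cs in observations]
# The factor decreases as source concentration increases: the apparent
# inverse relationship the paper examines.
```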

  7. Guide on Data Models in the Selection and Use of Database Management Systems. Final Report.

    ERIC Educational Resources Information Center

    Gallagher, Leonard J.; Draper, Jesse M.

    A tutorial introduction to data models in general is provided, with particular emphasis on the relational and network models defined by the two proposed ANSI (American National Standards Institute) database language standards. Examples based on the network and relational models include specific syntax and semantics, while examples from the other…

  8. Combining computational models, semantic annotations and simulation experiments in a graph database

    PubMed Central

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It grounds on a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves the access of computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863

  9. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design: difficult and complex, yet involving redundant effort. Automatic generation of database software systems has been proposed as a solution to these problems. In order to generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation. The technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications. The mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  10. Analysis of the Properties of Working Substances for the Organic Rankine Cycle based Database "REFPROP"

    NASA Astrophysics Data System (ADS)

    Galashov, Nikolay; Tsibulskiy, Svyatoslav; Serova, Tatiana

    2016-02-01

    The objects of the study are substances used as working fluids in systems operating on the basis of an organic Rankine cycle. The purpose of the research is to find substances with the best thermodynamic, thermal and environmental properties. The research is based on an analysis of the thermodynamic and thermal properties of substances from the "REFPROP" database and on numerical simulation of a triple-cycle combined-cycle plant in which the bottoming cycle is an organic Rankine cycle. The "REFPROP" database describes and allows calculation of the thermodynamic and thermophysical parameters of most of the main substances used in production processes. On the basis of scientific publications on working fluids for the organic Rankine cycle, the following ozone-friendly low-boiling substances were selected for analysis: ammonia, butane, pentane and the refrigerants R134a, R152a, R236fa and R245fa. For these substances, the molecular weight, triple-point temperature, boiling point at atmospheric pressure, critical-point parameters, the derivative of temperature with respect to entropy along the saturated vapor line, and the ozone depletion and global warming potentials were identified and tabulated. The thermodynamic and thermophysical parameters of the vapor and liquid phases in the saturated state at 15 °C were also identified and tabulated. This temperature is adopted as the minimum heat-rejection temperature in a Rankine cycle operating on water. The studies showed that pentane, butane and R245fa have the best thermodynamic, thermal and environmental properties among the substances considered. For a more thorough analysis, a mathematical model of a triple-cycle combined-cycle gas turbine (CCGT) plant based on the NK-36ST gas turbine was developed, in which the bottoming cycle is an organic Rankine cycle and an air-cooled condenser is used. The air-cooled condenser allows operation at temperatures below 0 °C.
Calculation of the

  11. Modeling and Measuring Organization Capital

    ERIC Educational Resources Information Center

    Atkeson, Andrew; Kehoe, Patrick J.

    2005-01-01

    Manufacturing plants have a clear life cycle: they are born small, grow substantially with age, and eventually die. Economists have long thought that this life cycle is driven by organization capital, the accumulation of plant-specific knowledge. The location of plants in the life cycle determines the size of the payments, or organization rents,…

  12. Tree-Structured Digital Organisms Model

    NASA Astrophysics Data System (ADS)

    Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo

    Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model, however, is not the only way to describe a digital organism, though it is very simple for a computer-based model. We therefore propose a new digital organism model based on a tree structure, similar to that of genetic programming. In our model, a life process is a combination of various functions, much as life in the real world is. This means that our model can easily describe the hierarchical structure of life, and that it can simulate evolutionary computation through the mutual interaction of functions. We verified by simulation that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.
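
    A minimal sketch of such a tree-structured organism: the genome is a tree whose internal nodes are functions and whose leaves are inputs, evaluated recursively, with mutation acting on subtrees. The toy function set below is an assumption for illustration, not the authors' instruction set.

```python
# Sketch of a tree-structured genome evaluated recursively, in the
# style of genetic programming (function set is an invented toy).
import random

FUNCTIONS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def evaluate(node, env):
    """A life process as a combination of functions over a tree."""
    if isinstance(node, str):          # leaf: named input
        return env[node]
    fn, left, right = node             # internal node: (function, subtrees)
    return FUNCTIONS[fn](evaluate(left, env), evaluate(right, env))

def mutate(node, rng):
    """Point mutation: replace the function at each internal node
    with a randomly chosen one."""
    if isinstance(node, str):
        return node
    _, left, right = node
    return (rng.choice(sorted(FUNCTIONS)), mutate(left, rng), mutate(right, rng))

genome = ("add", ("mul", "x", "y"), "x")    # encodes x*y + x
value = evaluate(genome, {"x": 3, "y": 4})  # 3*4 + 3
```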

  13. Challenges of Country Modeling with Databases, Newsfeeds, and Expert Surveys

    DTIC Science & Technology

    2008-01-01

    they experience emotion. That is, we are interested in discovering what socio-cognitive agents can offer to the study of specific countries or social...Economy Model (Harrod-Domar model [Harrod, 1960]) Black Market Undeclared Market [Lewis, 1954; Schneider and Enste, 2000] Formal Capital Economy... markets, jobs, banking), educational system, the health system, the judicial system, the police and security forces, the utilities/infrastructure (e.g

  14. FOAM (Functional Ontology Assignments for Metagenomes): A Hidden Markov Model (HMM) database with environmental focus

    DOE PAGES

    Prestat, Emmanuel; David, Maude M.; Hultman, Jenni; ...

    2014-09-26

    A new functional gene database, FOAM (Functional Ontology Assignments for Metagenomes), was developed to screen environmental metagenomic sequence datasets. FOAM provides a new functional ontology dedicated to classify gene functions relevant to environmental microorganisms based on Hidden Markov Models (HMMs). Sets of aligned protein sequences (i.e. ‘profiles’) were tailored to a large group of target KEGG Orthologs (KOs) from which HMMs were trained. The alignments were checked and curated to make them specific to the targeted KO. Within this process, sequence profiles were enriched with the most abundant sequences available to maximize the yield of accurate classifier models. An associated functional ontology was built to describe the functional groups and hierarchy. FOAM allows the user to select the target search space before HMM-based comparison steps and to easily organize the results into different functional categories and subcategories. FOAM is publicly available at http://portal.nersc.gov/project/m1317/FOAM/.

  15. Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases

    PubMed Central

    Dinu, Valentin; Nadkarni, Prakash

    2007-01-01

    Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
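
    The EAV pattern itself is easy to illustrate: sparse facts are stored as (entity, attribute, value) rows rather than in wide tables with mostly NULL columns. The attribute names below are invented clinical-style examples, not a schema from the paper.

```python
# Sketch of an EAV table in SQLite: each fact is one row, so an entity
# carries only the attributes that actually apply to it.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE eav (entity_id INTEGER, attribute TEXT, value TEXT)")

# Patient 1 has only two of the many thousands of possible attributes:
rows = [(1, "serum_glucose_mg_dl", "95"),
        (1, "allergy", "penicillin"),
        (2, "serum_glucose_mg_dl", "110")]
cur.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

# Reassembling one entity's sparse record:
record = dict(cur.execute(
    "SELECT attribute, value FROM eav WHERE entity_id = 1").fetchall())
```

    The trade the abstract describes is visible even here: the data table is trivial, but a production system needs a metadata sub-schema to say which attributes exist, their types, and their validation rules.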

  16. Modeling and database for melt-water interfacial heat transfer

    SciTech Connect

    Farmer, M.T.; Spencer, B.W.; Schneider, J.P.; Bonomo, B.; Theofanous, G.

    1992-04-01

    A mechanistic model is developed to predict the transition superficial gas velocity between bulk cooldown and crust-limited heat transfer regimes in a sparged molten pool with a coolant overlayer. The model has direct applications in the analysis of ex-vessel severe accidents, where molten corium interacts with concrete, thereby producing sparging concrete decomposition gases. The analysis approach embodies thermal, mechanical, and hydrodynamic aspects associated with incipient crust formation at the melt/coolant interface. The model is validated against experiment data obtained with water (melt) and liquid nitrogen (coolant) simulants. Predictions are then made for the critical gas velocity at which crust formation will occur for core material interacting with concrete in the presence of water.

  17. Modeling and database for melt-water interfacial heat transfer

    SciTech Connect

    Farmer, M.T.; Spencer, B.W.; Schneider, J.P.; Bonomo, B.; Theofanous, G.

    1992-01-01

    A mechanistic model is developed to predict the transition superficial gas velocity between bulk cooldown and crust-limited heat transfer regimes in a sparged molten pool with a coolant overlayer. The model has direct applications in the analysis of ex-vessel severe accidents, where molten corium interacts with concrete, thereby producing sparging concrete decomposition gases. The analysis approach embodies thermal, mechanical, and hydrodynamic aspects associated with incipient crust formation at the melt/coolant interface. The model is validated against experiment data obtained with water (melt) and liquid nitrogen (coolant) simulants. Predictions are then made for the critical gas velocity at which crust formation will occur for core material interacting with concrete in the presence of water.

  18. [Estimation of China soil organic carbon storage and density based on 1:1,000,000 soil database].

    PubMed

    Yu, Dongsheng; Shi, Xuezheng; Sun, Weixia; Wang, Hongjie; Liu, Qinghua; Zhao, Yongcun

    2005-12-01

    Based on a 1:1,000,000 soil database, and employing methods of spatial expression, this paper estimated the soil organic carbon storage (SOCS) and density (SOCD) of China. The database consists of a 1:1,000,000 digital soil map, a soil profile attribute database, and a soil reference system. The digital soil map contains 926 soil mapping units, 690 soil families, and 94,000 or more polygons, while the soil profile attribute database collects 7292 soil profiles, including 81 attribute fields. The SOCDs of the soil profiles were calculated and linked to the soil polygons in the digital soil map by the method of "GIS linkage based on soil type", resulting in a 1:1,000,000 vector map of China SOCD. The SOCS of the country, or of a given soil, could be estimated by summing the SOCS of all polygons, or of the polygons of that soil, and the corresponding SOCD is the SOCS divided by the area. The estimated SOCS and SOCD of the country were 89.14 Pg (1 Pg = 10(15) g) and 9.60 kg m(-2), respectively, covering all soils with a total area of 928.10 x 10(4) km2, which might be considered closest to the real value.
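
    The reported national density follows directly from the storage and area figures; a quick unit-conversion check reproduces 9.60 kg m(-2):

```python
# SOC density = SOC storage / soil area, with unit conversions.
socs_pg = 89.14                  # national SOC storage, Pg (1 Pg = 1e15 g)
area_km2 = 928.10e4              # total soil area, km^2

socs_kg = socs_pg * 1e15 / 1e3   # Pg -> g -> kg
area_m2 = area_km2 * 1e6         # km^2 -> m^2
socd = socs_kg / area_m2         # kg per m^2
```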

  19. Database Design Learning: A Project-Based Approach Organized through a Course Management System

    ERIC Educational Resources Information Center

    Dominguez, Cesar; Jaime, Arturo

    2010-01-01

    This paper describes an active method for database design learning through practical tasks development by student teams in a face-to-face course. This method integrates project-based learning, and project management techniques and tools. Some scaffolding is provided at the beginning that forms a skeleton that adapts to a great variety of…

  20. Database Needs for Modeling and Simulation of Plasma Processing.

    DTIC Science & Technology

    1996-01-01

    structure codes as well as semiempirical methods, should be encouraged. 2. A spectrum of plasma models should be developed, aimed at a variety of uses...One set of codes should be developed to provide a compact, relatively fast simulation that addresses plasma and surface kinetics and is useful for...process engineers. Convenient user interfaces would be important for this set of codes. A second set of codes would include more sophisticated algorithms

  1. Functional Decomposition of Modeling and Simulation Terrain Database Generation Process

    DTIC Science & Technology

    2008-09-19

    Department of the Army position unless so designated by other authorized documents...with ArcGIS by Environmental Systems Research Institute (ESRI) and TerraTools by TerraSim, respectively. ERDC/TEC SR-08-15...Common Data Model Framework (CDMF) contains a set of tools for creating and analyzing EDMs. CDMF is a government-off-the-shelf technology designed and

  2. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, collaboration across professions attracts more attention in the architecture, engineering, and construction (AEC) industry. To accommodate this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared as text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish mapping rules between the data types in the IFC specification and those in the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Finally, we store the IFC data in the corresponding tables of the IFC database. In our experiments, three different building models are used to demonstrate the effectiveness of the storage model. A comparison of the experimental statistics shows that IFC data are lossless during data exchange.
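The first step (mapping IFC data types to database column types) can be illustrated with a small sketch; the type names and mappings below are simplified assumptions, not the rules from the paper, and a plain DDL string stands in for actual Oracle execution:

```python
# Illustrative sketch of the mapping-rule step: IFC attribute types -> Oracle
# column types. These mappings are simplified assumptions for illustration.

IFC_TO_ORACLE = {
    "IfcLabel": "VARCHAR2(255)",
    "IfcInteger": "NUMBER(10)",
    "IfcReal": "BINARY_DOUBLE",
    "IfcBoolean": "CHAR(1)",
}

def ddl_for_entity(entity, attributes):
    """Generate a CREATE TABLE statement for one IFC entity (hypothetical schema)."""
    cols = ", ".join(f"{name} {IFC_TO_ORACLE[ifc_type]}" for name, ifc_type in attributes)
    return f"CREATE TABLE {entity} (id NUMBER PRIMARY KEY, {cols})"

ddl = ddl_for_entity("IfcWall", [("name", "IfcLabel"), ("height", "IfcReal")])
print(ddl)
```

In the actual model, one such table per entity (or per inheritance branch) would be created in Oracle, and the parsed IFC file populated into the corresponding tables.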

  3. Modeling Powered Aerodynamics for the Orion Launch Abort Vehicle Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Chan, David T.; Walker, Eric L.; Robinson, Philip E.; Wilson, Thomas M.

    2011-01-01

    Modeling the aerodynamics of the Orion Launch Abort Vehicle (LAV) has presented many technical challenges to the developers of the Orion aerodynamic database. During a launch abort event, the aerodynamic environment around the LAV is very complex as multiple solid rocket plumes interact with each other and the vehicle. It is further complicated by vehicle separation events such as between the LAV and the launch vehicle stack or between the launch abort tower and the crew module. The aerodynamic database for the LAV was developed mainly from wind tunnel tests involving powered jet simulations of the rocket exhaust plumes, supported by computational fluid dynamic simulations. However, limitations in both methods have made it difficult to properly capture the aerodynamics of the LAV in experimental and numerical simulations. These limitations have also influenced decisions regarding the modeling and structure of the aerodynamic database for the LAV and led to compromises and creative solutions. Two database modeling approaches are presented in this paper (incremental aerodynamics and total aerodynamics), with examples showing strengths and weaknesses of each approach. In addition, the unique problems presented to the database developers by the large data space required for modeling a launch abort event illustrate the complexities of working with multi-dimensional data.

  4. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  5. Extracting protein alignment models from the sequence database.

    PubMed Central

    Neuwald, A F; Liu, J S; Lipman, D J; Lawrence, C E

    1997-01-01

    Biologists often gain structural and functional insights into a protein sequence by constructing a multiple alignment model of the family. Here a program called Probe fully automates this process of model construction starting from a single sequence. Central to this program is a powerful new method to locate and align only those often subtly conserved patterns essential to the family as a whole. When applied to randomly chosen proteins, Probe found on average about four times as many relationships as a pairwise search and yielded many new discoveries. These include: an obscure subfamily of globins in the roundworm Caenorhabditis elegans; two new superfamilies of metallohydrolases; a lipoyl/biotin swinging arm domain in bacterial membrane fusion proteins; and a DH domain in the yeast Bud3 and Fus2 proteins. By identifying distant relationships and merging families into superfamilies in this way, this analysis further confirms the notion that proteins evolved from relatively few ancient sequences. Moreover, this method automatically generates models of these ancient conserved regions for rapid and sensitive screening of sequences. PMID:9108146

  6. Hydraulic fracture propagation modeling and data-based fracture identification

    NASA Astrophysics Data System (ADS)

    Zhou, Jing

    Successful shale gas and tight oil production is enabled by the engineering innovation of horizontal drilling and hydraulic fracturing. Hydraulically induced fractures will most likely deviate from the bi-wing planar pattern and generate complex fracture networks due to mechanical interactions and reservoir heterogeneity, both of which render the conventional fracture simulators insufficient to characterize the fractured reservoir. Moreover, in reservoirs with ultra-low permeability, the natural fractures are widely distributed, which will result in hydraulic fractures branching and merging at the interface and consequently lead to the creation of more complex fracture networks. Thus, developing a reliable hydraulic fracturing simulator, including both mechanical interaction and fluid flow, is critical in maximizing hydrocarbon recovery and optimizing fracture/well design and completion strategy in multistage horizontal wells. A novel fully coupled reservoir flow and geomechanics model based on the dual-lattice system is developed to simulate multiple nonplanar fractures' propagation in both homogeneous and heterogeneous reservoirs with or without pre-existing natural fractures. Initiation, growth, and coalescence of the microcracks will lead to the generation of macroscopic fractures, which is explicitly mimicked by failure and removal of bonds between particles from the discrete element network. This physics-based modeling approach leads to realistic fracture patterns without using the empirical rock failure and fracture propagation criteria required in conventional continuum methods. Based on this model, a sensitivity study is performed to investigate the effects of perforation spacing, in-situ stress anisotropy, rock properties (Young's modulus, Poisson's ratio, and compressive strength), fluid properties, and natural fracture properties on hydraulic fracture propagation. In addition, since reservoirs are buried thousands of feet below the surface, the

  7. Can simple population genetic models reconcile partial match frequencies observed in large forensic databases?

    PubMed

    Mueller, Laurence D

    2008-08-01

    A recent study of partial matches in the Arizona offender database of DNA profiles has revealed a large number of nine and ten locus matches. I use simple models that incorporate the product rule, population substructure, and relatedness to predict the expected number of matches in large databases. I find that there is a relatively narrow window of parameter values that can plausibly describe the Arizona results. Further research could help determine if the Arizona samples are congruent with some of the models presented here or whether fundamental assumptions for predicting these match frequencies require adjustment.
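Under the product rule with independent profiles, the expected number of matching pairs in a database of n profiles is C(n,2) times the per-pair match probability. A minimal sketch (the match probability used here is hypothetical, not an estimate for the Arizona database):

```python
# Expected number of pairwise partial matches in a database, assuming
# independence between profiles (hypothetical match probability).
import math

def expected_matching_pairs(n_profiles, p_match):
    """Expected number of matching pairs: C(n,2) * p under independence."""
    return math.comb(n_profiles, 2) * p_match

# Hypothetical: 65,000 profiles, per-pair nine-locus match probability 1e-7
print(expected_matching_pairs(65_000, 1e-7))  # roughly 211 expected pairs
```

The point of the pairwise framing is that the number of comparisons grows quadratically with database size, so even tiny per-pair probabilities can yield many matches in a large database.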

  8. A Relational Database Model and Data Migration Plan for the Student Services Department at the Marine Corps Institute

    DTIC Science & Technology

    1997-09-01

    response to MCI’s request. It investigates data modeling and database design using the Integration Definition for Information Modeling (IDEF1X) methodology...and the relational model. It also addresses the migration of data and databases from legacy to open systems. The application of the IDEF1X model

  9. Query Monitoring and Analysis for Database Privacy - A Security Automata Model Approach.

    PubMed

    Kumar, Anand; Ligatti, Jay; Tu, Yi-Cheng

    2015-11-01

    Privacy and usage restriction issues are important when valuable data are exchanged or acquired by different organizations. Standard access control mechanisms either restrict or completely grant access to valuable data. On the other hand, data obfuscation limits overall usability and may result in loss of total value. There are no standard policy enforcement mechanisms for data acquired through mutual and copyright agreements. In practice, many different types of policies can be enforced to protect data privacy. Hence there is a need for a unified framework that encapsulates multiple suites of policies to protect the data. We present our vision of an architecture named the security automata model (SAM) to enforce privacy-preserving policies and usage restrictions. SAM analyzes input queries and their outputs to enforce various policies, liberating data owners from the burden of monitoring data access. SAM allows administrators to specify various policies and enforces them to monitor queries and control data access. Our goal is to address the problems of data usage control and protection through privacy policies that can be defined, enforced, and integrated with existing access control mechanisms using SAM. In this paper, we lay out the theoretical foundation of SAM, which is based on an automaton formalism named Mandatory Result Automata. We also discuss the major challenges of implementing SAM in a real-world database environment, as well as ideas to meet such challenges.
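The enforcement idea can be illustrated with a toy security automaton that tracks state across a query stream and rejects violating queries; this is our own minimal illustration, not SAM's actual Mandatory Result Automata design:

```python
# Toy security-automaton-style query monitor (illustration only, not SAM's
# design): the automaton advances its state per allowed query and rejects any
# query that would violate the policy.

class QueryMonitor:
    """Toy policy: at most `limit` rows of a sensitive table may be released in total."""
    def __init__(self, limit):
        self.limit = limit
        self.released = 0   # the automaton's state

    def check(self, rows_requested):
        """Return True and advance state if the query is allowed, else False."""
        if self.released + rows_requested > self.limit:
            return False            # policy violation: suppress the result
        self.released += rows_requested
        return True

mon = QueryMonitor(limit=100)
print(mon.check(60))  # True
print(mon.check(30))  # True
print(mon.check(30))  # False: would exceed the total release limit
```

The key property mirrored here is that enforcement depends on query history, not on each query in isolation, which is what distinguishes an automaton-based monitor from static access control.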

  10. Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)

    USGS Publications Warehouse

    Tessler, Steven

    2003-01-01

    The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
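The core site/conveyance/transfer structure can be sketched in SQL; sqlite3 is used here only as a convenient stand-in (the actual design ships as a Microsoft Access database), and the table and column names are simplified assumptions:

```python
# Sketch of the core NJWaTr entities (site, conveyance, transfer) using sqlite3
# as a stand-in relational engine; schema names are simplified assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE site (site_id INTEGER PRIMARY KEY, name TEXT);
-- each conveyance links exactly two sites, giving unidirectional transfers
CREATE TABLE conveyance (
    conveyance_id INTEGER PRIMARY KEY,
    from_site INTEGER REFERENCES site(site_id),
    to_site   INTEGER REFERENCES site(site_id)
);
CREATE TABLE transfer (
    transfer_id INTEGER PRIMARY KEY,
    conveyance_id INTEGER REFERENCES conveyance(conveyance_id),
    volume_mgd REAL
);
""")
conn.execute("INSERT INTO site VALUES (1, 'Well A'), (2, 'Treatment Plant')")
conn.execute("INSERT INTO conveyance VALUES (10, 1, 2)")
conn.execute("INSERT INTO transfer VALUES (100, 10, 3.5)")
total = conn.execute("SELECT SUM(volume_mgd) FROM transfer").fetchone()[0]
print(total)  # 3.5
```

Because every transfer hangs off a conveyance between two sites, summations over transfers can be grouped by site or conveyance, which is the design property that makes the sites and conveyances form a queryable water network.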

  11. A New Global River Network Database for Macroscale Hydrologic modeling

    SciTech Connect

    Wu, Huan; Kimball, John S.; Li, Hongyi; Huang, Maoyi; Leung, Lai-Yung R.; Adler, Robert F.

    2012-09-28

    Coarse resolution (upscaled) river networks are critical inputs for runoff routing in macroscale hydrologic models. Recently, Wu et al. (2011) developed a hierarchical Dominant River Tracing (DRT) algorithm for automated extraction and spatial upscaling of basin flow directions and river networks using fine-scale hydrography inputs (e.g., flow direction, river networks, and flow accumulation). The DRT was initially applied using HYDRO1K baseline fine-scale hydrography inputs and the resulting upscaled global hydrography maps were produced at several spatial scales, and verified against other available regional and global datasets. New baseline fine-scale hydrography data from HydroSHEDS are now available for many regions and provide superior scale and quality relative to HYDRO1K. However, HydroSHEDS does not cover regions above 60°N. In this study, we applied the DRT algorithms using combined HydroSHEDS and HYDRO1K global fine-scale hydrography inputs, and produced a new series of upscaled global river network data at multiple (1/16° to 2°) spatial resolutions in a consistent (WGS84) projection. The new upscaled river networks are internally consistent and congruent with the baseline fine-scale inputs. The DRT results preserve baseline fine-scale river networks independent of spatial scales, with consistency in river network, basin shape, basin area, river length, and basin internal drainage structure between upscaled and baseline fine-scale hydrography. These digital data are available online for public access (ftp://ftp.ntsg.umt.edu/pub/data/DRT/) and should facilitate improved regional to global scale hydrological simulations, including runoff routing and river discharge calculations.

  12. Modeling and implementing a database on drugs into a hospital intranet.

    PubMed

    François, M; Joubert, M; Fieschi, D; Fieschi, M

    1998-09-01

    Our objective was to develop a drug information service by implementing a database on drugs in our university hospitals' information system. Thériaque is a database, maintained by a group of pharmacists and physicians, covering all the drugs available in France. Before its implementation we modeled its content (chemical classes, active components, excipients, indications, contra-indications, side effects, and so on) according to an object-oriented method. We then designed HTML pages whose layout reflects the structure of the classes of objects in the model. Fields in the pages are filled dynamically with the results of queries to a relational database in which the information on drugs is stored. This allowed a fast implementation and did not require porting a client application to the thousands of workstations on the network. The interface provides end users with an easy-to-use and natural way to access drug-related information in an internet environment.

  13. Crystal Plasticity Modeling of Microstructure Evolution and Mechanical Fields During Processing of Metals Using Spectral Databases

    NASA Astrophysics Data System (ADS)

    Knezevic, Marko; Kalidindi, Surya R.

    2017-02-01

    This article reviews the advances made in the development and implementation of a novel approach to speeding up crystal plasticity simulations of metal processing by one to three orders of magnitude when compared with the conventional approaches, depending on the specific details of implementation. This is mainly accomplished through the use of spectral crystal plasticity (SCP) databases grounded in the compact representation of the functions central to crystal plasticity computations. A key benefit of the databases is that they allow for a noniterative retrieval of constitutive solutions for any arbitrary plastic stretching tensor (i.e., deformation mode) imposed on a crystal of arbitrary orientation. The article emphasizes the latest developments in terms of embedding SCP databases within implicit finite elements. To illustrate the potential of these novel implementations, the results from several process modeling applications including equal channel angular extrusion and rolling are presented and compared with experimental measurements and predictions from other models.

  14. Empirical evaluation of analytical models for parallel relational data-base queries. Master's thesis

    SciTech Connect

    Denham, M.C.

    1990-12-01

    This thesis documents the design and implementation of three parallel join algorithms to be used in the verification of analytical models developed by Kearns. Kearns developed a set of analytical models for a variety of relational database queries. These models serve as tools for the design of parallel relational database systems. Each of Kearns' models is classified as either single step or multiple step. The single step models reflect queries that require only one operation, while the multiple step models reflect queries that require multiple operations. Three parallel join algorithms were implemented based upon Kearns' models. Two are based upon single step join models and one is based upon a multiple step join model. They are implemented on an Intel iPSC/1 parallel computer. The single step join algorithms include the parallel nested-loop join and the bucket (or hash) join. The multiple step algorithm that was implemented is a pipelined version of the bucket join. The results show that, within the constraints of the test cases run, the three models are all accurate to within about 8.5%, and they should prove useful in the design of parallel relational database systems.
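A single-step bucket (hash) join of the kind parallelized in the thesis can be sketched sequentially; the iPSC/1 implementations distribute the buckets across processors, which this sketch omits:

```python
# Sequential sketch of a bucket (hash) join: build a hash table on one
# relation, then probe it with the other. Parallel versions partition the
# buckets across processors.
from collections import defaultdict

def hash_join(left, right, key_left, key_right):
    buckets = defaultdict(list)
    for row in left:
        buckets[row[key_left]].append(row)                  # build phase
    return [(l, r) for r in right for l in buckets[r[key_right]]]  # probe phase

emp  = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Alan"}]
dept = [{"emp_id": 1, "dept": "Math"}, {"emp_id": 1, "dept": "CS"}]
print(hash_join(emp, dept, "id", "emp_id"))
```

The build/probe split is also what makes the multiple-step pipelined variant natural: probe results can be streamed to the next join stage as they are produced rather than materialized in full.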

  15. Microporoelastic Modeling of Organic-Rich Shales

    NASA Astrophysics Data System (ADS)

    Khosh Sokhan Monfared, S.; Abedi, S.; Ulm, F. J.

    2014-12-01

    Organic-rich shale is an extremely complex, naturally occurring geo-composite. The heterogeneous nature of organic-rich shale and its anisotropic behavior pose grand challenges for characterization, modeling, and engineering design. The intricacy of organic-rich shale, in the context of its mechanical and poromechanical properties, originates in the presence of organic/inorganic constituents and their interfaces, as well as the occurrence of porosity and elastic anisotropy at multiple length scales. To capture the first-order mechanisms responsible for the complex behavior of organic-rich shale, we introduce an original approach for micromechanical modeling of organic-rich shales which accounts for the effect of the maturity of the organics on the overall elasticity through morphology considerations. This morphology contribution is captured by means of an effective media theory that bridges the gap between immature and mature systems through the choice of the system's microtexture, namely a matrix-inclusion morphology (Mori-Tanaka) for immature systems and a polycrystal/granular morphology for mature systems. We also show that interfaces play a role in the effective elasticity of mature, organic-rich shales. The models are calibrated by means of ultrasonic pulse velocity measurements of elastic properties and validated by means of nanoindentation results. Sensitivity analyses using Spearman's Partial Rank Correlation Coefficient show the importance of porosity and Total Organic Carbon (TOC) as key input parameters for accurate model predictions. These modeling developments pave the way to a "unique" set of clay properties and highlight the importance of the depositional environment, burial, and diagenetic processes on the overall mechanical and poromechanical behavior of organic-rich shale. These developments also emphasize the importance of understanding and modeling clay elasticity and organic maturity on the overall rock behavior, which is of critical importance for a

  16. Primate Models in Organ Transplantation

    PubMed Central

    Anderson, Douglas J.; Kirk, Allan D.

    2013-01-01

    Large animal models have long served as the proving grounds for advances in transplantation, bridging the gap between inbred mouse experimentation and human clinical trials. Although a variety of species have been and continue to be used, the emergence of highly targeted biologic- and antibody-based therapies has required models to have a high degree of homology with humans. Thus, the nonhuman primate has become the model of choice in many settings. This article will provide an overview of nonhuman primate models of transplantation. Issues of primate genetics and care will be introduced, and a brief overview of technical aspects for various transplant models will be discussed. Finally, several prominent immunosuppressive and tolerance strategies used in primates will be reviewed. PMID:24003248

  17. Genome databases

    SciTech Connect

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  18. A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.

    2015-04-01

    Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids is published, and the results are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol and containing many useful outputs from the models, such as the intensities of 178 emission lines and the ionic fractions of all the ions. Some examples of the use of the 3MdB are also presented.

  19. Modeling the High Speed Research Cycle 2B Longitudinal Aerodynamic Database Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.; Proffitt, M. S.

    1999-01-01

    The data for the longitudinal non-dimensional aerodynamic coefficients in the High Speed Research Cycle 2B aerodynamic database were modeled using polynomial expressions identified with an orthogonal function modeling technique. The discrepancy between the tabular aerodynamic data and the polynomial models was tested and shown to be less than 15 percent for drag, lift, and pitching moment coefficients over the entire flight envelope. Most of this discrepancy was traced to smoothing local measurement noise and to the omission of mass case 5 data in the modeling process. A simulation check case showed that the polynomial models provided a compact and accurate representation of the nonlinear aerodynamic dependencies contained in the HSR Cycle 2B tabular aerodynamic database.
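The discrepancy test described above reduces to evaluating the polynomial model against the tabular data and checking the worst-case relative error; a minimal sketch with hypothetical coefficients and table values (not the HSR Cycle 2B data):

```python
# Compare a polynomial model against tabular data and verify the relative
# error stays under 15%. Coefficients and table values are hypothetical.

def poly(coeffs, x):
    """Evaluate a polynomial with coefficients ordered from the constant term up."""
    return sum(c * x**i for i, c in enumerate(coeffs))

table = [(0.0, 1.00), (0.5, 1.28), (1.0, 2.05)]   # (alpha, CL), hypothetical
model = [1.0, 0.05, 1.0]                          # CL ~ 1 + 0.05*alpha + alpha^2

max_rel_err = max(abs(poly(model, x) - y) / abs(y) for x, y in table)
print(max_rel_err < 0.15)  # True for these sample values
```

The same check, run over every tabulated flight condition, is what establishes the quoted 15 percent bound for the real database.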

  20. The Subject-Object Relationship Interface Model in Database Management Systems.

    ERIC Educational Resources Information Center

    Yannakoudakis, Emmanuel J.; Attar-Bashi, Hussain A.

    1989-01-01

    Describes a model that displays structures necessary to map between the conceptual and external levels in database management systems, using an algorithm that maps the syntactic representations of tuples onto semantic representations. A technique for translating tuples into natural language sentences is introduced, and a system implemented in…

  1. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, use of automatic information extraction tools, web technology, and databases.

  2. The Transporter Classification Database

    PubMed Central

    Saier, Milton H.; Reddy, Vamsee S.; Tamang, Dorjee G.; Västermark, Åke

    2014-01-01

    The Transporter Classification Database (TCDB; http://www.tcdb.org) serves as a common reference point for transport protein research. The database contains more than 10 000 non-redundant proteins that represent all currently recognized families of transmembrane molecular transport systems. Proteins in TCDB are organized in a five-level hierarchical system, where the first two levels are the class and subclass, the second two are the family and subfamily, and the last one is the transport system. Superfamilies that contain multiple families are included as hyperlinks to the five-tier TC hierarchy. TCDB includes proteins from all types of living organisms and is the only transporter classification system that is both universal and recognized by the International Union of Biochemistry and Molecular Biology. It has been expanded by manual curation, contains extensive text descriptions providing structural, functional, mechanistic and evolutionary information, is supported by unique software and is interconnected to many other relevant databases. TCDB is of increasing usefulness to the international scientific community and can serve as a model for the expansion of database technologies. This manuscript describes an update of the database descriptions previously featured in NAR database issues. PMID:24225317
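The five-tier hierarchy is encoded directly in a TC number, so class through transport system can be read off by splitting it; a minimal parsing sketch:

```python
# Parse a five-component TC number into the five hierarchy levels it encodes:
# class, subclass, family, subfamily, and transport system.

def parse_tc(tc_number):
    cls, subclass, family, subfamily, _system = tc_number.split(".")
    return {
        "class": cls,
        "subclass": f"{cls}.{subclass}",
        "family": f"{cls}.{subclass}.{family}",
        "subfamily": f"{cls}.{subclass}.{family}.{subfamily}",
        "system": tc_number,
    }

print(parse_tc("2.A.1.1.1")["family"])  # 2.A.1
```

Each prefix of the TC number names a broader grouping, which is why superfamily pages can link into the hierarchy at any of the five tiers.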

  3. The LAILAPS search engine: a feature model for relevance ranking in life science databases.

    PubMed

    Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe

    2010-03-25

    Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, and in particular in life-science databases, is a valuable resource, and bringing it to the scientist's desktop requires well-performing search engines. Here, neither the response time nor the number of results is the crucial factor; for millions of query results, it is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observing how users inspect search engine results, we condensed a set of 9 relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance. The features are both sufficient to estimate potential relevance and efficiently quantifiable. Deriving a relevance prediction function that computes relevance from these features constitutes a regression problem. To solve it, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine, which can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
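The ranking idea (score each entry from a small feature vector, then sort) can be sketched with a plain weighted sum standing in for the trained neural network; the feature names and weights below are invented for illustration:

```python
# Feature-based relevance ranking sketch: a weighted sum stands in for the
# trained neural network; features and weights are invented for illustration.

WEIGHTS = {"term_in_title": 3.0, "term_frequency": 1.5, "curated_entry": 2.0}

def relevance(features):
    return sum(WEIGHTS[name] * value for name, value in features.items())

entries = [
    ("entry_a", {"term_in_title": 1, "term_frequency": 0.2, "curated_entry": 1}),
    ("entry_b", {"term_in_title": 0, "term_frequency": 0.9, "curated_entry": 0}),
]
ranked = sorted(entries, key=lambda e: relevance(e[1]), reverse=True)
print([name for name, _ in ranked])  # entry_a ranked first
```

Replacing the fixed weights with a function learned from labeled relevant entries is exactly the regression problem the abstract describes.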

  4. Mouse genome database 2016

    PubMed Central

    Bult, Carol J.; Eppig, Janan T.; Blake, Judith A.; Kadin, James A.; Richardson, Joel E.

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  5. Mouse genome database 2016.

    PubMed

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-04

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data.

  6. Examination of the U.S. EPA’s vapor intrusion database based on models

    PubMed Central

    Yao, Yijun; Shen, Rui; Pennell, Kelly G.; Suuberg, Eric M.

    2013-01-01

    In the United States Environmental Protection Agency (U.S. EPA)’s vapor intrusion (VI) database, there appears to be a trend showing an inverse relationship between the indoor air concentration attenuation factor and the subsurface source vapor concentration. This is inconsistent with the physical understanding embodied in current vapor intrusion models. This paper explores possible reasons for this apparent discrepancy. Soil vapor transport processes occur independently of the actual building entry process, and are consistent with the trends in the database results. A recent EPA technical report provided a list of factors affecting vapor intrusion, and the influence of some of these is explored in the context of the database results. PMID:23293835
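    The attenuation factor at the center of this discussion is conventionally defined as the ratio of indoor air concentration to subsurface source vapor concentration. A minimal sketch, with made-up concentrations, of how a fixed indoor contribution (e.g. background sources) would produce the kind of inverse trend described above:

```python
def attenuation_factor(c_indoor, c_source):
    """Vapor intrusion attenuation factor: indoor air concentration
    divided by subsurface source vapor concentration (same units)."""
    if c_source <= 0:
        raise ValueError("source concentration must be positive")
    return c_indoor / c_source

# Hypothetical paired measurements (ug/m^3): if the indoor concentration
# is dominated by a roughly constant background, the computed factor
# falls as the source concentration grows.
for c_source in (10.0, 100.0, 1000.0):
    c_indoor = 1.0  # assumed constant indoor background contribution
    print(c_source, attenuation_factor(c_indoor, c_source))
```

    The sketch illustrates one candidate explanation for the database trend, not a finding of the paper itself.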

  7. BriX: a database of protein building blocks for structural analysis, modeling and design.

    PubMed

    Vanhee, Peter; Verschueren, Erik; Baeten, Lies; Stricher, Francois; Serrano, Luis; Rousseau, Frederic; Schymkowitz, Joost

    2011-01-01

    High-resolution structures of proteins remain the most valuable source for understanding their function in the cell and provide leads for drug design. Since the availability of sufficient protein structures to tackle complex problems such as modeling backbone moves or docking remains a problem, alternative approaches using small, recurrent protein fragments have been employed. Here we present two databases that provide a vast resource for implementing such fragment-based strategies. The BriX database contains fragments from over 7000 non-homologous proteins from the Astral collection, segmented in lengths from 4 to 14 residues and clustered according to structural similarity, summing up to a content of 2 million fragments per length. To overcome the lack of loops classified in BriX, we constructed the Loop BriX database of non-regular structure elements, clustered according to end-to-end distance between the regular residues flanking the loop. Both databases are available online (http://brix.crg.es) and can be accessed through a user-friendly web-interface. For high-throughput queries a web-based API is provided, as well as full database downloads. In addition, two exciting applications are provided as online services: (i) user-submitted structures can be covered on the fly with BriX classes, representing putative structural variation throughout the protein and (ii) gaps or low-confidence regions in these structures can be bridged with matching fragments.

  8. Database assessment of CMIP5 and hydrological models to determine flood risk areas

    NASA Astrophysics Data System (ADS)

    Limlahapun, Ponthip; Fukui, Hiromichi

    2016-11-01

    Water-related disasters may not be addressed by a single scientific method. Based on this premise, we combined logical design concepts, the sequential association of results amongst models, and database applications in an attempt to analyse historical and future scenarios in the context of flooding. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5) to derive precipitation; (2) the Integrated Flood Analysis System (IFAS) to extract the amount of discharge; and (3) the Hydrologic Engineering Center (HEC) model to generate inundated areas. This research notably focused on integrating data regardless of system-design complexity; database approaches are highly flexible, manageable, and well supported for system data transfer, which makes them suitable for monitoring a flood. The resulting flood map, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.

  9. A C library for retrieving specific reactions from the BioModels database

    PubMed Central

    Neal, M. L.; Galdzicki, M.; Gallimore, J. T.; Sauro, H. M.

    2014-01-01

    Summary: We describe libSBMLReactionFinder, a C library for retrieving specific biochemical reactions from the curated systems biology markup language models contained in the BioModels database. The library leverages semantic annotations in the database to associate reactions with human-readable descriptions, making the reactions retrievable through simple string searches. Our goal is to provide a useful tool for quantitative modelers who seek to accelerate modeling efforts through the reuse of previously published representations of specific chemical reactions. Availability and implementation: The library is open-source and dual licensed under the Mozilla Public License Version 2.0 and GNU General Public License Version 2.0. Project source code, downloads and documentation are available at http://code.google.com/p/lib-sbml-reaction-finder. Contact: mneal@uw.edu PMID:24078714
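    The retrieval idea described above (semantic annotations turned into human-readable descriptions that can be searched as plain strings) can be illustrated with a small sketch. The identifiers and descriptions below are invented for illustration, and this is Python rather than the library's actual C API:

```python
# Toy index in the spirit of the described approach: reactions keyed by
# human-readable descriptions derived from semantic annotations, made
# retrievable via simple substring search. All IDs and descriptions
# are hypothetical examples, not entries from the BioModels database.
REACTION_INDEX = {
    "BIOMD0000000001:re1": "phosphorylation of glucose by hexokinase",
    "BIOMD0000000001:re2": "isomerization of glucose-6-phosphate",
    "BIOMD0000000002:re1": "binding of calcium to calmodulin",
}

def find_reactions(query):
    """Return reaction IDs whose description contains the query string."""
    q = query.lower()
    return sorted(rid for rid, desc in REACTION_INDEX.items()
                  if q in desc.lower())

print(find_reactions("glucose"))
```

    The point of the design is that a modeler can locate a specific reaction without knowing the model's internal species identifiers.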

  10. A database and tool for boundary conditions for regional air quality modeling: description and evaluation

    NASA Astrophysics Data System (ADS)

    Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.

    2013-09-01

    Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying Lateral Boundary Conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2000-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone vertical profiles. The results show performance is largely within uncertainty estimates for the Tropospheric Emission Spectrometer (TES) with some exceptions. The major difference shows a high bias in the upper troposphere along the southern boundary in January. This publication documents the global simulation database, the tool for conversion to LBC, and the fidelity of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.

  11. LegumeIP: an integrative database for comparative genomics and transcriptomics of model legumes.

    PubMed

    Li, Jun; Dai, Xinbin; Liu, Tingsong; Zhao, Patrick Xuechun

    2012-01-01

    Legumes play a vital role in maintaining the nitrogen cycle of the biosphere. They conduct symbiotic nitrogen fixation through endosymbiotic relationships with bacteria in root nodules. However, this and other characteristics of legumes, including mycorrhization, compound leaf development and profuse secondary metabolism, are absent in the typical model plant Arabidopsis thaliana. We present LegumeIP (http://plantgrn.noble.org/LegumeIP/), an integrative database for comparative genomics and transcriptomics of model legumes, for studying gene function and genome evolution in legumes. LegumeIP compiles gene and gene family information, syntenic and phylogenetic context and tissue-specific transcriptomic profiles. The database holds the genomic sequences of three model legumes, Medicago truncatula, Glycine max and Lotus japonicus, plus two reference plant species, A. thaliana and Populus trichocarpa, with annotations based on UniProt, InterProScan, Gene Ontology and the Kyoto Encyclopedia of Genes and Genomes databases. LegumeIP also contains large-scale microarray and RNA-Seq-based gene expression data. Our new database is capable of systematic synteny analysis across M. truncatula, G. max, L. japonicus and A. thaliana, as well as construction and phylogenetic analysis of gene families across the five hosted species. Finally, LegumeIP provides comprehensive search and visualization tools that enable flexible queries based on gene annotation, gene family, synteny and relative gene expression.

  12. Modeling personnel turnover in the parametric organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict required staffing profiles to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.

  13. Creating a model to detect dairy cattle farms with poor welfare using a national database.

    PubMed

    Krug, C; Haskell, M J; Nunes, T; Stilwell, G

    2015-12-01

    The objective of this study was to determine whether dairy farms with poor cow welfare could be identified using a national database for bovine identification and registration that monitors cattle deaths and movements. The welfare of dairy cattle was assessed using the Welfare Quality(®) protocol (WQ) on 24 Portuguese dairy farms and on 1930 animals. Five farms were classified as having poor welfare and the other 19 were classified as having good welfare. Fourteen million records from the national cattle database were analysed to identify potential welfare indicators for dairy farms. Fifteen potential national welfare indicators were calculated based on that database, and the link between the results of the WQ evaluation and the national cattle database was made using the identification code of each farm. Of the fifteen potential national welfare indicators, only two were significantly different between farms with good welfare and poor welfare: 'proportion of on-farm deaths' (p<0.01) and 'female/male birth ratio' (p<0.05). To determine whether the database welfare indicators could be used to distinguish farms with good welfare from farms with poor welfare, we created a model using the J48 classifier of the Waikato Environment for Knowledge Analysis. The model was a decision tree based on two variables, 'proportion of on-farm deaths' and 'calving-to-calving interval', and it was able to correctly identify 70% and 79% of the farms classified as having poor and good welfare, respectively. The national cattle database analysis could be useful in helping official veterinary services detect farms that have poor welfare and also in determining which welfare indicators are poor on each particular farm.
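    The final two-variable decision tree can be pictured as a pair of threshold rules. A toy sketch in that spirit; the threshold values are assumptions for illustration, not the fitted values from the study:

```python
def classify_farm(on_farm_death_prop, calving_interval_days):
    """Toy two-variable decision tree in the spirit of the J48 model:
    flag a farm as 'poor' welfare when on-farm deaths are frequent or
    the calving-to-calving interval is unusually long. The thresholds
    here are illustrative assumptions, not the study's fitted values."""
    if on_farm_death_prop > 0.05:
        return "poor"
    if calving_interval_days > 450:
        return "poor"
    return "good"

print(classify_farm(0.08, 400))   # frequent on-farm deaths
print(classify_farm(0.02, 480))   # long calving-to-calving interval
print(classify_farm(0.02, 400))   # neither criterion met
```

    A fitted J48 tree would learn these split points (and their order) from the labeled farm data rather than hard-coding them.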

  14. Cardiac electromechanical models: from cell to organ.

    PubMed

    Trayanova, Natalia A; Rice, John Jeremy

    2011-01-01

    The heart is a multiphysics and multiscale system that has driven the development of the most sophisticated mathematical models at the frontiers of computational physiology and medicine. This review focuses on electromechanical (EM) models of the heart from the molecular level of myofilaments to anatomical models of the organ. Because of the coupling in terms of function and emergent behaviors at each level of biological hierarchy, separation of behaviors at a given scale is difficult. Here, a separation is drawn at the cell level so that the first half addresses subcellular/single-cell models and the second half addresses organ models. At the subcellular level, myofilament models represent actin-myosin interaction and Ca-based activation. The discussion of specific models emphasizes the roles of cooperative mechanisms and sarcomere length dependence of contraction force, considered to be the cellular basis of the Frank-Starling law. A model of electrophysiology and Ca handling can be coupled to a myofilament model to produce an EM cell model, and representative examples are summarized to provide an overview of the progression of the field. The second half of the review covers organ-level models that require solution of the electrical component as a reaction-diffusion system and the mechanical component, in which active tension generated by the myocytes produces deformation of the organ as described by the equations of continuum mechanics. As outlined in the review, different organ-level models have chosen to use different ionic and myofilament models depending on the specific application; this choice has been largely dictated by compromises between model complexity and computational tractability. The review also addresses application areas of EM models such as cardiac resynchronization therapy and the role of mechano-electric coupling in arrhythmias and defibrillation.
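    The organ-level electrical component described above is a reaction-diffusion system. A minimal 1D explicit finite-difference sketch, with a generic cubic excitation term standing in for a real ionic model and illustrative parameters:

```python
# 1D reaction-diffusion cable sketch: dV/dt = D * d2V/dx2 + f(V),
# integrated with explicit Euler in time and central differences in
# space. Parameters are illustrative, chosen only for numerical
# stability (dt * D / dx**2 << 0.5).
N, D, dx, dt = 100, 0.1, 1.0, 0.01

def reaction(v):
    # Generic cubic (Nagumo-type) excitation term with threshold 0.1;
    # a stand-in for an ionic model, not any specific one.
    return v * (1.0 - v) * (v - 0.1)

V = [0.0] * N
for i in range(5):          # stimulate the left end of the cable
    V[i] = 1.0

for _ in range(5000):
    lap = [0.0] * N
    for i in range(1, N - 1):
        lap[i] = (V[i - 1] - 2 * V[i] + V[i + 1]) / dx**2
    V = [v + dt * (D * l + reaction(v)) for v, l in zip(V, lap)]

# The excitation front propagates rightward from the stimulated region.
print("excited cells:", sum(1 for v in V if v > 0.5))
```

    Organ-scale models solve the same kind of system in 3D anatomical geometry, coupled to continuum mechanics for the deformation.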

  15. Cardiac Electromechanical Models: From Cell to Organ

    PubMed Central

    Trayanova, Natalia A.; Rice, John Jeremy

    2011-01-01

    The heart is a multiphysics and multiscale system that has driven the development of the most sophisticated mathematical models at the frontiers of computational physiology and medicine. This review focuses on electromechanical (EM) models of the heart from the molecular level of myofilaments to anatomical models of the organ. Because of the coupling in terms of function and emergent behaviors at each level of biological hierarchy, separation of behaviors at a given scale is difficult. Here, a separation is drawn at the cell level so that the first half addresses subcellular/single-cell models and the second half addresses organ models. At the subcellular level, myofilament models represent actin–myosin interaction and Ca-based activation. The discussion of specific models emphasizes the roles of cooperative mechanisms and sarcomere length dependence of contraction force, considered to be the cellular basis of the Frank–Starling law. A model of electrophysiology and Ca handling can be coupled to a myofilament model to produce an EM cell model, and representative examples are summarized to provide an overview of the progression of the field. The second half of the review covers organ-level models that require solution of the electrical component as a reaction–diffusion system and the mechanical component, in which active tension generated by the myocytes produces deformation of the organ as described by the equations of continuum mechanics. As outlined in the review, different organ-level models have chosen to use different ionic and myofilament models depending on the specific application; this choice has been largely dictated by compromises between model complexity and computational tractability. The review also addresses application areas of EM models such as cardiac resynchronization therapy and the role of mechano-electric coupling in arrhythmias and defibrillation. PMID:21886622

  16. 3D Digital Model Database Applied to Conservation and Research of Wooden Construction in China

    NASA Astrophysics Data System (ADS)

    Zheng, Y.

    2013-07-01

    Protected by the Tai-Hang Mountains, Shanxi Province, located in north central China, is a highly prosperous, densely populated valley and considered to be one of the cradles of Chinese civilization. Its continuous habitation and rich culture have given rise to a large number of temple complexes and pavilions. Among these structures, 153 in the southern Shanxi area can be dated from the Tang dynasty (618-907 C.E.) to the end of the Yuan dynasty (1279-1368 C.E.). The buildings are the best-preserved examples of wooden Chinese architecture in existence, exemplifying historic building technology and displaying highly intricate architectural decoration and detailing. They have survived war, earthquakes, and, in the last hundred years, neglect. In 2005, a decade-long conservation project was initiated by the State Administration of Cultural Heritage of China (SACH) to conserve and document these important buildings. The conservation process requires stabilization, conservation of important features, and, where necessary, partial dismantlement in order to replace unsound structural elements. The project team of CHCC has developed a practical recording system that created a record of all building components prior to and during the conservation process. We are now establishing a comprehensive database covering all 153 of the early buildings, through which we can easily enter, browse, and index information on the wooden construction, down to component details. The database can help us carry out comparative studies of these wooden structures and provide important support for the continued conservation of these heritage buildings. For some of the most important wooden structures, we have established three-dimensional models. Connecting the database with the 3D digital models based on ArcGIS, we have developed a 3D Digital Model Database for these cherished buildings. The 3D Digital Model Database helps us set up an integrated information inventory

  17. Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17.

    PubMed

    Ruddigkeit, Lars; van Deursen, Ruud; Blum, Lorenz C; Reymond, Jean-Louis

    2012-11-26

    Drug molecules consist of a few tens of atoms connected by covalent bonds. How many such molecules are possible in total and what is their structure? This question is of pressing interest in medicinal chemistry to help solve the problems of drug potency, selectivity, and toxicity and reduce attrition rates by pointing to new molecular series. To better define the unknown chemical space, we have enumerated 166.4 billion molecules of up to 17 atoms of C, N, O, S, and halogens forming the chemical universe database GDB-17, covering a size range containing many drugs and typical for lead compounds. GDB-17 contains millions of isomers of known drugs, including analogs with high shape similarity to the parent drug. Compared to known molecules in PubChem, GDB-17 molecules are much richer in nonaromatic heterocycles, quaternary centers, and stereoisomers, densely populate the third dimension in shape space, and represent many more scaffold types.

  18. A scalable database model for multiparametric time series: a volcano observatory case study

    NASA Astrophysics Data System (ADS)

    Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea

    2014-05-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. By the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of a sampling period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization provides the ability to perform operations, such as querying and visualization, on many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer checks the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow real-time signal acquisition, according to a per-user data access policy.
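    The standardization step described above can be sketched as a small relational schema keyed by a common time scale. Table and column names are illustrative, not TSDSystem's actual design:

```python
import sqlite3

# One table per concern: sensors and their measurements on a shared
# time axis, so different series can be queried over a common range.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sensor (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    unit TEXT NOT NULL
);
CREATE TABLE measurement (
    sensor_id INTEGER NOT NULL REFERENCES sensor(id),
    t         TEXT    NOT NULL,   -- ISO-8601 timestamp, common scale
    value     REAL    NOT NULL,
    PRIMARY KEY (sensor_id, t)
);
""")
conn.execute("INSERT INTO sensor VALUES (1, 'tremor', 'um/s')")
conn.execute("INSERT INTO sensor VALUES (2, 'SO2_flux', 't/d')")
rows = [(1, "2014-05-01T00:00:00", 3.2),
        (1, "2014-05-01T00:01:00", 3.5),
        (2, "2014-05-01T00:00:00", 1200.0)]
conn.executemany("INSERT INTO measurement VALUES (?, ?, ?)", rows)

# Query two heterogeneous series synchronized on the same time range.
synced = conn.execute("""
    SELECT t, sensor_id, value FROM measurement
    WHERE t BETWEEN '2014-05-01T00:00:00' AND '2014-05-01T00:01:00'
    ORDER BY t, sensor_id
""").fetchall()
print(synced)
```

    A production system would add the partitioning strategy and loader layers on top of a schema of this shape.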

  19. The Chinchilla Research Resource Database: resource for an otolaryngology disease model

    PubMed Central

    Shimoyama, Mary; Smith, Jennifer R.; De Pons, Jeff; Tutaj, Marek; Khampang, Pawjai; Hong, Wenzhou; Erbe, Christy B.; Ehrlich, Garth D.; Bakaletz, Lauren O.; Kerschner, Joseph E.

    2016-01-01

    The long-tailed chinchilla (Chinchilla lanigera) is an established animal model for diseases of the inner and middle ear, among others. In particular, chinchilla is commonly used to study diseases involving viral and bacterial pathogens and polymicrobial infections of the upper respiratory tract and the ear, such as otitis media. The value of the chinchilla as a model for human diseases prompted the sequencing of its genome in 2012 and the more recent development of the Chinchilla Research Resource Database (http://crrd.mcw.edu) to provide investigators with easy access to relevant datasets and software tools to enhance their research. The Chinchilla Research Resource Database contains a complete catalog of genes for chinchilla and, for comparative purposes, human. Chinchilla genes can be viewed in the context of their genomic scaffold positions using the JBrowse genome browser. In contrast to the corresponding records at NCBI, individual gene reports at CRRD include functional annotations for Disease, Gene Ontology (GO) Biological Process, GO Molecular Function, GO Cellular Component and Pathway assigned to chinchilla genes based on annotations from the corresponding human orthologs. Data can be retrieved via keyword and gene-specific searches. Lists of genes with similar functional attributes can be assembled by leveraging the hierarchical structure of the Disease, GO and Pathway vocabularies through the Ontology Search and Browser tool. Such lists can then be further analyzed for commonalities using the Gene Annotator (GA) Tool. All data in the Chinchilla Research Resource Database is freely accessible and downloadable via the CRRD FTP site or using the download functions available in the search and analysis tools. The Chinchilla Research Resource Database is a rich resource for researchers using, or considering the use of, chinchilla as a model for human disease. Database URL: http://crrd.mcw.edu PMID:27173523

  20. The Chinchilla Research Resource Database: resource for an otolaryngology disease model.

    PubMed

    Shimoyama, Mary; Smith, Jennifer R; De Pons, Jeff; Tutaj, Marek; Khampang, Pawjai; Hong, Wenzhou; Erbe, Christy B; Ehrlich, Garth D; Bakaletz, Lauren O; Kerschner, Joseph E

    2016-01-01

    The long-tailed chinchilla (Chinchilla lanigera) is an established animal model for diseases of the inner and middle ear, among others. In particular, chinchilla is commonly used to study diseases involving viral and bacterial pathogens and polymicrobial infections of the upper respiratory tract and the ear, such as otitis media. The value of the chinchilla as a model for human diseases prompted the sequencing of its genome in 2012 and the more recent development of the Chinchilla Research Resource Database (http://crrd.mcw.edu) to provide investigators with easy access to relevant datasets and software tools to enhance their research. The Chinchilla Research Resource Database contains a complete catalog of genes for chinchilla and, for comparative purposes, human. Chinchilla genes can be viewed in the context of their genomic scaffold positions using the JBrowse genome browser. In contrast to the corresponding records at NCBI, individual gene reports at CRRD include functional annotations for Disease, Gene Ontology (GO) Biological Process, GO Molecular Function, GO Cellular Component and Pathway assigned to chinchilla genes based on annotations from the corresponding human orthologs. Data can be retrieved via keyword and gene-specific searches. Lists of genes with similar functional attributes can be assembled by leveraging the hierarchical structure of the Disease, GO and Pathway vocabularies through the Ontology Search and Browser tool. Such lists can then be further analyzed for commonalities using the Gene Annotator (GA) Tool. All data in the Chinchilla Research Resource Database is freely accessible and downloadable via the CRRD FTP site or using the download functions available in the search and analysis tools. The Chinchilla Research Resource Database is a rich resource for researchers using, or considering the use of, chinchilla as a model for human disease. Database URL: http://crrd.mcw.edu.

  1. Database of the United States Coal Pellet Collection of the U.S. Geological Survey Organic Petrology Laboratory

    USGS Publications Warehouse

    Deems, Nikolaus J.; Hackley, Paul C.

    2012-01-01

    The Organic Petrology Laboratory (OPL) of the U.S. Geological Survey (USGS) Eastern Energy Resources Science Center in Reston, Virginia, contains several thousand processed coal sample materials that were loosely organized in laboratory drawers for the past several decades. The majority of these were prepared as 1-inch-diameter particulate coal pellets (more than 6,000 pellets; one sample usually was prepared as two pellets, although some samples were prepared in as many as four pellets), which were polished and used in reflected light petrographic studies. These samples represent the work of many scientists from the 1970s to the present, most notably Ron Stanton, who managed the OPL until 2001 (see Warwick and Ruppert, 2005, for a comprehensive bibliography of Ron Stanton's work). The purpose of the project described herein was to organize and catalog the U.S. part of the petrographic sample collection into a comprehensive database (available with this report as a Microsoft Excel file) and to compile and list published studies associated with the various sample sets. Through this work, the extent of the collection is publicly documented as a resource and sample library available to other scientists and researchers working in U.S. coal basins previously studied by organic petrologists affiliated with the USGS. Other researchers may obtain samples in the OPL collection on loan at the discretion of the USGS authors listed in this report and its associated Web page.

  2. Modeling Personnel Turnover in the Parametric Organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required in order to account for the obvious dynamics of the forming organization. The challenge was to create a simple model that would be credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.

  3. Condensing Organic Aerosols in a Microphysical Model

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Tsigaridis, K.; Bauer, S.

    2015-12-01

    The condensation of organic aerosols is represented in a newly developed box-model scheme, where its effect on the growth and composition of particles is examined. We implemented the volatility basis set (VBS) framework into the aerosol mixing-state-resolving microphysical scheme Multiconfiguration Aerosol TRacker of mIXing state (MATRIX). This new scheme advances the representation of organic aerosols in models in that, contrary to the traditional treatment of organic aerosols as non-volatile in most climate models and in the original version of MATRIX, it treats them as semi-volatile. Such treatment is important because low-volatility organics contribute significantly to the growth of particles. The new scheme includes several classes of semi-volatile organic compounds from the VBS framework that can partition among aerosol populations in MATRIX, thus representing the growth of particles via condensation of low-volatility organic vapors. Results from test cases representing Mexico City and Finnish forest conditions show good representation of the time evolution of concentrations for VBS species in both the gas phase and the condensed particulate phase. Emitted semi-volatile primary organic aerosols evaporate almost completely in the high-volatility range, and condense more efficiently in the low-volatility range.
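    In the VBS framework referenced above, the equilibrium split of a semi-volatile species between gas and particle phases follows from its saturation concentration C* and the total organic aerosol loading C_OA. A minimal sketch of the standard partitioning expression (bin values and loading below are illustrative):

```python
def particle_fraction(c_star, c_oa):
    """Equilibrium particle-phase fraction of a semi-volatile organic
    in the volatility basis set (VBS): F = 1 / (1 + C*/C_OA), where
    C* is the species' saturation concentration and C_OA the total
    organic aerosol mass concentration (both in ug/m^3)."""
    return 1.0 / (1.0 + c_star / c_oa)

# Logarithmically spaced volatility bins (C* in ug/m^3) at an assumed
# total OA loading of 10 ug/m^3: low-volatility bins sit almost
# entirely in the particle phase, high-volatility bins in the gas phase.
c_oa = 10.0
for c_star in (0.01, 1.0, 100.0, 10000.0):
    print(c_star, round(particle_fraction(c_star, c_oa), 4))
```

    This partitioning is what lets low-volatility vapors condense onto, and grow, the aerosol populations tracked by the microphysical scheme.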

  4. Modelling nitrous oxide emissions from organic soils in Europe

    NASA Astrophysics Data System (ADS)

    Leppelt, Thomas; Dechow, Rene; Gebbert, Sören; Freibauer, Annette

    2013-04-01

    The greenhouse gas emission potentials of peatland ecosystems are mandatory inputs for a complete annual emission budget in Europe. The GHG-Europe project aims to improve the modelling capabilities for greenhouse gases, e.g. nitrous oxide. The heterogeneous and event-driven fluxes of nitrous oxide are challenging to model on the European scale, especially with regard to upscaling and certain parameter estimations. Due to these challenges, adequate techniques are needed to create a robust empirical model. Therefore a literature study of nitrous oxide fluxes from organic soils has been carried out. The resulting database contains flux data from boreal and temperate climate zones and covers the different land use categories: cropland, grassland, forest, natural and peat extraction sites. Managed cropland and grassland sites in particular feature high emission potentials. Generally, nitrous oxide emissions increase significantly with deep drainage and intensive application of nitrogen fertiliser, whereas natural peatland sites with a near-surface groundwater table can act as a nitrous oxide sink. An empirical fuzzy logic model has been applied to predict annual nitrous oxide emissions from organic soils. The calibration results in two separate models, with best model performance for bogs and fens, respectively. The derived parameter combinations of these models comprise mean groundwater table, nitrogen fertilisation, annual precipitation, air temperature, carbon content and pH value. The influences of the calibrated parameters on nitrous oxide fluxes are corroborated by several studies in the literature. The extrapolation potential has been tested by cross validation. Furthermore, the parameter ranges of the calibrated models are compared to values occurring on the European scale, to avoid unknown systematic errors in the regionalisation. Additionally, a sensitivity analysis specifies the model behaviour for each varied parameter. The upscaling process for European peatland

  5. Leaf respiration (GlobResp) - global trait database supports Earth System Models

    DOE PAGES

    Wullschleger, Stan D.; Warren, Jeffrey; Thornton, Peter E.

    2015-03-20

    Here we detail how Atkin and his colleagues compiled a global database (GlobResp) that details rates of leaf dark respiration and associated traits from sites that span Arctic tundra to tropical forests. This compilation builds upon earlier research (Reich et al., 1998; Wright et al., 2006) and was supplemented by recent field campaigns and unpublished data. In keeping with other trait databases, GlobResp provides insights on how physiological traits, especially rates of dark respiration, vary as a function of environment and how that variation can be used to inform terrestrial biosphere models and land surface components of Earth System Models. Although an important component of plant and ecosystem carbon (C) budgets (Wythers et al., 2013), respiration has only limited representation in models. Seen through the eyes of a plant scientist, Atkin et al. (2015) give readers a unique perspective on the climatic controls on respiration, thermal acclimation and evolutionary adaptation of dark respiration, and insights into the covariation of respiration with other leaf traits. We find there is ample evidence that once large databases are compiled, like GlobResp, they can reveal new knowledge of plant function and provide a valuable resource for hypothesis testing and model development.

  6. Leaf respiration (GlobResp) - global trait database supports Earth System Models

    SciTech Connect

    Wullschleger, Stan D.; Warren, Jeffrey; Thornton, Peter E.

    2015-03-20

    Here we detail how Atkin and his colleagues compiled a global database (GlobResp) that details rates of leaf dark respiration and associated traits from sites that span Arctic tundra to tropical forests. This compilation builds upon earlier research (Reich et al., 1998; Wright et al., 2006) and was supplemented by recent field campaigns and unpublished data. In keeping with other trait databases, GlobResp provides insights on how physiological traits, especially rates of dark respiration, vary as a function of environment and how that variation can be used to inform terrestrial biosphere models and land surface components of Earth System Models. Although an important component of plant and ecosystem carbon (C) budgets (Wythers et al., 2013), respiration has only limited representation in models. Seen through the eyes of a plant scientist, Atkin et al. (2015) give readers a unique perspective on the climatic controls on respiration, thermal acclimation and evolutionary adaptation of dark respiration, and insights into the covariation of respiration with other leaf traits. We find there is ample evidence that once large databases are compiled, like GlobResp, they can reveal new knowledge of plant function and provide a valuable resource for hypothesis testing and model development.

  7. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines the 3D data model with an object-oriented organization method and puts forward an object-oriented model of 3D data. The model enables rapid construction of the logical semantic expression and model of a city in 3D, solves the problem of representing city 3D spatial information where one location has multiple properties and one property spans multiple locations, designs the spatial object structures of point, line, polygon, and body for a city 3D spatial database, and provides a new approach to city 3D GIS modelling and organization management.
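The point/line/polygon/body object structure described in the abstract can be sketched as follows; all class and field names are illustrative, not taken from the paper:

```python
from dataclasses import dataclass, field

# Geometry primitives for a city 3D spatial database:
# point, line, polygon (surface), body (solid).
@dataclass
class Geometry:
    gid: int
    kind: str        # "point" | "line" | "polygon" | "body"
    coords: list     # [(x, y, z), ...]

# An object-oriented city feature: one object may own several geometries
# (same property, multiple locations) and several attributes
# (same location, multiple properties).
@dataclass
class CityObject:
    oid: int
    geometries: list = field(default_factory=list)
    attributes: dict = field(default_factory=dict)

building = CityObject(oid=1)
building.geometries.append(Geometry(1, "body", [(0, 0, 0), (10, 10, 30)]))
building.attributes.update({"use": "office", "owner": "city", "floors": 8})

# The same object also owns its 2D footprint polygon.
building.geometries.append(Geometry(2, "polygon", [(0, 0, 0), (10, 0, 0), (10, 10, 0)]))
print(len(building.geometries), building.attributes["floors"])  # 2 8
```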

  8. S-World: A high resolution global soil database for simulation modelling (Invited)

    NASA Astrophysics Data System (ADS)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high resolution soil information at the global level. A good example of such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess, e.g., global soil carbon stocks. However, they do not allow for simulation runs with, e.g., crop growth simulation models, as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using, e.g., drainage, salinization, and pedogenesis. Combining the topographic ordering and the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database.
Step 3: A model of soil formation is developed that focuses on the basic conceptual question where we are within the range of a particular soil property
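Step 1's topographic disaggregation can be sketched as: pixels of a composite map unit are ranked by elevation, and the component soil types, ordered from low to high expected topographic position, are assigned in proportion to their stated areal shares. All names and numbers below are illustrative:

```python
def disaggregate(pixels, components):
    """Assign each pixel of a composite map unit to one component soil type.

    pixels:     [(pixel_id, elevation)]
    components: [(soil_type, areal_fraction)], ordered from lowest ("wet")
                to highest ("dry") expected topographic position.
    """
    ranked = sorted(pixels, key=lambda p: p[1])  # low to high elevation
    out, i = {}, 0
    for soil, frac in components:
        n = round(frac * len(ranked))
        for pid, _ in ranked[i:i + n]:
            out[pid] = soil
        i += n
    for pid, _ in ranked[i:]:  # any rounding remainder goes to the last type
        out[pid] = components[-1][0]
    return out

# A composite unit of 50% Gleysol (low positions) and 50% Cambisol (high):
mapping = disaggregate([(1, 12.0), (2, 3.0), (3, 40.0), (4, 25.0)],
                       [("Gleysol", 0.5), ("Cambisol", 0.5)])
# the two lowest pixels become Gleysol, the two highest Cambisol
```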

  9. The origin and evolution of model organisms

    NASA Technical Reports Server (NTRS)

    Hedges, S. Blair

    2002-01-01

    The phylogeny and timescale of life are becoming better understood as the analysis of genomic data from model organisms continues to grow. As a result, discoveries are being made about the early history of life and the origin and development of complex multicellular life. This emerging comparative framework and the emphasis on historical patterns is helping to bridge barriers among organism-based research communities.

  10. Putting "Organizations" into an Organization Theory Course: A Hybrid CAO Model for Teaching Organization Theory

    ERIC Educational Resources Information Center

    Hannah, David R.; Venkatachary, Ranga

    2010-01-01

    In this article, the authors present a retrospective analysis of an instructor's multiyear redesign of a course on organization theory into what is called a hybrid Classroom-as-Organization model. It is suggested that this new course design served to apprentice students to function in quasi-real organizational structures. The authors further argue…

  11. A Global Database of Land Surface Parameters at 1-km Resolution in Meteorological and Climate Models.

    NASA Astrophysics Data System (ADS)

    Masson, Valéry; Champeaux, Jean-Louis; Chauvin, Fabrice; Meriguet, Christelle; Lacaze, Roselyne

    2003-05-01

    Ecoclimap, a new complete surface parameter global dataset at a 1-km resolution, is presented. It is intended to be used to initialize the soil-vegetation-atmosphere transfer schemes (SVATs) in meteorological and climate models (at all horizontal scales). The database supports the `tile' approach, which is utilized by an increasing number of SVATs. Two hundred and fifteen ecosystems representing areas of homogeneous vegetation are derived by combining existing land cover maps and climate maps, in addition to using Advanced Very High Resolution Radiometer (AVHRR) satellite data. Then, all surface parameters are derived for each of these ecosystems using lookup tables with the annual cycle of the leaf area index (LAI) being constrained by the AVHRR information. The resulting LAI is validated against a large amount of in situ ground observations, and it is also compared to LAI derived from the International Satellite Land Surface Climatology Project (ISLSCP-2) database and the Polarization and Directionality of the Earth's Reflectance (POLDER) satellite. The comparison shows that this new LAI both reproduces values coherent at large scales with other datasets, and includes the high spatial variations owing to the input land cover data at a 1-km resolution. In terms of climate modeling studies, the use of this new database is shown to improve the surface climatology of the ARPEGE climate model.
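A minimal sketch of the "tile" approach described above: each 1-km cell carries area fractions of ecosystem tiles, each tile carries surface parameters from a lookup table, and a cell-level value is the area-weighted average. The ecosystems and values are invented placeholders, not Ecoclimap entries, and many SVATs aggregate fluxes per tile rather than averaging parameters:

```python
# Illustrative lookup table of per-ecosystem surface parameters.
lookup = {
    "broadleaf_forest": {"lai": 4.5, "albedo": 0.16},
    "grassland":        {"lai": 2.0, "albedo": 0.20},
    "bare_soil":        {"lai": 0.0, "albedo": 0.30},
}

def cell_average(fractions, param):
    """Area-weighted average of a parameter over a cell's ecosystem tiles."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(f * lookup[eco][param] for eco, f in fractions.items())

cell = {"broadleaf_forest": 0.5, "grassland": 0.3, "bare_soil": 0.2}
print(round(cell_average(cell, "lai"), 2))  # 2.85
```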

  12. Sediment-Hosted Copper Deposits of the World: Deposit Models and Database

    USGS Publications Warehouse

    Cox, Dennis P.; Lindsey, David A.; Singer, Donald A.; Diggles, Michael F.

    2003-01-01

    Introduction This publication contains four descriptive models and four grade-tonnage models for sediment hosted copper deposits. Descriptive models are useful in exploration planning and resource assessment because they enable the user to identify deposits in the field and to identify areas on geologic and geophysical maps where deposits could occur. Grade and tonnage models are used in resource assessment to predict the likelihood of different combinations of grades and tonnages that could occur in undiscovered deposits in a specific area. They are also useful in exploration in deciding what deposit types meet the economic objectives of the exploration company. The models in this report supersede the sediment-hosted copper models in USGS Bulletin 1693 (Cox, 1986, and Mosier and others, 1986) and are subdivided into a general type and three subtypes. The general model is useful in classifying deposits whose features are obscured by metamorphism or are otherwise poorly described, and for assessing regions in which the geologic environments are poorly understood. The three subtypes are based on differences in deposit form and environments of deposition. These differences are described under subtypes in the general model. Deposit models are based on the descriptions of geologic environments and physical characteristics, and on metal grades and tonnages of many individual deposits. Data used in this study are presented in a database representing 785 deposits in nine continents. This database was derived partly from data published by Kirkham and others (1994) and from new information in recent publications. To facilitate the construction of grade and tonnage models, the information, presented by Kirkham in disaggregated form, was brought together to provide a single grade and a single tonnage for each deposit. Throughout the report individual deposits are defined as being more than 2,000 meters from the nearest adjacent deposit. 
The deposit models are presented here as

  13. Research proceedings on amphibian model organisms

    PubMed Central

    LIU, Lu-Sha; ZHAO, Lan-Ying; WANG, Shou-Hong; JIANG, Jian-Ping

    2016-01-01

    Model organisms have long been important in biology and medicine due to their specific characteristics. Amphibians, especially Xenopus, play key roles in answering fundamental questions on developmental biology, regeneration, genetics, and toxicology due to their large and abundant eggs, as well as their versatile embryos, which can be readily manipulated and developed in vivo. Furthermore, amphibians have also proven to be of considerable benefit in human disease research due to their conserved cellular developmental and genomic organization. This review gives a brief introduction on the progress and limitations of these animal models in biology and human disease research, and discusses the potential and challenge of Microhyla fissipes as a new model organism. PMID:27469255

  14. Geometric modeling of pelvic organs with thickness

    NASA Astrophysics Data System (ADS)

    Bay, T.; Chen, Z.-W.; Raffin, R.; Daniel, M.; Joli, P.; Feng, Z.-Q.; Bellemare, M.-E.

    2012-03-01

    Physiological changes in the spatial configuration of the internal organs in the abdomen can induce different disorders that need surgery. Given the complexity of the surgical procedure, mechanical simulations are necessary, but the in vivo setting complicates the study of the pelvic organs. In order to determine a realistic behavior of these organs, an accurate geometric model associated with a physical model is therefore required. Our approach integrates a geometric and a physical module. The Geometric Modeling module seeks to build a continuous geometric model: from a dataset of 3D points provided by a segmentation step, surfaces are created through a B-spline fitting process. An energy function is built to measure the bidirectional distance between surface and data. This energy is minimized with an alternate iterative Hoschek-like method. A thickness is added with an offset formulation, and the geometric model is finally exported as a hexahedral mesh. Afterwards, the Physical Modeling module calculates the properties of the soft tissues to simulate the organ displacements. The physical parameters attached to the data are determined with a feedback loop between finite-element deformations and ground-truth acquisition (dynamic MRI).

  15. Experiment Databases

    NASA Astrophysics Data System (ADS)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Besides running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queryable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
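A toy experiment database can be set up with nothing more than SQLite. The schema, algorithm names and accuracies below are invented, but they show the kind of meta-level query the chapter advocates:

```python
import sqlite3

# Minimal experiment database: one row per algorithm run.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE runs (
    algorithm TEXT, dataset TEXT, param_k INTEGER, accuracy REAL)""")
con.executemany("INSERT INTO runs VALUES (?, ?, ?, ?)", [
    ("kNN", "iris", 1, 0.93),
    ("kNN", "iris", 5, 0.96),
    ("C4.5", "iris", None, 0.94),
    ("kNN", "wine", 5, 0.71),
])

# Meta-level question: how does kNN fare per dataset, across all stored runs?
for row in con.execute("""SELECT dataset, AVG(accuracy) FROM runs
                          WHERE algorithm = 'kNN'
                          GROUP BY dataset ORDER BY dataset"""):
    print(row)  # one (dataset, mean accuracy) row for 'iris', then 'wine'
```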

  16. Chess databases as a research vehicle in psychology: Modeling large data.

    PubMed

    Vaci, Nemanja; Bilalić, Merim

    2016-09-01

    The game of chess has often been used for psychological investigations, particularly in cognitive science. The clear-cut rules and well-defined environment of chess provide a model for investigations of basic cognitive processes, such as perception, memory, and problem solving, while the precise rating system for the measurement of skill has enabled investigations of individual differences and expertise-related effects. In the present study, we focus on another appealing feature of chess, namely the large archive databases associated with the game. The German national chess database presented in this study represents a fruitful ground for the investigation of multiple longitudinal research questions, since it collects the data of over 130,000 players and spans over 25 years. The German chess database collects the data of all players, including hobby players, and all tournaments played. This results in a rich and complete collection of the skill, age, and activity of the whole population of chess players in Germany. The database therefore complements the commonly used expertise approach in cognitive science by opening up new possibilities for the investigation of multiple factors that underlie expertise and skill acquisition. Since large datasets are not common in psychology, their introduction also raises the question of optimal and efficient statistical analysis. We offer the database for download and illustrate how it can be used by providing concrete examples and a step-by-step tutorial using different statistical analyses on a range of topics, including skill development over the lifetime, birth cohort effects, effects of activity and inactivity on skill, and gender differences.

  17. A prediction model to estimate completeness of electronic physician claims databases

    PubMed Central

    Lix, Lisa M; Yao, Xue; Kephart, George; Quan, Hude; Smith, Mark; Kuwornu, John Paul; Manoharan, Nitharsana; Kouokam, Wilfrid; Sikdar, Khokan

    2015-01-01

    Objectives Electronic physician claims databases are widely used for chronic disease research and surveillance, but quality of the data may vary with a number of physician characteristics, including payment method. The objectives were to develop a prediction model for the number of prevalent diabetes cases in fee-for-service (FFS) electronic physician claims databases and apply it to estimate cases among non-FFS (NFFS) physicians, for whom claims data are often incomplete. Design A retrospective observational cohort design was adopted. Setting Data from the Canadian province of Newfoundland and Labrador were used to construct the prediction model and data from the province of Manitoba were used to externally validate the model. Participants A cohort of diagnosed diabetes cases was ascertained from physician claims, insured resident registry and hospitalisation records. A cohort of FFS physicians who were responsible for the diagnosis was ascertained from physician claims and registry data. Primary and secondary outcome measures A generalised linear model with a γ distribution was used to model the number of diabetes cases per FFS physician as a function of physician characteristics. The expected number of diabetes cases per NFFS physician was estimated. Results The diabetes case cohort consisted of 31 714 individuals; the mean cases per FFS physician was 75.5 (median=49.0). Sex and years since specialty licensure were significantly associated (p<0.05) with the number of cases per physician. Applying the prediction model to NFFS physician registry data resulted in an estimate of 18 546 cases; only 411 were observed in claims data. The model demonstrated face validity in an independent data set. Conclusions Comparing observed and predicted disease cases is a useful and generalisable approach to assess the quality of electronic databases for population-based research and surveillance. PMID:26310395
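The paper's gamma generalised linear model predicts the expected number of cases per physician from physician characteristics; with a log link, the prediction step looks like the sketch below. The coefficients and covariates are hypothetical placeholders, not the fitted values from the study:

```python
import math

# Hypothetical gamma-GLM coefficients on the log-link scale.
beta = {"intercept": 3.5, "male": 0.25, "years_since_licensure": 0.02}

def expected_cases(male, years_since_licensure):
    """Expected diabetes cases per physician = exp(linear predictor)."""
    eta = (beta["intercept"]
           + beta["male"] * male
           + beta["years_since_licensure"] * years_since_licensure)
    return math.exp(eta)  # inverse of the log link

# Summing predictions over a physician registry estimates total cases
# where claims data are incomplete (e.g., non-fee-for-service physicians).
registry = [(1, 20), (0, 5), (1, 12)]  # (sex indicator, years since licensure)
total = sum(expected_cases(m, y) for m, y in registry)
```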

  18. Polymer models of chromosome (re)organization

    NASA Astrophysics Data System (ADS)

    Mirny, Leonid

    The Chromosome Conformation Capture technique (Hi-C) provides comprehensive information about frequencies of spatial interactions between genomic loci. Inferring the 3D organization of chromosomes from these data is a challenging biophysical problem. We develop a top-down approach to biophysical modeling of chromosomes. Starting with a minimal set of biologically motivated interactions, we build ensembles of polymer conformations that can reproduce major features observed in Hi-C experiments. I will present our work on modeling the organization of human metaphase and interphase chromosomes. Our work suggests that active loop-extrusion processes can be a universal mechanism responsible for the formation of domains in interphase and for chromosome compaction in metaphase.

  19. A database and tool for boundary conditions for regional air quality modeling: description and evaluation

    NASA Astrophysics Data System (ADS)

    Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.

    2014-02-01

    Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying lateral boundary conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2001-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone and carbon monoxide vertical profiles. The results show performance is largely within uncertainty estimates for ozone from the Ozone Monitoring Instrument and carbon monoxide from the Measurements Of Pollution In The Troposphere (MOPITT), but there were some notable biases compared with Tropospheric Emission Spectrometer (TES) ozone. Compared with TES, our ozone predictions are high-biased in the upper troposphere, particularly in the south during January. This publication documents the global simulation database, the tool for conversion to LBC, and the evaluation of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.

  20. Very fast road database verification using textured 3D city models obtained from airborne imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV-images. This algorithm contains two processes, which exchange input and output, but basically run independently from each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. 
Depending on the time issue and availability of a geo-database for buildings, the urban terrain reconstruction procedure has semantic models
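One simple way to realize the described mapping of the two per-method distributions onto the three states correct/incorrect/unknown is Dempster-Shafer discounting: the road-state probabilities are committed only in proportion to the belief that the underlying road model is applicable, and the remaining mass goes to the whole frame of discernment ("unknown"). This is a sketch consistent with the abstract, not necessarily the authors' exact combination rule:

```python
def verify(p_correct, p_applicable):
    """Discount a road-state distribution by the model-applicability belief.

    Returns basic belief masses over the three states.
    """
    m_correct = p_applicable * p_correct
    m_incorrect = p_applicable * (1.0 - p_correct)
    m_unknown = 1.0 - p_applicable  # mass not committed either way
    return {"correct": m_correct, "incorrect": m_incorrect, "unknown": m_unknown}

# A method that is 90% sure the road is correct, with 80% belief that
# its road model applies here:
print({k: round(v, 3) for k, v in verify(0.9, 0.8).items()})
# {'correct': 0.72, 'incorrect': 0.08, 'unknown': 0.2}
```

If the model is judged inapplicable (`p_applicable` near 0), almost all mass lands on "unknown", so the road entry is flagged for manual inspection rather than accepted or rejected.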

  1. Thermodynamic modeling for organic solid precipitation

    SciTech Connect

    Chung, T.H.

    1992-12-01

    A generalized predictive model which is based on thermodynamic principle for solid-liquid phase equilibrium has been developed for organic solid precipitation. The model takes into account the effects of temperature, composition, and activity coefficient on the solubility of wax and asphaltenes in organic solutions. The solid-liquid equilibrium K-value is expressed as a function of the heat of melting, melting point temperature, solubility parameter, and the molar volume of each component in the solution. All these parameters have been correlated with molecular weight. Thus, the model can be applied to crude oil systems. The model has been tested with experimental data for wax formation and asphaltene precipitation. The predicted wax appearance temperature is very close to the measured temperature. The model not only can match the measured asphaltene solubility data but also can be used to predict the solubility of asphaltene in organic solvents or crude oils. The model assumes that asphaltenes are dissolved in oil in a true liquid state, not in colloidal suspension, and the precipitation-dissolution process is reversible by changing thermodynamic conditions. The model is thermodynamically consistent and has no ambiguous assumptions.
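The K-value description above corresponds to the classical ideal-solubility relation for solid-liquid equilibrium, optionally corrected by a Scatchard-Hildebrand (regular-solution) activity coefficient built from the solubility parameter and molar volume. The sketch below uses textbook forms and the convention K = x_liquid/x_solid (so K < 1 below the melting point); it is not the paper's exact correlation:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def sle_k_value(T, T_m, dH_m, v_molar=None, delta_i=None, delta_mix=None):
    """Solid-liquid equilibrium K-value (liquid/solid mole-fraction ratio).

    T, T_m in K; dH_m (heat of melting) in J/mol; v_molar in m^3/mol;
    delta_i, delta_mix (solubility parameters) in Pa^0.5.
    """
    ln_k = (dH_m / R) * (1.0 / T_m - 1.0 / T)  # ideal-solubility term
    if v_molar is not None:
        # Scatchard-Hildebrand regular-solution activity correction.
        ln_gamma = v_molar * (delta_i - delta_mix) ** 2 / (R * T)
        ln_k -= ln_gamma
    return math.exp(ln_k)

# Below the melting point the K-value drops under 1: the heavy
# component partitions toward the solid (wax) phase.
k = sle_k_value(T=290.0, T_m=330.0, dH_m=60_000.0)
```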

  2. Solubility Database

    National Institute of Standards and Technology Data Gateway

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes of the International Union of Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  3. Discovery of approximate concepts in clinical databases based on a rough set model

    NASA Astrophysics Data System (ADS)

    Tsumoto, Shusaku

    2000-04-01

    Rule discovery methods have been introduced to find useful and unexpected patterns in databases. However, one of the most important problems with these methods is that the extracted rules carry only positive knowledge and lack the negative information that medical experts need, for example to confirm whether a patient will suffer from symptoms caused by a drug side-effect. This paper first discusses the characteristics of medical reasoning and defines positive and negative rules based on a rough set model. Algorithms for the induction of positive and negative rules are then introduced. The proposed method was evaluated on clinical databases, and the experimental results show that several interesting patterns were discovered, such as a rule describing a relation between urticaria caused by antibiotics and food.
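In rough-set rule induction, a candidate rule "if R then d" is usually scored by its accuracy (how often R implies d) and its coverage (how much of d is captured by R); positive rules need high accuracy, while negative (exclusion) rules arise from conditions whose coverage is near 1, so their absence rules the diagnosis out. The toy table below is illustrative, not from the paper:

```python
# Toy clinical table; attribute values and diagnoses are invented.
records = [
    {"fever": 1, "rash": 1, "dx": "measles"},
    {"fever": 1, "rash": 1, "dx": "measles"},
    {"fever": 1, "rash": 0, "dx": "flu"},
    {"fever": 0, "rash": 1, "dx": "allergy"},
]

def stats(attr, value, dx):
    """Accuracy and coverage of the rule: attr == value  =>  diagnosis dx."""
    covered = [r for r in records if r[attr] == value]
    target = [r for r in records if r["dx"] == dx]
    hit = [r for r in covered if r["dx"] == dx]
    accuracy = len(hit) / len(covered)   # P(dx | condition)
    coverage = len(hit) / len(target)    # P(condition | dx)
    return accuracy, coverage

acc, cov = stats("rash", 1, "measles")
# accuracy 2/3: rash suggests measles but is not conclusive (positive-rule candidate)
# coverage 1.0: every measles case has a rash, so "no rash" excludes measles
# (negative-rule candidate)
```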

  4. Modeling and Design of Capacitive Micromachined Ultrasonic Transducers Based on Database Optimization

    NASA Astrophysics Data System (ADS)

    Chang, M. W.; Gwo, T. J.; Deng, T. M.; Chang, H. C.

    2006-04-01

    A Capacitive Micromachined Ultrasonic Transducer (CMUT) simulation database, based on electromechanical coupling theory, has been fully developed for versatile capacitive microtransducer design and analysis. Both arithmetic and graphic configurations are used to find optimal parameters based on serial coupling simulations. The key modeling parameters identified can effectively improve a microtransducer's characteristics and reliability. This method can reduce design time and fabrication cost by eliminating trial-and-error procedures. Various microtransducers with optimized characteristics can be developed economically using the developed database. A simulation to design an ultrasonic microtransducer is presented as a worked example. The dependent relationship between membrane geometry, vibration displacement and output response is demonstrated. The electromechanical coupling effects, mechanical impedance and frequency response are also taken into consideration for optimal microstructures. The microdevice parameters with the best output signal response are predicted, and microfabrication processing constraints and realities are also taken into consideration.

  5. Associative memory model for searching an image database by image snippet

    NASA Astrophysics Data System (ADS)

    Khan, Javed I.; Yun, David Y.

    1994-09-01

    This paper presents an associative memory called an multidimensional holographic associative computing (MHAC), which can be potentially used to perform feature based image database query using image snippet. MHAC has the unique capability to selectively focus on specific segments of a query frame during associative retrieval. As a result, this model can perform search on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image database because quite often the cognitive index features in the snippet are statistically weak. Unlike, the conventional artificial associative memories, MHAC uses a two level representation and incorporates additional meta-knowledge about the reliability status of segments of information it receives and forwards. In this paper we present the analysis of focus characteristics of MHAC.

  6. Chromatin fiber functional organization: Some plausible models

    NASA Astrophysics Data System (ADS)

    Lesne, A.; Victor, J.-M.

    2006-03-01

    We here present a modeling study of the chromatin fiber functional organization. Multi-scale modeling is required to unravel the complex interplay between the fiber and the DNA levels. It suggests plausible scenarios, including both physical and biological aspects, for fiber condensation, its targeted decompaction, and transcription regulation. We conclude that a major role of the chromatin fiber structure might be to endow DNA with allosteric potentialities and to control DNA transactions by an epigenetic tuning of its mechanical and topological constraints.

  7. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2014-08-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover project, and 30 arc-sec leaf area index and fraction of absorbed photosynthetically active radiation data from the ESA GlobCarbon project. Simulations are carried out for the metropolitan area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated to each grid. ARPS is initialized using the Global Forecasting System with 0.5°-resolution data from the National Center of Environmental Prediction, which is also used every 3 h as lateral boundary condition. Topographic shading is turned on and two soil layers are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering three periods of time are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, grid resolution, topographic and land-use databases. Our comparisons show overall good agreement between simulated and observational data, mainly for the potential temperature and the wind speed fields, and clearly indicate that the use of high-resolution databases improves significantly our ability to predict the local atmospheric circulation.

  8. Seismic hazard assessment for Myanmar: Earthquake model database, ground-motion scenarios, and probabilistic assessments

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.

    2015-12-01

    We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar, along with hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database incorporates active fault data from previous studies. Using parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we determined earthquake recurrence models for the seismogenic sources. To evaluate ground-shaking behaviour in different tectonic regimes, we conducted a series of tests matching modelled ground motions to the felt intensities of past earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equations (GMPEs) of Atkinson and Boore (2003) best fit the behaviour of subduction events, while the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPEs of Akkar and Cagnan (2010) best fit crustal earthquakes. We thus incorporated the best-fitting GMPEs and site conditions, based on Vs30 (the average shear-wave velocity down to 30 m depth) derived from topographic-slope analysis and microtremor array measurements, to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, since seismic sources there produce earthquakes at short intervals and/or their most recent events occurred long ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazard for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.
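    The recurrence models mentioned above rest on the Gutenberg-Richter relation, log10(N) = a - b·M, where N is the annual number of events with magnitude at least M. A minimal sketch of how annual rates and return periods follow from that relation (the a and b values below are arbitrary placeholders, not the Myanmar source parameters):

    ```python
    def gr_annual_rate(magnitude: float, a: float, b: float) -> float:
        """Annual number of earthquakes with magnitude >= `magnitude`,
        from the Gutenberg-Richter relation log10(N) = a - b*M."""
        return 10.0 ** (a - b * magnitude)

    def gr_return_period(magnitude: float, a: float, b: float) -> float:
        """Mean recurrence interval in years for events >= `magnitude`."""
        return 1.0 / gr_annual_rate(magnitude, a, b)

    # Placeholder parameters for illustration only: a = 4.0, b = 1.0
    rate_m5 = gr_annual_rate(5.0, a=4.0, b=1.0)       # 0.1 events per year
    period_m6 = gr_return_period(6.0, a=4.0, b=1.0)   # 100-year return period
    ```

    With b = 1, each unit increase in magnitude makes events ten times rarer, which is why the elapsed time since the last large event matters so much for the hazard estimate.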

  9. Epidemiology of Occupational Accidents in Iran Based on Social Security Organization Database

    PubMed Central

    Mehrdad, Ramin; Seifmanesh, Shahdokht; Chavoshi, Farzaneh; Aminian, Omid; Izadi, Nazanin

    2014-01-01

    Background: Today, occupational accidents are one of the most important problems in the industrial world. Due to the lack of an appropriate system for registration and reporting, there are no accurate statistics on occupational accidents worldwide, especially in developing countries. Objectives: The aim of this study is an epidemiological assessment of occupational accidents in Iran. Materials and Methods: Information on available occupational accidents in the Social Security Organization was extracted from accident reporting and registration forms. In this cross-sectional study, gender, age, economic activity, type of accident and injured body part were described for 22,158 accidents registered during 2008. Results: The occupational accident rate was 253 per 100,000 workers in 2008. Of the injured workers, 98.2% were men. The mean age of injured workers was 32.07 ± 9.12 years, and the highest percentage belonged to the 25-34 year age group. In our study, most accidents occurred in the basic metals industry, the electrical and non-electrical machinery industry, and the construction industry. Falls from height and crush injuries were the most prevalent accidents. The upper and lower extremities were the most commonly injured body parts. Conclusion: Due to the high rate of accidents in the metal and construction industries, engineering controls, the use of appropriate protective equipment and worker safety training seem necessary. PMID:24719699

  10. Longitudinal driver model and collision warning and avoidance algorithms based on human driving databases

    NASA Astrophysics Data System (ADS)

    Lee, Kangwon

    Intelligent vehicle systems, such as Adaptive Cruise Control (ACC) and Collision Warning/Collision Avoidance (CW/CA), are currently under development, and several companies already offer ACC on selected models. Control and decision-making algorithms for these systems are commonly evaluated through extensive computer simulations and well-defined scenarios on test tracks. However, they have rarely been validated with large quantities of naturalistic human driving data. This dissertation utilized two University of Michigan Transportation Research Institute databases (Intelligent Cruise Control Field Operational Test and System for Assessment of the Vehicle Motion Environment) in the development and evaluation of longitudinal driver models and CW/CA algorithms. First, to examine how drivers normally follow other vehicles, the vehicle motion data from the databases were processed using a Kalman smoother. The processed data were then used to fit and evaluate existing longitudinal driver models (e.g., the linear follow-the-leader model, Newell's special model, the nonlinear follow-the-leader model, the linear optimal control model, the Gipps model and the optimal velocity model). A modified version of the Gipps model was proposed and found to be accurate in both the microscopic (vehicle) and macroscopic (traffic) senses. Second, to examine emergency braking behavior and to evaluate CW/CA algorithms, the concepts of signal detection theory and a performance index suitable for unbalanced situations (few threatening data points vs. many safe data points) were introduced. Selected existing CW/CA algorithms were found to have a performance index (the geometric mean of the true-positive rate and precision) not exceeding 20%. To optimize the parameters of the CW/CA algorithms, a new numerical optimization scheme was developed to replace the original data points with their representative statistics. A new CW/CA algorithm was proposed, which was found to score higher than 55% in the
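    The performance index described above, the geometric mean of the true-positive rate and precision, is well suited to unbalanced alarm data because accuracy alone would be dominated by the many easy "safe" samples. A generic sketch of that statistic (not the dissertation's code; the confusion-matrix counts are hypothetical):

    ```python
    import math

    def cwca_performance_index(tp: int, fp: int, fn: int) -> float:
        """Geometric mean of true-positive rate and precision for a
        collision-warning classifier on unbalanced data."""
        tpr = tp / (tp + fn) if (tp + fn) else 0.0        # recall on true threats
        precision = tp / (tp + fp) if (tp + fp) else 0.0  # alarms that were real
        return math.sqrt(tpr * precision)

    # Hypothetical counts: 40 correct alarms, 10 missed threats, 160 false alarms
    index = cwca_performance_index(tp=40, fp=160, fn=10)  # sqrt(0.8 * 0.2) = 0.4
    ```

    An algorithm that alarms constantly gets perfect recall but terrible precision, and vice versa; the geometric mean punishes both failure modes, which is why existing algorithms scored below 20% here.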

  11. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Nonlinear Analysis.

    PubMed

    Henley, Brandon; Shin, Dae; Zhang, Rong; Marmarelis, Vasilis

    2016-07-09

    Objective: As an extension to our study comparing a putative compartmental and a data-based model of linear dynamic cerebral autoregulation (CA) and CO2 vasomotor reactivity (VR), we study the CA-VR process in a nonlinear context. Methods: We use the concept of Principal Dynamic Modes (PDMs) in order to obtain a compact and more easily interpretable input-output model. This in silico study permits the use of input data with a dynamic range large enough to simulate the classic homeostatic CA and VR curves using a putative structural model of the regulatory control of the cerebral circulation. The PDM models obtained using theoretical and experimental data are compared. Results: It was found that the PDM model accurately reflected both the simulated static CA and VR curves in the Associated Nonlinear Functions (ANFs). Similar to experimental observations, the PDM model essentially separates the pressure-flow relationship into a linear component with fast dynamics and nonlinear components with slow dynamics. In addition, we found good qualitative agreement between the PDMs representing the dynamic theoretical and experimental CO2-flow relationships. Conclusion: Under the modeling assumptions, and in light of other experimental findings, we hypothesize that PDMs obtained from experimental data correspond to passive fluid-dynamical and active regulatory mechanisms. Significance: Both hypothesis-based and data-based modeling approaches can be combined to offer insight into the physiological basis of the PDM model obtained from human experimental data. The PDM modeling approach potentially offers a practical way to quantify the status of specific regulatory mechanisms in the CA-VR process.

  12. Modeling organic nitrogen conversions in activated sludge bioreactors.

    PubMed

    Makinia, Jacek; Pagilla, Krishna; Czerwionka, Krzysztof; Stensel, H David

    2011-01-01

    For biological nutrient removal (BNR) systems designed to maximize nitrogen removal, the effluent total nitrogen (TN) concentration may range from 2.0 to 4.0 g N/m(3), with about 25-50% in the form of organic nitrogen (ON). In this study, current approaches to modeling organic N conversions (separate processes vs. constant contents of organic fractions) were compared. A new conceptual model of ON conversions was developed and combined with Activated Sludge Model No. 2d (ASM2d). The model provides new insight into the processes of ammonification, biomass decay, and hydrolysis of particulate and colloidal ON (PON and CON, respectively). The three major ON fractions incorporated are defined as dissolved (DON) (<0.1 µm), CON (0.1-1.2 µm) and PON (>1.2 µm). Each major fraction was further divided into two sub-fractions, biodegradable and non-biodegradable. Experimental data were collected during field measurements and laboratory experiments conducted at the "Wschod" WWTP (570,000 PE) in Gdansk (Poland). Accurate steady-state predictions of the DON and CON profiles were possible by varying the ammonification and hydrolysis rates under different electron acceptor conditions. With the same model parameter set, the behaviors of both the inorganic N forms (NH4-N, NOX-N) and the ON forms (DON, CON) in the batch experiments were predicted. The challenges in accurately simulating and predicting effluent ON levels from BNR systems stem from the analytical methods for direct ON measurement (replacing TKN) and the lack of a sufficiently large database (in-process measurements, dynamic variations of the ON concentrations) that could be used to determine parameter value ranges.
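    The size-based fractionation described above lends itself to a simple classifier. A minimal sketch using the cut-offs quoted in the study (DON < 0.1 µm, CON 0.1-1.2 µm, PON > 1.2 µm); the handling of sizes exactly at the 0.1 and 1.2 µm boundaries is an assumption, since the abstract does not specify it:

    ```python
    def classify_on_fraction(particle_size_um: float) -> str:
        """Assign an organic nitrogen fraction from particle size in µm,
        using the study's cut-offs.  Boundary sizes are counted in the
        coarser fraction by assumption."""
        if particle_size_um < 0.1:
            return "DON"  # dissolved organic nitrogen
        if particle_size_um <= 1.2:
            return "CON"  # colloidal organic nitrogen
        return "PON"      # particulate organic nitrogen

    fractions = [classify_on_fraction(s) for s in (0.05, 0.5, 2.0)]
    # → ['DON', 'CON', 'PON']
    ```

    In the model each of these three fractions is then split again into biodegradable and non-biodegradable sub-fractions, giving six ON state variables in total.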

  13. A grid-based model for integration of distributed medical databases.

    PubMed

    Luo, Yongxing; Jiang, Lijun; Zhuang, Tian-ge

    2009-12-01

    Grid has emerged recently as an integration infrastructure for the sharing and coordinated use of diverse resources in dynamic, distributed environments. In this paper, we present a prototype system for the integration of heterogeneous medical databases based on Grid technology, which can provide a uniform access interface and an efficient query mechanism for different medical databases. After presenting the architecture of the prototype system, which employs corresponding Grid services and middleware technologies, we analyze in detail its basic functional components, including OGSA-DAI, the metadata model, transaction management, and query processing, which cooperate with each other to enable uniform access to and seamless integration of the underlying heterogeneous medical databases. Then, we test the effectiveness and performance of the system through a query instance, analyze the experimental results, and discuss some issues relating to practical medical applications. Although the prototype system has so far been implemented and tested in a simulated hospital information environment, the underlying principles are applicable to practical applications.

  14. Modeling global organic aerosol formation and growth

    NASA Astrophysics Data System (ADS)

    Tsimpidi, Alexandra; Karydis, Vlasios; Pandis, Spyros; Lelieveld, Jos

    2014-05-01

    A computationally efficient framework for describing organic aerosol (OA)-gas partitioning and chemical aging has been developed and implemented in the EMAC atmospheric chemistry-climate model. This model simulates the formation of primary (POA) and secondary organic aerosols (SOA) from semi-volatile (SVOC), intermediate-volatility (IVOC) and volatile organic compounds (VOC). POA is divided into two groups, OA from fossil fuel combustion and OA from biomass burning, each represented by surrogate species with saturation concentrations at 298 K of 0.1, 10, 1000 and 100,000 µg m-3. The first two surrogate species in each group represent the SVOC, while the other surrogate species represent the IVOC. Photochemical reactions that change the volatility of the organics in the gas phase are taken into account. The oxidation products from each group of precursors (SVOC, IVOC, and VOC) are lumped into an additional set of oxidized surrogate species (S-SOA, I-SOA, and V-SOA, respectively) in order to track their source of origin. This model is used to i) estimate the relative contributions of SOA and POA to total OA, ii) determine how SOA concentrations are affected by biogenic and anthropogenic emissions, and iii) evaluate the effect of photochemical aging and long-range transport on the OA budget over specific regions.

  15. Self-organized model of cascade spreading

    NASA Astrophysics Data System (ADS)

    Gualdi, S.; Medo, M.; Zhang, Y.-C.

    2011-01-01

    We study simultaneous price drops of real stocks and show that for high drop thresholds they follow a power-law distribution. To reproduce these collective downturns, we propose a minimal self-organized model of cascade spreading based on a probabilistic response of the system elements to stress conditions. This model is solvable using the theory of branching processes and the mean-field approximation. For a wide range of parameters, the system is in a critical state and displays a power-law cascade-size distribution similar to the empirically observed one. We further generalize the model to reproduce volatility clustering and other observed properties of real stocks.
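    The cascade mechanism described above can be illustrated with a minimal Galton-Watson-style branching simulation: each failed element stresses a fixed number of neighbours, each of which fails independently with some probability. This is a simplified sketch, not the paper's exact model (which uses a probabilistic stress response and a mean-field treatment); the fan-out and failure probability are illustrative parameters:

    ```python
    import random

    def cascade_size(p_fail: float, fan_out: int, max_size: int = 10_000,
                     rng=None) -> int:
        """Size of one cascade in a branching-process approximation.

        Each element in the current failure frontier stresses `fan_out`
        neighbours; each stressed neighbour fails independently with
        probability `p_fail`.  The process is critical, with heavy-tailed
        cascade sizes, when fan_out * p_fail == 1.
        """
        rng = rng or random.Random()
        size = 1      # the initial failure
        frontier = 1  # elements that failed in the last step
        while frontier and size < max_size:
            failures = sum(1 for _ in range(frontier * fan_out)
                           if rng.random() < p_fail)
            size += failures
            frontier = failures
        return size

    # At criticality (4 * 0.25 == 1) most cascades die out quickly, but a
    # heavy tail of large cascades appears, as in the power-law observation.
    sizes = [cascade_size(0.25, 4, rng=random.Random(i)) for i in range(1000)]
    ```

    Below criticality the size distribution decays exponentially; only near fan_out * p_fail = 1 does the power-law tail reported for the empirical drop data emerge.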

  16. Theory and modeling of stereoselective organic reactions.

    PubMed

    Houk, K N; Paddon-Row, M N; Rondan, N G; Wu, Y D; Brown, F K; Spellmeyer, D C; Metz, J T; Li, Y; Loncharich, R J

    1986-03-07

    Theoretical investigations of the transition structures of additions and cycloadditions reveal details about the geometries of bond-forming processes that are not directly accessible by experiment. The conformational analysis of transition states has been developed from theoretical generalizations about the preferred angle of attack by reagents on multiple bonds and predictions of conformations with respect to partially formed bonds. Qualitative rules for the prediction of the stereochemistries of organic reactions have been devised, and semi-empirical computational models have also been developed to predict the stereoselectivities of reactions of large organic molecules, such as nucleophilic additions to carbonyls, electrophilic hydroborations and cycloadditions, and intramolecular radical additions and cycloadditions.

  17. Assessment of cloud cover in climate models and reanalysis databases with ISCCP over the Mediterranean region

    NASA Astrophysics Data System (ADS)

    Enriquez, Aaron; Calbo, Josep; Gonzalez, Josep-Abel

    2013-04-01

    Clouds are an important regulator of climate due to their influence on the water balance of the atmosphere and their interaction with solar and infrared radiation. At any time, clouds cover a large percentage of the Earth's surface, but their distribution is very irregular in time and space, which makes evaluating their influence on climate a difficult task. At present there are few studies comparing the cloud cover of current climate models with observational data. In this study, the database of monthly cloud cover provided by the International Satellite Cloud Climatology Project (ISCCP) has been chosen as the reference against which we compare the output of CMIP5 climate models and reanalysis databases, on the South-Europe-Mediterranean (SEM) domain established by the Intergovernmental Panel on Climate Change (IPCC) [1]. The study covers the period between 1984 and 2009, and the seasonal performance of the cloud cover estimates has also been examined. To quantify the agreement between the databases we use two statistics: the bias and the SkillScore, which is based on the probability density functions (PDFs) of the databases [2]. We also use Taylor diagrams to visualize the statistics. Results indicate that there are areas where, for some periods of the year, the models accurately describe what is observed by ISCCP (e.g., Northern Africa in autumn), compared to other areas and periods for which the agreement is lower (the Iberian Peninsula in winter and the Black Sea in the summer months). However, these differences should be attributed not only to the limitations of the climate models, but possibly also to the data provided by ISCCP. References [1] Intergovernmental Panel on Climate Change (2007) Fourth Assessment Report: Climate Change 2007: Working Group I Report: The Physical Science Basis. [2] Ranking the AR4 climate models over the Murray Darling Basin using simulated maximum temperature, minimum temperature and precipitation. Int J Climatol 28

  18. Transforming the Premier Perspective® Hospital Database into the Observational Medical Outcomes Partnership (OMOP) Common Data Model

    PubMed Central

    Makadia, Rupa; Ryan, Patrick B.

    2014-01-01

    Background: The Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) has been implemented on various claims and electronic health record (EHR) databases, but had not been applied to a hospital transactional database. This study addresses the implementation of the OMOP CDM on the U.S. Premier hospital database. Methods: We designed and implemented an extract, transform, load (ETL) process to convert the Premier hospital database into the OMOP CDM. Standard charge codes in Premier were mapped between the OMOP version 4.0 vocabulary and standard charge descriptions. Visit logic was added to impute the visit dates. We tested the conversion by replicating a published study using the raw and transformed databases. The Premier hospital database was also compared to a claims database with regard to the prevalence of disease. Findings: The transformation into the CDM resulted in 1% of the data being discarded due to errors in the raw data. A total of 91.4% of Premier standard charge codes were mapped successfully to a standard vocabulary. The replication study produced a similar distribution of patient characteristics. The comparison to the claims data yields notable similarities and differences among the conditions represented in both databases. Discussion: The transformation of the Premier database into the OMOP CDM version 4.0 adds value in conducting analyses due to the successful mapping of drugs and procedures. The addition of visit logic gives an ordinality to drugs and procedures that was not present prior to the transformation. Comparing conditions in Premier against a claims database can provide an understanding of Premier's potential use in pharmacoepidemiology studies that are traditionally conducted via claims databases. Conclusion and Next Steps: The conversion of the Premier database into the OMOP CDM 4.0 was completed successfully. The next steps include refinement of vocabularies and mappings and continual maintenance of

  19. Improving Quality and Quantity of Contributions: Two Models for Promoting Knowledge Exchange with Shared Databases

    ERIC Educational Resources Information Center

    Cress, U.; Barquero, B.; Schwan, S.; Hesse, F. W.

    2007-01-01

    Shared databases are used for knowledge exchange in groups. Whether a person is willing to contribute knowledge to a shared database presents a social dilemma: Each group member saves time and energy by not contributing any information to the database and by using the database only to retrieve information which was contributed by others. But if…

  20. InterMOD: integrated data and tools for the unification of model organism research

    PubMed Central

    Sullivan, Julie; Karra, Kalpana; Moxon, Sierra A. T.; Vallejos, Andrew; Motenko, Howie; Wong, J. D.; Aleksic, Jelena; Balakrishnan, Rama; Binkley, Gail; Harris, Todd; Hitz, Benjamin; Jayaraman, Pushkala; Lyne, Rachel; Neuhauser, Steven; Pich, Christian; Smith, Richard N.; Trinh, Quang; Cherry, J. Michael; Richardson, Joel; Stein, Lincoln; Twigger, Simon; Westerfield, Monte; Worthey, Elizabeth; Micklem, Gos

    2013-01-01

    Model organisms are widely used for understanding basic biology, and have significantly contributed to the study of human disease. In recent years, genomic analysis has provided extensive evidence of widespread conservation of gene sequence and function amongst eukaryotes, allowing insights from model organisms to help decipher gene function in a wider range of species. The InterMOD consortium is developing an infrastructure based around the InterMine data warehouse system to integrate genomic and functional data from a number of key model organisms, leading the way to improved cross-species research. So far including budding yeast, nematode worm, fruit fly, zebrafish, rat and mouse, the project has set up data warehouses, synchronized data models, and created analysis tools and links between data from different species. The project unites a number of major model organism databases, improving both the consistency and accessibility of comparative research, to the benefit of the wider scientific community. PMID:23652793

  1. Hydrodynamic interaction of two swimming model micro-organisms

    NASA Astrophysics Data System (ADS)

    Ishikawa, Takuji; Simmonds, M. P.; Pedley, T. J.

    2006-12-01

    In order to understand the rheological and transport properties of a suspension of swimming micro-organisms, it is necessary to analyse the fluid-dynamical interaction of pairs of such swimming cells. In this paper, a swimming micro-organism is modelled as a squirming sphere with a prescribed tangential surface velocity, referred to as a squirmer. The centre of mass of the sphere may be displaced from the geometric centre (bottom-heaviness). The effects of inertia and Brownian motion are neglected, because real micro-organisms swim at very low Reynolds numbers but are too large for Brownian effects to be important. The interaction of two squirmers is calculated analytically in the limits of small and large separations, and is also calculated numerically using a boundary-element method. The analytical and numerical results for the translational and rotational velocities and for the stresslet of two squirmers agree very well. We sought to generate a database for an interacting pair of squirmers from which one can easily predict the motion of a collection of squirmers. The behaviour of two interacting squirmers is also discussed phenomenologically. The results for the trajectories of two squirmers show that the squirmers first attract each other, then change their orientation dramatically when in near contact, and finally separate from each other. The effect of bottom-heaviness is considerable. Restricting the trajectories to two dimensions is shown to give misleading results. Some movies of interacting squirmers are available with the online version of the paper.

  2. Direct data-based model predictive control with applications to structures, robotic swarms, and aircraft

    NASA Astrophysics Data System (ADS)

    Barlow, Jonathan S.

    A direct method to design data-based model predictive controllers is presented. The design method uses system identification techniques to identify model predictive controller gains directly from a set of excitation input and disturbance-corrupted output data. The design is direct in that the controller gains can be designed from input and disturbance-corrupted output data without an intermediate identification step. The direct design is simpler than previous two-step designs and reduces the computation time for the design of the controller. It also enables an adaptive implementation capable of identifying controller gains online. The direct data-based controllers can be used for vibration suppression, disturbance rejection, and tracking, and are applied to structures, robot swarms and aircraft. For vibration suppression and disturbance rejection, the data-based controller has the advantage that any disturbances present in the design data are automatically rejected without needing to know their details. For robot swarms, extensions are made for formation control and obstacle avoidance, and the controller can be implemented as a decentralized controller, in real time and in parallel, on individual vehicles with communication limited to past input and past output data. A formulation for improving the robustness of the controller to parametric variations is also developed. Finally, the adaptive implementation is shown to be useful for the control of linear time-varying systems and has been successfully implemented to control a linear time-varying model of a Cruise Efficient Short Take-Off and Landing (CESTOL) type aircraft.

  3. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    SciTech Connect

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.

  4. Model Organisms and Traditional Chinese Medicine Syndrome Models

    PubMed Central

    Xu, Jin-Wen

    2013-01-01

    Traditional Chinese medicine (TCM) is an ancient medical system with a unique cultural background. Nowadays, more and more Western countries are accepting it due to its therapeutic efficacy. However, the safety and the precise pharmacological mechanisms of TCM are still uncertain. Given the potential application of TCM in healthcare, it is necessary to construct a scientific evaluation system with TCM characteristics and to benchmark how it differs from the standards of Western medicine. Model organisms have played an important role in the understanding of basic biological processes; they are easier to study in certain research respects, and the information gained can be extrapolated to other species. Despite the controversy over suitable syndrome animal models under TCM theoretical guidance, it is unquestionable that many model organisms should be used in studies of TCM modernization, which will bring modern scientific standards into this ancient medical system. In this review, we aim to summarize the utilization of model organisms in the construction of TCM syndrome models and to highlight the relevance of modern medicine to TCM syndrome animal models. This review will serve as a foundation for further research on model organisms and their application in TCM syndrome models. PMID:24381636

  5. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…

  6. Creating a standard models for the specialized GIS database and their functions in solving forecast tasks

    NASA Astrophysics Data System (ADS)

    Sharapatov, Abish

    2015-04-01

    Standard models of skarn-magnetite deposits in the folded regions of Kazakhstan are constructed using generalized geological and geophysical parameters of similar existing deposits. Such models might be based on the Sarybay, Sokolovskoe and other deposits of the Valeryanovskaya structural-facies zone (SFZ) in the Torgay paleorift structure, which are located in the north of the SFZ; the forecasting area is located in the south of the SFZ, in the northern Aral Sea region. These models are derived from a study of the deep structure of the region using geophysical data. The upper and deep zones were studied by separating the gravity and magnetic fields into regional and local components, and seismic and geoelectric data for the region were used in the interpretation. Thus, the similarity between the northern and southern parts of the SFZ has been identified in geophysical terms, in both regional and local geophysical characteristics. Creating standard models of skarn-magnetite deposits for a GIS database allows forecast criteria for this deposit type to be highlighted. These include: - the presence of fault zones; - a thickness of volcanic strata of about 2 km or more, and a total thickness of circum-ore metasomatic rocks of about 1.5 km or more; - the spatial positions and geometry of the ore bodies: steeply dipping bodies within medium gabbroic intrusions and at their contacts with carbonate-dolomitic strata; - the presence in the geological section of a near-surface zone with an electrical resistivity of 200 Ohm·m, corresponding to Devonian and Early Carboniferous volcanic sediments and volcanics associated with subvolcanic bodies and intrusions; - a relatively shallow depth to the zone with Vp = 6.4-6.8 km/s (an uplifted Conrad boundary, thickening of the granulite-basic layer); - positive, high-amplitude values of the magnetic and gravitational fields. A geological forecast model is produced by structuring geodata based on detailed analysis and the aggregation of geological and formal knowledge bases on standard targets. Aggregation method of

  7. ADANS database specification

    SciTech Connect

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  8. Charophytes: Evolutionary Giants and Emerging Model Organisms

    PubMed Central

    Domozych, David S.; Popper, Zoë A.; Sørensen, Iben

    2016-01-01

    Charophytes are the group of green algae whose ancestral lineage gave rise to land plants, in what was a profoundly transformative event in the natural history of the planet. Extant charophytes exhibit many features that are similar to those found in land plants, and their relatively simple phenotypes make them efficacious organisms for the study of many fundamental biological phenomena. Several taxa, including Micrasterias, Penium, Chara, and Coleochaete, are valuable model organisms for the study of the cell biology, development, physiology and ecology of plants. New and rapidly expanding molecular studies are increasing the use of charophytes, which, in turn, will dramatically enhance our understanding of the evolution of plants and the adaptations that allowed for survival on land. The Frontiers in Plant Science series on "Charophytes" provides an assortment of new research reports and reviews on charophytes and their emerging significance as model plants. PMID:27777578

  9. Subject and authorship of records related to the Organization for Tropical Studies (OTS) in BINABITROP, a comprehensive database about Costa Rican biology.

    PubMed

    Monge-Nájera, Julián; Nielsen-Muñoz, Vanessa; Azofeifa-Mora, Ana Beatriz

    2013-06-01

    BINABITROP is a bibliographical database of more than 38000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical/ International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database of tropical biology literature.

  10. Modeling plasmonic efficiency enhancement in organic photovoltaics.

    PubMed

    Taff, Y; Apter, B; Katz, E A; Efron, U

    2015-09-10

    Efficiency enhancement of bulk heterojunction (BHJ) organic solar cells by means of the plasmonic effect is investigated by using finite-difference time-domain (FDTD) optical simulations combined with analytical modeling of exciton dissociation and charge transport efficiencies. The proposed method provides an improved analysis of the cell performance compared to previous FDTD studies. The results of the simulations predict an 11.8% increase in the cell's short circuit current with the use of Ag nano-hexagons.

  11. Modeling Attrition in Organizations from Email Communication

    DTIC Science & Technology

    2013-09-08

    Modeling people's online behavior in relation to their real-world social context is an interesting and important problem. Applications include attrition and churn in online multiplayer games, user involvement and movement in online social networking platforms, and employee turnover within an organization; the same problem of reducing churn rates appears across all of these settings.

  12. Evaluation of a vortex-based subgrid stress model using DNS databases

    NASA Technical Reports Server (NTRS)

    Misra, Ashish; Lund, Thomas S.

    1996-01-01

    The performance of a SubGrid Stress (SGS) model for Large-Eddy Simulation (LES) developed by Misra & Pullin (1996) is studied for forced and decaying isotropic turbulence on a 32^3 grid. The physical viability of the model assumptions is tested using DNS databases. The results from LES of forced turbulence at Taylor Reynolds number R_lambda ≈ 90 are compared with filtered DNS fields. Probability density functions (pdfs) of the subgrid energy transfer, total dissipation, and the stretch of the subgrid vorticity by the resolved velocity-gradient tensor show reasonable agreement with the DNS data. The model is also tested in LES of decaying isotropic turbulence, where it correctly predicts the decay rate and energy spectra measured by Comte-Bellot & Corrsin (1971).

  13. Estimating the computational limits of detection of microbial non-model organisms.

    PubMed

    Kuhring, Mathias; Renard, Bernhard Y

    2015-10-01

    Mass spectrometry has become a key instrument for proteomic studies of single bacteria as well as microbial communities. However, the identification of spectra from MS/MS experiments is still challenging, in particular for non-model organisms. Due to the limited amount of related protein reference sequences, underexplored organisms often remain completely unidentified or their spectra match to peptides of an uncertain degree of relation. Alternative strategies such as error-tolerant spectra searches or proteogenomic approaches may reduce the number of unidentified spectra and lead to peptide matches on more related taxonomic levels. However, to what extent these strategies may be successful is difficult to judge prior to an MS/MS experiment. In this contribution, we introduce a method to estimate the suitability of databases of interest. Further, it allows estimating the possible influence of error-tolerant searches and proteogenomic approaches on databases of interest with respect to the number of unidentified spectra and the taxonomic distances of identified spectra. Furthermore, we provide an implementation of our approach that supports experimental design by evaluating the benefit and need of different search strategies with respect to present databases and organisms under study. We provide several examples which highlight the different effects of additional search strategies on databases and organisms with varying amounts of known related species available.
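    The kind of pre-experiment estimate described here can be illustrated with a toy calculation: the fraction of a sample's tryptic peptides that occur verbatim in a reference database is a crude upper bound on the share of identifiable spectra. This is a sketch of the idea with invented sequences and function names, not the authors' implementation.

```python
import re

def tryptic_peptides(protein, min_len=6):
    """Crude in silico trypsin digest: cleave after K or R unless followed by P."""
    fragments = re.split(r"(?<=[KR])(?!P)", protein)
    return {f for f in fragments if len(f) >= min_len}

def identifiable_fraction(sample_proteins, reference_proteins):
    """Toy proxy for database suitability: the share of the sample's tryptic
    peptides that are present verbatim in the reference database."""
    sample = set().union(*(tryptic_peptides(p) for p in sample_proteins))
    reference = set().union(*(tryptic_peptides(p) for p in reference_proteins))
    return len(sample & reference) / len(sample) if sample else 0.0
```

    A fraction near zero would suggest that error-tolerant searches or proteogenomic augmentation of the database is needed before running the MS/MS experiment.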

  14. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    SciTech Connect

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-09-15

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. Support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. The optimized model, consisting of 9 probes, had 99% sensitivity and 97% specificity. This model
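    The general modeling strategy described here, a support vector machine wrapped around a feature (probe) selection step, can be sketched with scikit-learn's recursive feature elimination. The data, sizes, and parameters below are synthetic and for illustration only; they have no connection to TG-GATEs.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

# Synthetic expression matrix: 60 samples x 50 probes, where only the
# first 5 probes separate carcinogen (1) from non-carcinogen (0) samples.
rng = np.random.default_rng(42)
y = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 50))
X[y == 1, :5] += 2.0  # informative probes

# Wrapper-type selection: recursively eliminate probes using a linear SVM.
selector = RFE(SVC(kernel="linear"), n_features_to_select=9, step=5)
selector.fit(X, y)
picked = np.flatnonzero(selector.support_)   # indices of retained probes
accuracy = selector.score(X, y)              # training accuracy only
```

    A real assessment would of course report cross-validated sensitivity and specificity on held-out compounds rather than training accuracy.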

  15. Clinical risk assessment of organ manifestations in systemic sclerosis: a report from the EULAR Scleroderma Trials And Research group database

    PubMed Central

    Walker, U A; Tyndall, A; Czirják, L; Denton, C; Farge‐Bancel, D; Kowal‐Bielecka, O; Müller‐Ladner, U; Bocelli‐Tyndall, C; Matucci‐Cerinic, M

    2007-01-01

    Background Systemic sclerosis (SSc) is a multisystem autoimmune disease, which is classified into a diffuse cutaneous (dcSSc) and a limited cutaneous (lcSSc) subset according to the skin involvement. In order to better understand the vascular, immunological and fibrotic processes of SSc and to guide its treatment, the EULAR Scleroderma Trials And Research (EUSTAR) group was formed in June 2004. Aims and methods EUSTAR prospectively collects the Minimal Essential Data Set (MEDS) on all sequential patients fulfilling the American College of Rheumatology diagnostic criteria in participating centres. We aimed to characterise the demographic, clinical and laboratory characteristics of disease presentation in SSc and analysed EUSTAR baseline visits. Results In April 2006, a total of 3656 patients (1349 with dcSSc and 2101 with lcSSc) were enrolled in 102 centres and 30 countries. 1330 individuals had autoantibodies against Scl70 and 1106 had anticentromere antibodies. 87% of patients were women. On multivariate analysis, scleroderma subsets (dcSSc vs lcSSc), antibody status and age at onset of Raynaud's phenomenon, but not gender, were found to be independently associated with the prevalence of organ manifestations. Autoantibody status in this analysis was more closely associated with clinical manifestations than were SSc subsets. Conclusion dcSSc and lcSSc subsets are associated with particular organ manifestations, but in this analysis the clinical distinction seemed to be superseded by an antibody‐based classification in predicting some scleroderma complications. The EUSTAR MEDS database facilitates the analysis of clinical patterns in SSc, and contributes to the standardised assessment and monitoring of SSc internationally. PMID:17234652

  16. A database and model to support proactive management of sediment-related sewer blockages.

    PubMed

    Rodríguez, Juan Pablo; McIntyre, Neil; Díaz-Granados, Mario; Maksimović, Cedo

    2012-10-01

    Due to increasing customer and political pressures, and more stringent environmental regulations, sediment and other blockage issues are now a high priority when assessing sewer system operational performance. Blockages caused by sediment deposits reduce sewer system reliability and demand remedial action at considerable operational cost. Consequently, procedures are required for identifying which parts of the sewer system are in most need of proactive removal of sediments. This paper presents an exceptionally long (7.5 years) and spatially detailed (9658 grid squares--0.03 km² each--covering a population of nearly 7.5 million) data set obtained from a customer complaints database in Bogotá (Colombia). The sediment-related blockage data are modelled using homogeneous and non-homogeneous Poisson process models. In most of the analysed areas the inter-arrival time between blockages can be represented by the homogeneous process, but there are a considerable number of areas (up to 34%) for which there is strong evidence of non-stationarity. In most of these cases, the mean blockage rate increases over time, signifying a continual deterioration of the system despite repairs, this being particularly marked for pipe and gully pot related blockages. The physical properties of the system (mean pipe slope, diameter and pipe length) have a clear but weak influence on observed blockage rates. The Bogotá case study illustrates the potential value of customer complaints databases and formal analysis frameworks for proactive sewerage maintenance scheduling in large cities.
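    The homogeneous versus non-homogeneous distinction drawn above can be checked on a blockage record with a standard trend test on event times. The Laplace test below, applied to invented arrival times, is one common way to detect the increasing blockage rate the authors describe; it is an illustration of the statistical idea, not the paper's code or data.

```python
import numpy as np
from scipy import stats

def laplace_trend_test(event_times, t_end):
    """Laplace test for trend in a point process observed on [0, t_end].

    Under a homogeneous Poisson process the statistic is ~N(0, 1); large
    positive values indicate an increasing rate (system deterioration)."""
    n = len(event_times)
    u = (np.mean(event_times) - t_end / 2) / (t_end * np.sqrt(1.0 / (12 * n)))
    p_value = 2 * (1 - stats.norm.cdf(abs(u)))
    return u, p_value

# Invented blockage times (months) whose density grows linearly with time.
rng = np.random.default_rng(0)
times = np.sort(90 * np.sqrt(rng.random(60)))
u, p = laplace_trend_test(times, t_end=90.0)
mean_rate = len(times) / 90.0  # blockages per month under the homogeneous model
```

    Grid squares where the test rejects stationarity would then be candidates for the non-homogeneous Poisson model and, operationally, for more frequent proactive sediment removal.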

  17. ModBase, a database of annotated comparative protein structure models and associated resources

    PubMed Central

    Pieper, Ursula; Webb, Benjamin M.; Dong, Guang Qiang; Schneidman-Duhovny, Dina; Fan, Hao; Kim, Seung Joong; Khuri, Natalia; Spill, Yannick G.; Weinkam, Patrick; Hammel, Michal; Tainer, John A.; Nilges, Michael; Sali, Andrej

    2014-01-01

    ModBase (http://salilab.org/modbase) is a database of annotated comparative protein structure models. The models are calculated by ModPipe, an automated modeling pipeline that relies primarily on Modeller for fold assignment, sequence-structure alignment, model building and model assessment (http://salilab.org/modeller/). ModBase currently contains almost 30 million reliable models for domains in 4.7 million unique protein sequences. ModBase allows users to compute or update comparative models on demand, through an interface to the ModWeb modeling server (http://salilab.org/modweb). ModBase models are also available through the Protein Model Portal (http://www.proteinmodelportal.org/). Recently developed associated resources include the AllosMod server for modeling ligand-induced protein dynamics (http://salilab.org/allosmod), the AllosMod-FoXS server for predicting a structural ensemble that fits an SAXS profile (http://salilab.org/allosmod-foxs), the FoXSDock server for protein–protein docking filtered by an SAXS profile (http://salilab.org/foxsdock), the SAXS Merge server for automatic merging of SAXS profiles (http://salilab.org/saxsmerge) and the Pose & Rank server for scoring protein–ligand complexes (http://salilab.org/poseandrank). In this update, we also highlight two applications of ModBase: a PSI:Biology initiative to maximize the structural coverage of the human alpha-helical transmembrane proteome and a determination of structural determinants of human immunodeficiency virus-1 protease specificity. PMID:24271400

  18. A Conceptual Model and Database to Integrate Data and Project Management

    NASA Astrophysics Data System (ADS)

    Guarinello, M. L.; Edsall, R.; Helbling, J.; Evaldt, E.; Glenn, N. F.; Delparte, D.; Sheneman, L.; Schumaker, R.

    2015-12-01

    Data management is critically foundational to doing effective science in our data-intensive research era and, done well, can enhance collaboration, increase the value of research data, and support requirements by funding agencies to make scientific data and other research products available through publicly accessible online repositories. However, there are few examples (but see the Long-term Ecological Research Network Data Portal) of these data being provided in a manner that allows exploration within the context of the research process: What specific research questions do these data seek to answer? What data were used to answer these questions? What data would have been helpful to answer these questions but were not available? We propose an agile conceptual model and database design, as well as example results, that integrate data management with project management, not only to maximize the value of research data products but to enhance collaboration during the project and the process of project management itself. In our project, which we call 'Data Map,' we used agile principles by adopting a user-focused approach and by designing our database to be simple, responsive, and expandable. We initially designed Data Map for the Idaho EPSCoR project "Managing Idaho's Landscapes for Ecosystem Services (MILES)" (see https://www.idahoecosystems.org//) and will present example results for this work. We consulted with our primary users (project managers, data managers, and researchers) to design the Data Map. Results will be useful to project managers and to funding agencies reviewing progress because they will readily provide answers to the questions "For which research projects/questions are data available and/or being generated by MILES researchers?" and "Which research projects/questions are associated with each of the 3 primary questions from the MILES proposal?" To be responsive to the needs of the project, we chose to streamline our design for the prototype

  19. Data-based modeling of the geomagnetosphere with an IMF-dependent magnetopause

    NASA Astrophysics Data System (ADS)

    Tsyganenko, N. A.

    2014-01-01

    The paper presents first results of the data-based modeling of the geomagnetospheric magnetic field, using the data of Polar, Geotail, Cluster, and Time History of Events and Macroscale Interactions during Substorms satellites, taken during the period 1995-2012 and covering 123 storm events with SYM-H ≥ -200 nT. The most important innovations in the model are (1) taking into account the interplanetary magnetic field (IMF)-dependent shape of the model magnetopause, (2) a physically more consistent global deformation of the equatorial current sheet due to the geodipole tilt, (3) symmetric and partial components of the ring current are calculated based on a realistic background magnetic field, instead of a purely dipolar field, used in earlier models, and (4) the validity region on the nightside is extended to ˜40-50 RE. The model field is confined within a magnetopause, based on Lin et al. (2010) empirical model, driven by the dipole tilt angle, solar wind pressure, and IMF Bz. A noteworthy finding is a significant dependence of the magnetotail flux connection across the equatorial plane on the model magnetopause flaring rate, controlled by the southward component of the IMF.

  20. The LBNL Water Heater Retail Price Database

    SciTech Connect

    Lekov, Alex; Glover, Julie; Lutz, Jim

    2000-10-01

    Lawrence Berkeley National Laboratory developed the LBNL Water Heater Price Database to compile and organize information used in the revision of U.S. energy efficiency standards for water heaters. The Database contains all major components that contribute to the consumer cost of water heaters, including basic retail prices, sales taxes, installation costs, and any associated fees. In addition, the Database provides manufacturing data on the features and design characteristics of more than 1100 different water heater models. Data contained in the Database was collected over a two-year period from 1997 to 1999.

  1. The NCBI Taxonomy database.

    PubMed

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI, and provides a means for clustering elements within other domains of the NCBI web site, for internal linking between domains of the Entrez system and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.

  2. Modelling motions within the organ of Corti

    NASA Astrophysics Data System (ADS)

    Ni, Guangjian; Baumgart, Johannes; Elliott, Stephen

    2015-12-01

    Most cochlear models used to describe the basilar membrane vibration along the cochlea are concerned with macromechanics, and often assume that the organ of Corti moves as a single unit, ignoring the individual motion of different components. New experimental technologies provide the opportunity to measure the dynamic behaviour of different components within the organ of Corti, but only for certain types of excitation. It is thus still difficult to directly measure every aspect of cochlear dynamics, particularly for acoustic excitation of the fully active cochlea. The present work studies the dynamic response of a model of the cross-section of the cochlea, at the microscopic level, using the finite element method. The elastic components are modelled with plate elements and the perilymph and endolymph are modelled with inviscid fluid elements. The individual motion of each component within the organ of Corti is calculated with dynamic pressure loading on the basilar membrane and the motions of the experimentally accessible parts are compared with measurements. The reticular lamina moves as a stiff plate, without much bending, and is pivoting around a point close to the region of the inner hair cells, as observed experimentally. The basilar membrane shows a slightly asymmetric mode shape, with maximum displacement occurring between the second-row and the third-row of the outer hair cells. The dynamic response is also calculated, and compared with experiments, when the system is driven by the outer hair cells. The receptance of the basilar membrane motion and of the deflection of the hair bundles of the outer hair cells is thus obtained, when driven either acoustically or electrically. In this way, the fully active linear response of the basilar membrane to acoustic excitation can be predicted by using a linear superposition of the calculated receptances and a defined gain function for the outer hair cell feedback.
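    The superposition step at the end amounts to solving a feedback equation: if H_ap is the receptance of basilar-membrane motion to acoustic pressure, H_ae its receptance to outer-hair-cell forcing, and g the feedback gain, then v = H_ap*p + H_ae*(g*v). The sketch below uses invented scalar, single-frequency values purely to show the algebra, not quantities from the model.

```python
import numpy as np

def active_response(H_ap, H_ae, g, p):
    """Active basilar-membrane response from passive receptances and OHC gain.

    Superposition of acoustic drive and OHC feedback force gives
    v = H_ap*p + H_ae*(g*v), hence v = H_ap*p / (1 - H_ae*g)."""
    return H_ap * p / (1.0 - H_ae * g)

# Invented complex receptances at a single frequency (illustrative only).
H_ap = 1e-3 * np.exp(-0.3j)  # BM displacement per unit acoustic pressure
H_ae = 5e-4 * np.exp(-0.1j)  # BM displacement per unit OHC force
g = 800.0                    # equivalent OHC feedback gain
v = active_response(H_ap, H_ae, g, p=1.0)  # amplified relative to passive H_ap
```

    Setting g to zero recovers the passive response, and instability of the feedback loop corresponds to H_ae*g approaching 1.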

  3. Theory and modeling of stereoselective organic reactions

    SciTech Connect

    Houk, K.N.; Paddon-Row, M.N.; Rondan, N.G.; Wu, Y.D.; Brown, F.K.; Spellmeyer, D.C.; Metz, J.T.; Li, Y.; Loncharich, R.J.

    1986-03-07

    Theoretical investigations of the transition structures of additions and cycloadditions reveal details about the geometries of bond-forming processes that are not directly accessible by experiment. The conformational analysis of transition states has been developed from theoretical generalizations about the preferred angle of attack by reagents on multiple bonds and predictions of conformations with respect to partially formed bonds. Qualitative rules for the prediction of the stereochemistries of organic reactions have been devised, and semi-empirical computational models have also been developed to predict the stereoselectivities of reactions of large organic molecules, such as nucleophilic additions to carbonyls, electrophilic hydroborations and cycloadditions, and intramolecular radical additions and cycloadditions. 52 references, 7 figures.

  4. MODEL-BASED HYDROACOUSTIC BLOCKAGE ASSESSMENT AND DEVELOPMENT OF AN EXPLOSIVE SOURCE DATABASE

    SciTech Connect

    Matzel, E; Ramirez, A; Harben, P

    2005-07-11

    We are continuing the development of the Hydroacoustic Blockage Assessment Tool (HABAT) which is designed for use by analysts to predict which hydroacoustic monitoring stations can be used in discrimination analysis for any particular event. The research involves two approaches (1) model-based assessment of blockage, and (2) ground-truth data-based assessment of blockage. The tool presents the analyst with a map of the world, and plots raypath blockages from stations to sources. The analyst inputs source locations and blockage criteria, and the tool returns a list of blockage status from all source locations to all hydroacoustic stations. We are currently using the tool in an assessment of blockage criteria for simple direct-path arrivals. Hydroacoustic data, predominantly from earthquake sources, are read in and assessed for blockage at all available stations. Several measures are taken. First, can the event be observed at a station above background noise? Second, can we establish a backazimuth from the station to the source? Third, how large is the decibel drop at one station relative to other stations? These observational results are then compared with model estimates to identify the best set of blockage criteria and used to create a set of blockage maps for each station. The model-based estimates are currently limited by the coarse bathymetry of existing databases and by the limitations inherent in the raytrace method. In collaboration with BBN Inc., the Hydroacoustic Coverage Assessment Model (HydroCAM) that generates the blockage files that serve as input to HABAT, is being extended to include high-resolution bathymetry databases in key areas that increase model-based blockage assessment reliability. An important aspect of this capability is to eventually include reflected T-phases where they reliably occur and to identify the associated reflectors. To assess how well any given hydroacoustic discriminant works in separating earthquake and in-water explosion

  5. Studying Oogenesis in a Non-model Organism Using Transcriptomics: Assembling, Annotating, and Analyzing Your Data.

    PubMed

    Carter, Jean-Michel; Gibbs, Melanie; Breuker, Casper J

    2016-01-01

    This chapter provides a guide to processing and analyzing RNA-Seq data in a non-model organism. This approach was implemented for studying oogenesis in the Speckled Wood Butterfly Pararge aegeria. We focus in particular on how to perform a more informative primary annotation of your non-model organism by implementing our multi-BLAST annotation strategy. We also provide a general guide to other essential steps in the next-generation sequencing analysis workflow. Before undertaking these methods, we recommend you familiarize yourself with command line usage and fundamental concepts of database handling. Most of the operations in the primary annotation pipeline can be performed in Galaxy (or equivalent standalone versions of the tools) and through the use of common database operations (e.g. to remove duplicates) but other equivalent programs and/or custom scripts can be implemented for further automation.
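    One of the "common database operations" mentioned, removing duplicates, can be sketched as keeping only the best-scoring alignment per transcript from BLAST tabular output. The column layout assumed below is the standard -outfmt 6; the function name and file handling are ours, not the chapter's pipeline.

```python
import csv

def best_hits(blast_tsv_path):
    """Keep the single best hit (lowest e-value) for each query sequence.

    Assumes BLAST tabular output (-outfmt 6): qseqid sseqid pident length
    mismatch gapopen qstart qend sstart send evalue bitscore."""
    best = {}
    with open(blast_tsv_path, newline="") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            qseqid, sseqid, evalue = row[0], row[1], float(row[10])
            if qseqid not in best or evalue < best[qseqid][1]:
                best[qseqid] = (sseqid, evalue)
    return best
```

    The same one-best-hit-per-query reduction is what a multi-database annotation strategy would perform once per reference database before merging the results.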

  6. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
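    The retrieval-by-visual-content step can be sketched as ranking database images by cosine similarity of their feature vectors. The vectors here are placeholders for whatever color, shape, and texture descriptors a real system such as the one described would extract.

```python
import numpy as np

def retrieve(query_vec, db_vecs, k=5):
    """Rank database images by cosine similarity to a query feature vector.

    Returns the indices of the k best matches and their similarities."""
    q = query_vec / np.linalg.norm(query_vec)
    D = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    sims = D @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]
```

    A production system would put the normalized vectors behind an index structure rather than scanning them linearly, which is what the paper's "advanced database-indexing structures" provide.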

  7. Meta-Analysis in Human Neuroimaging: Computational Modeling of Large-Scale Databases

    PubMed Central

    Fox, Peter T.; Lancaster, Jack L.; Laird, Angela R.; Eickhoff, Simon B.

    2016-01-01

    Spatial normalization—applying standardized coordinates as anatomical addresses within a reference space—was introduced to human neuroimaging research nearly 30 years ago. Over these three decades, an impressive series of methodological advances have adopted, extended, and popularized this standard. Collectively, this work has generated a methodologically coherent literature of unprecedented rigor, size, and scope. Large-scale online databases have compiled these observations and their associated meta-data, stimulating the development of meta-analytic methods to exploit this expanding corpus. Coordinate-based meta-analytic methods have emerged and evolved in rigor and utility. Early methods computed cross-study consensus, in a manner roughly comparable to traditional (nonimaging) meta-analysis. Recent advances now compute coactivation-based connectivity, connectivity-based functional parcellation, and complex network models powered from data sets representing tens of thousands of subjects. Meta-analyses of human neuroimaging data in large-scale databases now stand at the forefront of computational neurobiology. PMID:25032500

  8. Making designer mutants in model organisms.

    PubMed

    Peng, Ying; Clark, Karl J; Campbell, Jarryd M; Panetta, Magdalena R; Guo, Yi; Ekker, Stephen C

    2014-11-01

    Recent advances in the targeted modification of complex eukaryotic genomes have unlocked a new era of genome engineering. From the pioneering work using zinc-finger nucleases (ZFNs), to the advent of the versatile and specific TALEN systems, and most recently the highly accessible CRISPR/Cas9 systems, we now possess an unprecedented ability to analyze developmental processes using sophisticated designer genetic tools. In this Review, we summarize the common approaches and applications of these still-evolving tools as they are being used in the most popular model developmental systems. Excitingly, these robust and simple genomic engineering tools also promise to revolutionize developmental studies using less well established experimental organisms.

  9. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
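    The "universal scale factor ratio" idea is plain arithmetic: optimize one scale factor and obtain the other two by multiplying by fixed ratios. The ratio and frequency values below are placeholders to show the mechanics, not the paper's fitted values.

```python
def derived_scale_factors(lambda_zpe, ratio_fund, ratio_harm):
    """Given one optimized ZPE scale factor and two universal ratios
    (fundamental/ZPE and harmonic/ZPE), return all three scale factors."""
    return lambda_zpe, lambda_zpe * ratio_fund, lambda_zpe * ratio_harm

# Placeholder values for illustration only.
zpe, fund, harm = derived_scale_factors(0.975, 1.015, 1.035)
scaled = [harm * f for f in (1650.0, 3100.0)]  # scale computed harmonic frequencies
```

    This is why deriving three key scale factors costs only the effort of optimizing one of them.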

  10. A One-Degree Seismic Tomographic Model Based on a Sensitivity Kernel Database

    NASA Astrophysics Data System (ADS)

    Sales de Andrade, E.; Liu, Q.; Manners, U.; Lee-Varisco, E.; Ma, Z.; Masters, G.

    2013-12-01

    Seismic tomography is instrumental in mapping 3D velocity structures of the Earth's interior based on travel-time measurements and waveform differences. Although both ray theory and other asymptotic methods have been successfully employed in global tomography, they are less accurate for long-period waves or steep velocity gradients. They also lack the ability to predict 'non-geometrical' effects such as those for the core diffracted phases (Pdiff, Sdiff) which are crucial for mapping heterogeneities in the lowermost mantle (D'' layer). On the other hand, sensitivity kernels can be accurately calculated with no approximations by the interaction of forward and adjoint wavefields, both numerically simulated by spectral element methods. We have previously shown that by taking advantage of the symmetry of 1D reference models, we can efficiently and speedily construct sensitivity kernels of both P and S wavespeeds based on the simulation and storage of forward and adjoint strain fields for select source and receiver geometries. This technique has been used to create a database of strain fields as well as sensitivity kernels for phases typically used in global inversions. We also performed picks for 27,000 Sdiff, 35,000 Pdiff, 400,000 S, and 600,000 P phases and 33,000 SS-S, 33,000 PP-P, and 41,000 ScS-S differential phases, which provide much improved coverage of the globe. Using these travel-times and our sensitivity kernel database in a parallel LSQR inversion, we generate an updated tomographic model with 1° resolution. Using this improved coverage, we investigate differences between global models inverted based on ray theory and finite-frequency kernels.
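    The LSQR inversion step can be illustrated on a toy linear system d = G m, where each row of G holds sensitivities of one travel-time measurement to the model cells and d holds the measured residuals. The sizes, sparsity, and damping below are arbitrary, and this serial sketch stands in for the parallel implementation described.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Toy tomographic system: 500 measurements constraining 100 model cells.
rng = np.random.default_rng(1)
G = sparse_random(500, 100, density=0.05, format="csr", random_state=1)
m_true = rng.normal(size=100)                 # true slowness perturbations
d = G @ m_true + 0.01 * rng.normal(size=500)  # travel-time residuals + noise

# Damped least squares, the usual regularized form for tomographic inversion.
m_est = lsqr(G, d, damp=0.1)[0]
misfit = np.linalg.norm(G @ m_est - d)
```

    In a finite-frequency inversion the rows of G would be the stored sensitivity kernels sampled onto the model grid rather than ray-path lengths.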

  11. Mouse Tumor Biology (MTB): a database of mouse models for human cancer.

    PubMed

    Bult, Carol J; Krupke, Debra M; Begley, Dale A; Richardson, Joel E; Neuhauser, Steven B; Sundberg, John P; Eppig, Janan T

    2015-01-01

    The Mouse Tumor Biology (MTB; http://tumor.informatics.jax.org) database is a unique online compendium of mouse models for human cancer. MTB provides online access to expertly curated information on diverse mouse models for human cancer and interfaces for searching and visualizing data associated with these models. The information in MTB is designed to facilitate the selection of strains for cancer research and is a platform for mining data on tumor development and patterns of metastases. MTB curators acquire data through manual curation of peer-reviewed scientific literature and from direct submissions by researchers. Data in MTB are also obtained from other bioinformatics resources including PathBase, the Gene Expression Omnibus and ArrayExpress. Recent enhancements to MTB improve the association between mouse models and human genes commonly mutated in a variety of cancers as identified in large-scale cancer genomics studies, provide new interfaces for exploring regions of the mouse genome associated with cancer phenotypes and incorporate data and information related to Patient-Derived Xenograft models of human cancers.

  12. Modeling disordered morphologies in organic semiconductors.

    PubMed

    Neumann, Tobias; Danilov, Denis; Lennartz, Christian; Wenzel, Wolfgang

    2013-12-05

    Organic thin film devices are investigated for many diverse applications, including light-emitting diodes, organic photovoltaics, and organic field-effect transistors. Modeling of their properties on the basis of their detailed molecular structure requires the generation of representative morphologies, many of which are amorphous. Because the time scales for the formation of the molecular structure are long, we have developed a linear-scaling single-molecule deposition protocol which generates morphologies by simulation of vapor deposition of molecular films. We have applied this protocol to systems comprising argon, buckminsterfullerene, N,N'-di(naphthalen-1-yl)-N,N'-diphenyl-benzidine, mer-tris(8-hydroxy-quinoline)aluminum(III), and phenyl-C61-butyric acid methyl ester, with and without postdeposition relaxation of the individually deposited molecules. The proposed single-molecule deposition protocol leads to the formation of highly ordered morphologies in the argon and buckminsterfullerene systems when postdeposition relaxation is used to locally anneal the configuration in the vicinity of the newly deposited molecule. The other systems formed disordered amorphous morphologies, and the postdeposition local relaxation step has only a small effect on the characteristics of the disordered morphology in comparison to the materials forming crystals.
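    The idea of depositing one particle at a time and then relaxing it locally can be sketched with the classic "random deposition with surface relaxation" lattice toy. This is far simpler than the molecular-dynamics protocol above and is not the authors' method; it only shows how a local post-deposition relaxation step smooths the growing film.

```python
import random

# 1+1D random deposition toy: each particle lands on a random column and,
# when `relax` is on, settles onto the lowest of columns i-1, i, i+1
# (periodic boundaries) -- a cartoon of local postdeposition annealing.

def deposit(n_columns, n_particles, relax=True, seed=1):
    random.seed(seed)
    h = [0] * n_columns
    for _ in range(n_particles):
        i = random.randrange(n_columns)
        if relax:
            candidates = [(i - 1) % n_columns, i, (i + 1) % n_columns]
            i = min(candidates, key=lambda j: h[j])   # local relaxation
        h[i] += 1
    return h

heights = deposit(50, 500)
print("mean height:", sum(heights) / len(heights))
```

    Running with `relax=False` gives a visibly rougher height profile for the same particle count.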

  13. The Neotoma Paleoecology Database

    NASA Astrophysics Data System (ADS)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  14. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  15. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high quality models needed to estimate population...

  16. Organic acid modeling and model validation: Workshop summary. Final report

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.
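    The role an organic acid representation plays in such a model can be illustrated with generic monoprotic-acid speciation: the dissociated fraction [A-]/C_T as a function of pH is what contributes organic anion charge to the solution charge balance. This is a textbook calculation, not the actual representation adopted for MAGIC, and the pKa value is illustrative.

```python
# Generic monoprotic organic-acid speciation (NOT the MAGIC formulation):
# fraction of total organic acid present as the dissociated anion A-.

def dissociated_fraction(pH, pKa):
    h = 10.0 ** (-pH)      # [H+] in mol/L
    ka = 10.0 ** (-pKa)
    return ka / (ka + h)   # [A-] / ([HA] + [A-])

for pH in (4.0, 4.7, 5.5):
    f = dissociated_fraction(pH, pKa=4.7)   # illustrative pKa
    print(f"pH {pH}: {100 * f:.0f}% dissociated")
```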

  18. Engineering the object-relation database model in O-Raid

    NASA Technical Reports Server (NTRS)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems and those of a general-purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.
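    The notion of a relation whose column holds objects of more than one class, queried by predicate, can be sketched as follows. The class and attribute names are invented for illustration; O-Raid itself exposes C++ objects, not Python.

```python
# Sketch of a heterogeneous object column queried by predicate.

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.141592653589793 * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# A relation: rows are tuples; the "shape" column is heterogeneous.
shapes_relation = [
    ("row1", Circle(1.0)),
    ("row2", Square(2.0)),
    ("row3", Circle(3.0)),
]

# Predicate-based query over the object column, via a method both classes share:
big = [key for key, shape in shapes_relation if shape.area() > 3.5]
print(big)
```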

  19. Volcanogenic Massive Sulfide Deposits of the World - Database and Grade and Tonnage Models

    USGS Publications Warehouse

    Mosier, Dan L.; Berger, Vladimir I.; Singer, Donald A.

    2009-01-01

    Grade and tonnage models are useful in quantitative mineral-resource assessments. The models and database presented in this report are an update of earlier publications about volcanogenic massive sulfide (VMS) deposits. These VMS deposits include what were formerly classified as kuroko, Cyprus, and Besshi deposits. The update was necessary because of new information about some deposits; changes in reported grades, tonnages, ages, or locations of some deposits; and reclassification of subtypes. In this report we have added new VMS deposits and removed a few incorrectly classified deposits. This global compilation of VMS deposits contains 1,090 deposits; however, it was not our intent to include every known deposit in the world. The data were recently used for mineral-deposit density models (Mosier and others, 2007; Singer, 2008). In this paper, 867 deposits were used to construct revised grade and tonnage models. Our new models are based on a reclassification of deposits by host lithologies: Felsic, Bimodal-Mafic, and Mafic volcanogenic massive sulfide deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types occur in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment and economists to determine the possible economic viability of these resources. Thus, mineral-deposit models play a central role in presenting geoscience
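    Grade-tonnage models are conventionally summarized by percentiles of a lognormal fit to the deposit population. The toy below draws tonnages from a lognormal with invented parameters (not the VMS figures from this report) and reports the percentile values used in assessments, where by convention "P90" means the tonnage exceeded by 90% of deposits.

```python
import math
import random

# Toy grade-tonnage percentile summary from a lognormal tonnage model.
# Parameters are hypothetical (median 1.5 Mt), not fitted to the VMS data.

random.seed(42)
mu, sigma = math.log(1.5), 1.8
tonnages = sorted(random.lognormvariate(mu, sigma) for _ in range(10_000))

def percentile(sorted_xs, p):
    """Value at the p-th percentile of an ascending-sorted sample."""
    return sorted_xs[int(p / 100 * (len(sorted_xs) - 1))]

for p in (90, 50, 10):
    # P90 = tonnage exceeded by 90% of deposits = 10th percentile of the sample
    print(f"P{p}: {percentile(tonnages, 100 - p):.2f} Mt")
```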

  20. Drug-target interaction prediction: databases, web servers and computational models.

    PubMed

    Chen, Xing; Yan, Chenggang Clarence; Zhang, Xiaotian; Zhang, Xu; Dai, Feng; Yin, Jian; Zhang, Yongdong

    2016-07-01

    Identification of drug-target interactions is an important process in drug discovery. Although high-throughput screening and other biological assays are becoming available, experimental methods for drug-target interaction identification remain extremely costly, time-consuming, and challenging. Therefore, various computational models have been developed to predict potential drug-target associations on a large scale. In this review, databases and web servers involved in drug-target identification and drug discovery are summarized, and we introduce state-of-the-art computational models for drug-target interaction prediction, including network-based and machine learning-based methods. Specifically, for the machine learning-based methods, much attention is paid to supervised and semi-supervised models, which differ essentially in their treatment of negative samples. Although significant improvements in drug-target interaction prediction have been obtained by many effective computational models, both network-based and machine learning-based methods have their respective disadvantages. Furthermore, we discuss future directions for network-based drug discovery and network approaches for personalized drug discovery based on personalized medicine, genome sequencing, tumor clone-based networks, and cancer hallmark-based networks. Finally, we discuss a new evaluation and validation framework, and the formulation of the drug-target interaction prediction problem as a more realistic regression problem based on quantitative bioactivity data.
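    A minimal instance of the network-based family surveyed above is common-neighbor scoring on the bipartite interaction graph: an unobserved drug-target pair scores highly when the drug shares many known targets with drugs that already hit the target. The drug and target names below are made up.

```python
# Minimal network-based inference sketch for drug-target prediction:
# score pair (drug, target) by target-profile overlap with drugs known
# to interact with `target`.  Toy data, invented names.

interactions = {
    ("drugA", "T1"), ("drugA", "T2"),
    ("drugB", "T1"), ("drugB", "T2"), ("drugB", "T3"),
    ("drugC", "T4"),
}

def common_neighbor_score(drug, target):
    """Sum of shared-target counts with each drug already hitting `target`."""
    my_targets = {t for d, t in interactions if d == drug}
    score = 0
    for other in {d for d, t in interactions if t == target and d != drug}:
        other_targets = {t for d, t in interactions if d == other}
        score += len(my_targets & other_targets)
    return score

# drugA and drugB share T1 and T2, so drugA-T3 is a plausible new interaction:
print(common_neighbor_score("drugA", "T3"))
```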

  1. Combining a weed traits database with a population dynamics model predicts shifts in weed communities

    PubMed Central

    Storkey, J; Holst, N; Bøjer, O Q; Bigongiali, F; Bocci, G; Colbach, N; Dorner, Z; Riemens, M M; Sartorato, I; Sønderskov, M; Verschwele, A

    2015-01-01

    A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters. These relationships were used to build a generic model, operating at the level of functional traits, to simulate the impact of increasing herbicide and fertiliser use on virtual weeds along gradients of seed weight and maximum height. The model generated ‘fitness contours’ (defined as population growth rates) within this trait space in different scenarios, onto which two sets of weed species, defined as common or declining in the UK, were mapped. The effect of increasing inputs on the weed flora was successfully simulated; 77% of common species were predicted to have stable or increasing populations under high fertiliser and herbicide use, in contrast with only 29% of the species that have declined. Future development of the WTDB will aim to increase the number of species covered, incorporate a wider range of traits and analyse intraspecific variability under contrasting management and environments. PMID:26190870
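    The 'fitness contour' idea reduces to computing a population growth rate lambda from demographic parameters filtered by management. The toy annual-weed model below is a generic sketch with invented parameter values, not the WTDB-derived model: lambda is the sum of new recruitment (germination x herbicide survival x fecundity x seed return) and dormant seedbank carry-over.

```python
# Toy annual-weed population model: lambda for N_{t+1} = lambda * N_t.
# All parameter values are invented for illustration.

def growth_rate(seedbank_survival, germination, herbicide_survival,
                seeds_per_plant, seed_return):
    recruits = germination * herbicide_survival * seeds_per_plant * seed_return
    dormant = (1 - germination) * seedbank_survival
    return recruits + dormant

low_input = growth_rate(0.5, 0.3, 0.9, 50.0, 0.2)
high_input = growth_rate(0.5, 0.3, 0.1, 50.0, 0.2)  # strong herbicide filter
print(low_input, high_input)   # lambda > 1 grows, lambda < 1 declines
```

    Evaluating lambda over a grid of trait values (e.g. seed weight mapped to fecundity, height mapped to competitive ability) yields the fitness contours described above.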

  2. Modelling the Geographical Origin of Rice Cultivation in Asia Using the Rice Archaeological Database

    PubMed Central

    Silva, Fabio; Stevens, Chris J.; Weisskopf, Alison; Castillo, Cristina; Qin, Ling; Bevan, Andrew; Fuller, Dorian Q.

    2015-01-01

    We have compiled an extensive database of archaeological evidence for rice across Asia, including 400 sites from mainland East Asia, Southeast Asia and South Asia. This dataset is used to compare several models for the geographical origins of rice cultivation and infer the most likely region(s) for its origins and subsequent outward diffusion. The approach is based on regression modelling wherein goodness of fit is obtained from power law quantile regressions of the archaeologically inferred age versus a least-cost distance from the putative origin(s). The Fast Marching method is used to estimate the least-cost distances based on simple geographical features. The origin region that best fits the archaeobotanical data is also compared to other hypothetical geographical origins derived from the literature, including from genetics, archaeology and historical linguistics. The model that best fits all available archaeological evidence is a dual origin model with two centres for the cultivation and dispersal of rice focused on the Middle Yangtze and the Lower Yangtze valleys. PMID:26327225

  3. Combining a weed traits database with a population dynamics model predicts shifts in weed communities.

    PubMed

    Storkey, J; Holst, N; Bøjer, O Q; Bigongiali, F; Bocci, G; Colbach, N; Dorner, Z; Riemens, M M; Sartorato, I; Sønderskov, M; Verschwele, A

    2015-04-01

    A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters. These relationships were used to build a generic model, operating at the level of functional traits, to simulate the impact of increasing herbicide and fertiliser use on virtual weeds along gradients of seed weight and maximum height. The model generated 'fitness contours' (defined as population growth rates) within this trait space in different scenarios, onto which two sets of weed species, defined as common or declining in the UK, were mapped. The effect of increasing inputs on the weed flora was successfully simulated; 77% of common species were predicted to have stable or increasing populations under high fertiliser and herbicide use, in contrast with only 29% of the species that have declined. Future development of the WTDB will aim to increase the number of species covered, incorporate a wider range of traits and analyse intraspecific variability under contrasting management and environments.

  4. Modelling the Geographical Origin of Rice Cultivation in Asia Using the Rice Archaeological Database.

    PubMed

    Silva, Fabio; Stevens, Chris J; Weisskopf, Alison; Castillo, Cristina; Qin, Ling; Bevan, Andrew; Fuller, Dorian Q

    2015-01-01

    We have compiled an extensive database of archaeological evidence for rice across Asia, including 400 sites from mainland East Asia, Southeast Asia and South Asia. This dataset is used to compare several models for the geographical origins of rice cultivation and infer the most likely region(s) for its origins and subsequent outward diffusion. The approach is based on regression modelling wherein goodness of fit is obtained from power law quantile regressions of the archaeologically inferred age versus a least-cost distance from the putative origin(s). The Fast Marching method is used to estimate the least-cost distances based on simple geographical features. The origin region that best fits the archaeobotanical data is also compared to other hypothetical geographical origins derived from the literature, including from genetics, archaeology and historical linguistics. The model that best fits all available archaeological evidence is a dual origin model with two centres for the cultivation and dispersal of rice focused on the Middle Yangtze and the Lower Yangtze valleys.
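    The least-cost-distance ingredient can be illustrated with Dijkstra's algorithm over a cost grid. This is a discrete stand-in for the Fast Marching method used in the paper (FMM solves the continuous eikonal equation; Dijkstra on a grid is only analogous), and the grid values are invented.

```python
import heapq

# Least-cost distances from an origin cell over a cost grid (Dijkstra).

def least_cost_distances(cost, origin):
    rows, cols = len(cost), len(cost[0])
    dist = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]   # cost of entering the neighbour
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# 1 = easy terrain, 10 = barrier (e.g. mountains); origin at top-left.
grid = [[1, 1, 10],
        [10, 1, 10],
        [10, 1, 1]]
d = least_cost_distances(grid, (0, 0))
print(d[(2, 2)])  # cheapest route threads through the low-cost corridor
```

    Regressing archaeologically inferred site ages against such distances from a candidate origin is then the goodness-of-fit comparison described above.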

  5. A lattice vibrational model using vibrational density of states for constructing thermodynamic databases (Invited)

    NASA Astrophysics Data System (ADS)

    Jacobs, M. H.; Van Den Berg, A. P.

    2013-12-01

    Thermodynamic databases are indispensable tools in materials science and mineral physics for deriving thermodynamic properties in regions of pressure-temperature-composition space for which experimental data are not available or scant. Because the number of phases and substances in a database can be arbitrarily large, thermodynamic formalisms coupled to these databases are often kept as simple as possible to sustain computational efficiency. Although formalisms based on parameterizations of 1 bar thermodynamic data, commonly used in Calphad methodology, meet this requirement, physically unrealistic behavior in properties hampers their application in the pressure regime prevailing in the Earth's lower mantle. The application becomes especially cumbersome when they are applied to planetary mantles of massive super-Earth exoplanets or in the development of pressure scales, where Hugoniot data at extreme conditions are involved. Methods based on the Mie-Grüneisen-Debye formalism have the advantage that physically unrealistic behavior in thermodynamic properties is absent, but due to the simple construction of the vibrational density of states (VDoS), they lack engineering precision in the low-pressure regime, especially at 1 bar pressure, hampering the application of databases incorporating such a formalism to industrial processes. To obtain a method that is generally applicable in the complete stability range of a material, we developed a method based on an alternative use of Kieffer's lattice vibrational formalism. The method requires experimental data to constrain the model parameters and is therefore semi-empirical. It has the advantage that microscopic properties of substances, such as the VDoS, Grüneisen parameters, and electronic and static lattice properties resulting from present-day ab initio methods, can be incorporated to constrain a thermodynamic analysis of experimental data. It produces results free from physically unrealistic behavior at high pressure and temperature.
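    The lattice-vibration machinery the abstract relies on can be made concrete with the simplest VDoS choice: a Debye spectrum g ~ omega^2 (Kieffer's formalism generalizes this), for which the isochoric heat capacity is C_V/(3NkB) = (3/x_D^3) * Int_0^{x_D} x^4 e^x/(e^x-1)^2 dx with x = hbar*omega/(k_B T). The Debye temperature below is illustrative.

```python
import math

# Debye-model heat capacity by numerical integration of the VDoS-weighted
# Einstein function.  Returned in units of 3R, so the Dulong-Petit
# high-temperature limit is 1.0.  theta_d is an illustrative value.

def heat_capacity(T, theta_d, n_steps=20_000):
    xd = theta_d / T
    dx = xd / n_steps
    total = 0.0
    for i in range(1, n_steps + 1):
        x = (i - 0.5) * dx            # midpoint rule avoids x = 0
        total += x**4 * math.exp(x) / (math.exp(x) - 1.0) ** 2 * dx
    return 3.0 / xd**3 * total

print(heat_capacity(3000.0, 800.0))   # high T: approaches Dulong-Petit (1.0)
print(heat_capacity(50.0, 800.0))     # low T: C_V ~ T^3, far below 1
```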

  6. JAK/STAT signalling--an executable model assembled from molecule-centred modules demonstrating a module-oriented database concept for systems and synthetic biology.

    PubMed

    Blätke, Mary Ann; Dittrich, Anna; Rohr, Christian; Heiner, Monika; Schaper, Fred; Marwan, Wolfgang

    2013-06-01

    Mathematical models of molecular networks regulating biological processes in cells or organisms are most frequently designed as sets of ordinary differential equations. Various modularisation methods have been applied to reduce the complexity of models, to analyse their structural properties, to separate biological processes, or to reuse model parts. Taking the JAK/STAT signalling pathway with the extensive combinatorial cross-talk of its components as a case study, we take a natural approach to modularisation by creating one module for each biomolecule. Each module consists of a Petri net and associated metadata and is organised in a database publicly accessible through a web interface. The Petri net describes the reaction mechanism of a given biomolecule and its functional interactions with other components, including relevant conformational states. The database is designed to support the curation, documentation, version control, and update of individual modules, and to assist the user in automatically composing complex models from modules. Biomolecule-centred modules, associated metadata, and database support together allow the automatic creation of models by considering differential gene expression in given cell types or under certain physiological conditions or states of disease. Modularity also facilitates exploring the consequences of alternative molecular mechanisms by comparative simulation of automatically created models, even for users without mathematical skills. Models may be selectively executed as ODE systems, stochastic models, or qualitative models, or as hybrids, and exported in the SBML format. The fully automated generation of models of redesigned networks by metadata-guided modification of modules representing biomolecules with mutated function or specificity is proposed.
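    A place/transition Petri net, the formalism each module is built on, can be executed with a few lines of code. The sketch below is a minimal executor with an invented "receptor binding" module; the real database modules carry far more structure (metadata, versioning, composition rules).

```python
# Minimal place/transition Petri net: places hold tokens; a transition is
# enabled when its input places hold enough tokens, and firing moves tokens
# from inputs to outputs.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy "module" for a receptor: ligand + receptor -> active complex.
net = PetriNet({"ligand": 1, "receptor": 1, "complex": 0})
net.add_transition("bind", {"ligand": 1, "receptor": 1}, {"complex": 1})
net.fire("bind")
print(net.marking)
```

    Composing modules then amounts to merging nets at shared places, which is what the database's automatic model assembly does at scale.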

  7. Coverage of whole proteome by structural genomics observed through protein homology modeling database

    PubMed Central

    Yamaguchi, Akihiro; Go, Mitiko

    2006-01-01

    We have been developing FAMSBASE, a protein homology-modeling database of whole ORFs predicted from genome sequences. The latest update of FAMSBASE (http://daisy.nagahama-i-bio.ac.jp/Famsbase/), which is based on the protein three-dimensional (3D) structures released by November 2003, contains modeled 3D structures for 368,724 open reading frames (ORFs) derived from the genomes of 276 species, namely 17 archaebacterial, 130 eubacterial, 18 eukaryotic and 111 phage genomes. Those 276 genomes are predicted to have 734,193 ORFs in total, and the current FAMSBASE contains protein 3D structures for approximately 50% of the ORF products. However, cases in which a modeled 3D structure covers the whole of an ORF product are rare. When the portion of an ORF with a 3D structure is compared across the three kingdoms of life, approximately 60% of the ORFs in archaebacteria and eubacteria have modeled 3D structures covering almost the entire amino acid sequence; however, the percentage falls to about 30% in eukaryotes. When annual differences in the number of ORFs with modeled 3D structures are calculated, the fraction of modeled 3D structures of soluble proteins has increased by 5% for archaebacteria and by 7% for eubacteria in the last 3 years. Assuming that this rate is maintained and that determination of 3D structures for predicted disordered regions is unattainable, whole soluble-protein model structures of prokaryotes, excluding the putative disordered regions, will be in hand within 15 years. For eukaryotic proteins, they will be in hand within 25 years. The 3D structures we will have at those times are not the 3D structures of the entire proteins encoded in single ORFs, but the 3D structures of separate structural domains. Measuring or predicting the spatial arrangement of structural domains in an ORF will then be a coming issue of structural genomics. PMID:17146617
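    The extrapolation in the abstract is a simple linear projection: coverage gains a fixed number of percentage points per 3-year interval until the attainable target is reached. The figures below (60% current coverage, 95% target, 7 points per period) are illustrative stand-ins chosen to reproduce the quoted ~15-year horizon, not numbers taken verbatim from the paper.

```python
# Linear coverage extrapolation, made explicit.

def years_to_target(current_pct, target_pct, pct_per_period, period_years=3):
    periods = (target_pct - current_pct) / pct_per_period
    return periods * period_years

# Illustrative: prokaryotes at ~60% coverage gaining ~7 points / 3 years,
# aiming for ~95% (disordered regions excluded):
print(years_to_target(60, 95, 7))
```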

  8. Carbonatites of the World, Explored Deposits of Nb and REE - Database and Grade and Tonnage Models

    USGS Publications Warehouse

    Berger, Vladimir I.; Singer, Donald A.; Orris, Greta J.

    2009-01-01

    This report is based on published tonnage and grade data on 58 Nb- and rare-earth-element (REE)-bearing carbonatite deposits that are mostly well explored and are partially mined or contain resources of these elements. The deposits represent only a part of the known 527 carbonatites around the world, but they are characterized by reliable quantitative data on ore tonnages and grades of niobium and REE. Grade and tonnage models are an important component of mineral resource assessments. Carbonatites present one of the main natural sources of niobium and rare-earth elements, the economic importance of which grows consistently. A purpose of this report is to update earlier publications. New information about known deposits, as well as data on new deposits published during the last decade, is incorporated in the present paper. The compiled database (appendix 1; linked to right) contains 60 explored Nb- and REE-bearing carbonatite deposits; resources of 55 of these deposits are taken from publications. In the present updated grade-tonnage model we have added 24 deposits compared with the previous model of Singer (1998). Resources of most deposits are residuum ores in the upper part of carbonatite bodies. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types are present in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. 
Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment, and the grade and tonnage models allow economists to

  9. A database of lumbar spinal mechanical behavior for validation of spinal analytical models.

    PubMed

    Stokes, Ian A F; Gardner-Morse, Mack

    2016-03-21

    Data from two experimental studies with eight specimens each of spinal motion segments and/or intervertebral discs are presented in a form that can be used for comparison with finite element model predictions. The data include the effect of compressive preload (0, 250 and 500 N) with quasistatic cyclic loading (0.0115 Hz) and the effect of loading frequency (1, 0.1, 0.01 and 0.001 Hz) with a physiological compressive preload (mean 642 N). Specimens were tested with displacements in each of six degrees of freedom (three translations and three rotations) about defined anatomical axes. The three forces and three moments in the corresponding axis system were recorded during each test. Linearized stiffness matrices were calculated that could be used in multi-segmental biomechanical models of the spine, and these matrices were analyzed to determine whether off-diagonal terms and symmetry assumptions should be included. These databases of lumbar spinal mechanical behavior under physiological conditions quantify behaviors that should be present in finite element model simulations. The addition of more specimens to identify sources of variability associated with physical dimensions, degeneration, and other variables would be beneficial. Supplementary data provide the recorded data and Matlab® codes for reading files. Linearized stiffness matrices derived from the tests at different preloads revealed few significant unexpected off-diagonal terms and little evidence of significant matrix asymmetry.
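    The symmetry question above is a linear-algebra check: split the 6x6 stiffness matrix into symmetric and antisymmetric parts and measure the relative size of the latter. The matrix below is synthetic, with invented stiffness values, not one of the published specimens.

```python
import numpy as np

# Decompose a 6x6 linearized stiffness matrix K into symmetric and
# antisymmetric parts; the norm ratio quantifies matrix asymmetry.
# Synthetic values for illustration only.

K = np.diag([300.0, 280.0, 900.0, 5.0, 5.0, 3.0])   # main stiffnesses
K[0, 4] = K[4, 0] = 40.0        # a symmetric coupling term
K[1, 3] = 38.0                  # slightly asymmetric counterpart...
K[3, 1] = 42.0                  # ...of the same coupling

asym = 0.5 * (K - K.T)          # antisymmetric part
K_sym = 0.5 * (K + K.T)         # symmetrized matrix for use in models
rel_asym = np.linalg.norm(asym) / np.linalg.norm(K)
print(f"relative asymmetry: {rel_asym:.4f}")
```

    A small `rel_asym` supports using the symmetrized matrix `K_sym` in multi-segmental spine models, mirroring the paper's finding of little significant asymmetry.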

  10. Towards global QSAR model building for acute toxicity: Munro database case study.

    PubMed

    Chavan, Swapnil; Nicholls, Ian A; Karlsson, Björn C G; Rosengren, Annika M; Ballabio, Davide; Consonni, Viviana; Todeschini, Roberto

    2014-10-09

    A series of 436 Munro database chemicals were studied with respect to their corresponding experimental LD50 values to investigate the possibility of establishing a global QSAR model for acute toxicity. Dragon molecular descriptors were used for the QSAR model development, and genetic algorithms were used to select the descriptors best correlated with toxicity data. Toxicity values were discretized into qualitative classes on the basis of the Globally Harmonized Scheme: the 436 chemicals were divided into 3 classes based on their experimental LD50 values: highly toxic, intermediate toxic and low to non-toxic. The k-nearest neighbor (k-NN) classification method was calibrated on 25 molecular descriptors and gave a non-error rate (NER) equal to 0.66 and 0.57 for internal and external prediction sets, respectively. Even if the classification performances are not optimal, the subsequent analysis of the selected descriptors and their relationship with toxicity levels constitutes a step towards the development of a global QSAR model for acute toxicity.
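    Two ingredients named above are easy to make concrete: discretizing LD50 into three qualitative classes, and the non-error rate (NER, the mean of per-class sensitivities) used to score the classifier. The cut-offs below follow the spirit of the Globally Harmonized Scheme but are simplified assumptions, and the labels are toy data.

```python
# LD50 discretization into three classes plus the non-error rate metric.
# Cut-offs are illustrative simplifications, not the paper's exact scheme.

def toxicity_class(ld50_mg_kg):
    if ld50_mg_kg <= 300:       # illustrative cut-off
        return "highly toxic"
    if ld50_mg_kg <= 2000:      # illustrative cut-off
        return "intermediate"
    return "low to non-toxic"

def non_error_rate(true_labels, predicted_labels):
    """NER = mean of the per-class sensitivities (balanced accuracy)."""
    sens = []
    for c in set(true_labels):
        idx = [i for i, t in enumerate(true_labels) if t == c]
        sens.append(sum(predicted_labels[i] == c for i in idx) / len(idx))
    return sum(sens) / len(sens)

y_true = ["highly toxic", "intermediate", "intermediate", "low to non-toxic"]
y_pred = ["highly toxic", "intermediate", "low to non-toxic", "low to non-toxic"]
print(non_error_rate(y_true, y_pred))
```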

  11. Spectral Line-Shape Model to Replace the Voigt Profile in Spectroscopic Databases

    NASA Astrophysics Data System (ADS)

    Lisak, Daniel; Ngo, Ngoc Hoa; Tran, Ha; Hartmann, Jean-Michel

    2014-06-01

    The standard description of molecular line shapes in spectral databases and radiative transfer codes is based on the Voigt profile. It is well known that its simplified assumptions of absorber free motion and independence of collisional parameters from absorber velocity lead to systematic errors in analysis of experimental spectra, and retrieval of gas concentration. We demonstrate1,2 that the partially correlated quadratic speed-dependent hardcollision profile3. (pCqSDHCP) is a good candidate to replace the Voigt profile in the next generations of spectroscopic databases. This profile takes into account the following physical effects: the Doppler broadening, the pressure broadening and shifting of the line, the velocity-changing collisions, the speed-dependence of pressure broadening and shifting, and correlations between velocity- and phase/state-changing collisions. The speed-dependence of pressure broadening and shifting is incorporated into the pCqSDNGP in the so-called quadratic approximation. The velocity-changing collisions lead to the Dicke narrowing effect; however in many cases correlations between velocityand phase/state-changing collisions may lead to effective reduction of observed Dicke narrowing. The hard-collision model of velocity-changing collisions is also known as the Nelkin-Ghatak model or Rautian model. Applicability of the pCqSDHCP for different molecular systems was tested on calculated and experimental spectra of such molecules as H2, O2, CO2, H2O in a wide span of pressures. For all considered systems, pCqSDHCP is able to describe molecular spectra at least an order of magnitude better than the Voigt profile with all fitted parameters being linear with pressure. In the most cases pCqSDHCP can reproduce the reference spectra down to 0.2% or better, which fulfills the requirements of the most demanding remote-sensing applications. 
An important advantage of the pCqSDHCP is that a fast algorithm for its computation has been developed4,5 and allows
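    For context, the Voigt profile that the abstract proposes to replace is commonly evaluated via the real part of the Faddeeva function w(z); a minimal sketch (illustrative only, not code from the paper):

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: convolution of a Gaussian (std sigma, Doppler
    broadening) with a Lorentzian (HWHM gamma, pressure broadening),
    evaluated via the real part of the Faddeeva function w(z)."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# As gamma -> 0 the profile approaches a pure Gaussian at line center.
peak = voigt(np.array([0.0]), sigma=1.0, gamma=1e-8)[0]
gauss_peak = 1.0 / np.sqrt(2.0 * np.pi)  # Gaussian peak value for sigma = 1
```

Speed-dependent and hard-collision generalizations such as the pCqSDHCP are built from combinations of such complex probability functions, which is why fast Faddeeva-type evaluators matter for database use.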

  12. High Prevalence of Multistability of Rest States and Bursting in a Database of a Model Neuron

    PubMed Central

    Marin, Bóris; Barnett, William H.; Doloc-Mihu, Anca; Calabrese, Ronald L.; Cymbalyuk, Gennady S.

    2013-01-01

    Flexibility in neuronal circuits has its roots in the dynamical richness of their neurons. Depending on their membrane properties, single neurons can produce a plethora of activity regimes including silence, spiking and bursting. What is less appreciated is that these regimes can coexist with each other, so that a transient stimulus can cause a persistent change in the activity of a given neuron. Such multistability of neuronal dynamics has been shown in a variety of neurons under different modulatory conditions. It can play either a functional role or present a substrate for dynamical diseases. We considered a database of an isolated leech heart interneuron model that can display silent, tonic spiking and bursting regimes. We analyzed only the cases of endogenous bursters producing functional half-center oscillators (HCOs). Using a one-parameter bifurcation analysis in the leak conductance, we extended the database to include silent regimes (stationary states) and systematically classified cases for the coexistence of silent and bursting regimes. We showed that different cases could exhibit two stable depolarized stationary states and two hyperpolarized stationary states in addition to various spiking and bursting regimes. We analyzed all cases of endogenous bursters and found that 18% of the cases were multistable, exhibiting coexistence of stationary states and bursting. Moreover, 91% of the cases exhibited multistability in some range of the leak conductance. We also explored HCOs built of multistable neuron cases with coexisting stationary states and a bursting regime. In 96% of cases analyzed, the HCOs resumed normal alternating bursting after one of the neurons was reset to a stationary state, proving themselves robust against this perturbation. PMID:23505348
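    The coexistence of stable rest states described above can be illustrated on a toy one-dimensional system (not the leech interneuron model): sweeping initial conditions of dx/dt = x - x³ reveals two coexisting stable stationary states, so the state reached depends on history.

```python
import numpy as np

def attractor(x0, steps=2000, dt=0.01):
    """Euler-integrate the bistable toy system dx/dt = x - x**3 and
    return the stationary state reached from initial condition x0."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return round(x, 3)

# Sweeping initial conditions (skipping the unstable point at 0)
# reveals two coexisting stable rest states at x = -1 and x = +1.
states = {attractor(x0) for x0 in np.linspace(-2.0, 2.0, 21) if abs(x0) > 1e-9}
```

A transient perturbation that moves the state across the unstable point at x = 0 therefore causes a persistent switch, the one-dimensional analogue of the reset experiments in the abstract.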

  13. Speciation of volatile organic compound emissions for regional air quality modeling of particulate matter and ozone

    NASA Astrophysics Data System (ADS)

    Makar, P. A.; Moran, M. D.; Scholtz, M. T.; Taylor, A.

    2003-01-01

    A new classification scheme for the speciation of organic compound emissions for use in air quality models is described. The scheme uses 81 organic compound classes to preserve both net gas-phase reactivity and particulate matter (PM) formation potential. Chemical structure, vapor pressure, hydroxyl radical (OH) reactivity, freezing point/boiling point, and solubility data were used to create the 81 compound classes. Volatile, semivolatile, and nonvolatile organic compounds are included. The new classification scheme has been used in conjunction with the Canadian Emissions Processing System (CEPS) to process 1990 gas-phase and particle-phase organic compound emissions data for summer and winter conditions for a domain covering much of eastern North America. A simple postprocessing model was used to analyze the speciated organic emissions in terms of both gas-phase reactivity and potential to form organic PM. Previously unresolved compound classes that may have a significant impact on ozone formation include biogenic high-reactivity esters and internal C6-8 alkene-alcohols and anthropogenic ethanol and propanol. Organic radical production associated with anthropogenic organic compound emissions may be one or more orders of magnitude more important than biogenic-associated production in northern United States and Canadian cities, and a factor of 3 more important in southern U.S. cities. Previously unresolved organic compound classes such as low vapor pressure PAHs, anthropogenic diacids, dialkyl phthalates, and high carbon number alkanes may have a significant impact on organic particle formation. Primary organic particles (poorly characterized in national emissions databases) dominate total organic particle concentrations, followed by secondary formation and primary gas-particle partitioning. 
The influence of the assumed initial aerosol water concentration on subsequent thermodynamic calculations suggests that hydrophobic and hydrophilic compounds may form external

  14. Earthquake Model of the Middle East (EMME) Project: Active Fault Database for the Middle East Region

    NASA Astrophysics Data System (ADS)

    Gülen, L.; Wp2 Team

    2010-12-01

    The Earthquake Model of the Middle East (EMME) Project is a regional project under the umbrella of the GEM (Global Earthquake Model) project (http://www.emme-gem.org/). The EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. The EMME and SHARE project regions overlap, with Turkey serving as a bridge connecting the two projects. The Middle East region is a tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years, causing casualties in the millions. The EMME project will use a PSHA approach, and the existing source models will be revised or modified by the incorporation of newly acquired data. More importantly, the most distinguishing aspect of the EMME project from the previous ones will be its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that will permit continuous update, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format. We are developing a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and rates of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far over 3,000 Fault Sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the PDF files of the relevant papers and reports is also being prepared. Another task of the WP-2 of the EMME project is to prepare
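    A hedged sketch of what a fault-section record in such a database might hold; the field names, parameters, and example values below are illustrative, not the EMME schema:

```python
from dataclasses import dataclass

@dataclass
class FaultSection:
    """Illustrative fault-section record: a new section starts wherever
    one of the fault parameters changes along a fault zone."""
    fault_zone: str
    section_id: int
    dip_deg: float          # fault plane dip
    rake_deg: float         # slip direction on the fault plane
    slip_rate_mm_yr: float  # long-term rate of movement
    max_magnitude: float    # largest earthquake the section can generate

    def is_included(self, threshold=5.5):
        # Only sections capable of Mw >= 5.5 enter the hazard model.
        return self.max_magnitude >= threshold

# Hypothetical example record (values are not from the EMME database).
naf = FaultSection("North Anatolian Fault", 1, 90.0, 180.0, 24.0, 7.4)
```

Keying each record to a (fault_zone, section_id) pair is one simple way to realize the "new section wherever a parameter changes" rule described in the abstract.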

  15. Data-based information gain on the response behaviour of hydrological models at catchment scale

    NASA Astrophysics Data System (ADS)

    Willems, Patrick

    2013-04-01

    A data-based approach is presented to analyse the response behaviour of hydrological models at the catchment scale. The approach starts with a number of sequential time series processing steps applied to available rainfall, ETo and river flow observation series. These include separation of the high-frequency (e.g., hourly, daily) river flow series into subflows, splitting of the series into nearly independent quick and slow flow hydrograph periods, and extraction of nearly independent peak and low flows. Quick-, inter- and slow-subflow recession behaviour, sub-responses to rainfall and soil water storage are derived from the time series data. This data-based information on the catchment response behaviour can be applied for: - Model-structure identification and case-specific construction of lumped conceptual models for gauged catchments, or diagnostic evaluation of existing model structures; - Intercomparison of runoff responses for gauged catchments in a river basin, in order to identify similarity or significant differences between stations or between time periods, and to relate these differences to spatial differences or temporal changes in catchment characteristics; - (based on the evaluation of the temporal changes in the previous point:) Detection of temporal changes/trends and identification of their causes: climate trends or land use changes; - Identification of asymptotic properties of the rainfall-runoff behaviour towards extreme peak or low flow conditions (for a given catchment) or towards extreme catchment conditions (for regionalization and ungauged basin prediction purposes), hence evaluating the performance of the model in making extrapolations beyond the range of available stations' data; - (based on the evaluation in the previous point:) Evaluation of the usefulness of the model for making extrapolations to more extreme climate conditions projected by, for instance, climate models. Examples are provided for river basins in Belgium, Ethiopia, Kenya
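    The subflow-separation step can be illustrated with a standard one-parameter recursive digital filter of the Lyne-Hollick type; this is a generic sketch, not the specific procedure used in the paper, and the filter parameter value is only indicative:

```python
import numpy as np

def separate_baseflow(q, a=0.925):
    """Split a streamflow series q into quick flow and base flow using a
    one-parameter recursive digital filter (Lyne-Hollick type). The
    parameter a (typically ~0.9-0.95) controls the separation."""
    quick = np.zeros_like(q, dtype=float)
    for i in range(1, len(q)):
        f = a * quick[i - 1] + 0.5 * (1 + a) * (q[i] - q[i - 1])
        quick[i] = min(max(f, 0.0), q[i])  # keep quick flow within [0, q]
    return q - quick, quick  # (base flow, quick flow)

# Hypothetical flow series around a single storm peak.
flow = np.array([1.0, 1.2, 5.0, 3.0, 2.0, 1.5, 1.2, 1.1])
base, quick = separate_baseflow(flow)
```

The slowly varying `base` component and the storm-driven `quick` component are the kind of subflows from which recession behaviour and sub-responses to rainfall can then be derived.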

  16. The Mouse Genome Database (MGD): facilitating mouse as a model for human biology and disease.

    PubMed

    Eppig, Janan T; Blake, Judith A; Bult, Carol J; Kadin, James A; Richardson, Joel E

    2015-01-01

    The Mouse Genome Database (MGD, http://www.informatics.jax.org) serves the international biomedical research community as the central resource for integrated genomic, genetic and biological data on the laboratory mouse. To facilitate use of mouse as a model in translational studies, MGD maintains a core of high-quality curated data and integrates experimentally and computationally generated data sets. MGD maintains a unified catalog of genes and genome features, including functional RNAs, QTL and phenotypic loci. MGD curates and provides functional and phenotype annotations for mouse genes using the Gene Ontology and Mammalian Phenotype Ontology. MGD integrates phenotype data and associates mouse genotypes to human diseases, providing critical mouse-human relationships and access to repositories holding mouse models. MGD is the authoritative source of nomenclature for genes, genome features, alleles and strains following guidelines of the International Committee on Standardized Genetic Nomenclature for Mice. A new addition to MGD, the Human-Mouse: Disease Connection, allows users to explore gene-phenotype-disease relationships between human and mouse. MGD has also updated search paradigms for phenotypic allele attributes, incorporated incidental mutation data, added a module for display and exploration of genes and microRNA interactions and adopted the JBrowse genome browser. MGD resources are freely available to the scientific community.

  18. iGNM 2.0: the Gaussian network model database for biomolecular structural dynamics

    PubMed Central

    Li, Hongchun; Chang, Yuan-Yu; Yang, Lee-Wei; Bahar, Ivet

    2016-01-01

    The Gaussian network model (GNM) is a simple yet powerful model for investigating the dynamics of proteins and their complexes. GNM analysis became a broadly used method for assessing the conformational dynamics of biomolecular structures with the development of a user-friendly interface and database, iGNM, in 2005. We present here an updated version, iGNM 2.0 (http://gnmdb.csb.pitt.edu/), which covers more than 95% of the structures currently available in the Protein Data Bank (PDB). Advanced search and visualization capabilities, both 2D and 3D, permit users to retrieve information on inter-residue and inter-domain cross-correlations, cooperative modes of motion, the location of hinge sites and energy localization spots. The ability of iGNM 2.0 to provide structural dynamics data on the large majority of PDB structures and, in particular, on their biological assemblies makes it a useful resource for establishing the bridge between structure, dynamics and function. PMID:26582920
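    The GNM computation underlying such a database is simple enough to sketch: build a Kirchhoff (connectivity) matrix from residue positions with a distance cutoff, then read mean-square fluctuations from the pseudo-inverse. This is an illustrative sketch, not the iGNM code, and the cutoff value is only indicative:

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.3):
    """Gaussian network model: Kirchhoff matrix from node coordinates
    (contacts within the cutoff), mean-square fluctuations from the
    diagonal of its pseudo-inverse (nonzero modes only)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)   # -1 for each contact pair
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # degree on diagonal
    # Fluctuations are proportional to the diagonal of the pseudo-inverse.
    return np.diag(np.linalg.pinv(kirchhoff))

# Toy "protein": a straight chain of 12 nodes spaced 3.8 apart, so only
# sequential neighbours are in contact at this cutoff.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(12)])
msf = gnm_fluctuations(coords)
```

For this chain the fluctuation profile is largest at the free ends and smallest in the middle, the same qualitative picture GNM gives for hinge and loop regions in real structures.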

  19. EPAUS9R - An Energy Systems Database for use with the Market Allocation (MARKAL) Model

    EPA Pesticide Factsheets

    EPA’s MARKAL energy system databases estimate future-year technology dispersals and associated emissions. These databases are valuable tools for exploring a variety of future scenarios for the U.S. energy-production systems that can impact climate change c

  20. Incorporating Aquatic Interspecies Toxicity Estimates into Large Databases: Model Evaluations and Data Gains

    EPA Science Inventory

    The Chemical Aquatic Fate and Effects (CAFE) database, developed by NOAA’s Emergency Response Division (ERD), is a centralized data repository that allows for unrestricted access to fate and effects data. While this database was originally designed to help support decisions...

  1. Topobathymetric elevation model development using a new methodology: Coastal National Elevation Database

    USGS Publications Warehouse

    Danielson, Jeffrey J.; Poppenga, Sandra; Brock, John C.; Evans, Gayla A.; Tyler, Dean; Gesch, Dean B.; Thatcher, Cindy; Barras, John

    2016-01-01

    During the coming decades, coastlines will respond to widely predicted sea-level rise, storm surge, and coastal inundation flooding from disastrous events. Because physical processes in coastal environments are controlled by the geomorphology of over-the-land topography and underwater bathymetry, many applications of geospatial data in coastal environments require detailed knowledge of the near-shore topography and bathymetry. In this paper, an updated methodology used by the U.S. Geological Survey Coastal National Elevation Database (CoNED) Applications Project is presented for developing coastal topobathymetric elevation models (TBDEMs) from multiple topographic data sources with adjacent intertidal topobathymetric and offshore bathymetric sources to generate seamlessly integrated TBDEMs. This repeatable, updatable, and logically consistent methodology assimilates topographic data (land elevation) and bathymetry (water depth) into a seamless coastal elevation model. Within the overarching framework, vertical datum transformations are standardized in a workflow that interweaves spatially consistent interpolation (gridding) techniques with a land/water boundary mask delineation approach. Output gridded raster TBDEMs are stacked into a file storage system of mosaic datasets within an Esri ArcGIS geodatabase for efficient updating while maintaining current and updated spatially referenced metadata. Topobathymetric data provide a required seamless elevation product for several science application studies, such as shoreline delineation, coastal inundation mapping, sediment transport, sea-level rise, storm surge models, and tsunami impact assessment. These detailed coastal elevation data are critical to depict regions prone to climate change impacts and are essential to planners and managers responsible for mitigating the associated risks and costs to both human communities and ecosystems. 
The CoNED methodology approach has been used to construct integrated TBDEM models
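    The core merge step, once all sources share a common vertical datum, reduces to taking land elevations where the land/water mask applies and negative bathymetric depths elsewhere; a minimal sketch (illustrative, not CoNED code):

```python
import numpy as np

def merge_topobathy(topo, bathy, land_mask):
    """Seamless topobathymetric grid: land elevations where the land/water
    mask is True, negated water depths elsewhere. Both inputs are assumed
    already transformed to a common vertical datum."""
    return np.where(land_mask, topo, -bathy)

# Hypothetical 2x2 tiles: elevations in metres, depths positive-down.
topo = np.array([[2.0, 1.0], [0.5, 0.2]])
bathy = np.array([[0.0, 0.0], [3.0, 5.0]])
mask = np.array([[True, True], [False, False]])
tbdem = merge_topobathy(topo, bathy, mask)
```

In practice the mask delineation and the interpolation of each source onto the common grid are the hard parts; this sketch only shows why a consistent datum and mask make the final composition trivial.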

  2. Data-based modelling and environmental sensitivity of vegetation in China

    NASA Astrophysics Data System (ADS)

    Wang, H.; Prentice, I. C.; Ni, J.

    2013-09-01

    A process-oriented niche specification (PONS) model was constructed to quantify climatic controls on the distribution of ecosystems, based on the vegetation map of China. PONS uses general hypotheses about bioclimatic controls to provide a "bridge" between statistical niche models and more complex process-based models. Canonical correspondence analysis provided an overview of relationships between the abundances of 55 plant communities in 0.1° grid cells and associated mean values of 20 predictor variables. Of these, GDD0 (accumulated degree days above 0 °C), Cramer-Prentice α (an estimate of the ratio of actual to equilibrium evapotranspiration) and mGDD5 (mean temperature during the period above 5 °C) showed the greatest predictive power. These three variables were used to develop generalized linear models for the probability of occurrence of 16 vegetation classes, aggregated from the original 55 types by k-means clustering according to bioclimatic similarity. Each class was hypothesized to possess a unimodal relationship to each bioclimate variable, independently of the other variables. A simple calibration was used to generate vegetation maps from the predicted probabilities of the classes. Modelled and observed vegetation maps showed good to excellent agreement (κ = 0.745). A sensitivity study examined modelled responses of vegetation distribution to spatially uniform changes in temperature, precipitation and [CO2], the latter included via an offset to α (based on an independent, data-based light use efficiency model for forest net primary production). Warming shifted the boundaries of most vegetation classes northward and westward while temperate steppe and desert replaced alpine tundra and steppe in the southeast of the Tibetan Plateau. Increased precipitation expanded mesic vegetation at the expense of xeric vegetation. The effect of [CO2] doubling was roughly equivalent to increasing precipitation by ~ 30%, favouring woody vegetation types
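    The hypothesized unimodal response of each vegetation class to a bioclimate variable can be emulated with a quadratic term in a logistic GLM; a synthetic sketch (not the PONS code or data, and the optimum location is invented):

```python
import numpy as np

# Synthetic presence/absence data with a unimodal response to one
# bioclimate variable, optimum near x = 0.5.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 800)
p_true = np.exp(-((x - 0.5) / 0.15) ** 2)
y = (rng.uniform(size=800) < p_true).astype(float)

# Logistic GLM in (1, x, x^2): a negative quadratic coefficient yields a
# unimodal occurrence-probability curve, fitted here by Newton/IRLS.
X = np.column_stack([np.ones_like(x), x, x ** 2])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

grid = np.linspace(0.0, 1.0, 101)
p_hat = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * grid + beta[2] * grid ** 2)))
optimum = grid[np.argmax(p_hat)]
```

Fitting one such curve per class per variable, then assigning each grid cell to the class with the highest calibrated probability, is the general shape of the mapping procedure the abstract describes.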

  4. Model estimation of cerebral hemodynamics between blood flow and volume changes: a data-based modeling approach.

    PubMed

    Wei, Hua-Liang; Zheng, Ying; Pan, Yi; Coca, Daniel; Li, Liang-Min; Mayhew, J E W; Billings, Stephen A

    2009-06-01

    It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
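    For illustration, a first-order autoregressive-with-exogenous-input (ARX) model can be identified by ordinary least squares on clean synthetic data; the paper's point is that on noisy flow/volume measurements this plain LS estimate degrades, motivating regularized TLS. A sketch only, not the authors' code, with invented coefficients:

```python
import numpy as np

# Simulate a noise-free first-order ARX system v[t] = a*v[t-1] + b*f[t]
# relating a flow-like input f to a volume-like output v.
rng = np.random.default_rng(2)
f = rng.normal(size=300)
v = np.zeros(300)
for t in range(1, 300):
    v[t] = 0.8 * v[t - 1] + 0.5 * f[t]

# Ordinary least squares on the regressors (v[t-1], f[t]).
A = np.column_stack([v[:-1], f[1:]])
theta, *_ = np.linalg.lstsq(A, v[1:], rcond=None)
a_hat, b_hat = theta
```

With noise-free data the LS estimate recovers (a, b) essentially exactly; once both f and v carry measurement noise, the regressor matrix itself is corrupted and an errors-in-variables method such as RTLS is needed.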

  5. The Time Is Right to Focus on Model Organism Metabolomes

    PubMed Central

    Edison, Arthur S.; Hall, Robert D.; Junot, Christophe; Karp, Peter D.; Kurland, Irwin J.; Mistrik, Robert; Reed, Laura K.; Saito, Kazuki; Salek, Reza M.; Steinbeck, Christoph; Sumner, Lloyd W.; Viant, Mark R.

    2016-01-01

    Model organisms are an essential component of biological and biomedical research that can be used to study specific biological processes. These organisms are in part selected for facile experimental study. However, just as importantly, intensive study of a small number of model organisms yields important synergies as discoveries in one area of science for a given organism shed light on biological processes in other areas, even for other organisms. Furthermore, the extensive knowledge bases compiled for each model organism enable systems-level understandings of these species, which enhance the overall biological and biomedical knowledge for all organisms, including humans. Building upon extensive genomics research, we argue that the time is now right to focus intensively on model organism metabolomes. We propose a grand challenge for metabolomics studies of model organisms: to identify and map all metabolites onto metabolic pathways, to develop quantitative metabolic models for model organisms, and to relate organism metabolic pathways within the context of evolutionary metabolomics, i.e., phylometabolomics. These efforts should focus on a series of established model organisms in microbial, animal and plant research. PMID:26891337

  7. FACILITATING ADVANCED URBAN METEOROLOGY AND AIR QUALITY MODELING CAPABILITIES WITH HIGH RESOLUTION URBAN DATABASE AND ACCESS PORTAL TOOLS

    EPA Science Inventory

    Information on urban morphological features at high resolution is needed to properly model and characterize the meteorological and air quality fields in urban areas. We describe a new project called National Urban Database with Access Portal Tool (NUDAPT) that addresses this nee...

  8. Anatomical database generation for radiation transport modeling from computed tomography (CT) scan data

    SciTech Connect

    Margle, S.M.; Tinnel, E.P.; Till, L.E.; Eckerman, K.F.; Durfee, R.C.

    1989-01-01

    Geometric models of the anatomy are used routinely in calculations of the radiation dose in organs and tissues of the body. Development of such models has been hampered by lack of detailed anatomical information on children, and the models themselves have been limited to quadratic conic sections. This summary reviews the development of an image processing workstation used to extract anatomical information from routine diagnostic CT procedures. A standard IBM PC/AT microcomputer has been augmented with an automatically loading 9-track magnetic tape drive, an 8-bit 1024 × 1024 pixel graphics adapter/monitor/film recording package, a mouse/trackball assembly, dual 20 MB removable cartridge media, a 72 MB disk drive, and a printer. Software utilized by the workstation includes a Geographic Information System (modified for manipulation of CT images), CAD software, imaging software, and various modules to ease data transfer among the software packages. 5 refs., 3 figs.

  9. Sediment-hosted gold deposits of the world: database and grade and tonnage models

    USGS Publications Warehouse

    Berger, Vladimir I.; Mosier, Dan L.; Bliss, James D.; Moring, Barry C.

    2014-01-01

    All sediment-hosted gold deposits (as a single population) share one characteristic—they all have disseminated micron-sized invisible gold in sedimentary rocks. Sediment-hosted gold deposits are recognized in the Great Basin province of the western United States and in China, along with a few recognized deposits in Indonesia, Iran, and Malaysia. Three new grade and tonnage models for sediment-hosted gold deposits are presented in this paper: (1) a general sediment-hosted gold type model, (2) a Carlin subtype model, and (3) a Chinese subtype model. These models are based on grade and tonnage data from a database compilation of 118 sediment-hosted gold deposits including a total of 123 global deposits. The new general grade and tonnage model for sediment-hosted gold deposits (n=118) has a median tonnage of 5.7 million metric tonnes (Mt) and a gold grade of 2.9 grams per tonne (g/t). This new grade and tonnage model is remarkable in that the estimated parameters of the resulting grade and tonnage distributions are comparable to the previous model of Mosier and others (1992). A notable change is in the reporting of silver in more than 10 percent of deposits; moreover, the previous model had not considered deposits in China. From this general grade and tonnage model, two significantly different subtypes of sediment-hosted gold deposits are differentiated: Carlin and Chinese. The Carlin subtype includes 88 deposits in the western United States, Indonesia, Iran, and Malaysia, with median tonnage and grade of 7.1 Mt and 2.0 g/t Au, respectively. The silver grade is 0.78 g/t Ag for the 10th percentile of deposits. The Chinese subtype represents 30 deposits in China, with a median tonnage of 3.9 Mt and a median grade of 4.6 g/t Au. Important differences are recognized in the mineralogy and alteration of the two sediment-hosted gold subtypes, such as increased sulfide minerals in the Chinese subtype and dominant decalcification alteration in the Carlin subtype. We therefore
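    Grade and tonnage models of this kind are conventionally summarized as lognormal distributions; a sketch of recovering percentiles from a reported median, where the log standard deviation used below is an assumed, hypothetical value rather than a figure from these models:

```python
from statistics import NormalDist
import math

def lognormal_percentile(median, sigma_log, pct):
    """Percentile of a lognormal distribution specified by its median and
    the standard deviation of log-values (sigma_log is illustrative)."""
    z = NormalDist().inv_cdf(pct / 100.0)
    return median * math.exp(sigma_log * z)

# Using the Carlin-subtype median tonnage of 7.1 Mt with an assumed
# sigma_log, the 10th/90th percentile tonnages bracket the median.
t10 = lognormal_percentile(7.1, 1.5, 10)
t90 = lognormal_percentile(7.1, 1.5, 90)
```

One convenient property of the lognormal parameterization is that percentile pairs are geometrically symmetric about the median (t10 × t90 = median²), which is why such models are reported by median plus spread.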

  10. A Database for Propagation Models and Conversion to C++ Programming Language

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Angkasa, Krisjani; Rucker, James

    1996-01-01

    The telecommunications system design engineer generally needs a quantification of the effects of the propagation medium (a definition of the propagation channel) to design an optimal communications system. To obtain this definition of the channel, the systems engineer has a few choices. A search of the relevant publications, such as the IEEE Transactions, CCIR reports, or the NASA propagation handbook, may be conducted to find the desired channel values. This method may demand excessive time and effort on the systems engineer's part, and the search may not even yield the needed results. To help researchers and systems engineers, the conference participants of NASA Propagation Experimenters (NAPEX) XV (London, Ontario, Canada, June 28 and 29, 1991) recommended that software be produced containing propagation models and the necessary prediction methods for most propagation phenomena. Moreover, the software should be flexible enough for the user to make slight changes to the models without expending substantial programming effort. In the past few years, software was produced to fit these requirements as well as could be done. The software was distributed to all NAPEX participants for evaluation and use; participant reactions and suggestions were gathered and used to improve subsequent releases. The existing database program is implemented in the Microsoft Excel application software and works well within the guidelines of that environment; however, questions have recently been raised about the robustness and survivability of the Excel software in the ever-changing (hopefully improving) world of software packages.

  11. Consolidated Human Activity Database (CHAD) for use in human exposure and health studies and predictive models

    EPA Pesticide Factsheets

    EPA scientists have compiled detailed data on human behavior from 22 separate exposure and time-use studies into CHAD. The database includes more than 54,000 individual study days of detailed human behavior.

  12. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    SciTech Connect

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
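    The final identification step described in the claim, relating a load feature vector to the neuron whose weight vector lies at minimal distance, can be sketched as follows; the toy weights, labels, and feature values are invented for illustration and are not from the patent:

```python
import numpy as np

def identify_load(feature, neuron_weights, neuron_labels):
    """Identify a load type via the best-matching neuron: the weight
    vector at minimal Euclidean distance to the load feature vector,
    as in a trained self-organizing map."""
    distances = np.linalg.norm(neuron_weights - feature, axis=1)
    return neuron_labels[int(np.argmin(distances))]

# Toy map: two neurons per load type, four-element feature vectors
# (e.g. quantities derived from sensed voltage/current signals).
weights = np.array([[1.0, 0.1, 0.0, 0.2],
                    [0.9, 0.2, 0.1, 0.1],
                    [0.1, 1.0, 0.8, 0.0],
                    [0.2, 0.9, 0.9, 0.1]])
labels = ["resistive", "resistive", "motor", "motor"]
load_type = identify_load(np.array([0.15, 0.95, 0.85, 0.05]), weights, labels)
```

In a real system the weight vectors would come from training the self-organizing map on labeled load measurements; classification then reduces to this nearest-neuron lookup.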

  13. Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model

    DTIC Science & Technology

    2014-04-01

    hides both the underlying database implementation and the mechanism or the framework being used to persist data to the database (McKenzie, 2008, p...DAOFactory class will have a single static invocable method that will return an instantiated instance of the DAOFactory (McKenzie et al., 2008, p 399). The... McKenzie, C. Hibernate Made Easy: Simplified Data Persistence with Hibernate and JPA Annotations; PulpJava: Palo Alto, CA, 2008. 6. Freeman, E.; Freeman

  14. 3D Bioprinting of Tissue/Organ Models.

    PubMed

    Pati, Falguni; Gantelius, Jesper; Svahn, Helene Andersson

    2016-04-04

    In vitro tissue/organ models are useful platforms that can facilitate systematic, repetitive, and quantitative investigations of drugs/chemicals. The primary objective when developing tissue/organ models is to reproduce physiologically relevant functions that typically require complex culture systems. Bioprinting offers exciting prospects for constructing 3D tissue/organ models, as it enables the reproducible, automated production of complex living tissues. Bioprinted tissues/organs may prove useful for screening novel compounds or predicting toxicity, as the spatial and chemical complexity inherent to native tissues/organs can be recreated. In this Review, we highlight the importance of developing 3D in vitro tissue/organ models by 3D bioprinting techniques, characterization of these models for evaluating their resemblance to native tissue, and their application in the prioritization of lead candidates, toxicity testing, and as disease/tumor models.

  15. A Topological Model for C2 Organizations

    DTIC Science & Technology

    2011-06-01

    functions of the organization, and the capabilities of its members, as these sets somehow define the boundaries of organizational performance and the...

  16. A computational platform to maintain and migrate manual functional annotations for BioCyc databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Model organism databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from literature. An essential feature of these databases is the continuing data integratio...

  17. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    USGS Publications Warehouse

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A National highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff-quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information.
This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many
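    The citation-anchored design described above, in which every data table ultimately links back to a citation record, can be sketched with a minimal two-table schema. The table and column names below are hypothetical simplifications; the NDAMS database itself has 86 tables and is implemented in MS Access, not SQLite.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Minimal sketch: a catalog of citations, plus one "data table" whose rows
    # each reference a citation, mirroring the citation-anchored design.
    cur.executescript("""
    CREATE TABLE citation (
        citation_id INTEGER PRIMARY KEY,
        authors     TEXT,
        year        INTEGER,
        title       TEXT
    );
    CREATE TABLE report_review (
        review_id         INTEGER PRIMARY KEY,
        citation_id       INTEGER NOT NULL REFERENCES citation(citation_id),
        data_quality_note TEXT
    );
    """)

    cur.execute("INSERT INTO citation VALUES (1, 'Doe, J.', 1998, 'Runoff study')")
    cur.execute("INSERT INTO report_review VALUES (1, 1, 'methods documented')")

    # Every review row resolves back to its citation in the catalog.
    row = cur.execute("""
        SELECT c.title, r.data_quality_note
        FROM report_review r JOIN citation c USING (citation_id)
    """).fetchone()
    print(row)  # ('Runoff study', 'methods documented')
    ```

    Association and domain tables in the real design would extend this same pattern: many-to-many links between data tables, and controlled vocabularies for coded fields.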

  18. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    NASA Astrophysics Data System (ADS)

    Ganbavale, G.; Zuend, A.; Marcolli, C.; Peter, T.

    2014-06-01

    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model applicable for aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~275 to ~400 K. With the goal to improve the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures with the introduction of a new temperature dependence parameterisation. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multicomponent system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~190 to ~440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. 
The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of 25% in comparison to
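    The classical-thermodynamics basis mentioned above can be illustrated with the Gibbs-Helmholtz relation: if the partial molar excess enthalpy is treated as constant over a modest temperature interval, an activity coefficient measured at one temperature can be extrapolated to another via ln γ(T₂) = ln γ(T₁) + (H̄ᴱ/R)(1/T₂ − 1/T₁). The numbers below are purely illustrative, not AIOMFAC parameters; AIOMFAC's actual parameterisation additionally accounts for heat-capacity effects.

    ```python
    import math

    R = 8.314  # gas constant, J mol^-1 K^-1

    def gamma_at_T(gamma_T1, T1, T2, H_excess):
        """Extrapolate an activity coefficient from T1 to T2 (in K), assuming a
        temperature-independent partial molar excess enthalpy (Gibbs-Helmholtz).
        H_excess in J/mol."""
        ln_gamma_T2 = math.log(gamma_T1) + (H_excess / R) * (1.0 / T2 - 1.0 / T1)
        return math.exp(ln_gamma_T2)

    # Illustrative values: gamma = 2.0 at 298.15 K, H_excess = 2000 J/mol,
    # extrapolated down to 273.15 K.
    g = gamma_at_T(2.0, 298.15, 273.15, 2000.0)
    print(round(g, 3))
    ```

    For a positive excess enthalpy the activity coefficient grows as temperature falls, which is why extending such parameterisations to atmospherically relevant low temperatures matters for aerosol modelling.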

  19. THE CTEPP DATABASE

    EPA Science Inventory

    The CTEPP (Children's Total Exposure to Persistent Pesticides and Other Persistent Organic Pollutants) database contains a wealth of data on children's aggregate exposures to pollutants in their everyday surroundings. Chemical analysis data for the environmental media and ques...

  20. Livestock Anaerobic Digester Database

    EPA Pesticide Factsheets

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  1. The Digital Astronaut: An integrated modeling and database system for space biomedical research and operations

    NASA Astrophysics Data System (ADS)

    White, Ronald J.; McPhee, Jancy C.

    2007-02-01

    The Digital Astronaut is an integrated, modular modeling and database system that will support space biomedical research and operations in a variety of fundamental ways. This system will enable the identification and meaningful interpretation of the medical and physiological research required for human space exploration, a determination of the effectiveness of specific individual human countermeasures in reducing risk and meeting health and performance goals on challenging exploration missions and an evaluation of the appropriateness of various medical interventions during mission emergencies, accidents and illnesses. Such a computer-based, decision support system will enable the construction, validation and utilization of important predictive simulations of the responses of the whole human body to the types of stresses experienced during space flight and low-gravity environments. These simulations will be essential for direct, real-time analysis and maintenance of astronaut health and performance capabilities. The Digital Astronaut will collect and integrate past and current human data across many physiological disciplines and simulations into an operationally useful form that will not only summarize knowledge in a convenient and novel way but also reveal gaps that must be filled via new research in order to effectively ameliorate biomedical risks. Initial phases of system development will focus on simulating ground-based analog systems that are just beginning to collect multidisciplinary data in a standardized way (e.g., the International Multidisciplinary Artificial Gravity Project). During later phases, the focus will shift to development and planning for missions and to exploration mission operations. Then, the Digital Astronaut system will enable evaluation of the effectiveness of multiple, simultaneously applied countermeasures (a task made difficult by the many-system physiological effects of individual countermeasures) and allow for the prescription of

  2. A Modeling Exercise for the Organic Classroom

    ERIC Educational Resources Information Center

    Whitlock, Christine R.

    2010-01-01

    An in-class molecular modeling exercise is described. Groups of students are given molecular models to investigate and questions about the models to answer. This exercise is a quick and effective way to review nomenclature, stereochemistry, and conformational analysis.

  3. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  4. A Model for Implementing E-Learning in Iranian Organizations

    ERIC Educational Resources Information Center

    Ghaeni, Emad; Abdehagh, Babak

    2010-01-01

    This article reviews the current status of information and communications technology (ICT) usage and provides a comprehensive outlook on e-learning in both virtual universities and organizations in Iran. A model for e-learning implementation is presented. This model tries to address specific issues in Iranian organizations. (Contains 1 table and 2…

  5. Modeling the Explicit Chemistry of Anthropogenic and Biogenic Organic Aerosols

    SciTech Connect

    Madronich, Sasha

    2015-12-09

    The atmospheric burden of Secondary Organic Aerosols (SOA) remains one of the most important yet uncertain aspects of the radiative forcing of climate. This grant focused on improving our quantitative understanding of SOA formation and evolution, by developing, applying, and improving a highly detailed model of atmospheric organic chemistry, the Generation of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) model. Eleven (11) publications have resulted from this grant.

  6. CARD 2017: expansion and model-centric curation of the Comprehensive Antibiotic Resistance Database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins, and mutations involved in AMR. CARD is ontologi...

  7. Application of Knowledge Discovery in Databases Methodologies for Predictive Models for Pregnancy Adverse Events

    ERIC Educational Resources Information Center

    Taft, Laritza M.

    2010-01-01

    In its report "To Err is Human", The Institute of Medicine recommended the implementation of internal and external voluntary and mandatory automatic reporting systems to increase detection of adverse events. Knowledge Discovery in Databases (KDD) allows the detection of patterns and trends that would be hidden or less detectable if analyzed by…

  8. Populating a Control Point Database: A cooperative effort between the USGS, Grand Canyon Monitoring and Research Center and the Grand Canyon Youth Organization

    NASA Astrophysics Data System (ADS)

    Brown, K. M.; Fritzinger, C.; Wharton, E.

    2004-12-01

    The Grand Canyon Monitoring and Research Center measures the effects of Glen Canyon Dam operations on the resources along the Colorado River from Glen Canyon Dam to Lake Mead in support of the Grand Canyon Adaptive Management Program. Control points are integral for geo-referencing the myriad of data collected in the Grand Canyon, including aerial photography and topographic and bathymetric data used for classification and change-detection analysis of physical, biologic and cultural resources. The survey department has compiled a list of 870 control points installed by various organizations needing to establish a consistent reference for data collected at field sites along the 240-mile stretch of the Colorado River in the Grand Canyon. This list is the foundation for the Control Point Database, established primarily for researchers to locate control points and independently geo-reference collected field data. The database has the potential to be a valuable mapping tool for assisting researchers to easily locate a control point and reduce the occurrence of unknowingly installing new control points within close proximity of an existing control point. The database is missing photographs and accurate site-description information. Current site descriptions do not accurately define the location of the point but refer to the project that used the point, or some other interesting fact associated with the point. The Grand Canyon Monitoring and Research Center (GCMRC) resolved this problem by turning the data collection effort into an educational exercise for the participants of the Grand Canyon Youth organization. Grand Canyon Youth is a non-profit organization providing experiential education for middle and high school aged youth. GCMRC and Grand Canyon Youth formed a partnership in which GCMRC provided the logistical support, equipment, and training to conduct the field work, and Grand Canyon Youth provided the time and personnel to complete the field work. Two data

  9. The Cardiac Atlas Project—an imaging database for computational modeling and statistical atlases of the heart

    PubMed Central

    Fonseca, Carissa G.; Backhaus, Michael; Bluemke, David A.; Britten, Randall D.; Chung, Jae Do; Cowan, Brett R.; Dinov, Ivo D.; Finn, J. Paul; Hunter, Peter J.; Kadish, Alan H.; Lee, Daniel C.; Lima, Joao A. C.; Medrano-Gracia, Pau; Shivkumar, Kalyanam; Suinesiaputra, Avan; Tao, Wenchao; Young, Alistair A.

    2011-01-01

    Motivation: Integrative mathematical and statistical models of cardiac anatomy and physiology can play a vital role in understanding cardiac disease phenotype and planning therapeutic strategies. However, the accuracy and predictive power of such models is dependent upon the breadth and depth of noninvasive imaging datasets. The Cardiac Atlas Project (CAP) has established a large-scale database of cardiac imaging examinations and associated clinical data in order to develop a shareable, web-accessible, structural and functional atlas of the normal and pathological heart for clinical, research and educational purposes. A goal of CAP is to facilitate collaborative statistical analysis of regional heart shape and wall motion and characterize cardiac function among and within population groups. Results: Three main open-source software components were developed: (i) a database with web-interface; (ii) a modeling client for 3D + time visualization and parametric description of shape and motion; and (iii) open data formats for semantic characterization of models and annotations. The database was implemented using a three-tier architecture utilizing MySQL, JBoss and Dcm4chee, in compliance with the DICOM standard to provide compatibility with existing clinical networks and devices. Parts of Dcm4chee were extended to access image specific attributes as search parameters. To date, approximately 3000 de-identified cardiac imaging examinations are available in the database. All software components developed by the CAP are open source and are freely available under the Mozilla Public License Version 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt). Availability: http://www.cardiacatlas.org Contact: a.young@auckland.ac.nz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21737439

  10. A model organism for new gene discovery by cDNA sequencing

    SciTech Connect

    El-Sayed, N.M.; Donelson, J.E.; Alarcon, C.M.

    1994-09-01

    One method of new gene discovery is single-pass sequencing of cDNAs to identify expressed sequence tags (ESTs). Model organisms can have biological properties that make their use advantageous over studies with humans. One such model organism with advantages for cDNA sequencing is the African trypanosome T. brucei rhodesiense. This organism has the same 40-nucleotide sequence (splice leader sequence) on the 5′ end of all mRNAs. We have constructed a 5′ cDNA library by priming off the splice leader sequence and have begun sequencing this cDNA library. To date, nearly 500 such cDNA expressed sequence tags (ESTs) have been examined. Forty-three percent of the sequences sampled from the trypanosome cDNA library have significant similarities to sequences already in the protein and translated nucleic acid databases. Among these are cDNA sequences which encode previously reported T. brucei proteins such as the VSG, tubulin, calflagin, etc., and proteins previously identified in other trypanosomatids. Other cDNAs display significant similarities to genes in unrelated organisms encoding several ribosomal proteins, metabolic enzymes, GTP-binding proteins, transcription factors, cyclophilin, nucleosomal histones, histone H1, and a macrophage stress protein, among others. The 57% of the cDNAs that are not similar to sequences currently in the databases likely encode both trypanosome-specific proteins and housekeeping proteins shared with other eukaryotes. These cDNA ESTs provide new avenues of research for exploring both the biochemistry and the genome organization of this parasite, as well as a resource for identifying the 5′ sequence of novel genes likely to have homology to genes expressed in other organisms.

  11. Differentiating rectal carcinoma by an immunohistological analysis of carcinomas of pelvic organs based on the NCBI Literature Survey and the Human Protein Atlas database.

    PubMed

    Miura, Koh; Ishida, Kazuyuki; Fujibuchi, Wataru; Ito, Akihiro; Niikura, Hitoshi; Ogawa, Hitoshi; Sasaki, Iwao

    2012-06-01

    The treatments and prognoses of pelvic organ carcinomas differ, depending on whether the primary tumor originated in the rectum, urinary bladder, prostate, ovary, or uterus; therefore, it is essential to diagnose pathologically the primary origin and stages of these tumors. To establish the panels of immunohistochemical markers for differential diagnosis, we reviewed 91 of the NCBI articles on these topics and found that the results correlated closely with those of the public protein database, the Human Protein Atlas. The results revealed the panels of immunohistochemical markers for the differential diagnosis of rectal adenocarcinoma, in which [+] designates positivity in rectal adenocarcinoma and [-] designates negativity in rectal adenocarcinoma: from bladder adenocarcinoma, CDX2[+], VIL1[+], KRT7[-], THBD[-] and UPK3A[-]; from prostate adenocarcinoma, CDX2[+], VIL1[+], CEACAM5[+], KLK3(PSA)[-], ACPP(PAP)[-] and SLC45A3(prostein)[-]; and from ovarian mucinous adenocarcinoma, CEACAM5[+], VIL1[+], CDX2[+], KRT7[-] and MUC5AC[-]. The panels of markers distinguishing ovarian serous adenocarcinoma, cervical carcinoma, and endometrial adenocarcinoma were also represented. Such a comprehensive review on the differential diagnosis of carcinomas of pelvic organs has not been reported before. Thus, much information has been accumulated in public databases to provide an invaluable resource for clinicians and researchers.
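    The marker panels reported in this study lend themselves to a simple lookup structure. The sketch below encodes the rectal-vs-bladder and rectal-vs-prostate panels exactly as listed in the abstract and scores an immunostaining profile against them; the scoring rule is a toy illustration, not a validated diagnostic procedure.

    ```python
    # Panels from the abstract: '+' = expected positive in rectal
    # adenocarcinoma, '-' = expected negative.
    PANELS = {
        "vs_bladder":  {"CDX2": "+", "VIL1": "+", "KRT7": "-",
                        "THBD": "-", "UPK3A": "-"},
        "vs_prostate": {"CDX2": "+", "VIL1": "+", "CEACAM5": "+",
                        "KLK3": "-", "ACPP": "-", "SLC45A3": "-"},
    }

    def concordance(panel_name, staining):
        """Fraction of panel markers whose observed staining ('+'/'-')
        matches the pattern expected for rectal adenocarcinoma."""
        panel = PANELS[panel_name]
        hits = sum(1 for marker, expected in panel.items()
                   if staining.get(marker) == expected)
        return hits / len(panel)

    profile = {"CDX2": "+", "VIL1": "+", "KRT7": "-", "THBD": "-", "UPK3A": "-"}
    print(concordance("vs_bladder", profile))  # 1.0 -- fully rectal-like pattern
    ```

    A profile discordant on even one marker (say, KRT7 positivity) drops below full concordance, which is the intuition behind using marker panels rather than single markers for differential diagnosis.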

  12. The MEXICO project (Model Experiments in Controlled Conditions): The database and first results of data processing and interpretation

    NASA Astrophysics Data System (ADS)

    Snel, H.; Schepers, J. G.; Montgomerie, B.

    2007-07-01

    The MEXICO project (Model Experiments in Controlled Conditions) was an FP5 project, partly financed by the European Commission. The main objective was to create a database of detailed aerodynamic and load measurements on a wind turbine model, in a large and high-quality wind tunnel, to be used for model validation and improvement. Here "model" stands both for the extended BEM modelling used in state-of-the-art design and certification software, and for CFD modelling of the rotor and near-wake flow. For this purpose a three-bladed 4.5 m diameter wind tunnel model was built and instrumented. The wind tunnel experiments were carried out in the open section (9.5 × 9.5 m²) of the Large Scale Facility of the DNW (German-Dutch Wind Tunnels) during a six-day campaign in December 2006. The measurement conditions cover three operational tip speed ratios, many blade pitch angles, three yaw misalignment angles, and a small number of unsteady cases in the form of pitch ramps and rotor speed ramps. One of the most important features of the measurement program was the flow field mapping with stereo PIV techniques. Overall the measurement campaign was very successful. The paper describes the now-existing database and discusses a number of highlights from early data processing and interpretation. It should be stressed that all results are first results: no tunnel correction has been performed so far, nor has the necessary checking of data quality.

  13. Public Opinion Poll Question Databases: An Evaluation

    ERIC Educational Resources Information Center

    Woods, Stephen

    2007-01-01

    This paper evaluates five polling resources: iPOLL, Polling the Nations, Gallup Brain, Public Opinion Poll Question Database, and Polls and Surveys. Content was evaluated against disclosure standards from major polling organizations, scope against a model for public opinion polls, and presentation against a flow chart addressing search limitations and usability.

  14. UGTA Photograph Database

    SciTech Connect

    NSTec Environmental Restoration

    2009-04-20

    One of the advantages of the Nevada Test Site (NTS) is that most of the geologic and hydrologic features such as hydrogeologic units (HGUs), hydrostratigraphic units (HSUs), and faults, which are important aspects of flow and transport modeling, are exposed at the surface somewhere in the vicinity of the NTS and thus are available for direct observation. However, due to access restrictions and the remote locations of many of the features, most Underground Test Area (UGTA) participants cannot observe these features directly in the field. Fortunately, National Security Technologies, LLC, geologists and their predecessors have photographed many of these features through the years. During fiscal year 2009, work was done to develop an online photograph database for use by the UGTA community. Photographs were organized, compiled, and imported into Adobe® Photoshop® Elements 7. The photographs were then assigned keyword tags such as alteration type, HGU, HSU, location, rock feature, rock type, and stratigraphic unit. Some fully tagged photographs were then selected and uploaded to the UGTA website. This online photograph database provides easy access for all UGTA participants and can help “ground truth” their analytical and modeling tasks. It also provides new participants a resource to more quickly learn the geology and hydrogeology of the NTS.
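    A keyword-tag lookup like the one built with Photoshop Elements can be sketched as an inverted index. The photo identifiers and tag values below are hypothetical; only the tag categories (HSU, location, rock type, etc.) follow the abstract.

    ```python
    from collections import defaultdict

    # Hypothetical tagged photographs; tag categories mirror the abstract.
    photos = {
        "IMG_001": {"HSU:LCA", "location:Yucca Flat", "rock type:dolomite"},
        "IMG_002": {"HSU:LCA", "location:Rainier Mesa", "rock type:tuff"},
        "IMG_003": {"HSU:AA", "location:Yucca Flat", "rock type:alluvium"},
    }

    # Build an inverted index: tag -> set of photo ids carrying that tag.
    index = defaultdict(set)
    for photo_id, tags in photos.items():
        for tag in tags:
            index[tag].add(photo_id)

    def find(*tags):
        """Photos carrying every requested tag (set intersection)."""
        result = set(photos)
        for tag in tags:
            result &= index.get(tag, set())
        return sorted(result)

    print(find("HSU:LCA"))                         # ['IMG_001', 'IMG_002']
    print(find("HSU:LCA", "location:Yucca Flat"))  # ['IMG_001']
    ```

    Intersecting tag sets is exactly what lets a UGTA participant narrow the collection to, say, photographs of one hydrostratigraphic unit at one location.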

  15. PhasePlot: A Software Program for Visualizing Phase Relations Computed Using Thermochemical Models and Databases

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.

    2011-12-01

    A new software program has been developed for Macintosh computers that permits the visualization of phase relations calculated from thermodynamic data-model collections. The data-model collections of MELTS (Ghiorso and Sack, 1995, CMP 119, 197-212), pMELTS (Ghiorso et al., 2002, G-cubed 3, 10.1029/2001GC000217) and the deep mantle database of Stixrude and Lithgow-Bertelloni (2011, GJI 184, 1180-1213) are currently implemented. The software allows users to enter a system bulk composition and a range of reference conditions and then calculate a grid of phase relations. These relations may be visualized in a variety of ways including phase diagrams, phase proportion plots, and contour diagrams of phase compositions and abundances. Results may be exported into Excel or similar spreadsheet applications. Flexibility in stipulating reference conditions permits the construction of temperature-pressure, temperature-volume, entropy-pressure, or entropy-volume display grids. Calculations on the grid are performed for fixed bulk composition or in open systems governed by user-specified constraints on component chemical potentials (e.g., specified oxygen fugacity buffers). The calculation engine for the software is optimized for multi-core compute architectures and is very fast, allowing a typical grid of 64 points to be calculated in under 10 seconds on a dual-core laptop/iMac. The underlying computational thermodynamic algorithms have been optimized for speed and robust behavior. Taken together, both of these advances facilitate classroom demonstrations and permit novice users to work with the program effectively, focusing on problem specification and interpretation of results rather than on manipulation and mechanics of computation - a key feature of an effective instructional tool. The emphasis in this software package is graphical visualization, which aids in better comprehension of complex phase relations in multicomponent systems. Anecdotal experience in using Phase

  16. Exploring Organic Mechanistic Puzzles with Molecular Modeling

    ERIC Educational Resources Information Center

    Horowitz, Gail; Schwartz, Gary

    2004-01-01

    The molecular modeling was used to reinforce more general skills such as deducing and drawing reaction mechanisms, analyzing reaction kinetics and thermodynamics and drawing reaction coordinate energy diagrams. This modeling was done through the design of mechanistic puzzles, involving reactions not familiar to the students.

  17. Temporal Data, Temporal Data Models, Temporal Data Languages and Temporal Database Systems

    DTIC Science & Technology

    1988-06-01

    bibliographies are in chronological order dating from the 1960's. Breutmann, B., Falkenberg, E., Mauer, R., CSL: A Language for Defining Conceptual Schemas... Falkenberg, E., and Mauer, R., CSL: A Language for Defining Conceptual Schemas, Data Base Architect, North-Holland Publishing Company, 1979. 33. Jones, S... of real-time military applications using temporal database computers.

  18. A Comprehensive Opacities/Atomic Database for the Analysis of Astrophysical Spectra and Modeling

    NASA Technical Reports Server (NTRS)

    Pradhan, Anil K. (Principal Investigator)

    1997-01-01

    The main goals of this ADP award have been accomplished. The electronic database TOPBASE, consisting of the large volume of atomic data from the Opacity Project, has been installed and is operative at a NASA site at the Laboratory for High Energy Astrophysics Science Research Center (HEASRC) at the Goddard Space Flight Center. The database will be continually maintained and updated by the PI and collaborators. TOPBASE is publicly accessible from IP: topbase.gsfc.nasa.gov. During the last six months (since the previous progress report), considerable work has been carried out to: (1) put in the new data for low ionization stages of iron: Fe I - V, beginning with Fe II, (2) high-energy photoionization cross sections computed by Dr. Hong Lin Zhang (consultant on the Project) were 'merged' with the current Opacity Project data and input into TOPbase; (3) plans laid out for a further extension of TOPbase to include TIPbase, the database for collisional data to complement the radiative data in TOPbase.

  19. Nearly data-based optimal control for linear discrete model-free systems with delays via reinforcement learning

    NASA Astrophysics Data System (ADS)

    Zhang, Jilie; Zhang, Huaguang; Wang, Binrui; Cai, Tiaoyang

    2016-05-01

    In this paper, a nearly data-based optimal control scheme is proposed for linear discrete model-free systems with delays. The nearly optimal control can be obtained using only measured input/output data from the system, via reinforcement learning technology that combines Q-learning with a value iteration algorithm. First, we construct a state estimator using the measured input/output data. Second, a quadratic functional is used to approximate the value function at each point in the state space, and the data-based control is designed by the Q-learning method using the obtained state estimator. The paper then shows how to solve for the optimal inner kernel matrix in the least-squares sense by the value iteration algorithm. Finally, numerical examples are given to illustrate the effectiveness of the approach.
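    The value-iteration backbone referenced above can be illustrated on a scalar discrete-time LQR problem, where iterating the Bellman (Riccati) recursion on the quadratic value-function kernel converges to the optimal solution. This sketch uses known model parameters purely to show the iteration; the paper's contribution is estimating the Q-function kernel from input/output data without a model, which is not reproduced here.

    ```python
    # Scalar LQR: x_{k+1} = a x_k + b u_k, cost = sum over k of (q x_k^2 + r u_k^2).
    a, b, q, r = 1.1, 1.0, 1.0, 1.0   # a > 1: open-loop unstable

    P = 0.0
    for _ in range(200):
        # Riccati (value iteration) recursion on the quadratic kernel P.
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)

    K = a * b * P / (r + b * b * P)   # optimal state-feedback gain, u = -K x
    print(round(P, 4), round(K, 4))   # kernel converges near 1.7738; a - b*K is stable
    ```

    The fixed point of this recursion solves the discrete algebraic Riccati equation, and the closed-loop pole a − bK ends up inside the unit circle even though the open loop is unstable.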

  20. The GED4GEM project: development of a Global Exposure Database for the Global Earthquake Model initiative

    USGS Publications Warehouse

    Gamba, P.; Cavalca, D.; Jaiswal, K.S.; Huyck, C.; Crowley, H.

    2012-01-01

    In order to quantify earthquake risk of any selected region or a country of the world within the Global Earthquake Model (GEM) framework (www.globalquakemodel.org/), a systematic compilation of building inventory and population exposure is indispensable. Through the consortium of leading institutions and by engaging the domain-experts from multiple countries, the GED4GEM project has been working towards the development of a first comprehensive publicly available Global Exposure Database (GED). This geospatial exposure database will eventually facilitate global earthquake risk and loss estimation through GEM’s OpenQuake platform. This paper provides an overview of the GED concepts, aims, datasets, and inference methodology, as well as the current implementation scheme, status and way forward.

  1. NERVE AS MODEL TEMPERATURE END ORGAN

    PubMed Central

    Bernhard, C. G.; Granit, Ragnar

    1946-01-01

    Rapid local cooling of mammalian nerve sets up a discharge which is preceded by a local temperature potential, the cooled region being electronegative relative to a normal portion of the nerve. Heating the nerve locally above its normal temperature similarly makes the heated region electronegative relative to a region at normal temperature, and again a discharge is set up from the heated region. These local temperature potentials, set up by the nerve itself, are held to serve as "generator potentials" and the mechanism found is regarded as the prototype for temperature end organs. PMID:19873460

  2. YMDB: the Yeast Metabolome Database

    PubMed Central

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  3. Modeling the influence of organic acids on soil weathering

    USGS Publications Warehouse

    Lawrence, Corey R.; Harden, Jennifer W.; Maher, Kate

    2014-01-01

    Biological inputs and organic matter cycling have long been regarded as important factors in the physical and chemical development of soils. In particular, the extent to which low molecular weight organic acids, such as oxalate, influence geochemical reactions has been widely studied. Although the effects of organic acids are diverse, there is strong evidence that organic acids accelerate the dissolution of some minerals. However, the influence of organic acids at the field-scale and over the timescales of soil development has not been evaluated in detail. In this study, a reactive-transport model of soil chemical weathering and pedogenic development was used to quantify the extent to which organic acid cycling controls mineral dissolution rates and long-term patterns of chemical weathering. Specifically, oxalic acid was added to simulations of soil development to investigate a well-studied chronosequence of soils near Santa Cruz, CA. The model formulation includes organic acid input, transport, decomposition, organic-metal aqueous complexation and mineral surface complexation in various combinations. Results suggest that although organic acid reactions accelerate mineral dissolution rates near the soil surface, the net response is an overall decrease in chemical weathering. Model results demonstrate the importance of organic acid input concentrations, fluid flow, decomposition and secondary mineral precipitation rates on the evolution of mineral weathering fronts. In particular, model soil profile evolution is sensitive to kaolinite precipitation and oxalate decomposition rates. The soil profile-scale modeling presented here provides insights into the influence of organic carbon cycling on soil weathering and pedogenesis and supports the need for further field-scale measurements of the flux and speciation of reactive organic compounds.

  4. A novel model for estimating organic chemical bioconcentration in agricultural plants

    SciTech Connect

    Hung, H.; Mackay, D.; Di Guardo, A.

    1995-12-31

    There is increasing recognition that much human and wildlife exposure to organic contaminants can be traced through the food chain to bioconcentration in vegetation. For risk assessment, there is a need for an accurate model to predict organic chemical concentrations in plants. Existing models range from relatively simple correlations of concentrations using octanol-water or octanol-air partition coefficients, to complex models involving extensive physiological data. To satisfy the need for a relatively accurate model of intermediate complexity, a novel approach has been devised to predict organic chemical concentrations in agricultural plants as a function of soil and air concentrations, without the need for extensive plant physiological data. The plant is treated as three compartments, namely, leaves, roots and stems (including fruit and seeds). Data readily available from the literature, including chemical properties, volume, density and composition of each compartment; metabolic and growth rate of plant; and readily obtainable environmental conditions at the site are required as input. Results calculated from the model are compared with observed and experimentally-determined concentrations. It is suggested that the model, which includes a physiological database for agricultural plants, gives acceptably accurate predictions of chemical partitioning between plants, air and soil.
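As context for what such intermediate-complexity models compute, a toy static equilibrium-partitioning estimate is sketched below. The partition-coefficient forms, lipid fractions, and concentrations are all hypothetical placeholders, and the abstract's model is a dynamic three-compartment (leaf/root/stem) formulation, not this static calculation:

```python
# Toy static equilibrium-partitioning estimate (hypothetical values throughout;
# NOT the three-compartment model described in the abstract).
log_kow = 4.0          # octanol-water partition coefficient (assumed chemical)
log_koa = 8.0          # octanol-air partition coefficient (assumed chemical)
c_air = 1e-6           # g/m^3, assumed air concentration
c_soil = 1e-3          # g/m^3 bulk soil, assumed soil concentration

f_lipid_leaf = 0.01    # illustrative lipid (octanol-equivalent) fractions
f_lipid_root = 0.01

# Leaves load from air via K_leaf-air ~ f_lipid * Koa; roots from soil via a
# crude Kow-scaled soil sorption correction (the factor 10 is arbitrary).
k_leaf_air = f_lipid_leaf * 10 ** log_koa
k_root_soil = f_lipid_root * 10 ** log_kow / 10.0

c_leaf = k_leaf_air * c_air      # g/m^3 in leaf tissue
c_root = k_root_soil * c_soil    # g/m^3 in root tissue
```

The appeal of this class of model is exactly what the abstract notes: a handful of chemical properties and compartment compositions replace detailed plant physiology.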

  5. MORPHIN: a web tool for human disease research by projecting model organism biology onto a human integrated gene network.

    PubMed

    Hwang, Sohyun; Kim, Eiru; Yang, Sunmo; Marcotte, Edward M; Lee, Insuk

    2014-07-01

Despite recent advances in human genetics, model organisms are indispensable for human disease research. Most human disease pathways are evolutionarily conserved among other species, where they may phenocopy the human condition or be associated with seemingly unrelated phenotypes. Much of the known gene-to-phenotype association information is distributed across diverse databases, growing rapidly due to new experimental techniques. Accessible bioinformatics tools will therefore facilitate translation of discoveries from model organisms into human disease biology. Here, we present a web-based discovery tool for human disease studies, MORPHIN (model organisms projected on a human integrated gene network), which prioritizes the most relevant human diseases for a given set of model organism genes, potentially highlighting new model systems for human diseases and providing context to model organism studies. Conceptually, MORPHIN investigates human diseases by an orthology-based projection of a set of model organism genes onto a genome-scale human gene network. MORPHIN then prioritizes human diseases by relevance to the projected model organism genes using two distinct methods: a conventional overlap-based gene set enrichment analysis and a network-based measure of closeness between the query and disease gene sets capable of detecting associations undetectable by the conventional overlap-based methods. MORPHIN is freely accessible at http://www.inetbio.org/morphin.

  6. MaizeGDB: The Maize Model Organism Database for Basic, Translational, and Applied Research

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In 2001, maize became the number one production crop in the world (with over 614 million tons produced; http://faostat.fao.org). Its success is due to the high productivity per acre in tandem with a wide variety of commercial uses: not only is maize an excellent source of food, feed, and fuel, its...

  7. Immediate Dissemination of Student Discoveries to a Model Organism Database Enhances Classroom-Based Research Experiences

    ERIC Educational Resources Information Center

    Wiley, Emily A.; Stover, Nicholas A.

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have…

8. Finding Mouse Models of Human Lymphomas and Leukemias using The Jackson Laboratory Mouse Tumor Biology Database

    PubMed Central

    Begley, Dale A.; Sundberg, John P.; Krupke, Debra M.; Neuhauser, Steven B.; Bult, Carol J.; Eppig, Janan T.; Morse, Herbert C.; Ward, Jerrold M.

    2015-01-01

Many mouse models have been created to study hematopoietic cancer types. There are over thirty hematopoietic tumor types and subtypes, both human and mouse, with various origins, characteristics and clinical prognoses. Determining the specific type of hematopoietic lesion produced in a mouse model and identifying mouse models that correspond to the human subtypes of these lesions has been a continuing challenge for the scientific community. The Mouse Tumor Biology Database (MTB; http://tumor.informatics.jax.org) is designed to facilitate the use of mouse models of human cancer by providing detailed histopathologic and molecular information on lymphoma subtypes, including expertly annotated online whole-slide scans, and by providing a repository for storing and querying data on specific lymphoma models. PMID:26302176

  9. NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL (NUDAPT): FACILITATING ADVANCEMENTS IN URBAN METEOROLOGY AND CLIMATE MODELING WITH COMMUNITY-BASED URBAN DATABASES

    EPA Science Inventory

    We discuss the initial design and application of the National Urban Database and Access Portal Tool (NUDAPT). This new project is sponsored by the USEPA and involves collaborations and contributions from many groups from federal and state agencies, and from private and academic i...

  10. Sphere-filled organ model for virtual surgery system.

    PubMed

    Suzuki, Shigeyuki; Suzuki, Naoki; Hattori, Asaki; Uchiyama, Akihiko; Kobayashi, Susumu

    2004-06-01

    We have been developing a virtual surgery system that is capable of simulating surgical maneuvers on elastic organs. In order to perform such maneuvers, we have created a deformable organ model using a sphere-filled method instead of the finite element method. This model is suited for real-time simulation and quantitative deformation. Furthermore, we have equipped this model with a sense of touch and a sense of force by connecting it to a force feedback device. However, in the initial stage the model became problematic when faced with complicated incisions. Therefore, we modified this model by developing an algorithm for organ deformation that performs various, complicated incisions while taking into account the effect of gravity. As a result, the sphere-filled model allowed our system to respond to various incisions that deform the organ. Thus, various physical manipulations that involve pressing, pinching, or incising an organ's surface can be performed. Furthermore, the deformation of the internal organ structures and changes in organ vasculature can be observed via the internal spheres' behavior.

  11. Databases for Microbiologists

    DOE PAGES

    Zhulin, Igor B.

    2015-05-26

Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  12. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  13. Lattice animal model of chromosome organization

    NASA Astrophysics Data System (ADS)

    Iyer, Balaji V. S.; Arya, Gaurav

    2012-07-01

    Polymer models tied together by constraints of looping and confinement have been used to explain many of the observed organizational characteristics of interphase chromosomes. Here we introduce a simple lattice animal representation of interphase chromosomes that combines the features of looping and confinement constraints into a single framework. We show through Monte Carlo simulations that this model qualitatively captures both the leveling off in the spatial distance between genomic markers observed in fluorescent in situ hybridization experiments and the inverse decay in the looping probability as a function of genomic separation observed in chromosome conformation capture experiments. The model also suggests that the collapsed state of chromosomes and their segregation into territories with distinct looping activities might be a natural consequence of confinement.
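As a baseline for the looping-and-confinement picture, an unconstrained random walk on a cubic lattice already shows the linear growth of mean-squared spatial distance with genomic separation; looping and confinement are what make the measured curve level off. A toy sketch of that baseline (not the authors' lattice-animal model; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
STEPS = np.array([[1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0],
                  [0, 0, 1], [0, 0, -1]])

def mean_sq_distance(n_steps, n_walks=2000):
    # Mean squared end-to-end distance over n_walks unconstrained random
    # walks of n_steps each on the cubic lattice (lattice spacing 1).
    choices = rng.integers(0, 6, size=(n_walks, n_steps))
    ends = STEPS[choices].sum(axis=1)      # (n_walks, 3) end positions
    return float(np.mean(np.sum(ends ** 2, axis=1)))

msd10, msd50 = mean_sq_distance(10), mean_sq_distance(50)
```

For a free walk E[r^2] is proportional to the genomic separation (here, about 10 and 50 lattice units squared); the leveling off seen in FISH data is the deviation from this baseline that the looping and confinement constraints reproduce.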

  14. Representational Translation with Concrete Models in Organic Chemistry

    ERIC Educational Resources Information Center

    Stull, Andrew T.; Hegarty, Mary; Dixon, Bonnie; Stieff, Mike

    2012-01-01

    In representation-rich domains such as organic chemistry, students must be facile and accurate when translating between different 2D representations, such as diagrams. We hypothesized that translating between organic chemistry diagrams would be more accurate when concrete models were used because difficult mental processes could be augmented by…

  15. Resilient organizations: matrix model and service line management.

    PubMed

    Westphal, Judith A

    2005-09-01

    Resilient organizations modify structures to meet the demands of the marketplace. The author describes a structure that enables multihospital organizations to innovate and rapidly adapt to changes. Service line management within a matrix model is an evolving organizational structure for complex systems in which nurses are pivotal members.

  16. Expatriate Training in International Nongovernmental Organizations: A Model for Research

    ERIC Educational Resources Information Center

    Chang, Wei-Wen

    2005-01-01

    In light of the massive tsunami relief efforts that were still being carried out by humanitarian organizations around the world when this article went to press, this article points out a lack of human resources development research in international nongovernmental organizations (INGOs) and proposes a conceptual model for future empirical research.…

  17. A REVIEW OF BIOACCUMULATION MODELING APPROACHES FOR PERSISTENT ORGANIC POLLUTANTS

    EPA Science Inventory

    Persistent organic pollutants and mercury are likely to bioaccumulate in biological components of the environment, including fish and wildlife. The complex and long-term dynamics involved with bioaccumulation are often represented with models. Current scientific developments in t...

  18. Animal models of female pelvic organ prolapse: lessons learned

    PubMed Central

    Couri, Bruna M; Lenis, Andrew T; Borazjani, Ali; Paraiso, Marie Fidela R; Damaser, Margot S

    2012-01-01

    Pelvic organ prolapse is a vaginal protrusion of female pelvic organs. It has high prevalence worldwide and represents a great burden to the economy. The pathophysiology of pelvic organ prolapse is multifactorial and includes genetic predisposition, aberrant connective tissue, obesity, advancing age, vaginal delivery and other risk factors. Owing to the long course prior to patients becoming symptomatic and ethical questions surrounding human studies, animal models are necessary and useful. These models can mimic different human characteristics – histological, anatomical or hormonal, but none present all of the characteristics at the same time. Major animal models include knockout mice, rats, sheep, rabbits and nonhuman primates. In this article we discuss different animal models and their utility for investigating the natural progression of pelvic organ prolapse pathophysiology and novel treatment approaches. PMID:22707980

  19. HITEMP derived spectral database for the prediction of jet engine exhaust infrared emission using a statistical band model

    NASA Astrophysics Data System (ADS)

    Lindermeir, E.; Beier, K.

    2012-08-01

The spectroscopic database HITEMP 2010 is used to upgrade the parameters of the statistical molecular band model which is part of the infrared signature prediction code NIRATAM (NATO InfraRed Air TArget Model). This band model was recommended by NASA and is applied in several codes that determine the infrared emission of combustion gases. The upgrade concerns the spectral absorption coefficients and line densities of the gases H2O, CO2, and CO in the spectral region 400-5000 cm-1 (2-25 μm) at a spectral resolution of 5 cm-1. The temperature range 100-3000 K is covered. Two methods to update the database are presented: the usually applied method as provided in the literature, and an alternative, more laborious procedure that employs least-squares fitting. The improvements achieved with both methods are demonstrated by comparing radiance spectra obtained from the band model to line-by-line results. The performance in a realistic scenario is investigated on the basis of measured and predicted spectra of a jet aircraft plume in afterburner mode.

  20. Self-organizing map models of language acquisition

    PubMed Central

    Li, Ping; Zhao, Xiaowei

    2013-01-01

    Connectionist models have had a profound impact on theories of language. While most early models were inspired by the classic parallel distributed processing architecture, recent models of language have explored various other types of models, including self-organizing models for language acquisition. In this paper, we aim at providing a review of the latter type of models, and highlight a number of simulation experiments that we have conducted based on these models. We show that self-organizing connectionist models can provide significant insights into long-standing debates in both monolingual and bilingual language development. We suggest future directions in which these models can be extended, to better connect with behavioral and neural data, and to make clear predictions in testing relevant psycholinguistic theories. PMID:24312061
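The self-organizing map at the core of such models is simple to sketch. The toy below trains a standard SOM (best-matching-unit search plus Gaussian-neighbourhood update) on uniform 2-D inputs standing in for lexical feature vectors; grid size, learning-rate and neighbourhood schedules are illustrative assumptions, not those of any published acquisition model:

```python
import numpy as np

rng = np.random.default_rng(1)

# 8x8 map of 2-D weight vectors; inputs are uniform points in the unit
# square, a stand-in for phonological/semantic feature vectors.
grid = 8
W = rng.uniform(-0.1, 0.1, size=(grid, grid, 2))
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing='ij'), axis=-1)

data = rng.uniform(0.0, 1.0, size=(2000, 2))

for t, xvec in enumerate(data):
    frac = 1.0 - t / len(data)
    lr = 0.5 * frac + 0.01          # learning rate anneals over training
    sigma = 3.0 * frac + 0.5        # neighbourhood radius shrinks
    # Best-matching unit: the map node whose weights are closest to the input.
    d = np.linalg.norm(W - xvec, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighbourhood pulls the BMU and its map neighbours toward
    # the input, which is what produces topological ordering on the map.
    g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
               / (2.0 * sigma ** 2))
    W += lr * g[..., None] * (xvec - W)
```

After training, nearby map units respond to nearby inputs; in acquisition models this self-organized topology is what lets lexical categories emerge without supervision.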

  1. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  2. Image Databases.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    Different kinds of pictorial databases are described with respect to aims, user groups, search possibilities, storage, and distribution. Some specific examples are given for databases used for the following purposes: (1) labor markets for artists; (2) document management; (3) telling a story; (4) preservation (archives and museums); (5) research;…

  3. Maize databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  4. Data Model and Relational Database Design for the New Jersey Water-Transfer Data System (NJWaTr)

    DTIC Science & Technology

    2003-01-01

between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer...for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water ... network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined

  5. Influence of dissolved organic carbon content on modelling natural organic matter acid-base properties.

    PubMed

    Garnier, Cédric; Mounier, Stéphane; Benaïm, Jean Yves

    2004-10-01

Natural organic matter (NOM) behaviour towards protons is an important parameter for understanding NOM fate in the environment. Moreover, it is necessary to determine NOM acid-base properties before investigating trace metal complexation by natural organic matter. This work focuses on the possibility of determining these acid-base properties by accurate and simple titrations, even at low organic matter concentrations. The experiments were conducted on concentrated and diluted solutions of humic and fulvic acids extracted from the Laurentian River, on concentrated and diluted model solutions of well-known simple molecules (acetic and phenolic acids), and on natural samples from the Seine river (France) which were not pre-concentrated. Titration experiments were modelled with a discrete model of 6 acidic sites, except for the model solutions. The modelling software used, PROSECE (Programme d'Optimisation et de SpEciation Chimique dans l'Environnement), was developed in our laboratory and is based on mass-balance equilibrium resolution. The results obtained on extracted organic matter and model solutions point out a threshold value for a confident determination of the studied organic matter's acid-base properties. They also show an aberrant decrease of the carboxylic/phenolic ratio with increasing sample dilution. This shift is neither due to any conformational effect, since it is also observed on model solutions, nor to ionic strength variations, which were controlled during all experiments. On the other hand, it could result from electrode malfunction at basic pH values, whose effect is amplified at low total concentrations of acidic sites. So, in our conditions, the limit for correct modelling of NOM acid-base properties is defined as 0.04 meq of total analysed acidic sites. As for the analysed natural samples, due to their high acidic-site content, it is possible to model their behaviour despite the low organic carbon concentration.
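A minimal discrete-site speciation sketch can show what such a titration model computes (this is not PROSECE itself, which additionally optimizes site concentrations and pKa values against measured curves): for a mixture of monoprotic acid sites, the pH at each titrant dose follows from solving the charge balance for [H+].

```python
import math

def ph_curve(acids, cb_values, kw=1e-14):
    """pH of a mixture of monoprotic acid sites titrated with strong base.

    acids: list of (total concentration in mol/L, pKa) pairs.
    cb_values: added strong-base concentrations (mol/L).
    Solves the charge balance Cb + [H+] = [OH-] + sum[A-] for [H+] by
    bisection in log space (the balance is monotone in [H+]).
    """
    def charge(h, cb):
        bound = sum(c * 10 ** -pka / (10 ** -pka + h) for c, pka in acids)
        return cb + h - kw / h - bound

    phs = []
    for cb in cb_values:
        lo, hi = 1e-14, 1.0
        for _ in range(100):
            mid = math.sqrt(lo * hi)        # geometric midpoint = log bisection
            if charge(mid, cb) > 0.0:
                hi = mid
            else:
                lo = mid
        phs.append(-math.log10(math.sqrt(lo * hi)))
    return phs

# 0.01 M of an acetic-acid-like site (pKa 4.76): pH before base addition,
# at half-equivalence, and at the equivalence point.
curve = ph_curve([(0.01, 4.76)], [0.0, 0.005, 0.01])
```

Fitting measured titration curves then reduces to adjusting the (concentration, pKa) pairs until curves like this match the data, which is the optimization step the PROSECE software performs.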

  6. Predicting long-term organic carbon dynamics in organically-amended soils using the CQESTR model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A process-based soil C model “CQESTR” was developed to simulate soil organic carbon (SOC) dynamics. The model has been validated successfully for North America, but needs to be tested in other geographic areas. We evaluated the predictive performance of CQESTR in a long-term (34-yr) SOC-depleted Eur...

  7. Drosophila melanogaster as a model organism to study nanotoxicity.

    PubMed

    Ong, Cynthia; Yung, Lin-Yue Lanry; Cai, Yu; Bay, Boon-Huat; Baeg, Gyeong-Hun

    2015-05-01

Drosophila melanogaster has been used as an in vivo model organism for the study of genetics and development for over 100 years. Recently, the fruit fly Drosophila was also developed as an in vivo model organism for toxicology studies, in particular, the field of nanotoxicity. The incorporation of nanomaterials into consumer and biomedical products is a cause for concern as nanomaterials are often associated with toxicity in many in vitro studies. In vivo animal studies of the toxicity of nanomaterials with rodents and other mammals are, however, limited due to high operational cost and ethical objections. Hence, Drosophila, a genetically tractable organism with distinct developmental stages and short life cycle, serves as an ideal organism to study nanomaterial-mediated toxicity. This review discusses the basic biology of Drosophila, the toxicity of nanomaterials, as well as how the Drosophila model can be used to study the toxicity of various types of nanomaterials.

  8. Pharmacophore modeling and three-dimensional database searching for drug design using catalyst.

    PubMed

    Kurogi, Y; Güner, O F

    2001-07-01

    Perceiving a pharmacophore is the first essential step towards understanding the interaction between a receptor and a ligand. Once a pharmacophore is established, a beneficial use of it is 3D database searching to retrieve novel compounds that would match the pharmacophore, without necessarily duplicating the topological features of known active compounds (hence remain independent of existing patents). As the 3D searching technology has evolved over the years, it has been effectively used for lead optimization, combinatorial library focusing, as well as virtual high-throughput screening. Clearly established as one of the successful computational tools in rational drug design, we present in this review article a brief history of the evolution of this technology and detailed algorithms of Catalyst, the latest 3D searching software to be released. We also provide brief summary of published successes with this technology, including two recent patent applications.

  9. Cyberkelp: an integrative approach to the modelling of flexible organisms.

    PubMed Central

    Denny, Mark W; Hale, Ben B

    2003-01-01

    Biomechanical models come in a variety of forms: conceptual models; physical models; and mathematical models (both of the sort written down on paper and the sort carried out on computers). There are model structures (such as insect flight muscle and the tendons of rats' tails), model organisms (such as the flying insect, Manduca sexta), even model systems of organisms (such as the communities that live on wave-swept rocky shores). These different types of models are typically employed separately, but their value often can be enhanced if their insights are integrated. In this brief report we explore a particular example of such integration among models, as applied to flexible marine algae. A conceptual model serves as a template for the construction of a mathematical model of a model species of giant kelp, and the validity of this numerical model is tested using physical models. The validated mathematical model is then used in conjunction with a computer-controlled tensile testing apparatus to simulate the loading regime placed on algal materials. The resulting information can be used to create a more precise mathematical model. PMID:14561344

  10. Comparing the Hematopoetic Syndrome Time Course in the NHP Animal Model to Radiation Accident Cases From the Database Search.

    PubMed

    Graessle, Dieter H; Dörr, Harald; Bennett, Alexander; Shapiro, Alla; Farese, Ann M; MacVittie, Thomas J; Meineke, Viktor

    2015-11-01

    Since controlled clinical studies on drug administration for the acute radiation syndrome are lacking, clinical data of human radiation accident victims as well as experimental animal models are the main sources of information. This leads to the question of how to compare and link clinical observations collected after human radiation accidents with experimental observations in non-human primate (NHP) models. Using the example of granulocyte counts in the peripheral blood following radiation exposure, approaches for adaptation between NHP and patient databases on data comparison and transformation are introduced. As a substitute for studying the effects of administration of granulocyte-colony stimulating factor (G-CSF) in human clinical trials, the method of mathematical modeling is suggested using the example of G-CSF administration to NHP after total body irradiation.

  11. Glycoproteomic and glycomic databases.

    PubMed

    Baycin Hizal, Deniz; Wolozny, Daniel; Colao, Joseph; Jacobson, Elena; Tian, Yuan; Krag, Sharon S; Betenbaugh, Michael J; Zhang, Hui

    2014-01-01

Protein glycosylation serves critical roles in the cellular and biological processes of many organisms. Aberrant glycosylation has been associated with many illnesses such as hereditary and chronic diseases like cancer, cardiovascular diseases, neurological disorders, and immunological disorders. Emerging mass spectrometry (MS) technologies that enable the high-throughput identification of glycoproteins and glycans have accelerated the analysis and made possible the creation of dynamic and expanding databases. Although glycosylation-related databases have been established by many laboratories and institutions, they are not yet widely known in the community. Our study reviews 15 different publicly available databases and identifies their key elements so that users can identify the most applicable platform for their analytical needs. These databases include biological information on the experimentally identified glycans and glycopeptides from various cells and organisms such as human, rat, mouse, fly and zebrafish. The features of these databases (7 for glycoproteomic data, 6 for glycomic data, and 2 for glycan-binding proteins) are summarized, including the enrichment techniques that are used for glycoproteome and glycan identification. Furthermore, databases such as Unipep, GlycoFly and GlycoFish, recently established by our group, are introduced. The unique features of each database, such as the analytical methods used and the bioinformatics tools available, are summarized. This information will be a valuable resource for the glycobiology community as it presents the analytical methods and glycosylation-related databases together in one compendium. It also represents a step towards the desired long-term goal of integrating the different glycosylation databases in order to better characterize and categorize glycoproteins and glycans for biomedical research.

  12. A vertically resolved, global, gap-free ozone database for assessing or constraining global climate model simulations

    NASA Astrophysics Data System (ADS)

    Bodeker, G. E.; Hassler, B.; Young, P. J.; Portmann, R. W.

    2013-02-01

High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the (N+1)-th vertical level. The initial guess field for the first fit level (20 km/58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses.
This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not treat stratospheric
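The two-stage expansion described above — fit coefficients expanded in Legendre polynomials for latitudinal structure and Fourier harmonics for seasonality — can be sketched as a single ordinary least-squares problem. The `basis_matrix` helper, the basis sizes, and the synthetic "Tier 0" samples below are illustrative assumptions, not the actual regression used to build the database:

```python
import numpy as np

def basis_matrix(lat, month, n_leg=4, n_four=2):
    """Regression basis: Legendre polynomials in sin(latitude) for
    latitudinal structure, each modulated by Fourier harmonics in month
    for seasonality. Basis sizes here are illustrative guesses."""
    x = np.sin(np.deg2rad(lat))             # map latitude into [-1, 1]
    t = 2 * np.pi * (np.asarray(month) - 0.5) / 12.0
    cols = []
    for l in range(n_leg):
        leg = np.polynomial.legendre.Legendre.basis(l)(x)
        cols.append(leg)                    # annual-mean term
        for k in range(1, n_four + 1):
            cols.append(leg * np.cos(k * t))    # seasonal harmonics
            cols.append(leg * np.sin(k * t))
    return np.column_stack(cols)

# Sparse synthetic "Tier 0" samples at scattered (latitude, month) points
rng = np.random.default_rng(0)
lat = rng.uniform(-85, 85, 300)
month = rng.integers(1, 13, 300).astype(float)
truth = 5 + 2 * np.sin(np.deg2rad(lat)) ** 2 + 0.5 * np.cos(2 * np.pi * (month - 0.5) / 12)
obs = truth + rng.normal(0, 0.1, lat.size)

# Least-squares fit, then evaluation on a gap-free 5-degree-zone grid
A = basis_matrix(lat, month)
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
glat, gmon = np.meshgrid(np.arange(-87.5, 90, 5), np.arange(1.0, 13.0), indexing="ij")
filled = (basis_matrix(glat.ravel(), gmon.ravel()) @ coef).reshape(glat.shape)
print(filled.shape)  # (36, 12): 36 latitude zones x 12 months
```

Evaluating the fitted basis on the full grid is what makes the product gap-free: the expansion interpolates smoothly across latitudes and months with no direct measurements.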

  13. A vertically resolved, global, gap-free ozone database for assessing or constraining global climate model simulations

    NASA Astrophysics Data System (ADS)

    Bodeker, G. E.; Hassler, B.; Young, P. J.; Portmann, R. W.

    2012-10-01

    High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two-dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the (N+1)-th vertical level. The initial guess field for the first fit level (20 km/58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses. This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not treat stratospheric

  14. Development of Novel Repellents Using Structure-Activity Modeling of Compounds in the USDA Archival Database

    DTIC Science & Technology

    2011-01-01

    used in efforts to develop QSAR models. Measurement of Repellent Efficacy; Screening for Repellency of Compounds with Unknown Toxicology. In screening...CPT) were used to develop Quantitative Structure-Activity Relationship (QSAR) models to predict repellency. Successful prediction of novel...acylpiperidine QSAR models employed 4 descriptors to describe the relationship between structure and repellent duration. The ANN model of the carboxamides did not

  15. Performance of a semi-automated approach for risk estimation using a common data model for longitudinal healthcare databases.

    PubMed

    Van Le, Hoa; Beach, Kathleen J; Powell, Gregory; Pattishall, Ed; Ryan, Patrick; Mera, Robertino M

    2013-02-01

    Different structures and coding schemes may limit rapid evaluation of a large pool of potential drug safety signals using multiple longitudinal healthcare databases. To overcome this restriction, a semi-automated approach utilising a common data model (CDM) and robust pharmacoepidemiologic methods was developed; however, its performance needed to be evaluated. Twenty-three established drug-safety associations from publications were reproduced in a healthcare claims database and four of these were also repeated in electronic health records. Concordance and discrepancy of pairwise estimates were assessed between the results derived from the publications and the results from this approach. For all 27 pairs, the observed agreement between the published results and the results from the semi-automated approach was greater than 85% and the kappa coefficient was 0.61 (95% CI: 0.19-1.00). Ln(IRR) differed by less than 50% for 13/27 pairs, and the IRR varied less than 2-fold for 19/27 pairs. Reproducibility based on the intra-class correlation coefficient was 0.54. Most covariates (>90%) in the publications were available for inclusion in the models. Once the study populations and inclusion/exclusion criteria were obtained from the literature, the analysis could be completed in 2-8 h. The semi-automated methodology using a CDM produced consistent risk estimates compared to the published findings for most selected drug-outcome associations, regardless of original study designs, databases, medications and outcomes. Further assessment of this approach is useful to understand its roles, strengths and limitations in rapidly evaluating safety signals.
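The agreement statistics quoted above (observed agreement above 85%, kappa of 0.61) can be reproduced in form with a short Cohen's kappa sketch. The paired ratings below are hypothetical stand-ins chosen for illustration, not the study's data:

```python
from collections import Counter

def cohens_kappa(pairs):
    """Cohen's kappa for paired categorical ratings.
    pairs: list of (rating_a, rating_b) tuples."""
    n = len(pairs)
    observed = sum(a == b for a, b in pairs) / n
    a_freq = Counter(a for a, _ in pairs)
    b_freq = Counter(b for _, b in pairs)
    # chance agreement from the marginal frequencies of each rater
    expected = sum(a_freq[c] * b_freq[c] for c in a_freq) / n ** 2
    return observed, (observed - expected) / (1 - expected)

# Hypothetical concordance calls for 27 published-vs-CDM estimate pairs:
# each rating records whether the estimate indicated elevated risk (1) or not (0)
pairs = [(1, 1)] * 18 + [(0, 0)] * 5 + [(1, 0)] * 2 + [(0, 1)] * 2
agreement, kappa = cohens_kappa(pairs)
print(round(agreement, 3), round(kappa, 3))  # prints 0.852 0.614
```

Kappa corrects raw agreement for the agreement expected by chance from each rater's marginals, which is why it sits well below the 85% observed figure.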

  16. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    NASA Astrophysics Data System (ADS)

    Ganbavale, G.; Zuend, A.; Marcolli, C.; Peter, T.

    2015-01-01

    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model applicable for aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~ 275 to ~ 400 K. With the goal of improving the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures. To this end we introduce a new parameterisation for the temperature dependence. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multi-component system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~ 190 to ~ 440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. 
The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of 28% in

  17. Dietary Uptake Models Used for Modeling the Bioaccumulation of Organic Contaminants in Fish

    EPA Science Inventory

    Numerous models have been developed to predict the bioaccumulation of organic chemicals in fish. Although chemical dietary uptake can be modeled using assimilation efficiencies, bioaccumulation models fall into two distinct groups. The first group implicitly assumes that assimila...

  18. Human Immunodeficiency Virus Reverse Transcriptase and Protease Sequence Database: an expanded data model integrating natural language text and sequence analysis programs.

    PubMed

    Kantor, R; Machekano, R; Gonzales, M J; Dupnik, K; Schapiro, J M; Shafer, R W

    2001-01-01

    The HIV Reverse Transcriptase and Protease Sequence Database is an on-line relational database that catalogs evolutionary and drug-related sequence variation in the human immunodeficiency virus (HIV) reverse transcriptase (RT) and protease enzymes, the molecular targets of anti-HIV therapy (http://hivdb.stanford.edu). The database contains a compilation of nearly all published HIV RT and protease sequences, including submissions from International Collaboration databases and sequences published in journal articles. Sequences are linked to data about the source of the sequence sample and the antiretroviral drug treatment history of the individual from whom the isolate was obtained. During the past year 3500 sequences have been added and the data model has been expanded to include drug susceptibility data on sequenced isolates. Database content has also been integrated with didactic text and the output of two sequence analysis programs.

  19. Spatial Arrangement of Organic Compounds on a Model Mineral Surface: Implications for Soil Organic Matter Stabilization

    SciTech Connect

    Petridis, Loukas; Ambaye, Haile Arena; Jagadamma, Sindhu; Kilbey, S. Michael; Lokitz, Bradley S; Lauter, Valeria; Mayes, Melanie

    2014-01-01

    The complexity of the mineral organic carbon interface may influence the extent of stabilization of organic carbon compounds in soils, which is important for global climate futures. The nanoscale structure of a model interface was examined here by depositing films of organic carbon compounds of contrasting chemical character, hydrophilic glucose and amphiphilic stearic acid, onto a soil mineral analogue (Al2O3). Neutron reflectometry, a technique which provides depth-sensitive insight into the organization of the thin films, indicates that glucose molecules reside in a layer between Al2O3 and stearic acid, a result that was verified by water contact angle measurements. Molecular dynamics simulations reveal the thermodynamic driving force behind glucose partitioning on the mineral interface: The entropic penalty of confining the less mobile glucose on the mineral surface is lower than for stearic acid. The fundamental information obtained here helps rationalize how complex arrangements of organic carbon on soil mineral surfaces may arise.

  20. Principles of chromatin organization in yeast: relevance of polymer models to describe nuclear organization and dynamics.

    PubMed

    Wang, Renjie; Mozziconacci, Julien; Bancaud, Aurélien; Gadal, Olivier

    2015-06-01

    Nuclear organization can impact on all aspects of the genome life cycle. This organization is thoroughly investigated by advanced imaging and chromosome conformation capture techniques, providing considerable amount of datasets describing the spatial organization of chromosomes. In this review, we will focus on polymer models to describe chromosome statics and dynamics in the yeast Saccharomyces cerevisiae. We suggest that the equilibrium configuration of a polymer chain tethered at both ends and placed in a confined volume is consistent with the current literature, implying that local chromatin interactions play a secondary role in yeast nuclear organization. Future challenges are to reach an integrated multi-scale description of yeast chromosome organization, which is crucially needed to improve our understanding of the regulation of genomic transaction.

  1. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to come up with a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.
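A Bayesian network reduces clinical questions like the occult-node example to posterior probability queries over a directed graph of conditional probability tables. A minimal sketch of inference by enumeration on a hypothetical two-node network (all structure and probabilities invented for illustration, not taken from the clinical models in the abstract):

```python
# Toy two-node network:  OccultNodes -> RegionalFailure
# All probabilities below are made up for illustration.
p_occult = {True: 0.25, False: 0.75}      # prior P(occult involvement)
p_fail_given = {True: 0.30, False: 0.05}  # P(regional failure | occult state)

def posterior_occult_given_failure():
    """P(occult | regional failure) by Bayes' rule: enumerate the joint
    distribution over the parent, then normalise by the evidence."""
    joint = {o: p_occult[o] * p_fail_given[o] for o in (True, False)}
    evidence = sum(joint.values())
    return joint[True] / evidence

print(round(posterior_occult_given_failure(), 3))  # prints 0.667
```

Larger networks generalise this by summing the joint distribution over all unobserved variables; the appeal noted in the abstract is that each conditional table can encode either hard data or elicited expert knowledge.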

  2. A Workforce Design Model: Providing Energy to Organizations in Transition

    ERIC Educational Resources Information Center

    Halm, Barry J.

    2011-01-01

    The purpose of this qualitative study was to examine the change in performance realized by a professional services organization, which resulted in the Life Giving Workforce Design (LGWD) model through a grounded theory research design. This study produced a workforce design model characterized as an organizational blueprint that provides virtuous…

  3. Simple model of self-organized biological evolution

    SciTech Connect

    de Boer, J.; Derrida, B.; Flyvbjerg, H.; Jackson, A. D.; Wettig, T. (The Isaac Newton Institute for Mathematical Sciences, 20 Clarkson Road, Cambridge CB4 0EH; Laboratoire de Physique Statistique, École Normale Supérieure, 24 rue Lhomond, F-75005 Paris; Service de Physique Théorique, Centre d'Études Nucléaires de Saclay, F-91191 Gif-sur-Yvette; CONNECT, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen)

    1994-08-08

    We give an exact solution of a recently proposed self-organized critical model of biological evolution. We show that the model has a power law distribution of durations of coevolutionary "avalanches" with a mean field exponent 3/2. We also calculate analytically the finite size effects which cut off this power law at times of the order of the system size.
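The model in question is of the Bak-Sneppen type, in which the least-fit species and its neighbours are repeatedly assigned new random fitnesses. A minimal simulation sketch on a ring (the random-neighbour variant solved exactly in the paper differs in detail; parameters here are arbitrary):

```python
import random

def bak_sneppen(n_species=64, steps=2000, seed=1):
    """Minimal Bak-Sneppen-style evolution model on a ring: at each step
    the least-fit species and its two neighbours receive new random
    fitnesses. A sketch of the model class, not the exactly solved variant."""
    random.seed(seed)
    fitness = [random.random() for _ in range(n_species)]
    thresholds = []
    for _ in range(steps):
        i = min(range(n_species), key=fitness.__getitem__)
        thresholds.append(fitness[i])          # fitness of the mutated site
        for j in (i - 1, i, (i + 1) % n_species):
            fitness[j] = random.random()       # i-1 == -1 wraps the ring
    return fitness, thresholds

fitness, thresholds = bak_sneppen()
# After self-organization the selected minimum stays below a critical
# threshold, while most species sit above it.
late = thresholds[len(thresholds) // 2:]
print(round(max(late), 2), round(sum(f > 0.5 for f in fitness) / len(fitness), 2))
```

Avalanches in this picture are runs of consecutive steps whose selected minima stay below a chosen threshold; their duration distribution is the power law with exponent 3/2 computed in the paper's mean-field solution.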

  4. Simple model of self-organized biological evolution

    NASA Astrophysics Data System (ADS)

    de Boer, Jan; Derrida, Bernard; Flyvbjerg, Henrik; Jackson, Andrew D.; Wettig, Tilo

    1994-08-01

    We give an exact solution of a recently proposed self-organized critical model of biological evolution. We show that the model has a power law distribution of durations of coevolutionary ``avalanches'' with a mean field exponent 3/2. We also calculate analytically the finite size effects which cut off this power law at times of the order of the system size.

  5. Mechanism for production of secondary organic aerosols and their representation in atmospheric models. Final report

    SciTech Connect

    Seinfeld, J.H.; Flagan, R.C.

    1999-06-07

    This document contains the following: organic aerosol formation from the oxidation of biogenic hydrocarbons; gas/particle partitioning of semivolatile organic compounds to model inorganic, organic, and ambient smog aerosols; and representation of secondary organic aerosol formation in atmospheric models.

  6. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    EPA Science Inventory

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  7. An Online Database and User Community for Physical Models in the Engineering Classroom

    ERIC Educational Resources Information Center

    Welch, Robert W.; Klosky, J. Ledlie

    2007-01-01

    This paper will present information about the Web site--www.handsonmechanics.com, the process to develop the Web site, the vetting and management process for inclusion of physical models by the faculty at West Point, and how faculty at other institutions can add physical models and participate in the site as it grows. Each physical model has a…

  8. The expanding epigenetic landscape of non-model organisms.

    PubMed

    Bonasio, Roberto

    2015-01-01

    Epigenetics studies the emergence of different phenotypes from a single genotype. Although these processes are essential to cellular differentiation and transcriptional memory, they are also widely used in all branches of the tree of life by organisms that require plastic but stable adaptation to their physical and social environment. Because of the inherent flexibility of epigenetic regulation, a variety of biological phenomena can be traced back to evolutionary adaptations of few conserved molecular pathways that converge on chromatin. For these reasons chromatin biology and epigenetic research have a rich history of chasing discoveries in a variety of model organisms, including yeast, flies, plants and humans. Many more fascinating examples of epigenetic plasticity lie outside the realm of model organisms and have so far been only sporadically investigated at a molecular level; however, recent progress on sequencing technology and genome editing tools have begun to blur the lines between model and non-model organisms, opening numerous new avenues for investigation. Here, I review examples of epigenetic phenomena in non-model organisms that have emerged as potential experimental systems, including social insects, fish and flatworms, and are becoming accessible to molecular approaches.

  9. The expanding epigenetic landscape of non-model organisms

    PubMed Central

    Bonasio, Roberto

    2015-01-01

    Epigenetics studies the emergence of different phenotypes from a single genotype. Although these processes are essential to cellular differentiation and transcriptional memory, they are also widely used in all branches of the tree of life by organisms that require plastic but stable adaptation to their physical and social environment. Because of the inherent flexibility of epigenetic regulation, a variety of biological phenomena can be traced back to evolutionary adaptations of few conserved molecular pathways that converge on chromatin. For these reasons chromatin biology and epigenetic research have a rich history of chasing discoveries in a variety of model organisms, including yeast, flies, plants and humans. Many more fascinating examples of epigenetic plasticity lie outside the realm of model organisms and have so far been only sporadically investigated at a molecular level; however, recent progress on sequencing technology and genome editing tools have begun to blur the lines between model and non-model organisms, opening numerous new avenues for investigation. Here, I review examples of epigenetic phenomena in non-model organisms that have emerged as potential experimental systems, including social insects, fish and flatworms, and are becoming accessible to molecular approaches. PMID:25568458

  10. Quantitative model studies for interfaces in organic electronic devices

    NASA Astrophysics Data System (ADS)

    Gottfried, J. Michael

    2016-11-01

    In organic light-emitting diodes and similar devices, organic semiconductors are typically contacted by metal electrodes. Because the resulting metal/organic interfaces have a large impact on the performance of these devices, their quantitative understanding is indispensable for the further rational development of organic electronics. A study by Kröger et al (2016 New J. Phys. 18 113022) of an important single-crystal based model interface provides detailed insight into its geometric and electronic structure and delivers valuable benchmark data for computational studies. In view of the differences between typical surface-science model systems and real devices, a ‘materials gap’ is identified that needs to be addressed by future research to make the knowledge obtained from fundamental studies even more beneficial for real-world applications.

  11. Making Organisms Model Human Behavior: Situated Models in North-American Alcohol Research, 1950-onwards

    PubMed Central

    Leonelli, Sabina; Ankeny, Rachel A.; Nelson, Nicole C.; Ramsden, Edmund

    2014-01-01

    Argument: We examine the criteria used to validate the use of nonhuman organisms in North-American alcohol addiction research from the 1950s to the present day. We argue that this field, where the similarities between behaviors in humans and non-humans are particularly difficult to assess, has addressed questions of model validity by transforming the situatedness of non-human organisms into an experimental tool. We demonstrate that model validity does not hinge on the standardization of one type of organism in isolation, as is often the case with genetic model organisms. Rather, organisms are viewed as necessarily situated: they cannot be understood as a model for human behavior in isolation from their environmental conditions. Hence the environment itself is standardized as part of the modeling process; and model validity is assessed with reference to the environmental conditions under which organisms are studied. PMID:25233743

  12. Driving risk assessment using near-crash database through data mining of tree-based model.

    PubMed

    Wang, Jianqiang; Zheng, Yang; Li, Xiaofei; Yu, Chenfei; Kodaka, Kenji; Li, Keqiang

    2015-11-01

    This paper considers a comprehensive naturalistic driving experiment to collect driving data under potential threats on actual Chinese roads. Using acquired real-world naturalistic driving data, a near-crash database is built, which contains vehicle status, potential crash objects, driving environment and road types, weather condition, and driver information and actions. The aims of this study are summarized into two aspects: (1) to cluster different driving-risk levels involved in near-crashes, and (2) to unveil the factors that greatly influence the driving-risk level. A novel method to quantify the driving-risk level of a near-crash scenario is proposed by clustering the braking process characteristics, namely maximum deceleration, average deceleration, and percentage reduction in vehicle kinetic energy. A classification and regression tree (CART) is employed to unveil the relationship among driving risk, driver/vehicle characteristics, and road environment. The results indicate that the velocity when braking, triggering factors, potential object type, and potential crash type exerted the greatest influence on the driving-risk levels in near-crashes.
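The three braking-process characteristics used above to quantify driving-risk level can be computed directly from a sampled speed trace. A sketch with a hypothetical near-crash trace (the sampling interval and speed values are invented, not taken from the naturalistic driving data):

```python
def braking_features(speeds_mps, dt=0.1):
    """Characterise a braking event from a sampled speed trace (m/s).
    Returns (max deceleration, average deceleration, % reduction in
    vehicle kinetic energy), the three quantities clustered in the study."""
    decels = [(v0 - v1) / dt for v0, v1 in zip(speeds_mps, speeds_mps[1:])]
    braking = [d for d in decels if d > 0]         # keep decelerating samples
    max_decel = max(braking)
    avg_decel = sum(braking) / len(braking)
    ke_drop = 1 - (min(speeds_mps) ** 2) / (speeds_mps[0] ** 2)
    return max_decel, avg_decel, 100 * ke_drop

# Hypothetical near-crash: 20 m/s braked down to 8 m/s, sampled every 0.3 s
trace = [20.0, 19.2, 18.0, 16.2, 14.0, 12.0, 10.4, 9.2, 8.4, 8.0]
mx, avg, ke = braking_features(trace, dt=0.3)
print(round(mx, 2), round(avg, 2), round(ke, 1))  # prints 7.33 4.44 84.0
```

Feature vectors like this one are what a clustering step can group into discrete risk levels, after which a tree-based model relates the level to driver, vehicle, and environment factors.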

  13. Model Organisms Fact Sheet: Using Model Organisms to Study Health and Disease

    MedlinePlus

    ... good understanding of biological complexity, in which many molecular networks operate in synchrony inside our bodies. Researchers ... organisms can also help reveal changes at the molecular level that are associated with diseases and identify ...

  14. Research on an expert system for database operation of simulation-emulation math models. Volume 1, Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.

    1985-01-01

    The results of the first phase of Research on an Expert System for Database Operation of Simulation/Emulation Math Models are described. Techniques from artificial intelligence (AI) were brought to bear on task domains of interest to NASA Marshall Space Flight Center. One such domain is simulation of spacecraft attitude control systems. Two related software systems were developed and delivered to NASA. One was a generic simulation model for spacecraft attitude control, written in FORTRAN. The second was an expert system which understands the usage of a class of spacecraft attitude control simulation software and can assist the user in running the software. This NASA Expert Simulation System (NESS), written in LISP, contains general knowledge about digital simulation, specific knowledge about the simulation software, and self-knowledge.

  15. Modelling of organic matter dynamics during the composting process.

    PubMed

    Zhang, Y; Lashermes, G; Houot, S; Doublet, J; Steyer, J P; Zhu, Y G; Barriuso, E; Garnier, P

    2012-01-01

    Composting urban organic wastes enables the recycling of their organic fraction in agriculture. The objective of this new composting model was to gain a clearer understanding of the dynamics of organic fractions during composting and to predict the final quality of composts. Organic matter was split into different compartments according to its degradability. The nature and size of these compartments were studied using a biochemical fractionation method. The evolution of each compartment and the microbial biomass were simulated, as was the total organic carbon loss corresponding to organic carbon mineralisation into CO(2). Twelve composting experiments from different feedstocks were used to calibrate and validate our model. We obtained a unique set of estimated parameters. Good agreement was achieved between the simulated and experimental results that described the evolution of different organic fractions, with the exception of some composts, owing to a poor simulation of the cellulosic and soluble pools. The degradation rate of the cellulosic fraction appeared to be highly variable and dependent on the origin of the feedstocks. The initial soluble fraction could contain some degradable and recalcitrant elements that are not easily accessible experimentally.
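The compartment structure described — pools of differing degradability feeding a microbial biomass pool, with part of the degraded carbon mineralised to CO2 — can be sketched as a first-order system integrated with explicit Euler steps. All pool sizes, rate constants, and the biomass-recycling rule below are invented for illustration; the published model's compartments and parameters differ:

```python
def simulate_compost(days=100, dt=0.1):
    """First-order decay of organic-matter pools feeding microbial biomass;
    a fraction of assimilated C is mineralised to CO2, and dead biomass is
    recycled to the soluble pool. All values are illustrative only."""
    pools = {"soluble": 30.0, "cellulosic": 50.0, "recalcitrant": 20.0}  # g C
    k = {"soluble": 0.20, "cellulosic": 0.05, "recalcitrant": 0.005}     # 1/day
    yield_frac = 0.4            # fraction of degraded C built into biomass
    k_death = 0.02              # biomass decay rate (1/day)
    biomass, co2 = 1.0, 0.0
    for _ in range(int(days / dt)):
        degraded = {p: k[p] * pools[p] * dt for p in pools}
        for p in pools:
            pools[p] -= degraded[p]
        flux = sum(degraded.values())
        death = k_death * biomass * dt
        biomass += yield_frac * flux - death
        co2 += (1 - yield_frac) * flux      # mineralised carbon loss
        pools["soluble"] += death           # dead biomass recycled
    return pools, biomass, co2

pools, biomass, co2 = simulate_compost()
total = sum(pools.values()) + biomass + co2
print(round(total, 2))  # prints 101.0: carbon is conserved across pools
```

Tracking the CO2 pool explicitly is what lets such a model be calibrated against measured total organic carbon loss, as done with the twelve composting experiments.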

  16. Application of an OCT data-based mathematical model of the foveal pit in Parkinson disease.

    PubMed

    Ding, Yin; Spund, Brian; Glazman, Sofya; Shrier, Eric M; Miri, Shahnaz; Selesnick, Ivan; Bodis-Wollner, Ivan

    2014-11-01

    Spectral-domain optical coherence tomography (OCT) has shown remarkable utility in the study of retinal disease and has helped to characterize the fovea in Parkinson disease (PD) patients. We developed a detailed mathematical model based on raw OCT data to allow differentiation of the foveae of PD patients from healthy controls. Of the various models we tested, a difference of a Gaussian and a polynomial was found to have "the best fit". The decision was based on mathematical evaluation of the fit of the model to the data of 45 control eyes versus 50 PD eyes. We compared the model parameters in the two groups using receiver-operating characteristics (ROC). A single parameter discriminated 70% of PD eyes from controls, while using seven of the eight parameters of the model allowed 76% to be discriminated. The future clinical utility of mathematical modeling in the study of diffuse neurodegenerative conditions that also affect the fovea is discussed.
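A "difference of a Gaussian and a polynomial" pit profile can be sketched and fitted to synthetic scan data. The parameterisation, the noise level, and the grid-search fit below are assumptions for illustration only; the published model's eight-parameter form is not specified here:

```python
import numpy as np

def fovea_model(x, a, sigma, c0, c2):
    """Retinal thickness profile as (polynomial) - (Gaussian): the Gaussian
    carves the foveal pit out of a smooth background. The exact published
    parameterisation may differ; this is a guess for illustration."""
    return c0 + c2 * x ** 2 - a * np.exp(-x ** 2 / (2 * sigma ** 2))

# Synthetic OCT-like radial thickness scan (position in mm, thickness in um)
rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 81)
true_params = (120.0, 0.35, 300.0, 10.0)
scan = fovea_model(x, *true_params) + rng.normal(0, 2.0, x.size)

# Crude grid search over pit depth/width; background terms by linear lstsq
A = np.column_stack([np.ones_like(x), x ** 2])
best = None
for a in np.linspace(80, 160, 41):
    for sigma in np.linspace(0.2, 0.5, 31):
        g = np.exp(-x ** 2 / (2 * sigma ** 2))
        coef, *_ = np.linalg.lstsq(A, scan + a * g, rcond=None)
        resid = A @ coef - a * g - scan
        sse = float(resid @ resid)
        if best is None or sse < best[0]:
            best = (sse, a, sigma, *coef)
print(round(best[1], 1), round(best[2], 2))  # recovered pit depth and width
```

Once fitted per eye, parameters such as pit depth and width become the inputs to the ROC analysis that separates PD eyes from controls.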

  17. A Multivariate Mixture Model to Estimate the Accuracy of Glycosaminoglycan Identifications Made by Tandem Mass Spectrometry (MS/MS) and Database Search.

    PubMed

    Chiu, Yulun; Schliekelman, Paul; Orlando, Ron; Sharp, Joshua S

    2017-02-01

    We present a statistical model to estimate the accuracy of derivatized heparin and heparan sulfate (HS) glycosaminoglycan (GAG) assignments to tandem mass (MS/MS) spectra made by the first published database search application, GAG-ID. Employing a multivariate expectation-maximization algorithm, this statistical model distinguishes correct from ambiguous and incorrect database search results when computing the probability that heparin/HS GAG assignments to spectra are correct based upon database search scores. Using GAG-ID search results for spectra generated from a defined mixture of 21 synthesized tetrasaccharide sequences as well as seven spectra of longer defined oligosaccharides, we demonstrate that the computed probabilities are accurate and have high power to discriminate between correctly, ambiguously, and incorrectly assigned heparin/HS GAGs. This analysis makes it possible to filter large MS/MS database search results with predictable false identification error rates.
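The core of such a model is expectation-maximisation over mixture components. A reduced sketch with a two-component, one-dimensional Gaussian mixture over search scores (the published model is multivariate and distinguishes correct, ambiguous, and incorrect classes; the score distributions below are synthetic):

```python
import numpy as np

def em_two_gaussians(scores, iters=200):
    """EM for a two-component 1-D Gaussian mixture over search scores.
    Returns (weights, means, stds, P(correct | score)), assuming the
    higher-mean component corresponds to correct assignments."""
    x = np.asarray(scores, float)
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()]) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each score
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-9
    hi = int(np.argmax(mu))
    return w, mu, sd, r[:, hi]

rng = np.random.default_rng(7)
scores = np.concatenate([rng.normal(2, 1, 400),     # incorrect assignments
                         rng.normal(8, 1.5, 200)])  # correct assignments
w, mu, sd, p_correct = em_two_gaussians(scores)
print(np.round(np.sort(mu), 1))
```

The per-spectrum posterior `p_correct` is what allows filtering a large search result at a predictable false-identification rate: keep only assignments whose posterior exceeds a chosen cutoff.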

  18. Electrochemical model of the polyaniline based organic memristive device

    SciTech Connect

    Demin, V. A. E-mail: victor.erokhin@fis.unipr.it; Erokhin, V. V. E-mail: victor.erokhin@fis.unipr.it; Kashkarov, P. K.; Kovalchuk, M. V.

    2014-08-14

    The electrochemical organic memristive device with polyaniline active layer is a stand-alone device designed and realized for reproduction of some synapse properties in the innovative electronic circuits, including the neuromorphic networks capable for learning. In this work, a new theoretical model of the polyaniline memristive is presented. The developed model of organic memristive functioning was based on the detailed consideration of possible electrochemical processes occuring in the active zone of this device. Results of the calculation have demonstrated not only the qualitative explanation of the characteristics observed in the experiment but also the quantitative similarities of the resultant current values. It is shown how the memristive could behave at zero potential difference relative to the reference electrode. This improved model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on the organic memristive devices.

  19. Considerations when choosing a genetic model organism for metabolomics studies.

    PubMed

    Reed, Laura K; Baer, Charles F; Edison, Arthur S

    2017-02-01

    Model organisms are important in many areas of chemical biology. In metabolomics, model organisms can provide excellent samples for methods development as well as the foundation of comparative phylometabolomics, which will become possible as metabolomics applications expand. Comparative studies of conserved and unique metabolic pathways will help in the annotation of metabolites as well as provide important new targets of investigation in biology and biomedicine. However, most chemical biologists are not familiar with genetics, which needs to be considered when choosing a model organism. In this review we summarize the strengths and weaknesses of several genetic systems, including natural isolates, recombinant inbred lines, and genetic mutations. We also discuss methods to detect targets of selection on the metabolome.

  20. Modelling the fate of oxidisable organic contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Barry, D. A.; Prommer, H.; Miller, C. T.; Engesgaard, P.; Brun, A.; Zheng, C.

    Subsurface contamination by organic chemicals is a pervasive environmental problem, susceptible to remediation by natural or enhanced attenuation approaches or more highly engineered methods such as pump-and-treat, amongst others. Such remediation approaches, along with risk assessment or the pressing need to address complex scientific questions, have driven the development of integrated modelling tools that incorporate physical, biological and geochemical processes. We provide a comprehensive modelling framework, including geochemical reactions and interphase mass transfer processes such as sorption/desorption, non-aqueous phase liquid dissolution and mineral precipitation/dissolution, all of which can be in equilibrium or kinetically controlled. This framework is used to simulate microbially mediated transformation/degradation processes and the attendant microbial population growth and decay. Solution algorithms, particularly the split-operator (SO) approach, are described, along with a brief résumé of numerical solution methods. Some of the available numerical models are described, mainly those constructed using available flow, transport and geochemical reaction packages. The general modelling framework is illustrated by pertinent examples, showing the degradation of dissolved organics by microbial activity limited by the availability of nutrients or electron acceptors (i.e., changing redox states), as well as concomitant secondary reactions. Two field-scale modelling examples are discussed, the Vejen landfill (Denmark) and an example where metal contamination is remediated by redox changes wrought by injection of a dissolved organic compound. A summary is provided of current and likely future challenges to modelling of oxidisable organics in the subsurface.
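The split-operator (SO) approach mentioned above alternates a transport step with a reaction step within each time step. A one-dimensional sketch with upwind advection and Monod-limited degradation of a dissolved organic coupled to a finite electron acceptor (grid, velocity, and all kinetic parameters are invented for illustration):

```python
import numpy as np

def step_advect(c, velocity, dx, dt):
    """Explicit first-order upwind advection (velocity > 0, zero inflow)."""
    cfl = velocity * dt / dx          # must be <= 1 for stability
    out = c.copy()
    out[1:] -= cfl * (c[1:] - c[:-1])
    out[0] -= cfl * c[0]
    return out

def step_react(organic, acceptor, dt, vmax=0.5, k_org=1.0, k_acc=0.2, stoich=2.0):
    """Dual-Monod kinetics: degradation requires both the organic substrate
    and the electron acceptor; stoich units of acceptor per unit organic."""
    rate = vmax * organic / (k_org + organic) * acceptor / (k_acc + acceptor)
    organic = organic - rate * dt
    acceptor = acceptor - stoich * rate * dt
    return np.maximum(organic, 0.0), np.maximum(acceptor, 0.0)

# 1-D plume: an organic pulse migrating through an aquifer with finite O2
n, dx, dt, v = 100, 1.0, 0.2, 1.0
organic = np.zeros(n); organic[10:20] = 5.0
acceptor = np.full(n, 1.0)
for _ in range(200):                  # operator splitting: advect, then react
    organic = step_advect(organic, v, dx, dt)
    organic, acceptor = step_react(organic, acceptor, dt)
print(round(float(organic.sum()), 1), round(float(acceptor.min()), 2))
```

The splitting lets each sub-problem use its natural solver (a transport code and a geochemical reaction package), which is exactly how the coupled models surveyed in the abstract are typically assembled.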

  1. Data-based empirical model reduction as an approach to data mining

    NASA Astrophysics Data System (ADS)

    Ghil, M.

    2012-12-01

Science is very much about finding order in chaos, patterns in oodles of data, signal in noise, and so on. One can see any scientific description as a model of the data, whether verbal, statistical or dynamical. In this talk, I will provide an approach to such descriptions that relies on constructing nonlinear, stochastically forced models, via empirical model reduction (EMR). EMR constructs a low-order nonlinear system of prognostic equations driven by stochastic forcing; it estimates both the dynamical operator and the properties of the driving noise directly from observations or from a high-order model's simulation. The multi-level EMR structure for modeling the stochastic forcing allows one to capture feedback between high- and low-frequency components of the variability, thus parameterizing the "fast scales," often referred to as the "noise," in terms of the memory of the "slow" scales, referred to as the "signal." EMR models have been shown to capture quite well features of the high-dimensional data sets involved, in the frequency domain as well as in the spatial domain. Illustrative examples will involve capturing correctly patterns in data sets that are either purely observational or generated by high-end models. They will be selected from intraseasonal variability of the mid-latitude atmosphere, seasonal-to-interannual variability of the sea surface temperature field, and air-sea interaction in the Southern Ocean. The work described in this talk is joint with M.D. Chekroun, D. Kondrashov, S. Kravtsov, and A.W. Robertson. Recent results on using a modified and improved form of EMR modeling for predictive purposes will be provided in a separate talk by D. Kondrashov, M. Chekroun and M. Ghil on "Data-Driven Model Reduction and Climate Prediction: Nonlinear Stochastic, Energy-Conserving Models With Memory Effects."
[Figure caption: Detailed budget of mean phase-space tendencies for the plane spanned by EOFs 1 and 4 of an intermediate-complexity model of mid-latitude flow.]
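The lowest level of the EMR construction can be illustrated with a toy scalar example: regress observed tendencies on polynomials of the state, and treat the regression residual as the stochastic forcing to be modeled at the next level. All data and coefficients below are synthetic, not from the talk.

```python
import numpy as np

# Toy sketch of the first EMR level (synthetic data): fit a constant +
# linear + quadratic deterministic operator to observed tendencies by
# least squares; the residual is the "noise" passed to the next level.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)                       # observed scalar state
dxdt = -0.5 * x + 0.1 * x**2 + 0.2 * rng.standard_normal(500)

# Design matrix with constant, linear and quadratic terms.
A = np.column_stack([np.ones_like(x), x, x**2])
coeffs, *_ = np.linalg.lstsq(A, dxdt, rcond=None)
residual = dxdt - A @ coeffs                       # stochastic forcing estimate
```

In the full multi-level EMR, this residual would itself be regressed on the state and on previous-level residuals, which is how memory effects of the slow variables enter the noise model.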

  2. Surface complexation modeling of organic acid sorption to goethite

    SciTech Connect

    Evanko, C.R.; Dzombak, D.A.

    1999-06-15

Surface complexation modeling was performed using the Generalized Two-Layer Model for a series of low molecular weight organic acids. Sorption of these organic acids to goethite was investigated in a previous study to assess the influence of particular structural features on sorption. Here, the ability to describe the observed sorption behavior for compounds with similar structural features using surface complexation modeling was investigated. A set of surface reactions and equilibrium constants yielding optimal data fits was obtained for each organic acid over a range of total sorbate concentrations. Surface complexation modeling successfully described sorption of a number of the simple organic acids, but an additional hydrophobic component was needed to describe sorption behavior of some compounds with significant hydrophobic character. These compounds exhibited sorption behavior that was inconsistent with ligand exchange mechanisms since sorption did not decrease with increasing total sorbate concentration and/or exceeded surface site saturation. Hydrophobic interactions appeared to be most significant for the compound containing a 5-carbon aliphatic chain. Comparison of optimized equilibrium constants for similar surface species showed that model results were consistent with observed sorption behavior: equilibrium constants were highest for compounds having adjacent carboxylic groups, lower for compounds with adjacent phenolic groups, and lowest for compounds with phenolic groups in the ortho position relative to a carboxylic group. Surface complexation modeling was also performed to fit sorption data for Suwannee River fulvic acid.
The data could be described well using reactions and

  3. Surface Complexation Modeling of Organic Acid Sorption to Goethite.

    PubMed

    Evanko; Dzombak

    1999-06-15

    Surface complexation modeling was performed using the Generalized Two-Layer Model for a series of low molecular weight organic acids. Sorption of these organic acids to goethite was investigated in a previous study to assess the influence of particular structural features on sorption. Here, the ability to describe the observed sorption behavior for compounds with similar structural features using surface complexation modeling was investigated. A set of surface reactions and equilibrium constants yielding optimal data fits was obtained for each organic acid over a range of total sorbate concentrations. Surface complexation modeling successfully described sorption of a number of the simple organic acids, but an additional hydrophobic component was needed to describe sorption behavior of some compounds with significant hydrophobic character. These compounds exhibited sorption behavior that was inconsistent with ligand exchange mechanisms since sorption did not decrease with increasing total sorbate concentration and/or exceeded surface site saturation. Hydrophobic interactions appeared to be most significant for the compound containing a 5-carbon aliphatic chain. Comparison of optimized equilibrium constants for similar surface species showed that model results were consistent with observed sorption behavior: equilibrium constants were highest for compounds having adjacent carboxylic groups, lower for compounds with adjacent phenolic groups, and lowest for compounds with phenolic groups in the ortho position relative to a carboxylic group. Surface complexation modeling was also performed to fit sorption data for Suwannee River fulvic acid. The data could be described well using reactions and constants similar to those for pyromellitic acid. This four-carboxyl group compound may be useful as a model for fulvic acid with respect to sorption. 
Other simple organic acids having multiple carboxylic and phenolic functional groups were identified as potential models for humic
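As a toy illustration of the general fitting task described in these two records, optimizing an equilibrium constant against sorption data, here is a single-site Langmuir-style sketch (not the Generalized Two-Layer Model itself; all values are assumed):

```python
import numpy as np

# Illustrative sketch only: one surface reaction with equilibrium
# constant K, fitted to synthetic sorption data by grid search.
site_total = 1e-4                        # mol/L of surface sites (assumed)
c = np.logspace(-6, -3, 20)              # dissolved organic acid, mol/L
k_true = 1e4
sorbed = site_total * k_true * c / (1 + k_true * c)   # synthetic "data"

def sse(k):
    model = site_total * k * c / (1 + k * c)
    return np.sum((model - sorbed) ** 2)

k_grid = np.logspace(2, 6, 400)
k_fit = k_grid[np.argmin([sse(k) for k in k_grid])]
```

A real surface complexation fit optimizes several such constants simultaneously while solving the full speciation problem at each pH, but the least-squares objective is the same idea.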

  4. Enhancing medical database semantics.

    PubMed Central

    Leão, B. de F.; Pavan, A.

    1995-01-01

Medical databases deal with dynamic, heterogeneous and fuzzy data. Modeling such a complex domain demands powerful semantic data modeling methodologies. This paper describes GSM-Explorer, a CASE tool that allows for the creation of relational databases using semantic data modeling techniques. GSM-Explorer fully incorporates the Generic Semantic Data Model (GSM), enabling knowledge engineers to model the application domain with the abstraction mechanisms of generalization/specialization, association and aggregation. The tool generates a structure that implements persistent database objects through the automatic generation of customized ANSI SQL scripts that sustain the semantics defined at the higher level. This paper emphasizes the system architecture and the mapping of the semantic model into relational tables. The present status of the project and its further developments are discussed in the Conclusions. PMID:8563288
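The generalization/specialization mapping the paper describes is commonly implemented by giving the subtype table the same primary key as its supertype. A minimal sketch with illustrative table names (not GSM-Explorer's actual generated SQL):

```python
import sqlite3

# Sketch of mapping a generalization/specialization hierarchy onto
# relational tables: a Person supertype and a Patient subtype that
# shares the supertype's primary key. Names are illustrative.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
CREATE TABLE patient (               -- specialization of person
    person_id  INTEGER PRIMARY KEY REFERENCES person(person_id),
    blood_type TEXT
);
""")
db.execute("INSERT INTO person VALUES (1, 'Ada')")
db.execute("INSERT INTO patient VALUES (1, 'O+')")
row = db.execute("""
    SELECT p.name, t.blood_type
    FROM person p JOIN patient t USING (person_id)
""").fetchone()
```

The shared primary key makes the "is-a" semantics queryable with a plain join, which is the kind of semantics-preserving SQL generation the abstract refers to.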

  5. Modeling aspects of estuarine eutrophication. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-05-01

    The bibliography contains citations concerning mathematical modeling of existing water quality stresses in estuaries, harbors, bays, and coves. Both physical hydraulic and numerical models for estuarine circulation are discussed. (Contains a minimum of 96 citations and includes a subject term index and title list.)

  6. Fractured rock hydrogeology: Modeling studies. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-07-01

    The bibliography contains citations concerning the use of mathematical and conceptual models in describing the hydraulic parameters of fluid flow in fractured rock. Topics include the use of tracers, solute and mass transport studies, and slug test analyses. The use of modeling techniques in injection well performance prediction is also discussed. (Contains 250 citations and includes a subject term index and title list.)

  7. Millennial Students' Mental Models of Search: Implications for Academic Librarians and Database Developers

    ERIC Educational Resources Information Center

    Holman, Lucy

    2011-01-01

    Today's students exhibit generational differences in the way they search for information. Observations of first-year students revealed a proclivity for simple keyword or phrases searches with frequent misspellings and incorrect logic. Although no students had strong mental models of search mechanisms, those with stronger models did construct more…

  8. MOAtox: A Comprehensive Mode of Action and Acute Aquatic Toxicity Database for Predictive Model Development

    EPA Science Inventory

The mode of toxic action (MOA) has been recognized as a key determinant of chemical toxicity and as an alternative to chemical class-based predictive toxicity modeling. However, the development of quantitative structure-activity relationship (QSAR) and other models has been limite...

  9. An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design

    NASA Technical Reports Server (NTRS)

    Lin, Risheng; Afjeh, Abdollah A.

    2003-01-01

Crucial to an efficient aircraft simulation-based design is a robust data modeling methodology for both recording the information and providing data transfer readily and reliably. To meet this goal, data modeling issues involved in the aircraft multidisciplinary design are first analyzed in this study. Next, an XML-based, extensible data object model for multidisciplinary aircraft design is constructed and implemented. The implementation of the model through aircraft data binding allows the design applications to access and manipulate any disciplinary data with a lightweight and easy-to-use API. In addition, language-independent representation of aircraft disciplinary data in the model fosters interoperability amongst heterogeneous systems, thereby facilitating data sharing and exchange between various design tools and systems.
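The kind of language-independent disciplinary data the abstract describes can be sketched with a small XML fragment and a lightweight parsing call; the element names here are illustrative, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

# Sketch of an XML disciplinary data record that any tool can parse.
# Element and attribute names are invented for illustration.
doc = """
<aircraft name="demo">
  <aerodynamics><wing_area unit="m2">52.4</wing_area></aerodynamics>
  <propulsion><thrust unit="kN">110.0</thrust></propulsion>
</aircraft>
"""
root = ET.fromstring(doc)
wing_area = float(root.findtext("aerodynamics/wing_area"))
```

Because the format is plain XML, a C++ solver, a Java framework, and a Python script can all bind to the same record, which is the interoperability argument the paper makes.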

  10. Modeling of Spatially Correlated Energetic Disorder in Organic Semiconductors.

    PubMed

    Kordt, Pascal; Andrienko, Denis

    2016-01-12

    Mesoscale modeling of organic semiconductors relies on solving an appropriately parametrized master equation. Essential ingredients of the parametrization are site energies (driving forces), which enter the charge transfer rate between pairs of neighboring molecules. Site energies are often Gaussian-distributed and are spatially correlated. Here, we propose an algorithm that generates these energies with a given Gaussian distribution and spatial correlation function. The method is tested on an amorphous organic semiconductor, DPBIC, illustrating that the accurate description of correlations is essential for the quantitative modeling of charge transport in amorphous mesophases.
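The general technique for the problem this paper addresses, generating Gaussian site energies with a prescribed spatial correlation, can be sketched via Cholesky factorization of an assumed exponential covariance (the paper's own algorithm is designed to scale better than this dense approach; all parameter values below are assumptions):

```python
import numpy as np

# Sketch: correlated Gaussian site energies from a dense covariance
# matrix. Exponential correlation function and parameters are assumed
# for illustration, not taken from the paper.
rng = np.random.default_rng(1)
positions = np.linspace(0.0, 10.0, 50)       # 1D site coordinates (nm, assumed)
sigma, corr_len = 0.1, 2.0                    # energy std (eV) and correlation length (assumed)
dist = np.abs(positions[:, None] - positions[None, :])
cov = sigma**2 * np.exp(-dist / corr_len)     # covariance from correlation function
energies = np.linalg.cholesky(cov) @ rng.standard_normal(50)
```

Multiplying white noise by the Cholesky factor L of the covariance C yields samples with covariance L Lᵀ = C, which is exactly the "given Gaussian distribution and spatial correlation function" requirement; the dense factorization costs O(N³), hence the need for more scalable algorithms at mesoscale system sizes.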

  11. Modelling and design of all-organic electromechanic transducers

    NASA Astrophysics Data System (ADS)

    Fortuna, L.; Graziani, S.; La Rosa, M.; Nicolosi, D.; Sicurella, G.; Umana, E.

    2009-04-01

    The recent development of innovative organic materials with intriguing features such as their flexibility, lightness, low cost and easy manufacturability, has driven researchers to develop innovative smart applications based on such kind of materials. In this work, all-organic electromechanical transducers, with both sensing and acting capabilities are proposed. The actuator and sensor models have been identified by using a grey box approach, as a function of membrane geometric parameters. The obtained models have been validated through comparison among estimated and experimental data.

  12. Construction and analysis of a human hepatotoxicity database suitable for QSAR modeling using post-market safety data.

    PubMed

    Zhu, Xiao; Kruhlak, Naomi L

    2014-07-03

    Drug-induced liver injury (DILI) is one of the most common drug-induced adverse events (AEs) leading to life-threatening conditions such as acute liver failure. It has also been recognized as the single most common cause of safety-related post-market withdrawals or warnings. Efforts to develop new predictive methods to assess the likelihood of a drug being a hepatotoxicant have been challenging due to the complexity and idiosyncrasy of clinical manifestations of DILI. The FDA adverse event reporting system (AERS) contains post-market data that depict the morbidity of AEs. Here, we developed a scalable approach to construct a hepatotoxicity database using post-market data for the purpose of quantitative structure-activity relationship (QSAR) modeling. A set of 2029 unique and modelable drug entities with 13,555 drug-AE combinations was extracted from the AERS database using 37 hepatotoxicity-related query preferred terms (PTs). In order to determine the optimal classification scheme to partition positive from negative drugs, a manually-curated DILI calibration set composed of 105 negatives and 177 positives was developed based on the published literature. The final classification scheme combines hepatotoxicity-related PT data with supporting information that optimize the predictive performance across the calibration set. Data for other toxicological endpoints related to liver injury such as liver enzyme abnormalities, cholestasis, and bile duct disorders, were also extracted and classified. Collectively, these datasets can be used to generate a battery of QSAR models that assess a drug's potential to cause DILI.

  13. Energy supply and demand modeling. (Latest citations from the NTIS bibliographic database). Published Search

    SciTech Connect

    Not Available

    1994-12-01

    The bibliography contains citations concerning the use of mathematical models in trend analysis and forecasting of energy supply and demand factors. Models are presented for the industrial, transportation, and residential sectors. Aspects of long term energy strategies and markets are discussed at the global, national, state, and regional levels. Energy demand and pricing, and econometrics of energy, are explored for electric utilities and natural resources, such as coal, oil, and natural gas. Energy resources are modeled both for fuel usage and for reserves. (Contains 250 citations and includes a subject term index and title list.)

  14. Energy supply and demand modeling. (Latest citations from the NTIS bibliographic database). Published Search

    SciTech Connect

    Not Available

    1994-01-01

    The bibliography contains citations concerning the use of mathematical models in trend analysis and forecasting of energy supply and demand factors. Models are presented for the industrial, transportation, and residential sectors. Aspects of long term energy strategies and markets are discussed at the global, national, state, and regional levels. Energy demand and pricing, and econometrics of energy, are explored for electric utilities and natural resources, such as coal, oil, and natural gas. Energy resources are modeled both for fuel usage and for reserves. (Contains 250 citations and includes a subject term index and title list.)

  15. Prediction of organ toxicity endpoints by QSAR modeling based on precise chemical-histopathology annotations.

    PubMed

    Myshkin, Eugene; Brennan, Richard; Khasanova, Tatiana; Sitnik, Tatiana; Serebriyskaya, Tatiana; Litvinova, Elena; Guryanov, Alexey; Nikolsky, Yuri; Nikolskaya, Tatiana; Bureeva, Svetlana

    2012-09-01

    The ability to accurately predict the toxicity of drug candidates from their chemical structure is critical for guiding experimental drug discovery toward safer medicines. Under the guidance of the MetaTox consortium (Thomson Reuters, CA, USA), which comprised toxicologists from the pharmaceutical industry and government agencies, we created a comprehensive ontology of toxic pathologies for 19 organs, classifying pathology terms by pathology type and functional organ substructure. By manual annotation of full-text research articles, the ontology was populated with chemical compounds causing specific histopathologies. Annotated compound-toxicity associations defined histologically from rat and mouse experiments were used to build quantitative structure-activity relationship models predicting subcategories of liver and kidney toxicity: liver necrosis, liver relative weight gain, liver lipid accumulation, nephron injury, kidney relative weight gain, and kidney necrosis. All models were validated using two independent test sets and demonstrated overall good performance: initial validation showed 0.80-0.96 sensitivity (correctly predicted toxic compounds) and 0.85-1.00 specificity (correctly predicted non-toxic compounds). Later validation against a test set of compounds newly added to the database in the 2 years following initial model generation showed 75-87% sensitivity and 60-78% specificity. General hepatotoxicity and nephrotoxicity models were less accurate, as expected for more complex endpoints.
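The sensitivity and specificity figures quoted above are the fractions of toxic and non-toxic compounds classified correctly; a minimal sketch on toy labels:

```python
# Sensitivity = correctly flagged toxic compounds / all toxic compounds;
# specificity = correctly cleared non-toxic compounds / all non-toxic.
# Labels below are toy values for illustration.
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

Reporting both numbers, as the study does for its initial and follow-up validations, guards against a model that looks accurate only because one class dominates the test set.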

  16. Workshop meeting report Organs-on-Chips: human disease models.

    PubMed

    van de Stolpe, Anja; den Toonder, Jaap

    2013-09-21

    The concept of "Organs-on-Chips" has recently evolved and has been described as 3D (mini-) organs or tissues consisting of multiple and different cell types interacting with each other under closely controlled conditions, grown in a microfluidic chip, and mimicking the complex structures and cellular interactions in and between different cell types and organs in vivo, enabling the real time monitoring of cellular processes. In combination with the emerging iPSC (induced pluripotent stem cell) field this development offers unprecedented opportunities to develop human in vitro models for healthy and diseased organ tissues, enabling the investigation of fundamental mechanisms in disease development, drug toxicity screening, drug target discovery and drug development, and the replacement of animal testing. Capturing the genetic background of the iPSC donor in the organ or disease model carries the promise to move towards "in vitro clinical trials", reducing costs for drug development and furthering the concept of personalized medicine and companion diagnostics. During the Lorentz workshop (Leiden, September 2012) an international multidisciplinary group of experts discussed the current state of the art, available and emerging technologies, applications and how to proceed in the field. Organ-on-a-chip platform technologies are expected to revolutionize cell biology in general and drug development in particular.

  17. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    SciTech Connect

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C.; Loudos, George; Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris

    2013-11-15

Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process and the imaging system's performance restrictions, and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties. Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS-based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Geant4 Application for Tomographic Emission (GATE) at three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines with
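A simple stand-in for the textural heterogeneity measures used in such studies is the coefficient of variation of tumor-voxel uptake, which is zero for the homogeneous case (a); the voxel values below are arbitrary, not patient data:

```python
import numpy as np

# Toy heterogeneity metric: coefficient of variation over tumor voxels.
# The study uses richer 3D textural features; this illustrates the idea.
tumor_voxels = np.array([2.0, 2.5, 1.8, 3.1, 2.2])   # arbitrary uptake values
cv = tumor_voxels.std() / tumor_voxels.mean()         # > 0: heterogeneous
homogeneous = np.full(5, tumor_voxels.mean())         # case (a): cv would be 0
```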

  18. Air Force Materiel Command (AFMC) Modeling, Simulation, and Analysis (MS&A) Interactive Database.

    DTIC Science & Technology

    1995-09-01

Systems Acquisition Manager's Guide for the Use of Models and Simulation, developed by Colonel Lalit K. Piplani, Lt Colonel Joseph G. Mercer, and Lt Colonel Richard O. Roop ... Perkinson, Richard C. Data Analysis: The Key to Data Base Design. Wellesley, MA: QED Information Sciences, Inc., 1984.

  19. Implementing marine organic aerosols into the GEOS-Chem model

    NASA Astrophysics Data System (ADS)

    Gantt, B.; Johnson, M. S.; Crippa, M.; Prévôt, A. S. H.; Meskhidze, N.

    2014-09-01

    Marine organic aerosols (MOA) have been shown to play an important role in tropospheric chemistry by impacting surface mass, cloud condensation nuclei, and ice nuclei concentrations over remote marine and coastal regions. In this work, an online marine primary organic aerosol emission parameterization, designed to be used for both global and regional models, was implemented into the GEOS-Chem model. The implemented emission scheme improved the large underprediction of organic aerosol concentrations in clean marine regions (normalized mean bias decreases from -79% when using the default settings to -12% when marine organic aerosols are added). Model predictions were also in good agreement (correlation coefficient of 0.62 and normalized mean bias of -36%) with hourly surface concentrations of MOA observed during the summertime at an inland site near Paris, France. Our study shows that MOA have weaker coastal-to-inland concentration gradients than sea-salt aerosols, leading to several inland European cities having > 10% of their surface submicron organic aerosol mass concentration with a marine source. The addition of MOA tracers to GEOS-Chem enabled us to identify the regions with large contributions of freshly-emitted or aged aerosol having distinct physicochemical properties, potentially indicating optimal locations for future field studies.
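The normalized mean bias (NMB) statistic quoted in this abstract is the summed model-minus-observation difference divided by the summed observations; a sketch on toy values:

```python
import numpy as np

# Normalized mean bias: sum(model - obs) / sum(obs).
# Toy numbers below illustrate an underprediction (negative NMB).
def normalized_mean_bias(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return (model - obs).sum() / obs.sum()

nmb = normalized_mean_bias([0.5, 0.8], [1.0, 1.0])
```

An NMB of -79% thus means the model's summed concentrations fall 79% short of the observations, which is the gap the new emission scheme closes to -12%.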

  20. Implementing marine organic aerosols into the GEOS-Chem model

    NASA Astrophysics Data System (ADS)

    Gantt, B.; Johnson, M. S.; Crippa, M.; Prévôt, A. S. H.; Meskhidze, N.

    2015-03-01

    Marine-sourced organic aerosols (MOAs) have been shown to play an important role in tropospheric chemistry by impacting surface mass, cloud condensation nuclei, and ice nuclei concentrations over remote marine and coastal regions. In this work, an online marine primary organic aerosol emission parameterization, designed to be used for both global and regional models, was implemented into the GEOS-Chem (Global Earth Observing System Chemistry) model. The implemented emission scheme improved the large underprediction of organic aerosol concentrations in clean marine regions (normalized mean bias decreases from -79% when using the default settings to -12% when marine organic aerosols are added). Model predictions were also in good agreement (correlation coefficient of 0.62 and normalized mean bias of -36%) with hourly surface concentrations of MOAs observed during the summertime at an inland site near Paris, France. Our study shows that MOAs have weaker coastal-to-inland concentration gradients than sea-salt aerosols, leading to several inland European cities having >10% of their surface submicron organic aerosol mass concentration with a marine source. The addition of MOA tracers to GEOS-Chem enabled us to identify the regions with large contributions of freshly emitted or aged aerosol having distinct physicochemical properties, potentially indicating optimal locations for future field studies.

  1. Quantitative proteomics by metabolic labeling of model organisms.

    PubMed

    Gouw, Joost W; Krijgsveld, Jeroen; Heck, Albert J R

    2010-01-01

    In the biological sciences, model organisms have been used for many decades and have enabled the gathering of a large proportion of our present day knowledge of basic biological processes and their derailments in disease. Although in many of these studies using model organisms, the focus has primarily been on genetics and genomics approaches, it is important that methods become available to extend this to the relevant protein level. Mass spectrometry-based proteomics is increasingly becoming the standard to comprehensively analyze proteomes. An important transition has been made recently by moving from charting static proteomes to monitoring their dynamics by simultaneously quantifying multiple proteins obtained from differently treated samples. Especially the labeling with stable isotopes has proved an effective means to accurately determine differential expression levels of proteins. Among these, metabolic incorporation of stable isotopes in vivo in whole organisms is one of the favored strategies. In this perspective, we will focus on methodologies to stable isotope label a variety of model organisms in vivo, ranging from relatively simple organisms such as bacteria and yeast to Caenorhabditis elegans, Drosophila, and Arabidopsis up to mammals such as rats and mice. We also summarize how this has opened up ways to investigate biological processes at the protein level in health and disease, revealing conservation and variation across the evolutionary tree of life.
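The quantitative readout of the stable-isotope labeling strategies described above is typically a heavy-to-light intensity ratio aggregated over a protein's peptides; a toy sketch (all intensities invented):

```python
# Toy relative quantification for a metabolically labeled sample pair:
# relative protein abundance as the summed heavy/light peptide intensity
# ratio. Values are illustrative, not real mass-spectrometry data.
peptides = [
    {"light": 1.0e6, "heavy": 2.1e6},
    {"light": 0.8e6, "heavy": 1.5e6},
]
ratio = sum(p["heavy"] for p in peptides) / sum(p["light"] for p in peptides)
```

Because heavy- and light-labeled samples are mixed before processing, this ratio is robust to losses in sample preparation, which is the main accuracy advantage of metabolic labeling the review highlights.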

  2. Implementing Marine Organic Aerosols Into the GEOS-Chem Model

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew S.

    2015-01-01

    Marine-sourced organic aerosols (MOA) have been shown to play an important role in tropospheric chemistry by impacting surface mass, cloud condensation nuclei, and ice nuclei concentrations over remote marine and coastal regions. In this work, an online marine primary organic aerosol emission parameterization, designed to be used for both global and regional models, was implemented into the GEOS-Chem model. The implemented emission scheme improved the large under-prediction of organic aerosol concentrations in clean marine regions (normalized mean bias decreases from -79% when using the default settings to -12% when marine organic aerosols are added). Model predictions were also in good agreement (correlation coefficient of 0.62 and normalized mean bias of -36%) with hourly surface concentrations of MOA observed during the summertime at an inland site near Paris, France. Our study shows that MOA have weaker coastal-to-inland concentration gradients than sea-salt aerosols, leading to several inland European cities having > 10% of their surface submicron organic aerosol mass concentration with a marine source. The addition of MOA tracers to GEOS-Chem enabled us to identify the regions with large contributions of freshly-emitted or aged aerosol having distinct physicochemical properties, potentially indicating optimal locations for future field studies.

  3. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    PubMed

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.
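The learning setup can be sketched with kernel ridge regression on toy features; the descriptor, kernel, and data below are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

# Sketch of learning an atomic property from environment features with
# kernel ridge regression and a Gaussian kernel. Features and targets
# are synthetic stand-ins for the paper's descriptors and multipoles.
rng = np.random.default_rng(3)
X_train = rng.standard_normal((40, 3))           # toy atomic environments
y_train = X_train[:, 0] * 0.5                    # toy "multipole" target

def gaussian_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = gaussian_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(40), y_train)  # ridge-regularized fit
y_pred = gaussian_kernel(X_train[:1], X_train) @ alpha   # predict for one atom
```

Training separate models per charge state, as the paper does, amounts to fitting separate coefficient vectors `alpha` on the neutral, cationic, and anionic subsets.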

  4. GENERAL: Self-organized Criticality Model for Ocean Internal Waves

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Lin, Min; Qiao, Fang-Li; Hou, Yi-Jun

    2009-03-01

In this paper, we present a simple spring-block model for ocean internal waves based on self-organized criticality (SOC). The oscillations of the water blocks in the model display power-law behavior with an exponent of -2 in the frequency domain, which is similar to the current and sea water temperature spectra in the actual ocean and the universal Garrett and Munk deep ocean internal wave model [Geophysical Fluid Dynamics 2 (1972) 225; J. Geophys. Res. 80 (1975) 291]. The influence of the ratio of the driving force to the spring coefficient on SOC behavior in the model is also discussed.
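A -2 spectral exponent of the kind reported here can be checked numerically on any time series by fitting log-power against log-frequency; the sketch below uses a random walk, whose spectrum is known to fall off as 1/f², as an illustrative stand-in for the model's output:

```python
import numpy as np

# Estimate a spectral power-law exponent from a time series.
# A random walk is used as a toy 1/f^2 process (not the paper's model).
rng = np.random.default_rng(2)
signal = np.cumsum(rng.standard_normal(4096))     # toy 1/f^2 time series
power = np.abs(np.fft.rfft(signal)) ** 2          # periodogram
freq = np.fft.rfftfreq(4096)
mask = (freq > 0) & (freq < 0.1)                  # fit the power-law range
slope = np.polyfit(np.log(freq[mask]), np.log(power[mask]), 1)[0]
```

The fitted slope comes out near -2, which is the same diagnostic one would apply to the model's block oscillations or to observed current and temperature records.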

  5. CyanoBase: the cyanobacteria genome database update 2010.

    PubMed

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.

  6. Porphyry copper deposits of the world: database, map, and grade and tonnage models

    USGS Publications Warehouse

    Singer, Donald A.; Berger, Vladimir Iosifovich; Moring, Barry C.

    2005-01-01

Mineral deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types are significantly different, and (2) many types occur in different geologic settings that can be identified from geologic maps. Mineral deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables or for robust estimation of undiscovered deposits; thus we need mineral deposit models. Globally based deposit models allow recognition of important features because the global models demonstrate how common different features are. Well-designed and -constructed deposit models allow geologists to know from observed geologic environments the possible mineral deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral deposit models play the central role in transforming geoscience information to a form useful to policy makers. The foundation of mineral deposit models is information about known deposits-the purpose of this publication is to make this kind of information available in digital form for porphyry copper deposits. This report is an update of an earlier publication about porphyry copper deposits. In this report we have added 84 new porphyry copper deposits and removed 12 deposits. In addition, some errors have been corrected and a number of deposits have had some information, such as grades, tonnages, locations, or ages revised. This publication contains a computer file of information on porphyry copper deposits from around the world. 
It also presents new grade and tonnage models for porphyry copper deposits and for three subtypes of porphyry copper

  7. Porphyry Copper Deposits of the World: Database and Grade and Tonnage Models, 2008

    USGS Publications Warehouse

    Singer, Donald A.; Berger, Vladimir I.; Moring, Barry C.

    2008-01-01

This report is an update of earlier publications about porphyry copper deposits (Singer, Berger, and Moring, 2002; Singer, Berger, and Moring, 2005). The update was necessary because of new information about substantial increases in resources in some deposits and because we revised locations of some deposits so that they are consistent with images in Google Earth. In this report we have added new porphyry copper deposits and removed a few incorrectly classed deposits. In addition, some errors have been corrected and a number of deposits have had some information, such as grades, tonnages, locations, or ages revised. Colleagues have helped identify places where improvements were needed. Mineral deposit models are important in exploration planning and quantitative resource assessments for a number of reasons including: (1) grades and tonnages among deposit types are significantly different, and (2) many types occur in different geologic settings that can be identified from geologic maps. Mineral deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables or for robust estimation of undiscovered deposits - thus we need mineral-deposit models. Globally based deposit models allow recognition of important features because the global models demonstrate how common different features are. Well-designed and -constructed deposit models allow geologists to know from observed geologic environments the possible mineral deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral deposit models play the central role in transforming geoscience information to a form useful to policy makers. The foundation of mineral deposit models is information about known deposits - the purpose of this publication is to make this kind of information available in digital form for porphyry copper deposits.

  8. Sediment-Hosted Zinc-Lead Deposits of the World - Database and Grade and Tonnage Models

    USGS Publications Warehouse

    Singer, Donald A.; Berger, Vladimir I.; Moring, Barry C.

    2009-01-01

This report provides information on sediment-hosted zinc-lead mineral deposits based on the geologic settings that are observed on regional geologic maps. The foundation of mineral-deposit models is information about known deposits. The purpose of this publication is to make this kind of information available in digital form for sediment-hosted zinc-lead deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments: Grades and tonnages among deposit types are significantly different, and many types occur in different geologic settings that can be identified from geologic maps. Mineral-deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables, or for robust estimation of undiscovered deposits - thus, we need mineral-deposit models. Globally based deposit models allow recognition of important features because the global models demonstrate how common different features are. Well-designed and -constructed deposit models allow geologists to know from observed geologic environments the possible mineral-deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral-deposit models play the central role in transforming geoscience information to a form useful to policy makers. This publication contains a computer file of information on sediment-hosted zinc-lead deposits from around the world. It also presents new grade and tonnage models for nine types of these deposits and a file allowing locations of all deposits to be plotted in Google Earth. The data are presented in FileMaker Pro, Excel, and text files to make the information available to as many users as possible.

  9. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    NASA Astrophysics Data System (ADS)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environmental and climate modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). Validation of the model (applied to the remaining 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral.
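The GAM calibration described in the abstract above can be illustrated with a minimal backfitting sketch: an additive model y = f1(x1) + f2(x2) is fitted by alternately smoothing each term's partial residuals. The synthetic data, the bin-average smoother, and all parameter values below are illustrative assumptions, not the LUCAS setup.

```python
import math
import random

def bin_smoother(x, r, bins=10):
    """Fit residuals r as a piecewise-constant (bin-average) function of x."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins or 1.0
    sums, counts = [0.0] * bins, [0] * bins
    for xi, ri in zip(x, r):
        b = min(int((xi - lo) / width), bins - 1)
        sums[b] += ri
        counts[b] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    def f(xi):
        b = min(max(int((xi - lo) / width), 0), bins - 1)
        return means[b]
    return f

def fit_gam(x1, x2, y, iters=20):
    """Backfitting: alternately smooth each term's partial residuals."""
    mean_y = sum(y) / len(y)
    f1 = f2 = lambda v: 0.0
    for _ in range(iters):
        r1 = [yi - mean_y - f2(b) for yi, b in zip(y, x2)]
        f1 = bin_smoother(x1, r1)
        r2 = [yi - mean_y - f1(a) for yi, a in zip(y, x1)]
        f2 = bin_smoother(x2, r2)
    return lambda a, b: mean_y + f1(a) + f2(b)

# Synthetic "soil" data: one smooth periodic covariate, one quadratic one.
random.seed(0)
n = 500
x1 = [random.uniform(0, 1) for _ in range(n)]
x2 = [random.uniform(0, 1) for _ in range(n)]
y = [math.sin(2 * math.pi * a) + b * b + random.gauss(0, 0.1)
     for a, b in zip(x1, x2)]
model = fit_gam(x1, x2, y)
rmse = math.sqrt(sum((model(a, b) - yi) ** 2
                     for a, b, yi in zip(x1, x2, y)) / n)
print("in-sample RMSE:", round(rmse, 2))
```

Production GAM fits (as in the study) use penalized splines rather than bin averages, but the backfitting idea is the same.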

  10. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals

    EPA Science Inventory

Protocols for terrestrial bioaccumulation assessments are far less developed than those for aquatic systems. This manuscript reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, inver...

  11. Green Algae as Model Organisms for Biological Fluid Dynamics.

    PubMed

    Goldstein, Raymond E

    2015-01-01

    In the past decade the volvocine green algae, spanning from the unicellular Chlamydomonas to multicellular Volvox, have emerged as model organisms for a number of problems in biological fluid dynamics. These include flagellar propulsion, nutrient uptake by swimming organisms, hydrodynamic interactions mediated by walls, collective dynamics and transport within suspensions of microswimmers, the mechanism of phototaxis, and the stochastic dynamics of flagellar synchronization. Green algae are well suited to the study of such problems because of their range of sizes (from 10 μm to several millimetres), their geometric regularity, the ease with which they can be cultured and the availability of many mutants that allow for connections between molecular details and organism-level behavior. This review summarizes these recent developments and highlights promising future directions in the study of biological fluid dynamics, especially in the context of evolutionary biology, that can take advantage of these remarkable organisms.

  12. Green Algae as Model Organisms for Biological Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Goldstein, Raymond E.

    2015-01-01

    In the past decade, the volvocine green algae, spanning from the unicellular Chlamydomonas to multicellular Volvox, have emerged as model organisms for a number of problems in biological fluid dynamics. These include flagellar propulsion, nutrient uptake by swimming organisms, hydrodynamic interactions mediated by walls, collective dynamics and transport within suspensions of microswimmers, the mechanism of phototaxis, and the stochastic dynamics of flagellar synchronization. Green algae are well suited to the study of such problems because of their range of sizes (from 10 μm to several millimeters), their geometric regularity, the ease with which they can be cultured, and the availability of many mutants that allow for connections between molecular details and organism-level behavior. This review summarizes these recent developments and highlights promising future directions in the study of biological fluid dynamics, especially in the context of evolutionary biology, that can take advantage of these remarkable organisms.

  13. Green Algae as Model Organisms for Biological Fluid Dynamics*

    PubMed Central

    Goldstein, Raymond E.

    2015-01-01

    In the past decade the volvocine green algae, spanning from the unicellular Chlamydomonas to multicellular Volvox, have emerged as model organisms for a number of problems in biological fluid dynamics. These include flagellar propulsion, nutrient uptake by swimming organisms, hydrodynamic interactions mediated by walls, collective dynamics and transport within suspensions of microswimmers, the mechanism of phototaxis, and the stochastic dynamics of flagellar synchronization. Green algae are well suited to the study of such problems because of their range of sizes (from 10 μm to several millimetres), their geometric regularity, the ease with which they can be cultured and the availability of many mutants that allow for connections between molecular details and organism-level behavior. This review summarizes these recent developments and highlights promising future directions in the study of biological fluid dynamics, especially in the context of evolutionary biology, that can take advantage of these remarkable organisms. PMID:26594068

  14. There Is No Simple Model of the Plasma Membrane Organization

    PubMed Central

    Bernardino de la Serna, Jorge; Schütz, Gerhard J.; Eggeling, Christian; Cebecauer, Marek

    2016-01-01

Ever since technologies enabled the characterization of eukaryotic plasma membranes, heterogeneities in the distributions of their constituents have been observed. Over the years this led to the proposal of various models describing plasma membrane organization, such as lipid shells, picket-and-fences, lipid rafts, or protein islands, addressed in numerous publications and reviews. Instead of emphasizing one model, this review gives a brief overview of current models and highlights how current experimental work, in one way or another, does not support the existence of a single overarching model. Instead, we highlight the vast variety of membrane properties and components and their influences and impacts. We believe that highlighting such controversial discoveries will stimulate unbiased research on plasma membrane organization and functionality, leading to a better understanding of this essential cellular structure. PMID:27747212

  15. The Tübingen Model-Atom Database: A Revised Aluminum Model Atom and its Application for the Spectral Analysis of White Dwarfs

    NASA Astrophysics Data System (ADS)

    Löbling, L.

    2017-03-01

Aluminum (Al) nucleosynthesis takes place during the asymptotic-giant-branch (AGB) phase of stellar evolution. Al abundance determinations in hot white dwarf stars provide constraints to understand this process. Precise abundance measurements require advanced non-local thermodynamic equilibrium (NLTE) stellar-atmosphere models and reliable atomic data. In the framework of the German Astrophysical Virtual Observatory (GAVO), the Tübingen Model-Atom Database (TMAD) contains ready-to-use model atoms for elements from hydrogen to barium. A revised, elaborated Al model atom has recently been added. We present preliminary stellar-atmosphere models and emergent Al line spectra for the hot white dwarfs G191–B2B and RE 0503–289.

  16. Model of the Dynamic Construction Process of Texts and Scaling Laws of Words Organization in Language Systems

    PubMed Central

    Li, Shan; Lin, Ruokuang; Bian, Chunhua; Ma, Qianli D. Y.

    2016-01-01

    Scaling laws characterize diverse complex systems in a broad range of fields, including physics, biology, finance, and social science. The human language is another example of a complex system of words organization. Studies on written texts have shown that scaling laws characterize the occurrence frequency of words, words rank, and the growth of distinct words with increasing text length. However, these studies have mainly concentrated on the western linguistic systems, and the laws that govern the lexical organization, structure and dynamics of the Chinese language remain not well understood. Here we study a database of Chinese and English language books. We report that three distinct scaling laws characterize words organization in the Chinese language. We find that these scaling laws have different exponents and crossover behaviors compared to English texts, indicating different words organization and dynamics of words in the process of text growth. We propose a stochastic feedback model of words organization and text growth, which successfully accounts for the empirically observed scaling laws with their corresponding scaling exponents and characteristic crossover regimes. Further, by varying key model parameters, we reproduce differences in the organization and scaling laws of words between the Chinese and English language. We also identify functional relationships between model parameters and the empirically observed scaling exponents, thus providing new insights into the words organization and growth dynamics in the Chinese and English language. PMID:28006026

  17. Model of the Dynamic Construction Process of Texts and Scaling Laws of Words Organization in Language Systems.

    PubMed

    Li, Shan; Lin, Ruokuang; Bian, Chunhua; Ma, Qianli D Y; Ivanov, Plamen Ch

    2016-01-01

    Scaling laws characterize diverse complex systems in a broad range of fields, including physics, biology, finance, and social science. The human language is another example of a complex system of words organization. Studies on written texts have shown that scaling laws characterize the occurrence frequency of words, words rank, and the growth of distinct words with increasing text length. However, these studies have mainly concentrated on the western linguistic systems, and the laws that govern the lexical organization, structure and dynamics of the Chinese language remain not well understood. Here we study a database of Chinese and English language books. We report that three distinct scaling laws characterize words organization in the Chinese language. We find that these scaling laws have different exponents and crossover behaviors compared to English texts, indicating different words organization and dynamics of words in the process of text growth. We propose a stochastic feedback model of words organization and text growth, which successfully accounts for the empirically observed scaling laws with their corresponding scaling exponents and characteristic crossover regimes. Further, by varying key model parameters, we reproduce differences in the organization and scaling laws of words between the Chinese and English language. We also identify functional relationships between model parameters and the empirically observed scaling exponents, thus providing new insights into the words organization and growth dynamics in the Chinese and English language.
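The vocabulary-growth scaling discussed in the two abstracts above (Heaps' law) can be demonstrated on a synthetic Zipfian "text": the number of distinct words V(N) grows sublinearly, roughly as k * N**beta. The vocabulary size, sample length, and resulting exponent below are illustrative, not values from the paper.

```python
import math
import random

random.seed(1)
vocab_size = 5000
# Zipf's law: sampling weight of the word with rank r is proportional to 1/r.
weights = [1.0 / r for r in range(1, vocab_size + 1)]
text = random.choices(range(vocab_size), weights=weights, k=50_000)

# Heaps' law: count distinct words seen after N tokens, at two values of N.
seen, growth = set(), {}
for i, w in enumerate(text, 1):
    seen.add(w)
    if i in (1_000, 50_000):
        growth[i] = len(seen)

# Estimate the exponent beta from the two (N, V) points on a log-log line.
beta = math.log(growth[50_000] / growth[1_000]) / math.log(50_000 / 1_000)
print("Heaps exponent ~", round(beta, 2))
```

Real corpora show crossovers in these exponents (one of the paper's findings for Chinese versus English), which a single power law like this cannot capture.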

  18. Screening and analysis of 940 organic micro-pollutants in river sediments in Vietnam using an automated identification and quantification database system for GC-MS.

    PubMed

    Duong, Hanh Thi; Kadokami, Kiwao; Pan, Shuangye; Matsuura, Naoki; Nguyen, Trung Quang

    2014-07-01

In order to obtain a detailed picture of pollution by organic micro-pollutants in Vietnamese rivers, 940 semi-volatile organic compounds in river sediments collected from four major cities were examined using a comprehensive gas chromatography-mass spectrometry database. The number of detected chemicals at each site ranged from 49 to 158 (median 96 out of 940), with 185 analytes detected at least once in the survey. The substances detected with high frequency (over 80%) and at high concentrations were n-alkanes, phthalates, sterols and PAHs. For most substances, sediments from metropolitan areas (Hanoi and Ho Chi Minh City) were more heavily contaminated than those in rural and suburban areas. Sterols were observed in nearly 100% of sediments at extremely high concentrations, suggesting that the studied rivers were contaminated by sewage. Pyrethroids (permethrin-1 and -2) were the most dominant insecticides found in inner canals of Hanoi and Ho Chi Minh City. Deltamethrin was only detected at a site in Hanoi at an elevated concentration. This indicates that pyrethroids are used for the protection of private and public health rather than for agriculture. p,p'-DDE and p,p'-DDD were the dominant members of the DDT family of chemicals detected, indicating no recent inputs of DDTs in the study areas. PCB residues were lower than those in other Asian countries, which suggests historically much lower use of PCBs in Vietnam. PAH pollution in urban areas is caused by the runoff of petroleum products and vehicle exhaust gases, whereas in rural and suburban areas the combustion of fossil fuels and biomass is the major source of PAHs. Overall, the study confirmed that rivers in Vietnam were heavily polluted, mainly by domestic wastewater.

  19. A dynamical phyllotaxis model to determine floral organ number.

    PubMed

    Kitazawa, Miho S; Fujimoto, Koichi

    2015-05-01

How organisms determine particular organ numbers is a fundamental key to the development of precise body structures; however, the developmental mechanisms underlying organ-number determination are unclear. In many eudicot plants, the primordia of sepals and petals (the floral organs) first arise sequentially at the edge of a circular, undifferentiated region called the floral meristem, and later transition into a concentric arrangement called a whorl, which includes four or five organs. The properties controlling the transition to whorls comprising particular numbers of organs are little explored. We propose a development-based model of floral organ-number determination, improving upon earlier models of plant phyllotaxis that assumed two developmental processes: the sequential initiation of primordia in the least crowded space around the meristem and the constant growth of the tip of the stem. By introducing mutual repulsion among primordia into the growth process, we numerically and analytically show that the whorled arrangement emerges spontaneously from the sequential initiation of primordia. Moreover, by allowing the strength of the inhibition exerted by each primordium to decrease as the primordium ages, we show that pentamerous whorls, in which the angular and radial positions of the primordia are consistent with those observed in sepal and petal primordia in Silene coeli-rosa, Caryophyllaceae, become the dominant arrangement. The organ number within the outermost whorl, corresponding to the sepals, takes a value of four or five in a much wider parameter space than that in which it takes a value of six or seven. These results suggest that mutual repulsion among primordia during growth and a temporal decrease in the strength of the inhibition during initiation are required for the development of the tetramerous and pentamerous whorls common in eudicots.
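The two ingredients the abstract combines, sequential initiation at the least crowded angular position and age-dependent decay of each primordium's inhibition, can be sketched as follows. The inhibition kernel, decay rate, and angular resolution are illustrative assumptions, not the parameter values studied in the paper.

```python
def next_angle(existing, step, decay=0.9, resolution=360):
    """Place the next primordium at the least-inhibited angle (degrees)."""
    best_angle, best_total = 0.0, float("inf")
    for a in range(resolution):
        theta = a * 360.0 / resolution
        total = 0.0
        for angle, born in existing:
            d = abs(theta - angle)
            d = min(d, 360.0 - d)              # angular distance on the circle
            strength = decay ** (step - born)  # older primordia inhibit less
            total += strength / (1.0 + d)      # nearer primordia inhibit more
        if total < best_total:
            best_angle, best_total = theta, total
    return best_angle

# Sequentially initiate 8 primordia; the first sits at angle 0.
primordia = [(0.0, 0)]                         # (angle, initiation step)
for step in range(1, 8):
    primordia.append((next_angle(primordia, step), step))

angles = [round(a, 1) for a, _ in primordia]
print(angles)
```

The paper's model additionally tracks radial growth of the meristem tip; this angular-only sketch just shows how "least crowded" placement plus decaying inhibition yields a deterministic sequence of divergence angles.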

  20. Accessing, using, and creating chemical property databases for computational toxicology modeling.

    PubMed

    Williams, Antony J; Ekins, Sean; Spjuth, Ola; Willighagen, Egon L

    2012-01-01

    Toxicity data is expensive to generate, is increasingly seen as precompetitive, and is frequently used for the generation of computational models in a discipline known as computational toxicology. Repositories of chemical property data are valuable for supporting computational toxicologists by providing access to data regarding potential toxicity issues with compounds as well as for the purpose of building structure-toxicity relationships and associated prediction models. These relationships use mathematical, statistical, and modeling computational approaches and can be used to understand the mechanisms by which chemicals cause harm and, ultimately, enable prediction of adverse effects of these chemicals to human health and/or the environment. Such approaches are of value as they offer an opportunity to prioritize chemicals for testing. An increasing amount of data used by computational toxicologists is being published into the public domain and, in parallel, there is a greater availability of Open Source software for the generation of computational models. This chapter provides an overview of the types of data and software available and how these may be used to produce predictive toxicology models for the community.

  1. Modeling organic matter stabilization during windrow composting of livestock effluents.

    PubMed

    Oudart, D; Paul, E; Robin, P; Paillat, J M

    2012-01-01

Composting is a complex bioprocess, requiring many empirical experiments to optimize the process. A dynamical mathematical model for the biodegradation of the organic matter during the composting process has been developed. The initial organic matter, expressed by chemical oxygen demand (COD), is decomposed into rapidly and slowly degraded compartments and an inert one. The biodegradable COD is hydrolysed and consumed by microorganisms and produces metabolic water and carbon dioxide. This model links a biochemical characterization of the organic matter by Van Soest fractionation with COD. The comparison of experimental and simulation results for carbon dioxide emission, dry matter and carbon content balance showed good correlation. The initial sizes of the biodegradable COD compartments are explained by the soluble, hemicellulose-like and lignin fractions. Their sizes influence the amplitude of the carbon dioxide emission peak. The initial biomass is also a sensitive variable, influencing the time at which the emission peak occurs.
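The compartment structure described above can be sketched as a simple first-order model: rapidly and slowly biodegradable COD pools are hydrolysed, and a fixed fraction of the degraded COD leaves as CO2 while the remainder is taken to go to biomass. Rate constants, pool sizes, and the respired fraction below are illustrative assumptions, not the paper's calibrated values.

```python
def simulate(cod_rapid, cod_slow, k_rapid=0.25, k_slow=0.02,
             respired_fraction=0.6, days=60, dt=0.1):
    """Explicit-Euler integration; returns daily cumulative CO2 (in COD
    units) and the biodegradable COD remaining in the two pools."""
    co2, daily_co2 = 0.0, []
    steps_per_day = round(1 / dt)
    for _ in range(days):
        for _ in range(steps_per_day):
            d_rapid = k_rapid * cod_rapid * dt   # first-order hydrolysis
            d_slow = k_slow * cod_slow * dt
            cod_rapid -= d_rapid
            cod_slow -= d_slow
            co2 += respired_fraction * (d_rapid + d_slow)  # rest -> biomass
        daily_co2.append(co2)
    return daily_co2, cod_rapid + cod_slow

emitted, remaining = simulate(cod_rapid=40.0, cod_slow=60.0)
print("cumulative CO2 (COD units):", round(emitted[-1], 1),
      "| undegraded COD:", round(remaining, 1))
```

Note how the rapid pool controls the early emission peak and the slow pool the long tail, mirroring the sensitivity the abstract reports for the compartment sizes.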

  2. An Ontology for Modeling Complex Inter-relational Organizations

    NASA Astrophysics Data System (ADS)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. First, the paper reviews the enterprise-ontology literature to position our proposal and to expose its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram provides an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  3. A two-site bipolaron model for organic magnetoresistance

    NASA Astrophysics Data System (ADS)

    Wagemans, W.; Bloom, F. L.; Bobbert, P. A.; Wohlgenannt, M.; Koopmans, B.

    2008-04-01

    The recently proposed bipolaron model for large "organic magnetoresistance" (OMAR) at room temperature is extended to an analytically solvable two-site scheme. It is shown that even this extremely simplified approach reproduces some of the key features of OMAR, viz., the possibility to have both positive and negative magnetoresistance, as well as its universal line shapes. Specific behavior and limiting cases are discussed. Extensions of the model, to guide future experiments and numerical Monte Carlo studies, are suggested.

  4. The NorWeST Stream Temperature Database, Model, and Climate Scenarios for the Northwest U.S. (Invited)

    NASA Astrophysics Data System (ADS)

    Isaak, D.; Wenger, S.; Peterson, E.; Ver Hoef, J.; Luce, C.; Hostetler, S. W.; Kershner, J.; Dunham, J.; Nagel, D.; Roper, B.

    2013-12-01

Anthropogenic climate change is warming the Earth's rivers and streams and threatens significant changes to aquatic biodiversity. Effective threat response will require prioritization of limited conservation resources and coordinated interagency efforts guided by accurate information about climate, and climate change, at scales relevant to the distributions of species across landscapes. Here, we describe the NorWeST (i.e., NorthWest Stream Temperature) project to develop a comprehensive interagency stream temperature database and high-resolution climate scenarios across Washington, Oregon, Idaho, Montana, and Wyoming (~400,000 stream kilometers). The NorWeST database consists of stream temperature data contributed by >60 state, federal, tribal, and private resource agencies and may be the largest of its kind in the world (>45,000,000 hourly temperature recordings at >15,000 unique monitoring sites). These data are being used with spatial statistical network models to accurately downscale (R2 = 0.90; RMSE < 1 °C) global climate patterns to all perennially flowing reaches within river networks at 1-kilometer resolution. Historic stream temperature scenarios are developed using air temperature data from RegCM3 runs for the NCEP historical reanalysis, and future scenarios (2040s and 2080s) are developed by applying bias-corrected air temperature and discharge anomalies from ensemble climate and hydrology model runs for A1B and A2 warming trajectories. At present, stream temperature climate scenarios have been developed for 230,000 stream kilometers across Idaho and western Montana using data from more than 7,000 monitoring sites. The raw temperature data and stream climate scenarios are made available as ArcGIS geospatial products for download through the NorWeST website as individual river basins are completed (http://www.fs.fed.us/rm/boise/AWAE/projects/NorWeST.shtml). By providing open access to temperature data and scenarios, the project is fostering new research on

  5. An Integrated Model for Effective Knowledge Management in Chinese Organizations

    ERIC Educational Resources Information Center

    An, Xiaomi; Deng, Hepu; Wang, Yiwen; Chao, Lemen

    2013-01-01

Purpose: The purpose of this paper is to provide organizations in the Chinese cultural context with a conceptual model for an integrated adoption of existing knowledge management (KM) methods and to improve the effectiveness of their KM activities. Design/methodology/approach: A comparative analysis is conducted between China and the western…

  6. A Process Model for the Comprehension of Organic Chemistry Notation

    ERIC Educational Resources Information Center

    Havanki, Katherine L.

    2012-01-01

    This dissertation examines the cognitive processes individuals use when reading organic chemistry equations and factors that affect these processes, namely, visual complexity of chemical equations and participant characteristics (expertise, spatial ability, and working memory capacity). A six stage process model for the comprehension of organic…

  7. Promoting Representational Competence with Molecular Models in Organic Chemistry

    ERIC Educational Resources Information Center

    Stull, Andrew T.; Gainer, Morgan; Padalkar, Shamin; Hegarty, Mary

    2016-01-01

    Mastering the many different diagrammatic representations of molecules used in organic chemistry is challenging for students. This article summarizes recent research showing that manipulating 3-D molecular models can facilitate the understanding and use of these representations. Results indicate that students are more successful in translating…

  8. Modeling emissions of volatile organic compounds from silage

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Volatile organic compounds (VOCs), necessary reactants for photochemical smog formation, are emitted from numerous sources. Limited available data suggest that dairy farms emit VOCs with cattle feed, primarily silage, being the primary source. Process-based models of VOC transfer within and from si...

  9. Analytical modeling of organic solar cells and photodiodes

    NASA Astrophysics Data System (ADS)

    Altazin, S.; Clerc, R.; Gwoziecki, R.; Pananakakis, G.; Ghibaudo, G.; Serbutoviez, C.

    2011-10-01

An analytical and physically based expression for the organic solar cell I-V characteristic under dark and illuminated conditions has been derived. The model has been found to be in very good agreement with both experimental data and drift-diffusion numerical simulations accounting for the coupling with the Poisson equation and optical propagation.

  10. Waste Reduction Model (WARM) Resources for Small Businesses and Organizations

    EPA Pesticide Factsheets

    This page provides a brief overview of how EPA’s Waste Reduction Model (WARM) can be used by small businesses and organizations. The page includes a brief summary of uses of WARM for the audience and links to other resources.

  11. Supramolecular organization of functional organic materials in the bulk and at organic/organic interfaces: a modeling and computer simulation approach.

    PubMed

    Muccioli, Luca; D'Avino, Gabriele; Berardi, Roberto; Orlandi, Silvia; Pizzirusso, Antonio; Ricci, Matteo; Roscioni, Otello Maria; Zannoni, Claudio

    2014-01-01

The molecular organization of functional organic materials is one of the research areas where the combination of theoretical modeling and experimental determinations is most fruitful. Here we present a brief summary of the simulation approaches used to investigate the inner structure of organic materials with semiconducting behavior, paying special attention to applications in organic photovoltaics and clarifying the often obscure jargon hindering the access of newcomers to the literature of the field. Particular attention is paid to the choice of the computational "engine" (Monte Carlo or Molecular Dynamics) used to generate equilibrium configurations of the molecular system under investigation and, more importantly, to the choice of the chemical details in describing the molecular interactions. Recent literature dealing with the simulation of organic semiconductors is critically reviewed in order of increasing complexity of the system studied, from low molecular weight molecules to semiflexible polymers, including the challenging problem of determining the morphology of heterojunctions between two different materials.

  12. Lamination of organic solar cells and organic light emitting devices: Models and experiments

    SciTech Connect

    Oyewole, O. K.; Yu, D.; Du, J.; Asare, J.; Fashina, A.; Anye, V. C.; Zebaze Kana, M. G.; Soboyejo, W. O.

    2015-08-21

    In this paper, a combined experimental, computational, and analytical approach is used to provide new insights into the lamination of organic solar cells and light emitting devices at macro- and micro-scales. First, the effects of applied lamination force (on contact between the laminated layers) are studied. The crack driving forces associated with the interfacial cracks (at the bi-material interfaces) are estimated along with the critical interfacial crack driving forces associated with the separation of thin films, after layer transfer. The conditions for successful lamination are predicted using a combination of experiments and computational models. Guidelines are developed for the lamination of low-cost organic electronic structures.
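    The record does not give the authors' expressions for the crack driving forces; for orientation, the steady-state energy release rate commonly used for delamination of a residually stressed thin film (a standard plane-strain result, not necessarily the form used in this paper) is

```latex
G_{ss} = \frac{(1-\nu^{2})\,\sigma^{2} h}{2E},
```

where $\sigma$ is the film stress, $h$ the film thickness, $E$ Young's modulus, and $\nu$ Poisson's ratio. Successful lamination requires the interfacial crack driving force to remain below the interfacial toughness, $G < \Gamma$.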

  13. A general framework for modelling the vertical organic matter profile in mineral and organic soils

    NASA Astrophysics Data System (ADS)

    Braakhekke, Maarten; Ahrens, Bernhard

    2016-04-01

    The vertical distribution of soil organic matter (SOM) within the mineral soil and surface organic layer is an important property of terrestrial ecosystems that affects carbon and nutrient cycling and soil heat and moisture transport. The overwhelming majority of models of SOM dynamics are zero-dimensional, i.e., they do not resolve heterogeneity of SOM concentration along the vertical profile. In recent years, however, a number of new vertically explicit SOM models, or vertically explicit versions of existing models, have been published. These models describe SOM in units of concentration (mass per unit volume) by means of a reactive-transport model that includes diffusion and/or advection terms for SOM transport, and they vertically resolve SOM inputs and the factors that influence decomposition. An important assumption behind these models is that the volume of soil elements is constant over time, i.e., not affected by SOM dynamics. This assumption only holds if the SOM content is negligible compared to the mineral content. When this is not the case, SOM input or loss in a soil element may cause a change in the volume of the element rather than a change in SOM concentration. Furthermore, these volume changes can cause vertical shifts of material relative to the surface. This generally causes material in an organic layer to gradually move downward, even in the absence of mixing processes. Since the classical reactive-transport model of the SOM profile can only be applied to the mineral soil, the surface organic layer is usually either treated separately or not explicitly considered. We present a new and elegant framework that treats the surface organic layer and mineral soil as one continuous whole. It explicitly accounts for volume changes due to SOM dynamics and changes in bulk density. The vertical shifts resulting from these volume changes are included in an Eulerian representation as an additional advective transport flux. Our approach offers a more elegant and realistic
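    The reactive-transport formulation described above amounts to a one-dimensional balance, dC/dt = D ∂²C/∂z² − v ∂C/∂z − kC + I(z). A minimal explicit finite-difference update of that balance is sketched below (illustrative only; the grid, boundary treatment, and parameters are assumptions, not the authors' scheme):

```python
def som_step(C, dz, dt, D, v, k, inputs):
    """One explicit Euler step of dC/dt = D*d2C/dz2 - v*dC/dz - k*C + I(z)
    on a uniform vertical grid.  Simplified copy-value boundaries; upwind
    advection assuming downward transport (v >= 0)."""
    n = len(C)
    new = [0.0] * n
    for i in range(n):
        up = C[i - 1] if i > 0 else C[i]       # node above (toward surface)
        dn = C[i + 1] if i < n - 1 else C[i]   # node below
        diffusion = D * (up - 2.0 * C[i] + dn) / dz**2
        advection = -v * (C[i] - up) / dz      # first-order upwind difference
        new[i] = C[i] + dt * (diffusion + advection - k * C[i] + inputs[i])
    return new
```

Note that an explicit scheme like this is only stable for small time steps (roughly dt ≤ dz²/(2D)); the framework in the paper additionally advects material to represent the volume-change shifts, which this sketch does not include.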

  14. Integrating heterogeneous databases in clustered medical care environments using object-oriented technology

    NASA Astrophysics Data System (ADS)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

    The organization of modern medical care environments into disease-related clusters, such as a cancer center or a diabetes clinic, has the side effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into a single integrated conceptual global model. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without compromising the autonomy of the underlying databases.
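    The mapping from site-specific schemas to an integrated global model can be sketched as follows (a hypothetical illustration of the general idea; all schema and field names are invented, not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class GlobalPatient:
    """Integrated conceptual view over site-specific records (fields hypothetical)."""
    patient_id: str
    name: str
    diagnosis: str

def from_cancer_center(rec: dict) -> GlobalPatient:
    # Assumed cancer-center schema: 'mrn' / 'full_name' / 'dx'.
    return GlobalPatient(rec["mrn"], rec["full_name"], rec["dx"])

def from_diabetes_clinic(rec: dict) -> GlobalPatient:
    # Assumed diabetes-clinic schema: 'id' / 'patient' / 'condition'.
    return GlobalPatient(rec["id"], rec["patient"], rec["condition"])
```

Each site keeps its own schema (preserving autonomy), while queries against the global view go through per-site mapping functions such as these.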

  15. A Conceptual Model for Describing Processes of Crop Improvement in Database Structures

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Rising research costs, broadening research goals, intellectual property rights, and other concerns have increased the need for robust approaches to manage data from crop improvement. In developing the International Crop Information System (ICIS), a flexible data model was developed to allow any conc...

  16. A Model Public School, Data-Based Early Education Program for Rural Handicapped Children. Final Report.

    ERIC Educational Resources Information Center

    Cone, John D.

    The document contains the final report of a model educational program for handicapped preschoolers in Preston County (West Virginia). Section I offers the history of the project and background information on the geographic location, operating characteristics, children served, and staffing pattern. Section II outlines the original project…

  17. Fast decision tree-based method to index large DNA-protein sequence databases using hybrid distributed-shared memory programming model.

    PubMed

    Jaber, Khalid Mohammad; Abdullah, Rosni; Rashid, Nur'Aini Abdul

    2014-01-01

    In recent times, the size of biological