Science.gov

Sample records for model organism database

  1. Curation accuracy of model organism databases.

    PubMed

    Keseler, Ingrid M; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C; Mladinich, Katherine M; Chow, Edmond D; Sherlock, Gavin; Karp, Peter D

    2014-01-01

    Manual extraction of information from the biomedical literature, or biocuration, is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for it. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org/ PMID:24923819
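
    As a quick illustration of the arithmetic behind the figures above, the sketch below recomputes the overall error rate from the reported counts (10 errors among 633 validated assertions) and adds an approximate 95% Wilson confidence interval; the interval is an added illustration, not a figure from the original study, and the per-database assertion counts are not given in the abstract.

    ```python
    # Recompute the overall biocuration error rate reported above
    # (10 errors among 633 validated assertions).
    from statistics import NormalDist

    errors, validated = 10, 633
    rate = errors / validated
    print(f"overall error rate: {rate:.2%}")  # ~1.58%

    # Approximate 95% Wilson confidence interval (an added illustration,
    # not a figure from the original study).
    z = NormalDist().inv_cdf(0.975)
    denom = 1 + z**2 / validated
    centre = (rate + z**2 / (2 * validated)) / denom
    half = z * ((rate * (1 - rate) / validated + z**2 / (4 * validated**2)) ** 0.5) / denom
    print(f"95% CI: [{centre - half:.2%}, {centre + half:.2%}]")
    ```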

  2. BeetleBase: the model organism database for Tribolium castaneum.

    PubMed

    Wang, Liangjiang; Wang, Suzhi; Li, Yonghua; Paradesi, Martin S R; Brown, Susan J

    2007-01-01

    BeetleBase (http://www.bioinformatics.ksu.edu/BeetleBase/) is an integrated resource for the Tribolium research community. The red flour beetle (Tribolium castaneum) is an important model organism for genetics, developmental biology, toxicology and comparative genomics, the genome of which has recently been sequenced. BeetleBase is constructed to integrate the genomic sequence data with information about genes, mutants, genetic markers, expressed sequence tags and publications. BeetleBase uses the Chado data model and software components developed by the Generic Model Organism Database (GMOD) project. This strategy not only reduces the time required to develop the database query tools but also makes the data structure of BeetleBase compatible with that of other model organism databases. BeetleBase will be useful to the Tribolium research community for genome annotation as well as comparative genomics. PMID:17090595

  3. Xenbase: expansion and updates of the Xenopus model organism database

    PubMed Central

    James-Zorn, Christina; Ponferrada, Virgilio G.; Jarabek, Chris J.; Burns, Kevin A.; Segerdell, Erik J.; Lee, Jacqueline; Snyder, Kevin; Bhattacharyya, Bishnu; Karpinka, J. Brad; Fortriede, Joshua; Bowes, Jeff B.; Zorn, Aaron M.; Vize, Peter D.

    2013-01-01

    Xenbase (http://www.xenbase.org) is a model organism database that provides genomic, molecular, cellular and developmental biology content to biomedical researchers working with the frog Xenopus, and makes Xenopus data accessible to workers using other model organisms. As an amphibian, Xenopus serves as a useful evolutionary bridge between invertebrates and more complex vertebrates such as birds and mammals. Xenbase content is collated from a variety of external sources using automated and semi-automated pipelines, then processed via a combination of automated and manual annotation. A link-matching system allows the wide variety of synonyms used to describe unique biological features, such as a gene or an anatomical entity, to be used by the database in an equivalent manner. Recent updates to the database include the Xenopus laevis genome, a new Xenopus tropicalis genome build, epigenomic data, collections of RNA and protein sequences associated with genes, more powerful gene expression searches, a community and curated wiki, an extensive set of manually annotated gene expression patterns and a new database module that contains data on over 700 antibodies that are useful for exploring Xenopus cell and developmental biology. PMID:23125366

  4. ZFIN: enhancements and updates to the zebrafish model organism database

    PubMed Central

    Bradford, Yvonne; Conlin, Tom; Dunn, Nathan; Fashena, David; Frazer, Ken; Howe, Douglas G.; Knight, Jonathan; Mani, Prita; Martin, Ryan; Moxon, Sierra A. T.; Paddock, Holly; Pich, Christian; Ramachandran, Sridhar; Ruef, Barbara J.; Ruzicka, Leyla; Bauer Schaper, Holle; Schaper, Kevin; Shao, Xiang; Singer, Amy; Sprague, Judy; Sprunger, Brock; Van Slyke, Ceri; Westerfield, Monte

    2011-01-01

    ZFIN, the Zebrafish Model Organism Database, http://zfin.org, serves as the central repository and web-based resource for zebrafish genetic, genomic, phenotypic and developmental data. ZFIN manually curates comprehensive data for zebrafish genes, phenotypes, genotypes, gene expression, antibodies, anatomical structures and publications. A wide-ranging collection of web-based search forms and tools facilitates access to integrated views of these data promoting analysis and scientific discovery. Data represented in ZFIN are derived from three primary sources: curation of zebrafish publications, individual research laboratories and collaborations with bioinformatics organizations. Data formats include text, images and graphical representations. ZFIN is a dynamic resource with data added daily as part of our ongoing curation process. Software updates are frequent. Here, we describe recent additions to ZFIN including (i) enhanced access to images, (ii) genomic features, (iii) genome browser, (iv) transcripts, (v) antibodies and (vi) a community wiki for protocols and antibodies. PMID:21036866

  5. ZFIN, the Zebrafish Model Organism Database: updates and new directions

    PubMed Central

    Ruzicka, Leyla; Bradford, Yvonne M.; Frazer, Ken; Howe, Douglas G.; Paddock, Holly; Ramachandran, Sridhar; Singer, Amy; Toro, Sabrina; Van Slyke, Ceri E.; Eagle, Anne E.; Fashena, David; Kalita, Patrick; Knight, Jonathan; Mani, Prita; Martin, Ryan; Moxon, Sierra A. T.; Pich, Christian; Schaper, Kevin; Shao, Xiang; Westerfield, Monte

    2015-01-01

    The Zebrafish Model Organism Database (ZFIN; http://zfin.org) is the central resource for genetic and genomic data from zebrafish (Danio rerio) research. ZFIN staff curate detailed information about genes, mutants, genotypes, reporter lines, sequences, constructs, antibodies, knockdown reagents, expression patterns, phenotypes, gene product function, and orthology from publications. Researchers can submit mutant, transgenic, expression, and phenotype data directly to ZFIN and use the ZFIN Community Wiki to share antibody and protocol information. Data can be accessed through topic-specific searches, a new site-wide search, and the data-mining resource ZebrafishMine (http://zebrafishmine.org). Data download and web service options are also available. ZFIN collaborates with major bioinformatics organizations to verify and integrate genomic sequence data, provide nomenclature support, establish reciprocal links and participate in the development of standardized structured vocabularies (ontologies) used for data annotation and searching. ZFIN-curated gene, function, expression, and phenotype data are available for comparative exploration at several multi-species resources. The use of zebrafish as a model for human disease is increasing. ZFIN is supporting this growing area with three major projects: adding easy access to computed orthology data from gene pages, curating details of the gene expression pattern changes in mutant fish, and curating zebrafish models of human diseases. PMID:26097180

  6. ZFIN, The zebrafish model organism database: Updates and new directions.

    PubMed

    Ruzicka, Leyla; Bradford, Yvonne M; Frazer, Ken; Howe, Douglas G; Paddock, Holly; Ramachandran, Sridhar; Singer, Amy; Toro, Sabrina; Van Slyke, Ceri E; Eagle, Anne E; Fashena, David; Kalita, Patrick; Knight, Jonathan; Mani, Prita; Martin, Ryan; Moxon, Sierra A T; Pich, Christian; Schaper, Kevin; Shao, Xiang; Westerfield, Monte

    2015-08-01

    The Zebrafish Model Organism Database (ZFIN; http://zfin.org) is the central resource for genetic and genomic data from zebrafish (Danio rerio) research. ZFIN staff curate detailed information about genes, mutants, genotypes, reporter lines, sequences, constructs, antibodies, knockdown reagents, expression patterns, phenotypes, gene product function, and orthology from publications. Researchers can submit mutant, transgenic, expression, and phenotype data directly to ZFIN and use the ZFIN Community Wiki to share antibody and protocol information. Data can be accessed through topic-specific searches, a new site-wide search, and the data-mining resource ZebrafishMine (http://zebrafishmine.org). Data download and web service options are also available. ZFIN collaborates with major bioinformatics organizations to verify and integrate genomic sequence data, provide nomenclature support, establish reciprocal links, and participate in the development of standardized structured vocabularies (ontologies) used for data annotation and searching. ZFIN-curated gene, function, expression, and phenotype data are available for comparative exploration at several multi-species resources. The use of zebrafish as a model for human disease is increasing. ZFIN is supporting this growing area with three major projects: adding easy access to computed orthology data from gene pages, curating details of the gene expression pattern changes in mutant fish, and curating zebrafish models of human diseases. PMID:26097180

  7. Choosing a Genome Browser for a Model Organism Database (MOD): Surveying the Maize Community

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As sequencing of the maize genome nears completion, the Maize Genetics and Genomics Database (MaizeGDB), the model organism database for maize, has integrated a genome browser into its existing Web interface and database. The addition of the MaizeGDB Genome Browser to MaizeGDB will allow it ...

  8. Model organism databases: essential resources that need the support of both funders and users.

    PubMed

    Oliver, Stephen G; Lock, Antonia; Harris, Midori A; Nurse, Paul; Wood, Valerie

    2016-01-01

    Modern biomedical research depends critically on access to databases that house and disseminate genetic, genomic, molecular, and cell biological knowledge. Even as the explosion of available genome sequences and associated genome-scale data continues apace, the sustainability of professionally maintained biological databases is under threat due to policy changes by major funding agencies. Here, we focus on model organism databases to demonstrate the myriad ways in which biological databases not only act as repositories but actively facilitate advances in research. We present data that show that reducing financial support to model organism databases could prove to be not just scientifically, but also economically, unsound. PMID:27334346

  9. MaizeGDB update: New tools, data, and interface for the maize model organism database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, ...

  10. MaizeGDB, the maize model organism database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    MaizeGDB is the maize research community's database for maize genetic and genomic information. In this seminar I will outline our current endeavors including a full website redesign, the status of maize genome assembly and annotation projects, and work toward genome functional annotation. Mechanis...

  11. Using semantic data modeling techniques to organize an object-oriented database for extending the mass storage model

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik

    1991-01-01

    A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.

  12. The Generic Genome Browser: A Building Block for a Model Organism System Database

    PubMed Central

    Stein, Lincoln D.; Mungall, Christopher; Shu, ShengQiang; Caudy, Michael; Mangone, Marco; Day, Allen; Nickerson, Elizabeth; Stajich, Jason E.; Harris, Todd W.; Arva, Adrian; Lewis, Suzanna

    2002-01-01

    The Generic Model Organism System Database Project (GMOD) seeks to develop reusable software components for model organism system databases. In this paper we describe the Generic Genome Browser (GBrowse), a Web-based application for displaying genomic annotations and other features. For the end user, features of the browser include the ability to scroll and zoom through arbitrary regions of a genome, to enter a region of the genome by searching for a landmark or performing a full-text search of all features, and to enable and disable tracks and change their relative order and appearance. The user can upload private annotations to view them in the context of the public ones, and publish those annotations to the community. For the data provider, features of the browser software include reliance on readily available open source components, simple installation, flexible configuration, and easy integration with other components of a model organism system Web site. GBrowse is freely available under an open source license. The software, its documentation, and support are available at http://www.gmod.org. PMID:12368253
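
    To make the region-query idea concrete, here is a minimal, generic sketch (not GBrowse's own code) of how annotations in a GFF3 file could be filtered to a requested landmark region before display; the file name and region are hypothetical examples.

    ```python
    # Generic illustration of the kind of region query a genome browser such as
    # GBrowse performs: read features from a GFF3 file and keep those that
    # overlap a requested region. "annotations.gff3" and the region below are
    # hypothetical examples, not files or defaults shipped with GBrowse.
    import csv

    def features_in_region(gff3_path, seqid, start, end):
        hits = []
        with open(gff3_path, newline="") as fh:
            for row in csv.reader(fh, delimiter="\t"):
                if not row or row[0].startswith("#") or len(row) < 9:
                    continue
                f_seqid, f_type = row[0], row[2]
                f_start, f_end = int(row[3]), int(row[4])
                if f_seqid == seqid and f_start <= end and f_end >= start:
                    hits.append((f_type, f_start, f_end, row[8]))
        return hits

    for feature in features_in_region("annotations.gff3", "chr2L", 10_000, 60_000):
        print(feature)
    ```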

  13. MaizeGDB update: new tools, data and interface for the maize model organism database

    PubMed Central

    Andorf, Carson M.; Cannon, Ethalinda K.; Portwood, John L.; Gardiner, Jack M.; Harper, Lisa C.; Schaeffer, Mary L.; Braun, Bremen L.; Campbell, Darwin A.; Vinnakota, Abhinav G.; Sribalusu, Venktanaga V.; Huerta, Miranda; Cho, Kyoung Tak; Wimalanathan, Kokulapalan; Richter, Jacqueline D.; Mauch, Emily D.; Rao, Bhavani S.; Birkett, Scott M.; Sen, Taner Z.; Lawrence-Dill, Carolyn J.

    2016-01-01

    MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, the original maize genetics database MaizeDB was created. In 2003, the combined contents of MaizeDB and the sequence data from ZmDB were made accessible as a single resource named MaizeGDB. Over the next decade, MaizeGDB became more sequence driven while still maintaining traditional maize genetics datasets. This enabled the project to meet the continually growing and evolving needs of the maize research community, yet the interface and underlying infrastructure remained unchanged. In 2015, the MaizeGDB team completed a multi-year effort to update the MaizeGDB resource by reorganizing existing data, upgrading hardware and infrastructure, creating new tools, incorporating new data types (including diversity data, expression data, gene models, and metabolic pathways), and developing and deploying a modern interface. In addition to coordinating a data resource, the MaizeGDB team coordinates activities and provides technical support to the maize research community. MaizeGDB is accessible online at http://www.maizegdb.org. PMID:26432828

  14. MaizeGDB update: new tools, data and interface for the maize model organism database.

    PubMed

    Andorf, Carson M; Cannon, Ethalinda K; Portwood, John L; Gardiner, Jack M; Harper, Lisa C; Schaeffer, Mary L; Braun, Bremen L; Campbell, Darwin A; Vinnakota, Abhinav G; Sribalusu, Venktanaga V; Huerta, Miranda; Cho, Kyoung Tak; Wimalanathan, Kokulapalan; Richter, Jacqueline D; Mauch, Emily D; Rao, Bhavani S; Birkett, Scott M; Sen, Taner Z; Lawrence-Dill, Carolyn J

    2016-01-01

    MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, the original maize genetics database MaizeDB was created. In 2003, the combined contents of MaizeDB and the sequence data from ZmDB were made accessible as a single resource named MaizeGDB. Over the next decade, MaizeGDB became more sequence driven while still maintaining traditional maize genetics datasets. This enabled the project to meet the continually growing and evolving needs of the maize research community, yet the interface and underlying infrastructure remained unchanged. In 2015, the MaizeGDB team completed a multi-year effort to update the MaizeGDB resource by reorganizing existing data, upgrading hardware and infrastructure, creating new tools, incorporating new data types (including diversity data, expression data, gene models, and metabolic pathways), and developing and deploying a modern interface. In addition to coordinating a data resource, the MaizeGDB team coordinates activities and provides technical support to the maize research community. MaizeGDB is accessible online at http://www.maizegdb.org. PMID:26432828

  15. Protein Model Database

    SciTech Connect

    Fidelis, K; Adzhubej, A; Kryshtafovych, A; Daniluk, P

    2005-02-23

    The phenomenal success of the genome sequencing projects reveals the power of completeness in revolutionizing biological science. Currently it is possible to sequence entire organisms at a time, allowing for a systemic rather than fractional view of their organization and the various genome-encoded functions. There is an international plan to move towards a similar goal in the area of protein structure. This will not be achieved by experiment alone, but rather by a combination of efforts in crystallography, NMR spectroscopy, and computational modeling. Only a small fraction of structures are expected to be determined experimentally; the remainder will be modeled. Presently there is no organized infrastructure to critically evaluate and present these data to the biological community. The goal of the Protein Model Database project is to create such an infrastructure, including (1) a public database of theoretically derived protein structures; (2) reliable annotation of protein model quality; (3) novel structure analysis tools; and (4) access to the highest quality modeling techniques available.

  16. MyMpn: a database for the systems biology model organism Mycoplasma pneumoniae.

    PubMed

    Wodke, Judith A H; Alibés, Andreu; Cozzuto, Luca; Hermoso, Antonio; Yus, Eva; Lluch-Senar, Maria; Serrano, Luis; Roma, Guglielmo

    2015-01-01

    MyMpn (http://mympn.crg.eu) is an online resource devoted to studying the human pathogen Mycoplasma pneumoniae, a minimal bacterium causing lower respiratory tract infections. Due to its small size, its ability to grow in vitro, and the amount of data produced over the past decades, M. pneumoniae is an interesting model organism for the development of systems biology approaches for unicellular organisms. Our database hosts a wealth of omics-scale datasets generated by hundreds of experimental and computational analyses. These include data obtained from gene expression profiling experiments, gene essentiality studies, protein abundance profiling, protein complex analysis, metabolic reactions and network modeling, cell growth experiments, comparative genomics and 3D tomography. In addition, the intuitive web interface provides access to several visualization and analysis tools as well as to different data search options. The availability and, even more relevant, the accessibility of properly structured and organized data are of utmost importance when aiming to understand the biology of an organism on a global scale. Therefore, MyMpn constitutes a unique and valuable new resource for the large systems biology and microbiology community. PMID:25378328

  17. SubtiWiki 2.0--an integrated database for the model organism Bacillus subtilis.

    PubMed

    Michna, Raphael H; Zhu, Bingyao; Mäder, Ulrike; Stülke, Jörg

    2016-01-01

    To understand living cells, we need knowledge of each of their parts as well as of the interactions between these parts. To gain rapid and comprehensive access to this information, annotation databases are required. Here, we present SubtiWiki 2.0, the integrated database for the model bacterium Bacillus subtilis (http://subtiwiki.uni-goettingen.de/). SubtiWiki provides text-based access to published information about the genes and proteins of B. subtilis as well as presentations of metabolic and regulatory pathways. Moreover, manually curated protein-protein interaction diagrams are linked to the protein pages. Finally, expression data are shown for gene expression under 104 different conditions as well as absolute protein quantification for cytoplasmic proteins. To facilitate mobile use of SubtiWiki, we have now extended it with Apps that are available for iOS and Android devices. Importantly, the Apps allow users to link private notes and pictures to the gene/protein pages. Today, SubtiWiki has become one of the most complete collections of knowledge on a living organism in one single resource. PMID:26433225

  18. ZFIN, the Zebrafish Model Organism Database: increased support for mutants and transgenics.

    PubMed

    Howe, Douglas G; Bradford, Yvonne M; Conlin, Tom; Eagle, Anne E; Fashena, David; Frazer, Ken; Knight, Jonathan; Mani, Prita; Martin, Ryan; Moxon, Sierra A Taylor; Paddock, Holly; Pich, Christian; Ramachandran, Sridhar; Ruef, Barbara J; Ruzicka, Leyla; Schaper, Kevin; Shao, Xiang; Singer, Amy; Sprunger, Brock; Van Slyke, Ceri E; Westerfield, Monte

    2013-01-01

    ZFIN, the Zebrafish Model Organism Database (http://zfin.org), is the central resource for zebrafish genetic, genomic, phenotypic and developmental data. ZFIN curators manually curate and integrate comprehensive data involving zebrafish genes, mutants, transgenics, phenotypes, genotypes, gene expressions, morpholinos, antibodies, anatomical structures and publications. Integrated views of these data, as well as data gathered through collaborations and data exchanges, are provided through a wide selection of web-based search forms. Among the vertebrate model organisms, zebrafish are uniquely well suited for rapid and targeted generation of mutant lines. The recent rapid production of mutants and transgenic zebrafish is making management of data associated with these resources particularly important to the research community. Here, we describe recent enhancements to ZFIN aimed at improving our support for mutant and transgenic lines, including (i) enhanced mutant/transgenic search functionality; (ii) more expressive phenotype curation methods; (iii) new downloads files and archival data access; (iv) incorporation of new data loads from laboratories undertaking large-scale generation of mutant or transgenic lines and (v) new GBrowse tracks for transgenic insertions, genes with antibodies and morpholinos. PMID:23074187

  19. Xenbase, the Xenopus model organism database; new virtualized system, data types and genomes

    PubMed Central

    Karpinka, J. Brad; Fortriede, Joshua D.; Burns, Kevin A.; James-Zorn, Christina; Ponferrada, Virgilio G.; Lee, Jacqueline; Karimi, Kamran; Zorn, Aaron M.; Vize, Peter D.

    2015-01-01

    Xenbase (http://www.xenbase.org), the Xenopus frog model organism database, integrates a wide variety of data from this biomedical model genus. Two closely related species are represented: the allotetraploid Xenopus laevis, which is widely used for microinjection and tissue explant-based protocols, and the diploid Xenopus tropicalis, which is used for genetics and gene targeting. The two species are extremely similar, and protocols, reagents and results from each species are often interchangeable. Xenbase imports, indexes, curates and manages data from both species, all of which are mapped via unique IDs and can be queried in either a species-specific or species-agnostic manner. All our services have now migrated to a private cloud to achieve better performance and reliability. We have added new content, including full support for morpholino reagents, which are used to inhibit mRNA translation or splicing and to bind regulatory microRNAs. New genome assemblies generated by the JGI for both species are displayed in GBrowse and are also available for searches using BLAST. Researchers can easily navigate from genome content to gene page reports, literature, experimental reagents and many other features using hyperlinks. Xenbase has also greatly expanded image content for figures published in papers describing Xenopus research via PubMed Central. PMID:25313157

  20. Developing a biocuration workflow for AgBase, a non-model organism database

    PubMed Central

    Pillai, Lakshmi; Chouvarine, Philippe; Tudor, Catalina O.; Schmidt, Carl J.; Vijay-Shanker, K.; McCarthy, Fiona M.

    2012-01-01

    AgBase provides annotation for agricultural gene products using the Gene Ontology (GO) and Plant Ontology, as appropriate. Unlike model organism species, agricultural species have a body of literature that does not just focus on gene function; to improve efficiency, we use text mining to identify literature for curation. The first component of our annotation interface is the gene prioritization interface, which ranks gene products for annotation. Biocurators select the top-ranked gene and mark annotation for these genes as ‘in progress’ or ‘completed’; links enable biocurators to move directly to our biocuration interface (BI). Our BI includes all current GO annotation for gene products and is the main interface used to add or modify AgBase curation data. The BI also displays Extracting Genic Information from Text (eGIFT) results for each gene product. eGIFT is a web-based, text-mining tool that associates ranked, informative terms (iTerms), and the articles and sentences containing them, with genes. Moreover, iTerms are linked to GO terms where they match either a GO term name or a synonym. This enables AgBase biocurators to rapidly identify literature for further curation based on possible GO terms. Because most agricultural species do not have standardized literature, eGIFT searches all gene names and synonyms to associate articles with genes. As many of the gene names can be ambiguous, eGIFT applies a disambiguation step to remove matches that do not correspond to the gene of interest, and filtering is applied to remove abstracts that mention a gene only in passing. The BI is linked to our Journal Database (JDB), where corresponding journal citations are stored. Just as importantly, biocurators also add to the JDB citations that have no GO annotation. The AgBase BI also supports bulk annotation upload to facilitate our Inferred from Electronic Annotation of agricultural gene products. All annotations must pass standard GO Consortium quality checking before release in AgBase.
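
    The iTerm-to-GO matching step described above (an informative term links to a GO term when it matches the term's name or one of its synonyms) can be illustrated with a minimal, hedged sketch; the tiny GO lookup table below is a hypothetical stand-in, not eGIFT's actual implementation.

    ```python
    # Hedged sketch of the iTerm-to-GO matching step described above: an
    # informative term (iTerm) is linked to a GO term when it matches the GO
    # term's name or one of its synonyms. The lookup table is a hypothetical
    # stand-in for a real GO release, and this is not eGIFT's own code.
    go_terms = {
        "GO:0006915": {"name": "apoptotic process", "synonyms": {"apoptosis"}},
        "GO:0008152": {"name": "metabolic process", "synonyms": {"metabolism"}},
    }

    def match_iterm(iterm):
        iterm = iterm.lower()
        return [
            go_id
            for go_id, entry in go_terms.items()
            if iterm == entry["name"] or iterm in entry["synonyms"]
        ]

    print(match_iterm("apoptosis"))   # ['GO:0006915']
    print(match_iterm("chemotaxis"))  # [] -- no matching GO term in this toy table
    ```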

  1. Unlocking the potential of survival data for model organisms through a new database and online analysis platform: SurvCurv

    PubMed Central

    Ziehm, Matthias; Thornton, Janet M

    2013-01-01

    Lifespan measurements, also called survival records, are a key phenotype in research on aging. If external hazards are excluded, aging alone determines the mortality in a population of model organisms. Understanding the biology of aging is highly desirable because of the benefits for the wide range of aging-related diseases. However, it is also extremely challenging because of the underlying complexity. Here, we describe SurvCurv, a new database and online resource focused on model organisms collating survival data for storage and analysis. All data in SurvCurv are manually curated and annotated. The database, available at http://www.ebi.ac.uk/thornton-srv/databases/SurvCurv/, offers various functions including plotting, Cox proportional hazards analysis, mathematical mortality models and statistical tests. It facilitates reanalysis and allows users to analyse their own data and compare it with the largest repository of model-organism data from published experiments, thus unlocking the potential of survival data and demographics in model organisms. PMID:23826631
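
    The abstract above lists Cox proportional hazards analysis among SurvCurv's functions; as a local illustration of that kind of analysis (not SurvCurv's own code), the following sketch fits a Cox model to hypothetical lifespan records with the lifelines package.

    ```python
    # Minimal sketch of the kind of survival analysis SurvCurv offers (Cox
    # proportional hazards), run locally on hypothetical lifespan records
    # using the lifelines package. SurvCurv itself is a web resource; this
    # is only an illustration of the analysis type.
    import pandas as pd
    from lifelines import CoxPHFitter

    records = pd.DataFrame({
        "lifespan_days": [38, 41, 45, 52, 55, 60, 61, 63, 70, 72],
        "observed":      [1, 1, 1, 1, 0, 1, 1, 0, 1, 1],  # 0 = right-censored
        "treated":       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],  # hypothetical intervention
    })

    cph = CoxPHFitter()
    cph.fit(records, duration_col="lifespan_days", event_col="observed")
    cph.print_summary()  # hazard ratio for the hypothetical "treated" covariate
    ```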

  2. Combining next-generation sequencing and online databases for microsatellite development in non-model organisms.

    PubMed

    Rico, Ciro; Normandeau, Eric; Dion-Côté, Anne-Marie; Rico, María Inés; Côté, Guillaume; Bernatchez, Louis

    2013-01-01

    Next-generation sequencing (NGS) is revolutionising marker development, and the rapidly increasing number of transcriptomes published across a wide variety of taxa is providing valuable sequence databases for the identification of genetic markers without the need to generate new sequences. Microsatellites are still the most important source of polymorphic markers in ecology and evolution. Motivated by our long-term interest in the adaptive radiation of a non-model species complex of whitefishes (Coregonus spp.), in this study we focus on microsatellite characterisation and multiplex optimisation using transcriptome sequences generated by Illumina® and Roche-454, as well as online databases of Expressed Sequence Tags (EST), for the study of whitefish evolution and demographic history. We identified and optimised 40 polymorphic loci in multiplex PCR reactions and validated the robustness of our analyses by testing several population genetics and phylogeographic predictions using 494 fish from five lakes and two distinct ecotypes. PMID:24296905

  3. Combining next-generation sequencing and online databases for microsatellite development in non-model organisms

    PubMed Central

    Rico, Ciro; Normandeau, Eric; Dion-Côté, Anne-Marie; Rico, María Inés; Côté, Guillaume; Bernatchez, Louis

    2013-01-01

    Next-generation sequencing (NGS) is revolutionising marker development, and the rapidly increasing number of transcriptomes published across a wide variety of taxa is providing valuable sequence databases for the identification of genetic markers without the need to generate new sequences. Microsatellites are still the most important source of polymorphic markers in ecology and evolution. Motivated by our long-term interest in the adaptive radiation of a non-model species complex of whitefishes (Coregonus spp.), in this study we focus on microsatellite characterisation and multiplex optimisation using transcriptome sequences generated by Illumina® and Roche-454, as well as online databases of Expressed Sequence Tags (EST), for the study of whitefish evolution and demographic history. We identified and optimised 40 polymorphic loci in multiplex PCR reactions and validated the robustness of our analyses by testing several population genetics and phylogeographic predictions using 494 fish from five lakes and two distinct ecotypes. PMID:24296905
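
    As a concrete illustration of the microsatellite-mining step these records describe, here is a minimal sketch that scans transcript sequences for simple sequence repeats with a regular expression; the motif lengths and repeat threshold are arbitrary illustrative choices, not the parameters or tools used in the study.

    ```python
    # Minimal sketch of microsatellite (simple sequence repeat) detection in
    # transcript sequences using a regular expression. The 2-6 bp motif range
    # and the >=5 copy threshold are arbitrary illustrative choices, not the
    # parameters or software used in the study above.
    import re

    SSR_PATTERN = re.compile(r"([ACGT]{2,6}?)\1{4,}")  # motif repeated 5+ times

    def find_ssrs(sequence):
        for match in SSR_PATTERN.finditer(sequence.upper()):
            motif = match.group(1)
            yield motif, len(match.group(0)) // len(motif), match.start()

    transcript = "GGTACACACACACACATTGCTGCTGCTGCTGCTGAAT"  # toy sequence
    for motif, copies, position in find_ssrs(transcript):
        print(f"motif={motif} copies={copies} start={position}")
    ```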

  4. Immediate Dissemination of Student Discoveries to a Model Organism Database Enhances Classroom-Based Research Experiences

    PubMed Central

    Wiley, Emily A.; Stover, Nicholas A.

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately “publish” their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students’ efforts to engage the scientific process and pursue additional research opportunities beyond the course. PMID:24591511

  5. SubtiWiki-a database for the model organism Bacillus subtilis that links pathway, interaction and expression information.

    PubMed

    Michna, Raphael H; Commichau, Fabian M; Tödter, Dominik; Zschiedrich, Christopher P; Stülke, Jörg

    2014-01-01

    Genome annotation and access to information from large-scale experimental approaches at the genome level are essential to improve our understanding of living cells and organisms. This is even more the case for model organisms, which form the basis for studying pathogens and technologically important species. We have generated SubtiWiki, a database for the Gram-positive model bacterium Bacillus subtilis (http://subtiwiki.uni-goettingen.de/). In addition to the established companion modules of SubtiWiki, SubtiPathways and SubtInteract, we have now created SubtiExpress, a third module, to visualize genome-scale transcription data that are of unprecedented quality and density. Today, SubtiWiki is one of the most complete collections of knowledge on a living organism in one single resource. PMID:24178028

  6. SubtiWiki–a database for the model organism Bacillus subtilis that links pathway, interaction and expression information

    PubMed Central

    Michna, Raphael H.; Commichau, Fabian M.; Tödter, Dominik; Zschiedrich, Christopher P.; Stülke, Jörg

    2014-01-01

    Genome annotation and access to information from large-scale experimental approaches at the genome level are essential to improve our understanding of living cells and organisms. This is even more the case for model organisms, which form the basis for studying pathogens and technologically important species. We have generated SubtiWiki, a database for the Gram-positive model bacterium Bacillus subtilis (http://subtiwiki.uni-goettingen.de/). In addition to the established companion modules of SubtiWiki, SubtiPathways and SubtInteract, we have now created SubtiExpress, a third module, to visualize genome-scale transcription data that are of unprecedented quality and density. Today, SubtiWiki is one of the most complete collections of knowledge on a living organism in one single resource. PMID:24178028

  7. MaizeGDB: The Maize Model Organism Database for Basic, Translational, and Applied Research

    PubMed Central

    Lawrence, Carolyn J.; Harper, Lisa C.; Schaeffer, Mary L.; Sen, Taner Z.; Seigfried, Trent E.; Campbell, Darwin A.

    2008-01-01

    In 2001 maize became the number one production crop in the world with the Food and Agriculture Organization of the United Nations reporting over 614 million tonnes produced. Its success is due to the high productivity per acre in tandem with a wide variety of commercial uses. Not only is maize an excellent source of food, feed, and fuel, but also its by-products are used in the production of various commercial products. Maize's unparalleled success in agriculture stems from basic research, the outcomes of which drive breeding and product development. In order for basic, translational, and applied researchers to benefit from others' investigations, newly generated data must be made freely and easily accessible. MaizeGDB is the maize research community's central repository for genetics and genomics information. The overall goals of MaizeGDB are to facilitate access to the outcomes of maize research by integrating new maize data into the database and to support the maize research community by coordinating group activities. PMID:18769488

  8. Database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1991-01-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus, the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be eased considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined only to the contents of the database. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
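
    As an illustration of the kind of standardized, community-accepted model such a database would house, here is a short sketch of the free-space path loss formula; this particular model is a textbook choice for illustration, since the abstract does not list the specific models the proposed database would contain.

    ```python
    # Illustration of the kind of standardized propagation model such a
    # database would house: free-space path loss. This is a textbook formula
    # chosen for illustration; the abstract does not enumerate the specific
    # models the proposed database would contain.
    import math

    def free_space_path_loss_db(distance_km, frequency_ghz):
        """FSPL (dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45"""
        return 20 * math.log10(distance_km) + 20 * math.log10(frequency_ghz) + 92.45

    # Example: a 20 GHz link over a ~38,000 km geostationary slant range
    print(f"{free_space_path_loss_db(38_000, 20):.1f} dB")  # roughly 210 dB
    ```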

  9. Use of model organism and disease databases to support matchmaking for human disease gene discovery.

    PubMed

    Mungall, Christopher J; Washington, Nicole L; Nguyen-Xuan, Jeremy; Condit, Christopher; Smedley, Damian; Köhler, Sebastian; Groza, Tudor; Shefchek, Kent; Hochheiser, Harry; Robinson, Peter N; Lewis, Suzanna E; Haendel, Melissa A

    2015-10-01

    The Matchmaker Exchange application programming interface (API) allows searching a patient's genotypic or phenotypic profiles across clinical sites, for the purposes of cohort discovery and variant disease causal validation. This API can be used not only to search for matching patients, but also to match against public disease and model organism data. This public disease data enable matching known diseases and variant-phenotype associations using phenotype semantic similarity algorithms developed by the Monarch Initiative. The model data can provide additional evidence to aid diagnosis, suggest relevant models for disease mechanism and treatment exploration, and identify collaborators across the translational divide. The Monarch Initiative provides an implementation of this API for searching multiple integrated sources of data that contextualize the knowledge about any given patient or patient family into the greater biomedical knowledge landscape. While this corpus of data can aid diagnosis, it is also the beginning of research to improve understanding of rare human diseases. PMID:26269093

  10. SubtiWiki 2.0—an integrated database for the model organism Bacillus subtilis

    PubMed Central

    Michna, Raphael H.; Zhu, Bingyao; Mäder, Ulrike; Stülke, Jörg

    2016-01-01

    To understand living cells, we need knowledge of each of their parts as well as of the interactions between these parts. To gain rapid and comprehensive access to this information, annotation databases are required. Here, we present SubtiWiki 2.0, the integrated database for the model bacterium Bacillus subtilis (http://subtiwiki.uni-goettingen.de/). SubtiWiki provides text-based access to published information about the genes and proteins of B. subtilis as well as presentations of metabolic and regulatory pathways. Moreover, manually curated protein-protein interaction diagrams are linked to the protein pages. Finally, expression data are shown for gene expression under 104 different conditions as well as absolute protein quantification for cytoplasmic proteins. To facilitate mobile use of SubtiWiki, we have now extended it with Apps that are available for iOS and Android devices. Importantly, the Apps allow users to link private notes and pictures to the gene/protein pages. Today, SubtiWiki has become one of the most complete collections of knowledge on a living organism in one single resource. PMID:26433225

  11. A Database for Propagation Models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Rucker, James

    1997-01-01

    The Propagation Models Database is designed to allow scientists and experimenters in the propagation field to process their data through many known and accepted propagation models. The database is Excel 5.0-based software that houses user-callable models of propagation phenomena. It does not contain a database of propagation data generated by the experiments. The database not only provides a powerful software tool for processing the data generated by experiments, but is also a time- and energy-saving tool for plotting results, generating tables and producing impressive and crisp hard copy for presentation and filing.

  12. Integration of an Evidence Base into a Probabilistic Risk Assessment Model. The Integrated Medical Model Database: An Organized Evidence Base for Assessing In-Flight Crew Health Risk and System Design

    NASA Technical Reports Server (NTRS)

    Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The IMM database is a relational database accessible to many people. It quantifies the model inputs through a ranking based on the highest value of the data, expressed as a Level of Evidence (LOE), together with a Quality of Evidence (QOE) score that provides an assessment of the evidence base for each medical condition. The IMM evidence base has already provided invaluable information for designers and for other uses.

  13. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not help link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
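
    A minimal sketch of the linking idea described above: a unique metadata key ties each Basic Event in the risk model to the data source selected for it, along with the manipulations applied to the raw estimate. pandas DataFrames stand in for the Excel workbook; all names and failure rates below are hypothetical.

    ```python
    # Hedged sketch of linking Basic Events to data sources via a unique
    # metadata key, as described above. pandas DataFrames stand in for the
    # Excel workbook; all identifiers and failure rates are hypothetical.
    import pandas as pd

    basic_events = pd.DataFrame({
        "data_key":    ["VLV-001", "PMP-014"],
        "basic_event": ["Isolation valve fails to close", "Coolant pump fails to start"],
    })

    data_sources = pd.DataFrame({
        "data_key":      ["VLV-001", "PMP-014"],
        "source":        ["Generic handbook entry", "Test program results"],
        "failure_rate":  [1.2e-4, 3.5e-3],  # per demand, hypothetical values
        "stress_factor": [1.0, 2.0],        # usage/maintenance adjustment
    })

    linked = basic_events.merge(data_sources, on="data_key")
    linked["adjusted_rate"] = linked["failure_rate"] * linked["stress_factor"]
    print(linked[["basic_event", "source", "adjusted_rate"]])
    ```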

  14. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani S.

    1992-01-01

    In June 1991, a paper outlining the development of a database for propagation models was presented at the fifteenth NASA Propagation Experimenters Meeting (NAPEX 15). The database is designed to allow scientists and experimenters in the propagation field to process their data through any known and accepted propagation model. The architecture of the database also incorporates the possibility of changing the standard models in the database to fit the scientist's or the experimenter's needs. The database not only provides powerful software to process the data generated by the experiments, but is also a time- and energy-saving tool for plotting results, generating tables, and producing impressive and crisp hard copy for presentation and filing.

  15. Organizing a breast cancer database: data management.

    PubMed

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as a valuable research tool for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Ensuring that the data collection process does not introduce inaccuracies helps to assure the overall quality of subsequent analyses. Data management is work that involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting the data by implementing high security levels. A properly designed database provides access to up-to-date, accurate information. Database design is an important component of application design. If you take the time to design your databases properly, you will be rewarded with a solid application foundation on which you can build the rest of your application. PMID:27197511

  16. The Organization of Free Text Databases.

    ERIC Educational Resources Information Center

    Paijmans, H.

    1988-01-01

    Discusses the design and implementation of database management systems for low-organized and natural language text on small computers. Emphasis is on the efficient storage and access of primary (text) and secondary (dictionary) files. Possible expansions to the system to increase its use in linguistic research are considered. (six references)…

  17. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Chuong

    1995-01-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC using Excel under the Windows environment or a Macintosh using Excel for the Macintosh. All the steps necessary to use the software are easy and often self-explanatory.

  18. Software Engineering Laboratory (SEL) database organization and user's guide

    NASA Technical Reports Server (NTRS)

    So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas

    1989-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.

  19. Community Organizing for Database Trial Buy-In by Patrons

    ERIC Educational Resources Information Center

    Pionke, J. J.

    2015-01-01

    Database trials do not often garner a lot of feedback. Using community-organizing techniques can not only potentially increase the amount of feedback received but also deepen the relationship between the librarian and his or her constituent group. This is a case study of the use of community-organizing techniques in a series of database trials for…

  20. DEPOT: A Database of Environmental Parameters, Organizations and Tools

    SciTech Connect

    CARSON,SUSAN D.; HUNTER,REGINA LEE; MALCZYNSKI,LEONARD A.; POHL,PHILLIP I.; QUINTANA,ENRICO; SOUZA,CAROLINE A.; HIGLEY,KATHRYN; MURPHIE,WILLIAM

    2000-12-19

    The Database of Environmental Parameters, Organizations, and Tools (DEPOT) has been developed by the Department of Energy (DOE) as a central warehouse for access to data essential for environmental risk assessment analyses. Initial efforts have concentrated on groundwater and vadose zone transport data and bioaccumulation factors. DEPOT seeks to provide a source of referenced data that, wherever possible, includes the level of uncertainty associated with these parameters. Based on the amount of data available for a particular parameter, uncertainty is expressed as a standard deviation or a distribution function. DEPOT also provides DOE site-specific performance assessment data, pathway-specific transport data, and links to environmental regulations, disposal site waste acceptance criteria, other environmental parameter databases, and environmental risk assessment models.

  1. Conceptual and logical level of database modeling

    NASA Astrophysics Data System (ADS)

    Hunka, Frantisek; Matula, Jiri

    2016-06-01

    Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of the database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.

  2. Organ system heterogeneity DB: a database for the visualization of phenotypes at the organ system level.

    PubMed

    Mannil, Deepthi; Vogt, Ingo; Prinz, Jeanette; Campillos, Monica

    2015-01-01

    Perturbations of mammalian organisms including diseases, drug treatments and gene perturbations in mice affect organ systems differently. Some perturbations impair relatively few organ systems while others lead to highly heterogeneous or systemic effects. Organ System Heterogeneity DB (http://mips.helmholtz-muenchen.de/Organ_System_Heterogeneity/) provides information on the phenotypic effects of 4865 human diseases, 1667 drugs and 5361 genetically modified mouse models on 26 different organ systems. Disease symptoms, drug side effects and mouse phenotypes are mapped to the System Organ Class (SOC) level of the Medical Dictionary of Regulatory Activities (MedDRA). Then, the organ system heterogeneity value, a measurement of the systemic impact of a perturbation, is calculated from the relative frequency of phenotypic features across all SOCs. For perturbations of interest, the database displays the distribution of phenotypic effects across organ systems along with the heterogeneity value and the distance between organ system distributions. In this way, it allows, in an easy and comprehensible fashion, the comparison of the phenotypic organ system distributions of diseases, drugs and their corresponding genetically modified mouse models of associated disease genes and drug targets. The Organ System Heterogeneity DB is thus a platform for the visualization and comparison of organ system level phenotypic effects of drugs, diseases and genes. PMID:25313158

  3. Organ system heterogeneity DB: a database for the visualization of phenotypes at the organ system level

    PubMed Central

    Mannil, Deepthi; Vogt, Ingo; Prinz, Jeanette; Campillos, Monica

    2015-01-01

    Perturbations of mammalian organisms including diseases, drug treatments and gene perturbations in mice affect organ systems differently. Some perturbations impair relatively few organ systems while others lead to highly heterogeneous or systemic effects. Organ System Heterogeneity DB (http://mips.helmholtz-muenchen.de/Organ_System_Heterogeneity/) provides information on the phenotypic effects of 4865 human diseases, 1667 drugs and 5361 genetically modified mouse models on 26 different organ systems. Disease symptoms, drug side effects and mouse phenotypes are mapped to the System Organ Class (SOC) level of the Medical Dictionary of Regulatory Activities (MedDRA). Then, the organ system heterogeneity value, a measurement of the systemic impact of a perturbation, is calculated from the relative frequency of phenotypic features across all SOCs. For perturbations of interest, the database displays the distribution of phenotypic effects across organ systems along with the heterogeneity value and the distance between organ system distributions. In this way, it allows, in an easy and comprehensible fashion, the comparison of the phenotypic organ system distributions of diseases, drugs and their corresponding genetically modified mouse models of associated disease genes and drug targets. The Organ System Heterogeneity DB is thus a platform for the visualization and comparison of organ system level phenotypic effects of drugs, diseases and genes. PMID:25313158
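
    The two records above state that the organ system heterogeneity value is calculated from the relative frequency of phenotypic features across all SOCs, but the exact formula is not given. As one plausible stand-in, the sketch below uses normalized Shannon entropy over SOC frequencies; both the entropy choice and the toy counts are illustrative assumptions, not the database's actual method.

    ```python
    # The heterogeneity value is described as being calculated from the
    # relative frequency of phenotypic features across System Organ Classes
    # (SOCs), but no formula is given in the abstract. Normalized Shannon
    # entropy is used here as a plausible stand-in; the counts are toy data.
    import math

    def heterogeneity(soc_feature_counts):
        total = sum(soc_feature_counts.values())
        freqs = [count / total for count in soc_feature_counts.values() if count]
        entropy = -sum(p * math.log(p) for p in freqs)
        return entropy / math.log(len(soc_feature_counts))  # 0 = focal, 1 = fully systemic

    focal_disease    = {"cardiac": 9, "nervous": 1, "renal": 0, "skin": 0}
    systemic_disease = {"cardiac": 3, "nervous": 2, "renal": 3, "skin": 2}
    print(f"focal:    {heterogeneity(focal_disease):.2f}")
    print(f"systemic: {heterogeneity(systemic_disease):.2f}")
    ```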

  4. Models And Results Database System.

    Energy Science and Technology Software Center (ESTSC)

    2001-03-27

    Version 00 MAR-D 4.16 is a program that is used primarily for Probabilistic Risk Assessment (PRA) data loading. This program defines a common relational database structure that is used by other PRA programs. This structure allows all of the software to access and manipulate data created by other software in the system without performing a lengthy conversion. The MAR-D program also provides the facilities for loading and unloading of PRA data from the relational database structure used to store the data to an ASCII format for interchange with other PRA software. The primary function of MAR-D is to create a data repository for NUREG-1150 and other permanent data by providing input, conversion, and output capabilities for data used by IRRAS, SARA, SETS and FRANTIC.

  5. Web resources for model organism studies.

    PubMed

    Tang, Bixia; Wang, Yanqing; Zhu, Junwei; Zhao, Wenming

    2015-02-01

    An ever-growing number of resources on model organisms have emerged with the continued development of sequencing technologies. In this paper, we review 13 databases of model organisms, most of which are reported by the National Institutes of Health of the United States (NIH; http://www.nih.gov/science/models/). We provide a brief description of each database, as well as detail its data sources and types, functions, tools, and availability of access. In addition, we also provide a quality assessment of these databases. Significantly, the organism databases instituted in the early 1990s, such as the Mouse Genome Database (MGD), Saccharomyces Genome Database (SGD), and FlyBase, have developed into what are now comprehensive, core authority resources. Furthermore, all of the databases mentioned here are updated continually according to user feedback and advancing technologies. PMID:25707592

  6. Databases for Computational Thermodynamics and Diffusion Modeling

    NASA Astrophysics Data System (ADS)

    Kattner, U. R.; Boettinger, W. J.; Morral, J. E.

    2002-11-01

    Databases for computational thermodynamics and diffusion modeling can be applied to predict phase diagrams for alloy design and alloy behavior during processing and service. Databases that are currently available to scientists, engineers and students need to be expanded and improved. The approach of the workshop was to first identify the database and information delivery tool needs of industry and education. The workshop format was a series of invited talks given to the group as a whole followed by general discussions of needs and benefits to provide a roadmap of future activities.

  7. Data dictionary model for relational databases

    SciTech Connect

    Seesing, P.R.

    1983-10-01

    A data dictionary is an important tool in information engineering, especially when the data is maintained in relational structures. Using the formal concepts from which relational DBMS technology is derived, it is possible to construct a meta-informational dictionary to model relational databases. This general model can be used to evaluate the design of existing and planned databases, as well as to model business or scientific data. Such a dictionary model is described and its potential uses and benefits are explored, with emphasis on integrity constraints. Issues related to balancing such formal conceptualization against real-world operations are presented.

  8. Development and Mining of a Volatile Organic Compound Database

    PubMed Central

    Abdullah, Azian Azamimi; Altaf-Ul-Amin, Md.; Ono, Naoaki; Sato, Tetsuo; Sugiura, Tadao; Morita, Aki Hirai; Katsuragi, Tetsuo; Muto, Ai; Nishioka, Takaaki; Kanaya, Shigehiko

    2015-01-01

    Volatile organic compounds (VOCs) are small molecules that exhibit high vapor pressure under ambient conditions and have low boiling points. Although VOCs contribute only a small proportion of the total metabolites produced by living organisms, they play an important role in chemical ecology, specifically in the biological interactions between organisms and ecosystems. VOCs are also important in the health care field, as they are presently used as biomarkers to detect various human diseases. Until now, information on VOCs has been scattered across the literature, and no database describing VOCs and their biological activities has been available. To address this gap, we have developed the KNApSAcK Metabolite Ecology Database, which contains information on the relationships between VOCs and their emitting organisms. The KNApSAcK Metabolite Ecology Database is also linked with the KNApSAcK Core and KNApSAcK Metabolite Activity Databases to provide further information on the metabolites and their biological activities. The VOC database can be accessed online. PMID:26495281

  9. The Arabidopsis Information Resource (TAIR): a model organism database providing a centralized, curated gateway to Arabidopsis biology, research materials and community.

    PubMed

    Rhee, Seung Yon; Beavis, William; Berardini, Tanya Z; Chen, Guanghong; Dixon, David; Doyle, Aisling; Garcia-Hernandez, Margarita; Huala, Eva; Lander, Gabriel; Montoya, Mary; Miller, Neil; Mueller, Lukas A; Mundodi, Suparna; Reiser, Leonore; Tacklind, Julie; Weems, Dan C; Wu, Yihe; Xu, Iris; Yoo, Daniel; Yoon, Jungwon; Zhang, Peifen

    2003-01-01

    Arabidopsis thaliana is the most widely studied plant today. The concerted efforts of over 11 000 researchers and 4000 organizations around the world are generating a rich diversity and quantity of information and materials. This information is made available through a comprehensive on-line resource called the Arabidopsis Information Resource (TAIR) (http://arabidopsis.org), which is accessible via commonly used web browsers and can be searched and downloaded in a number of ways. In the last two years, efforts have been focused on increasing data content and diversity, functionally annotating genes and gene products with controlled vocabularies, and improving data retrieval, analysis and visualization tools. New information includes sequence polymorphisms (including alleles, germplasms and phenotypes), Gene Ontology annotations, gene families, protein information, metabolic pathways, gene expression data from microarray experiments, and seed and DNA stocks. New data visualization and analysis tools include SeqViewer, which interactively displays the genome from the whole chromosome down to 10 kb of nucleotide sequence, and AraCyc, a metabolic pathway database and map tool that allows overlaying expression data onto pathway diagrams. Finally, we have recently incorporated seed and DNA stock information from the Arabidopsis Biological Resource Center (ABRC) and implemented a shopping-cart style on-line ordering system. PMID:12519987

  10. An information model based weld schedule database

    SciTech Connect

    Kleban, S.D.; Knorovsky, G.A.; Hicken, G.K.; Gershanok, G.A.

    1997-08-01

    As part of a computerized system (SmartWeld) developed at Sandia National Laboratories to facilitate agile manufacturing of welded assemblies, a weld schedule database (WSDB) was also developed. SmartWeld's overall goals are to shorten the design-to-product time frame and to promote right-the-first-time weldment design and manufacture by providing welding process selection guidance to component designers. The associated WSDB evolved into a substantial subproject by itself. At first, it was thought that the database would store perhaps 50 parameters about a weld schedule. This was a woeful underestimate: the current WSDB has over 500 parameters defined in 73 tables. This includes data about the weld, the piece parts involved, the piece part geometry, and great detail about the schedule and intervals involved in performing the weld. This complex database was built using information modeling techniques. Information modeling is a process that creates a model of objects and their roles for a given domain (i.e. welding). The Natural-Language Information Analysis Methodology (NIAM) technique was used, which is characterized by: (1) elementary facts being stated in natural language by the welding expert, (2) determinism (the resulting model is provably repeatable, i.e. it gives the same answer every time), and (3) extensibility (the model can be added to without changing existing structure). The information model produced a highly normalized relational schema that was translated to the Oracle(TM) relational database management system for implementation.
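
    To make the idea of a highly normalized weld schedule schema concrete, the sketch below shows a heavily simplified, hypothetical slice of such a schema using Python's built-in sqlite3 in place of Oracle; the table and column names are invented for illustration and are not taken from the actual 73-table WSDB.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE piece_part (
            part_id      INTEGER PRIMARY KEY,
            material     TEXT NOT NULL,
            thickness_mm REAL NOT NULL
        );
        CREATE TABLE weld (
            weld_id    INTEGER PRIMARY KEY,
            part_a     INTEGER NOT NULL REFERENCES piece_part(part_id),
            part_b     INTEGER NOT NULL REFERENCES piece_part(part_id),
            joint_type TEXT NOT NULL
        );
        CREATE TABLE schedule_interval (
            weld_id     INTEGER NOT NULL REFERENCES weld(weld_id),
            seq         INTEGER NOT NULL,   -- order of the interval within the schedule
            current_a   REAL,               -- welding current, amps
            travel_mm_s REAL,               -- travel speed, mm/s
            PRIMARY KEY (weld_id, seq)
        );
        """)
        conn.execute("INSERT INTO piece_part VALUES (1, '304L stainless', 1.5)")
        conn.execute("INSERT INTO piece_part VALUES (2, '304L stainless', 1.5)")
        conn.execute("INSERT INTO weld VALUES (10, 1, 2, 'butt')")
        conn.execute("INSERT INTO schedule_interval VALUES (10, 1, 85.0, 4.2)")
        print(conn.execute("SELECT COUNT(*) FROM schedule_interval").fetchone()[0])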

  11. The Database for Reaching Experiments and Models

    PubMed Central

    Walker, Ben; Kording, Konrad

    2013-01-01

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis. PMID:24244351
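
    DREAM's usefulness rests on converting lab-specific reaching datasets into one shared format so that models can be checked against many experiments. The Python sketch below illustrates that standardization step with a hypothetical common trial structure and a converter for one lab's ad-hoc export; the field names and unit conventions are assumptions for illustration, not DREAM's actual schema.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class StandardTrial:
            # Hypothetical common trial format (not DREAM's actual schema)
            subject_id: str
            condition: str
            t_s: List[float]                      # sample times, seconds
            hand_xy_m: List[Tuple[float, float]]  # hand position samples, metres

        def from_lab_a(record: dict) -> StandardTrial:
            # Map one lab's export (milliseconds, centimetres) onto the shared format
            return StandardTrial(
                subject_id=record["subj"],
                condition=record["cond"],
                t_s=[t / 1000.0 for t in record["time_ms"]],
                hand_xy_m=[(x / 100.0, y / 100.0) for x, y in record["pos_cm"]],
            )

        trial = from_lab_a({"subj": "S01", "cond": "force_field",
                            "time_ms": [0, 10, 20], "pos_cm": [(0, 0), (0.1, 0.4), (0.3, 1.1)]})
        print(trial.t_s, trial.hand_xy_m[-1])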

  12. Hydroacoustic forcing function modeling using DNS database

    NASA Technical Reports Server (NTRS)

    Zawadzki, I.; Gershfield, J. L.; Na, Y.; Wang, M.

    1996-01-01

    A wall pressure frequency spectrum model (Blake 1971 ) has been evaluated using databases from Direct Numerical Simulations (DNS) of a turbulent boundary layer (Na & Moin 1996). Good agreement is found for moderate to strong adverse pressure gradient flows in the absence of separation. In the separated flow region, the model underpredicts the directly calculated spectra by an order of magnitude. The discrepancy is attributed to the violation of the model assumptions in that part of the flow domain. DNS computed coherence length scales and the normalized wall pressure cross-spectra are compared with experimental data. The DNS results are consistent with experimental observations.

  13. Combining Soil Databases for Topsoil Organic Carbon Mapping in Europe

    PubMed Central

    Aksoy, Ece

    2016-01-01

    Accurately assessing the distribution of soil organic carbon (SOC) is an important issue because SOC plays a key role in the functioning of both natural ecosystems and agricultural systems. Several studies in the literature have aimed to find the best method to assess and map the distribution of SOC content for Europe. This study therefore examines another aspect of the issue by evaluating the performance of models built from aggregated soil samples drawn from different studies and land uses. The 23,835 soil samples used in this study were collected from the “Land Use/Cover Area frame Statistical Survey” (LUCAS) Project (samples from agricultural soil), the BioSoil Project (samples from forest soil), and the “Soil Transformations in European Catchments” (SoilTrEC) Project (local soil data from six critical zone observatories (CZOs) in Europe). Moreover, 15 spatial indicators (slope, aspect, elevation, compound topographic index (CTI), CORINE land-cover classification, parent material, texture, world reference base (WRB) soil classification, geological formations, annual average temperature, min-max temperature, total precipitation and average precipitation (for years 1960–1990 and 2000–2010)) were used as auxiliary variables in the prediction. One of the most popular geostatistical techniques, Regression-Kriging (RK), was applied to build the model and assess the distribution of SOC. This study showed that, even though the RK method was appropriate for successful SOC mapping, using combined databases did not increase the statistical significance of the results for assessing the SOC distribution. According to our results, SOC variation at the European scale was mainly affected by the elevation, slope, CTI, average temperature, average and total precipitation, texture, WRB and CORINE variables in our model. Moreover, the highest average SOC contents were found in the wetland areas

  14. Combining Soil Databases for Topsoil Organic Carbon Mapping in Europe.

    PubMed

    Aksoy, Ece; Yigini, Yusuf; Montanarella, Luca

    2016-01-01

    Accurately assessing the distribution of soil organic carbon (SOC) is an important issue because SOC plays a key role in the functioning of both natural ecosystems and agricultural systems. Several studies in the literature have aimed to find the best method to assess and map the distribution of SOC content for Europe. This study therefore examines another aspect of the issue by evaluating the performance of models built from aggregated soil samples drawn from different studies and land uses. The 23,835 soil samples used in this study were collected from the "Land Use/Cover Area frame Statistical Survey" (LUCAS) Project (samples from agricultural soil), the BioSoil Project (samples from forest soil), and the "Soil Transformations in European Catchments" (SoilTrEC) Project (local soil data from six critical zone observatories (CZOs) in Europe). Moreover, 15 spatial indicators (slope, aspect, elevation, compound topographic index (CTI), CORINE land-cover classification, parent material, texture, world reference base (WRB) soil classification, geological formations, annual average temperature, min-max temperature, total precipitation and average precipitation (for years 1960-1990 and 2000-2010)) were used as auxiliary variables in the prediction. One of the most popular geostatistical techniques, Regression-Kriging (RK), was applied to build the model and assess the distribution of SOC. This study showed that, even though the RK method was appropriate for successful SOC mapping, using combined databases did not increase the statistical significance of the results for assessing the SOC distribution. According to our results, SOC variation at the European scale was mainly affected by the elevation, slope, CTI, average temperature, average and total precipitation, texture, WRB and CORINE variables in our model. Moreover, the highest average SOC contents were found in the wetland areas; agricultural
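
    The study's Regression-Kriging workflow regresses SOC on environmental covariates and then interpolates the regression residuals spatially by kriging. The sketch below illustrates that two-step idea on synthetic data, using scikit-learn with a Gaussian-process regressor as a stand-in for the kriging of residuals; the coordinates, covariates and coefficients are fabricated for illustration and do not reproduce the paper's model.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)

        # Synthetic stand-ins: sample coordinates, two covariates (e.g. elevation,
        # precipitation) and a SOC response with noise
        xy = rng.uniform(0, 100, size=(200, 2))
        covars = rng.normal(size=(200, 2))
        soc = 3.0 + 1.5 * covars[:, 0] - 0.8 * covars[:, 1] + rng.normal(scale=0.5, size=200)

        # Step 1: regression of SOC on the environmental covariates
        reg = LinearRegression().fit(covars, soc)
        residuals = soc - reg.predict(covars)

        # Step 2: spatial interpolation of the residuals; a GP with an RBF kernel
        # stands in here for the kriging of residuals used in the paper
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(), normalize_y=True)
        gp.fit(xy, residuals)

        # Prediction at a new location = regression trend + interpolated residual
        new_xy = np.array([[50.0, 50.0]])
        new_covars = np.array([[0.2, -0.1]])
        print(reg.predict(new_covars) + gp.predict(new_xy))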

  15. Assessment of the SFC database for analysis and modeling

    NASA Technical Reports Server (NTRS)

    Centeno, Martha A.

    1994-01-01

    SFC is one of the four clusters that make up the Integrated Work Control System (IWCS), which will integrate the shuttle processing databases at Kennedy Space Center (KSC). The IWCS framework will enable communication among the four clusters and add new data collection protocols. The Shop Floor Control (SFC) module has been operational for two and a half years; however, at this stage, automatic links to the other three modules have not been implemented yet, except for a partial link to IOS (CASPR). SFC revolves around a DB/2 database with PFORMS acting as the database management system (DBMS). PFORMS is an off-the-shelf DB/2 application that provides a set of data entry screens and query forms. The main dynamic entity in the SFC and IOS database is a task; thus, the physical storage location and update privileges are driven by the status of the WAD. As we explored the SFC values, we realized that there was much to do before actually engaging in continuous analysis of the SFC data. Halfway into this effort, it was realized that full-scale analysis would have to become a future third phase of the effort. So, we concentrated on getting to know the contents of the database, and on establishing an initial set of tools to start the continuous analysis process. Specifically, we set out to: (1) provide specific procedures for statistical models, so as to enhance the TP-OAO office analysis and modeling capabilities; (2) design a data exchange interface; (3) prototype the interface to provide inputs to SCRAM; and (4) design a modeling database. These objectives were set with the expectation that, if met, they would provide former TP-OAO engineers with tools that would help them demonstrate the importance of process-based analyses. The latter, in turn, will help them obtain the cooperation of various organizations in charting out their individual processes.

  16. Spatial Database Modeling for Indoor Navigation Systems

    NASA Astrophysics Data System (ADS)

    Gotlib, Dariusz; Gnat, Miłosz

    2013-12-01

    For many years, cartographers have been involved in designing GIS and navigation systems. Most GIS applications use outdoor data, but similar applications are increasingly used inside buildings, so it is important to find a proper model for indoor spatial databases. The development of indoor navigation systems should draw on advanced teleinformation, geoinformatics, geodetic and cartographic knowledge. The authors present the fundamental requirements for an indoor data model for navigation purposes. In reviewing some of the solutions adopted around the world, they emphasize that navigation applications require specific data to present navigation routes in the right way. An original indoor data model, developed by the authors on the basis of the BISDM model, is presented; its purpose is to expand the opportunities for use in indoor navigation.

  17. Asteroid models from the Lowell photometric database

    NASA Astrophysics Data System (ADS)

    Ďurech, J.; Hanuš, J.; Oszkiewicz, D.; Vančo, R.

    2016-03-01

    Context. Information about shapes and spin states of individual asteroids is important for the study of the whole asteroid population. For asteroids from the main belt, most of the shape models available now have been reconstructed from disk-integrated photometry by the lightcurve inversion method. Aims: We want to significantly enlarge the current sample (~350) of available asteroid models. Methods: We use the lightcurve inversion method to derive new shape models and spin states of asteroids from the sparse-in-time photometry compiled in the Lowell Photometric Database. To speed up the time-consuming process of scanning the period parameter space through the use of convex shape models, we use the distributed computing project Asteroids@home, running on the Berkeley Open Infrastructure for Network Computing (BOINC) platform. This way, the period-search interval is divided into hundreds of smaller intervals. These intervals are scanned separately by different volunteers and then joined together. We also use an alternative, faster, approach when searching the best-fit period by using a model of triaxial ellipsoid. By this, we can independently confirm periods found with convex models and also find rotation periods for some of those asteroids for which the convex-model approach gives too many solutions. Results: From the analysis of Lowell photometric data of the first 100 000 numbered asteroids, we derived 328 new models. This almost doubles the number of available models. We tested the reliability of our results by comparing models that were derived from purely Lowell data with those based on dense lightcurves, and we found that the rate of false-positive solutions is very low. We also present updated plots of the distribution of spin obliquities and pole ecliptic longitudes that confirm previous findings about a non-uniform distribution of spin axes. However, the models reconstructed from noisy sparse data are heavily biased towards more elongated bodies with high

  18. A Model Based Mars Climate Database for the Mission Design

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A viewgraph presentation on a model-based climate database is shown. The topics include: 1) Why a model-based climate database?; 2) Mars Climate Database v3.1: Who uses it? (approx. 60 users!); 3) The new Mars Climate Database MCD v4.0; 4) MCD v4.0: what's new?; 5) Simulation of water ice clouds; 6) Simulation of the water ice cycle; 7) A new tool for surface pressure prediction; 8) Access to the database MCD 4.0; 9) How to access the database; and 10) New web access

  19. Physiological Parameters Database for PBPK Modeling (External Review Draft)

    EPA Science Inventory

    EPA released for public comment a physiological parameters database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence. It also contains similar data for an...

  20. A computational framework for a database of terrestrial biosphere models

    NASA Astrophysics Data System (ADS)

    Metzler, Holger; Müller, Markus; Ceballos-Núñez, Verónika; Sierra, Carlos A.

    2016-04-01

    Most terrestrial biosphere models consist of a set of coupled first-order ordinary differential equations. Each equation represents a pool containing carbon with a certain turnover rate. Although such models share some basic mathematical structures, they can have very different properties, such as the number of pools, cycling rates, and internal fluxes. We present a computational framework that helps analyze the structure and behavior of terrestrial biosphere models, using the process of soil organic matter decomposition as an example. The same framework can also be used for other sub-processes such as carbon fixation or allocation. First, the models have to be fed into a database consisting of simple text files with a common structure. Then they are read in using Python and transformed into an internal 'Model Class' that can be used to automatically create an overview of the model's structure, state variables, and internal and external fluxes. SymPy, a Python library for symbolic mathematics, is also used to calculate the Jacobian matrix at given steady states and the eigenvalues of this matrix. If complete parameter sets are available, the model can also be run using R to simulate its behavior under certain conditions and to support a deeper stability analysis. In this case, the framework is also able to provide phase-plane plots where appropriate. Furthermore, an overview of all the models in the database can be given to help identify their similarities and differences.
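
    As a minimal illustration of the SymPy step described above, the sketch below defines a hypothetical two-pool soil-carbon model (pool names, input and rate constants are invented), solves for its steady state, and derives the Jacobian matrix and its eigenvalues; it is a toy example of the framework's approach, not code from the framework itself.

        import sympy as sp

        # Hypothetical two-pool soil-carbon model:
        #   dC1/dt = I - k1*C1
        #   dC2/dt = h*k1*C1 - k2*C2
        C1, C2, I, k1, k2, h = sp.symbols("C1 C2 I k1 k2 h", positive=True)
        state = sp.Matrix([C1, C2])
        rhs = sp.Matrix([I - k1 * C1,
                         h * k1 * C1 - k2 * C2])

        # Steady state: solve rhs = 0 for the pool contents
        steady = sp.solve(list(rhs), [C1, C2], dict=True)[0]

        # Jacobian of the system and its eigenvalues at the steady state
        J = rhs.jacobian(state)
        print(steady)                       # C1 = I/k1, C2 = h*I/k2
        print(J.subs(steady).eigenvals())   # {-k1: 1, -k2: 1}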

  1. Integrated Space Asset Management Database and Modeling

    NASA Technical Reports Server (NTRS)

    MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane

    2015-01-01

    Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of data of many types related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing that data into useful technical information. Using the universal NORAD number as a unique identifier, the SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. The SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated. A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for
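
    As a minimal illustration of the cross-referencing idea described above (derived orbital characteristics joined to metadata through the NORAD catalog number), the sketch below uses Python's sqlite3; the table layout and the sample values are illustrative and are not SAM-D's actual schema.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE object_meta (
            norad_id INTEGER PRIMARY KEY,   -- universal NORAD catalog number
            name     TEXT,
            country  TEXT,
            status   TEXT                   -- e.g. active, inactive, debris
        );
        CREATE TABLE orbit (
            norad_id        INTEGER PRIMARY KEY REFERENCES object_meta(norad_id),
            epoch           TEXT,
            inclination_deg REAL,
            period_min      REAL
        );
        """)
        conn.execute("INSERT INTO object_meta VALUES (25544, 'ISS (ZARYA)', 'International', 'active')")
        conn.execute("INSERT INTO orbit VALUES (25544, '2015-01-01T00:00:00', 51.6, 92.9)")

        # Join derived orbital characteristics with metadata via the shared NORAD number
        row = conn.execute("""
            SELECT m.name, m.status, o.inclination_deg
            FROM object_meta m JOIN orbit o USING (norad_id)
        """).fetchone()
        print(row)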

  2. Synthesized Population Databases: A US Geospatial Database for Agent-Based Models

    PubMed Central

    Wheaton, William D.; Cajka, James C.; Chasteen, Bernadette M.; Wagener, Diane K.; Cooley, Philip C.; Ganapathi, Laxminarayana; Roberts, Douglas J.; Allpress, Justine L.

    2010-01-01

    Agent-based models simulate large-scale social systems. They assign behaviors and activities to “agents” (individuals) within the population being modeled and then allow the agents to interact with the environment and each other in complex simulations. Agent-based models are frequently used to simulate infectious disease outbreaks, among other uses. RTI used and extended an iterative proportional fitting method to generate a synthesized, geospatially explicit, human agent database that represents the US population in the 50 states and the District of Columbia in the year 2000. Each agent is assigned to a household; other agents make up the household occupants. For this database, RTI developed the methods for: generating synthesized households and persons; assigning agents to schools and workplaces so that complex interactions among agents as they go about their daily activities can be taken into account; and generating synthesized human agents who occupy group quarters (military bases, college dormitories, prisons, nursing homes). In this report, we describe both the methods used to generate the synthesized population database and the final data structure and data content of the database. This information will provide researchers with the information they need to use the database in developing agent-based models. Portions of the synthesized agent database are available to any user upon request. RTI will extract a portion (a county, region, or state) of the database for users who wish to use this database in their own agent-based models. PMID:20505787
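
    The core of the synthesis is iterative proportional fitting (IPF). The sketch below shows two-dimensional IPF, rescaling a seed contingency table until it matches target row and column totals (for example, household size by income bracket within one census tract); the seed table and the marginal targets are made up for illustration.

        import numpy as np

        def ipf_2d(seed, row_targets, col_targets, iters=100, tol=1e-9):
            # Iterative proportional fitting: alternately rescale rows and columns
            # until the table reproduces both sets of marginal totals.
            table = seed.astype(float).copy()
            for _ in range(iters):
                table *= (row_targets / table.sum(axis=1))[:, None]   # fit row totals
                table *= (col_targets / table.sum(axis=0))[None, :]   # fit column totals
                if np.allclose(table.sum(axis=1), row_targets, atol=tol):
                    break
            return table

        # Made-up example: joint counts of household size (rows) by income bracket (columns)
        seed = np.array([[8, 4, 2],
                         [6, 9, 5],
                         [1, 3, 7]])
        rows = np.array([120.0, 180.0, 60.0])   # target counts by household size
        cols = np.array([100.0, 140.0, 120.0])  # target counts by income bracket
        fitted = ipf_2d(seed, rows, cols)
        print(fitted.round(1), fitted.sum())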

  3. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    SciTech Connect

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  4. Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms

    PubMed Central

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S.

    2016-01-01

    Voluminous information is available on karyological studies of fishes; however, limited effort has been made to compile and curate the available karyological data in digital form. The ‘Fish Karyome’ database was a preliminary attempt to compile and digitize the available karyological information on finfishes of the Indian subcontinent. However, the database had limitations, since it covered data only on Indian finfishes and offered limited search options. In response to user feedback and the database's utility in fish cytogenetic studies, the Fish Karyome database was upgraded by applying Linux, Apache, MySQL and PHP (Hypertext Preprocessor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the chromosomal information available worldwide on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is being updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information based on habitat, family, conservation status and chromosome number. The system also displays chromosome numbers in model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility has also been provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome PMID:26980518

  5. Fish Karyome version 2.1: a chromosome database of fishes and other aquatic organisms.

    PubMed

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Rashid, Iliyas; Sharma, Jyoti; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra; Murali, S

    2016-01-01

    Voluminous information is available on karyological studies of fishes; however, limited effort has been made to compile and curate the available karyological data in digital form. The 'Fish Karyome' database was a preliminary attempt to compile and digitize the available karyological information on finfishes of the Indian subcontinent. However, the database had limitations, since it covered data only on Indian finfishes and offered limited search options. In response to user feedback and the database's utility in fish cytogenetic studies, the Fish Karyome database was upgraded by applying Linux, Apache, MySQL and PHP (Hypertext Preprocessor) (LAMP) technologies. In the present version, the scope of the system was increased by compiling and curating the chromosomal information available worldwide on fishes and other aquatic organisms, such as echinoderms, molluscs and arthropods, especially those of aquaculture importance. Thus, Fish Karyome version 2.1 presently covers 866 chromosomal records for 726 species, supported by 253 published articles, and the information is being updated regularly. The database provides information on chromosome number and morphology, sex chromosomes, chromosome banding, molecular cytogenetic markers, etc., supported by fish and karyotype images through interactive tools. It also enables users to browse and view chromosomal information based on habitat, family, conservation status and chromosome number. The system also displays chromosome numbers in model organisms, protocols for chromosome preparation and allied techniques, and a glossary of cytogenetic terms. A data submission facility has also been provided through a data submission panel. The database can serve as a unique and useful resource for cytogenetic characterization, sex determination, chromosomal mapping, cytotaxonomy, karyo-evolution and systematics of fishes. Database URL: http://mail.nbfgr.res.in/Fish_Karyome. PMID:26980518

  6. Organ growth functions in maturing male Sprague-Dawley rats based on a collective database.

    PubMed

    Mirfazaelian, Ahmad; Fisher, Jeffrey W

    2007-06-01

    Ten organ weights (liver, spleen, kidneys, heart, lungs, brain, adrenals, testes, epididymes, and seminal vesicles) of male Sprague-Dawley (S-D) rats of different ages (1-280 d) were extracted from a thorough literature survey database. A generalized Michaelis-Menten (GMM) model, used to fit organ weights versus age in a previous study (Schoeffner et al., 1999) based on limited data, was used to find the best-fit model for the present expanded data compilation. The GMM model has the functional form Wt = (Wt0·K^gamma + Wtmax·Age^gamma) / (K^gamma + Age^gamma), where Wt is organ/tissue weight at a specified age, Wt0 and Wtmax are the weights at birth and at maximal growth, respectively, and K and gamma are constants. Organ weights were significantly correlated with their respective ages for all organs and tissues. GMM-derived organ growth and percent body weight (%BW) fractions of different tissues were plotted against animal age and compared with experimental values as well as previously published models. The GMM-based organ growth and %BW fraction profiles were in general agreement with our empirical data as well as with previous studies. The present model was compared with the GMM model developed previously for six organs--liver, spleen, kidneys, heart, lungs, and brain--based on limited data, and no significant difference was noticed between the two sets of predictions. It was concluded that the GMM models presented herein for different male S-D rat organs (liver, spleen, kidneys, heart, lungs, brain, adrenals, testes, epididymes, and seminal vesicles) are capable of predicting organ weights and %BW ratios accurately at different ages. PMID:17497417
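
    Since the abstract gives the GMM growth function explicitly, a short fitting example follows; the age and organ-weight values below are invented for illustration and are not data from the study, and scipy's curve_fit is used as a generic least-squares fitter rather than the authors' original procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def gmm(age, wt0, wtmax, k, gamma):
            # Generalized Michaelis-Menten growth function from the abstract:
            # Wt = (Wt0*K^gamma + Wtmax*Age^gamma) / (K^gamma + Age^gamma)
            return (wt0 * k**gamma + wtmax * age**gamma) / (k**gamma + age**gamma)

        # Invented observations (age in days, organ weight in grams)
        age = np.array([1, 7, 14, 28, 56, 112, 280], dtype=float)
        wt = np.array([0.3, 1.1, 2.4, 5.0, 9.5, 13.0, 15.5])

        params, _ = curve_fit(gmm, age, wt, p0=[0.3, 16.0, 40.0, 1.5], bounds=(0, np.inf))
        wt0, wtmax, k, gamma = params
        print(f"Wt0={wt0:.2f} g, Wtmax={wtmax:.2f} g, K={k:.1f} d, gamma={gamma:.2f}")
        print("predicted weight at 90 d:", round(float(gmm(90.0, *params)), 2), "g")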

  7. Software Engineering Laboratory (SEL) database organization and user's guide, revision 2

    NASA Technical Reports Server (NTRS)

    Morusiewicz, Linda; Bristow, John

    1992-01-01

    The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.

  8. Sequence modelling and an extensible data model for genomic database

    SciTech Connect

    Li, Peter Wei-Der (Lawrence Berkeley Lab., CA)

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.
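
    As a toy illustration of what "abstract and biological sequence operators" can mean in practice, the Python sketch below defines a small sequence object with subsequence extraction, concatenation and reverse complement; it is only meant to convey the idea and does not reproduce the thesis's conceptual sequence model or query language.

        class BioSequence:
            # Toy sequence object with a few abstract and biological operators
            _COMPLEMENT = str.maketrans("ACGT", "TGCA")

            def __init__(self, residues: str):
                self.residues = residues.upper()

            def subsequence(self, start: int, end: int) -> "BioSequence":
                # abstract operator: extract positions [start, end), 0-based
                return BioSequence(self.residues[start:end])

            def concat(self, other: "BioSequence") -> "BioSequence":
                # abstract operator: join two sequences end to end
                return BioSequence(self.residues + other.residues)

            def reverse_complement(self) -> "BioSequence":
                # biological operator specific to DNA sequences
                return BioSequence(self.residues.translate(self._COMPLEMENT)[::-1])

            def __repr__(self):
                return f"BioSequence({self.residues!r})"

        marker = BioSequence("atgcgt").concat(BioSequence("TTAC"))
        print(marker, marker.subsequence(2, 6), marker.reverse_complement())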

  9. Sequence modelling and an extensible data model for genomic database

    SciTech Connect

    Li, Peter Wei-Der

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanisms for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented a query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  10. Database Information for Small Organizations: Extension Bridges the Utilization Gap.

    ERIC Educational Resources Information Center

    DePaulo, Peter J.

    1992-01-01

    Describes a demonstration program in which information needed to respond to questions is based on secondary data that comes from databases of agencies such as the Census Bureau and Dun and Bradstreet. (JOW)

  11. EPA's Drinking Water Treatability Database and Treatment Cost Models

    EPA Science Inventory

    USEPA Drinking Water Treatability Database and Drinking Water Treatment Cost Models are valuable tools for determining the effectiveness and cost of treatment for contaminants of emerging concern. The models will be introduced, explained, and demonstrated.

  12. Object-Oriented Geographical Database Model

    NASA Technical Reports Server (NTRS)

    Johnson, M. L.; Bryant, N.; Sapounas, D.

    1996-01-01

    Terbase is an Object-Oriented database system under development at the Jet Propulsion Laboratory (JPL). Terbase is designed for flexibility, reusability, maintenance ease, multi-user collaboration and independence, and efficiency. This paper details the design and development of Terbase as a geographic data server...

  13. Integrated Space Asset Management Database and Modeling

    NASA Astrophysics Data System (ADS)

    Gagliano, L.; MacLeod, T.; Mason, S.; Percy, T.; Prescott, J.

    The Space Asset Management Database (SAM-D) was implemented in order to effectively track known objects in space by ingesting information from a variety of databases and performing calculations to determine the expected position of the object at a specified time. While SAM-D performs this task very well, it is limited by technology and is not available outside of the local user base. Modeling and simulation can be powerful tools to exploit the information contained in SAM-D. However, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. A more capable data management infrastructure would extend SAM-D to support the larger data sets to be generated by the COI. A service-oriented architecture model will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interfaces for visualizations. Based on a web-centric approach, the entire COI will be able to access the data and related analytics. In addition, tight control of information sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. SIMON is a Government off the Shelf information sharing platform in use throughout DoD and DHS information sharing and situation awareness communities. SIMON provides fine-grained control to data owners, allowing them to determine exactly how and when their data is shared. SIMON supports a micro-service approach to system development, meaning M&S and analytic services can be easily built or adapted. It is uniquely positioned to fill this need as an information-sharing platform with a proven track record of successful situational awareness system deployments. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust SA environment for the NASA SA COI that can be extended and expanded indefinitely.

  14. Integrated Space Asset Management Database and Modeling

    NASA Astrophysics Data System (ADS)

    Gagliano, L.; MacLeod, T.; Mason, S.; Percy, T.; Prescott, J.

    The Space Asset Management Database (SAM-D) was implemented in order to effectively track known objects in space by ingesting information from a variety of databases and performing calculations to determine the expected position of the object at a specified time. While SAM-D performs this task very well, it is limited by technology and is not available outside of the local user base. Modeling and simulation can be powerful tools to exploit the information contained in SAM-D. However, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. A more capable data management infrastructure would extend SAM-D to support the larger data sets to be generated by the COI. A service-oriented architecture model will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interfaces for visualizations. Based on a web-centric approach, the entire COI will be able to access the data and related analytics. In addition, tight control of information sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. SIMON is a Government off the Shelf information sharing platform in use throughout DoD and DHS information sharing and situation awareness communities. SIMON provides fine-grained control to data owners, allowing them to determine exactly how and when their data is shared. SIMON supports a micro-service approach to system development, meaning M&S and analytic services can be easily built or adapted. It is uniquely positioned to fill this need as an information-sharing platform with a proven track record of successful situational awareness system deployments. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust SA environment for the NASA SA COI that can be extended and expanded indefinitely.

  15. A Robust Damage Assessment Model for Corrupted Database Systems

    NASA Astrophysics Data System (ADS)

    Fu, Ge; Zhu, Hong; Li, Yingjiu

    An intrusion-tolerant database uses damage assessment techniques to detect the scale of damage propagation in a corrupted database system. Traditional damage assessment approaches in an intrusion-tolerant database system can only locate damage caused by reading corrupted data. In fact, there are many other damage-spreading patterns that have not been considered in traditional damage assessment models. In this paper, we systematically analyze inter-transaction dependency relationships that have been neglected in previous research and propose four different dependency relationships between transactions which may cause damage propagation. We extend the existing damage assessment model based on these four novel dependency relationships. The essential properties of our model are also discussed.
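
    The paper's contribution is the set of four additional dependency relationships, which are not reproduced here; the sketch below only shows the basic mechanism any such model builds on: given edges from each transaction to the transactions that depend on it, mark an initially corrupted transaction and propagate the damage with a breadth-first search. The transaction IDs and the dependency log are hypothetical.

        from collections import deque

        def assess_damage(dependents, corrupted_seed):
            # Return every transaction reachable from the seed set through
            # dependency edges (transaction -> transactions that depend on it).
            damaged = set(corrupted_seed)
            queue = deque(corrupted_seed)
            while queue:
                tx = queue.popleft()
                for dep in dependents.get(tx, ()):
                    if dep not in damaged:
                        damaged.add(dep)
                        queue.append(dep)
            return damaged

        # Hypothetical log: T2 and T3 read data written by T1; T4 reads data written by T3
        dependents = {"T1": ["T2", "T3"], "T3": ["T4"]}
        print(sorted(assess_damage(dependents, {"T1"})))   # ['T1', 'T2', 'T3', 'T4']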

  16. A Web Database To Manage and Organize ANSI Standards Collections.

    ERIC Educational Resources Information Center

    Matylonek, John C.; Peasley, Maren

    2001-01-01

    Discusses collections of standards by ANSI (American National Standards Institute) and the problems they create for technical libraries. Describes a custom-designed Web database at Oregon State University that is linked to online catalog records, thus enhancing access to the standards collection. (LRW)

  17. Impact of Prior Knowledge of Informational Content and Organization on Learning Search Principles in a Database.

    ERIC Educational Resources Information Center

    Linde, Lena; Bergstrom, Monica

    1988-01-01

    The importance of prior knowledge of informational content and organization for search performance on a database was evaluated for 17 undergraduates. Pretraining related to content and organization did facilitate learning logical search principles in a relational database; content pretraining was more efficient. (SLD)

  18. PHYTOTOX: DATABASE DEALING WITH THE EFFECT OF ORGANIC CHEMICALS ON TERRESTRIAL VASCULAR PLANTS

    EPA Science Inventory

    A new database, PHYTOTOX, dealing with the direct effects of exogenously supplied organic chemicals on terrestrial vascular plants is described. The database consists of two files, a Reference File and Effects File. The Reference File is a bibliographic file of published research...

  19. UTAB: A COMPUTER DATABASE ON RESIDUES OF XENOBIOTIC ORGANIC CHEMICALS AND HEAVY METALS IN PLANTS

    EPA Science Inventory

    UTAB can be used to estimate the accumulation of chemicals in vegetation and their subsequent movement through the food chain. The UTAB Database contains information concerned with the uptake/accumulation, translocation, adhesion, and biotransformation of both xenobiotic organic c...

  20. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system, is studied here. Six probabilistic models are developed, along with expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. From this comparison, it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
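
    The paper checks its analytical estimates against simulation; the sketch below is a deliberately crude, hypothetical Monte Carlo of that flavor, in which transactions touch random data items and roll back when they conflict with a recent active transaction. The conflict rule, parameter values and workload are invented and do not correspond to any of the paper's six probabilistic models.

        import random

        def simulate_rollbacks(n_txns=10000, n_items=500, items_per_txn=4,
                               concurrency=20, seed=1):
            # Crude conflict model: a transaction rolls back if any item it accesses
            # is already held by one of the `concurrency` most recent active transactions.
            rng = random.Random(seed)
            active = []        # sets of items held by currently active transactions
            rollbacks = 0
            for _ in range(n_txns):
                items = set(rng.sample(range(n_items), items_per_txn))
                if any(items & held for held in active):
                    rollbacks += 1
                else:
                    active.append(items)
                    if len(active) > concurrency:
                        active.pop(0)
            return rollbacks / n_txns

        print(f"estimated rollback fraction: {simulate_rollbacks():.3f}")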

  1. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.

  2. Outline of a Model for Lexical Databases.

    ERIC Educational Resources Information Center

    Ide, Nancy; And Others

    1993-01-01

    Reports on a project showing that relational data models, including unnormalized models that allow the nesting of relations, cannot fully capture the structural properties of lexical information. A feature-based model that allows for a full representation of some nesting and defines a factoring mechanism is described and demonstrated. (38…

  3. GIS-based Conceptual Database Model for Planetary Geoscientific Mapping

    NASA Astrophysics Data System (ADS)

    van Gasselt, Stephan; Nass, Andrea; Neukum, Gerhard

    2010-05-01

    concerning, e.g., map products (product and cartographic representation), sensor-data products, stratigraphy definitions for each planet (facies, formation, ...), and mapping units. Domains and subtypes as well as a set of two dozen relationships define their interaction and allow a high level of constraints that help to limit errors by domain- and topologic boundary conditions without limiting the ability of the mapper to perform his/her task. The geodatabase model is part of a data model currently under development and design in the context of providing tools and definitions for mapping, cartographic representations and data exploitation. The database model as an integral part is designed for portability with respect to geoscientific mapping tasks in general and can be applied to every GIS project dealing with terrestrial planetary objects. It will be accompanied by definitions and representations on the cartographic level as well as tools and utilities for providing easily accessible workflows focusing on query, organization, maintenance, integration of planetary data and meta information. The data model's layout is modularized with individual components dealing with symbol representations (geology and geomorphology), metadata accessibility and modification, definition of stratigraphic entities and their relationships as well as attribute domains, extensions for planetary mapping and analysis tasks as well as integration of data information on the level of vector representations for easily accessible querying, data processing in connection with ISIS/GDAL and data integration.

  4. Imprecision and Uncertainty in the UFO Database Model.

    ERIC Educational Resources Information Center

    Van Gyseghem, Nancy; De Caluwe, Rita

    1998-01-01

    Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects, and thus…

  5. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable and software accessible format. The Human thermal database project is intended to do just that: to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark to judge human thermal models against, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments of human subjects primarily in air from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of relative strength and predictive quality of the models.
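
    The statistical comparison between a thermal model and the database reduces, at its simplest, to error metrics over paired predicted and measured responses. The sketch below computes bias and root-mean-square error for one made-up set of core-temperature pairs; the numbers are placeholders and no actual Wissler-model output is reproduced.

        import math

        # Made-up paired values: measured core temperature from database trials (deg C)
        # versus a thermal model's prediction for the same exposures
        measured  = [36.9, 37.2, 37.8, 38.1, 36.5, 37.0]
        predicted = [37.0, 37.1, 37.5, 38.4, 36.7, 37.1]

        errors = [p - m for p, m in zip(predicted, measured)]
        bias = sum(errors) / len(errors)
        rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
        print(f"bias = {bias:+.2f} C, RMSE = {rmse:.2f} C over {len(errors)} trials")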

  6. SIFT vehicle recognition with semi-synthetic model database

    NASA Astrophysics Data System (ADS)

    Price, Rebecca L.; Rovito, Todd V.

    2012-06-01

    Object recognition is an important problem that has many applications that are of interest to the United States Air Force (USAF). Recently the USAF released its update to Technology Horizons, a report that is designed to guide the science and technology direction of the Air Force. Technology Horizons specifically calls out for the need to use autonomous systems in essentially all aspects of Air Force operations [1]. Object recognition is a key enabler to autonomous exploitation of intelligence, surveillance, and reconnaissance (ISR) data which might make the automatic searching of millions of hours of video practical. In particular this paper focuses on vehicle recognition with Lowe's Scale-invariant feature transform (SIFT) using a model database that was generated with semi-synthetic data. To create the model database we used a desktop laser scanner to create a high resolution 3D facet model. Then the 3D facet model was imported into LuxRender, a physics accurate ray tracing tool, and several views were rendered to create a model database. SIFT was selected because the algorithm is invariant to scale, noise, and illumination making it possible to create a model database of only a hundred original viewing locations which keeps the size of the model database reasonable.
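
    A minimal sketch of the SIFT matching step follows, using OpenCV's cv2.SIFT_create (available in recent OpenCV releases and in contrib builds); instead of real imagery it fabricates a simple pattern and a rotated copy to stand in for a query frame and one rendered model-database view, and applies Lowe's ratio test to keep unambiguous matches.

        import cv2
        import numpy as np

        # Synthetic stand-ins: a simple pattern as the "model view" and a rotated copy as the "query"
        model_view = np.full((240, 320), 30, np.uint8)
        cv2.rectangle(model_view, (60, 80), (200, 160), 200, -1)
        cv2.circle(model_view, (230, 70), 25, 120, -1)
        M = cv2.getRotationMatrix2D((160, 120), 15, 1.0)   # rotate 15 degrees about the centre
        query = cv2.warpAffine(model_view, M, (320, 240))

        sift = cv2.SIFT_create()
        kp_q, des_q = sift.detectAndCompute(query, None)
        kp_m, des_m = sift.detectAndCompute(model_view, None)

        # k-nearest-neighbour matching with Lowe's ratio test to drop ambiguous matches
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = []
        for pair in matcher.knnMatch(des_q, des_m, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])
        print(f"{len(good)} good SIFT matches between query and model view")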

  7. Performance modeling for large database systems

    NASA Astrophysics Data System (ADS)

    Schaar, Stephen; Hum, Frank; Romano, Joe

    1997-02-01

    One of the unique approaches Science Applications International Corporation took to meet performance requirements was to start the modeling effort during the proposal phase of the Interstate Identification Index/Federal Bureau of Investigations (III/FBI) project. The III/FBI Performance Model uses analytical modeling techniques to represent the III/FBI system. Inputs to the model include workloads for each transaction type, record size for each record type, number of records for each file, hardware envelope characteristics, engineering margins and estimates for software instructions, memory, and I/O for each transaction type. The model uses queuing theory to calculate the average transaction queue length. The model calculates a response time and the resources needed for each transaction type. Outputs of the model include the total resources needed for the system, a hardware configuration, and projected inherent and operational availability. The III/FBI Performance Model is used to evaluate what-if scenarios and allows a rapid response to engineering change proposals and technical enhancements.
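
    The abstract notes that queuing theory is used to obtain average queue length and response time per transaction type. As a minimal, hypothetical illustration of those calculations (not the III/FBI model itself), the sketch below evaluates the standard single-server M/M/1 formulas for an invented workload.

        def mm1_metrics(arrival_rate, service_rate):
            # Classic M/M/1 results: utilization, mean number in system, mean response time
            if arrival_rate >= service_rate:
                raise ValueError("unstable queue: arrival rate must be below service rate")
            rho = arrival_rate / service_rate          # server utilization
            l = rho / (1.0 - rho)                      # mean number of transactions in system
            w = 1.0 / (service_rate - arrival_rate)    # mean response time; Little's law gives L = lambda * W
            return rho, l, w

        # Invented workload: 8 transactions/s arriving, the server completes 10 transactions/s
        rho, l, w = mm1_metrics(8.0, 10.0)
        print(f"utilization={rho:.0%}, avg in system={l:.1f}, avg response={w * 1000:.0f} ms")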

  8. Examining the Factors That Contribute to Successful Database Application Implementation Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Nworji, Alexander O.

    2013-01-01

    Most organizations spend millions of dollars due to the impact of improperly implemented database application systems as evidenced by poor data quality problems. The purpose of this quantitative study was to use, and extend, the technology acceptance model (TAM) to assess the impact of information quality and technical quality factors on database…

  9. BioProject and BioSample databases at NCBI: facilitating capture and organization of metadata.

    PubMed

    Barrett, Tanya; Clark, Karen; Gevorgyan, Robert; Gorelenkov, Vyacheslav; Gribov, Eugene; Karsch-Mizrachi, Ilene; Kimelman, Michael; Pruitt, Kim D; Resenchuk, Sergei; Tatusova, Tatiana; Yaschenko, Eugene; Ostell, James

    2012-01-01

    As the volume and complexity of data sets archived at NCBI grow rapidly, so does the need to gather and organize the associated metadata. Although metadata has been collected for some archival databases, previously, there was no centralized approach at NCBI for collecting this information and using it across databases. The BioProject database was recently established to facilitate organization and classification of project data submitted to NCBI, EBI and DDBJ databases. It captures descriptive information about research projects that result in high volume submissions to archival databases, ties together related data across multiple archives and serves as a central portal by which to inform users of data availability. Concomitantly, the BioSample database is being developed to capture descriptive information about the biological samples investigated in projects. BioProject and BioSample records link to corresponding data stored in archival repositories. Submissions are supported by a web-based Submission Portal that guides users through a series of forms for input of rich metadata describing their projects and samples. Together, these databases offer improved ways for users to query, locate, integrate and interpret the masses of data held in NCBI's archival repositories. The BioProject and BioSample databases are available at http://www.ncbi.nlm.nih.gov/bioproject and http://www.ncbi.nlm.nih.gov/biosample, respectively. PMID:22139929
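
    Besides the web Submission Portal described above, BioProject records can be located programmatically through NCBI's documented E-utilities interface; the sketch below issues an esearch query (the search term is an arbitrary example).

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

# Query the BioProject database through NCBI E-utilities (esearch).
# The search term is an arbitrary example.
params = urlencode({
    "db": "bioproject",
    "term": "Tribolium castaneum[Organism]",
    "retmode": "json",
    "retmax": 5,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urlopen(url) as response:
    result = json.load(response)

# The returned UIDs can be passed to esummary/efetch for full records.
print(result["esearchresult"]["idlist"])
```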

  10. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    NASA Technical Reports Server (NTRS)

    Cognata, T.; Bue, G.; Makinen, J.

    2011-01-01

    The human thermal database developed at the Johnson Space Center (JSC) is used to evaluate a set of widely used human thermal models. This database will facilitate a more accurate evaluation of human thermoregulatory response using in a variety of situations, including those situations that might otherwise prove too dangerous for actual testing--such as extreme hot or cold splashdown conditions. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments of human subjects primarily in air from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of relative strength and predictive quality of the models. Human thermal modeling has considerable long term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate it against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable and software accessible format. The Human thermal database project is intended to do just so; to collect relevant data from literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark to judge human thermal models against, in identifying model strengths and weakness, to support model development and improve correlation, and to statistically quantify a model s predictive quality.

  11. Content-Based Search on a Database of Geometric Models: Identifying Objects of Similar Shape

    SciTech Connect

    XAVIER, PATRICK G.; HENRY, TYSON R.; LAFARGE, ROBERT A.; MEIRANS, LILITA; RAY, LAWRENCE P.

    2001-11-01

    The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database may be searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally derived from CAD models, while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.
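
    The document summarized here does not reproduce the key-generation method; as one generic illustration of a content-based shape key, the sketch below computes a D2 shape distribution (a normalized histogram of distances between random point pairs) and ranks a small in-memory "database" by similarity. This is a stand-in technique, not necessarily the Geometric Search Engine's approach.

```python
import numpy as np

def d2_signature(points, n_pairs=10_000, bins=32, rng=None):
    """D2 shape distribution: histogram of distances between random point pairs.

    points: (N, 3) array sampled from a model's surface (hypothetical input).
    """
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def dissimilarity(sig_a, sig_b):
    """L1 distance between two normalized signatures; smaller means more similar."""
    return np.abs(sig_a - sig_b).sum()

# Compare a 'target' point cloud against a tiny in-memory database of models.
target = np.random.default_rng(0).normal(size=(500, 3))
database = {
    "model_a": target * 1.05,                                  # rescaled copy of the target
    "model_b": np.random.default_rng(1).uniform(size=(500, 3)) # unrelated shape
}
sig_t = d2_signature(target, rng=0)
ranked = sorted(database, key=lambda k: dissimilarity(sig_t, d2_signature(database[k], rng=0)))
print(ranked)  # 'model_a' should rank first
```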

  12. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  13. Spatial-temporal database model based on geodatabase

    NASA Astrophysics Data System (ADS)

    Zhu, Hongmei; Luo, Yu

    2009-10-01

    Entities in the real world have non-spatial attributes as well as spatial and temporal features. A spatial-temporal data model aims to describe these intrinsic characteristics of entities appropriately and to model them at a conceptual level, so that the model can present both static information and dynamic information that changes over time. In this paper, we devise a novel spatial-temporal data model based on the Geodatabase. The model employs an object-oriented analysis method, combining the object concept with events. Each entity is defined as a feature class encapsulating attributes and operations. The operations detect changes and store them automatically in a historical database within the Geodatabase. Furthermore, the model takes advantage of the existing strengths of the relational database at the bottom level of the Geodatabase, such as triggers and constraints, to monitor events on attributes or locations and respond to them correctly. A geographic database for the Kunming municipal sewerage geographic information system was implemented with the model. The database shows excellent performance in managing data and tracking the details of changes. It provides a solid data platform for querying, replaying history, and predicting future trends. The case demonstrates that the spatial-temporal data model is efficient and practical.
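
    A minimal sketch of the change-tracking idea using plain SQLite as a stand-in for the relational layer beneath a Geodatabase: a trigger copies the previous state of a feature into a history table whenever the feature is updated. The table and column names are hypothetical, not those of the Kunming system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Hypothetical sewer-pipe feature table (geometry stored as WKT text here).
CREATE TABLE pipe (
    pipe_id   INTEGER PRIMARY KEY,
    geometry  TEXT,
    diameter  REAL,
    modified  TEXT DEFAULT CURRENT_TIMESTAMP
);

-- History table receives the prior state of a row on every update.
CREATE TABLE pipe_history (
    pipe_id   INTEGER,
    geometry  TEXT,
    diameter  REAL,
    valid_to  TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE TRIGGER pipe_track_change
AFTER UPDATE ON pipe
BEGIN
    INSERT INTO pipe_history (pipe_id, geometry, diameter)
    VALUES (OLD.pipe_id, OLD.geometry, OLD.diameter);
END;
""")

conn.execute("INSERT INTO pipe (pipe_id, geometry, diameter) VALUES (1, 'LINESTRING(0 0, 10 0)', 0.3)")
conn.execute("UPDATE pipe SET diameter = 0.4 WHERE pipe_id = 1")
print(conn.execute("SELECT * FROM pipe_history").fetchall())  # old state with diameter 0.3
```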

  14. A Database Model for Medical Consultation.

    ERIC Educational Resources Information Center

    Anvari, Morteza

    1991-01-01

    Describes a relational data model that can be used for knowledge representation and manipulation in rule-based medical consultation systems. Fuzzy queries or attribute values and fuzzy set theory are discussed, functional dependencies are described, and an example is presented of a system for diagnosing causes of eye inflammation. (15 references)…

  15. Visual Analysis of Residuals from Data-Based Models in Complex Industrial Processes

    NASA Astrophysics Data System (ADS)

    Ordoñez, Daniel G.; Cuadrado, Abel A.; Díaz, Ignacio; García, Francisco J.; Díez, Alberto B.; Fuertes, Juan J.

    2012-10-01

    The use of data-based models for visualization purposes in an industrial background is discussed. Results using Self-Organizing Maps (SOM) show how through a good design of the model and a proper visualization of the residuals generated by the model itself, the behavior of essential parameters of the process can be easily tracked in a visual way. Real data from a cold rolling facility have been used to prove the advantages of these techniques.

  16. 3MdB: the Mexican Million Models database

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.

    2014-10-01

    The 3MdB is an original effort to construct a large multipurpose database of photoionization models. It is a more modern version of a previous attempt based on Cloudy3D and IDL tools, and it is accessed via MySQL queries. The models are obtained using the well-known and widely used Cloudy photoionization code (Ferland et al., 2013). The database is intended to host grids of models, with different references to identify each project and to facilitate the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can request that a grid be run and stored in 3MdB, increasing the visibility of the grid and its potential side applications.

  17. Materials Database Development for Ballistic Impact Modeling

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael

    2007-01-01

    A set of experimental data is being generated under the Fundamental Aeronautics Program Supersonics project to help create and validate accurate computational impact models of jet engine impact events. The data generated will include material property data generated at a range of different strain rates, from 1×10⁻⁴/sec to 5×10⁴/sec, over a range of temperatures. In addition, carefully instrumented ballistic impact tests will be conducted on flat plates and curved structures to provide material and structural response information to help validate the computational models. The material property data and the ballistic impact data will be generated using materials from the same lot, as far as possible. It was found in preliminary testing that the surface finish of test specimens has an effect on measured high strain rate tension response of AL2024. Both the maximum stress and maximum elongation are greater on specimens with a smoother finish. This report gives an overview of the testing that is being conducted and presents results of preliminary testing of the surface finish study.

  18. Evaluating Service Organization Models

    PubMed Central

    TOUATI, NASSERA; PINEAULT, RAYNALD; CHAMPAGNE, FRANÇOIS; DENIS, JEAN-LOUIS; BROUSSELLE, ASTRID; CONTANDRIOPOULOS, ANDRÉ-PIERRE; GENEAU, ROBERT

    2016-01-01

    Based on the example of the evaluation of service organization models, this article shows how a configurational approach overcomes the limits of traditional methods, which for the most part have studied the individual components of various models considered independently of one another. These traditional methods have led to results (observed effects) that are difficult to interpret. The configurational approach, in contrast, is based on the hypothesis that effects are associated with a set of internally coherent model features that form various configurations. These configurations, like their effects, are context-dependent. We explore the theoretical basis of the configurational approach in order to emphasize its relevance, and discuss the methodological challenges inherent in the application of this approach through an in-depth analysis of the scientific literature. We also propose methodological solutions to these challenges. We illustrate with an example how a configurational approach has been used to evaluate primary care models. Finally, we begin a discussion of the implications of this new evaluation approach for the scientific and decision-making communities.

  19. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems is networks. When estimating the network structure from measured signals, several assumptions, such as stationarity, are typically made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. Here we propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately applicable to measured signals. We demonstrate its performance in a simulation study. In experiments on transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, which are key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches to date lack.

  20. SPECTRAFACTORY.NET: A DATABASE OF MOLECULAR MODEL SPECTRA

    SciTech Connect

    Cami, J.; Van Malderen, R.; Markwick, A. J. E-mail: Andrew.Markwick@manchester.ac.uk

    2010-04-01

    We present a homogeneous database of synthetic molecular absorption and emission spectra from optical to mm wavelengths for a large range of temperatures and column densities relevant for various astrophysical purposes, in particular for the identification and first-order analysis of molecular bands in spectroscopic observations. All spectra are calculated in the LTE limit from several molecular line lists, and are presented at various spectral resolving powers corresponding to several specific instrument simulations. The database is available online at http://www.spectrafactory.net, where users can freely browse, search, display, and download the spectra. We describe how additional model spectra can be requested for (automatic) calculation and inclusion. The database already contains over half a million model spectra for 39 molecules (96 different isotopologues) over the wavelength range 350 nm-3 mm (~3-30,000 cm⁻¹).

  1. Conventionally altered organisms: Database on survival, dispersal, fate, and pathogenicity

    SciTech Connect

    Kuklinski, D.M.; King, K.H.; Addison, J.T.; Travis, C.C.

    1990-03-29

    Although increases in development and sales of biotechnology are projected, research on the risks associated with deliberate releases of biotechnology products lags behind. Thus, there is an urgent need to provide investigators and regulators with guidelines for classifying and evaluating risks associated with the release of genetically engineered microorganisms (GEMs). If the release of GEMs into the environment poses risks similar to those of releasing conventionally altered organisms (CAOs), then a study evaluating the hazards associated with environmentally released CAOs would be most useful. This paper provides a survey of the available data on survival, dispersal, pathogenicity, and other characteristics which affect the fate of microbes after release. Although the present study is not exhaustive, it does provide an indication of the type and amount of data available on CAOs. 350 refs.

  2. Database integration in a multimedia-modeling environment

    SciTech Connect

    Dorow, Kevin E.

    2002-09-02

    Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that have to be transferred is kept to a minimum (only the data that fulfill a specific request are provided as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge-the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.

  3. Technical Work Plan for: Thermodynamic Database for Chemical Modeling

    SciTech Connect

    C.F. Jovecolon

    2006-09-07

    The objective of the work scope covered by this Technical Work Plan (TWP) is to correct and improve the Yucca Mountain Project (YMP) thermodynamic databases, to update their documentation, and to ensure reasonable consistency among them. In addition, the work scope will continue to generate database revisions, which are organized and named so as to be transparent to internal and external users and reviewers. Regarding consistency among databases, it is noted that aqueous speciation and mineral solubility data for a given system may differ according to how solubility was determined, and the method used for subsequent retrieval of thermodynamic parameter values from measured data. Of particular concern are the details of the determination of "infinite dilution" constants, which involve the use of specific methods for activity coefficient corrections. That is, equilibrium constants developed for a given system for one set of conditions may not be consistent with constants developed for other conditions, depending on the species considered in the chemical reactions and the methods used in the reported studies. Hence, there will be some differences (for example in log K values) between the Pitzer and "B-dot" database parameters for the same reactions or species.
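
    For orientation, the "B-dot" label refers to the extended Debye-Hückel activity-coefficient expression commonly used with such databases; a commonly cited form (given here as background, not quoted from the TWP) is

        \log \gamma_i \;=\; -\,\frac{A\, z_i^{2}\, \sqrt{I}}{1 + \mathring{a}_i\, B\, \sqrt{I}} \;+\; \dot{B}\, I ,

    where \gamma_i is the activity coefficient of aqueous species i, z_i its charge, I the ionic strength, \mathring{a}_i an ion-size parameter, and A, B, and \dot{B} are temperature-dependent parameters. Differences in how such corrections are applied (B-dot versus Pitzer) are the source of the log K inconsistencies noted above.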

  4. NGNP Risk Management Database: A Model for Managing Risk

    SciTech Connect

    John Collins

    2009-09-01

    To facilitate the implementation of the Risk Management Plan, the Next Generation Nuclear Plant (NGNP) Project has developed and employed an analytical software tool called the NGNP Risk Management System (RMS). A relational database developed in Microsoft® Access, the RMS provides conventional database utility including data maintenance, archiving, configuration control, and query ability. Additionally, the tool’s design provides a number of unique capabilities specifically designed to facilitate the development and execution of activities outlined in the Risk Management Plan. Specifically, the RMS provides the capability to establish the risk baseline, document and analyze the risk reduction plan, track the current risk reduction status, organize risks by reference configuration system, subsystem, and component (SSC) and Area, and increase the level of NGNP decision making.

  5. Modeling past, current, and future time in medical databases.

    PubMed Central

    Kouramajian, V.; Fowler, J.

    1994-01-01

    Recent research has focused on increasing the power of medical information systems by incorporating time into the database system. A problem with much of this research is that it fails to differentiate between historical time and future time. The concept of bitemporal lifespan presented in this paper overcomes this deficiency. Bitemporal lifespan supports the concepts of valid time and transaction time and allows the integration of past, current, and future information in a unified model. The concept of bitemporal lifespan is presented within the framework of the Extended Entity-Relationship model. This model permits the characterization of temporal properties of entities, relationships, and attributes. Bitemporal constraints are defined that must hold between entities forming "isa" hierarchies and between entities and relationships. Finally, bitemporal extensions are presented for database query languages in order to provide natural high-level operators for bitemporal query expressions. PMID:7949941
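
    A minimal relational sketch (SQLite, with a hypothetical table) of the valid-time/transaction-time distinction the paper formalizes: each fact carries both the period during which it holds in the modeled world and the period during which the database asserted it, so "as of" queries over both dimensions become possible.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE diagnosis (          -- hypothetical bitemporal table
    patient_id   INTEGER,
    code         TEXT,
    valid_from   TEXT,  valid_to   TEXT,   -- when the fact holds in the real world
    tx_from      TEXT,  tx_to      TEXT    -- when the database asserted the fact
)""")
conn.executemany(
    "INSERT INTO diagnosis VALUES (?, ?, ?, ?, ?, ?)",
    [
        # Recorded on 2020-01-10 as starting 2020-01-01, open-ended.
        (1, "E11.9", "2020-01-01", "9999-12-31", "2020-01-10", "2020-03-01"),
        # Correction recorded on 2020-03-01: the condition actually resolved 2020-02-15.
        (1, "E11.9", "2020-01-01", "2020-02-15", "2020-03-01", "9999-12-31"),
    ],
)

# "As the database knew it on 2020-02-01, what was valid on 2020-02-01?"
rows = conn.execute("""
    SELECT * FROM diagnosis
    WHERE valid_from <= :t AND :t < valid_to
      AND tx_from   <= :t AND :t < tx_to
""", {"t": "2020-02-01"}).fetchall()
print(rows)  # only the original assertion is returned
```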

  6. SAPling: a Scan-Add-Print barcoding database system to label and track asexual organisms

    PubMed Central

    Thomas, Michael A.; Schötz, Eva-Maria

    2011-01-01

    SUMMARY We have developed a ‘Scan-Add-Print’ database system, SAPling, to track and monitor asexually reproducing organisms. Using barcodes to uniquely identify each animal, we can record information on the life of the individual in a computerized database containing its entire family tree. SAPling has enabled us to carry out large-scale population dynamics experiments with thousands of planarians and keep track of each individual. The database stores information such as family connections, birth date, division date and generation. We show that SAPling can be easily adapted to other asexually reproducing organisms and has a strong potential for use in large-scale and/or long-term population and senescence studies as well as studies of clonal diversity. The software is platform-independent, designed for reliability and ease of use, and provided open source from our webpage to allow project-specific customization. PMID:21993779

  7. Comparing global soil models to soil carbon profile databases

    NASA Astrophysics Data System (ADS)

    Koven, C. D.; Harden, J. W.; He, Y.; Lawrence, D. M.; Nave, L. E.; O'Donnell, J. A.; Treat, C.; Sulman, B. N.; Kane, E. S.

    2015-12-01

    As global soil models begin to consider the dynamics of carbon below the surface layers, it is crucial to assess the realism of these models. We focus on the vertical profiles of soil C predicted across multiple biomes from the Community Land Model (CLM4.5), using different values for a parameter that controls the rate of decomposition at depth versus at the surface, and compare these to observationally derived diagnostics from the International Soil Carbon Database (ISCN) to assess the realism of model predictions of carbon depth attenuation and the ability of observations to provide a constraint on rates of decomposition at depth.

  8. Artificial intelligence techniques for modeling database user behavior

    NASA Technical Reports Server (NTRS)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system are described. This system models how a user accesses a relational database management system in order to improve its performance by discovering user access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  9. INTERCOMPARISON OF ALTERNATIVE VEGETATION DATABASES FOR REGIONAL AIR QUALITY MODELING

    EPA Science Inventory

    Vegetation cover data are used to characterize several regional air quality modeling processes, including the calculation of heat, moisture, and momentum fluxes with the Mesoscale Meteorological Model (MM5) and the estimate of biogenic volatile organic compound and nitric oxide...

  10. CyanOmics: an integrated database of omics for the model cyanobacterium Synechococcus sp. PCC 7002

    PubMed Central

    Yang, Yaohua; Feng, Jie; Li, Tao; Ge, Feng; Zhao, Jindong

    2015-01-01

    Cyanobacteria are an important group of organisms that carry out oxygenic photosynthesis and play vital roles in both the carbon and nitrogen cycles of the Earth. The annotated genome of Synechococcus sp. PCC 7002, as an ideal model cyanobacterium, is available. A series of transcriptomic and proteomic studies of Synechococcus sp. PCC 7002 cells grown under different conditions have been reported. However, no database of such integrated omics studies has been constructed. Here we present CyanOmics, a database based on the results of Synechococcus sp. PCC 7002 omics studies. CyanOmics comprises one genomic dataset, 29 transcriptomic datasets and one proteomic dataset and should prove useful for systematic and comprehensive analysis of all those data. Powerful browsing and searching tools are integrated to help users directly access information of interest with enhanced visualization of the analytical results. Furthermore, Blast is included for sequence-based similarity searching and Cluster 3.0, as well as the R hclust function is provided for cluster analyses, to increase CyanOmics’s usefulness. To the best of our knowledge, it is the first integrated omics analysis database for cyanobacteria. This database should further understanding of the transcriptional patterns, and proteomic profiling of Synechococcus sp. PCC 7002 and other cyanobacteria. Additionally, the entire database framework is applicable to any sequenced prokaryotic genome and could be applied to other integrated omics analysis projects. Database URL: http://lag.ihb.ac.cn/cyanomics PMID:25632108

  11. Using the Cambridge Structural Database to Teach Molecular Geometry Concepts in Organic Chemistry

    ERIC Educational Resources Information Center

    Wackerly, Jay Wm.; Janowicz, Philip A.; Ritchey, Joshua A.; Caruso, Mary M.; Elliott, Erin L.; Moore, Jeffrey S.

    2009-01-01

    This article reports a set of two homework assignments that can be used in a second-year undergraduate organic chemistry class. These assignments were designed to help reinforce concepts of molecular geometry and to give students the opportunity to use a technological database and data mining to analyze experimentally determined chemical…

  12. Fitting the Balding-Nichols model to forensic databases.

    PubMed

    Rohlfs, Rori V; Aguiar, Vitor R C; Lohmueller, Kirk E; Castro, Amanda M; Ferreira, Alessandro C S; Almeida, Vanessa C O; Louro, Iuri D; Nielsen, Rasmus

    2015-11-01

    Large forensic databases provide an opportunity to compare observed empirical rates of genotype matching with those expected under forensic genetic models. A number of researchers have taken advantage of this opportunity to validate some forensic genetic approaches, particularly to ensure that estimated rates of genotype matching between unrelated individuals are indeed slight overestimates of those observed. However, these studies have also revealed systematic error trends in genotype probability estimates. In this analysis, we investigate these error trends and show how they result from inappropriate implementation of the Balding-Nichols model in the context of database-wide matching. Specifically, we show that in addition to accounting for increased allelic matching between individuals with recent shared ancestry, studies must account for relatively decreased allelic matching between individuals with more ancient shared ancestry. PMID:26186694
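
    For reference, the conditional (match) probabilities usually associated with the Balding-Nichols model, written with coancestry coefficient \theta and population allele frequencies p_i, p_j, take the standard textbook form (given here as background, not reproduced from the paper):

        P(A_iA_i \mid A_iA_i) \;=\; \frac{\bigl[2\theta + (1-\theta)p_i\bigr]\bigl[3\theta + (1-\theta)p_i\bigr]}{(1+\theta)(1+2\theta)} ,

        P(A_iA_j \mid A_iA_j) \;=\; \frac{2\bigl[\theta + (1-\theta)p_i\bigr]\bigl[\theta + (1-\theta)p_j\bigr]}{(1+\theta)(1+2\theta)} .

    The error trends discussed above arise when the same \theta adjustment is applied uniformly in database-wide matching, without accounting for the reduced allele sharing expected between individuals whose shared ancestry is more ancient than the reference population's.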

  13. ASGARD: an open-access database of annotated transcriptomes for emerging model arthropod species.

    PubMed

    Zeng, Victor; Extavour, Cassandra G

    2012-01-01

    The increased throughput and decreased cost of next-generation sequencing (NGS) have shifted the bottleneck in genomic research from sequencing to annotation, analysis and accessibility. This is particularly challenging for research communities working on organisms that lack the basic infrastructure of a sequenced genome, or an efficient way to utilize whatever sequence data may be available. Here we present a new database, the Assembled Searchable Giant Arthropod Read Database (ASGARD). This database is a repository and search engine for transcriptomic data from arthropods that are of high interest to multiple research communities but currently lack sequenced genomes. We demonstrate the functionality and utility of ASGARD using de novo assembled transcriptomes from the milkweed bug Oncopeltus fasciatus, the cricket Gryllus bimaculatus and the amphipod crustacean Parhyale hawaiensis. We have annotated these transcriptomes to provide putative orthology assignment, coding region determination, protein domain identification and Gene Ontology (GO) term annotation for all possible assembly products. ASGARD allows users to search all assemblies by orthology annotation, GO term annotation or Basic Local Alignment Search Tool. User-friendly features of ASGARD include search term auto-completion suggestions based on database content, the ability to download assembly product sequences in FASTA format, direct links to NCBI data for predicted orthologs and graphical representation of the location of protein domains and matches to similar sequences from the NCBI non-redundant database. ASGARD will be a useful repository for transcriptome data from future NGS studies on these and other emerging model arthropods, regardless of sequencing platform, assembly or annotation status. This database thus provides easy, one-stop access to multi-species annotated transcriptome information. We anticipate that this database will be useful for members of multiple research communities, including developmental

  14. Organizing the Extremely Large LSST Database forReal-Time Astronomical Processing

    SciTech Connect

    Becla, Jacek; Lim, Kian-Tat; Monkewitz, Serge; Nieto-Santisteban, Maria; Thakar, Ani; /Johns Hopkins U.

    2007-11-07

    The Large Synoptic Survey Telescope (LSST) will catalog billions of astronomical objects and trillions of sources, all of which will be stored and managed by a database management system. One of the main challenges is real-time alert generation. To generate alerts, up to 100K new difference detections have to be cross-correlated with the huge historical catalogs, and then further processed to prune false alerts. This paper explains the challenges, the implementation of the LSST Association Pipeline and the database organization strategies we are planning to use to meet the real-time requirements, including data partitioning, parallelization, and pre-loading.

  15. Teaching biology with model organisms

    NASA Astrophysics Data System (ADS)

    Keeley, Dolores A.

    The purpose of this study is to identify and use model organisms that represent each of the kingdoms biologists use to classify organisms, while experiencing the process of science through guided inquiry. The model organisms will be the basis for studying the four high school life science core ideas identified by the Next Generation Science Standards (NGSS): LS1-From Molecules to Organisms, LS2-Ecosystems, LS3-Heredity, and LS4-Biological Evolution. The NGSS also identify four categories of science and engineering practices, which include developing and using models and planning and carrying out investigations. The living organisms will be utilized to increase student interest and knowledge within the discipline of biology. Pre-test and post-test analysis using Student's t-tests supported the hypothesis. This study shows increased student learning as a result of using living organisms as models for classification and working in an inquiry-based learning environment.

  16. Mouse Genome Database: from sequence to phenotypes and disease models

    PubMed Central

    Eppig, Janan T.; Richardson, Joel E.; Kadin, James A.; Smith, Cynthia L.; Blake, Judith A.; Bult, Carol J.

    2015-01-01

    The Mouse Genome Database (MGD, www.informatics.jax.org) is the international scientific database for genetic, genomic, and biological data on the laboratory mouse to support the research requirements of the biomedical community. To accomplish this goal, MGD provides broad data coverage, serves as the authoritative standard for mouse nomenclature for genes, mutants, and strains, and curates and integrates many types of data from literature and electronic sources. Among the key data sets MGD supports are: the complete catalog of mouse genes and genome features, comparative homology data for mouse and vertebrate genes, the authoritative set of Gene Ontology (GO) annotations for mouse gene functions, a comprehensive catalog of mouse mutations and their phenotypes, and a curated compendium of mouse models of human diseases. Here we describe the data acquisition process, specifics about MGD’s key data areas, methods to access and query MGD data, and outreach and user help facilities. PMID:26150326

  17. Mouse Genome Database: From sequence to phenotypes and disease models.

    PubMed

    Eppig, Janan T; Richardson, Joel E; Kadin, James A; Smith, Cynthia L; Blake, Judith A; Bult, Carol J

    2015-08-01

    The Mouse Genome Database (MGD, www.informatics.jax.org) is the international scientific database for genetic, genomic, and biological data on the laboratory mouse to support the research requirements of the biomedical community. To accomplish this goal, MGD provides broad data coverage, serves as the authoritative standard for mouse nomenclature for genes, mutants, and strains, and curates and integrates many types of data from literature and electronic sources. Among the key data sets MGD supports are: the complete catalog of mouse genes and genome features, comparative homology data for mouse and vertebrate genes, the authoritative set of Gene Ontology (GO) annotations for mouse gene functions, a comprehensive catalog of mouse mutations and their phenotypes, and a curated compendium of mouse models of human diseases. Here, we describe the data acquisition process, specifics about MGD's key data areas, methods to access and query MGD data, and outreach and user help facilities. PMID:26150326

  18. NGNP Risk Management Database: A Model for Managing Risk

    SciTech Connect

    John Collins; John M. Beck

    2011-11-01

    The Next Generation Nuclear Plant (NGNP) Risk Management System (RMS) is a database used to maintain the project risk register. The RMS also maps risk reduction activities to specific identified risks. Further functionality of the RMS includes mapping reactor suppliers' Design Data Needs (DDNs) to risk reduction tasks and mapping Phenomena Identification Ranking Tables (PIRTs) to associated risks. This document outlines the basic instructions on how to use the RMS. This document constitutes Revision 1 of the NGNP Risk Management Database: A Model for Managing Risk. It incorporates the latest enhancements to the RMS. The enhancements include six new custom views of risk data: Impact/Consequence, Tasks by Project Phase, Tasks by Status, Tasks by Project Phase/Status, Tasks by Impact/WBS, and Tasks by Phase/Impact/WBS.

  19. Organization's Orderly Interest Exploration: Inception, Development and Insights of AIAA's Topics Database

    NASA Technical Reports Server (NTRS)

    Marshall, Joseph R.; Morris, Allan T.

    2007-01-01

    Since 2003, AIAA's Computer Systems and Software Systems Technical Committees (TCs) have developed a database that helps technical committee management map technical topics to their members. This Topics/Interest (T/I) database grew out of a collection of charts and spreadsheets maintained by the TCs. Since its inception, the tool has evolved into a multi-dimensional database whose dimensions include the importance, interest and expertise of TC members and whether or not a member and/or a TC is actively involved with the topic. In 2005, the database was expanded to include the TCs in AIAA's Information Systems Group and then expanded further to include all AIAA TCs. It was field tested at an AIAA Technical Activities Committee (TAC) Workshop in early 2006 through live access by over 80 users. Through the use of the topics database, TC and program committee (PC) members can accomplish relevant tasks such as: identifying topic experts (for Aerospace America articles or external contacts), determining the interest of its members, identifying overlapping topics between diverse TCs and PCs, guiding new member drives and revealing emerging topics. This paper will describe the origins, inception, initial development, field test and current version of the tool as well as elucidate the benefits and insights gained by using the database to aid the management of various TC functions. Suggestions will be provided to guide future development of the database for the purpose of providing dynamic and system-level benefits to AIAA that currently do not exist in any technical organization.

  20. Lagrangian modelling tool for IAGOS database added-value products

    NASA Astrophysics Data System (ADS)

    Fontaine, Alain; Auby, Antoine; Petetin, Hervé; Sauvage, Bastien; Thouret, Valérie; Boulanger, Damien

    2015-04-01

    Since 1994, the IAGOS (In-Service Aircraft for a Global Observing System, http://www.iagos.fr) project has produced in-situ measurements of chemical species such as ozone, carbon monoxide, and nitrogen oxides through more than 40000 commercial aircraft flights. To help analyse these observations, a tool that links the observed pollutants to their sources was developed based on the Stohl et al. (2003) methodology. Built on the Lagrangian particle dispersion model FLEXPART coupled with ECMWF meteorological fields, this tool simulates the contributions of anthropogenic and biomass burning emissions from the ECCAD database to the measured carbon monoxide mixing ratio along each IAGOS flight. Thanks to automated processes, 20-day backward simulations are run from each observation, separating the individual contributions from the different source regions. The main goal is to supply added-value products to the IAGOS database, showing the pollutants' geographical origin and emission type, and to link trends in atmospheric composition to changes in transport pathways and to the evolution of emissions. This tool may also be used for statistical validation and intercomparison of emission inventories, which can be compared to the in-situ observations from the IAGOS database.

  1. Feasibility and utility of applications of the common data model to multiple, disparate observational health databases

    PubMed Central

    Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B

    2015-01-01

    Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary were executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of the inclusion criteria used in the protocol and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases.
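
    A hedged sketch of the kind of reuse the CDM enables: once every source is expressed in standard OMOP tables (person, condition_occurrence, and so on), the same cohort definition runs unchanged against each database. The example builds toy tables in SQLite purely for illustration; the concept ID is a placeholder, not drawn from the study.

```python
import sqlite3

# Minimal OMOP-style tables for illustration only; real CDM instances live in
# full RDBMSs, and 201826 below is a placeholder condition concept ID.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (person_id INTEGER PRIMARY KEY, year_of_birth INTEGER);
CREATE TABLE condition_occurrence (
    person_id INTEGER, condition_concept_id INTEGER, condition_start_date TEXT);
INSERT INTO person VALUES (1, 1970), (2, 1985);
INSERT INTO condition_occurrence VALUES
    (1, 201826, '2012-03-04'),
    (1, 201826, '2013-01-15'),
    (2, 444074, '2014-07-09');
""")

# The same cohort definition can be executed unchanged on any CDM instance.
cohort = conn.execute("""
    SELECT co.person_id, MIN(co.condition_start_date) AS index_date
    FROM condition_occurrence co
    JOIN person p ON p.person_id = co.person_id
    WHERE co.condition_concept_id = ?
    GROUP BY co.person_id
""", (201826,)).fetchall()
print(cohort)  # [(1, '2012-03-04')]
```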

  2. Using chemical organization theory for model checking

    PubMed Central

    Kaleta, Christoph; Richter, Stephan; Dittrich, Peter

    2009-01-01

    Motivation: The increasing number and complexity of biomodels makes automatic procedures for checking the models' properties and quality necessary. Approaches like elementary mode analysis, flux balance analysis, deficiency analysis and chemical organization theory (OT) require only the stoichiometric structure of the reaction network for derivation of valuable information. In formalisms like Systems Biology Markup Language (SBML), however, information about the stoichiometric coefficients required for an analysis of chemical organizations can be hidden in kinetic laws. Results: First, we introduce an algorithm that uncovers stoichiometric information that might be hidden in the kinetic laws of a reaction network. This allows us to apply OT to SBML models using modifiers. Second, using the new algorithm, we performed a large-scale analysis of the 185 models contained in the manually curated BioModels Database. We found that for 41 models (22%) the set of organizations changes when modifiers are considered correctly. We discuss one of these models in detail (BIOMD149, a combined model of the ERK- and Wnt-signaling pathways), whose set of organizations drastically changes when modifiers are considered. Third, we found inconsistencies in 5 models (3%) and identified their characteristics. Compared with flux-based methods, OT is able to identify those species and reactions more accurately [in 26 cases (14%)] that can be present in a long-term simulation of the model. We conclude that our approach is a valuable tool that helps to improve the consistency of biomodels and their repositories. Availability: All data and a JAVA applet to check SBML-models is available from http://www.minet.uni-jena.de/csb/prj/ot/tools Contact: dittrich@minet.uni-jena.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19468053
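
    A small sketch of the first step the paper describes: reading an SBML file with libSBML and listing, per reaction, the declared reactants/products and the modifier species whose stoichiometric role may be hidden inside the kinetic law. The file name is a placeholder; the actual uncovering algorithm is the paper's contribution and is not reproduced here.

```python
import libsbml

# 'model.xml' is a placeholder for any SBML file (e.g., a BioModels entry).
doc = libsbml.readSBML("model.xml")
model = doc.getModel()
if model is None:
    raise RuntimeError("SBML file could not be parsed: %d issue(s)" % doc.getNumErrors())

for i in range(model.getNumReactions()):
    reaction = model.getReaction(i)
    reactants = [reaction.getReactant(k).getSpecies() for k in range(reaction.getNumReactants())]
    products = [reaction.getProduct(k).getSpecies() for k in range(reaction.getNumProducts())]
    # Modifiers appear in the kinetic law but carry no explicit stoichiometry;
    # recovering their effective stoichiometry is the preprocessing step described above.
    modifiers = [reaction.getModifier(k).getSpecies() for k in range(reaction.getNumModifiers())]
    print(reaction.getId(), reactants, "->", products, "| modifiers:", modifiers)
```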

  3. An atmospheric tritium release database for model comparisons

    SciTech Connect

    Murphy, C.E. Jr.; Wortham, G.R.

    1991-12-19

    A database of vegetation, soil, and air tritium concentrations at gridded coordinate locations following nine accidental atmospheric releases is described. While none of the releases caused a significant dose to the public, the data collected is valuable for comparison with the results of tritium transport models used for risk assessment. The largest, potential, individual off-site dose from any of the releases was calculated to be 1.6 mrem. The population dose from this same release was 46 person-rem which represents 0.04% of the natural background radiation dose to the population in the path of the release.

  4. An atmospheric tritium release database for model comparisons. Revision 1

    SciTech Connect

    Murphy, C.E. Jr.; Wortham, G.R.

    1995-01-01

    A database of vegetation, soil, and air tritium concentrations at gridded coordinate locations following nine accidental atmospheric releases is described. While none of the releases caused a significant dose to the public, the data collected are valuable for comparison with the results of tritium transport models used for risk assessment. The largest, potential, individual off-site dose from any of the releases was calculated to be 1.6 mrem. The population dose from this same release was 46 person-rem which represents 0.04% of the natural background radiation dose to the population in the path of the release.

  5. Developing High-resolution Soil Database for Regional Crop Modeling in East Africa

    NASA Astrophysics Data System (ADS)

    Han, E.; Ines, A. V. M.

    2014-12-01

    The most readily available soil data for regional crop modeling in Africa is the World Inventory of Soil Emission potentials (WISE) dataset, which has 1125 soil profiles for the world but does not extensively cover Ethiopia, Kenya, Uganda, and Tanzania in East Africa. Another available dataset is HC27 (Harvest Choice by IFPRI), provided in a gridded format (10 km) but composed of generic soil profiles based on only three criteria (texture, rooting depth, and organic carbon content). In this paper, we present the development and application of a high-resolution (1 km), gridded soil database for regional crop modeling in East Africa. Basic soil information is extracted from the Africa Soil Information Service (AfSIS), which provides essential soil properties (bulk density, soil organic carbon, soil pH and percentages of sand, silt and clay) for 6 different standardized soil layers (5, 15, 30, 60, 100 and 200 cm) at 1 km resolution. Soil hydraulic properties (e.g., field capacity and wilting point) are derived from the AfSIS soil dataset using well-proven pedo-transfer functions and are customized for DSSAT-CSM soil data requirements. The crop model is used to evaluate crop yield forecasts using the new high-resolution soil database and compared with WISE and HC27. We will also present the results of DSSAT loosely coupled with a hydrologic model (VIC) to assimilate root-zone soil moisture. Creating a grid-based soil database, which provides a consistent soil input for two different models (DSSAT and VIC), is a critical part of this work. The created soil database is expected to contribute to future applications of DSSAT crop simulation in East Africa, where food security is highly vulnerable.

  6. Database and Interim Glass Property Models for Hanford HLW Glasses

    SciTech Connect

    Hrma, Pavel R.; Piepel, Gregory F.; Vienna, John D.; Cooley, Scott K.; Kim, Dong-Sang; Russell, Renee L.

    2001-07-24

    The purpose of this report is to provide a methodology for an increase in the efficiency and a decrease in the cost of vitrifying high-level waste (HLW) by optimizing HLW glass formulation. This methodology consists of collecting and generating a database of glass properties that determine HLW glass processability and acceptability and relating these properties to glass composition. The report explains how the property-composition models are developed, fitted to data, used for glass formulation optimization, and continuously updated in response to changes in HLW composition estimates and changes in glass processing technology. Further, the report reviews the glass property-composition literature data and presents their preliminary critical evaluation and screening. Finally, the report provides interim property-composition models for melt viscosity, for liquidus temperature (with spinel and zircon primary crystalline phases), and for the product consistency test normalized releases of B, Na, and Li. Models were fitted to a subset of the screened database deemed most relevant for the current HLW composition region.
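
    The report's fitted models are not reproduced here; as general background, first-order mixture models of the form below are a common starting point for such property-composition fits, with x_i the mass (or mole) fractions of the glass components and a_i, b_i fitted coefficients:

        \ln \eta(T) \;=\; \sum_i b_i(T)\, x_i , \qquad T_L \;=\; \sum_i a_i\, x_i .

    Whether the interim models cited above use exactly this form, or include higher-order composition terms, is specified in the report itself rather than in this summary.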

  7. Modeling, Measurements, and Fundamental Database Development for Nonequilibrium Hypersonic Aerothermodynamics

    NASA Technical Reports Server (NTRS)

    Bose, Deepak

    2012-01-01

    The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation with relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.

  8. A future of the model organism model

    PubMed Central

    Rine, Jasper

    2014-01-01

    Changes in technology are fundamentally reframing our concept of what constitutes a model organism. Nevertheless, research advances in the more traditional model organisms have enabled fresh and exciting opportunities for young scientists to establish new careers and offer the hope of comprehensive understanding of fundamental processes in life. New advances in translational research can be expected to heighten the importance of basic research in model organisms and expand opportunities. However, researchers must take special care and implement new resources to enable the newest members of the community to engage fully with the remarkable legacy of information in these fields. PMID:24577733

  9. MOSAIC: An organic geochemical and sedimentological database for marine surface sediments

    NASA Astrophysics Data System (ADS)

    Tavagna, Maria Luisa; Usman, Muhammed; De Avelar, Silvania; Eglinton, Timothy

    2015-04-01

    Modern ocean sediments serve as the interface between the biosphere and the geosphere, play a key role in biogeochemical cycles and provide a window on how contemporary processes are written into the sedimentary record. Research over past decades has resulted in a wealth of information on the content and composition of organic matter in marine sediments, with ever-more sophisticated techniques continuing to yield information of greater detail and at an accelerating pace. However, there has been no attempt to synthesize this wealth of information. We are establishing a new database that incorporates information relevant to local, regional and global-scale assessment of the content, source and fate of organic materials accumulating in contemporary marine sediments. In the MOSAIC (Modern Ocean Sediment Archive and Inventory of Carbon) database, particular emphasis is placed on molecular and isotopic information, coupled with contextual information (e.g., sedimentological properties) relevant to elucidating factors that influence the efficiency and nature of organic matter burial. The main features of MOSAIC include: (i) Emphasis on continental margin sediments as major loci of carbon burial, and as the interface between terrestrial and oceanic realms; (ii) Bulk to molecular-level organic geochemical properties and parameters, including concentration and isotopic compositions; (iii) Inclusion of extensive contextual data regarding the depositional setting, in particular with respect to sedimentological and redox characteristics. The ultimate goal is to create an open-access instrument, available on the web, to be utilized for research and education by the international community, who can both contribute to and interrogate the database. The submission will be accomplished by means of a pre-configured table available on the MOSAIC webpage. The information on the filled tables will be checked and eventually imported, via the Structured Query Language (SQL), into

  10. Solid waste projection model: Database version 1. 0 technical reference manual

    SciTech Connect

    Carr, F.; Bowman, A.

    1990-11-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.0 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. 14 figs., 6 tabs.

  11. Data-based Non-Markovian Model Inference

    NASA Astrophysics Data System (ADS)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces the quadratic-energy-preserving constraint of fluid-flow problems and successfully prevents blow-up. This work is based on a close
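
    For readers unfamiliar with the Mori-Zwanzig closure referenced above, the generalized Langevin equation for the resolved (observed) variables x has the schematic textbook form (given here as background, not taken from the talk):

        \frac{\mathrm{d}x}{\mathrm{d}t} \;=\; F\bigl(x(t)\bigr) \;+\; \int_0^{t} K(t-s)\, x(s)\,\mathrm{d}s \;+\; \xi(t) ,

    with a Markovian term F, a memory kernel K acting on the past of the resolved variables, and a noise term \xi. The layers of an MSM, as described above, provide a finite-dimensional Markovian approximation to the memory integral and the noise.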

  12. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database.

    SciTech Connect

    Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division; Purdue Univ.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  13. LANL High-Level Model (HLM) database development letter report

    SciTech Connect

    1995-10-01

    Traditional methods of evaluating munitions have been able to successfully compare like munitions' capabilities. On the modern battlefield, however, many different types of munitions compete for the same set of targets. Assessing the overall stockpile capability and proper mix of these weapons is not a simple task, as their use depends upon the specific geographic region of the world, the threat capabilities, the tactics and operational strategy used by both the US and Threat commanders, and of course the type and quantity of munitions available to the CINC. To sort out these types of issues, a hierarchical set of dynamic, two-sided combat simulations is generally used. The DoD has numerous suitable models for this purpose, but rarely are the models focused on munitions expenditures. Rather, they are designed to perform overall platform assessments and force mix evaluations. However, in some cases, the models could be easily adapted to provide this information, since it is resident in the model's database. Unfortunately, these simulations' complexity (their greatest strength) precludes quick-turnaround assessments of the type and scope required by senior decision-makers.

  14. BioModels Database: a repository of mathematical models of biological processes.

    PubMed

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to ensure they correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML format, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months. PMID:23715986
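
    Since models are distributed in SBML, a downloaded entry can be inspected programmatically. A minimal sketch with the python-libsbml bindings is shown below; the filename is a placeholder for any model file retrieved from the database.

```python
# Sketch of inspecting an SBML model with the python-libsbml bindings;
# "BIOMD0000000012.xml" is a placeholder for any file downloaded from the database.
import libsbml

doc = libsbml.readSBML("BIOMD0000000012.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()                      # report parse or consistency problems

model = doc.getModel()
print("model id:", model.getId())
print("species:", model.getNumSpecies())
print("reactions:", model.getNumReactions())

# List species with their initial concentration (or amount, if that is what is set).
for i in range(model.getNumSpecies()):
    s = model.getSpecies(i)
    value = s.getInitialConcentration() if s.isSetInitialConcentration() else s.getInitialAmount()
    print(s.getId(), value)
```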

  15. An atmospheric tritium release database for model comparisons

    SciTech Connect

    Murphy, C.E. Jr.; Wortham, G.R.

    1997-10-13

    A database of vegetation, soil, and air tritium concentrations at gridded coordinate locations following nine accidental atmospheric releases is described. The concentration data is supported by climatological data taken during and immediately after the releases. In six cases, the release data is supplemented with meteorological data taken at seven towers scattered throughout the immediate area of the releases and data from a single television tower instrumented at eight heights. While none of the releases caused a significant dose to the public, the data collected is valuable for comparison with the results of tritium transport models used for risk assessment. The largest potential off-site dose from any of the releases was calculated to be 1.6 mrem. The population dose from this same release was 46 person-rem, which represents 0.04 percent of the natural background dose to the population in the path of the release.

  16. BioModels Database: An enhanced, curated and annotated resource for published quantitative kinetic models

    PubMed Central

    2010-01-01

    Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation

  17. A Conceptual Model of the Information Requirements of Nursing Organizations

    PubMed Central

    Miller, Emmy

    1989-01-01

    Three related issues play a role in the identification of the information requirements of nursing organizations. These issues are the current state of computer systems in health care organizations, the lack of a well-defined data set for nursing, and the absence of models representing data and information relevant to clinical and administrative nursing practice. This paper will examine current methods of data collection, processing, and storage in clinical and administrative nursing practice for the purpose of identifying the information requirements of nursing organizations. To satisfy these information requirements, database technology can be used; however, a model for database design is needed that reflects the conceptual framework of nursing and the professional concerns of nurses. A conceptual model of the types of data necessary to produce the desired information will be presented and the relationships among data will be delineated.

  18. Virtual Organizations: Trends and Models

    NASA Astrophysics Data System (ADS)

    Nami, Mohammad Reza; Malekpour, Abbaas

    The use of ICT in business has changed views about traditional business. With a VO, organizations without physical, geographical, or structural constraints can collaborate in order to fulfill customer requests in a networked environment. This idea improves resource utilization, shortens the development process, reduces costs, and saves time. A virtual organization (VO) is always a form of partnership, and managing partners and handling partnerships are crucial. Virtual organizations are defined as temporary collections of enterprises that cooperate and share resources, knowledge, and competencies to better respond to business opportunities. This paper presents an overview of virtual organizations and of the main issues in collaboration, such as security and management. It also presents a number of different modeling approaches according to their purpose and applications.

  19. Global and Regional Ecosystem Modeling: Databases of Model Drivers and Validation Measurements

    SciTech Connect

    Olson, R.J.

    2002-03-19

    … grid cells for which inventory, modeling, or remote-sensing tools were used to scale up the point measurements. Documentation of the content and organization of the EMDI databases is provided.

  20. Data model and relational database design for the New England Water-Use Data System (NEWUDS)

    USGS Publications Warehouse

    Tessler, Steven

    2001-01-01

    The New England Water-Use Data System (NEWUDS) is a database for the storage and retrieval of water-use data. NEWUDS can handle data covering many facets of water use, including (1) tracking various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the description, classification and location of places and organizations involved in water-use activities; (3) details about measured or estimated volumes of water associated with water-use activities; and (4) information about data sources and water resources associated with water use. In NEWUDS, each water transaction occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NEWUDS model are site, conveyance, transaction/rate, location, and owner. Other important entities include water resources (used for withdrawals and returns), data sources, and aliases. Multiple water-exchange estimates can be stored for individual transactions based on different methods or data sources. Storage of user-defined details is accommodated for several of the main entities. Numerous tables containing classification terms facilitate detailed descriptions of data items and can be used for routine or custom data summarization. NEWUDS handles single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database structure. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.

  1. Discovery of Possible Gene Relationships through the Application of Self-Organizing Maps to DNA Microarray Databases

    PubMed Central

    Chavez-Alvarez, Rocio; Chavoya, Arturo; Mendez-Vazquez, Andres

    2014-01-01

    DNA microarrays and cell cycle synchronization experiments have made possible the study of the mechanisms of cell cycle regulation of Saccharomyces cerevisiae by simultaneously monitoring the expression levels of thousands of genes at specific time points. On the other hand, pattern recognition techniques can contribute to the analysis of such massive measurements, providing a model of gene expression level evolution through the cell cycle process. In this paper, we propose the use of one such technique, an unsupervised artificial neural network called a Self-Organizing Map (SOM), which has been successfully applied to processes involving very noisy signals, classifying and organizing them, and assisting in the discovery of behavior patterns without requiring prior knowledge about the process under analysis. As a test bed for the use of SOMs in finding possible relationships among genes and their possible contribution in some biological processes, we selected 282 S. cerevisiae genes that have been shown through biological experiments to have an activity during the cell cycle. The expression level of these genes was analyzed in five of the most cited time series DNA microarray databases used in the study of the cell cycle of this organism. With the use of SOM, it was possible to find clusters of genes with similar behavior in the five databases along two cell cycles. This result suggested that some of these genes might be biologically related or might have a regulatory relationship, as was corroborated by comparing some of the clusters obtained with SOMs against a previously reported regulatory network that was generated using biological knowledge, such as protein-protein interactions, gene expression levels, metabolism dynamics, promoter binding, and modification, regulation and transport of proteins. The methodology described in this paper could be applied to the study of gene relationships of other biological processes in different organisms. PMID:24699245
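
    A minimal from-scratch sketch of the SOM training loop described above (best-matching unit plus a shrinking Gaussian neighborhood), written for an assumed genes-by-timepoints expression matrix; the grid size and learning schedule are illustrative, not those of the study.

```python
# Minimal self-organizing map sketch for clustering gene expression profiles.
# Data shapes and parameters are assumptions made for illustration only.
import numpy as np

def train_som(data, grid=(6, 6), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """data: (n_genes, n_timepoints) expression matrix, one profile per row."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.standard_normal((gx, gy, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing='ij'), axis=-1)
    for t in range(n_iter):
        frac = t / n_iter
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(len(data))]
        # Best-matching unit: the node whose weight vector is closest to x.
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighborhood pulls nearby nodes toward x.
        h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def assign_clusters(data, weights):
    """Map each profile to the grid coordinates of its best-matching unit."""
    d = np.linalg.norm(weights[None] - data[:, None, None, :], axis=-1)
    return np.array([np.unravel_index(np.argmin(di), di.shape) for di in d])
```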

  2. Filling a missing link between biogeochemical, climate and ecosystem studies: a global database of atmospheric water-soluble organic nitrogen

    NASA Astrophysics Data System (ADS)

    Cornell, Sarah

    2015-04-01

    It is time to collate a global community database of atmospheric water-soluble organic nitrogen deposition. Organic nitrogen (ON) has long been known to be globally ubiquitous in atmospheric aerosol and precipitation, with implications for air and water quality, climate, biogeochemical cycles, ecosystems and human health. The number of studies of atmospheric ON deposition has increased steadily in recent years, but to date there is no accessible global dataset, for either bulk ON or its major components. Improved qualitative and quantitative understanding of the organic nitrogen component is needed to complement the well-established knowledge base pertaining to other components of atmospheric deposition (cf. Vet et al 2014). Without this basic information, we are increasingly constrained in addressing the current dynamics and potential interactions of atmospheric chemistry, climate and ecosystem change. To see the full picture we need global data synthesis, more targeted data gathering, and models that let us explore questions about the natural and anthropogenic dynamics of atmospheric ON. Collectively, our research community already has a substantial amount of atmospheric ON data. Published reports extend back over a century and now have near-global coverage. However, datasets available from the literature are very piecemeal and too often lack crucially important information that would enable aggregation or re-use. I am initiating an open collaborative process to construct a community database, so we can begin to systematically synthesize these datasets (generally from individual studies at a local and temporally limited scale) to increase their scientific usability and statistical power for studies of global change and anthropogenic perturbation. In drawing together our disparate knowledge, we must address various challenges and concerns, not least about the comparability of analysis and sampling methodologies, and the known complexity of composition of ON. We

  3. Modeling of heavy organic deposition

    SciTech Connect

    Chung, F.T.H.

    1992-01-01

    Organic deposition is often a major problem in petroleum production and processing. This problem is manifested by current activities in gas flooding and heavy oil production. The need for understanding the nature of asphaltenes and asphaltics and developing solutions to the deposition problem is well recognized. Predictive techniques are crucial to solution development. In the past 5 years, some progress in modeling organic deposition has been made. A state-of-the-art review of methods for modeling organic deposition is presented in this report. Two new models were developed in this work; one based on a thermodynamic equilibrium principle and the other on the colloidal stability theory. These two models are more general and realistic than others previously reported. Because experimental results on the characteristics of asphaltene are inconclusive, it is still not well known whether the asphaltenes in crude oil exist as a true solution or as a colloidal suspension. Further laboratory work which is designed to study the solubility properties of asphaltenes and to provide additional information for model development is proposed. Some experimental tests have been conducted to study the mechanisms of CO2-induced asphaltene precipitation. Coreflooding experiments show that asphaltene precipitation occurs after gas breakthrough. The mechanism of CO2-induced asphaltene precipitation is believed to occur by hydrocarbon extraction which causes a change in oil composition. Oil swelling due to CO2 solubilization does not induce asphaltene precipitation.

  4. GIS-based hydrogeological databases and groundwater modelling

    NASA Astrophysics Data System (ADS)

    Gogu, Radu Constantin; Carabin, Guy; Hallet, Vincent; Peters, Valerie; Dassargues, Alain

    2001-12-01

    Reliability and validity of groundwater analysis strongly depend on the availability of large volumes of high-quality data. Putting all data into a coherent and logical structure supported by a computing environment helps ensure validity and availability and provides a powerful tool for hydrogeological studies. A hydrogeological geographic information system (GIS) database that offers facilities for groundwater-vulnerability analysis and hydrogeological modelling has been designed in Belgium for the Walloon region. Data from five river basins, chosen for their contrasting hydrogeological characteristics, have been included in the database, and a set of applications that have been developed now allow further advances. Interest is growing in the potential for integrating GIS technology and groundwater simulation models. A "loose-coupling" tool was created between the spatial-database scheme and the groundwater numerical model interface GMS (Groundwater Modelling System). Following time and spatial queries, the hydrogeological data stored in the database can be easily used within different groundwater numerical models.

  5. A database for estimating organ dose for coronary angiography and brain perfusion CT scans for arbitrary spectra and angular tube current modulation

    SciTech Connect

    Rupcich, Franco; Badal, Andreu; Kyprianou, Iacovos; Schmidt, Taly Gilat

    2012-09-15

    Purpose: The purpose of this study was to develop a database for estimating organ dose in a voxelized patient model for coronary angiography and brain perfusion CT acquisitions with any spectra and angular tube current modulation setting. The database enables organ dose estimation for existing and novel acquisition techniques without requiring Monte Carlo simulations. Methods: The study simulated transport of monoenergetic photons between 5 and 150 keV for 1000 projections over 360° through anthropomorphic voxelized female chest and head (0° and 30° tilt) phantoms and standard head and body CTDI dosimetry cylinders. The simulations resulted in tables of normalized dose deposition for several radiosensitive organs quantifying the organ dose per emitted photon for each incident photon energy and projection angle for coronary angiography and brain perfusion acquisitions. The values in a table can be multiplied by an incident spectrum and number of photons at each projection angle and then summed across all energies and angles to estimate total organ dose. Scanner-specific organ dose may be approximated by normalizing the database-estimated organ dose by the database-estimated CTDIvol and multiplying by a physical CTDIvol measurement. Two examples are provided demonstrating how to use the tables to estimate relative organ dose. In the first, the change in breast and lung dose during coronary angiography CT scans is calculated for reduced kVp, angular tube current modulation, and partial angle scanning protocols relative to a reference protocol. In the second example, the change in dose to the eye lens is calculated for a brain perfusion CT acquisition in which the gantry is tilted 30° relative to a nontilted scan. Results: Our database provides tables of normalized dose deposition for several radiosensitive organs irradiated during coronary angiography and brain perfusion CT scans. Validation results indicate
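
    The weighting scheme described above reduces to an element-wise multiplication and a double sum. A small sketch, with assumed array names and shapes, is given below.

```python
# Sketch of combining a precomputed monoenergetic, per-projection dose table
# with a spectrum and a photon-count (tube-current modulation) profile,
# following the weighting described above. Array names/shapes are assumptions.
import numpy as np

def organ_dose(dose_table, spectrum, photons_per_view):
    """
    dose_table:       (n_energies, n_views) normalized dose per emitted photon
    spectrum:         (n_energies,) relative photon fluence of the chosen spectrum
    photons_per_view: (n_views,) emitted photons at each projection angle
                      (angular tube-current modulation enters here)
    """
    weighted = dose_table * spectrum[:, None] * photons_per_view[None, :]
    return weighted.sum()   # sum over all energies and projection angles

# Scanner-specific scaling, as suggested above: normalize by the
# database-estimated CTDIvol and multiply by a measured CTDIvol.
def scale_to_scanner(dose_estimate, ctdivol_database, ctdivol_measured):
    return dose_estimate / ctdivol_database * ctdivol_measured
```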

  6. Technique to model and design physical database systems

    SciTech Connect

    Wise, T.E.

    1983-12-01

    Database management systems (DBMSs) allow users to define and manipulate records at a logical level of abstraction. A logical record is not stored as users see it but is mapped into a collection of physical records. Physical records are stored in file structures managed by a DBMS. Likewise, DBMS commands which appear to be directed toward one or more logical records actually correspond to a series of operations on the file structures. The structures and operations of a DBMS (i.e., its physical architecture) are not visible to users at the logical level. Traditionally, logical records and DBMS commands are mapped to physical records and operations in one step. In this report, logical records are mapped to physical records in a series of steps over several levels of abstraction. Each level of abstraction is composed of one or more intermediate record types. A hierarchy of record types results that covers the gap between logical and physical records. The first step of our technique identifies the record types and levels of abstraction that describe a DBMS. The second step maps DBMS commands to physical operations in terms of these records and levels of abstraction. The third step encapsulates each record type and its operations into a programming construct called a module. The applications of our technique include modeling existing DBMSs and designing the physical architectures of new DBMSs. To illustrate one application, we describe in detail the architecture of the commercial DBMS INQUIRE.

  7. MOAtox: A comprehensive mode of action and acute aquatic toxicity database for predictive model development.

    PubMed

    Barron, M G; Lilavois, C R; Martin, T M

    2015-04-01

    The mode of toxic action (MOA) has been recognized as a key determinant of chemical toxicity and as an alternative to chemical class-based predictive toxicity modeling. However, the development of quantitative structure activity relationship (QSAR) and other models has been limited by the availability of comprehensive high quality MOA and toxicity databases. The current study developed a dataset of MOA assignments for 1213 chemicals that included a diversity of metals, pesticides, and other organic compounds that encompassed six broad and 31 specific MOAs. MOA assignments were made using a combination of high confidence approaches that included international consensus classifications, QSAR predictions, and weight of evidence professional judgment based on an assessment of structure and literature information. A toxicity database of 674 acute values linked to chemical MOA was developed for fish and invertebrates. Additionally, species-specific measured or high confidence estimated acute values were developed for the four aquatic species with the most reported toxicity values: rainbow trout (Oncorhynchus mykiss), fathead minnow (Pimephales promelas), bluegill (Lepomis macrochirus), and the cladoceran (Daphnia magna). Measured acute toxicity values met strict standardization and quality assurance requirements. Toxicity values for chemicals with missing species-specific data were estimated using established interspecies correlation models and procedures (Web-ICE; http://epa.gov/ceampubl/fchain/webice/), with the highest confidence values selected. The resulting dataset of MOA assignments and paired toxicity values are provided in spreadsheet format as a comprehensive standardized dataset available for predictive aquatic toxicology model development. PMID:25700118
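
    The interspecies-correlation step can be pictured as a log-log linear regression between a surrogate and a predicted species. The sketch below uses invented toxicity values purely for illustration; the actual Web-ICE models are fitted to curated paired data.

```python
# Illustrative sketch of the interspecies-correlation idea used to fill missing
# acute values: a log-log linear regression between a surrogate species and a
# predicted species. All toxicity values below are made up.
import numpy as np

# Paired acute LC50 values (mg/L) for chemicals tested in both species.
surrogate = np.array([0.5, 1.2, 3.4, 10.0, 25.0])    # e.g. a fathead minnow surrogate
predicted = np.array([0.3, 0.9, 2.1, 7.5, 30.0])     # e.g. a rainbow trout target

slope, intercept = np.polyfit(np.log10(surrogate), np.log10(predicted), 1)

def estimate_missing(surrogate_lc50):
    """Estimate the predicted species' LC50 from a surrogate LC50."""
    return 10 ** (intercept + slope * np.log10(surrogate_lc50))

print(estimate_missing(5.0))
```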

  8. Java Web Simulation (JWS); a web based database of kinetic models.

    PubMed

    Snoep, J L; Olivier, B G

    2002-01-01

    Software to make a database of kinetic models accessible via the internet has been developed and a core database has been set up at http://jjj.biochem.sun.ac.za/. This repository of models, available to everyone with internet access, opens a whole new way in which we can make our models public. Via the database, a user can change enzyme parameters and run time simulations or steady state analyses. The interface is user friendly and no additional software is necessary. The database currently contains 10 models, but since the generation of the program code to include new models has largely been automated the addition of new models is straightforward and people are invited to submit their models to be included in the database. PMID:12241068
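
    The kind of time simulation that JWS Online runs for a stored kinetic model can be reproduced offline for a toy pathway. The sketch below integrates a two-step Michaelis-Menten chain with SciPy; the rate-law parameters are arbitrary and do not come from any database model.

```python
# Toy time simulation of a small kinetic model: a two-step pathway with
# Michaelis-Menten kinetics. Parameter values are arbitrary illustrations.
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y, vmax1=1.0, km1=0.5, vmax2=0.8, km2=0.3):
    s, p1, p2 = y
    v1 = vmax1 * s / (km1 + s)        # S -> P1
    v2 = vmax2 * p1 / (km2 + p1)      # P1 -> P2
    return [-v1, v1 - v2, v2]

sol = solve_ivp(rates, (0.0, 50.0), y0=[5.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 50.0, 200))
print(sol.y[:, -1])                   # concentrations at the final time point
```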

  9. Avibase – a database system for managing and organizing taxonomic concepts

    PubMed Central

    Lepage, Denis; Vaidya, Gaurav; Guralnick, Robert

    2014-01-01

    Abstract Scientific names of biological entities offer an imperfect resolution of the concepts that they are intended to represent. Often they are labels applied to entities ranging from entire populations to individual specimens representing those populations, even though such names only unambiguously identify the type specimen to which they were originally attached. Thus the real-life referents of names are constantly changing as biological circumscriptions are redefined and thereby alter the sets of individuals bearing those names. This problem is compounded by other characteristics of names that make them ambiguous identifiers of biological concepts, including emendations, homonymy and synonymy. Taxonomic concepts have been proposed as a way to address issues related to scientific names, but they have yet to receive broad recognition or implementation. Some efforts have been made towards building systems that address these issues by cataloguing and organizing taxonomic concepts, but most are still in conceptual or proof-of-concept stage. We present the on-line database Avibase as one possible approach to organizing taxonomic concepts. Avibase has been successfully used to describe and organize 844,000 species-level and 705,000 subspecies-level taxonomic concepts across every major bird taxonomic checklist of the last 125 years. The use of taxonomic concepts in place of scientific names, coupled with efficient resolution services, is a major step toward addressing some of the main deficiencies in the current practices of scientific name dissemination and use. PMID:25061375

  10. Demonstration of SLUMIS: a clinical database and management information system for a multi organ transplant program.

    PubMed Central

    Kurtz, M.; Bennett, T.; Garvin, P.; Manuel, F.; Williams, M.; Langreder, S.

    1991-01-01

    Because of the rapid evolution of the heart, heart/lung, liver, kidney and kidney/pancreas transplant programs at our institution, and because of a lack of an existing comprehensive database, we were required to develop a computerized management information system capable of supporting both clinical and research requirements of a multifaceted transplant program. SLUMIS (ST. LOUIS UNIVERSITY MULTI-ORGAN INFORMATION SYSTEM) was developed for the following reasons: 1) to comply with the reporting requirements of various transplant registries, 2) for reporting to an increasing number of government agencies and insurance carriers, 3) to obtain updates of our operative experience at regular intervals, 4) to integrate the Histocompatibility and Immunogenetics Laboratory (HLA) for online test result reporting, and 5) to facilitate clinical investigation. PMID:1807741

  11. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  12. Models for financial sustainability of biological databases and resources.

    PubMed

    Chandras, Christina; Weaver, Thomas; Zouberakis, Michael; Smedley, Damian; Schughart, Klaus; Rosenthal, Nadia; Hancock, John M; Kollias, George; Schofield, Paul N; Aidinis, Vassilis

    2009-01-01

    Following the technological advances that have enabled genome-wide analysis in most model organisms over the last decade, there has been unprecedented growth in genomic and post-genomic science with concomitant generation of an exponentially increasing volume of data and material resources. As a result, numerous repositories have been created to store and archive data, organisms and material, which are of substantial value to the whole community. Sustained access, facilitating re-use of these resources, is essential, not only for validation, but for re-analysis, testing of new hypotheses and developing new technologies/platforms. A common challenge for most data resources and biological repositories today is finding financial support for maintenance and development to best serve the scientific community. In this study we examine the problems that currently confront the data and resource infrastructure underlying the biomedical sciences. We discuss the financial sustainability issues and potential business models that could be adopted by biological resources and consider long term preservation issues within the context of mouse functional genomics efforts in Europe. PMID:20157490

  13. Organic Model of Interstellar Grains

    NASA Astrophysics Data System (ADS)

    Yabushita, S.; Inagaki, T.; Kawabe, T.; Wada, K.

    1987-04-01

    Extinction efficiency of grains is calculated from the Mie formula on the premise that the grains are of organic composition. The optical constants adopted for the calculations are those of E. coli, polystyrene and bovine albumin. The grain radius a is assumed to obey a distribution of the form N(a) ∝ a-α and the value of α is chosen so as to make the calculated extinction curve match the observed interstellar extinction curve. Although the calculated curve gives a reasonably good fit to the observed extinction curve for wavelengths less than 2100 Å, at longer wavelength regions, agreement is poor. It is concluded that another component is required for the organic model to be viable.

  14. FOAM (Functional Ontology Assignments for Metagenomes): a Hidden Markov Model (HMM) database with environmental focus.

    PubMed

    Prestat, Emmanuel; David, Maude M; Hultman, Jenni; Taş, Neslihan; Lamendella, Regina; Dvornik, Jill; Mackelprang, Rachel; Myrold, David D; Jumpponen, Ari; Tringe, Susannah G; Holman, Elizabeth; Mavromatis, Konstantinos; Jansson, Janet K

    2014-10-29

    A new functional gene database, FOAM (Functional Ontology Assignments for Metagenomes), was developed to screen environmental metagenomic sequence datasets. FOAM provides a new functional ontology dedicated to classify gene functions relevant to environmental microorganisms based on Hidden Markov Models (HMMs). Sets of aligned protein sequences (i.e. 'profiles') were tailored to a large group of target KEGG Orthologs (KOs) from which HMMs were trained. The alignments were checked and curated to make them specific to the targeted KO. Within this process, sequence profiles were enriched with the most abundant sequences available to maximize the yield of accurate classifier models. An associated functional ontology was built to describe the functional groups and hierarchy. FOAM allows the user to select the target search space before HMM-based comparison steps and to easily organize the results into different functional categories and subcategories. FOAM is publicly available at http://portal.nersc.gov/project/m1317/FOAM/. PMID:25260589
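
    FOAM profiles are standard HMMs, so a metagenomic protein set can be screened with HMMER and the hits tallied per profile. The sketch below assumes hmmsearch is installed and uses placeholder file names; the mapping of hits onto FOAM's functional ontology is omitted.

```python
# Sketch of screening predicted proteins against a FOAM HMM file with HMMER's
# hmmsearch, then tallying hits per HMM. File names are placeholders.
import subprocess
from collections import Counter

# Run the search; per-target hits are written to a parseable table.
subprocess.run(
    ["hmmsearch", "--tblout", "foam_hits.tbl", "-E", "1e-5",
     "FOAM.hmm", "metagenome_proteins.faa"],
    check=True,
)

# Count hits per HMM: the query (profile) name is the third whitespace-delimited
# column of the --tblout format; comment lines start with '#'.
counts = Counter()
with open("foam_hits.tbl") as fh:
    for line in fh:
        if not line.startswith("#"):
            counts[line.split()[2]] += 1

print(counts.most_common(10))
```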

  15. FOAM (Functional Ontology Assignments for Metagenomes): A Hidden Markov Model (HMM) database with environmental focus

    SciTech Connect

    Prestat, Emmanuel; David, Maude M.; Hultman, Jenni; Taş, Neslihan; Lamendella, Regina; Dvornik, Jill; Mackelprang, Rachel; Myrold, David D.; Jumpponen, Ari; Tringe, Susannah G.; Holman, Elizabeth; Mavromatis, Konstantinos; Jansson, Janet K.

    2014-09-26

    A new functional gene database, FOAM (Functional Ontology Assignments for Metagenomes), was developed to screen environmental metagenomic sequence datasets. FOAM provides a new functional ontology dedicated to classify gene functions relevant to environmental microorganisms based on Hidden Markov Models (HMMs). Sets of aligned protein sequences (i.e. ‘profiles’) were tailored to a large group of target KEGG Orthologs (KOs) from which HMMs were trained. The alignments were checked and curated to make them specific to the targeted KO. Within this process, sequence profiles were enriched with the most abundant sequences available to maximize the yield of accurate classifier models. An associated functional ontology was built to describe the functional groups and hierarchy. FOAM allows the user to select the target search space before HMM-based comparison steps and to easily organize the results into different functional categories and subcategories. FOAM is publicly available at http://portal.nersc.gov/project/m1317/FOAM/.

  16. Historical Land Use Change Estimates for Climate Modelers: Results from The HYDE Database.

    NASA Astrophysics Data System (ADS)

    Klein Goldewijk, K.

    2003-04-01

    It is beyond doubt that human activities have always modified the natural environment, but it has become clear that during the last centuries the intensity and scale of these modifications have increased dramatically. Land cover changes affect climate by their impact on surface energy and moisture budgets, and thus should be included in global climate models. Therefore, a growing need has developed for better knowledge of historical land cover. A database with historical data of the global environment (HYDE) was created, which can be used in global climate models. HYDE covers not only land use (changes), but also general topics such as population, livestock, gross domestic product, and value added of industry and/or services. These driving forces occur at several spatial and temporal scales and dimensions, and often differ among regions. This requires a geographically explicit modeling approach. Where possible, data have been organized at the country level, and for the period 1700 to 1990. Some data are also available with geographic detail (Klein Goldewijk, 2001; Klein Goldewijk and Battjes, 1997). Examples of a global reconstruction of 300 years of historical land use are presented, using gridded historical population estimates as a proxy for the allocation of agricultural land. References: Klein Goldewijk, K., 2001. Estimating Global Land Use over the past 300 years: The HYDE 2.0 database. Global Biogeochemical Cycles 15(2): 417-433. Klein Goldewijk, C.G.M. and J.J. Battjes, 1997. A Hundred Year (1890-1990) Database for Integrated Environmental Assessments (HYDE, version 1.1). RIVM Report no. 422514002. National Institute of Public Health and Environmental Protection (RIVM). 196 pp. Internet: http://www.rivm.nl/env/int/hyde/
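
    The proxy-based allocation mentioned above can be illustrated in a few lines: a country-level cropland total is distributed over grid cells in proportion to gridded population, capped by the available land area. All numbers below are invented.

```python
# Toy sketch of proxy-based allocation: a country's reported cropland area for
# a given year is spread over its grid cells in proportion to gridded
# population, capped by each cell's land area. All numbers are invented.
import numpy as np

population = np.array([[120.0, 40.0], [5.0, 0.0]])   # persons per cell
cell_area = np.full_like(population, 100.0)          # km^2 of land per cell
cropland_total = 90.0                                 # km^2 reported for the country

weights = population / population.sum()
cropland = np.minimum(weights * cropland_total, cell_area)

# Redistribute any amount clipped by the land-area cap (one simple pass).
shortfall = cropland_total - cropland.sum()
room = cell_area - cropland
if shortfall > 0 and room.sum() > 0:
    cropland += shortfall * room / room.sum()

print(cropland)
```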

  17. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    SciTech Connect

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L.; Loftis, J.P.; Shipe, P.C.; Truett, L.F.

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  18. Solid Waste Projection Model: Database (Version 1.4). Technical reference manual

    SciTech Connect

    Blackburn, C.; Cillan, T.

    1993-09-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC). The SWPM system provides a modeling and analysis environment that supports decisions in the process of evaluating various solid waste management alternatives. This document, one of a series describing the SWPM system, contains detailed information regarding the software and data structures utilized in developing the SWPM Version 1.4 Database. This document is intended for use by experienced database specialists and supports database maintenance, utility development, and database enhancement. Those interested in using the SWPM database should refer to the SWPM Database User's Guide. This document is available from the PNL Task M Project Manager (D. L. Stiles, 509-372-4358), the PNL Task L Project Manager (L. L. Armacost, 509-372-4304), the WHC Restoration Projects Section Manager (509-372-1443), or the WHC Waste Characterization Manager (509-372-1193).

  19. Functional Analysis and Discovery of Microbial Genes Transforming Metallic and Organic Pollutants: Database and Experimental Tools

    SciTech Connect

    Lawrence P. Wackett; Lynda B.M. Ellis

    2004-12-09

    Microbial functional genomics is faced with a burgeoning list of genes which are denoted as unknown or hypothetical for lack of any knowledge about their function. The majority of microbial genes encode enzymes. Enzymes are the catalysts of metabolism; catabolism, anabolism, stress responses, and many other cell functions. A major problem facing microbial functional genomics is proposed here to derive from the breadth of microbial metabolism, much of which remains undiscovered. The breadth of microbial metabolism has been surveyed by the PIs and represented according to reaction types on the University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD): http://umbbd.ahc.umn.edu/search/FuncGrps.html The database depicts metabolism of 49 chemical functional groups, representing most of current knowledge. Twice that number of chemical groups are proposed here to be metabolized by microbes. Thus, at least 50% of the unique biochemical reactions catalyzed by microbes remain undiscovered. This further suggests that many unknown and hypothetical genes encode functions yet undiscovered. This gap will be partly filled by the current proposal. The UM-BBD will be greatly expanded as a resource for microbial functional genomics. Computational methods will be developed to predict microbial metabolism which is not yet discovered. Moreover, a concentrated effort to discover new microbial metabolism will be conducted. The research will focus on metabolism of direct interest to DOE, dealing with the transformation of metals, metalloids, organometallics and toxic organics. This is precisely the type of metabolism which has been characterized most poorly to date. Moreover, these studies will directly impact functional genomic analysis of DOE-relevant genomes.

  20. Bayesian statistical modeling of disinfection byproduct (DBP) bromine incorporation in the ICR database.

    PubMed

    Francis, Royce A; Vanbriesen, Jeanne M; Small, Mitchell J

    2010-02-15

    Statistical models are developed for bromine incorporation in the trihalomethane (THM), trihaloacetic acids (THAA), dihaloacetic acid (DHAA), and dihaloacetonitrile (DHAN) subclasses of disinfection byproducts (DBPs) using distribution system samples from plants applying only free chlorine as a primary or residual disinfectant in the Information Collection Rule (ICR) database. The objective of this study is to characterize the effect of water quality conditions before, during, and post-treatment on distribution system bromine incorporation into DBP mixtures. Bayesian Markov Chain Monte Carlo (MCMC) methods are used to model individual DBP concentrations and estimate the coefficients of the linear models used to predict the bromine incorporation fraction for distribution system DBP mixtures in each of the four priority DBP classes. The bromine incorporation models achieve good agreement with the data. The most important predictors of bromine incorporation fraction across DBP classes are alkalinity, specific UV absorption (SUVA), and the bromide to total organic carbon ratio (Br:TOC) at the first point of chlorine addition. Free chlorine residual in the distribution system, distribution system residence time, distribution system pH, turbidity, and temperature only slightly influence bromine incorporation. The bromide to applied chlorine (Br:Cl) ratio is not a significant predictor of the bromine incorporation fraction (BIF) in any of the four classes studied. These results indicate that removal of natural organic matter and the location of chlorine addition are important treatment decisions that have substantial implications for bromine incorporation into disinfection byproduct in drinking waters. PMID:20095529
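
    As a generic illustration of the Bayesian MCMC machinery used here (not the study's hierarchical model), the sketch below runs a random-walk Metropolis sampler for a linear model of the bromine incorporation fraction against standardized predictors; X and y are assumed to be prepared arrays.

```python
# Bare-bones random-walk Metropolis sketch for a linear model of the bromine
# incorporation fraction (BIF). This is a generic illustration, not the
# hierarchical model of the study; X (n, p) and y (n,) are assumed prepared.
import numpy as np

def log_posterior(beta, sigma, X, y):
    if sigma <= 0:
        return -np.inf
    resid = y - X @ beta
    loglik = -0.5 * np.sum(resid ** 2) / sigma ** 2 - len(y) * np.log(sigma)
    logprior = -0.5 * np.sum(beta ** 2) / 10.0 ** 2      # weak N(0, 10^2) priors
    return loglik + logprior

def metropolis(X, y, n_iter=20000, step=0.02, seed=1):
    rng = np.random.default_rng(seed)
    beta, sigma = np.zeros(X.shape[1]), 1.0
    current = log_posterior(beta, sigma, X, y)
    samples = []
    for _ in range(n_iter):
        beta_p = beta + step * rng.standard_normal(len(beta))
        sigma_p = sigma + step * rng.standard_normal()
        proposed = log_posterior(beta_p, sigma_p, X, y)
        if np.log(rng.uniform()) < proposed - current:   # accept/reject step
            beta, sigma, current = beta_p, sigma_p, proposed
        samples.append(np.append(beta, sigma))
    return np.array(samples[n_iter // 2:])               # discard burn-in
```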

  1. Modeling BVOC isoprene emissions based on a GIS and remote sensing database

    NASA Astrophysics Data System (ADS)

    Wong, Man Sing; Sarker, Md. Latifur Rahman; Nichol, Janet; Lee, Shun-cheng; Chen, Hongwei; Wan, Yiliang; Chan, P. W.

    2013-04-01

    This paper presents a geographic information systems (GIS) model relating biogenic volatile organic compound (BVOC) isoprene emissions to ecosystem type and to environmental drivers such as light intensity, temperature, landscape factor and foliar density. Data and techniques have recently become available that permit new, improved estimates of isoprene emissions over Hong Kong. The techniques are based on Guenther et al.'s (1993, 1999) model. Isoprene emissions over Hong Kong have been mapped at a spatial resolution of 100 m, and a database has been constructed for retrieval of the isoprene maps from February 2007 to January 2008. This approach assigns emission rates directly to ecosystem types rather than to individual species, since, unlike in temperate regions where one or two species may dominate over large areas, Hong Kong's vegetation is extremely diverse, with up to 300 different species in 1 ha. Field measurements of emissions by canister sampling obtained a range of ambient emissions under different climatic conditions for Hong Kong's main ecosystem types in both urban and rural areas, and these were used for model validation. Results show the model-derived isoprene flux to have high to moderate correlations with field observations (r² = 0.77, 0.63 and 0.37 for all 24 field measurements, the summer subset, and the winter subset, respectively), which indicates the robustness of the approach when applied to tropical forests at a detailed level, as well as the promising role of remote sensing in isoprene mapping. The GIS model and raster database provide a simple and low-cost estimate of BVOC isoprene in Hong Kong at a detailed level. City planners and environmental authorities may use the derived models for estimating isoprene transport and its interaction with anthropogenic pollutants in urban areas.
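
    The Guenther-type light and temperature corrections at the core of such a model can be sketched as follows. The parameter values are the commonly cited Guenther et al. (1993) constants and should be checked against the original papers; the emission factor and foliar density in the example call are placeholders.

```python
# Sketch of the isoprene light (gamma_L) and temperature (gamma_T) correction
# factors of the Guenther et al. (1993) algorithm. Constants are the commonly
# cited values; emission factor and foliar density below are placeholders.
import numpy as np

R = 8.314                      # J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066    # light-response parameters
C_T1, C_T2 = 95000.0, 230000.0 # J mol-1
T_S, T_M = 303.0, 314.0        # K

def gamma_light(par):
    """par: photosynthetically active radiation, umol m-2 s-1."""
    return ALPHA * C_L1 * par / np.sqrt(1.0 + ALPHA ** 2 * par ** 2)

def gamma_temperature(t_kelvin):
    x = np.exp(C_T1 * (t_kelvin - T_S) / (R * T_S * t_kelvin))
    return x / (1.0 + np.exp(C_T2 * (t_kelvin - T_M) / (R * T_S * t_kelvin)))

def isoprene_flux(emission_factor, foliar_density, par, t_kelvin):
    """Flux = base emission rate * foliar density * light and temperature terms."""
    return emission_factor * foliar_density * gamma_light(par) * gamma_temperature(t_kelvin)

print(isoprene_flux(emission_factor=24.0, foliar_density=0.4,
                    par=1000.0, t_kelvin=303.15))
```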

  2. DSSTOX WEBSITE LAUNCH: IMPROVING PUBLIC ACCESS TO DATABASES FOR BUILDING STRUCTURE-TOXICITY PREDICTION MODELS

    EPA Science Inventory

    DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
    Ann M. Richard
    US Environmental Protection Agency, Research Triangle Park, NC, USA

    Distributed: Decentralized set of standardized, field-delimited databases,...

  3. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software should satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, use of automatic information extraction tools, web technology and databases. To appear in the Journal of Database Management.

  4. A Database for Propagation Models and Conversion to C++ Programming Language

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Angkasa, Krisjani; Rucker, James

    1996-01-01

    In the past few years, a computer program was produced to contain propagation models and the necessary prediction methods of most propagation phenomena. The propagation model database described here creates a user-friendly environment that makes using the database easy for experienced users and novices alike. The database is designed to pass data through the desired models easily and generate relevant results quickly. The database already contains many of the propagation phenomena models accepted by the propagation community, and every year new models are added. The major sources of the included models are the NASA Propagation Effects Handbook, the International Radio Consultative Committee (CCIR), and publications of the Institute of Electrical and Electronics Engineers (IEEE).

  5. Organ Impairment—Drug–Drug Interaction Database: A Tool for Evaluating the Impact of Renal or Hepatic Impairment and Pharmacologic Inhibition on the Systemic Exposure of Drugs

    PubMed Central

    Yeung, CK; Yoshida, K; Kusama, M; Zhang, H; Ragueneau-Majlessi, I; Argon, S; Li, L; Chang, P; Le, CD; Zhao, P; Zhang, L; Sugiyama, Y; Huang, S-M

    2015-01-01

    The organ impairment and drug–drug interaction (OI-DDI) database is the first rigorously assembled database of pharmacokinetic drug exposure data from publicly available renal and hepatic impairment studies presented together with the maximum change in drug exposure from drug interaction inhibition studies. The database was used to conduct a systematic comparison of the effect of renal/hepatic impairment and pharmacologic inhibition on drug exposure. Additional applications are feasible with the public availability of this database. PMID:26380158

  6. 2D face database diversification based on 3D face modeling

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.

    2011-05-01

    Pose and illumination are identified as major problems in 2D face recognition (FR). It has been shown that the more diverse the training instances, the more accurate and adaptable the FR system becomes. Based on this common awareness, researchers have developed a large number of photographic face databases to meet the demand for training data. In this paper, we propose a novel scheme for 2D face database diversification based on 3D face modeling and computer graphics techniques, which supplies additional variation in pose and illumination. Based on existing samples of the same individuals in the database, a synthesized 3D face model is employed to create composited 2D scenarios with extra lighting and pose variations. The new model is based on a 3D Morphable Model (3DMM) and a genetic optimization algorithm. The experimental results show that the complemented instances markedly increase the diversification of the existing database.

  7. Publication Trends in Model Organism Research

    PubMed Central

    Dietrich, Michael R.; Ankeny, Rachel A.; Chen, Patrick M.

    2014-01-01

    In 1990, the National Institutes of Health (NIH) gave some organisms special status as designated model organisms. This article documents publication trends for these NIH-designated model organisms over the past 40 years. We find that being designated a model organism by the NIH does not guarantee an increasing publication trend. An analysis of model and nonmodel organisms included in GENETICS since 1960 does reveal a sharp decline in the number of publications using nonmodel organisms yet no decline in the overall species diversity. We suggest that organisms with successful publication records tend to share critical characteristics, such as being well developed as standardized, experimental systems and being used by well-organized communities with good networks of exchange and methods of communication. PMID:25381363

  8. Estimating spatial distribution of soil organic carbon for the Midwestern United States using historical database.

    PubMed

    Kumar, Sandeep

    2015-05-01

    Soil organic carbon (SOC) is the most important parameter influencing soil health, global climate change, crop productivity, and various ecosystem services. Therefore, estimating SOC at larger scales is important. The present study was conducted to estimate the SOC pool at regional scale using the historical database gathered by the National Soil Survey Staff. Specific objectives of the study were to upscale the SOC density (kg C m⁻²) and total SOC pool (Pg C) across the Midwestern United States using the geographically weighted regression kriging (GWRK), and compare the results with those obtained from the geographically weighted regression (GWR) using the data for 3485 georeferenced profiles. Results from this study support the conclusion that the GWRK produced satisfactory predictions with lower root mean square error (5.60 kg m⁻²), mean estimation error (0.01 kg m⁻²) and mean absolute estimation error (4.30 kg m⁻²), and higher R² (0.58) and goodness-of-prediction statistic (G=0.59) values. The superiority of this approach is evident through a substantial increase in R² (0.45) compared to that for the global regression (R²=0.28). Croplands of the region store 16.8 Pg SOC followed by shrubs (5.85 Pg) and forests (4.45 Pg). Total SOC pool for the Midwestern region ranges from 31.5 to 31.6 Pg. This study illustrates that the GWRK approach explicitly addresses the spatial dependency and spatial non-stationarity issues for interpolating SOC density across the regional scale. PMID:25655697
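
    The GWR step underlying GWRK amounts to a weighted least-squares fit at each prediction location, with weights that decay with distance. A compact sketch is given below; the residual kriging that distinguishes GWRK from plain GWR is omitted, and the bandwidth is assumed to be chosen separately (e.g., by cross-validation).

```python
# Compact sketch of the geographically weighted regression (GWR) step that
# GWRK builds on: ordinary least squares is replaced, at each prediction
# location, by weighted least squares with distance-decaying weights.
import numpy as np

def gwr_predict(coords, X, y, coords0, x0, bandwidth):
    """
    coords: (n, 2) sample locations; X: (n, p) predictors; y: (n,) SOC density.
    coords0, x0: location and predictor values of the point to estimate.
    """
    d = np.linalg.norm(coords - coords0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    Xd = np.hstack([np.ones((len(X), 1)), X])        # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return np.concatenate([[1.0], x0]) @ beta
```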

  9. SOLID WASTE PROJECTION MODEL: DATABASE USER'S GUIDE (Version 1.0)

    SciTech Connect

    Carr, F.; Stiles, D.

    1991-01-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for preparing to use Version 1 of the SWPM database, for entering and maintaining data, and for performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions, and does not provide instructions in the use of Paradox, the database management system in which the SWPM database is established.

  10. Solid Waste Projection Model: Database User's Guide. Version 1.4

    SciTech Connect

    Blackburn, C.L.

    1993-10-01

    The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address Hanford solid waste management issues. This document is one of a set of documents supporting the SWPM system and providing instructions in the use and maintenance of SWPM components. This manual contains instructions for using Version 1.4 of the SWPM database: system requirements and preparation, entering and maintaining data, and performing routine database functions. This document supports only those operations which are specific to SWPM database menus and functions and does not provide instruction in the use of Paradox, the database management system in which the SWPM database is established.

  11. Relational-database model for improving quality assurance and process control in a composite manufacturing environment

    NASA Astrophysics Data System (ADS)

    Gentry, Jeffery D.

    2000-05-01

    A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web-based client/server architectures are discussed in the context of composite material manufacturing.
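
    A minimal sketch of the pattern described above, using SQLite: process variables and quality assurance measurements live in separate tables keyed by lot number and are joined for correlation analysis. Table and column names are illustrative.

```python
# Minimal illustration: related tables keyed by lot number, joined to relate
# process settings to quality outcomes. Names and values are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE process_vars (lot_id TEXT, cure_temp_c REAL, cure_time_min REAL);
CREATE TABLE qa_measurements (lot_id TEXT, void_content_pct REAL, tensile_mpa REAL);
""")
con.executemany("INSERT INTO process_vars VALUES (?, ?, ?)",
                [("L001", 177.0, 120.0), ("L002", 182.0, 110.0)])
con.executemany("INSERT INTO qa_measurements VALUES (?, ?, ?)",
                [("L001", 0.8, 610.0), ("L002", 1.4, 575.0)])

# Join the tables on lot number to inspect process/quality relationships.
for row in con.execute("""
    SELECT p.lot_id, p.cure_temp_c, q.void_content_pct, q.tensile_mpa
    FROM process_vars p JOIN qa_measurements q ON p.lot_id = q.lot_id
"""):
    print(row)
```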

  12. Neuronal database integration: the Senselab EAV data model.

    PubMed Central

    Marenco, L.; Nadkarni, P.; Skoufos, E.; Shepherd, G.; Miller, P.

    1999-01-01

    We discuss an approach towards integrating heterogeneous nervous system data using an augmented Entity-Attribute-Value (EAV) schema design. This approach, widely used in implementing electronic patient record systems (EPRSs), allows the physical schema of the database to be relatively immune to changes in domain knowledge. This is because new kinds of facts are added as data (or as metadata) rather than hard-coded as the names of newly created tables or columns. Because the domain knowledge is stored as metadata, a framework developed in one scientific domain can be ported to another with only modest revision. We describe our progress in creating a code framework that handles browsing and hyperlinking of the different kinds of data. PMID:10566329
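
    A toy illustration of the EAV idea, in SQLite: new kinds of facts arrive as rows in a fixed-schema value table, described by rows in a metadata table, rather than as new columns. The attribute names and values are invented.

```python
# Toy entity-attribute-value (EAV) illustration: the physical schema stays
# fixed while new facts are added as data plus metadata rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE attribute_meta (attribute TEXT PRIMARY KEY, datatype TEXT, description TEXT);
CREATE TABLE eav (entity_id INTEGER, attribute TEXT, value TEXT);
""")
con.executemany("INSERT INTO attribute_meta VALUES (?, ?, ?)", [
    ("resting_potential_mv", "float", "Resting membrane potential"),
    ("neurotransmitter", "text", "Primary neurotransmitter"),
])
con.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "resting_potential_mv", "-65"),
    (1, "neurotransmitter", "glutamate"),
])

# Reassemble one entity's record by pivoting its attribute rows.
rows = con.execute("SELECT attribute, value FROM eav WHERE entity_id = ?", (1,))
print(dict(rows.fetchall()))
```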

  13. An Online Database for Informing Ecological Network Models: http://kelpforest.ucsc.edu

    PubMed Central

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H.; Tinker, Martin T.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui). PMID:25343723

  14. An online database for informing ecological network models: http://kelpforest.ucsc.edu

    USGS Publications Warehouse

    Beas-Luna, Rodrigo; Tinker, M. Tim; Novak, Mark; Carr, Mark H.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison C.

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  15. Approach for ontological modeling of database schema for the generation of semantic knowledge on the web

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna

    2015-11-01

    Currently there is a large quantity of content on web pages that is generated from relational databases. Conceptual domain models provide for the integration of heterogeneous content on the semantic level. The use of an ontology as the conceptual model of relational data sources makes them available to web agents and services and provides for the employment of ontological techniques for data access, navigation and reasoning. The achievement of interoperability between relational databases and ontologies enriches the web with semantic knowledge. The establishment of a semantic database conceptual model based on an ontology facilitates the development of data integration systems that use the ontology as a unified global view. An approach for the generation of an ontologically based conceptual model is presented. The ontology representing the database schema is obtained by matching schema elements to ontology concepts. An algorithm for the matching process is designed. An infrastructure for the inclusion of mediation between the database and the ontology, bridging legacy data with formal semantic meaning, is presented. The knowledge modeling approach is implemented on a sample database.
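    The core matching step described above, pairing relational schema elements with ontology concepts, can be approximated with simple lexical similarity. The sketch below uses invented table and concept names and uses string similarity purely as a stand-in for the paper's matching algorithm.

```python
# Toy schema-element-to-ontology-concept matcher; names are hypothetical and
# lexical similarity stands in for the matching algorithm described above.
from difflib import SequenceMatcher

schema_elements = ["customer", "order_line", "product_category"]
ontology_concepts = ["Customer", "OrderItem", "ProductCategory", "Invoice"]

def similarity(a: str, b: str) -> float:
    # Normalize case and underscores before comparing.
    return SequenceMatcher(None, a.lower().replace("_", ""), b.lower()).ratio()

for element in schema_elements:
    best = max(ontology_concepts, key=lambda c: similarity(element, c))
    print(f"{element:18s} -> {best} ({similarity(element, best):.2f})")
```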

  16. Modeling and Measuring Organization Capital

    ERIC Educational Resources Information Center

    Atkeson, Andrew; Kehoe, Patrick J.

    2005-01-01

    Manufacturing plants have a clear life cycle: they are born small, grow substantially with age, and eventually die. Economists have long thought that this life cycle is driven by organization capital, the accumulation of plant-specific knowledge. The location of plants in the life cycle determines the size of the payments, or organization rents,…

  17. Organization Development: Strategies and Models.

    ERIC Educational Resources Information Center

    Beckhard, Richard

    This book, written for managers, specialists, and students of management, is based largely on the author's experience in helping organization leaders with planned-change efforts, and on related experience of colleagues in the field. Chapter 1 presents the background and causes for the increased concern with organization development and planned…

  18. Database Manager

    ERIC Educational Resources Information Center

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  19. Tree-Structured Digital Organisms Model

    NASA Astrophysics Data System (ADS)

    Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo

    Tierra and Avida are well-known models of digital organisms. They describe a life process as a sequence of computation codes. A linear sequence model may not be the only way to describe a digital organism, though it is very simple for a computer-based model. Thus we propose a new digital organism model based on a tree structure, which is rather similar to genetic programming. With our model, a life process is a combination of various functions, as life in the real world is. This implies that our model can easily describe the hierarchical structure of life, and it can simulate evolutionary computation through the mutual interaction of functions. We verified through simulations that our model can be regarded as a digital organism model according to its definitions. Our model even succeeded in creating species such as viruses and parasites.
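    A tree of function nodes of the kind described can be prototyped in a few lines. The node set, evaluation rule and subtree-replacement mutation below are invented for illustration and are not the authors' encoding.

```python
# Toy tree-structured 'organism': internal nodes are functions, leaves are
# constants; mutation replaces a randomly chosen subtree. Purely illustrative.
import random
import operator

FUNCTIONS = [operator.add, operator.mul, max, min]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.randint(0, 9)                      # leaf: constant
    return (random.choice(FUNCTIONS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(node):
    if isinstance(node, int):
        return node
    fn, left, right = node
    return fn(evaluate(left), evaluate(right))

def mutate(node, depth=3):
    if isinstance(node, int) or random.random() < 0.2:
        return random_tree(depth)                        # replace this subtree
    fn, left, right = node
    return (fn, mutate(left, depth - 1), mutate(right, depth - 1))

organism = random_tree()
print("value before mutation:", evaluate(organism))
print("value after mutation: ", evaluate(mutate(organism)))
```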

  20. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases for QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two databases, one commercial and one freely available: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level. PMID:26046311
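    The central recommendation above, compiling training sets only from records that share one assay method and material, amounts to a grouping-and-filtering step before any modeling. The sketch below uses invented record fields and thresholds; it is not tied to the ChEMBL or Integrity schemas.

```python
# Group activity records by (target, assay type, assay material) and keep only
# groups large enough to train on; field names are invented for illustration.
from collections import defaultdict

records = [
    {"smiles": "CCO",  "target": "HIV-1 RT", "assay": "enzymatic",  "material": "recombinant RT", "pIC50": 5.1},
    {"smiles": "CCN",  "target": "HIV-1 RT", "assay": "enzymatic",  "material": "recombinant RT", "pIC50": 6.3},
    {"smiles": "CCCl", "target": "HIV-1 RT", "assay": "cell-based", "material": "MT-4 cells",     "pIC50": 4.2},
]

MIN_SET_SIZE = 2                       # arbitrary cut-off for this sketch
by_assay = defaultdict(list)
for rec in records:
    by_assay[(rec["target"], rec["assay"], rec["material"])].append(rec)

training_sets = {key: recs for key, recs in by_assay.items() if len(recs) >= MIN_SET_SIZE}
for key, recs in training_sets.items():
    print(key, "->", len(recs), "compounds")
```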

  1. Automatic generation of conceptual database design tools from data model specifications

    SciTech Connect

    Hong, Shuguang.

    1989-01-01

    The problems faced in the design and implementation of database software systems based on object-oriented data models are similar to those of other software design efforts: they are difficult and complex, yet involve redundant effort. Automatic generation of database software systems has been proposed as a solution to these problems. In order to generate database software systems for a variety of object-oriented data models, two critical issues must be addressed: data model specification and software generation. SeaWeed is a software system that automatically generates conceptual database design tools from data model specifications. A meta model has been defined for the specification of a class of object-oriented data models. This meta model provides a set of primitive modeling constructs that can be used to express the semantics, or unique characteristics, of specific data models. Software reusability has been adopted for the software generation. The technique of design reuse is utilized to derive the requirement specification of the software to be generated from data model specifications. The mechanism of code reuse is used to produce the necessary reusable software components. This dissertation presents the research results of SeaWeed, including the meta model, data model specification, a formal representation of design reuse and code reuse, and the software generation paradigm.

  2. Guide on Data Models in the Selection and Use of Database Management Systems. Final Report.

    ERIC Educational Resources Information Center

    Gallagher, Leonard J.; Draper, Jesse M.

    A tutorial introduction to data models in general is provided, with particular emphasis on the relational and network models defined by the two proposed ANSI (American National Standards Institute) database language standards. Examples based on the network and relational models include specific syntax and semantics, while examples from the other…

  3. Combining computational models, semantic annotations and simulation experiments in a graph database

    PubMed Central

    Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar

    2015-01-01

    Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It grounds on a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves the access of computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
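    The storage idea described above, model structure, semantic annotations and simulation descriptions held as one connected graph, can be mocked up with a general-purpose graph library. The node and edge labels below are invented; the actual MaSyMoS schema and its graph-database queries are not reproduced here.

```python
# Toy graph linking a model, one of its species, an ontology annotation and a
# simulation description; labels are invented, not the MaSyMoS schema.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("model:glycolysis", kind="SBML model")
g.add_node("species:glucose", kind="species")
g.add_node("CHEBI:17234", kind="ontology term")       # ChEBI identifier for glucose
g.add_node("sim:timecourse-1", kind="SED-ML description")

g.add_edge("model:glycolysis", "species:glucose", relation="contains")
g.add_edge("species:glucose", "CHEBI:17234", relation="is annotated with")
g.add_edge("sim:timecourse-1", "model:glycolysis", relation="simulates")

# 'Search via biological facts': find models containing anything annotated
# with CHEBI:17234 by walking the graph backwards from the ontology term.
for species, _term in g.in_edges("CHEBI:17234"):
    for model, _sp in g.in_edges(species):
        print(model)
```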

  4. Analysis of the Properties of Working Substances for the Organic Rankine Cycle based Database "REFPROP"

    NASA Astrophysics Data System (ADS)

    Galashov, Nikolay; Tsibulskiy, Svyatoslav; Serova, Tatiana

    2016-02-01

    The object of the study are substances that are used as a working fluid in systems operating on the basis of an organic Rankine cycle. The purpose of research is to find substances with the best thermodynamic, thermal and environmental properties. Research conducted on the basis of the analysis of thermodynamic and thermal properties of substances from the base "REFPROP" and with the help of numerical simulation of combined-cycle plant utilization triple cycle, where the lower cycle is an organic Rankine cycle. Base "REFPROP" describes and allows to calculate the thermodynamic and thermophysical parameters of most of the main substances used in production processes. On the basis of scientific publications on the use of working fluids in an organic Rankine cycle analysis were selected ozone-friendly low-boiling substances: ammonia, butane, pentane and Freon: R134a, R152a, R236fa and R245fa. For these substances have been identified and tabulated molecular weight, temperature of the triple point, boiling point, at atmospheric pressure, the parameters of the critical point, the value of the derivative of the temperature on the entropy of the saturated vapor line and the potential ozone depletion and global warming. It was also identified and tabulated thermodynamic and thermophysical parameters of the steam and liquid substances in a state of saturation at a temperature of 15 °C. This temperature is adopted as the minimum temperature of heat removal in the Rankine cycle when working on the water. Studies have shown that the best thermodynamic, thermal and environmental properties of the considered substances are pentane, butane and R245fa. For a more thorough analysis based on a gas turbine plant NK-36ST it has developed a mathematical model of combined cycle gas turbine (CCGT) triple cycle, where the lower cycle is an organic Rankine cycle, and is used as the air cooler condenser. Air condenser allows stating material at a temperature below 0 °C. Calculation of the

  5. RECEIVING WATER QUALITY DATABASE FOR TESTING OF MATHEMATICAL MODELS

    EPA Science Inventory

    Many mathematical models exist for simulation of quantity and quality parameters of receiving waters. Such models are frequently used in the evaluation of effects on receiving waters of pollution control alternatives such as advanced waste treatment and nonpoint source runoff aba...

  6. FOAM (Functional Ontology Assignments for Metagenomes): A Hidden Markov Model (HMM) database with environmental focus

    DOE PAGESBeta

    Prestat, Emmanuel; David, Maude M.; Hultman, Jenni; Taş, Neslihan; Lamendella, Regina; Dvornik, Jill; Mackelprang, Rachel; Myrold, David D.; Jumpponen, Ari; Tringe, Susannah G.; et al

    2014-09-26

    A new functional gene database, FOAM (Functional Ontology Assignments for Metagenomes), was developed to screen environmental metagenomic sequence datasets. FOAM provides a new functional ontology dedicated to classify gene functions relevant to environmental microorganisms based on Hidden Markov Models (HMMs). Sets of aligned protein sequences (i.e. ‘profiles’) were tailored to a large group of target KEGG Orthologs (KOs) from which HMMs were trained. The alignments were checked and curated to make them specific to the targeted KO. Within this process, sequence profiles were enriched with the most abundant sequences available to maximize the yield of accurate classifier models. An associated functional ontology was built to describe the functional groups and hierarchy. FOAM allows the user to select the target search space before HMM-based comparison steps and to easily organize the results into different functional categories and subcategories. FOAM is publicly available at http://portal.nersc.gov/project/m1317/FOAM/.

  7. The CORE Model to Student Organization Development.

    ERIC Educational Resources Information Center

    Conyne, Robert K.

    Student organization development (SOD) is an emerging technology for conducting intentional student development through positive alteration of student organizations. One model (CORE) for conceptualizing SOD is in use at the Student Development Center of the University of Cincinnati. The CORE model to SOD is comprised of three concentric rings, the…

  8. Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases

    PubMed Central

    Dinu, Valentin; Nadkarni, Prakash

    2007-01-01

    Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
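    The trade-off the authors describe, a small data sub-schema plus a metadata sub-schema that says which attributes a class may carry, can be seen in a minimal sketch. Table and attribute names below are generic placeholders, not taken from any specific production system.

```python
# Minimal EAV layout: one narrow data table plus a metadata table that
# constrains which attributes are valid. Generic names, illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE attribute_metadata (            -- metadata sub-schema
    attribute   TEXT PRIMARY KEY,
    datatype    TEXT NOT NULL,
    description TEXT
);
CREATE TABLE eav (                           -- data sub-schema: entity, attribute, value
    entity_id   INTEGER NOT NULL,
    attribute   TEXT NOT NULL REFERENCES attribute_metadata(attribute),
    value       TEXT,
    PRIMARY KEY (entity_id, attribute)
);
""")
db.execute("INSERT INTO attribute_metadata VALUES ('serum_glucose', 'numeric', 'mg/dL')")
db.execute("INSERT INTO eav VALUES (1001, 'serum_glucose', '95')")

# Sparse data: entities simply have no rows for attributes that do not apply.
print(db.execute("SELECT * FROM eav WHERE entity_id = 1001").fetchall())
```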

  9. Partial automation of database processing of simulation outputs from L-systems models of plant morphogenesis.

    PubMed

    Chen, Yi-Ping Phoebe; Hanan, Jim

    2002-01-01

    Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database, to simplify query development for the recursively structured branching relationship. Use of biological terminology in an interactive query builder contributes towards making the system biologist-friendly. PMID:12069728

  10. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions

    PubMed Central

    Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.

    2014-01-01

    Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498
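    The hybrid layout described above, searchable setup metadata in a relational store and bulk results in a hierarchical format, can be sketched with SQLite and HDF5. The file names, table and dataset paths below are placeholders and do not reproduce the WholeCellSimDB implementation.

```python
# Sketch of a hybrid store: simulation metadata in SQLite, results in HDF5.
# All names are placeholders, not the WholeCellSimDB schema.
import sqlite3
import numpy as np
import h5py

meta = sqlite3.connect("simulations.sqlite")
meta.execute("""CREATE TABLE IF NOT EXISTS simulation
                (sim_id TEXT PRIMARY KEY, model TEXT, seed INTEGER, length_s REAL)""")
meta.execute("INSERT OR REPLACE INTO simulation VALUES ('sim-001', 'whole-cell-v1', 42, 3600.0)")
meta.commit()

with h5py.File("simulations.h5", "w") as results:
    # One dataset per simulation property; groups are created implicitly.
    results.create_dataset("sim-001/growth", data=np.linspace(1.0, 2.0, 3600))

# Typical access pattern: search the metadata first, then slice only what is needed.
sim_id, = meta.execute("SELECT sim_id FROM simulation WHERE seed = 42").fetchone()
with h5py.File("simulations.h5", "r") as results:
    print(results[f"{sim_id}/growth"][:10])   # first ten time steps only
```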

  11. Database Design Learning: A Project-Based Approach Organized through a Course Management System

    ERIC Educational Resources Information Center

    Dominguez, Cesar; Jaime, Arturo

    2010-01-01

    This paper describes an active method for database design learning through practical tasks development by student teams in a face-to-face course. This method integrates project-based learning, and project management techniques and tools. Some scaffolding is provided at the beginning that forms a skeleton that adapts to a great variety of…

  12. Pricing Models for Electronic Databases on the Internet.

    ERIC Educational Resources Information Center

    Machovec, George S., Ed.

    1998-01-01

    Outlines prevalent electronic information pricing models along with their strengths and weaknesses. Highlights include full-time equivalent (FTE) student counts; pure head counts; print plus fee; concurrent users; IP (information provider) classes; by transaction, connect time or retrieval; other factors, e.g., total budget and materials budget;…

  13. Modeling Powered Aerodynamics for the Orion Launch Abort Vehicle Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Chan, David T.; Walker, Eric L.; Robinson, Philip E.; Wilson, Thomas M.

    2011-01-01

    Modeling the aerodynamics of the Orion Launch Abort Vehicle (LAV) has presented many technical challenges to the developers of the Orion aerodynamic database. During a launch abort event, the aerodynamic environment around the LAV is very complex as multiple solid rocket plumes interact with each other and the vehicle. It is further complicated by vehicle separation events such as between the LAV and the launch vehicle stack or between the launch abort tower and the crew module. The aerodynamic database for the LAV was developed mainly from wind tunnel tests involving powered jet simulations of the rocket exhaust plumes, supported by computational fluid dynamic simulations. However, limitations in both methods have made it difficult to properly capture the aerodynamics of the LAV in experimental and numerical simulations. These limitations have also influenced decisions regarding the modeling and structure of the aerodynamic database for the LAV and led to compromises and creative solutions. Two database modeling approaches are presented in this paper (incremental aerodynamics and total aerodynamics), with examples showing strengths and weaknesses of each approach. In addition, the unique problems presented to the database developers by the large data space required for modeling a launch abort event illustrate the complexities of working with multi-dimensional data.

  14. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. To accommodate this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, we establish the mapping rules between data types in the IFC specification and the Oracle database. Secondly, we design the IFC database according to the relationships among IFC entities. Thirdly, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data in the corresponding tables of the IFC database. In the experiment, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.
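    The first step described, mapping IFC/EXPRESS attribute types to database column types before generating entity tables, can be illustrated with a small rule table. The type mapping and the entity definition below are simplified assumptions, not the full IFC specification or the authors' Oracle DDL.

```python
# Simplified mapping from EXPRESS-style attribute types to SQL column types,
# then generation of a CREATE TABLE statement for one IFC-like entity.
# Illustrative only; real IFC entities carry many more attributes plus inheritance.
TYPE_MAP = {
    "STRING": "VARCHAR2(255)",
    "REAL": "NUMBER",
    "INTEGER": "NUMBER(10)",
    "ENTITY_REF": "NUMBER(10)",    # foreign key to another entity table
}

ifc_wall = {                       # hypothetical subset of IfcWall attributes
    "GlobalId": "STRING",
    "Name": "STRING",
    "ObjectPlacement": "ENTITY_REF",
    "OverallHeight": "REAL",
}

def create_table_sql(entity_name: str, attributes: dict) -> str:
    columns = ",\n  ".join(f"{name} {TYPE_MAP[t]}" for name, t in attributes.items())
    return f"CREATE TABLE {entity_name} (\n  id NUMBER(10) PRIMARY KEY,\n  {columns}\n)"

print(create_table_sql("IfcWall", ifc_wall))
```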

  15. Circulation Control Model Experimental Database for CFD Validation

    NASA Technical Reports Server (NTRS)

    Paschal, Keith B.; Neuhart, Danny H.; Beeler, George B.; Allan, Brian G.

    2012-01-01

    A 2D circulation control wing was tested in the Basic Aerodynamic Research Tunnel at the NASA Langley Research Center. A traditional circulation control wing employs tangential blowing along the span over a trailing-edge Coanda surface for the purpose of lift augmentation. This model has been tested extensively at the Georgia Tech Research Institute for the purpose of performance documentation at various blowing rates. The current study seeks to expand on the previous work by documenting additional flow-field data needed for validation of computational fluid dynamics. Two jet momentum coefficients were tested during this entry: 0.047 and 0.114. Boundary-layer transition was investigated and turbulent boundary layers were established on both the upper and lower surfaces of the model. Chordwise and spanwise pressure measurements were made, and tunnel sidewall pressure footprints were documented. Laser Doppler Velocimetry measurements were made on both the upper and lower surface of the model at two chordwise locations (x/c = 0.8 and 0.9) to document the state of the boundary layers near the spanwise blowing slot.

  16. Extracting protein alignment models from the sequence database.

    PubMed Central

    Neuwald, A F; Liu, J S; Lipman, D J; Lawrence, C E

    1997-01-01

    Biologists often gain structural and functional insights into a protein sequence by constructing a multiple alignment model of the family. Here a program called Probe fully automates this process of model construction starting from a single sequence. Central to this program is a powerful new method to locate and align only those, often subtly, conserved patterns essential to the family as a whole. When applied to randomly chosen proteins, Probe found on average about four times as many relationships as a pairwise search and yielded many new discoveries. These include: an obscure subfamily of globins in the roundworm Caenorhabditis elegans ; two new superfamilies of metallohydrolases; a lipoyl/biotin swinging arm domain in bacterial membrane fusion proteins; and a DH domain in the yeast Bud3 and Fus2 proteins. By identifying distant relationships and merging families into superfamilies in this way, this analysis further confirms the notion that proteins evolved from relatively few ancient sequences. Moreover, this method automatically generates models of these ancient conserved regions for rapid and sensitive screening of sequences. PMID:9108146

  17. Primate models in organ transplantation.

    PubMed

    Anderson, Douglas J; Kirk, Allan D

    2013-09-01

    Large animal models have long served as the proving grounds for advances in transplantation, bridging the gap between inbred mouse experimentation and human clinical trials. Although a variety of species have been and continue to be used, the emergence of highly targeted biologic- and antibody-based therapies has required models to have a high degree of homology with humans. Thus, the nonhuman primate has become the model of choice in many settings. This article will provide an overview of nonhuman primate models of transplantation. Issues of primate genetics and care will be introduced, and a brief overview of technical aspects for various transplant models will be discussed. Finally, several prominent immunosuppressive and tolerance strategies used in primates will be reviewed. PMID:24003248

  18. The Cambridge MRI database for animal models of Huntington disease.

    PubMed

    Sawiak, Stephen J; Morton, A Jennifer

    2016-01-01

    We describe the Cambridge animal brain magnetic resonance imaging repository comprising 400 datasets to date from mouse models of Huntington disease. The data include raw images as well as segmented grey and white matter images with maps of cortical thickness. All images and phenotypic data for each subject are freely-available without restriction from (http://www.dspace.cam.ac.uk/handle/1810/243361/). Software and anatomical population templates optimised for animal brain analysis with MRI are also available from this site. PMID:25941090

  19. Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)

    USGS Publications Warehouse

    Tessler, Steven

    2003-01-01

    The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.

  20. Chromism of Model Organic Aerosol

    NASA Astrophysics Data System (ADS)

    Rincon, Angela; Guzman, Marcelo; Hoffmann, Michael; Colussi, Agustin

    2008-03-01

    The optical properties of the atmospheric aerosol play a fundamental role in the Earth's radiative balance. Since more than half of the aerosol mass consists of complex organic matter that absorbs in the ultraviolet and visible regions of the spectrum, it is important to establish the identity of the organic chromophores. Here we report studies on the chromism vs. chemical composition of photolyzed (λ > 305 nm) solutions of pyruvic acid, a widespread aerosol component, under a variety of experimental conditions that include substrate concentration, temperature and the presence of relevant spectator solutes, such as ammonium sulfate. We use high-resolution mass and 13C NMR spectrometries to track chemical speciation in photolyzed solutions as they undergo thermochromic and photobleaching cycles. Since the chemical identity of the components of these mixtures does not change in these cycles, in which photobleached solutions gradually recover their yellow color in the dark with non-conventional kinetics typical of aggregation processes, we infer that the visible absorptions likely involve the intermolecular coupling of carbonyl chromophores in supramolecular assemblies made possible by the polyfunctional nature of the products of pyruvic acid photolysis.

  1. The Genomic Threading Database: a comprehensive resource for structural annotations of the genomes from key organisms.

    PubMed

    McGuffin, Liam J; Street, Stefano A; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T

    2004-01-01

    Currently, the Genomic Threading Database (GTD) contains structural assignments for the proteins encoded within the genomes of nine eukaryotes and 101 prokaryotes. Structural annotations are carried out using a modified version of GenTHREADER, a reliable fold recognition method. The GenTHREADER annotation jobs are distributed across multiple clusters of processors using grid technology and the predictions are deposited in a relational database accessible via a web interface at http://bioinf.cs.ucl.ac.uk/GTD. Using this system, up to 84% of proteins encoded within a genome can be confidently assigned to known folds with 72% of the residues aligned. On average in the GTD, 64% of proteins encoded within a genome are confidently assigned to known folds and 58% of the residues are aligned to structures. PMID:14681393

  2. Query Monitoring and Analysis for Database Privacy - A Security Automata Model Approach

    PubMed Central

    Kumar, Anand; Ligatti, Jay; Tu, Yi-Cheng

    2015-01-01

    Privacy and usage restriction issues are important when valuable data are exchanged or acquired by different organizations. Standard access control mechanisms either restrict or completely grant access to valuable data. On the other hand, data obfuscation limits the overall usability and may result in loss of total value. There are no standard policy enforcement mechanisms for data acquired through mutual and copyright agreements. In practice, many different types of policies can be enforced in protecting data privacy. Hence there is the need for a unified framework that encapsulates multiple suites of policies to protect the data. We present our vision of an architecture named security automata model (SAM) to enforce privacy-preserving policies and usage restrictions. SAM analyzes the input queries and their outputs to enforce various policies, liberating data owners from the burden of monitoring data access. SAM allows administrators to specify various policies and enforces them to monitor queries and control the data access. Our goal is to address the problems of data usage control and protection through privacy policies that can be defined, enforced, and integrated with the existing access control mechanisms using SAM. In this paper, we lay out the theoretical foundation of SAM, which is based on an automata named Mandatory Result Automata. We also discuss the major challenges of implementing SAM in a real-world database environment as well as ideas to meet such challenges. PMID:26997936
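    A result-monitoring automaton of the kind described sits between the query issuer and the database and may suppress or transform results that violate policy. The toy monitor below is a generic illustration of that idea with invented policies; it is not the SAM or Mandatory Result Automata implementation.

```python
# Toy result-monitoring automaton: a policy function inspects each (query,
# result) pair and decides to pass, redact, or block it. Illustrative only.
def policy(query: str, row: dict):
    if "salary" in query.lower():
        return None                                           # block this result entirely
    if "ssn" in row:
        row = {k: v for k, v in row.items() if k != "ssn"}    # redact a sensitive column
    return row

def monitored_execute(query: str, rows: list) -> list:
    released = []
    for row in rows:
        decision = policy(query, row)
        if decision is not None:
            released.append(decision)
    return released

rows = [{"name": "Ada", "ssn": "123-45-6789", "dept": "R&D"}]
print(monitored_execute("SELECT name, ssn, dept FROM employees", rows))
```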

  3. Transport and Environment Database System (TRENDS): Maritime air pollutant emission modelling

    NASA Astrophysics Data System (ADS)

    Georgakaki, Aliki; Coffey, Robert A.; Lock, Graham; Sorenson, Spencer C.

    This paper reports the development of the maritime module within the framework of the Transport and Environment Database System (TRENDS) project. A detailed database has been constructed for the calculation of energy consumption and air pollutant emissions. Based on an in-house database of commercial vessels kept at the Technical University of Denmark, relationships between the fuel consumption and size of different vessels have been developed, taking into account the fleet's age and service speed. The technical assumptions and factors incorporated in the database are presented, including changes from findings reported in Methodologies for Estimating air pollutant Emissions from Transport (MEET). The database operates on statistical data provided by Eurostat, which describe vessel and freight movements from and towards EU 15 major ports. Data are at port to Maritime Coastal Area (MCA) level, so a bottom-up approach is used. A port to MCA distance database has also been constructed for the purpose of the study. This was the first attempt to use Eurostat maritime statistics for emission modelling, and the problems encountered, since the statistical data collection was not undertaken with a view to this purpose, are mentioned. Examples of the results obtained by the database are presented. These include detailed air pollutant emission calculations for bulk carriers entering the port of Helsinki, as an example of the database operation, and aggregate results for different types of movements for France. Overall estimates of SOx and NOx emissions caused by shipping traffic between the EU 15 countries are in the area of 1 and 1.5 million tonnes, respectively.
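    The bottom-up calculation described, fuel consumption estimated per port-to-MCA movement from vessel size and service speed and then converted to pollutant emissions, reduces to a simple product per movement. All numerical factors below are made-up placeholders, not the TRENDS coefficients.

```python
# Bottom-up emission estimate per vessel movement: fuel burn from distance,
# speed and a size-dependent daily consumption rate, then emission factors.
# Every number here is a placeholder, not a TRENDS value.
MOVEMENTS = [
    # (vessel type, distance_nm, speed_kn, fuel_t_per_day)
    ("bulk carrier", 180.0, 13.0, 22.0),
    ("container",    420.0, 20.0, 55.0),
]
EMISSION_FACTORS = {"SOx": 0.054, "NOx": 0.087}   # tonnes pollutant per tonne fuel (placeholder)

for vessel, distance_nm, speed_kn, fuel_per_day in MOVEMENTS:
    days_at_sea = distance_nm / speed_kn / 24.0
    fuel_burned = days_at_sea * fuel_per_day          # tonnes of fuel for this movement
    emissions = {p: fuel_burned * f for p, f in EMISSION_FACTORS.items()}
    print(vessel, {p: round(t, 3) for p, t in emissions.items()}, "tonnes")
```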

  4. Estimating soil water-holding capacities by linking the Food and Agriculture Organization Soil map of the world with global pedon databases and continuous pedotransfer functions

    NASA Astrophysics Data System (ADS)

    Reynolds, C. A.; Jackson, T. J.; Rawls, W. J.

    2000-12-01

    Spatial soil water-holding capacities were estimated for the Food and Agriculture Organization (FAO) digital Soil Map of the World (SMW) by employing continuous pedotransfer functions (PTF) within global pedon databases and linking these results to the SMW. The procedure first estimated representative soil properties for the FAO soil units by statistical analyses and taxotransfer depth algorithms [Food and Agriculture Organization (FAO), 1996]. The representative soil properties estimated for two depth layers (0-30 and 30-100 cm) included particle-size distribution, dominant soil texture, organic carbon content, coarse fragments, bulk density, and porosity. After representative soil properties for the FAO soil units were estimated, these values were substituted into three different pedotransfer function (PTF) models by Rawls et al. [1982], Saxton et al. [1986], and Batjes [1996a]. The Saxton PTF model was finally selected to calculate available water content because it only required particle-size distribution data and its results closely agreed with the Rawls and Batjes PTF models that used both particle-size distribution and organic matter data. Soil water-holding capacities were then estimated by multiplying the available water content by the soil layer thickness and integrating over an effective crop root depth of 1 m or less (i.e., where shallow impermeable layers were encountered) and another soil depth data layer of 2.5 m or less.
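    The final step described, available water content times layer thickness integrated down to the effective root depth, can be written out directly. The texture-based available-water lookup below is a crude placeholder, not the Saxton et al. [1986] pedotransfer equations used in the study.

```python
# Water-holding capacity as available water content (field capacity minus
# wilting point) integrated over soil layers down to the root depth.
# The texture-based AWC lookup is a placeholder, not the Saxton PTF.
AWC_BY_TEXTURE = {"sand": 0.06, "loam": 0.17, "clay": 0.14}   # m3 water per m3 soil (placeholder)

layers = [
    # (texture, top_m, bottom_m) for the two depth layers used above
    ("loam", 0.0, 0.3),
    ("clay", 0.3, 1.0),
]
root_depth_m = 1.0

capacity_m = 0.0
for texture, top, bottom in layers:
    thickness = max(0.0, min(bottom, root_depth_m) - top)     # clip at the root depth
    capacity_m += AWC_BY_TEXTURE[texture] * thickness

print(f"Plant-available water-holding capacity: {capacity_m * 1000:.0f} mm")
```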

  5. Statistical databases

    SciTech Connect

    Kogalovskii, M.R.

    1995-03-01

    This paper presents a review of problems related to statistical database systems, which are wide-spread in various fields of activity. Statistical databases (SDB) are referred to as databases that consist of data and are used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether the present Database Management Systems (DBMS) satisfy the SDB requirements. Some actual research directions in SDB systems are considered.

  6. Photochemistry of Model Organic Aerosol Systems

    NASA Astrophysics Data System (ADS)

    Mang, S. A.; Bateman, A. P.; Dailo, M.; Do, T.; Nizkorodov, S. A.; Pan, X.; Underwood, J. S.; Walser, M. L.

    2007-05-01

    Up to 90 percent of urban aerosol particles have been shown to contain organic molecules. Reactions of these particles with atmospheric oxidants and/or sunlight result in large changes in their composition, toxicity, and ability to act as cloud condensation nuclei. For this reason, chemistry of model organic aerosol particles initiated by oxidation and direct photolysis is of great interest to atmospheric, climate, and health scientists. Most studies in this area have focused on identifying the products of oxidation of the organic aerosols, while the products of direct photolysis of the resulting molecules remaining in the aerosol particle have been left mostly unexplored. We have explored direct photolytic processes occurring in selected organic aerosol systems using infrared cavity ringdown spectroscopy to identify small gas phase products of photolysis, and mass-spectrometric and photometric techniques to study the condensed phase products. The first model system was secondary organic aerosol formed from the oxidation of several monoterpenes by ozone in the presence and absence of NOx, under different humidities. The second system modeled after oxidatively aged primary organic aerosol particles was a thin film of either alkanes or saturated fatty acids oxidized in several different ways, with the oxidation initiated by ozone, chlorine atom, or OH. In every case, the general conclusion was that the photochemical processing of model organic aerosols is significant. Such direct photolysis processes are believed to age organic aerosol particles on time scales that are short compared to the particles' atmospheric lifetimes.

  7. Gas Chromatography and Mass Spectrometry Measurements and Protocols for Database and Library Development Relating to Organic Species in Support of the Mars Science Laboratory

    NASA Astrophysics Data System (ADS)

    Misra, P.; Garcia, R.; Mahaffy, P. R.

    2010-04-01

    An organic contaminant database and library has been developed for use with the Sample Analysis at Mars (SAM) instrumentation utilizing laboratory-based Gas Chromatography-Mass Spectrometry measurements of pyrolyzed and baked material samples.

  8. TogoTable: cross-database annotation system using the Resource Description Framework (RDF) data model

    PubMed Central

    Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko

    2014-01-01

    TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using a SPARQL query language. Because TogoTable uses RDF, it can integrate annotations from not only the reference database to which the IDs originally belong, but also externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. PMID:24829452

  9. Modeling personnel turnover in the parametric organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A model is developed for simulating the dynamics of a newly formed organization, credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict required staffing profiles to meet functional needs at the desired time. The model can be extended by revisions of the state and transition structure to provide refinements in functional definition for the parametric and extended organization.
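    The linear dynamical system mentioned, headcounts by job function updated each period by transition likelihoods, hiring and attrition, can be written as a one-line state update. The functions, transition matrix and hiring rates below are invented for illustration and are not the report's parameters.

```python
# State update x[t+1] = P.T @ x[t] + hires for headcounts by job function;
# the transition matrix (including an absorbing 'attrition' state) is invented.
import numpy as np

functions = ["analyst", "lead", "attrition"]
# P[i, j] = likelihood of moving from function i to function j in one period.
P = np.array([
    [0.80, 0.15, 0.05],   # analysts: stay, move to lead, leave
    [0.00, 0.90, 0.10],   # leads: stay or leave
    [0.00, 0.00, 1.00],   # attrition is absorbing
])
hires = np.array([3.0, 0.0, 0.0])     # new analysts hired per period

x = np.array([10.0, 2.0, 0.0])        # initial staffing profile
for _ in range(6):                    # project six periods ahead
    x = P.T @ x + hires
print(dict(zip(functions, np.round(x, 1))))
```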

  10. Modeling Virtual Organization Architecture with the Virtual Organization Breeding Methodology

    NASA Astrophysics Data System (ADS)

    Paszkiewicz, Zbigniew; Picard, Willy

    While Enterprise Architecture Modeling (EAM) methodologies become more and more popular, an EAM methodology tailored to the needs of virtual organizations (VO) is still to be developed. Among the most popular EAM methodologies, TOGAF has been chosen as the basis for a new EAM methodology taking into account the characteristics of VOs presented in this paper. In this new methodology, referred to as the Virtual Organization Breeding Methodology (VOBM), concepts developed within the ECOLEAD project, e.g. the concept of a Virtual Breeding Environment (VBE) or the VO creation schema, serve as fundamental elements for the development of VOBM. VOBM is a generic methodology that should be adapted to a given VBE. VOBM defines the structure of VBE and VO architectures in a service-oriented environment, as well as an architecture development method for virtual organizations (ADM4VO). Finally, a preliminary set of tools and methods for VOBM is given in this paper.

  11. A genome-scale metabolic flux model of Escherichia coli K–12 derived from the EcoCyc database

    PubMed Central

    2014-01-01

    advantages can be derived from the combination of model organism databases and flux balance modeling represented by MetaFlux. Interpretation of the EcoCyc database as a flux balance model results in a highly accurate metabolic model and provides a rigorous consistency check for information stored in the database. PMID:24974895
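    Flux balance modeling of the kind referred to above solves a linear program: maximize a biomass-like objective subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction toy network below is invented purely to show the shape of that problem, using scipy's generic LP solver; it is not derived from the EcoCyc model.

```python
# Toy flux balance problem: maximize flux through a 'biomass' reaction subject
# to S @ v = 0 and bounds. The network is invented, not the EcoCyc model.
import numpy as np
from scipy.optimize import linprog

# Metabolites (rows): A, B.  Reactions (columns): uptake -> A, A -> B, B -> biomass.
S = np.array([
    [1.0, -1.0,  0.0],
    [0.0,  1.0, -1.0],
])
bounds = [(0, 10), (0, 1000), (0, 1000)]     # uptake capped at 10 flux units
c = [0.0, 0.0, -1.0]                         # maximize v3 == minimize -v3

result = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", result.x[2])  # limited by the uptake bound (10)
```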

  12. Genome databases

    SciTech Connect

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  13. Modeling the High Speed Research Cycle 2B Longitudinal Aerodynamic Database Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.; Proffitt, M. S.

    1999-01-01

    The data for the longitudinal non-dimensional aerodynamic coefficients in the High Speed Research Cycle 2B aerodynamic database were modeled using polynomial expressions identified with an orthogonal function modeling technique. The discrepancy between the tabular aerodynamic data and the polynomial models was tested and shown to be less than 15 percent for drag, lift, and pitching moment coefficients over the entire flight envelope. Most of this discrepancy was traced to smoothing local measurement noise and to the omission of mass case 5 data in the modeling process. A simulation check case showed that the polynomial models provided a compact and accurate representation of the nonlinear aerodynamic dependencies contained in the HSR Cycle 2B tabular aerodynamic database.
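    The underlying idea, replacing a tabular aerodynamic coefficient with a low-order polynomial in the independent variables, can be illustrated with an ordinary least-squares fit. The data and candidate terms below are synthetic; the study's orthogonal-function term-selection procedure is not reproduced.

```python
# Fit a small polynomial model C(alpha, mach) by least squares; synthetic data,
# not the HSR Cycle 2B tables or the orthogonal-function selection step.
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.uniform(-5, 15, 200)            # angle of attack, deg
mach = rng.uniform(0.3, 2.4, 200)
coeff = 0.02 + 0.11 * alpha + 0.015 * alpha * mach + rng.normal(0, 0.01, 200)

# Candidate polynomial terms: 1, alpha, mach, alpha*mach, alpha^2.
X = np.column_stack([np.ones_like(alpha), alpha, mach, alpha * mach, alpha**2])
beta, *_ = np.linalg.lstsq(X, coeff, rcond=None)

residual = coeff - X @ beta
print("fitted coefficients:", np.round(beta, 4))
print("max |error| as % of data range:",
      round(100 * np.abs(residual).max() / np.ptp(coeff), 2))
```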

  14. A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.

    2015-04-01

    Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc). Grids of models are performed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids are published, and they are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol, and containing a lot of useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.

  15. The Subject-Object Relationship Interface Model in Database Management Systems.

    ERIC Educational Resources Information Center

    Yannakoudakis, Emmanuel J.; Attar-Bashi, Hussain A.

    1989-01-01

    Describes a model that displays structures necessary to map between the conceptual and external levels in database management systems, using an algorithm that maps the syntactic representations of tuples onto semantic representations. A technique for translating tuples into natural language sentences is introduced, and a system implemented in…

  16. A Multiscale Database of Soil Properties for Regional Environmental Quality Modeling in the Western United States

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The USDA-NRCS STATSGO regional soil database can provide generalized soil information for regional-scale modeling, planning and management of soil and water conservation, and assessment of environmental quality. However, the data available in STATSGO can not be readily extracted nor parameterized to...

  17. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and discusses a solution based on integrated modelling of software, use of automatic information extraction tools, web technology and databases.

  18. Database on Pathogen and Indicator Organism Survival in Soils and other Environmental Media

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data on survival of pathogen and indicator organism in soils, sediments, organic waste, and waters represent the key information for evaluating management practices and predicting fate and transport of the microorganisms. Such data are, in general, available, but are spread across thousands of publi...

  19. AERMOD: A Dispersion Model for Industrial Source Applications. Part II: Model Performance against 17 Field Study Databases.

    NASA Astrophysics Data System (ADS)

    Perry, Steven G.; Cimorelli, Alan J.; Paine, Robert J.; Brode, Roger W.; Weil, Jeffrey C.; Venkatram, Akula; Wilson, Robert B.; Lee, Russell F.; Peters, Warren D.

    2005-05-01

    The performance of the American Meteorological Society (AMS) and U.S. Environmental Protection Agency (EPA) Regulatory Model (AERMOD) Improvement Committee's applied air dispersion model against 17 field study databases is described. AERMOD is a steady-state plume model with significant improvements over commonly applied regulatory models. The databases are characterized, and the performance measures are described. Emphasis is placed on statistics that demonstrate the model's abilities to reproduce the upper end of the concentration distribution. This is most important for applied regulatory modeling. The field measurements are characterized by flat and complex terrain, urban and rural conditions, and elevated and surface releases with and without building wake effects. As is indicated by comparisons of modeled and observed concentration distributions, with few exceptions AERMOD's performance is superior to that of the other applied models tested. This is the second of two articles, with the first describing the model formulations.

  20. The Transporter Classification Database

    PubMed Central

    Saier, Milton H.; Reddy, Vamsee S.; Tamang, Dorjee G.; Västermark, Åke

    2014-01-01

    The Transporter Classification Database (TCDB; http://www.tcdb.org) serves as a common reference point for transport protein research. The database contains more than 10 000 non-redundant proteins that represent all currently recognized families of transmembrane molecular transport systems. Proteins in TCDB are organized in a five level hierarchical system, where the first two levels are the class and subclass, the second two are the family and subfamily, and the last one is the transport system. Superfamilies that contain multiple families are included as hyperlinks to the five tier TC hierarchy. TCDB includes proteins from all types of living organisms and is the only transporter classification system that is both universal and recognized by the International Union of Biochemistry and Molecular Biology. It has been expanded by manual curation, contains extensive text descriptions providing structural, functional, mechanistic and evolutionary information, is supported by unique software and is interconnected to many other relevant databases. TCDB is of increasing usefulness to the international scientific community and can serve as a model for the expansion of database technologies. This manuscript describes an update of the database descriptions previously featured in NAR database issues. PMID:24225317
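    The five-level hierarchy described above is encoded in TC identifiers of the form class.subclass.family.subfamily.system (e.g. 2.A.1.1.1). The helper below is a small convenience sketch for unpacking such identifiers, not TCDB software.

```python
# Split a Transporter Classification identifier into its five hierarchy levels.
from typing import NamedTuple

class TCNumber(NamedTuple):
    tc_class: str
    subclass: str
    family: str
    subfamily: str
    system: str

def parse_tc(tc_id: str) -> TCNumber:
    parts = tc_id.split(".")
    if len(parts) != 5:
        raise ValueError(f"expected five dot-separated levels, got {tc_id!r}")
    return TCNumber(*parts)

print(parse_tc("2.A.1.1.1"))   # levels: class, subclass, family, subfamily, system
```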

  1. Mouse genome database 2016

    PubMed Central

    Bult, Carol J.; Eppig, Janan T.; Blake, Judith A.; Kadin, James A.; Richardson, Joel E.

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  2. Mouse genome database 2016.

    PubMed

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  3. Examination of the U.S. EPA’s vapor intrusion database based on models

    PubMed Central

    Yao, Yijun; Shen, Rui; Pennell, Kelly G.; Suuberg, Eric M.

    2013-01-01

    In the United States Environmental Protection Agency (U.S. EPA)’s vapor intrusion (VI) database, there appears to be a trend showing an inverse relationship between the indoor air concentration attenuation factor and the subsurface source vapor concentration. This is inconsistent with the physical understanding in current vapor intrusion models. This paper explores possible reasons for this apparent discrepancy. Soil vapor transport processes occur independently of the actual building entry process, and are consistent with the trends in the database results. A recent EPA technical report provided a list of factors affecting vapor intrusion, and the influence of some of these are explored in the context of the database results. PMID:23293835

  4. Development of a database of organ doses for paediatric and young adult CT scans in the United Kingdom

    PubMed Central

    Kim, K. P.; Berrington de González, A.; Pearce, M. S.; Salotti, J. A.; Parker, L.; McHugh, K.; Craft, A. W.; Lee, C.

    2012-01-01

    Despite great potential benefits, there are concerns about the possible harm from medical imaging including the risk of radiation-related cancer. There are particular concerns about computed tomography (CT) scans in children because both radiation dose and sensitivity to radiation for children are typically higher than for adults undergoing equivalent procedures. As direct empirical data on the cancer risks from CT scans are lacking, the authors are conducting a retrospective cohort study of over 240 000 children in the UK who underwent CT scans. The main objective of the study is to quantify the magnitude of the cancer risk in relation to the radiation dose from CT scans. In this paper, the methods used to estimate typical organ-specific doses delivered by CT scans to children are described. An organ dose database from Monte Carlo radiation transport-based computer simulations using a series of computational human phantoms from newborn to adults for both male and female was established. Organ doses vary with patient size and sex, examination types and CT technical settings. Therefore, information on patient age, sex and examination type from electronic radiology information systems and technical settings obtained from two national surveys in the UK were used to estimate radiation dose. Absorbed doses to the brain, thyroid, breast and red bone marrow were calculated for reference male and female individuals with the ages of newborns, 1, 5, 10, 15 and 20 y for a total of 17 different scan types in the pre- and post-2001 time periods. In general, estimated organ doses were slightly higher for females than males which might be attributed to the smaller body size of the females. The younger children received higher doses in pre-2001 period when adult CT settings were typically used for children. Paediatric-specific adjustments were assumed to be used more frequently after 2001, since then radiation doses to children have often been smaller than those to adults. The

  5. Modeling Personnel Turnover in the Parametric Organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A primary issue in organizing a new parametric cost analysis function is to determine the skill mix and number of personnel required. The skill mix can be obtained by a functional decomposition of the tasks required within the organization and a matrixed correlation with educational or experience backgrounds. The number of personnel is a function of the skills required to cover all tasks, personnel skill background and cross training, the intensity of the workload for each task, migration through various tasks by personnel along a career path, personnel hiring limitations imposed by management and the applicant marketplace, personnel training limitations imposed by management and personnel capability, and the rate at which personnel leave the organization for whatever reason. Faced with the task of relating all of these organizational facets in order to grow a parametric cost analysis (PCA) organization from scratch, it was decided that a dynamic model was required to account for the obvious dynamics of the forming organization. The challenge was to create a model simple enough to remain credible during all phases of organizational development. The model development process was broken down into the activities of determining the tasks required for PCA, determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the dynamic model, implementing the dynamic model, and testing the dynamic model.
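
    A toy sketch of the kind of discrete-time staffing dynamics such a model has to capture (hiring limits and attrition); the rates, caps and structure below are invented for illustration, not the NASA model itself.

        # Toy dynamic staffing model: hires limited by a management cap, attrition each month.
        def simulate_staffing(months, hires_per_month, attrition_rate, start=0, hiring_cap=20):
            staff = start
            history = []
            for _ in range(months):
                staff += min(hires_per_month, max(hiring_cap - staff, 0))  # management-imposed cap
                staff -= staff * attrition_rate                            # personnel leaving
                history.append(staff)
            return history

        print([round(x, 1) for x in simulate_staffing(12, hires_per_month=2, attrition_rate=0.05)])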

  6. 3D Digital Model Database Applied to Conservation and Research of Wooden Construction in China

    NASA Astrophysics Data System (ADS)

    Zheng, Y.

    2013-07-01

    Protected by the Tai-Hang Mountains, Shanxi Province, located in north central China, is a highly prosperous, densely populated valley and considered to be one of the cradles of Chinese civilization. Its continuous habitation and rich culture have given rise to a large number of temple complexes and pavilions. Among these structures, 153 in the southern Shanxi area can be dated to between the Tang dynasty (618-907 C.E.) and the end of the Yuan dynasty (1279-1368 C.E.). The buildings are the best-preserved examples of wooden Chinese architecture in existence, exemplifying historic building technology and displaying highly intricate architectural decoration and detailing. They have survived war, earthquakes, and, in the last hundred years, neglect. In 2005, a decade-long conservation project was initiated by the State Administration of Cultural Heritage of China (SACH) to conserve and document these important buildings. The conservation process requires stabilization, conservation of important features, and, where necessary, partial dismantlement in order to replace unsound structural elements. The project team of CHCC has developed a practical recording system that creates a record of all building components prior to and during the conservation process. We are now establishing a comprehensive database covering all 153 of these early buildings, through which information on the wooden constructions, down to the level of individual components, can easily be entered, browsed and indexed. The database supports comparative studies of these wooden structures and provides important support for the continued conservation of these heritage buildings. For the most important wooden structures, we have established three-dimensional models. By connecting the database with the 3D digital models in ArcGIS, we have developed a 3D Digital Model Database for these cherished buildings. The 3D Digital Model Database helps us set up an integrated information inventory

  7. Significant speedup of database searches with HMMs by search space reduction with PSSM family models

    PubMed Central

    Beckstette, Michael; Homann, Robert; Giegerich, Robert; Kurtz, Stefan

    2009-01-01

    Motivation: Profile hidden Markov models (pHMMs) are currently the most popular modeling concept for protein families. They provide sensitive family descriptors, and sequence database searching with pHMMs has become a standard task in today's genome annotation pipelines. On the downside, searching with pHMMs is computationally expensive. Results: We propose a new method for efficient protein family classification and for speeding up database searches with pHMMs, as is necessary for large-scale analysis scenarios. We employ simpler models of protein families called position-specific scoring matrix family models (PSSM-FMs). For fast database search, we combine full-text indexing, efficient exact p-value computation of PSSM match scores and fast fragment chaining. The resulting method is well suited to prefilter the set of sequences to be searched for subsequent database searches with pHMMs. We achieved a classification performance only marginally inferior to hmmsearch, yet results could be obtained in a fraction of the runtime, with a speedup of >64-fold. In experiments addressing the method's ability to prefilter the sequence space for subsequent database searches with pHMMs, our method reduces the number of sequences to be searched with hmmsearch to only 0.80% of all sequences. The filter is very fast and leads to a total speedup of a factor of 43 over the unfiltered search, while retaining >99.5% of the original results. In a lossless filter setup for hmmsearch on UniProtKB/Swiss-Prot, we observed a speedup of a factor of 92. Availability: The presented algorithms are implemented in the program PoSSuMsearch2, available for download at http://bibiserv.techfak.uni-bielefeld.de/possumsearch2/. Contact: beckstette@zbh.uni-hamburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19828575
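
    A conceptual sketch of the prefilter idea: score each sequence cheaply against a PSSM and forward only high-scoring candidates to the expensive pHMM search. The scoring scheme, matrix and threshold below are simplified placeholders, not PoSSuMsearch2's algorithm.

        # Cheap PSSM prefilter in front of a (notional) pHMM search.
        def pssm_score(seq, pssm):
            """Best ungapped window score of `seq` against a PSSM given as a list of
            per-position {residue: score} dicts."""
            w = len(pssm)
            best = float("-inf")
            for i in range(len(seq) - w + 1):
                s = sum(pssm[j].get(seq[i + j], -1.0) for j in range(w))
                best = max(best, s)
            return best

        pssm = [{"M": 2.0, "L": 1.0}, {"K": 2.0, "R": 1.5}, {"V": 2.0, "I": 1.5}]
        sequences = {"seq1": "MKVAAQL", "seq2": "GGGGGGG"}

        candidates = [name for name, s in sequences.items() if pssm_score(s, pssm) >= 3.0]
        print("sequences forwarded to the pHMM search:", candidates)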

  8. Modelling nitrous oxide emissions from organic soils in Europe

    NASA Astrophysics Data System (ADS)

    Leppelt, Thomas; Dechow, Rene; Gebbert, Sören; Freibauer, Annette

    2013-04-01

    The greenhouse gas emission potential of peatland ecosystems must be quantified for a complete annual emission budget in Europe. The GHG-Europe project aims to improve the modelling capabilities for greenhouse gases, e.g., nitrous oxide. The heterogeneous and event-driven fluxes of nitrous oxide are challenging to model at the European scale, especially with respect to upscaling and parameter estimation. Adequate techniques are therefore needed to create a robust empirical model, and a literature study of nitrous oxide fluxes from organic soils has been carried out. The resulting database contains flux data from boreal and temperate climate zones and covers the land use categories cropland, grassland, forest, natural sites and peat extraction sites. Managed crop- and grassland sites in particular show high emission potential. In general, nitrous oxide emissions increase significantly with deep drainage and intensive nitrogen fertilisation, whereas natural peatland sites with a near-surface groundwater table can act as a nitrous oxide sink. An empirical fuzzy logic model has been applied to predict annual nitrous oxide emissions from organic soils. The calibration results in two separate models with best model performances for bogs and fens, respectively. The derived parameter combinations of these models contain mean groundwater table, nitrogen fertilisation, annual precipitation, air temperature, carbon content and pH value. Influences of the calibrated parameters on nitrous oxide fluxes are verified by several studies in the literature. The extrapolation potential has been tested by cross-validation. Furthermore, the parameter ranges of the calibrated models are compared to the values occurring at the European scale, which avoids unknown systematic errors in the regionalisation. Additionally, a sensitivity analysis specifies the model behaviour for each varying parameter. The upscaling process for European peatland
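
    To illustrate the empirical-modelling step, the sketch below fits annual N2O flux to mean groundwater table and nitrogen fertilisation; a plain least-squares fit stands in for the paper's fuzzy logic model, and all values are invented rather than taken from the literature database.

        # Illustrative empirical fit of annual N2O flux (not the paper's fuzzy logic model).
        import numpy as np

        water_table_cm = np.array([10.0, 25.0, 40.0, 60.0, 80.0])   # mean depth below surface
        n_fert_kg_ha = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # annual N fertilisation
        n2o_kg_ha = np.array([0.5, 2.0, 4.5, 8.0, 12.0])            # hypothetical annual flux

        X = np.column_stack([np.ones_like(water_table_cm), water_table_cm, n_fert_kg_ha])
        coef, *_ = np.linalg.lstsq(X, n2o_kg_ha, rcond=None)
        print("intercept, water-table and fertilisation coefficients:", np.round(coef, 3))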

  9. Using the Reactome Database

    PubMed Central

    Haw, Robin

    2012-01-01

    There is considerable interest in the bioinformatics community in creating pathway databases. The Reactome project (a collaboration between the Ontario Institute for Cancer Research, Cold Spring Harbor Laboratory, New York University Medical Center and the European Bioinformatics Institute) is one such pathway database and collects structured information on all the biological pathways and processes in humans. It is an expert-authored and peer-reviewed, curated collection of well-documented molecular reactions that span the gamut from simple intermediate metabolism to signaling pathways and complex cellular events. This information is supplemented with likely orthologous molecular reactions in mouse, rat, zebrafish, worm and other model organisms. This unit describes how to use the Reactome database to learn the steps of a biological pathway; navigate and browse through the Reactome database; identify the pathways in which a molecule of interest is involved; use the Pathway and Expression analysis tools to search the database for, and visualize, possible connections between a user-supplied experimental data set and Reactome pathways; and use the Species Comparison tool to compare human and model organism pathways. PMID:22700314

  10. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines a 3D data model with an object-oriented organization method and puts forward an object-oriented model for 3D data. The model allows city 3D models to be built quickly with a logical semantic expression, solves the problem of representing city 3D spatial information in which one location carries multiple properties and one property spans multiple locations, designs point, line, polygon and body spatial object structures for a city 3D spatial database, and provides a new approach to building and managing city 3D GIS models.
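
    A rough sketch of the object-oriented organization outlined above: spatial primitives (point, line, polygon, body) linked many-to-many to city objects, so one location can carry several properties and one property can span several locations. Class and field names are invented for illustration.

        # Hypothetical object-oriented layout for city 3D spatial data.
        from dataclasses import dataclass, field

        @dataclass
        class Geometry3D:
            kind: str                      # "point", "line", "polygon" or "body"
            coordinates: list              # list of (x, y, z) tuples

        @dataclass
        class CityObject:
            name: str
            properties: dict
            geometries: list = field(default_factory=list)  # one object, many locations

        road = CityObject("Ring Road", {"class": "road"},
                          [Geometry3D("line", [(0, 0, 0), (100, 5, 0)])])
        print(road)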

  11. Animal model integration to AutDB, a genetic database for autism

    PubMed Central

    2011-01-01

    Background In the post-genomic era, multi-faceted research on complex disorders such as autism has generated diverse types of molecular information related to its pathogenesis. The rapid accumulation of putative candidate genes/loci for Autism Spectrum Disorders (ASD) and ASD-related animal models poses a major challenge for systematic analysis of their content. We previously created the Autism Database (AutDB) to provide a publicly available web portal for ongoing collection, manual annotation, and visualization of genes linked to ASD. Here, we describe the design, development, and integration of a new module within AutDB for ongoing collection and comprehensive cataloguing of ASD-related animal models. Description As with the original AutDB, all data is extracted from published, peer-reviewed scientific literature. Animal models are annotated with a new standardized vocabulary of phenotypic terms developed by our researchers, which is designed to reflect the diverse clinical manifestations of ASD. The new Animal Model module is seamlessly integrated into AutDB for dissemination of diverse information related to ASD. Animal model entries within the new module are linked to corresponding candidate genes in the original "Human Gene" module of the resource, thereby allowing for cross-modal navigation between gene models and human gene studies. Although the current release of the Animal Model module is restricted to mouse models, it was designed with an expandable framework which can easily incorporate additional species and non-genetic etiological models of autism in the future. Conclusions Importantly, this modular ASD database provides a platform from which data mining, bioinformatics, and/or computational biology strategies may be adopted to develop predictive disease models that may offer further insights into the molecular underpinnings of this disorder. It also serves as a general model for disease-driven databases curating phenotypic characteristics of
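
    A hypothetical sketch of the cross-module link described above: an animal-model entry references a human candidate gene record, enabling navigation in both directions. All class names, fields and example values are invented for illustration, not AutDB's actual schema.

        # Hypothetical cross-module link between animal models and human candidate genes.
        from dataclasses import dataclass, field

        @dataclass
        class HumanGene:
            symbol: str
            evidence: str
            animal_models: list = field(default_factory=list)

        @dataclass
        class AnimalModel:
            model_id: str
            species: str
            phenotype_terms: list
            gene: HumanGene

        shank3 = HumanGene("SHANK3", "rare single-gene variant")
        model = AnimalModel("AM-001", "mouse", ["reduced social interaction"], shank3)
        shank3.animal_models.append(model)
        print(shank3.symbol, "->", [m.model_id for m in shank3.animal_models])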

  12. Acquisition of Seco Creek GIS database and its use in water quality models

    SciTech Connect

    Steers, C.A.; Steiner, M.; Taylor, B.

    1993-02-01

    The Seco Creek Water Quality Demonstration Project covers 1,700,670 acres in parts of Bandera, Frio, Medina and Uvalde Counties in south central Texas. The Seco Creek Database was constructed as part of the Soil Conservation Service's National Water Quality Program to develop hydrologic tools that measure the effects of agricultural nonpoint source pollution and to demonstrate the usefulness of GIS in natural resources management. This project will be part of a GRASS-Water Quality Model Interface which will incorporate watershed models with water quality planning and implementation by January of 1994. The Seco Creek Demonstration Area is the sole water supply for 1.3 million people in the San Antonio area. The database constructed for the project will help maintain the excellent quality of the water that flows directly into the Edwards Aquifer. The database consists of several vector and raster layers, including SSURGO-quality soils, elevation, roads and streams, as well as detailed data on field ownership, cropping and grazing practices and other land uses. This paper describes the development and planned uses of the Seco Creek Database.

  13. Measuring the effects of distributed database models on transaction availability measures

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, data replication, and system reliability are key factors in determining the availability measures for transactions in distributed database systems. In order to simplify the evaluation of these measures, database designers and researchers tend to make unrealistic assumptions about these factors. Here, the effect of such assumptions on the computational complexity and accuracy of such evaluations is investigated. A database system is represented with five parameters related to the above factors. Probabilistic analysis is employed to evaluate the availability of read-one and read-write transactions. Both the read-one/write-all and the majority-read/majority-write replication control policies are considered. It is concluded that transaction availability is more sensitive to variations in degrees of replication, less sensitive to data distribution, and insensitive to reliability variations in a heterogeneous system. The computational complexity of the evaluations is found to be mainly determined by the chosen distributed database model, while the accuracy of the results is much less dependent on the model chosen.
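
    A back-of-envelope sketch of the availability quantities analysed above, assuming n independent replicas that are each available with probability p; the replica counts and probabilities are illustrative, not the paper's parameterisation.

        # Availability under read-one/write-all and majority-quorum policies.
        from math import comb

        def read_one_write_all(p, n):
            read = 1 - (1 - p) ** n          # any one replica suffices for a read
            write = p ** n                   # all replicas must be up for a write
            return read, write

        def majority(p, n):
            k_min = n // 2 + 1               # strict majority of replicas
            return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

        print(read_one_write_all(0.95, 3))
        print(majority(0.95, 3))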

  14. The feature-based modeling of standard tooth in a dental prosthetic database.

    PubMed

    Song, Ya-Li; Li, Jia; Huang, Tian; Gao, Ping

    2005-01-01

    This paper presents a feature-based approach that creates standard tooth models in a database to provide the topological construction of the model for dental CAD. The approach arises from the basic idea that every tooth has its individual features and is implemented in three steps. In the first step, the features of teeth are defined according to oral anatomy. In the second step, NURBS surfaces are applied so that the forms of standard teeth can be represented by establishing the topological relationship of the features. These feature-based surfaces can be locally controlled, which guarantees the accuracy of dental design. In the last step, feature curves are introduced to describe the topological construction of dental ridges and grooves. Through these curves, the occlusal surface can be changed globally, simplifying dental design. The work concludes with the establishment of a standard database composed of 28 standard models constructed from feature-based surfaces and feature curves. PMID:17281869

  15. A scalable database model for multiparametric time series: a volcano observatory case study

    NASA Astrophysics Data System (ADS)

    Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea

    2014-05-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. By the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the sampling period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization makes it possible to perform operations such as query and visualization across many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources, such as ASCII, Excel, ODBC (Open DataBase Connectivity) and files accessible from the Internet (web pages, XML). In particular, the loader layer checks the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which make it possible to query different time series over a specified time range or to follow the real-time signal acquisition, according to a user data access policy.
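
    A minimal sketch of how heterogeneous time series can be standardized in one relational layout, in the spirit of the framework described above; the table and column names, station code and sample values are invented, not TSDSystem's actual schema.

        # Hypothetical relational layout for multiparametric time series.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE series (
            series_id  INTEGER PRIMARY KEY,
            station    TEXT,
            parameter  TEXT,         -- e.g. 'tremor_rms', 'SO2_flux'
            sampling_s REAL          -- sampling period in seconds, NULL if irregular
        );
        CREATE TABLE sample (
            series_id INTEGER REFERENCES series(series_id),
            t_utc     TEXT,          -- common time scale for synchronised queries
            value     REAL
        );
        CREATE INDEX idx_sample_time ON sample(series_id, t_utc);
        """)
        conn.execute("INSERT INTO series VALUES (1, 'STATION_A', 'tremor_rms', 60.0)")
        conn.execute("INSERT INTO sample VALUES (1, '2014-05-01T00:00:00Z', 0.42)")
        row = conn.execute("SELECT value FROM sample WHERE series_id = 1").fetchone()
        print(row)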

  16. The Chinchilla Research Resource Database: resource for an otolaryngology disease model

    PubMed Central

    Shimoyama, Mary; Smith, Jennifer R.; De Pons, Jeff; Tutaj, Marek; Khampang, Pawjai; Hong, Wenzhou; Erbe, Christy B.; Ehrlich, Garth D.; Bakaletz, Lauren O.; Kerschner, Joseph E.

    2016-01-01

    The long-tailed chinchilla (Chinchilla lanigera) is an established animal model for diseases of the inner and middle ear, among others. In particular, chinchilla is commonly used to study diseases involving viral and bacterial pathogens and polymicrobial infections of the upper respiratory tract and the ear, such as otitis media. The value of the chinchilla as a model for human diseases prompted the sequencing of its genome in 2012 and the more recent development of the Chinchilla Research Resource Database (http://crrd.mcw.edu) to provide investigators with easy access to relevant datasets and software tools to enhance their research. The Chinchilla Research Resource Database contains a complete catalog of genes for chinchilla and, for comparative purposes, human. Chinchilla genes can be viewed in the context of their genomic scaffold positions using the JBrowse genome browser. In contrast to the corresponding records at NCBI, individual gene reports at CRRD include functional annotations for Disease, Gene Ontology (GO) Biological Process, GO Molecular Function, GO Cellular Component and Pathway assigned to chinchilla genes based on annotations from the corresponding human orthologs. Data can be retrieved via keyword and gene-specific searches. Lists of genes with similar functional attributes can be assembled by leveraging the hierarchical structure of the Disease, GO and Pathway vocabularies through the Ontology Search and Browser tool. Such lists can then be further analyzed for commonalities using the Gene Annotator (GA) Tool. All data in the Chinchilla Research Resource Database is freely accessible and downloadable via the CRRD FTP site or using the download functions available in the search and analysis tools. The Chinchilla Research Resource Database is a rich resource for researchers using, or considering the use of, chinchilla as a model for human disease. Database URL: http://crrd.mcw.edu PMID:27173523

  17. The Chinchilla Research Resource Database: resource for an otolaryngology disease model.

    PubMed

    Shimoyama, Mary; Smith, Jennifer R; De Pons, Jeff; Tutaj, Marek; Khampang, Pawjai; Hong, Wenzhou; Erbe, Christy B; Ehrlich, Garth D; Bakaletz, Lauren O; Kerschner, Joseph E

    2016-01-01

    The long-tailed chinchilla (Chinchilla lanigera) is an established animal model for diseases of the inner and middle ear, among others. In particular, chinchilla is commonly used to study diseases involving viral and bacterial pathogens and polymicrobial infections of the upper respiratory tract and the ear, such as otitis media. The value of the chinchilla as a model for human diseases prompted the sequencing of its genome in 2012 and the more recent development of the Chinchilla Research Resource Database (http://crrd.mcw.edu) to provide investigators with easy access to relevant datasets and software tools to enhance their research. The Chinchilla Research Resource Database contains a complete catalog of genes for chinchilla and, for comparative purposes, human. Chinchilla genes can be viewed in the context of their genomic scaffold positions using the JBrowse genome browser. In contrast to the corresponding records at NCBI, individual gene reports at CRRD include functional annotations for Disease, Gene Ontology (GO) Biological Process, GO Molecular Function, GO Cellular Component and Pathway assigned to chinchilla genes based on annotations from the corresponding human orthologs. Data can be retrieved via keyword and gene-specific searches. Lists of genes with similar functional attributes can be assembled by leveraging the hierarchical structure of the Disease, GO and Pathway vocabularies through the Ontology Search and Browser tool. Such lists can then be further analyzed for commonalities using the Gene Annotator (GA) Tool. All data in the Chinchilla Research Resource Database is freely accessible and downloadable via the CRRD FTP site or using the download functions available in the search and analysis tools. The Chinchilla Research Resource Database is a rich resource for researchers using, or considering the use of, chinchilla as a model for human disease. Database URL: http://crrd.mcw.edu. PMID:27173523

  18. The origin and evolution of model organisms

    NASA Technical Reports Server (NTRS)

    Hedges, S. Blair

    2002-01-01

    The phylogeny and timescale of life are becoming better understood as the analysis of genomic data from model organisms continues to grow. As a result, discoveries are being made about the early history of life and the origin and development of complex multicellular life. This emerging comparative framework and the emphasis on historical patterns is helping to bridge barriers among organism-based research communities.

  19. Database of the United States Coal Pellet Collection of the U.S. Geological Survey Organic Petrology Laboratory

    USGS Publications Warehouse

    Deems, Nikolaus J.; Hackley, Paul C.

    2012-01-01

    The Organic Petrology Laboratory (OPL) of the U.S. Geological Survey (USGS) Eastern Energy Resources Science Center in Reston, Virginia, contains several thousand processed coal sample materials that were loosely organized in laboratory drawers for the past several decades. The majority of these were prepared as 1-inch-diameter particulate coal pellets (more than 6,000 pellets; one sample usually was prepared as two pellets, although some samples were prepared in as many as four pellets), which were polished and used in reflected light petrographic studies. These samples represent the work of many scientists from the 1970s to the present, most notably Ron Stanton, who managed the OPL until 2001 (see Warwick and Ruppert, 2005, for a comprehensive bibliography of Ron Stanton's work). The purpose of the project described herein was to organize and catalog the U.S. part of the petrographic sample collection into a comprehensive database (available with this report as a Microsoft Excel file) and to compile and list published studies associated with the various sample sets. Through this work, the extent of the collection is publicly documented as a resource and sample library available to other scientists and researchers working in U.S. coal basins previously studied by organic petrologists affiliated with the USGS. Other researchers may obtain samples in the OPL collection on loan at the discretion of the USGS authors listed in this report and its associated Web page.

  20. Putting "Organizations" into an Organization Theory Course: A Hybrid CAO Model for Teaching Organization Theory

    ERIC Educational Resources Information Center

    Hannah, David R.; Venkatachary, Ranga

    2010-01-01

    In this article, the authors present a retrospective analysis of an instructor's multiyear redesign of a course on organization theory into what is called a hybrid Classroom-as-Organization model. It is suggested that this new course design served to apprentice students to function in quasi-real organizational structures. The authors further argue…

  1. Toxicity of halogenated organic compounds. (Latest citations from the NTIS bibliographic database). Published Search

    SciTech Connect

    Not Available

    1993-09-01

    The bibliography contains citations concerning health and environmental effects of halogenated organic compounds. Topics include laboratory and field investigations regarding bioaccumulation and concentration, metabolic aspects, and specific site studies in industrial and commercial operations. Pesticides, solvents, and a variety of industrial compounds are discussed. (Contains 250 citations and includes a subject term index and title list.)

  2. Research proceedings on amphibian model organisms

    PubMed Central

    LIU, Lu-Sha; ZHAO, Lan-Ying; WANG, Shou-Hong; JIANG, Jian-Ping

    2016-01-01

    Model organisms have long been important in biology and medicine due to their specific characteristics. Amphibians, especially Xenopus, play key roles in answering fundamental questions on developmental biology, regeneration, genetics, and toxicology due to their large and abundant eggs, as well as their versatile embryos, which can be readily manipulated and developed in vivo. Furthermore, amphibians have also proven to be of considerable benefit in human disease research due to their conserved cellular developmental and genomic organization. This review gives a brief introduction on the progress and limitations of these animal models in biology and human disease research, and discusses the potential and challenge of Microhyla fissipes as a new model organism. PMID:27469255

  3. Research proceedings on amphibian model organisms.

    PubMed

    Liu, Lu-Sha; Zhao, Lan-Ying; Wang, Shou-Hong; Jiang, Jian-Ping

    2016-07-18

    Model organisms have long been important in biology and medicine due to their specific characteristics. Amphibians, especially Xenopus, play key roles in answering fundamental questions on developmental biology, regeneration, genetics, and toxicology due to their large and abundant eggs, as well as their versatile embryos, which can be readily manipulated and developed in vivo. Furthermore, amphibians have also proven to be of considerable benefit in human disease research due to their conserved cellular developmental and genomic organization. This review gives a brief introduction on the progress and limitations of these animal models in biology and human disease research, and discusses the potential and challenge of Microhyla fissipes as a new model organism. PMID:27469255

  4. Leaf respiration (GlobResp) - global trait database supports Earth System Models

    DOE PAGES Beta

    Wullschleger, Stan D.; Warren, Jeffrey; Thornton, Peter E.

    2015-03-20

    Here we detail how Atkin and his colleagues compiled a global database (GlobResp) that details rates of leaf dark respiration and associated traits from sites that span Arctic tundra to tropical forests. This compilation builds upon earlier research (Reich et al., 1998; Wright et al., 2006) and was supplemented by recent field campaigns and unpublished data. In keeping with other trait databases, GlobResp provides insights on how physiological traits, especially rates of dark respiration, vary as a function of environment and how that variation can be used to inform terrestrial biosphere models and land surface components of Earth System Models. Although an important component of plant and ecosystem carbon (C) budgets (Wythers et al., 2013), respiration has only limited representation in models. Seen through the eyes of a plant scientist, Atkin et al. (2015) give readers a unique perspective on the climatic controls on respiration, thermal acclimation and evolutionary adaptation of dark respiration, and insights into the covariation of respiration with other leaf traits. We find there is ample evidence that once large databases are compiled, like GlobResp, they can reveal new knowledge of plant function and provide a valuable resource for hypothesis testing and model development.

  5. Leaf respiration (GlobResp) - global trait database supports Earth System Models

    SciTech Connect

    Wullschleger, Stan D.; Warren, Jeffrey; Thornton, Peter E.

    2015-03-20

    Here we detail how Atkin and his colleagues compiled a global database (GlobResp) that details rates of leaf dark respiration and associated traits from sites that span Arctic tundra to tropical forests. This compilation builds upon earlier research (Reich et al., 1998; Wright et al., 2006) and was supplemented by recent field campaigns and unpublished data. In keeping with other trait databases, GlobResp provides insights on how physiological traits, especially rates of dark respiration, vary as a function of environment and how that variation can be used to inform terrestrial biosphere models and land surface components of Earth System Models. Although an important component of plant and ecosystem carbon (C) budgets (Wythers et al., 2013), respiration has only limited representation in models. Seen through the eyes of a plant scientist, Atkin et al. (2015) give readers a unique perspective on the climatic controls on respiration, thermal acclimation and evolutionary adaptation of dark respiration, and insights into the covariation of respiration with other leaf traits. We find there is ample evidence that once large databases are compiled, like GlobResp, they can reveal new knowledge of plant function and provide a valuable resource for hypothesis testing and model development.

  6. Data extraction tool and colocation database for satellite and model product evaluation (Invited)

    NASA Astrophysics Data System (ADS)

    Ansari, S.; Zhang, H.; Privette, J. L.; Del Greco, S.; Urzen, M.; Pan, Y.; Cook, R. B.; Wilson, B. E.; Wei, Y.

    2009-12-01

    The Satellite Product Evaluation Center (SPEC) is an ongoing project to integrate operational monitoring of data products from satellite and model analysis, with support for quantitative calibration, validation and algorithm improvement. The system uniquely allows scientists and others to rapidly access, subset, visualize, statistically compare and download multi-temporal data from multiple in situ, satellite, weather radar and model sources without reference to native data and metadata formats, packaging or physical location. Although still in initial development, the SPEC database and services will contain a wealth of integrated data for evaluation, validation, and discovery science activities across many different disciplines. The SPEC data extraction architecture departs from traditional dataset and research driven approaches through the use of standards and relational database technology. The NetCDF for Java API is used as a framework for data decoding and abstraction. The data are treated as generic feature types (such as Grid or Swath) as defined by the NetCDF Climate and Forecast (CF) metadata conventions. Colocation data for various field measurement networks, such as the Climate Reference Network (CRN) and Ameriflux network, are extracted offline, from local disk or distributed sources. The resulting data subsets are loaded into a relational database for fast access. URL-based (Representational State Transfer (REST)) web services are provided for simple database access to application programmers and scientists. SPEC supports broad NOAA, U.S. Global Change Research Program (USGCRP) and World Climate Research Programme (WCRP) initiatives including the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and NOAA’s Climate Data Record (CDR) programs. SPEC is a collaboration between NOAA’s National Climatic Data Center (NCDC) and DOE’s Oak Ridge National Laboratory (ORNL). In this presentation we will describe the data extraction

  7. S-World: A high resolution global soil database for simulation modelling (Invited)

    NASA Astrophysics Data System (ADS)

    Stoorvogel, J. J.

    2013-12-01

    There is an increasing call for high resolution soil information at the global level. A good example for such a call is the Global Gridded Crop Model Intercomparison carried out within AgMIP. While local studies can make use of surveying techniques to collect additional data, this is practically impossible at the global level. It is therefore important to rely on legacy data like the Harmonized World Soil Database. Several efforts do exist that aim at the development of global gridded soil property databases. These estimates of the variation of soil properties can be used to assess, e.g., global soil carbon stocks. However, they do not allow for simulation runs with, e.g., crop growth simulation models, as these models require a description of the entire pedon rather than a few soil properties. This study provides the required quantitative description of pedons at a 1 km resolution for simulation modelling. It uses the Harmonized World Soil Database (HWSD) for the spatial distribution of soil types, the ISRIC-WISE soil profile database to derive information on soil properties per soil type, and a range of co-variables on topography, climate, and land cover to further disaggregate the available data. The methodology aims to take stock of these available data. The soil database is developed in five main steps. Step 1: All 148 soil types are ordered on the basis of their expected topographic position using e.g., drainage, salinization, and pedogenesis. Using the topographic ordering and combining the HWSD with a digital elevation model allows for the spatial disaggregation of the composite soil units. This results in a new soil map with homogeneous soil units. Step 2: The ranges of major soil properties for the topsoil and subsoil of each of the 148 soil types are derived from the ISRIC-WISE soil profile database. Step 3: A model of soil formation is developed that focuses on the basic conceptual question where we are within the range of a particular soil property
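
    A sketch of the disaggregation idea in Step 1, under stated assumptions: within one composite map unit, pixels are ranked by elevation and the unit's soil types, ordered by expected topographic position, are assigned in proportion to their reported shares. The soil types, shares and elevations below are invented placeholders.

        # Toy topographic disaggregation of a composite soil map unit.
        import numpy as np

        elevation = np.array([820, 450, 610, 980, 300, 710])      # pixel elevations, m
        soil_types = ["Leptosol", "Cambisol", "Gleysol"]           # ordered high -> low position
        shares = np.array([0.3, 0.5, 0.2])                         # composition of the map unit

        order = np.argsort(-elevation)                             # highest pixels first
        counts = np.round(shares * len(elevation)).astype(int)
        assignment = np.empty(len(elevation), dtype=object)
        start = 0
        for soil, n in zip(soil_types, counts):
            assignment[order[start:start + n]] = soil
            start += n
        assignment[order[start:]] = soil_types[-1]                 # absorb any rounding remainder
        print(list(zip(elevation.tolist(), assignment.tolist())))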

  8. Thermodynamic modeling for organic solid precipitation

    SciTech Connect

    Chung, T.H.

    1992-12-01

    A generalized predictive model based on thermodynamic principles of solid-liquid phase equilibrium has been developed for organic solid precipitation. The model takes into account the effects of temperature, composition, and activity coefficient on the solubility of wax and asphaltenes in organic solutions. The solid-liquid equilibrium K-value is expressed as a function of the heat of melting, melting point temperature, solubility parameter, and the molar volume of each component in the solution. All these parameters have been correlated with molecular weight. Thus, the model can be applied to crude oil systems. The model has been tested with experimental data for wax formation and asphaltene precipitation. The predicted wax appearance temperature is very close to the measured temperature. The model can not only match the measured asphaltene solubility data but can also be used to predict the solubility of asphaltene in organic solvents or crude oils. The model assumes that asphaltenes are dissolved in oil in a true liquid state, not in colloidal suspension, and the precipitation-dissolution process is reversible by changing thermodynamic conditions. The model is thermodynamically consistent and has no ambiguous assumptions.
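
    A minimal sketch of the textbook solid-liquid equilibrium relation that models of this class build on, combining the van't Hoff-type ideal solubility term with a regular-solution activity coefficient from solubility parameters; the heat of melting, melting point, molar volume and solubility parameters used below are hypothetical, not the paper's correlations with molecular weight.

        # Textbook solid-liquid equilibrium sketch (illustrative parameter values).
        from math import exp

        R = 8.314  # J/(mol K)

        def ideal_solubility(dH_melt, T_melt, T):
            """Mole-fraction product x*gamma from the van't Hoff-type relation."""
            return exp(-(dH_melt / R) * (1.0 / T - 1.0 / T_melt))

        def activity_coeff_regular(v_molar, delta_solute, delta_solvent, T):
            """Regular-solution activity coefficient from solubility parameters, (J/m^3)^0.5."""
            return exp(v_molar * (delta_solute - delta_solvent) ** 2 / (R * T))

        x_gamma = ideal_solubility(dH_melt=50_000.0, T_melt=330.0, T=300.0)   # hypothetical wax
        gamma = activity_coeff_regular(3e-4, 18_000.0, 16_000.0, 300.0)
        print(f"predicted solubility x = {x_gamma / gamma:.4f}")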

  9. Global database of InSAR earthquake source models: A tool for independently assessing seismic catalogues

    NASA Astrophysics Data System (ADS)

    Ferreira, A. M.; Weston, J. M.; Funning, G. J.

    2011-12-01

    Earthquake source models are routinely determined using seismic data and are reported in many seismic catalogues, such as the Global Centroid Moment Tensor (GCMT) catalogue. Recent advances in space geodesy, such as InSAR, have enabled the estimation of earthquake source parameters from the measurement of deformation of the Earth's surface, independently of seismic information. The absence of an earthquake catalogue based on geodetic data prompted the compilation of a large InSAR database of CMT parameters from the literature (Weston et al., 2011, hereafter referred to as the ICMT database). Information provided in published InSAR studies of earthquakes is used to obtain earthquake source parameters, and equivalent CMT parameters. Multiple studies of the same earthquake are included in the database, as they are valuable to assess uncertainties in source models. Here, source parameters for 70 earthquakes in an updated version of the ICMT database are compared with those reported in global and regional seismic catalogues. There is overall good agreement between parameters, particularly in fault strike, dip and rake. However, InSAR centroid depths are systematically shallower (5-10 km) than those in the EHB catalogue, but this is reduced for depths from inversions of InSAR data that use a layered half-space. Estimates of the seismic moment generally agree well between the two datasets, but for thrust earthquakes there is a slight tendency for the InSAR-determined seismic moment to be larger. Centroid locations from the ICMT database are in good agreement with those from regional seismic catalogues with a median distance of ~6 km between them, which is smaller than for comparisons with global catalogues (17.0 km and 9.2 km for the GCMT and ISC catalogues, respectively). Systematic tests of GCMT-like inversions have shown that similar mislocations occur for several different degree 20 Earth models (Ferreira et al., 2011), suggesting that higher resolution Earth models

  10. Methodology for creating three-dimensional terrain databases for use in IR signature modeling

    NASA Astrophysics Data System (ADS)

    Williams, Bryan L.; Pickard, J. W., Jr.

    1996-06-01

    This paper describes a methodology which has been successfully used to create high fidelity three-dimensional infrared (IR) signature models of terrain backgrounds for use in digital simulations by the U.S. Army Missile Command. Topics discussed include (1) derivation of database fidelity and resolution requirements based upon system parameters, (2) use of existing digital elevation maps (DEMs), (3) generation of digital elevation maps from stereo aerial and satellite imagery, and (4) classification of ground cover materials.

  11. Modeling organic solvents permeation through protective gloves.

    PubMed

    Chao, Keh-Ping; Wang, Ven-Shing; Lee, Pak-Hing

    2004-02-01

    Several researchers have studied the diffusion of organic solvents through chemical protective gloves and have estimated the diffusion coefficients by using various models. In this study, permeation experiments of benzene, toluene, and styrene through nitrile and Neoprene gloves were conducted using the ASTM F-739 standard test method. The diffusion coefficients were estimated using several models from the literature. Using a one-dimensional diffusion equation based on Fick's second law and the estimated diffusion coefficients, the permeation concentrations were simulated and compared with the experimental results. The modeling results indicated that the solubility of the solvent in the glove materials obtained by immersion tests was not an appropriate boundary condition for organic solvent permeation through the polymer gloves. The modeling work of this study will assist industrial hygienists in assessing workers' exposure to chemicals through chemical protective gloves. PMID:15204879
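
    A hedged sketch of the kind of one-dimensional Fick's-second-law calculation described above, solved with a simple explicit finite-difference scheme; the diffusion coefficient, membrane thickness and boundary concentrations are illustrative values, not the paper's measured parameters.

        # 1-D explicit finite-difference solution of Fick's second law for membrane permeation.
        import numpy as np

        D = 1e-11          # diffusion coefficient, m^2/s (illustrative)
        L = 0.4e-3         # glove thickness, m
        nx, dt = 50, 0.5   # grid points and time step, s
        dx = L / (nx - 1)
        assert D * dt / dx**2 < 0.5, "explicit scheme stability condition"

        c = np.zeros(nx)
        c[0] = 1.0         # challenge side held at unit (saturation) concentration

        flux_out = []
        for _ in range(int(3600 / dt)):                     # simulate one hour
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[-1] = 0.0                                     # collection side kept near zero
            flux_out.append(D * (c[-2] - c[-1]) / dx)       # flux into the collection medium

        print(f"normalised breakthrough flux after 1 h: {flux_out[-1]:.3e}")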

  12. Databases for solar energetic particle models: methodical uncertainties and technical errors

    NASA Astrophysics Data System (ADS)

    Mottl, D.; Nymmik, R.

    Quite a number of models have been developed recently to describe the solar energetic proton fluxes, which constitute a very important source of radiation hazard in space. The models make use of not only different methods, but also different experimental databases. Therefore, the credibility of the models is essentially defined by not only the adequacy of the preferred methods, but also the reliability of the databases. The results of a comparative analysis of the IMP-8, GOES-6, GOES-7 and METEOR-measured proton flux databases of cycle 22 are reported. The comparative analysis results are used together with data on the compatibility of the measured energy spectra with ground-based neutron monitor measurements. Significant methodological errors have been found in the particle fluxes measured with different instruments. The systematic technical errors in individual energy channels reach a factor of 5 (the 15-44 MeV channel of GOES-6). The reliability of the corrections introduced into the measured fluxes, as well as of the techniques for scaling the measured differential fluxes to the calculated integral fluxes, is discussed. The GOES-7-measured (uncorrected) differential fluxes and the METEOR-measured integral fluxes are shown to be the most reliable.

  13. Sediment-Hosted Copper Deposits of the World: Deposit Models and Database

    USGS Publications Warehouse

    Cox, Dennis P.; Lindsey, David A.; Singer, Donald A.; Diggles, Michael F.

    2003-01-01

    Introduction This publication contains four descriptive models and four grade-tonnage models for sediment hosted copper deposits. Descriptive models are useful in exploration planning and resource assessment because they enable the user to identify deposits in the field and to identify areas on geologic and geophysical maps where deposits could occur. Grade and tonnage models are used in resource assessment to predict the likelihood of different combinations of grades and tonnages that could occur in undiscovered deposits in a specific area. They are also useful in exploration in deciding what deposit types meet the economic objectives of the exploration company. The models in this report supersede the sediment-hosted copper models in USGS Bulletin 1693 (Cox, 1986, and Mosier and others, 1986) and are subdivided into a general type and three subtypes. The general model is useful in classifying deposits whose features are obscured by metamorphism or are otherwise poorly described, and for assessing regions in which the geologic environments are poorly understood. The three subtypes are based on differences in deposit form and environments of deposition. These differences are described under subtypes in the general model. Deposit models are based on the descriptions of geologic environments and physical characteristics, and on metal grades and tonnages of many individual deposits. Data used in this study are presented in a database representing 785 deposits in nine continents. This database was derived partly from data published by Kirkham and others (1994) and from new information in recent publications. To facilitate the construction of grade and tonnage models, the information, presented by Kirkham in disaggregated form, was brought together to provide a single grade and a single tonnage for each deposit. Throughout the report individual deposits are defined as being more than 2,000 meters from the nearest adjacent deposit. The deposit models are presented here as

  14. A database and tool for boundary conditions for regional air quality modeling: description and evaluation

    NASA Astrophysics Data System (ADS)

    Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.

    2014-02-01

    Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying lateral boundary conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2001-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone and carbon monoxide vertical profiles. The results show performance is largely within uncertainty estimates for ozone from the Ozone Monitoring Instrument and carbon monoxide from the Measurements Of Pollution In The Troposphere (MOPITT), but there were some notable biases compared with Tropospheric Emission Spectrometer (TES) ozone. Compared with TES, our ozone predictions are high-biased in the upper troposphere, particularly in the south during January. This publication documents the global simulation database, the tool for conversion to LBC, and the evaluation of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.

  15. Experiment Databases

    NASA Astrophysics Data System (ADS)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Besides running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queryable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  16. Very fast road database verification using textured 3D city models obtained from airborne imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV images. This algorithm contains two processes, which exchange input and output, but run largely independently of each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of road. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect), and the second describes the state of its underlying road model (applicable, not applicable). Based on the Dempster-Shafer Theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map, followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. Depending on the time issue and availability of a geo-database for buildings, the urban terrain reconstruction procedure has semantic models
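
    An illustrative sketch of Dempster's rule of combination over the frame {correct, incorrect}, with mass on the full frame playing the role of the "unknown" state mentioned above; the input mass values are invented, not outputs of the described methods.

        # Dempster's rule of combination for two mass functions over one frame.
        from itertools import product

        THETA = frozenset({"correct", "incorrect"})

        def combine(m1, m2):
            """Combine two mass functions defined on subsets of THETA."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        m_road  = {frozenset({"correct"}): 0.6, frozenset({"incorrect"}): 0.1, THETA: 0.3}
        m_model = {frozenset({"correct"}): 0.5, THETA: 0.5}
        print(combine(m_road, m_model))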

  17. Identifying mouse models for skin cancer using the Mouse Tumor Biology Database.

    PubMed

    Begley, Dale A; Krupke, Debra M; Neuhauser, Steven B; Richardson, Joel E; Schofield, Paul N; Bult, Carol J; Eppig, Janan T; Sundberg, John P

    2014-10-01

    In recent years, the scientific community has generated an ever-increasing amount of data from a growing number of animal models of human cancers. Much of these data come from genetically engineered mouse models. Identifying appropriate models for skin cancer and related relevant genetic data sets from an expanding pool of widely disseminated data can be a daunting task. The Mouse Tumor Biology Database (MTB) provides an electronic archive, search and analysis system that can be used to identify dermatological mouse models of cancer, retrieve model-specific data and analyse these data. In this report, we detail MTB's contents and capabilities, together with instructions on how to use MTB to search for skin-related tumor models and associated data. PMID:25040013

  18. Biophysical Modeling of Respiratory Organ Motion

    NASA Astrophysics Data System (ADS)

    Werner, René

    Methods to estimate respiratory organ motion can be divided into two groups: biophysical modeling and image registration. In image registration, motion fields are directly extracted from 4D ({D}+{t}) image sequences, often without concerning knowledge about anatomy and physiology in detail. In contrast, biophysical approaches aim at identification of anatomical and physiological aspects of breathing dynamics that are to be modeled. In the context of radiation therapy, biophysical modeling of respiratory organ motion commonly refers to the framework of continuum mechanics and elasticity theory, respectively. Underlying ideas and corresponding boundary value problems of those approaches are described in this chapter, along with a brief comparison to image registration-based motion field estimation.

  19. Extensions to the time-oriented database model to support temporal reasoning in medical expert systems.

    PubMed

    Kahn, M G; Fagan, L M; Tu, S

    1991-01-01

    Physicians faced with diagnostic and therapeutic decisions must reason about clinical features that change over time. Database-management systems (DBMS) can increase access to patient data, but most systems are limited in their ability to store and retrieve complex temporal information. The Time-Oriented Databank (TOD) model, the most widely used data model for medical database systems, associates a single time stamp with each observation. The proper analysis of most clinical data requires accounting for multiple concurrent clinical events that may alter the interpretation of the raw data. Most medical DBMSs cannot retrieve patient data indexed by multiple clinical events. We describe two logical extensions to TOD-based databases that solve a set of temporal reasoning problems we encountered in constructing medical expert systems. A key feature of both extensions is that stored data are partitioned into groupings, such as sequential clinical visits, clinical exacerbations, or other abstract events that have clinical decision-making relevance. The temporal network (TNET) is an object-oriented database that extends the temporal reasoning capabilities of ONCOCIN, a medical expert system that provides chemotherapy advice. TNET uses persistent objects to associate observations with intervals of time during which "an event of clinical interest" occurred. A second object-oriented system, called the extended temporal network (ETNET), is both an extension and a simplification of TNET. Like TNET, ETNET uses persistent objects to represent relevant intervals; unlike the first system, however, ETNET contains reasoning methods (rules) that can be executed when an event "begins", and that are withdrawn when that event "concludes". TNET and ETNET capture temporal relationships among recorded information that are not represented in TOD-based databases. Although they do not solve all temporal reasoning problems found in medical decision making, these new structures enable patient
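
    A small sketch of the underlying idea: observations carry single time stamps, while clinical events own intervals, so raw data can be retrieved relative to an event rather than a single time point. The classes, fields and example values are hypothetical, not TNET/ETNET's actual structures.

        # Hypothetical interval-based grouping of time-stamped observations.
        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class Observation:
            name: str
            value: float
            when: date

        @dataclass
        class ClinicalEvent:
            label: str
            start: date
            end: date

            def observations_during(self, observations):
                return [o for o in observations if self.start <= o.when <= self.end]

        obs = [Observation("WBC", 3.1, date(1990, 4, 2)), Observation("WBC", 6.0, date(1990, 6, 1))]
        chemo_cycle = ClinicalEvent("chemotherapy cycle 2", date(1990, 3, 25), date(1990, 4, 20))
        print(chemo_cycle.observations_during(obs))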

  20. Model-Based Hydroacoustic Blockage Assessment and Development of an Explosive Source Database

    SciTech Connect

    Matzel, E; Ramirez, A; Harben, P

    2005-07-11

    We are continuing the development of the Hydroacoustic Blockage Assessment Tool (HABAT), which is designed for use by analysts to predict which hydroacoustic monitoring stations can be used in discrimination analysis for any particular event. The research involves two approaches: (1) model-based assessment of blockage, and (2) ground-truth data-based assessment of blockage. The tool presents the analyst with a map of the world, and plots raypath blockages from stations to sources. The analyst inputs source locations and blockage criteria, and the tool returns a list of blockage status from all source locations to all hydroacoustic stations. We are currently using the tool in an assessment of blockage criteria for simple direct-path arrivals. Hydroacoustic data, predominantly from earthquake sources, are read in and assessed for blockage at all available stations. Several measures are taken. First, can the event be observed at a station above background noise? Second, can we establish the backazimuth from the station to the source? Third, how large is the decibel drop at one station relative to other stations? These observational results are then compared with model estimates to identify the best set of blockage criteria and used to create a set of blockage maps for each station. The model-based estimates are currently limited by the coarse bathymetry of existing databases and by the limitations inherent in the raytrace method. In collaboration with BBN Inc., the Hydroacoustic Coverage Assessment Model (HydroCAM), which generates the blockage files that serve as input to HABAT, is being extended to include high-resolution bathymetry databases in key areas that increase model-based blockage assessment reliability. An important aspect of this capability is to eventually include reflected T-phases where they reliably occur and to identify the associated reflectors. To assess how well any given hydroacoustic discriminant works in separating earthquake and in-water explosion

  1. Solubility Database

    National Institute of Standards and Technology Data Gateway

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes of the International Union of Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  2. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2013-12-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases comprise 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation type data from the European Space Agency (ESA) GlobCover Project, and 30 arc-sec Leaf Area Index and Fraction of Absorbed Photosynthetically Active Radiation data from the ESA GlobCarbon Project. Simulations are carried out for the Metropolitan Area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using 0.5°-resolution Global Forecast System data from the National Centers for Environmental Prediction, which are also used every 3 h as lateral boundary conditions. Topographic shading is turned on and two soil layers with depths of 0.01 and 1.0 m are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering the period from 6 to 7 September 2007 are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, topographic and land-use databases and grid resolution. Our comparisons show overall good agreement between simulated and observed data and also indicate that the low resolution of the 30 arc-sec soil database from the United States Geological Survey, the soil moisture and skin temperature initial conditions assimilated from the GFS analyses and the synoptic forcing on the lateral boundaries of the finer grids may hinder an adequate spatial description of the meteorological variables.

  3. Seismic hazard assessment for Myanmar: Earthquake model database, ground-motion scenarios, and probabilistic assessments

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Wang, Y.; Thant, M.; Maung Maung, P.; Sieh, K.

    2015-12-01

    We have constructed an earthquake and fault database, conducted a series of ground-shaking scenarios, and proposed seismic hazard maps for all of Myanmar and hazard curves for selected cities. Our earthquake database integrates the ISC, ISC-GEM and global ANSS Comprehensive Catalogues, and includes harmonized magnitude scales without duplicate events. Our active fault database includes active fault data from previous studies. Using the parameters from these updated databases (i.e., the Gutenberg-Richter relationship, slip rate, maximum magnitude and the elapsed time since the last events), we have determined the earthquake recurrence models of the seismogenic sources. To evaluate the ground shaking behaviours in different tectonic regimes, we conducted a series of tests by matching the modelled ground motions to the felt intensities of earthquakes. Through the case of the 1975 Bagan earthquake, we determined that the ground motion prediction equation (GMPE) of Atkinson and Boore (2003) best fits the behaviour of subduction events. Also, the 2011 Tarlay and 2012 Thabeikkyin events suggested that the GMPE of Akkar and Cagnan (2010) fits crustal earthquakes best. We thus incorporated the best-fitting GMPEs and site conditions based on Vs30 (the average shear-wave velocity down to 30 m depth) from analysis of topographic slope and microtremor array measurements to assess seismic hazard. The hazard is highest in regions close to the Sagaing Fault and along the western coast of Myanmar, where the seismic sources produce earthquakes at short intervals and/or the last events occurred long ago. The hazard curves for the cities of Bago, Mandalay, Sagaing, Taungoo and Yangon show higher hazards for sites close to an active fault or with a low Vs30, e.g., downtown Sagaing and the Shwemawdaw Pagoda in Bago.
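
    A toy version of the hazard-curve calculation is sketched below: a Gutenberg-Richter recurrence model is combined with a lognormal GMPE to give the annual rate of exceeding each ground-motion level, and a Poisson assumption converts that rate into a 50-year probability. All coefficients are placeholders, not the values or GMPEs adopted in the study.

      import numpy as np
      from scipy.stats import norm

      mags = np.arange(5.0, 8.1, 0.1)                             # magnitude bins
      a_val, b_val = 4.0, 1.0                                     # assumed Gutenberg-Richter a and b
      rates = 10**(a_val - b_val*mags) - 10**(a_val - b_val*(mags + 0.1))   # annual rate per bin

      def gmpe_ln_pga(m, r_km):                                   # illustrative GMPE, ln(PGA in g)
          return -3.5 + 0.9*m - 1.2*np.log(r_km + 10.0)

      sigma_ln, r_km = 0.6, 30.0                                  # assumed scatter and source distance
      pga_levels = np.logspace(-2, 0, 50)                         # 0.01 g to 1 g

      # annual rate of exceeding each level, summed over magnitude bins
      lam = np.array([np.sum(rates * (1 - norm.cdf(np.log(x), gmpe_ln_pga(mags, r_km), sigma_ln)))
                      for x in pga_levels])
      prob_50yr = 1 - np.exp(-lam * 50)                           # Poisson occurrence over 50 years
      pga_10pct = np.interp(0.10, prob_50yr[::-1], pga_levels[::-1])
      print(f"PGA with 10% probability of exceedance in 50 years: {pga_10pct:.2f} g")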

  4. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2014-08-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases comprise 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover project, and 30 arc-sec leaf area index and fraction of absorbed photosynthetically active radiation data from the ESA GlobCarbon project. Simulations are carried out for the metropolitan area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using 0.5°-resolution Global Forecast System data from the National Centers for Environmental Prediction, which are also used every 3 h as lateral boundary conditions. Topographic shading is turned on and two soil layers are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering three periods of time are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, grid resolution, and topographic and land-use databases. Our comparisons show overall good agreement between simulated and observational data, mainly for the potential temperature and the wind speed fields, and clearly indicate that the use of high-resolution databases significantly improves our ability to predict the local atmospheric circulation.

  5. Epidemiology of Occupational Accidents in Iran Based on Social Security Organization Database

    PubMed Central

    Mehrdad, Ramin; Seifmanesh, Shahdokht; Chavoshi, Farzaneh; Aminian, Omid; Izadi, Nazanin

    2014-01-01

    Background: Today, occupational accidents are one of the most important problems in the industrial world. Due to the lack of appropriate systems for registration and reporting, there are no accurate statistics on occupational accidents worldwide, especially in developing countries. Objectives: The aim of this study is an epidemiological assessment of occupational accidents in Iran. Materials and Methods: Information on occupational accidents available in the Social Security Organization was extracted from accident reporting and registration forms. In this cross-sectional study, gender, age, economic activity, type of accident and injured body part in 22158 registered accidents during 2008 were described. Results: The occupational accident rate was 253 per 100,000 workers in 2008. 98.2% of injured workers were men. The mean age of injured workers was 32.07 ± 9.12 years. The highest percentage belonged to the 25-34 years age group. In our study, most of the accidents occurred in the basic metals, electrical and non-electrical machinery, and construction industries. Falling from height and crush injury were the most prevalent accidents. Upper and lower extremities were the most common injured body parts. Conclusion: Due to the high rate of accidents in the metal and construction industries, engineering controls, the use of appropriate protective equipment and worker safety training seem necessary. PMID:24719699

  6. InterMOD: integrated data and tools for the unification of model organism research.

    PubMed

    Sullivan, Julie; Karra, Kalpana; Moxon, Sierra A T; Vallejos, Andrew; Motenko, Howie; Wong, J D; Aleksic, Jelena; Balakrishnan, Rama; Binkley, Gail; Harris, Todd; Hitz, Benjamin; Jayaraman, Pushkala; Lyne, Rachel; Neuhauser, Steven; Pich, Christian; Smith, Richard N; Trinh, Quang; Cherry, J Michael; Richardson, Joel; Stein, Lincoln; Twigger, Simon; Westerfield, Monte; Worthey, Elizabeth; Micklem, Gos

    2013-01-01

    Model organisms are widely used for understanding basic biology, and have significantly contributed to the study of human disease. In recent years, genomic analysis has provided extensive evidence of widespread conservation of gene sequence and function amongst eukaryotes, allowing insights from model organisms to help decipher gene function in a wider range of species. The InterMOD consortium is developing an infrastructure based around the InterMine data warehouse system to integrate genomic and functional data from a number of key model organisms, leading the way to improved cross-species research. So far including budding yeast, nematode worm, fruit fly, zebrafish, rat and mouse, the project has set up data warehouses, synchronized data models, and created analysis tools and links between data from different species. The project unites a number of major model organism databases, improving both the consistency and accessibility of comparative research, to the benefit of the wider scientific community. PMID:23652793

  7. JAKs and STATs in invertebrate model organisms.

    PubMed

    Dearolf, C R

    1999-09-01

    Invertebrate organisms provide systems to elucidate the developmental roles of Janus kinase (JAK)/signal transducers and activators of transcription (STAT) signaling pathways, thereby complementing research conducted with mammalian cells and animals. Components of the JAK/STAT protein pathway have been identified and characterized in the fruit fly Drosophila melanogaster and the cellular slime mold Dictyostelium discoideum. This review summarizes the molecular and genetic data obtained from these model organisms. In particular, a Drosophila JAK/STAT pathway regulates normal segmentation, cell proliferation, and differentiation, and hyperactivation of the pathway leads to tumor formation and leukemia-like defects. A Dictyostelium STAT regulates the development of stalk cells during the multicellular part of the life cycle. Future research utilizing these organisms should continue to provide insights into the roles and regulation of these proteins and their signaling pathways. PMID:10526575

  8. Coupling ensemble weather predictions based on TIGGE database with Grid-Xinanjiang model for flood forecast

    NASA Astrophysics Data System (ADS)

    Bao, H.-J.; Zhao, L.-N.; He, Y.; Li, Z.-J.; Wetterhall, F.; Cloke, H. L.; Pappenberger, F.; Manful, D.

    2011-02-01

    The incorporation of numerical weather predictions (NWP) into a flood forecasting system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient, as it involves considerable unpredictable uncertainty and leads to a high number of false alarms. The availability of global ensemble numerical weather prediction systems through the THORPEX Interactive Grand Global Ensemble (TIGGE) offers a new opportunity for flood forecasting. The Grid-Xinanjiang distributed hydrological model, which is based on the Xinanjiang model theory and the topographical information of each grid cell extracted from the Digital Elevation Model (DEM), is coupled with ensemble weather predictions based on the TIGGE database (CMC, CMA, ECMWF, UKMO, NCEP) for flood forecasting. This paper presents a case study using the coupled flood forecasting model on the Xixian catchment (a drainage area of 8826 km2) located in Henan province, China. A probabilistic discharge is provided as the end product of the flood forecast. Results show that the combination of the Grid-Xinanjiang model and the TIGGE database provides a promising tool for early warning of flood events several days ahead.
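
    The probabilistic end product can be illustrated with a few lines of Python: the fraction of ensemble-driven simulations whose peak discharge exceeds a warning threshold gives an exceedance probability. The member values and the threshold below are invented, not results from the Xixian case study.

      import numpy as np

      # peak discharge (m3/s) simulated by the hydrological model when driven by
      # each ensemble member; values and threshold are invented for illustration
      peak_q = np.array([5200.0, 6100.0, 4800.0, 7300.0, 6900.0, 5600.0, 8100.0, 4400.0])
      flood_threshold = 6000.0

      prob_exceed = float(np.mean(peak_q > flood_threshold))
      print(f"P(peak discharge > {flood_threshold:.0f} m3/s) = {prob_exceed:.2f}")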

  9. Longitudinal driver model and collision warning and avoidance algorithms based on human driving databases

    NASA Astrophysics Data System (ADS)

    Lee, Kangwon

    Intelligent vehicle systems, such as Adaptive Cruise Control (ACC) or Collision Warning/Collision Avoidance (CW/CA), are currently under development, and several companies have already offered ACC on selected models. Control or decision-making algorithms of these systems are commonly evaluated in extensive computer simulations and well-defined scenarios on test tracks. However, they have rarely been validated with large quantities of naturalistic human driving data. This dissertation utilized two University of Michigan Transportation Research Institute databases (Intelligent Cruise Control Field Operational Test and System for Assessment of Vehicle Motion Environment) in the development and evaluation of longitudinal driver models and CW/CA algorithms. First, to examine how drivers normally follow other vehicles, the vehicle motion data from the databases were processed using a Kalman smoother. The processed data were then used to fit and evaluate existing longitudinal driver models (e.g., the linear follow-the-leader model, Newell's special model, the nonlinear follow-the-leader model, the linear optimal control model, the Gipps model and the optimal velocity model). A modified version of the Gipps model was proposed and found to be accurate in both microscopic (vehicle) and macroscopic (traffic) senses. Second, to examine emergency braking behavior and to evaluate CW/CA algorithms, the concepts of signal detection theory and a performance index suitable for unbalanced situations (few threatening data points vs. many safe data points) are introduced. Selected existing CW/CA algorithms were found to have a performance index (geometric mean of true-positive rate and precision) not exceeding 20%. To optimize the parameters of the CW/CA algorithms, a new numerical optimization scheme was developed to replace the original data points with their representative statistics. A new CW/CA algorithm was proposed, which was found to score higher than 55% in the
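
    The performance index mentioned above, the geometric mean of the true-positive rate and precision, is easy to state in code; the counts in the example are invented.

      import math

      def performance_index(tp, fp, fn):
          recall = tp / (tp + fn)            # true-positive rate
          precision = tp / (tp + fp)
          return math.sqrt(recall * precision)

      # illustrative counts for one algorithm on an unbalanced data set
      print(f"{performance_index(tp=40, fp=300, fn=20):.2f}")   # about 0.28, i.e. 28%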

  10. Model Organisms and Traditional Chinese Medicine Syndrome Models

    PubMed Central

    Xu, Jin-Wen

    2013-01-01

    Traditional Chinese medicine (TCM) is an ancient medical system with a unique cultural background. Nowadays, more and more Western countries are accepting it due to its therapeutic efficacy. However, the safety and the precise pharmacological mechanisms of action of TCM are still uncertain. Due to the potential application of TCM in healthcare, it is necessary to construct a scientific evaluation system with TCM characteristics and to benchmark how it differs from the standards of Western medicine. Model organisms have played an important role in the understanding of basic biological processes: they are easier to study in certain research contexts, and the information obtained can be extended to other species. Despite the controversy over suitable syndrome animal models under TCM theoretical guidance, it is unquestionable that many model organisms should be used in studies of TCM modernization, which will bring modern scientific standards to this ancient system of medicine. In this review, we aim to summarize the utilization of model organisms in the construction of TCM syndrome models and to highlight the relevance of modern medicine to TCM syndrome animal models. This will serve as a foundation for further research on model organisms and their application in TCM syndrome models. PMID:24381636

  11. Modeling plasmonic efficiency enhancement in organic photovoltaics.

    PubMed

    Taff, Y; Apter, B; Katz, E A; Efron, U

    2015-09-10

    Efficiency enhancement of bulk heterojunction (BHJ) organic solar cells by means of the plasmonic effect is investigated by using finite-difference time-domain (FDTD) optical simulations combined with analytical modeling of exciton dissociation and charge transport efficiencies. The proposed method provides an improved analysis of the cell performance compared to previous FDTD studies. The results of the simulations predict an 11.8% increase in the cell's short circuit current with the use of Ag nano-hexagons. PMID:26368970

  12. Latest updates in global flood modelling: channel bifurcation and global river width database

    NASA Astrophysics Data System (ADS)

    Yamazaki, D.; Kanae, S.; Hirabayashi, Y.; O'Loughlin, F.; Trigg, M. A.; Bates, P. D.

    2014-12-01

    Global flood modelling is a relatively new framework in earth system studies, and there is still much room for improving model physics. A typical grid size of global models (generally >5 km) is coarser than the scale of the topography of river channels and floodplains; therefore, flood dynamics in global flood models are represented by sub-grid parameterizations. Here, we introduce the two latest updates in flood dynamics parameterization, i.e. a channel bifurcation scheme and a global river width database. The upstream-downstream relationship of model grids is prescribed (i.e. parameterized) by a river network map, where each grid has been assumed to have only one downstream grid. We abandoned this "only one downstream" assumption and succeeded in representing channel bifurcation in a global flood model. The new bifurcation scheme was tested in the Mekong River, and showed the importance of channel bifurcation in mega-delta hydrodynamics. Channel cross-sectional shape has been parameterized using an empirical function of discharge (or drainage area), and this is a major source of uncertainty in global flood modelling. We recently developed a fully automated algorithm to calculate river width from satellite water masks. By applying this algorithm to SRTM Water Body Data, the Global Width Database for Large Rivers (GWD-LR) was constructed. The difference between the satellite-based and empirically estimated widths is very large, illustrating the difficulty of parameterizing river width with an empirical equation. Improvement in flood dynamics parameterization reduces uncertainties in global flood simulations. This enables advanced validation/calibration of global flood models, such as direct comparison against satellite altimeters. A future strategy for advanced model validation/calibration will be discussed in the conference presentation.
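
    The bifurcation idea can be sketched with a toy routing table in which a cell may pass its outflow to several downstream cells with prescribed fractions; the network and the partition fractions below are invented and are not the scheme's actual parameterization.

      # each cell lists (downstream cell, fraction of outflow routed to it)
      downstream = {
          "A": [("B", 1.0)],                  # ordinary single-channel reach
          "B": [("C", 0.7), ("D", 0.3)],      # bifurcation into two distributaries
          "C": [],
          "D": [],
      }

      def route(outflow):
          """Accumulate inflow (m3/s) at each cell given the outflow leaving each cell."""
          inflow = {cell: 0.0 for cell in downstream}
          for cell, q in outflow.items():
              for target, fraction in downstream[cell]:
                  inflow[target] += fraction * q
          return inflow

      print(route({"A": 1000.0, "B": 1000.0, "C": 0.0, "D": 0.0}))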

  13. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    NASA Astrophysics Data System (ADS)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments was conducted on evaporated fuels, which served as surrogates for real-world combustion emissions. Diesel formed the most SOA, followed by conventional jet fuel / jet fuel derived from natural gas, gasoline and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested along with existing models using SOA data from smog chamber experiments conducted using evaporated fuel (this work: gasoline, Fischer-Tropsch fuels, jet fuels, diesels) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations. Further, a volatility

  14. Transforming the Premier Perspective® Hospital Database into the Observational Medical Outcomes Partnership (OMOP) Common Data Model

    PubMed Central

    Makadia, Rupa; Ryan, Patrick B.

    2014-01-01

    Background: The Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) has been implemented on various claims and electronic health record (EHR) databases, but has not been applied to a hospital transactional database. This study addresses the implementation of the OMOP CDM on the U.S. Premier Hospital database. Methods: We designed and implemented an extract, transform, load (ETL) process to convert the Premier hospital database into the OMOP CDM. Standard charge codes in Premier were mapped between the OMOP version 4.0 Vocabulary and standard charge descriptions. Visit logic was added to impute the visit dates. We tested the conversion by replicating a published study using the raw and transformed databases. The Premier hospital database was compared to a claims database, in regard to prevalence of disease. Findings: The data transformed into the CDM resulted in 1% of the data being discarded due to data errors in the raw data. A total of 91.4% of Premier standard charge codes were mapped successfully to a standard vocabulary. The results of the replication study resulted in a similar distribution of patient characteristics. The comparison to the claims data yields notable similarities and differences amongst conditions represented in both databases. Discussion: The transformation of the Premier database into the OMOP CDM version 4.0 adds value in conducting analyses due to successful mapping of the drugs and procedures. The addition of visit logic gives ordinality to drugs and procedures that wasn’t present prior to the transformation. Comparing conditions in Premier against a claims database can provide an understanding about Premier’s potential use in pharmacoepidemiology studies that are traditionally conducted via claims databases. Conclusion and Next Steps: The conversion of the Premier database into the OMOP CDM 4.0 was completed successfully. The next steps include refinement of vocabularies and mappings and continual maintenance of
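
    A hypothetical fragment of such an extract-transform-load step is sketched below: source standard charge codes are joined to vocabulary concept identifiers and the unmapped remainder is flagged. Table contents, column names and concept IDs are invented, not Premier or OMOP values.

      import pandas as pd

      charges = pd.DataFrame({"std_charge_code": ["C100", "C200", "C300"],
                              "description": ["metformin 500 mg", "chest x-ray", "misc supply"]})
      vocab_map = pd.DataFrame({"std_charge_code": ["C100", "C200"],
                                "concept_id": [1001, 1002]})       # invented identifiers

      mapped = charges.merge(vocab_map, on="std_charge_code", how="left")
      coverage = mapped["concept_id"].notna().mean()
      print(mapped)
      print(f"mapped {coverage:.1%} of standard charge codes")     # cf. the 91.4% reported above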

  15. Ionospheric climatology and model from long-term databases of worldwide incoherent scatter radars

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Holt, J. M.; van Eyken, T.; McCready, M.; Amory-Mazaudier, C.; Fukao, S.; Sulzer, M.

    2005-05-01

    Long-term databases of worldwide incoherent scatter radars are utilized to study ionospheric climatology and create empirical models for electron density, ion and electron temperatures, and ion drifts. These radars, including, from magnetic north to south and east to west, EISCAT Svalbard Radar (Norway), Sondrestrom Radar (Greenland), EISCAT Tromso Radars (Norway), Millstone Hill Radar (USA), St. Santin Radar (France), Shigaraki Middle and Upper atmosphere (MU) Radar (Japan) and Arecibo Radar (Puerto Rico), are able to characterize diurnal, seasonal, and solar cycle variations of height-dependent ionospheric structures in a broad latitude and longitude area. In these huge databases, available through the MADRIGAL system (http://www.openmadrigal.org), the data generally cover 1-2 solar cycles, and for Millstone Hill and Arecibo they span nearly 3 solar cycles. Based on these data, our systematic analyses result in a comprehensive overview of various features of the ionosphere and a series of web-based empirical models (http://www.haystack.mit.edu/madrigal/Models/). This presentation will review local models for each site and discuss the ionospheric climatology, with emphasis on the development of annual/semiannual electron density variations with latitude and longitude, and on the ionospheric thermal status at midlatitudes. This presentation will also explore the long-term trend of ionospheric electron density and ion temperature variations from Millstone Hill observations.

  16. Assessment of cloud cover in climate models and reanalysis databases with ISCCP over the Mediterranean region

    NASA Astrophysics Data System (ADS)

    Enriquez, Aaron; Calbo, Josep; Gonzalez, Josep-Abel

    2013-04-01

    Clouds are an important regulator of climate due to their influence on the water balance of the atmosphere and their interaction with solar and infrared radiation. At any time, clouds cover a great percentage of the Earth's surface, but their distribution is very irregular in time and space, which makes the evaluation of their influence on climate a difficult task. At present, there are few studies related to cloud cover that compare current climate models with observational data. In this study, the database of monthly cloud cover provided by the International Satellite Cloud Climatology Project (ISCCP) has been chosen as a reference against which we compare the output of CMIP5 climate models and reanalysis databases, on the domain South-Europe-Mediterranean (SEM) established by the Intergovernmental Panel on Climate Change (IPCC) [1]. The study covers the period between 1984 and 2009, and the seasonal performance of the cloud cover estimates has also been studied. To quantify the agreement between the databases we use two types of statistics: the bias and a skill score, which is based on the probability density functions (PDFs) of the databases [2]. We also use Taylor diagrams to visualize the statistics. Results indicate that there are areas where the models accurately describe what is observed by ISCCP for some periods of the year (e.g. Northern Africa in autumn), compared to other areas and periods for which the agreement is lower (the Iberian Peninsula in winter and the Black Sea in the summer months). However, these differences should be attributed not only to the limitations of climate models, but possibly also to the data provided by ISCCP. References [1] Intergovernmental Panel on Climate Change (2007) Fourth Assessment Report: Climate Change 2007: Working Group I Report: The Physical Science Basis. [2] Ranking the AR4 climate models over the Murray Darling Basin using simulated maximum temperature, minimum temperature and precipitation. Int J Climatol 28
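
    The PDF-based comparison can be sketched as follows: bin the modelled and observed cloud-cover values and take the common area under the two normalized histograms (1 for identical distributions, 0 for no overlap), alongside the bias. The data below are synthetic stand-ins, and this is only one plausible reading of the skill score cited in [2].

      import numpy as np

      def pdf_skill_score(model, obs, bins=20, value_range=(0.0, 1.0)):
          pm, _ = np.histogram(model, bins=bins, range=value_range)
          po, _ = np.histogram(obs, bins=bins, range=value_range)
          pm = pm / pm.sum()                       # normalized (probability per bin)
          po = po / po.sum()
          return float(np.minimum(pm, po).sum())   # common area under the two PDFs

      rng = np.random.default_rng(0)
      obs = rng.beta(2.0, 2.0, size=300)           # stand-in for ISCCP monthly cloud fraction
      model = rng.beta(2.5, 2.0, size=300)         # stand-in for one CMIP5 model
      print(f"bias = {model.mean() - obs.mean():+.3f}, skill score = {pdf_skill_score(model, obs):.2f}")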

  17. Improving Quality and Quantity of Contributions: Two Models for Promoting Knowledge Exchange with Shared Databases

    ERIC Educational Resources Information Center

    Cress, U.; Barquero, B.; Schwan, S.; Hesse, F. W.

    2007-01-01

    Shared databases are used for knowledge exchange in groups. Whether a person is willing to contribute knowledge to a shared database presents a social dilemma: Each group member saves time and energy by not contributing any information to the database and by using the database only to retrieve information which was contributed by others. But if…

  18. Modeling the topological organization of cellular processes.

    PubMed

    Giavitto, Jean-Louis; Michel, Olivier

    2003-07-01

    The cell, viewed as a dynamical system, has the characteristic of possessing a dynamical structure. That is, the exact phase space of the system cannot be fixed before the evolution, and integrative cell models must state the evolution of the structure jointly with the evolution of the cell state. This kind of dynamical system is very challenging to model and simulate. New programming concepts must be developed to ease the modeling and simulation of such systems. In this context, the goal of the MGS project is to develop an experimental programming language dedicated to the simulation of this kind of system. MGS proposes a unified view of several computational mechanisms (CHAM, Lindenmayer systems, Paun systems, cellular automata), enabling the specification of spatially localized computations on heterogeneous entities. The evolution of a dynamical structure is handled through the concept of transformation, which relies on the topological organization of the system components. An example based on the modeling of spatially distributed biochemical networks is used to illustrate how these notions can be used to model the spatial and temporal organization of intracellular processes. PMID:12915272

  19. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    NASA Astrophysics Data System (ADS)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.
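
    The a priori testing described above can be sketched as follows: box-filter a velocity field and form the exact subgrid stress tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j). The random field below merely stands in for the DNS cutout a user would request from the database; it is not the web-service client itself.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def box_filter(f, width):
          return uniform_filter(f, size=width, mode="wrap")   # periodic box filter

      n, width = 64, 8
      rng = np.random.default_rng(1)
      u = rng.standard_normal((n, n, n))                      # stand-in velocity components
      v = rng.standard_normal((n, n, n))

      # exact subgrid stress component from the definition above
      tau_uv = box_filter(u * v, width) - box_filter(u, width) * box_filter(v, width)
      print("rms subgrid stress component:", float(np.sqrt(np.mean(tau_uv**2))))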

  20. Global Exposure Modelling of Semivolatile Organic Compounds

    NASA Astrophysics Data System (ADS)

    Guglielmo, F.; Lammel, G.; Maier-Reimer, E.

    2008-12-01

    Organic compounds that are persistent and toxic, such as the agrochemicals γ-hexachlorocyclohexane (γ-HCH, lindane) and dichlorodiphenyltrichloroethane (DDT), pose a hazard to ecosystems. These compounds are semivolatile, hence multicompartmental, substances and are subject to long-range transport (LRT) in the atmosphere and ocean. Being lipophilic, they accumulate in exposed organism tissues and biomagnify along food chains. The multicompartmental global fate and LRT of DDT and lindane in the atmosphere and ocean have been studied on a decadal scale using application data for 1980 and a model based on the coupling of atmosphere and (for the first time for these compounds) ocean General Circulation Models (ECHAM5 and MPI-OM). The model system furthermore encompasses 2D terrestrial compartments (soil and vegetation), sea ice, a fully dynamic atmospheric aerosol module (HAM) and an ocean biogeochemistry module (HAMOCC5). Large mass fractions of the compounds are found in soil. Lindane is also found in comparable amounts in the ocean. DDT has the longest residence time in almost all compartments. The sea ice compartment locally almost inhibits volatilization from the sea. Air-sea exchange is also affected, with a reduction of up to 35% for DDT, by partitioning to the organic phases (suspended and dissolved particulate matter) in the global oceans. Partitioning enhances vertical transport in the sea. Ocean dynamics are found to be more significant for vertical transport than sinking associated with particulate matter. LRT in the global environment is determined by the fast atmospheric circulation. Net meridional transport in the ocean is locally effective mostly via western boundary currents for applications at mid-latitudes. The pathways of the long-lived semivolatile organic compounds studied include a sequence of several cycles of volatilisation, transport in the atmosphere, deposition and transport in the ocean (multihopping substances). Multihopping is

  1. Object-Oriented Database for Managing Building Modeling Components and Metadata: Preprint

    SciTech Connect

    Long, N.; Fleming, K.; Brackney, L.

    2011-12-01

    Building simulation enables users to explore and evaluate multiple building designs. When tools for optimization, parametrics, and uncertainty analysis are combined with analysis engines, the sheer number of discrete simulation datasets makes it difficult to keep track of the inputs. The integrity of the input data is critical to designers, engineers, and researchers for code compliance, validation, and building commissioning long after the simulations are finished. This paper discusses an application that stores inputs needed for building energy modeling in a searchable, indexable, flexible, and scalable database to help address the problem of managing simulation input data.

  2. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…

  3. ADANS database specification

    SciTech Connect

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  4. A database model for evaluating material accountability safeguards effectiveness against protracted theft

    SciTech Connect

    Sicherman, A.; Fortney, D.S.; Patenaude, C.J.

    1993-07-01

    DOE Material Control and Accountability Order 5633.3A requires that facilities handling special nuclear material evaluate their effectiveness against protracted theft (repeated thefts of small quantities of material, typically occurring over an extended time frame, to accumulate a goal quantity). Because a protracted theft attempt can extend over time, material accountability-like (MA) safeguards may help detect a protracted theft attempt in progress. Inventory anomalies and material not in its authorized location when requested for processing are examples of MA detection mechanisms. Crediting such detection in evaluations, however, requires taking into account potential insider subversion of MA safeguards. In this paper, the authors describe a database model for evaluating MA safeguards effectiveness against protracted theft that addresses potential subversion. The model includes a detailed yet practical structure for characterizing various types of MA activities, lists of potential insider MA defeat methods and access/authority related to MA activities, and an initial implementation of built-in MA detection probabilities. This database model, implemented in the new Protracted Insider module of ASSESS (Analytic System and Software for Evaluating Safeguards and Security), helps facilitate the systematic collection of relevant information about MA activity steps and helps "standardize" MA safeguards evaluations.

  5. Ad HOC Model Generation Using Multiscale LIDAR Data from a Geospatial Database

    NASA Astrophysics Data System (ADS)

    Gordon, M.; Borgmann, B.; Gehrung, J.; Hebel, M.; Arens, M.

    2015-08-01

    Due to the spread of economically priced laser scanning technology nowadays, especially in the field of topographic surveying and mapping, ever-growing amounts of data need to be handled. Depending on the requirements of the specific application, airborne, mobile or terrestrial laser scanners are commonly used. Since visualizing this flood of data is not feasible with classical approaches like raw point cloud rendering, real-time decision making requires sophisticated solutions. In addition, the efficient storage and recovery of 3D measurements is a challenging task. Therefore we propose an approach for the intelligent storage of 3D point clouds using a spatial database. For a given region of interest, the database is queried for the data available. All resulting point clouds are fused in a model generation process, utilizing the fact that low-density airborne measurements can be used to supplement higher-density mobile or terrestrial laser scans. The octree-based modeling approach divides and subdivides the world into cells of varying size and fits one plane per cell once a specified number of points is present. The resulting model exceeds the completeness and precision of every single data source and enables real-time visualization. This is especially supported by data compression ratios of about 90%.
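
    The modelling idea can be sketched in a few lines of Python: subdivide space octree-style and fit one plane per cell (here via SVD) once enough points have accumulated. The thresholds, recursion limits and random input are arbitrary choices for illustration, not those of the cited system.

      import numpy as np

      def fit_plane(points):
          centroid = points.mean(axis=0)
          _, _, vt = np.linalg.svd(points - centroid)
          return centroid, vt[-1]                     # a point on the plane and its unit normal

      def build_cells(points, origin, size, min_pts=50, max_pts=500, depth=0, max_depth=6):
          if len(points) < min_pts:
              return []                               # too sparse to model
          if len(points) <= max_pts or depth == max_depth:
              return [(origin, size, fit_plane(points))]
          cells, half = [], size / 2.0
          for dx in (0, 1):
              for dy in (0, 1):
                  for dz in (0, 1):
                      corner = origin + half * np.array([dx, dy, dz])
                      mask = np.all((points >= corner) & (points < corner + half), axis=1)
                      cells += build_cells(points[mask], corner, half,
                                           min_pts, max_pts, depth + 1, max_depth)
          return cells

      pts = np.random.default_rng(2).uniform(0.0, 16.0, size=(20000, 3))
      print(len(build_cells(pts, np.zeros(3), 16.0)), "leaf cells with fitted planes")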

  6. A Bayesian Multivariate Receptor Model for Estimating Source Contributions to Particulate Matter Pollution using National Databases

    PubMed Central

    Hackstadt, Amber J.; Peng, Roger D.

    2014-01-01

    Summary Time series studies have suggested that air pollution can negatively impact health. These studies have typically focused on the total mass of fine particulate matter air pollution or the individual chemical constituents that contribute to it, and not source-specific contributions to air pollution. Source-specific contribution estimates are useful from a regulatory standpoint by allowing regulators to focus limited resources on reducing emissions from sources that are major contributors to air pollution and are also desired when estimating source-specific health effects. However, researchers often lack direct observations of the emissions at the source level. We propose a Bayesian multivariate receptor model to infer information about source contributions from ambient air pollution measurements. The proposed model incorporates information from national databases containing data on both the composition of source emissions and the amount of emissions from known sources of air pollution. The proposed model is used to perform source apportionment analyses for two distinct locations in the United States (Boston, Massachusetts and Phoenix, Arizona). Our results mirror previous source apportionment analyses that did not utilize the information from national databases and provide additional information about uncertainty that is relevant to the estimation of health effects. PMID:25309119

  7. [Estimation of soil organic carbon density and storage in Zhejiang Province of East China by using 1:50000 soil database].

    PubMed

    Zhi, Jun-Jun; Jing, Chang-Wei; Zhang, Cao; Wu, Jia-Ping; Ni, Zhi-Hua; Chen, Hong-Jin; Xu, Jin

    2013-03-01

    As an important component of the carbon pool of terrestrial ecosystems, the soil carbon pool plays a key role in studies of the greenhouse effect and global change. By using a 1:50000 soil database, the organic carbon density in the 0-100 cm layer of 277 soil species in Zhejiang Province was estimated, and the soil organic carbon (SOC) density and storage in the whole Province as well as the spatial distribution of the SOC density and storage in the main soil types of the Province were analyzed. In the whole Province, the SOC density ranged from 5 kg m-2 to 10 kg m-2. Among the main soil types in the Province, humic mountain yellow soil had the highest SOC density (52.80 kg m-2), whereas fluvio-sand ridge soil had the lowest one (1.82 kg m-2). Red soil and paddy soil had the largest SOC storages, with the sum accounting for 63.8% of the total SOC storage in the Province. The total area of the soils in the Province was 100784.19 km2, the estimated SOC storage was 875.42 x 10^6 t, and the estimated SOC density averaged 8.69 kg m-2. An overlay analysis with the digital elevation model showed that the SOC density varied markedly with elevation, slope gradient, and aspect. PMID:23755481
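
    As a quick arithmetic check of the province-wide figures quoted above, the mean SOC density can be recovered from the reported storage and area; the snippet below does only the unit conversion and uses no numbers beyond those in the abstract.

      total_storage_t = 875.42e6                   # reported SOC storage, tonnes
      total_area_km2 = 100784.19                   # reported soil area, km2

      density_kg_m2 = (total_storage_t * 1000.0) / (total_area_km2 * 1.0e6)
      print(f"{density_kg_m2:.2f} kg m-2")         # ~8.69, matching the reported mean density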

  8. Modelling motions within the organ of Corti

    NASA Astrophysics Data System (ADS)

    Ni, Guangjian; Baumgart, Johannes; Elliott, Stephen

    2015-12-01

    Most cochlear models used to describe the basilar membrane vibration along the cochlea are concerned with macromechanics, and often assume that the organ of Corti moves as a single unit, ignoring the individual motion of different components. New experimental technologies provide the opportunity to measure the dynamic behaviour of different components within the organ of Corti, but only for certain types of excitation. It is thus still difficult to directly measure every aspect of cochlear dynamics, particularly for acoustic excitation of the fully active cochlea. The present work studies the dynamic response of a model of the cross-section of the cochlea, at the microscopic level, using the finite element method. The elastic components are modelled with plate elements and the perilymph and endolymph are modelled with inviscid fluid elements. The individual motion of each component within the organ of Corti is calculated with dynamic pressure loading on the basilar membrane, and the motions of the experimentally accessible parts are compared with measurements. The reticular lamina moves as a stiff plate, without much bending, and pivots about a point close to the region of the inner hair cells, as observed experimentally. The basilar membrane shows a slightly asymmetric mode shape, with maximum displacement occurring between the second and third rows of outer hair cells. The dynamic response is also calculated, and compared with experiments, when the model is driven by the outer hair cells. The receptances of the basilar membrane motion and of the deflection of the hair bundles of the outer hair cells are thus obtained when driven either acoustically or electrically. In this way, the fully active linear response of the basilar membrane to acoustic excitation can be predicted by using a linear superposition of the calculated receptances and a defined gain function for the outer hair cell feedback.

  9. Evaluation of a vortex-based subgrid stress model using DNS databases

    NASA Technical Reports Server (NTRS)

    Misra, Ashish; Lund, Thomas S.

    1996-01-01

    The performance of a SubGrid Stress (SGS) model for Large-Eddy Simulation (LES) developed by Misra & Pullin (1996) is studied for forced and decaying isotropic turbulence on a 32^3 grid. The physical viability of the model assumptions is tested using DNS databases. The results from LES of forced turbulence at a Taylor Reynolds number R_lambda ≈ 90 are compared with filtered DNS fields. Probability density functions (pdfs) of the subgrid energy transfer, total dissipation, and the stretch of the subgrid vorticity by the resolved velocity-gradient tensor show reasonable agreement with the DNS data. The model is also tested in LES of decaying isotropic turbulence, where it correctly predicts the decay rate and energy spectra measured by Comte-Bellot & Corrsin (1971).

  10. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    SciTech Connect

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-09-15

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. A support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. The optimized model, consisting of 9 probes, had 99% sensitivity and 97% specificity. This model
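
    As a rough, publicly reproducible analogue of the modelling strategy (not the authors' exact pipeline or data), the sketch below wraps a linear support vector machine in recursive feature elimination, one common wrapper-type selection scheme, applied to a synthetic expression matrix.

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # synthetic stand-in for an expression matrix: 120 dose groups x 500 probes
      X, y = make_classification(n_samples=120, n_features=500, n_informative=15,
                                 random_state=0)
      model = make_pipeline(
          RFE(SVC(kernel="linear", C=1.0), n_features_to_select=9, step=0.2),
          SVC(kernel="linear", C=1.0))
      scores = cross_val_score(model, X, y, cv=5)
      print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))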

  11. Clinical risk assessment of organ manifestations in systemic sclerosis: a report from the EULAR Scleroderma Trials And Research group database

    PubMed Central

    Walker, U A; Tyndall, A; Czirják, L; Denton, C; Farge‐Bancel, D; Kowal‐Bielecka, O; Müller‐Ladner, U; Bocelli‐Tyndall, C; Matucci‐Cerinic, M

    2007-01-01

    Background Systemic sclerosis (SSc) is a multisystem autoimmune disease, which is classified into a diffuse cutaneous (dcSSc) and a limited cutaneous (lcSSc) subset according to the skin involvement. In order to better understand the vascular, immunological and fibrotic processes of SSc and to guide its treatment, the EULAR Scleroderma Trials And Research (EUSTAR) group was formed in June 2004. Aims and methods EUSTAR prospectively collects the Minimal Essential Data Set (MEDS) on all sequential patients fulfilling the American College of Rheumatology diagnostic criteria in participating centres. We aimed to characterise demographic, clinical and laboratory characteristics of disease presentation in SSc and analysed EUSTAR baseline visits. Results In April 2006, a total of 3656 patients (1349 with dcSSc and 2101 with lcSSc) were enrolled in 102 centres and 30 countries. 1330 individuals had anti-Scl70 autoantibodies and 1106 had anticentromere antibodies. 87% of patients were women. On multivariate analysis, scleroderma subsets (dcSSc vs lcSSc), antibody status and age at onset of Raynaud's phenomenon, but not gender, were found to be independently associated with the prevalence of organ manifestations. Autoantibody status in this analysis was more closely associated with clinical manifestations than were SSc subsets. Conclusion dcSSc and lcSSc subsets are associated with particular organ manifestations, but in this analysis the clinical distinction seemed to be superseded by an antibody‐based classification in predicting some scleroderma complications. The EUSTAR MEDS database facilitates the analysis of clinical patterns in SSc, and contributes to the standardised assessment and monitoring of SSc internationally. PMID:17234652

  12. Evaluating new thermospheric neutral density models with a 30-year satellite drag database

    NASA Astrophysics Data System (ADS)

    Marcos, F. A.; Wise, J.; Bruinsma, S.; Picone, M.; Bass, J.; Bowman, B.; Kendra, M.

    2001-12-01

    Deficiencies in empirical neutral density models used for satellite drag have persisted at about the 15% one-sigma level. Major error sources include the inadequacies of proxy indices used as model drivers. Two new empirical models have recently been developed. The NRLMSISE-00 model combines orbital drag data from the 1960's and early 70's with mass spectrometer and temperature data from the 1970's and 80's. The DTM-2000 model, based on accelerometer measurements of satellite drag, replaces the conventional F10 solar flux proxy with a Mg index. These, and other models used operationally, are validated with a new, independent set of historic data derived for the period 1969-2000. The new database is generated from actual radar tracking observations, rather than from the less accurate historical element sets, to form precise orbit and drag/density data with improved accuracy and resolution. We use data from three satellites, all with perigee near 350 km, and with inclinations between 31 and 78 degrees to evaluate models vs latitude, season and solar flux. A major emphasis is on analyzing differences between modeled and measured density variability as a function of solar activity over three solar cycles.

  13. Studying Oogenesis in a Non-model Organism Using Transcriptomics: Assembling, Annotating, and Analyzing Your Data.

    PubMed

    Carter, Jean-Michel; Gibbs, Melanie; Breuker, Casper J

    2016-01-01

    This chapter provides a guide to processing and analyzing RNA-Seq data in a non-model organism. This approach was implemented for studying oogenesis in the Speckled Wood Butterfly Pararge aegeria. We focus in particular on how to perform a more informative primary annotation of your non-model organism by implementing our multi-BLAST annotation strategy. We also provide a general guide to other essential steps in the next-generation sequencing analysis workflow. Before undertaking these methods, we recommend you familiarize yourself with command line usage and fundamental concepts of database handling. Most of the operations in the primary annotation pipeline can be performed in Galaxy (or equivalent standalone versions of the tools) and through the use of common database operations (e.g. to remove duplicates) but other equivalent programs and/or custom scripts can be implemented for further automation. PMID:27557578
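
    A hedged sketch of the multi-database search step follows: each assembled transcript is searched against several protein databases with blastx and the best hit per database is retained. The database names and file paths are placeholders, and this is not the chapter's exact pipeline; blastx (NCBI BLAST+) must be on the PATH and the databases formatted with makeblastdb beforehand.

      import csv
      import subprocess
      from collections import defaultdict

      databases = ["swissprot", "insect_refseq", "flybase_peptides"]   # placeholder names
      best_hits = defaultdict(dict)

      for db in databases:
          out = f"transcripts_vs_{db}.tsv"
          subprocess.run(["blastx", "-query", "transcripts.fasta", "-db", db,
                          "-evalue", "1e-5", "-outfmt", "6", "-max_target_seqs", "1",
                          "-num_threads", "4", "-out", out], check=True)
          with open(out) as handle:
              for row in csv.reader(handle, delimiter="\t"):
                  query, subject, evalue = row[0], row[1], float(row[10])
                  if evalue < best_hits[query].get(db, ("", float("inf")))[1]:
                      best_hits[query][db] = (subject, evalue)

      # each transcript now carries up to one annotation per reference database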

  14. A database and model to support proactive management of sediment-related sewer blockages.

    PubMed

    Rodríguez, Juan Pablo; McIntyre, Neil; Díaz-Granados, Mario; Maksimović, Cedo

    2012-10-01

    Due to increasing customer and political pressures, and more stringent environmental regulations, sediment and other blockage issues are now a high priority when assessing sewer system operational performance. Blockages caused by sediment deposits reduce sewer system reliability and demand remedial action at considerable operational cost. Consequently, procedures are required for identifying which parts of the sewer system are in most need of proactive removal of sediments. This paper presents an exceptionally long (7.5 years) and spatially detailed (9658 grid squares--0.03 km² each--covering a population of nearly 7.5 million) data set obtained from a customer complaints database in Bogotá (Colombia). The sediment-related blockage data are modelled using homogeneous and non-homogeneous Poisson process models. In most of the analysed areas the inter-arrival time between blockages can be represented by the homogeneous process, but there are a considerable number of areas (up to 34%) for which there is strong evidence of non-stationarity. In most of these cases, the mean blockage rate increases over time, signifying a continual deterioration of the system despite repairs, this being particularly marked for pipe and gully pot related blockages. The physical properties of the system (mean pipe slope, diameter and pipe length) have a clear but weak influence on observed blockage rates. The Bogotá case study illustrates the potential value of customer complaints databases and formal analysis frameworks for proactive sewerage maintenance scheduling in large cities. PMID:22794800
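
    The stationarity question described above can be illustrated with the Laplace trend test: under a homogeneous Poisson process the statistic below is approximately standard normal, and large positive values point to an increasing blockage rate. The event times in this sketch are synthetic, not the Bogotá records, and the test is only one way to detect non-stationarity.

      import numpy as np
      from scipy.stats import norm

      def laplace_trend(event_times, T):
          """Laplace statistic for event times on (0, T]; ~N(0,1) under a homogeneous Poisson process."""
          t = np.asarray(event_times, dtype=float)
          u = (t.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * len(t))))
          return u, 2.0 * (1.0 - norm.cdf(abs(u)))   # statistic and two-sided p-value

      rng = np.random.default_rng(3)
      T = 7.5 * 365.0                                # observation window in days
      times = np.sort(rng.uniform(0.0, T, size=60))  # synthetic, stationary example
      print("U = %.2f, p = %.3f" % laplace_trend(times, T))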

  15. A Conceptual Model and Database to Integrate Data and Project Management

    NASA Astrophysics Data System (ADS)

    Guarinello, M. L.; Edsall, R.; Helbling, J.; Evaldt, E.; Glenn, N. F.; Delparte, D.; Sheneman, L.; Schumaker, R.

    2015-12-01

    Data management is critically foundational to doing effective science in our data-intensive research era and, done well, can enhance collaboration, increase the value of research data, and support requirements by funding agencies to make scientific data and other research products available through publicly accessible online repositories. However, there are few examples (but see the Long-term Ecological Research Network Data Portal) of these data being provided in such a manner that allows exploration within the context of the research process: What specific research questions do these data seek to answer? What data were used to answer these questions? What data would have been helpful to answer these questions but were not available? We propose an agile conceptual model and database design, as well as example results, that integrate data management with project management not only to maximize the value of research data products but to enhance collaboration during the project and the process of project management itself. In our project, which we call 'Data Map,' we used agile principles by adopting a user-focused approach and by designing our database to be simple, responsive, and expandable. We initially designed Data Map for the Idaho EPSCoR project "Managing Idaho's Landscapes for Ecosystem Services (MILES)" (see https://www.idahoecosystems.org//) and will present example results for this work. We consulted with our primary users (project managers, data managers, and researchers) to design the Data Map. Results will be useful to project managers and to funding agencies reviewing progress because they will readily provide answers to the questions "For which research projects/questions are data available and/or being generated by MILES researchers?" and "Which research projects/questions are associated with each of the 3 primary questions from the MILES proposal?" To be responsive to the needs of the project, we chose to streamline our design for the prototype

  16. Assessment of methods for creating a national building statistics database for atmospheric dispersion modeling

    SciTech Connect

    Velugubantla, S. P.; Burian, S. J.; Brown, M. J.; McKinnon, A. T.; McPherson, T. N.; Han, W. S.

    2004-01-01

    Mesoscale meteorological codes and transport and dispersion models are increasingly being applied in urban areas. Representing urban terrain characteristics in these models is critical for accurate predictions of air flow, heating and cooling, and airborne contaminant concentrations in cities. A key component of urban terrain characterization is the description of building morphology (e.g., height, plan area, frontal area) and derived properties (e.g., roughness length). Methods to determine building morphological statistics range from manual field surveys to automated processing of digital building databases. In order to improve the quality and consistency of mesoscale meteorological and atmospheric dispersion modeling, a national dataset of building morphological statistics is needed. Currently, due to the expense and logistics of conducting detailed field surveys, building statistics have been derived for only small sections of a few cities. In most other cities, modeling projects rely on building statistics estimated using intuition and best guesses. There has been increasing emphasis in recent years to derive building statistics using digital building data or other data sources as a proxy for those data. Although there is a current expansion in public and private sector development of digital building data, at present there is insufficient data to derive a national building statistics database using automated analysis tools. Too many cities lack digital data on building footprints and heights and many of the cities having such data do so for only small areas. Due to the lack of sufficient digital building data, other datasets are used to estimate building statistics. Land use often serves as means to provide building statistics for a model domain, but the strength and consistency of the relationship between land use and building morphology is largely uncertain. In this paper, we investigate whether building statistics can be correlated to the underlying land
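
    For illustration, the sketch below computes three of the morphological statistics mentioned above (mean building height, plan-area fraction and frontal-area index) for a single analysis cell from a toy list of building footprints; all numbers are invented, and real workflows would derive them from digital footprint and height data.

      # (plan area m2, frontal width m, height m) for the buildings in one cell
      buildings = [(400.0, 20.0, 12.0), (900.0, 30.0, 25.0), (250.0, 15.0, 8.0)]
      cell_area = 10000.0                            # 100 m x 100 m analysis cell

      mean_height = sum(h for _, _, h in buildings) / len(buildings)
      lambda_p = sum(a for a, _, _ in buildings) / cell_area      # plan-area fraction
      lambda_f = sum(w * h for _, w, h in buildings) / cell_area  # frontal-area index
      print(f"mean height = {mean_height:.1f} m, lambda_p = {lambda_p:.2f}, lambda_f = {lambda_f:.2f}")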

  17. MAKER: An easy-to-use annotation pipeline designed for emerging model organism genomes

    PubMed Central

    Cantarel, Brandi L.; Korf, Ian; Robb, Sofia M.C.; Parra, Genis; Ross, Eric; Moore, Barry; Holt, Carson; Sánchez Alvarado, Alejandro; Yandell, Mark

    2008-01-01

    We have developed a portable and easily configurable genome annotation pipeline called MAKER. Its purpose is to allow investigators to independently annotate eukaryotic genomes and create genome databases. MAKER identifies repeats, aligns ESTs and proteins to a genome, produces ab initio gene predictions, and automatically synthesizes these data into gene annotations having evidence-based quality indices. MAKER is also easily trainable: Outputs of preliminary runs are used to automatically retrain its gene-prediction algorithm, producing higher-quality gene-models on subsequent runs. MAKER’s inputs are minimal, and its outputs can be used to create a GMOD database. Its outputs can also be viewed in the Apollo Genome browser; this feature of MAKER provides an easy means to annotate, view, and edit individual contigs and BACs without the overhead of a database. As proof of principle, we have used MAKER to annotate the genome of the planarian Schmidtea mediterranea and to create a new genome database, SmedGD. We have also compared MAKER’s performance to other published annotation pipelines. Our results demonstrate that MAKER provides a simple and effective means to convert a genome sequence into a community-accessible genome database. MAKER should prove especially useful for emerging model organism genome projects for which extensive bioinformatics resources may not be readily available. PMID:18025269

  18. Effect of diabetes and acute rejection on liver transplant outcomes: An analysis of the organ procurement and transplantation network/united network for organ sharing database.

    PubMed

    Kuo, Hung-Tien; Lum, Erik; Martin, Paul; Bunnapradist, Suphamai

    2016-06-01

    The effects of diabetic status and acute rejection (AR) on liver transplant outcomes are largely unknown. We studied 13,736 liver recipients from the United Network for Organ Sharing/Organ Procurement Transplant Network database who underwent transplantation between 2004 and 2007 with a functioning graft for greater than 1 year. The associations of pretransplant diabetes mellitus (PDM), new-onset diabetes after transplant (NODAT), and AR with allograft failure, all-cause mortality, and cardiovascular mortality were determined. To determine the differential and joint effects of diabetic status and AR on transplant outcomes, recipients were further stratified into 6 groups: neither (reference, n = 6600); NODAT alone (n = 2054); PDM alone (n = 2414); AR alone (n = 1448); NODAT and AR (n = 707); and PDM and AR (n = 513). An analysis by hepatitis C virus (HCV) serostatus was also performed (HCV recipients, n = 6384; non-HCV recipients, n = 5934). The median follow-up was 2537 days. The prevalence of PDM was 21.3%. At 1 year after transplant, the rates of NODAT and AR were 25.5% and 19.4%, respectively. Overall, PDM, NODAT, and AR were associated with increased risks for graft failure (PDM, hazard ratio [HR] = 1.31, P < 0.01; NODAT, HR = 1.11, P = 0.02; AR, HR = 1.28, P < 0.01). A multivariate Cox regression analysis of the 6 recipient groups demonstrated that NODAT alone was not significantly associated with any study outcomes. The presence of PDM, AR, NODAT and AR, and PDM and AR were associated with higher overall graft failure risk and mortality risk. The presence of PDM was associated with higher cardiovascular mortality risk. The analyses in both the HCV-positive and HCV-negative cohorts showed a similar trend as in the overall cohort. In conclusion, PDM and AR, but not NODAT, are associated with increased mortality and liver allograft failure. Liver Transplantation 22 796-804 2016 AASLD. PMID:26850091

  19. Organic acid modeling and model validation: Workshop summary. Final report

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  20. Organic acid modeling and model validation: Workshop summary

    SciTech Connect

    Sullivan, T.J.; Eilers, J.M.

    1992-08-14

    A workshop was held in Corvallis, Oregon on April 9--10, 1992 at the offices of E&S Environmental Chemistry, Inc. The purpose of this workshop was to initiate research efforts on the project entitled "Incorporation of an organic acid representation into MAGIC (Model of Acidification of Groundwater in Catchments) and testing of the revised model using independent data sources." The workshop was attended by a team of internationally recognized experts in the fields of surface water acid-base chemistry, organic acids, and watershed modeling. The rationale for the proposed research is based on the recent comparison between MAGIC model hindcasts and paleolimnological inferences of historical acidification for a set of 33 statistically selected Adirondack lakes. Agreement between diatom-inferred and MAGIC-hindcast lakewater chemistry in the earlier research had been less than satisfactory. Based on preliminary analyses, it was concluded that incorporation of a reasonable organic acid representation into the version of MAGIC used for hindcasting was the logical next step toward improving model agreement.

  1. High resolution topography and land cover databases for wind resource assessment using mesoscale models

    NASA Astrophysics Data System (ADS)

    Barranger, Nicolas; Stathopoulos, Christos; Kallos, Georges

    2013-04-01

    In wind resource assessment, mesoscale models can provide wind flow characteristics without the use of mast measurements. In complex terrain, local orography and land cover data assimilation are essential for accurately simulating the wind flow pattern within the atmospheric boundary layer. State-of-the-art mesoscale models such as RAMS usually provide orography and land use data with a resolution of 30 arc seconds (about 1 km). This resolution is necessary for resolving mesoscale phenomena accurately but not sufficient when the aim is to quantitatively estimate the wind flow characteristics passing over sharp hills or ridges. Furthermore, an abrupt change in land cover is not always captured by the model when a low resolution land use database is used. When land cover characteristics change dramatically, parameters such as roughness, albedo or soil moisture can strongly influence the meteorological characteristics of the atmospheric boundary layer and therefore need to be accurately assimilated into the model. In recent years, high resolution databases derived from satellite imagery (MODIS, SRTM, LandSat, SPOT) have become available online. Once converted to the input formats required by RAMS, the model needs to be evaluated. For this purpose, three new high resolution land cover databases and two topographical databases are implemented and tested in RAMS. The analysis of terrain variability is performed using basis functions of spatial frequency and amplitude. In practice, one- and two-dimensional Fast Fourier Transforms are applied to the terrain height to reveal the main characteristics of the local orography from the resulting wave spectrum. In this way, a comparison between different topographic data sets is performed, based on the terrain power spectrum contained in the terrain height input. Furthermore, this analysis is a powerful tool in the determination of the proper horizontal grid resolution required to resolve most of the energy containing spectrum
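
    The spectral analysis of terrain variability described above can be illustrated with a short sketch. This is not the authors' code: it assumes a synthetic terrain-height grid and computes a radially averaged power spectrum with the 2-D FFT, which is the kind of diagnostic used to compare topographic datasets and to judge what grid spacing resolves most of the terrain variance.

        import numpy as np

        def radial_power_spectrum(height, dx):
            """Radially averaged power spectrum of a 2-D terrain-height grid (grid spacing dx, metres)."""
            ny, nx = height.shape
            spec = np.abs(np.fft.fftshift(np.fft.fft2(height - height.mean())))**2
            kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
            ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
            kr = np.hypot(*np.meshgrid(kx, ky))                 # radial wavenumber (cycles/m)
            bins = np.linspace(0, kr.max(), 50)
            which = np.digitize(kr.ravel(), bins)
            power = np.array([spec.ravel()[which == i].mean() if np.any(which == i) else 0.0
                              for i in range(1, len(bins))])
            return 0.5 * (bins[1:] + bins[:-1]), power

        # Synthetic example: smooth large-scale terrain plus small-scale ridges.
        x = np.linspace(0, 50e3, 256)
        X, Y = np.meshgrid(x, x)
        z = 500 * np.sin(2 * np.pi * X / 25e3) + 50 * np.sin(2 * np.pi * Y / 2e3)
        k, p = radial_power_spectrum(z, dx=x[1] - x[0])
        # Spectral peaks near wavenumbers 1/(25 km) and 1/(2 km) show which wavelengths carry the terrain variance.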

  2. Collection Fusion Using Bayesian Estimation of a Linear Regression Model in Image Databases on the Web.

    ERIC Educational Resources Information Center

    Kim, Deok-Hwan; Chung, Chin-Wan

    2003-01-01

    Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…

  3. A neotropical Miocene pollen database employing image-based search and semantic modeling

    PubMed Central

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648

  4. The West Coast and Alaska Tsunami Warning Center Forecast Model Project Applied to an Operational Tsunami Threat-database

    NASA Astrophysics Data System (ADS)

    Knight, W.; Huang, P.; Whitmore, P.; Sterling, K.

    2008-12-01

    Continuous improvement in the NOAA/West Coast & Alaska Tsunami Warning Center (WCATWC) forecast model has allowed the consideration of new uses for this model. These improvements include a finer propagation mesh, more model sources and magnitudes, runup boundary conditions, and continuous, unbroken fine coastal meshes. The focus of this report is on a new operational use of the model at the WCATWC - creation of a threat database of tsunami impacts on US and Canadian coastlines. Since all forecast model data are pre-computed, this concept should be easily realized. One recent case which showed the utility of a model-based threat database was the April 1, 2007 Solomon Islands tsunami event. Tsunami energy maps clearly showed that the energy was directed southwest and posed no danger to regions to the northeast. Another case was the use of modeled tsunamis and their synthetic mareograms in the design of Gulf and Atlantic coast tsunami warning criteria. Currently, the only quantitative model data to appear in tsunami messages are ETAs for the leading edge of the tsunami wave train (the expected impact level is described in text, based on forecast model data). Since runups can now be forecasted for any coastal point, they can be used to constrain initial warning/watch/advisory messages to only threatened regions and can be saved to a database for later inclusion (along with ETAs) in tsunami bulletins. Present practice is to include all areas within a certain travel time or distance from the epicenter in the initial warning bulletin, regardless of the threat. Since watch-warning-advisory breakpoints are based in the later bulletins on forecasted wave heights, the database can also be used to refine the extent of the warned zones. With full modeled mareograms similarly saved to a database, additional wave information like initial recession / elevation, or ETAs for first and highest waves can be added to tsunami bulletins. By comparison of scaled model prediction to historic tide gauge

  5. The Neotoma Paleoecology Database

    NASA Astrophysics Data System (ADS)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  6. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  7. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high quality models needed to estimate population...

  8. Integrated Standardized Database/Model Management System: Study management concepts and requirements

    SciTech Connect

    Baker, R.; Swerdlow, S.; Schultz, R.; Tolchin, R.

    1994-02-01

    Data-sharing among planners and planning software for utility companies is the motivation for creating the Integrated Standardized Database (ISD) and Model Management System (MMS). The purpose of this document is to define the requirements for the ISD/MMS study management component in a manner that will enhance the use of the ISD. After an analysis period which involved EPRI member utilities across the United States, the study concept was formulated. It is defined in terms of its entities, relationships and its support processes, specifically for implementation as the key component of the MMS. From the study concept definition, requirements are derived. There are unique requirements, such as the necessity to interface with DSManager, EGEAS, IRPManager, MIDAS and UPM and there are standard information systems requirements, such as create, modify, delete and browse data. An initial ordering of the requirements is established, with a section devoted to future enhancements.

  9. Engineering the object-relation database model in O-Raid

    NASA Technical Reports Server (NTRS)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems and those of a general purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.

  10. Making designer mutants in model organisms

    PubMed Central

    Peng, Ying; Clark, Karl J.; Campbell, Jarryd M.; Panetta, Magdalena R.; Guo, Yi; Ekker, Stephen C.

    2014-01-01

    Recent advances in the targeted modification of complex eukaryotic genomes have unlocked a new era of genome engineering. From the pioneering work using zinc-finger nucleases (ZFNs), to the advent of the versatile and specific TALEN systems, and most recently the highly accessible CRISPR/Cas9 systems, we now possess an unprecedented ability to analyze developmental processes using sophisticated designer genetic tools. In this Review, we summarize the common approaches and applications of these still-evolving tools as they are being used in the most popular model developmental systems. Excitingly, these robust and simple genomic engineering tools also promise to revolutionize developmental studies using less well established experimental organisms. PMID:25336735

  11. JAK/STAT signalling--an executable model assembled from molecule-centred modules demonstrating a module-oriented database concept for systems and synthetic biology.

    PubMed

    Blätke, Mary Ann; Dittrich, Anna; Rohr, Christian; Heiner, Monika; Schaper, Fred; Marwan, Wolfgang

    2013-06-01

    Mathematical models of molecular networks regulating biological processes in cells or organisms are most frequently designed as sets of ordinary differential equations. Various modularisation methods have been applied to reduce the complexity of models, to analyse their structural properties, to separate biological processes, or to reuse model parts. Taking the JAK/STAT signalling pathway with the extensive combinatorial cross-talk of its components as a case study, we take a natural approach to modularisation by creating one module for each biomolecule. Each module consists of a Petri net and associated metadata and is organised in a database publicly accessible through a web interface (). The Petri net describes the reaction mechanism of a given biomolecule and its functional interactions with other components, including relevant conformational states. The database is designed to support the curation, documentation, version control, and update of individual modules, and to assist the user in automatically composing complex models from modules. Biomolecule-centred modules, associated metadata, and database support together allow the automatic creation of models by considering differential gene expression in given cell types or under certain physiological conditions or states of disease. Modularity also facilitates exploring the consequences of alternative molecular mechanisms by comparative simulation of automatically created models, even for users without mathematical skills. Models may be selectively executed as ODE, stochastic, qualitative, or hybrid models and exported in the SBML format. The fully automated generation of models of redesigned networks by metadata-guided modification of modules representing biomolecules with mutated function or specificity is proposed. PMID:23443149

  12. Combining a weed traits database with a population dynamics model predicts shifts in weed communities

    PubMed Central

    Storkey, J; Holst, N; Bøjer, O Q; Bigongiali, F; Bocci, G; Colbach, N; Dorner, Z; Riemens, M M; Sartorato, I; Sønderskov, M; Verschwele, A

    2015-01-01

    A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters. These relationships were used to build a generic model, operating at the level of functional traits, to simulate the impact of increasing herbicide and fertiliser use on virtual weeds along gradients of seed weight and maximum height. The model generated ‘fitness contours’ (defined as population growth rates) within this trait space in different scenarios, onto which two sets of weed species, defined as common or declining in the UK, were mapped. The effect of increasing inputs on the weed flora was successfully simulated; 77% of common species were predicted to have stable or increasing populations under high fertiliser and herbicide use, in contrast with only 29% of the species that have declined. Future development of the WTDB will aim to increase the number of species covered, incorporate a wider range of traits and analyse intraspecific variability under contrasting management and environments. PMID:26190870

  13. Drug-target interaction prediction: databases, web servers and computational models.

    PubMed

    Chen, Xing; Yan, Chenggang Clarence; Zhang, Xiaotian; Zhang, Xu; Dai, Feng; Yin, Jian; Zhang, Yongdong

    2016-07-01

    Identification of drug-target interactions is an important process in drug discovery. Although high-throughput screening and other biological assays are becoming available, experimental methods for drug-target interaction identification remain extremely costly, time-consuming and challenging. Therefore, various computational models have been developed to predict potential drug-target associations on a large scale. In this review, databases and web servers involved in drug-target identification and drug discovery are summarized. In addition, we introduce some state-of-the-art computational models for drug-target interaction prediction, including network-based and machine learning-based methods. For the machine learning-based methods, particular attention is paid to supervised and semi-supervised models, which differ essentially in their use of negative samples. Although many effective computational models have substantially improved drug-target interaction prediction, both network-based and machine learning-based methods have their disadvantages. Furthermore, we discuss future directions for network-based drug discovery and network approaches to personalized drug discovery based on personalized medicine, genome sequencing, tumor clone-based networks and cancer hallmark-based networks. Finally, we discuss a new evaluation and validation framework and the formulation of the drug-target interaction prediction problem as a more realistic regression task based on quantitative bioactivity data. PMID:26283676

  14. Modelling the Geographical Origin of Rice Cultivation in Asia Using the Rice Archaeological Database

    PubMed Central

    Silva, Fabio; Stevens, Chris J.; Weisskopf, Alison; Castillo, Cristina; Qin, Ling; Bevan, Andrew; Fuller, Dorian Q.

    2015-01-01

    We have compiled an extensive database of archaeological evidence for rice across Asia, including 400 sites from mainland East Asia, Southeast Asia and South Asia. This dataset is used to compare several models for the geographical origins of rice cultivation and infer the most likely region(s) for its origins and subsequent outward diffusion. The approach is based on regression modelling wherein goodness of fit is obtained from power law quantile regressions of the archaeologically inferred age versus a least-cost distance from the putative origin(s). The Fast Marching method is used to estimate the least-cost distances based on simple geographical features. The origin region that best fits the archaeobotanical data is also compared to other hypothetical geographical origins derived from the literature, including from genetics, archaeology and historical linguistics. The model that best fits all available archaeological evidence is a dual origin model with two centres for the cultivation and dispersal of rice focused on the Middle Yangtze and the Lower Yangtze valleys. PMID:26327225
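
    The regression step can be sketched in a few lines. The example below is illustrative only: it uses synthetic site ages and distances rather than the Rice Archaeological Database, approximates the least-cost distances (which the paper derives with the Fast Marching method, e.g. via a solver such as scikit-fmm) with plain straight-line distances, and picks an arbitrary high quantile for the power-law fit.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)

        # Synthetic "sites": distance from a putative origin (km) and archaeological age (years BP).
        dist = rng.uniform(50, 4000, size=300)
        age = 9000 * (dist / dist.max()) ** -0.15 * rng.uniform(0.4, 1.0, size=300)

        df = pd.DataFrame({"log_dist": np.log(dist), "log_age": np.log(age)})

        # A power law age = a * dist^b becomes linear in log-log space; fitting a high
        # quantile tracks the "front" of earliest arrival rather than the mean trend.
        fit = smf.quantreg("log_age ~ log_dist", df).fit(q=0.95)
        a, b = np.exp(fit.params["Intercept"]), fit.params["log_dist"]
        print(f"age ~ {a:.0f} * dist^{b:.3f}")
        # Goodness of fit of competing candidate origins can then be compared
        # (the paper uses an R^2-like measure for this purpose).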

  15. Human intestinal transporter database: QSAR modeling and virtual profiling of drug uptake, efflux and interactions

    PubMed Central

    Sedykh, Alexander; Fourches, Denis; Duan, Jianmin; Hucke, Oliver; Garneau, Michel; Zhu, Hao; Bonneau, Pierre; Tropsha, Alexander

    2013-01-01

    Purpose Membrane transporters mediate many biological effects of chemicals and play a major role in pharmacokinetics and drug resistance. The selection of viable drug candidates among biologically active compounds requires the assessment of their transporter interaction profiles. Methods Using public sources, we have assembled and curated the largest, to our knowledge, human intestinal transporter database (>5,000 interaction entries for >3,700 molecules). This data was used to develop thoroughly validated classification Quantitative Structure-Activity Relationship (QSAR) models of transport and/or inhibition of several major transporters including MDR1, BCRP, MRP1-4, PEPT1, ASBT, OATP2B1, OCT1, and MCT1. Results & Conclusions QSAR models have been developed with advanced machine learning techniques such as Support Vector Machines, Random Forest, and k Nearest Neighbors using Dragon and MOE chemical descriptors. These models afforded high external prediction accuracies of 71–100% estimated by 5-fold external validation, and showed hit retrieval rates with up to 20-fold enrichment in the virtual screening of DrugBank compounds. The compendium of predictive QSAR models developed in this study can be used for virtual profiling of drug candidates and/or environmental agents with the optimal transporter profiles. PMID:23269503
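
    A minimal sketch of the kind of classification QSAR workflow described above, using scikit-learn. The descriptor matrix here is random stand-in data (the study used Dragon and MOE descriptors on curated transporter data), and simple stratified k-fold cross-validation stands in for the study's 5-fold external validation protocol.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(1)

        # Stand-in data: 500 "compounds" x 200 "descriptors", binary transporter label
        # (e.g. substrate vs non-substrate for a hypothetical transporter endpoint).
        X = rng.normal(size=(500, 200))
        y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500) > 0).astype(int)

        model = RandomForestClassifier(n_estimators=500, random_state=0)
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        acc = cross_val_score(model, X, y, cv=cv, scoring="balanced_accuracy")
        print("balanced accuracy per fold:", np.round(acc, 2))

        # A fitted model can then rank an external library (virtual screening) by
        # predict_proba scores to enrich for compounds with the desired transporter profile.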

  16. Modelling the Geographical Origin of Rice Cultivation in Asia Using the Rice Archaeological Database.

    PubMed

    Silva, Fabio; Stevens, Chris J; Weisskopf, Alison; Castillo, Cristina; Qin, Ling; Bevan, Andrew; Fuller, Dorian Q

    2015-01-01

    We have compiled an extensive database of archaeological evidence for rice across Asia, including 400 sites from mainland East Asia, Southeast Asia and South Asia. This dataset is used to compare several models for the geographical origins of rice cultivation and infer the most likely region(s) for its origins and subsequent outward diffusion. The approach is based on regression modelling wherein goodness of fit is obtained from power law quantile regressions of the archaeologically inferred age versus a least-cost distance from the putative origin(s). The Fast Marching method is used to estimate the least-cost distances based on simple geographical features. The origin region that best fits the archaeobotanical data is also compared to other hypothetical geographical origins derived from the literature, including from genetics, archaeology and historical linguistics. The model that best fits all available archaeological evidence is a dual origin model with two centres for the cultivation and dispersal of rice focused on the Middle Yangtze and the Lower Yangtze valleys. PMID:26327225

  17. A lattice vibrational model using vibrational density of states for constructing thermodynamic databases (Invited)

    NASA Astrophysics Data System (ADS)

    Jacobs, M. H.; Van Den Berg, A. P.

    2013-12-01

    Thermodynamic databases are indispensable tools in materials science and mineral physics for deriving thermodynamic properties in regions of pressure-temperature-composition space for which experimental data are not available or scant. Because the number of phases and substances in a database is arbitrarily large, thermodynamic formalisms coupled to these databases are often kept as simple as possible to sustain computational efficiency. Although formalisms based on parameterizations of 1 bar thermodynamic data, commonly used in Calphad methodology, meet this requirement, physically unrealistic behavior in properties hampers their application in the pressure regime prevailing in the Earth's lower mantle. The application becomes especially cumbersome when they are applied to planetary mantles of massive super-Earth exoplanets or in the development of pressure scales, where Hugoniot data at extreme conditions are involved. Methods based on the Mie-Grüneisen-Debye formalism have the advantage that physically unrealistic behavior in thermodynamic properties is absent, but due to the simple construction of the vibrational density of states (VDoS), they lack engineering precision in the low-pressure regime, especially at 1 bar pressure, hampering the application of databases incorporating such formalisms to industrial processes. To obtain a method that is generally applicable in the complete stability range of a material, we developed a method based on an alternative use of Kieffer's lattice vibrational formalism. The method requires experimental data to constrain the model parameters and is therefore semi-empirical. It has the advantage that microscopic properties of substances, such as the VDoS, Grüneisen parameters and electronic and static lattice properties resulting from present-day ab-initio methods, can be incorporated to constrain a thermodynamic analysis of experimental data. It produces results free from physically unrealistic behavior at high pressure and temperature
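
    The central quantity in such lattice vibrational formalisms is the vibrational density of states, from which thermodynamic functions follow by integrating over the phonon spectrum. The sketch below evaluates the standard harmonic isochoric heat capacity for a hypothetical Debye-like VDoS; it illustrates the bookkeeping only and is not the Kieffer-based model of the abstract (the cutoff frequency and number of atoms are assumed).

        import numpy as np

        kB = 1.380649e-23       # J/K
        hbar = 1.054571817e-34  # J*s

        def heat_capacity(omega, g, T):
            """Harmonic C_V (J/K per formula unit) from a VDoS g(omega) normalized to 3n modes."""
            x = hbar * omega / (kB * T)
            integrand = g * x**2 * np.exp(x) / np.expm1(x)**2
            return kB * np.trapz(integrand, omega)

        # Hypothetical Debye-like VDoS: g ~ omega^2 up to a cutoff, normalized to 3n modes (n = 5 atoms).
        n_atoms, omega_d = 5, 1.0e14          # cutoff angular frequency (rad/s), assumed
        omega = np.linspace(1e11, omega_d, 2000)
        g = omega**2
        g *= 3 * n_atoms / np.trapz(g, omega)

        for T in (300.0, 1000.0, 2000.0):
            # At high T the result approaches the Dulong-Petit limit 3*n*kB.
            print(f"T = {T:6.0f} K  C_V = {heat_capacity(omega, g, T):.3e} J/K")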

  18. Innovative estimation of survival using log-normal survival modelling on ACCENT database

    PubMed Central

    Chapman, J W; O'Callaghan, C J; Hu, N; Ding, K; Yothers, G A; Catalano, P J; Shi, Q; Gray, R G; O'Connell, M J; Sargent, D J

    2013-01-01

    Background: The ACCENT database, with individual patient data for 20 898 patients from 18 colon cancer clinical trials, was used to support Food and Drug Administration (FDA) approval of 3-year disease-free survival as a surrogate for 5-year overall survival. We hypothesised substantive differences in survival estimation with log-normal modelling rather than standard Kaplan–Meier or Cox approaches. Methods: Time to relapse, disease-free survival, and overall survival were estimated using Kaplan–Meier, Cox, and log-normal approaches for male subjects aged 60–65 years, with stage III colon cancer, treated with 5-fluorouracil-based chemotherapy regimens (with 5FU), or with surgery alone (without 5FU). Results: Absolute differences between Cox and log-normal estimates with (without) 5FU varied by end point. The log-normal model had 5.8 (6.3)% higher estimated 3-year time to relapse than the Cox model; 4.8 (5.1)% higher 3-year disease-free survival; and 3.2 (2.2)% higher 5-year overall survival. Model checking indicated greater data support for the log-normal than the Cox model, with Cox and Kaplan–Meier estimates being more similar. All three model types indicate consistent evidence of treatment benefit on both 3-year disease-free survival and 5-year overall survival; patients allocated to 5FU had 5.0–6.7% higher 3-year disease-free survival and 5.3–6.8% higher 5-year overall survival. Conclusion: Substantive absolute differences between estimates of 3-year disease-free survival and 5-year overall survival with log-normal and Cox models were large enough to be clinically relevant, and warrant further consideration. PMID:23385733
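
    A minimal sketch of fitting the competing model families with the lifelines package. The data frame here is synthetic stand-in data, not the ACCENT data, and a single treatment flag stands in for the covariates and strata described above.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter, LogNormalAFTFitter, KaplanMeierFitter

        rng = np.random.default_rng(2)
        n = 1000

        # Synthetic stand-in cohort: treatment flag and log-normally distributed event times.
        treat = rng.integers(0, 2, n)
        time = np.exp(rng.normal(loc=1.2 + 0.3 * treat, scale=0.9, size=n))   # years
        event = (rng.uniform(size=n) < 0.7).astype(int)                        # 1 = event observed
        df = pd.DataFrame({"time": time, "event": event, "treat": treat})

        cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
        aft = LogNormalAFTFitter().fit(df, duration_col="time", event_col="event")
        km = KaplanMeierFitter().fit(df["time"], df["event"])

        # Compare 3-year survival estimates across the three approaches, as in the abstract.
        t3 = 3.0
        print("KM  S(3y):", float(km.predict(t3)))
        print("Cox S(3y):", cox.predict_survival_function(df, times=[t3]).mean(axis=1).iloc[0])
        print("AFT S(3y):", aft.predict_survival_function(df, times=[t3]).mean(axis=1).iloc[0])
        # Log-likelihoods/AIC from the fitted objects can be used for the model-checking step.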

  19. Coverage of whole proteome by structural genomics observed through protein homology modeling database

    PubMed Central

    Yamaguchi, Akihiro; Go, Mitiko

    2006-01-01

    We have been developing FAMSBASE, a protein homology-modeling database of whole ORFs predicted from genome sequences. The latest update of FAMSBASE (http://daisy.nagahama-i-bio.ac.jp/Famsbase/), which is based on the protein three-dimensional (3D) structures released by November 2003, contains modeled 3D structures for 368,724 open reading frames (ORFs) derived from the genomes of 276 species, namely 17 archaebacterial, 130 eubacterial, 18 eukaryotic and 111 phage genomes. Those 276 genomes are predicted to have 734,193 ORFs in total, and the current FAMSBASE contains protein 3D structures for approximately 50% of the ORF products. However, cases in which a modeled 3D structure covers the entire ORF product are rare. When the portion of an ORF covered by a modeled 3D structure is compared across the three kingdoms of life, approximately 60% of the ORFs in archaebacteria and eubacteria have modeled 3D structures covering almost the entire amino acid sequence; however, the percentage falls to about 30% in eukaryotes. When annual differences in the number of ORFs with modeled 3D structures are calculated, the fraction of modeled 3D structures of soluble proteins has increased by 5% for archaebacteria and by 7% for eubacteria over the last 3 years. Assuming that this rate is maintained and that determination of 3D structures for predicted disordered regions is unattainable, whole soluble-protein model structures of prokaryotes, excluding the putative disordered regions, will be in hand within 15 years. For eukaryotic proteins, they will be in hand within 25 years. The 3D structures we will have at those times are not the 3D structures of the entire proteins encoded by single ORFs, but the 3D structures of separate structural domains. Measuring or predicting the spatial arrangements of structural domains in an ORF will then become a key issue for structural genomics. PMID:17146617

  20. Carbonatites of the World, Explored Deposits of Nb and REE - Database and Grade and Tonnage Models

    USGS Publications Warehouse

    Berger, Vladimir I.; Singer, Donald A.; Orris, Greta J.

    2009-01-01

    This report is based on published tonnage and grade data on 58 Nb- and rare-earth-element (REE)-bearing carbonatite deposits that are mostly well explored and are partially mined or contain resources of these elements. The deposits represent only a part of the 527 known carbonatites around the world, but they are characterized by reliable quantitative data on ore tonnages and grades of niobium and REE. Grade and tonnage models are an important component of mineral resource assessments. Carbonatites are one of the main natural sources of niobium and rare-earth elements, whose economic importance is growing steadily. A purpose of this report is to update earlier publications. New information about known deposits, as well as data on new deposits published during the last decade, are incorporated in the present paper. The compiled database (appendix 1) contains 60 explored Nb- and REE-bearing carbonatite deposits; resources of 55 of these deposits are taken from publications. In the present updated grade-tonnage model we have added 24 deposits compared with the previous model of Singer (1998). Resources of most deposits are residuum ores in the upper part of carbonatite bodies. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types are present in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment, and the grade and tonnage models allow economists to

  1. AgBase: supporting functional modeling in agricultural organisms.

    PubMed

    McCarthy, Fiona M; Gresham, Cathy R; Buza, Teresia J; Chouvarine, Philippe; Pillai, Lakshmi R; Kumar, Ranjit; Ozkan, Seval; Wang, Hui; Manda, Prashanti; Arick, Tony; Bridges, Susan M; Burgess, Shane C

    2011-01-01

    AgBase (http://www.agbase.msstate.edu/) provides resources to facilitate modeling of functional genomics data and structural and functional annotation of agriculturally important animal, plant, microbe and parasite genomes. The website is redesigned to improve accessibility and ease of use, including improved search capabilities. Expanded capabilities include new dedicated pages for horse, cat, dog, cotton, rice and soybean. We currently provide 590 240 Gene Ontology (GO) annotations to 105 454 gene products in 64 different species, including GO annotations linked to transcripts represented on agricultural microarrays. For many of these arrays, this provides the only functional annotation available. GO annotations are available for download and we provide comprehensive, species-specific GO annotation files for 18 different organisms. The tools available at AgBase have been expanded and several existing tools improved based upon user feedback. One of seven new tools available at AgBase, GOModeler, supports hypothesis testing from functional genomics data. We host several associated databases and provide genome browsers for three agricultural pathogens. Moreover, we provide comprehensive training resources (including worked examples and tutorials) via links to Educational Resources at the AgBase website. PMID:21075795

  2. AgBase: supporting functional modeling in agricultural organisms

    PubMed Central

    McCarthy, Fiona M.; Gresham, Cathy R.; Buza, Teresia J.; Chouvarine, Philippe; Pillai, Lakshmi R.; Kumar, Ranjit; Ozkan, Seval; Wang, Hui; Manda, Prashanti; Arick, Tony; Bridges, Susan M.; Burgess, Shane C.

    2011-01-01

    AgBase (http://www.agbase.msstate.edu/) provides resources to facilitate modeling of functional genomics data and structural and functional annotation of agriculturally important animal, plant, microbe and parasite genomes. The website is redesigned to improve accessibility and ease of use, including improved search capabilities. Expanded capabilities include new dedicated pages for horse, cat, dog, cotton, rice and soybean. We currently provide 590 240 Gene Ontology (GO) annotations to 105 454 gene products in 64 different species, including GO annotations linked to transcripts represented on agricultural microarrays. For many of these arrays, this provides the only functional annotation available. GO annotations are available for download and we provide comprehensive, species-specific GO annotation files for 18 different organisms. The tools available at AgBase have been expanded and several existing tools improved based upon user feedback. One of seven new tools available at AgBase, GOModeler, supports hypothesis testing from functional genomics data. We host several associated databases and provide genome browsers for three agricultural pathogens. Moreover, we provide comprehensive training resources (including worked examples and tutorials) via links to Educational Resources at the AgBase website. PMID:21075795

  3. Contextual models of clinical publications for enhancing retrieval from full-text databases.

    PubMed Central

    Purcell, G. P.; Shortliffe, E. H.

    1995-01-01

    Conventional methods for retrieving information from the medical literature are imprecise and inefficient. Information retrieval systems employ unmanageable indexing vocabularies or use full-text representations that overwhelm the user with irrelevant information. This paper describes a document representation designed to improve the precision of searching in textual databases without significantly compromising recall. The representation augments simple text word representations with contextual models that reflect recurring semantic themes in clinical publications. Using this representation, a searcher may indicate both the terms of interest and the contexts in which they should occur. The contexts limit the potential interpretations of text words, and thus form the basis for more precise searching. In this paper, we discuss the shortcomings of traditional retrieval systems and describe our context-based representation. Improved retrieval performance with contextual models is illustrated by example, and a more extensive study is proposed. We present an evaluation of the contextual models as an indexing scheme, using a variation of the traditional inter-indexer consistency experiments, and we demonstrate that contextual indexing is reproducible by minimally trained physicians and medical students. PMID:8563412

  4. A database of lumbar spinal mechanical behavior for validation of spinal analytical models.

    PubMed

    Stokes, Ian A F; Gardner-Morse, Mack

    2016-03-21

    Data from two experimental studies with eight specimens each of spinal motion segments and/or intervertebral discs are presented in a form that can be used for comparison with finite element model predictions. The data include the effect of compressive preload (0, 250 and 500 N) with quasistatic cyclic loading (0.0115 Hz) and the effect of loading frequency (1, 0.1, 0.01 and 0.001 Hz) with a physiological compressive preload (mean 642 N). Specimens were tested with displacements in each of six degrees of freedom (three translations and three rotations) about defined anatomical axes. The three forces and three moments in the corresponding axis system were recorded during each test. Linearized stiffness matrices were calculated that could be used in multi-segmental biomechanical models of the spine and these matrices were analyzed to determine whether off-diagonal terms and symmetry assumptions should be included. These databases of lumbar spinal mechanical behavior under physiological conditions quantify behaviors that should be present in finite element model simulations. The addition of more specimens to identify sources of variability associated with physical dimensions, degeneration, and other variables would be beneficial. Supplementary data provide the recorded data and Matlab® codes for reading files. Linearized stiffness matrices derived from the tests at different preloads revealed few significant unexpected off-diagonal terms and little evidence of significant matrix asymmetry. PMID:26900035
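
    A sketch of how a linearized 6 x 6 stiffness matrix can be recovered from such tests and checked for symmetry and off-diagonal coupling. The displacement and load arrays below are random stand-ins for the published supplementary data, and the "true" matrix is an assumed value for the demonstration only.

        import numpy as np

        rng = np.random.default_rng(3)

        # Stand-in data: n samples of 6-DOF displacements (3 translations, 3 rotations)
        # and the corresponding 6 measured loads (3 forces, 3 moments).
        n = 200
        K_true = np.diag([300.0, 320.0, 900.0, 5.0, 6.0, 4.0])   # assumed "true" stiffness for the demo
        K_true[0, 4] = K_true[4, 0] = 25.0                        # one coupling (off-diagonal) term
        D = rng.normal(scale=1e-3, size=(n, 6))                   # applied displacements/rotations
        F = D @ K_true.T + rng.normal(scale=1e-3, size=(n, 6))    # recorded loads with measurement noise

        # Least-squares estimate of K from F ~ D K^T (row-wise linear model).
        K_hat, *_ = np.linalg.lstsq(D, F, rcond=None)
        K_hat = K_hat.T

        asym = np.abs(K_hat - K_hat.T).max() / np.abs(K_hat).max()
        print("estimated K (rounded):\n", np.round(K_hat, 1))
        print(f"relative asymmetry: {asym:.3f}  # small values support a symmetric-matrix assumption")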

  5. Spectral Line-Shape Model to Replace the Voigt Profile in Spectroscopic Databases

    NASA Astrophysics Data System (ADS)

    Lisak, Daniel; Ngo, Ngoc Hoa; Tran, Ha; Hartmann, Jean-Michel

    2014-06-01

    The standard description of molecular line shapes in spectral databases and radiative transfer codes is based on the Voigt profile. It is well known that its simplifying assumptions of free absorber motion and collisional parameters independent of absorber velocity lead to systematic errors in the analysis of experimental spectra and the retrieval of gas concentrations. We demonstrate that the partially correlated quadratic speed-dependent hard-collision profile (pCqSDHCP) is a good candidate to replace the Voigt profile in the next generations of spectroscopic databases. This profile takes into account the following physical effects: Doppler broadening, pressure broadening and shifting of the line, velocity-changing collisions, the speed dependence of pressure broadening and shifting, and correlations between velocity- and phase/state-changing collisions. The speed dependence of pressure broadening and shifting is incorporated into the profile in the so-called quadratic approximation. Velocity-changing collisions lead to the Dicke narrowing effect; however, in many cases correlations between velocity- and phase/state-changing collisions may effectively reduce the observed Dicke narrowing. The hard-collision model of velocity-changing collisions is also known as the Nelkin-Ghatak or Rautian model. The applicability of the pCqSDHCP to different molecular systems was tested on calculated and experimental spectra of molecules such as H2, O2, CO2 and H2O over a wide span of pressures. For all considered systems, the pCqSDHCP is able to describe molecular spectra at least an order of magnitude better than the Voigt profile, with all fitted parameters being linear with pressure. In most cases the pCqSDHCP can reproduce the reference spectra to 0.2% or better, which fulfills the requirements of the most demanding remote-sensing applications. An important advantage of the pCqSDHCP is that a fast algorithm for its computation was developed and allows
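
    For reference, the Voigt profile that the abstract proposes to supersede can be evaluated from the complex Faddeeva function; the speed-dependent hard-collision profile itself requires additional line parameters and is not reproduced here. This is a generic sketch with arbitrary line parameters, not code from the study.

        import numpy as np
        from scipy.special import wofz

        def voigt(nu, nu0, doppler_sigma, lorentz_gamma):
            """Area-normalized Voigt profile: Gaussian (Doppler) convolved with Lorentzian (pressure)."""
            z = ((nu - nu0) + 1j * lorentz_gamma) / (doppler_sigma * np.sqrt(2.0))
            return wofz(z).real / (doppler_sigma * np.sqrt(2.0 * np.pi))

        # Arbitrary illustrative line: centre 6000 cm^-1, Doppler sigma 0.01 cm^-1, Lorentz width 0.02 cm^-1.
        nu = np.linspace(5999.5, 6000.5, 2001)
        profile = voigt(nu, 6000.0, 0.01, 0.02)
        # The integrated area is ~1, apart from the truncated far Lorentzian wings.
        print("peak value:", profile.max(), " area:", np.trapz(profile, nu))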

  6. Modeling plant-level industrial energy demand with the Manufacturing Energy Consumption Survey (MECS) database and the Longitudinal Research Database (LRD)

    SciTech Connect

    Boyd, G.A.; Neifer, M.J.; Ross, M.H.

    1992-08-01

    This report discusses Phase 1 of a project to help the US Department of Energy determine the applicability of the Manufacturing Energy Consumption Survey (MECS) database and the Longitudinal Research Database (LRD) for industrial modeling and analysis. Research was conducted at the US Bureau of the Census; disclosure of the MECS/LRD data used as a basis for this report was subject to the Bureau's confidentiality restriction. The project is designed to examine the plant-level energy behavior of energy-intensive industries. In Phase 1, six industries at the four-digit standard industrial classification (SIC) level were studied. The utility of analyzing four-digit SIC samples at the plant level is mixed, but the plant-level structure of the MECS/LRD makes analyzing samples disaggregated below the four-digit level feasible, particularly when the MECS/LRD data are combined with trade association or other external data. When external data are used, the validity of using value of shipments as a measure of output for analyzing energy use can also be examined. Phase 1 results indicate that technical efficiency and the distribution of energy intensities vary significantly at the plant level. They also show that the six industries exhibit monopsony-like behavior; that is, energy prices vary significantly at the plant level, with lower prices being correlated with a higher level of energy consumption. Finally, they show to what degree selected energy-intensive products are manufactured outside their primary industry.

  7. CAZymes Analysis Toolkit (CAT): Web service for searching and analyzing carbohydrate-active enzymes in a newly sequenced organism using the CAZy database

    SciTech Connect

    Karpinets, Tatiana V; Park, Byung; Syed, Mustafa H; Uberbacher, Edward C; Leuze, Michael Rex

    2010-01-01

    The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire non-redundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains (DUF) and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit (CAT), and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.
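
    The second annotation approach, linking protein family domains to CAZy families by association rule learning, can be sketched with simple support/confidence scoring of one-item rules. The presence/absence table below is a toy stand-in (the domain and family labels are illustrative), and a full Apriori-style miner would be used in practice rather than this hand-rolled loop; this is not the authors' implementation.

        import pandas as pd

        # Toy presence/absence table: each row is a CAZy-annotated protein, columns indicate
        # (illustrative) Pfam domains and CAZy family labels.
        data = pd.DataFrame(
            [
                {"PF_A": 1, "PF_B": 0, "GH5": 1, "GH1": 0},
                {"PF_A": 1, "PF_B": 0, "GH5": 1, "GH1": 0},
                {"PF_A": 0, "PF_B": 1, "GH5": 0, "GH1": 1},
                {"PF_A": 0, "PF_B": 1, "GH5": 0, "GH1": 1},
                {"PF_A": 1, "PF_B": 1, "GH5": 1, "GH1": 1},
            ],
            dtype=bool,
        )

        domains, families, n = ["PF_A", "PF_B"], ["GH5", "GH1"], len(data)

        # One-item association rules domain -> family, scored by support and confidence.
        rules = []
        for d in domains:
            for f in families:
                both = (data[d] & data[f]).sum()
                if data[d].sum():
                    rules.append({"rule": f"{d} -> {f}",
                                  "support": both / n,
                                  "confidence": both / data[d].sum()})

        print(pd.DataFrame(rules).sort_values("confidence", ascending=False))
        # High-confidence correspondences can then annotate proteins in a new genome that
        # carry the domain but have no direct CAZy sequence hit.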

  8. Defining new criteria for selection of cell-based intestinal models using publicly available databases

    PubMed Central

    2012-01-01

    Background The criteria for choosing relevant cell lines among a vast panel of available intestinal-derived lines exhibiting a wide range of functional properties are still ill-defined. The objective of this study was, therefore, to establish objective criteria for choosing relevant cell lines to assess their appropriateness as tumor models as well as for drug absorption studies. Results We made use of publicly available expression signatures and cell based functional assays to delineate differences between various intestinal colon carcinoma cell lines and normal intestinal epithelium. We have compared a panel of intestinal cell lines with patient-derived normal and tumor epithelium and classified them according to traits relating to oncogenic pathway activity, epithelial-mesenchymal transition (EMT) and stemness, migratory properties, proliferative activity, transporter expression profiles and chemosensitivity. For example, SW480 represent an EMT-high, migratory phenotype and scored highest in terms of signatures associated to worse overall survival and higher risk of recurrence based on patient derived databases. On the other hand, differentiated HT29 and T84 cells showed gene expression patterns closest to tumor bulk derived cells. Regarding drug absorption, we confirmed that differentiated Caco-2 cells are the model of choice for active uptake studies in the small intestine. Regarding chemosensitivity we were unable to confirm a recently proposed association of chemo-resistance with EMT traits. However, a novel signature was identified through mining of NCI60 GI50 values that allowed to rank the panel of intestinal cell lines according to their drug responsiveness to commonly used chemotherapeutics. Conclusions This study presents a straightforward strategy to exploit publicly available gene expression data to guide the choice of cell-based models. While this approach does not overcome the major limitations of such models, introducing a rank order of selected

  9. The Time Is Right to Focus on Model Organism Metabolomes.

    PubMed

    Edison, Arthur S; Hall, Robert D; Junot, Christophe; Karp, Peter D; Kurland, Irwin J; Mistrik, Robert; Reed, Laura K; Saito, Kazuki; Salek, Reza M; Steinbeck, Christoph; Sumner, Lloyd W; Viant, Mark R

    2016-01-01

    Model organisms are an essential component of biological and biomedical research that can be used to study specific biological processes. These organisms are in part selected for facile experimental study. However, just as importantly, intensive study of a small number of model organisms yields important synergies as discoveries in one area of science for a given organism shed light on biological processes in other areas, even for other organisms. Furthermore, the extensive knowledge bases compiled for each model organism enable systems-level understandings of these species, which enhance the overall biological and biomedical knowledge for all organisms, including humans. Building upon extensive genomics research, we argue that the time is now right to focus intensively on model organism metabolomes. We propose a grand challenge for metabolomics studies of model organisms: to identify and map all metabolites onto metabolic pathways, to develop quantitative metabolic models for model organisms, and to relate organism metabolic pathways within the context of evolutionary metabolomics, i.e., phylometabolomics. These efforts should focus on a series of established model organisms in microbial, animal and plant research. PMID:26891337

  10. The Time Is Right to Focus on Model Organism Metabolomes

    PubMed Central

    Edison, Arthur S.; Hall, Robert D.; Junot, Christophe; Karp, Peter D.; Kurland, Irwin J.; Mistrik, Robert; Reed, Laura K.; Saito, Kazuki; Salek, Reza M.; Steinbeck, Christoph; Sumner, Lloyd W.; Viant, Mark R.

    2016-01-01

    Model organisms are an essential component of biological and biomedical research that can be used to study specific biological processes. These organisms are in part selected for facile experimental study. However, just as importantly, intensive study of a small number of model organisms yields important synergies as discoveries in one area of science for a given organism shed light on biological processes in other areas, even for other organisms. Furthermore, the extensive knowledge bases compiled for each model organism enable systems-level understandings of these species, which enhance the overall biological and biomedical knowledge for all organisms, including humans. Building upon extensive genomics research, we argue that the time is now right to focus intensively on model organism metabolomes. We propose a grand challenge for metabolomics studies of model organisms: to identify and map all metabolites onto metabolic pathways, to develop quantitative metabolic models for model organisms, and to relate organism metabolic pathways within the context of evolutionary metabolomics, i.e., phylometabolomics. These efforts should focus on a series of established model organisms in microbial, animal and plant research. PMID:26891337

  11. Earthquake Model of the Middle East (EMME) Project: Active Fault Database for the Middle East Region

    NASA Astrophysics Data System (ADS)

    Gülen, L.; Wp2 Team

    2010-12-01

    The Earthquake Model of the Middle East (EMME) Project is a regional project of the umbrella GEM (Global Earthquake Model) project (http://www.emme-gem.org/). EMME project region includes Turkey, Georgia, Armenia, Azerbaijan, Syria, Lebanon, Jordan, Iran, Pakistan, and Afghanistan. Both EMME and SHARE projects overlap and Turkey becomes a bridge connecting the two projects. The Middle East region is tectonically and seismically very active part of the Alpine-Himalayan orogenic belt. Many major earthquakes have occurred in this region over the years causing casualties in the millions. The EMME project will use PSHA approach and the existing source models will be revised or modified by the incorporation of newly acquired data. More importantly the most distinguishing aspect of the EMME project from the previous ones will be its dynamic character. This very important characteristic is accomplished by the design of a flexible and scalable database that will permit continuous update, refinement, and analysis. A digital active fault map of the Middle East region is under construction in ArcGIS format. We are developing a database of fault parameters for active faults that are capable of generating earthquakes above a threshold magnitude of Mw≥5.5. Similar to the WGCEP-2007 and UCERF-2 projects, the EMME project database includes information on the geometry and rates of movement of faults in a “Fault Section Database”. The “Fault Section” concept has a physical significance, in that if one or more fault parameters change, a new fault section is defined along a fault zone. So far over 3,000 Fault Sections have been defined and parameterized for the Middle East region. A separate “Paleo-Sites Database” includes information on the timing and amounts of fault displacement for major fault zones. A digital reference library that includes the pdf files of the relevant papers, reports is also being prepared. Another task of the WP-2 of the EMME project is to prepare

  12. Data-based information gain on the response behaviour of hydrological models at catchment scale

    NASA Astrophysics Data System (ADS)

    Willems, Patrick

    2013-04-01

    A data-based approach is presented to analyse the response behaviour of hydrological models at the catchment scale. The approach starts with a number of sequential time series processing steps, applied to available rainfall, ETo and river flow observation series. These include separation of the high-frequency (e.g., hourly, daily) river flow series into subflows, splitting of the series into nearly independent quick- and slow-flow hydrograph periods, and the extraction of nearly independent peak and low flows. Quick-, inter- and slow-subflow recession behaviour, sub-responses to rainfall and soil water storage are derived from the time series data. This data-based information on the catchment response behaviour can be applied for: - Model-structure identification and case-specific construction of lumped conceptual models for gauged catchments, or diagnostic evaluation of existing model structures; - Intercomparison of runoff responses for gauged catchments in a river basin, in order to identify similarity or significant differences between stations or between time periods, and relate these differences to spatial differences or temporal changes in catchment characteristics; - (based on the evaluation of the temporal changes in the previous point:) Detection of temporal changes/trends and identification of their causes: climate trends, or land use changes; - Identification of asymptotic properties of the rainfall-runoff behaviour towards extreme peak or low flow conditions (for a given catchment) or towards extreme catchment conditions (for regionalization, ungauged basin prediction purposes); hence evaluating the performance of the model in making extrapolations beyond the range of available stations' data; - (based on the evaluation in the previous point:) Evaluation of the usefulness of the model for making extrapolations to more extreme climate conditions projected by, for instance, climate models. Examples are provided for river basins in Belgium, Ethiopia, Kenya
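
    As an illustration of the subflow-separation step mentioned above, the sketch below applies a one-parameter recursive digital filter (the Lyne-Hollick form, used here as a generic stand-in rather than the paper's exact procedure) to split a flow series into slow (base) flow and quick flow.

```python
import numpy as np

def baseflow_filter(q, alpha=0.925):
    """One-pass Lyne-Hollick recursive digital filter.
    q: array of streamflow values; alpha: filter (recession) parameter.
    Returns (baseflow, quickflow). Illustrative stand-in for the
    subflow-separation step, not the paper's exact procedure."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])  # constrain quick flow to [0, q]
    base = q - quick
    return base, quick

q_obs = np.array([5, 5, 20, 60, 35, 18, 10, 7, 6, 5], dtype=float)
base, quick = baseflow_filter(q_obs)
print(np.round(base, 1))
```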

  13. iGNM 2.0: the Gaussian network model database for biomolecular structural dynamics.

    PubMed

    Li, Hongchun; Chang, Yuan-Yu; Yang, Lee-Wei; Bahar, Ivet

    2016-01-01

    Gaussian network model (GNM) is a simple yet powerful model for investigating the dynamics of proteins and their complexes. GNM analysis became a broadly used method for assessing the conformational dynamics of biomolecular structures with the development of a user-friendly interface and database, iGNM, in 2005. We present here an updated version, iGNM 2.0 http://gnmdb.csb.pitt.edu/, which covers more than 95% of the structures currently available in the Protein Data Bank (PDB). Advanced search and visualization capabilities, both 2D and 3D, permit users to retrieve information on inter-residue and inter-domain cross-correlations, cooperative modes of motion, the location of hinge sites and energy localization spots. The ability of iGNM 2.0 to provide structural dynamics data on the large majority of PDB structures and, in particular, on their biological assemblies makes it a useful resource for establishing the bridge between structure, dynamics and function. PMID:26582920
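
    For orientation, the GNM behind iGNM reduces a structure to a network of C-alpha nodes connected within a cutoff distance; residue fluctuations then follow from the pseudo-inverse of the resulting Kirchhoff matrix. A minimal sketch, using synthetic coordinates and a commonly used cutoff of about 7.3 Å (the server's exact settings are not assumed here):

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.3):
    """Gaussian network model: build the Kirchhoff (connectivity) matrix from
    C-alpha coordinates and return relative mean-square fluctuations
    (diagonal of its pseudo-inverse, with the zero mode removed)."""
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = -(d < cutoff).astype(float)          # off-diagonal: -1 for contacts
    np.fill_diagonal(gamma, 0.0)
    np.fill_diagonal(gamma, -gamma.sum(axis=1))  # diagonal: number of contacts
    ginv = np.linalg.pinv(gamma, rcond=1e-8)     # pseudo-inverse drops the zero mode
    return np.diag(ginv)                         # proportional to <dR_i . dR_i>

# toy "structure": 20 points along a helix (stand-in for real C-alpha coordinates)
t = np.linspace(0, 4 * np.pi, 20)
coords = np.c_[3 * np.cos(t), 3 * np.sin(t), 1.5 * t]
print(np.round(gnm_fluctuations(coords), 3))
```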

  14. The Mouse Genome Database (MGD): facilitating mouse as a model for human biology and disease

    PubMed Central

    Eppig, Janan T.; Blake, Judith A.; Bult, Carol J.; Kadin, James A.; Richardson, Joel E.

    2015-01-01

    The Mouse Genome Database (MGD, http://www.informatics.jax.org) serves the international biomedical research community as the central resource for integrated genomic, genetic and biological data on the laboratory mouse. To facilitate use of mouse as a model in translational studies, MGD maintains a core of high-quality curated data and integrates experimentally and computationally generated data sets. MGD maintains a unified catalog of genes and genome features, including functional RNAs, QTL and phenotypic loci. MGD curates and provides functional and phenotype annotations for mouse genes using the Gene Ontology and Mammalian Phenotype Ontology. MGD integrates phenotype data and associates mouse genotypes to human diseases, providing critical mouse–human relationships and access to repositories holding mouse models. MGD is the authoritative source of nomenclature for genes, genome features, alleles and strains following guidelines of the International Committee on Standardized Genetic Nomenclature for Mice. A new addition to MGD, the Human–Mouse: Disease Connection, allows users to explore gene–phenotype–disease relationships between human and mouse. MGD has also updated search paradigms for phenotypic allele attributes, incorporated incidental mutation data, added a module for display and exploration of genes and microRNA interactions and adopted the JBrowse genome browser. MGD resources are freely available to the scientific community. PMID:25348401

  15. The Mouse Genome Database (MGD): facilitating mouse as a model for human biology and disease.

    PubMed

    Eppig, Janan T; Blake, Judith A; Bult, Carol J; Kadin, James A; Richardson, Joel E

    2015-01-01

    The Mouse Genome Database (MGD, http://www.informatics.jax.org) serves the international biomedical research community as the central resource for integrated genomic, genetic and biological data on the laboratory mouse. To facilitate use of mouse as a model in translational studies, MGD maintains a core of high-quality curated data and integrates experimentally and computationally generated data sets. MGD maintains a unified catalog of genes and genome features, including functional RNAs, QTL and phenotypic loci. MGD curates and provides functional and phenotype annotations for mouse genes using the Gene Ontology and Mammalian Phenotype Ontology. MGD integrates phenotype data and associates mouse genotypes to human diseases, providing critical mouse-human relationships and access to repositories holding mouse models. MGD is the authoritative source of nomenclature for genes, genome features, alleles and strains following guidelines of the International Committee on Standardized Genetic Nomenclature for Mice. A new addition to MGD, the Human-Mouse: Disease Connection, allows users to explore gene-phenotype-disease relationships between human and mouse. MGD has also updated search paradigms for phenotypic allele attributes, incorporated incidental mutation data, added a module for display and exploration of genes and microRNA interactions and adopted the JBrowse genome browser. MGD resources are freely available to the scientific community. PMID:25348401

  16. iGNM 2.0: the Gaussian network model database for biomolecular structural dynamics

    PubMed Central

    Li, Hongchun; Chang, Yuan-Yu; Yang, Lee-Wei; Bahar, Ivet

    2016-01-01

    Gaussian network model (GNM) is a simple yet powerful model for investigating the dynamics of proteins and their complexes. GNM analysis became a broadly used method for assessing the conformational dynamics of biomolecular structures with the development of a user-friendly interface and database, iGNM, in 2005. We present here an updated version, iGNM 2.0 http://gnmdb.csb.pitt.edu/, which covers more than 95% of the structures currently available in the Protein Data Bank (PDB). Advanced search and visualization capabilities, both 2D and 3D, permit users to retrieve information on inter-residue and inter-domain cross-correlations, cooperative modes of motion, the location of hinge sites and energy localization spots. The ability of iGNM 2.0 to provide structural dynamics data on the large majority of PDB structures and, in particular, on their biological assemblies makes it a useful resource for establishing the bridge between structure, dynamics and function. PMID:26582920

  17. A picture of gene sampling/expression in model organisms using ESTs and KOG proteins.

    PubMed

    Mudado, Maurício de Alvarenga; Ortega, José Miguel

    2006-01-01

    The expressed sequence tag (EST) is an instrument of gene discovery. When available in large numbers, ESTs may be used to estimate gene expression. We analyzed gene expression by EST sampling, using the KOG database, which includes 24,154 proteins from Arabidopsis thaliana (Ath), 17,101 from Caenorhabditis elegans (Cel), 10,517 from Drosophila melanogaster (Dme), and 26,324 from Homo sapiens (Hsa), and 178,538 ESTs for Ath, 215,200 for Cel, 261,404 for Dme, and 1,941,556 for Hsa. BLAST similarity searches were performed to assign KOG annotation to all ESTs. We determined the amount of gene sampling or expression dedicated to each KOG functional category by each model organism. We found that the 25% most-expressed genes are frequently shared among these organisms. The KOG protein classification allowed the EST sampling calculation throughout the glycolysis pathway. We calculated the KOG cluster coverage and inferred that 50 to 80 K ESTs would efficiently cover 80-85% of the KOG database clusters in a transcriptome project. Since KOG is a database biased towards housekeeping genes, this is probably the number of ESTs needed to include the more commonly expressed genes in these organisms. We also examined a still unaddressed question: what is the minimum number of ESTs that should be produced in a transcriptome project? PMID:16755515
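
    The cluster-coverage estimate quoted above can be approximated in outline by rarefaction: repeatedly subsample N ESTs from an EST-to-cluster assignment table and count the fraction of KOG clusters hit. The sketch below uses synthetic assignments with a skewed frequency distribution, not the actual KOG data.

```python
import random

def cluster_coverage(est_to_kog, n_sample, n_total_clusters, trials=10, seed=0):
    """Average fraction of KOG clusters hit when n_sample ESTs are drawn
    at random from the EST -> cluster assignment list (rarefaction)."""
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(trials):
        sample = rng.sample(est_to_kog, n_sample)
        hits += len(set(sample)) / n_total_clusters
    return hits / trials

# synthetic example: 5000 clusters, 200k ESTs with strongly skewed sampling
n_clusters = 5000
assignments = [min(int(random.expovariate(1 / 800)), n_clusters - 1) for _ in range(200_000)]
for n in (10_000, 50_000, 80_000):
    print(n, round(cluster_coverage(assignments, n, n_clusters), 2))
```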

  18. GWIDD: Genome-wide protein docking database

    PubMed Central

    Kundrotas, Petras J.; Zhu, Zhengwei; Vakser, Ilya A.

    2010-01-01

    Structural information on interacting proteins is important for understanding life processes at the molecular level. Genome-wide docking database is an integrated resource for structural studies of protein–protein interactions on the genome scale, which combines the available experimental data with models obtained by docking techniques. Current database version (August 2009) contains 25 559 experimental and modeled 3D structures for 771 organisms spanned over the entire universe of life from viruses to humans. Data are organized in a relational database with user-friendly search interface allowing exploration of the database content by a number of parameters. Search results can be interactively previewed and downloaded as PDB-formatted files, along with the information relevant to the specified interactions. The resource is freely available at http://gwidd.bioinformatics.ku.edu. PMID:19900970

  19. Model estimation of cerebral hemodynamics between blood flow and volume changes: a data-based modeling approach.

    PubMed

    Wei, Hua-Liang; Zheng, Ying; Pan, Yi; Coca, Daniel; Li, Liang-Min; Mayhew, J E W; Billings, Stephen A

    2009-06-01

    It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an error-in-the-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV. PMID:19174333
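
    For context, the classical (unregularized) total least-squares step for an ARX-type regression can be written in a few lines via the SVD; the paper's RTLS adds regularization to this problem, which is not reproduced here. A sketch on simulated CBF/CBV-like data:

```python
import numpy as np

def tls(A, b):
    """Classical total least squares: solve A x ~ b allowing errors in both
    A and b, via the smallest right singular vector of [A | b]."""
    C = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]

# toy ARX example: dCBV(t) depends on dCBV(t-1) and dCBF(t)
rng = np.random.default_rng(1)
n = 300
cbf = rng.standard_normal(n)
cbv = np.zeros(n)
for t in range(1, n):
    cbv[t] = 0.8 * cbv[t - 1] + 0.3 * cbf[t]
A = np.c_[cbv[:-1], cbf[1:]] + 0.02 * rng.standard_normal((n - 1, 2))  # noisy regressors
b = cbv[1:] + 0.02 * rng.standard_normal(n - 1)                        # noisy output
print(np.round(tls(A, b), 3))   # close to [0.8, 0.3]
```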

  20. Modelling the fate of persistent organic pollutants in europe: parameterization of a gridded distribution model

    SciTech Connect

    Prevedouros, Konstantinos; MacLeod, Matthew; Jones, Kevin C.; Sweetman, Andrew J.

    2003-12-01

    A regionally segmented multimedia fate model for the European continent is described, together with an illustrative steady-state case study examining the fate of γ-HCH (lindane) based on 1998 emission data. The study builds on the regionally segmented BETR North America model structure and describes the regional segmentation and parameterization for Europe. The European continent is described by a 5 degree x 5 degree grid, leading to 50 regions together with four perimetric boxes representing regions buffering the European environment. Each zone comprises seven compartments: upper and lower atmosphere, soil, vegetation, fresh water, sediment and coastal water. Inter-region flows of air and water are described, exploiting information originating from GIS databases and other georeferenced data. The model is primarily designed to describe the fate of Persistent Organic Pollutants (POPs) within the European environment by examining chemical partitioning and degradation in each region, and inter-region transport either under steady-state conditions or fully dynamically. A test case scenario is presented which examines the fate of estimated spatially resolved atmospheric emissions of lindane throughout Europe within the lower atmosphere and surface soil compartments. In accordance with the predominant wind direction in Europe, the model predicts high concentrations close to the major sources as well as towards Central and Northeast regions. Elevated soil concentrations in Scandinavian soils provide further evidence of the potential of increased scavenging by forests and subsequent accumulation by organic-rich terrestrial surfaces. Initial model predictions have revealed a factor of 5-10 underestimation of lindane concentrations in the atmosphere. This is explained by an underestimation of source strength and/or an underestimation of European background levels. The model presented can further be used to predict deposition fluxes and chemical
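
    At steady state, a segmented multimedia model of this kind reduces to a linear mass balance: emissions into each compartment are balanced by degradation plus inter-compartment transfers. The toy sketch below solves such a balance for three generic compartments; the rate constants are invented for illustration and are unrelated to the BETR parameterization.

```python
import numpy as np

# compartments: 0 = air, 1 = soil, 2 = water (toy, single region)
k_deg = np.array([0.5, 0.01, 0.05])      # first-order degradation, 1/day (invented)
k_transfer = np.array([                  # k_transfer[i, j]: transfer j -> i, 1/day (invented)
    [0.0, 0.001, 0.002],
    [0.02, 0.0, 0.0],
    [0.03, 0.0005, 0.0],
])
emission = np.array([100.0, 0.0, 0.0])   # kg/day emitted to air

# steady state: emissions + inflows = degradation + outflows, i.e. A m = E
loss = k_deg + k_transfer.sum(axis=0)    # total first-order loss from each compartment
A = np.diag(loss) - k_transfer
mass = np.linalg.solve(A, emission)      # steady-state inventory per compartment (kg)
print(np.round(mass, 1))
```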

  1. FACILITATING ADVANCED URBAN METEOROLOGY AND AIR QUALITY MODELING CAPABILITIES WITH HIGH RESOLUTION URBAN DATABASE AND ACCESS PORTAL TOOLS

    EPA Science Inventory

    Information of urban morphological features at high resolution is needed to properly model and characterize the meteorological and air quality fields in urban areas. We describe a new project called National Urban Database with Access Portal Tool, (NUDAPT) that addresses this nee...

  2. Organizations, Environments, and Models of Public Relations.

    ERIC Educational Resources Information Center

    Grunig, James E.

    Noting that little theory has been developed to explain how and why organizations choose to manage public relations, this paper argues that theorists cannot improve the practice of public relations until they can explain what public relations is and what it contributes to the functions of an organization. The paper addresses that issue by…

  3. 3D Bioprinting of Tissue/Organ Models.

    PubMed

    Pati, Falguni; Gantelius, Jesper; Svahn, Helene Andersson

    2016-04-01

    In vitro tissue/organ models are useful platforms that can facilitate systematic, repetitive, and quantitative investigations of drugs/chemicals. The primary objective when developing tissue/organ models is to reproduce physiologically relevant functions that typically require complex culture systems. Bioprinting offers exciting prospects for constructing 3D tissue/organ models, as it enables the reproducible, automated production of complex living tissues. Bioprinted tissues/organs may prove useful for screening novel compounds or predicting toxicity, as the spatial and chemical complexity inherent to native tissues/organs can be recreated. In this Review, we highlight the importance of developing 3D in vitro tissue/organ models by 3D bioprinting techniques, characterization of these models for evaluating their resemblance to native tissue, and their application in the prioritization of lead candidates, toxicity testing, and as disease/tumor models. PMID:26895542

  4. Anatomical database generation for radiation transport modeling from computed tomography (CT) scan data

    SciTech Connect

    Margle, S.M.; Tinnel, E.P.; Till, L.E.; Eckerman, K.F.; Durfee, R.C.

    1989-01-01

    Geometric models of the anatomy are used routinely in calculations of the radiation dose in organs and tissues of the body. Development of such models has been hampered by lack of detailed anatomical information on children, and models themselves have been limited to quadratic conic sections. This summary reviews the development of an image processing workstation used to extract anatomical information from routine diagnostic CT procedures. A standard IBM PC/AT microcomputer has been augmented with an automatically loading 9-track magnetic tape drive, an 8-bit 1024 × 1024 pixel graphics adapter/monitor/film recording package, a mouse/trackball assembly, dual 20 MB removable cartridge media, a 72 MB disk drive, and a printer. Software utilized by the workstation includes a Geographic Information System (modified for manipulation of CT images), CAD software, imaging software, and various modules to ease data transfer among the software packages. 5 refs., 3 figs.

  5. Informatics calibration of a molecular descriptors database to predict solid dispersion potential of small molecule organic solids.

    PubMed

    Moore, Michael D; Wildfong, Peter L D

    2011-10-14

    The use of a novel, in silico method for making an intelligent polymer selection to physically stabilize small molecule organic (SMO) solid compounds formulated as amorphous molecular solid dispersions is reported. 12 compounds (75%, w/w) were individually co-solidified with polyvinyl pyrrolidone:vinyl acetate (PVPva) copolymer by melt-quenching. Co-solidified products were analyzed intact using differential scanning calorimetry (DSC) and the pair distribution function (PDF) transform of powder X-ray diffraction (PXRD) data to assess miscibility. Molecular descriptor indices were calculated for all twelve compounds using their reported crystallographic structures. Logistic regression was used to assess correlation between molecular descriptors and amorphous molecular solid dispersion potential. The final model was challenged with three compounds. Of the 12 compounds, 6 were miscible with PVPva (i.e. successful formation) and 6 were phase separated (i.e. unsuccessful formation). 2 of the 6 unsuccessful compounds exhibited detectable phase-separation using the PDF method, where DSC indicated miscibility. Logistic regression identified 7 molecular descriptors correlated to solid dispersion potential (α=0.001). The atomic mass-weighted third-order R autocorrelation index (R3m) was the only significant descriptor to provide completely accurate predictions of dispersion potential. The three compounds used to challenge the R3m model were also successfully predicted. PMID:21756988
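
    The single-descriptor logistic model highlighted above (dispersion success as a function of R3m) can be sketched with a plain maximum-likelihood fit. The descriptor values and outcomes below are invented placeholders, not the study's data:

```python
import numpy as np

def fit_logistic_1d(x, y, lr=0.1, n_iter=5000):
    """Fit P(success) = sigmoid(b0 + b1 * x) by gradient ascent on the
    log-likelihood. x: descriptor values, y: 0/1 outcomes."""
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 += lr * np.mean(y - p)          # gradient of log-likelihood w.r.t. intercept
        b1 += lr * np.mean((y - p) * x)    # gradient w.r.t. slope
    return b0, b1

# invented R3m-like descriptor values and miscibility outcomes (1 = dispersion formed)
r3m = np.array([0.8, 0.9, 1.1, 1.2, 1.4, 1.5, 1.7, 1.8, 2.0, 2.1, 2.3, 2.4])
formed = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1])
b0, b1 = fit_logistic_1d(r3m, formed)
print(round(b0, 2), round(b1, 2))
print("P(formed | R3m=1.9) =", round(1 / (1 + np.exp(-(b0 + b1 * 1.9))), 2))
```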

  6. Gene–disease relationship discovery based on model-driven data integration and database view definition

    PubMed Central

    Yilmaz, S.; Jonveaux, P.; Bicep, C.; Pierron, L.; Smaïl-Tabbone, M.; Devignes, M.D.

    2009-01-01

    Motivation: Computational methods are widely used to discover gene–disease relationships hidden in vast masses of available genomic and post-genomic data. In most current methods, a similarity measure is calculated between gene annotations and known disease genes or disease descriptions. However, more explicit gene–disease relationships are required for better insights into the molecular bases of diseases, especially for complex multi-gene diseases. Results: Explicit relationships between genes and diseases are formulated as candidate gene definitions that may include intermediary genes, e.g. orthologous or interacting genes. These definitions guide data modelling in our database approach for gene–disease relationship discovery and are expressed as views which ultimately lead to the retrieval of documented sets of candidate genes. A system called ACGR (Approach for Candidate Gene Retrieval) has been implemented and tested with three case studies including a rare orphan gene disease. Availability: The ACGR sources are freely available at http://bioinfo.loria.fr/projects/acgr/acgr-software/. See especially the file ‘disease_description’ and the folders ‘Xcollect_scenarios’ and ‘ACGR_views’. Contact: devignes@loria.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19042916

  7. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    NASA Astrophysics Data System (ADS)

    Ganbavale, G.; Zuend, A.; Marcolli, C.; Peter, T.

    2014-06-01

    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model applicable for aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~275 to ~400 K. With the goal of improving the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures with the introduction of a new temperature dependence parameterisation. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multicomponent system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~190 to ~440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of 25% in comparison to
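
    The “classical thermodynamic theory” invoked above ties the temperature dependence of an activity coefficient to the partial molar excess enthalpy, and through its own temperature dependence to the excess heat capacity, via the Gibbs-Helmholtz relation sketched below; the AIOMFAC-specific functional form and fitted parameters are not reproduced here.

```latex
% temperature dependence of an activity coefficient (Gibbs-Helmholtz relation)
\left(\frac{\partial \ln \gamma_i}{\partial (1/T)}\right)_{p,\,x} = \frac{\bar{H}^{\mathrm{E}}_i}{R},
\qquad
\bar{H}^{\mathrm{E}}_i(T) = \bar{H}^{\mathrm{E}}_i(T_0) + \int_{T_0}^{T} \bar{C}^{\mathrm{E}}_{p,i}(T')\,\mathrm{d}T'
```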

  8. Sediment-hosted gold deposits of the world: database and grade and tonnage models

    USGS Publications Warehouse

    Berger, Vladimir I.; Mosier, Dan L.; Bliss, James D.; Moring, Barry C.

    2014-01-01

    All sediment-hosted gold deposits (as a single population) share one characteristic: they all have disseminated micron-sized invisible gold in sedimentary rocks. Sediment-hosted gold deposits are recognized in the Great Basin province of the western United States and in China, along with a few recognized deposits in Indonesia, Iran, and Malaysia. Three new grade and tonnage models for sediment-hosted gold deposits are presented in this paper: (1) a general sediment-hosted gold type model, (2) a Carlin subtype model, and (3) a Chinese subtype model. These models are based on grade and tonnage data from a database compilation of 118 sediment-hosted gold deposits including a total of 123 global deposits. The new general grade and tonnage model for sediment-hosted gold deposits (n=118) has a median tonnage of 5.7 million metric tonnes (Mt) and a gold grade of 2.9 grams per tonne (g/t). This new grade and tonnage model is remarkable in that the estimated parameters of the resulting grade and tonnage distributions are comparable to the previous model of Mosier and others (1992). A notable change is in the reporting of silver in more than 10 percent of deposits; moreover, the previous model had not considered deposits in China. From this general grade and tonnage model, two significantly different subtypes of sediment-hosted gold deposits are differentiated: Carlin and Chinese. The Carlin subtype includes 88 deposits in the western United States, Indonesia, Iran, and Malaysia, with median tonnage and grade of 7.1 Mt and 2.0 g/t Au, respectively. The silver grade is 0.78 g/t Ag for the 10th percentile of deposits. The Chinese subtype represents 30 deposits in China, with a median tonnage of 3.9 Mt and a median grade of 4.6 g/t Au. Important differences are recognized in the mineralogy and alteration of the two sediment-hosted gold subtypes, such as increased sulfide minerals in the Chinese subtype and decalcification alteration dominant in the Carlin subtype. We therefore

  9. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
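
    The classification step recited above (relating the load feature vector to the neurons by finding the weight vector at minimal distance) is essentially a nearest-neighbour lookup over a trained self-organizing map. The sketch below assumes the SOM weights and their load-type labels already exist; training of the map itself is not shown, and the feature values are invented.

```python
import numpy as np

def identify_load_type(feature_vec, weights, neuron_labels):
    """Return the load type whose neuron weight vector is closest
    (Euclidean distance) to the measured load feature vector.
    weights: (n_neurons, n_features); neuron_labels: load type per neuron."""
    d = np.linalg.norm(weights - np.asarray(feature_vec), axis=1)
    return neuron_labels[int(np.argmin(d))]

# invented example: 4-element feature vectors (e.g., derived from V/I waveforms)
weights = np.array([
    [0.9, 0.1, 0.2, 0.0],   # neuron trained on resistive loads
    [0.2, 0.8, 0.7, 0.1],   # neuron trained on motor loads
    [0.1, 0.2, 0.1, 0.9],   # neuron trained on electronic (SMPS) loads
])
labels = ["resistive", "motor", "electronic"]
print(identify_load_type([0.15, 0.75, 0.6, 0.2], weights, labels))   # "motor"
```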

  10. A computational platform to maintain and migrate manual functional annotations for BioCyc databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Model organism databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from literature. An essential feature of these databases is the continuing data integratio...

  11. Toxico-Cheminformatics and QSPR Modeling of the Carcinogenic Potency Database

    EPA Science Inventory

    Report on the development of a tiered, confirmatory scheme for prediction of chemical carcinogenicity based on QSAR studies of compounds with available mutagenic and carcinogenic data. For 693 such compounds from the Carcinogenic Potency Database characterized molecular topologic...

  12. A Database for Propagation Models and Conversion to C++ Programming Language

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Angkasa, Krisjani; Rucker, James

    1996-01-01

    The telecommunications system design engineer generally needs a quantification of the effects of the propagation medium (definition of the propagation channel) to design an optimal communications system. To obtain the definition of the channel, the systems engineer generally has a few choices. A search of the relevant publications, such as the IEEE Transactions, CCIR's, the NASA propagation handbook, etc., may be conducted to find the desired channel values. This method may require excessive amounts of time and effort on the systems engineer's part, and there is a possibility that the search may not even yield the needed results. To help researchers and systems engineers, the conference participants of NASA Propagation Experimenters (NAPEX) XV (London, Ontario, Canada, June 28 and 29, 1991) recommended that software be produced containing propagation models and the necessary prediction methods for most propagation phenomena. Moreover, the software should be flexible enough for the user to make slight changes to the models without expending substantial effort in programming. In the past few years, software was produced to meet these requirements as closely as possible. The software was distributed to all NAPEX participants for evaluation and use; participant reactions and suggestions were gathered and used to improve subsequent releases of the software. The existing database program is in the Microsoft Excel application software and works fine within the guidelines of that environment; however, recently there have been some questions about the robustness and survivability of the Excel software in the ever changing (hopefully improving) world of software packages.

  13. A Modeling Exercise for the Organic Classroom

    ERIC Educational Resources Information Center

    Whitlock, Christine R.

    2010-01-01

    An in-class molecular modeling exercise is described. Groups of students are given molecular models to investigate and questions about the models to answer. This exercise is a quick and effective way to review nomenclature, stereochemistry, and conformational analysis.

  14. A model of willingness to become a potential organ donor.

    PubMed

    Horton, R L; Horton, P J

    1991-01-01

    This article presents two models of the decision to become a potential organ donor. In the first model the act of carrying or requesting an organ donor card is related to values and factual knowledge regarding organ donation, through intervening attitude and willingness constructs. A sample of 286 students is used to test this model via the LISREL computer program for modeling latent variables. All hypothesized relationships had the predicted sign and were significant. This model is extended by adding the variables attitude towards death, prior blood donation, and age of subject to the model. A second sample of 365 adults from the local community is used to test the second model via LISREL. With two exceptions in the adult sample, all hypothesized relationships had the predicted sign and were significant. Where the two models overlap the results are generally similar. Implications of the models for marketing the act of becoming a potential organ donor are discussed. PMID:1771431

  15. In praise of other model organisms

    PubMed Central

    2015-01-01

    The early cell biological literature is the resting place of false starts and lost opportunities. Though replete with multiple studies of diverse organisms, a few of which served as foundations for several fields, most were not pursued, abandoned largely for technical reasons that are no longer limiting. The time has come to revisit the old literature and to resurrect the organisms that are buried there, both to uncover new mechanisms and to marvel at the richness of the cellular world. PMID:25688132

  16. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    USGS Publications Warehouse

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A national highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many
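
    The organizing principle described above, in which every table ultimately links back to a citation in the report catalog, can be illustrated with a two-table toy schema; the table and column names below are invented, not the actual NDAMS design.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE citation (            -- catalog of available reports
    citation_id INTEGER PRIMARY KEY,
    authors TEXT, year INTEGER, title TEXT
);
CREATE TABLE report_review (       -- review metadata, linked back to a citation
    review_id INTEGER PRIMARY KEY,
    citation_id INTEGER NOT NULL REFERENCES citation(citation_id),
    sites_monitored INTEGER,
    qa_documentation TEXT          -- e.g. 'complete', 'partial', 'none'
);
""")
con.execute("INSERT INTO citation VALUES (1, 'Doe, J.', 1998, 'Highway runoff study')")
con.execute("INSERT INTO report_review VALUES (1, 1, 12, 'partial')")
for row in con.execute("""SELECT c.title, r.sites_monitored, r.qa_documentation
                          FROM report_review r JOIN citation c USING (citation_id)"""):
    print(row)
```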

  17. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling
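
    The enthalpy-of-vaporization assumption discussed above typically enters SOA modules through the Clausius-Clapeyron temperature correction of the absorptive partitioning coefficient, shown below in its commonly used form (whether the CMAQ-based model applies exactly this expression is not asserted here):

```latex
K_{p}(T) = K_{p}(T^{*})\,\frac{T}{T^{*}}\,
\exp\!\left[\frac{\Delta H_{\mathrm{vap}}}{R}\left(\frac{1}{T}-\frac{1}{T^{*}}\right)\right]
```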

  18. THE CTEPP DATABASE

    EPA Science Inventory

    The CTEPP (Children's Total Exposure to Persistent Pesticides and Other Persistent Organic Pollutants) database contains a wealth of data on children's aggregate exposures to pollutants in their everyday surroundings. Chemical analysis data for the environmental media and ques...

  19. TREATABILITY DATABASE DESCRIPTION

    EPA Science Inventory

    The Drinking Water Treatability Database (TDB) presents referenced information on the control of contaminants in drinking water. It allows drinking water utilities, first responders to spills or emergencies, treatment process designers, research organizations, academics, regulato...

  20. Scaling laws and model of words organization in spoken and written language

    NASA Astrophysics Data System (ADS)

    Bian, Chunhua; Lin, Ruokuang; Zhang, Xiaoyu; Ma, Qianli D. Y.; Ivanov, Plamen Ch.

    2016-01-01

    A broad range of complex physical and biological systems exhibits scaling laws. The human language is a complex system of words organization. Studies of written texts have revealed intriguing scaling laws that characterize the frequency of words occurrence, rank of words, and growth in the number of distinct words with text length. While studies have predominantly focused on the language system in its written form, such as books, little attention is given to the structure of spoken language. Here we investigate a database of spoken language transcripts and written texts, and we uncover that words organization in both spoken language and written texts exhibits scaling laws, although with different crossover regimes and scaling exponents. We propose a model that provides insight into words organization in spoken language and written texts, and successfully accounts for all scaling laws empirically observed in both language forms.
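
    The two regularities referred to above, the rank-frequency law of word occurrence and the growth of the number of distinct words with text length, can be tabulated directly from a token stream, as in the sketch below; scaling exponents then follow from straight-line fits in log-log coordinates.

```python
from collections import Counter

def zipf_and_heaps(tokens):
    """Return (rank, frequency) pairs and the vocabulary-growth curve
    (text length, number of distinct words) for a token list."""
    counts = Counter(tokens)
    zipf = [(rank, freq) for rank, (_, freq) in
            enumerate(sorted(counts.items(), key=lambda kv: -kv[1]), start=1)]
    heaps, seen = [], set()
    for n, tok in enumerate(tokens, start=1):
        seen.add(tok)
        heaps.append((n, len(seen)))
    return zipf, heaps

tokens = "the cat sat on the mat the cat ran".split()
zipf, heaps = zipf_and_heaps(tokens)
print(zipf[:3])    # [(1, 3), (2, 2), (3, 1)]
print(heaps[-1])   # (9, 6)
```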

  1. The Digital Astronaut: An integrated modeling and database system for space biomedical research and operations

    NASA Astrophysics Data System (ADS)

    White, Ronald J.; McPhee, Jancy C.

    2007-02-01

    The Digital Astronaut is an integrated, modular modeling and database system that will support space biomedical research and operations in a variety of fundamental ways. This system will enable the identification and meaningful interpretation of the medical and physiological research required for human space exploration, a determination of the effectiveness of specific individual human countermeasures in reducing risk and meeting health and performance goals on challenging exploration missions and an evaluation of the appropriateness of various medical interventions during mission emergencies, accidents and illnesses. Such a computer-based, decision support system will enable the construction, validation and utilization of important predictive simulations of the responses of the whole human body to the types of stresses experienced during space flight and low-gravity environments. These simulations will be essential for direct, real-time analysis and maintenance of astronaut health and performance capabilities. The Digital Astronaut will collect and integrate past and current human data across many physiological disciplines and simulations into an operationally useful form that will not only summarize knowledge in a convenient and novel way but also reveal gaps that must be filled via new research in order to effectively ameliorate biomedical risks. Initial phases of system development will focus on simulating ground-based analog systems that are just beginning to collect multidisciplinary data in a standardized way (e.g., the International Multidisciplinary Artificial Gravity Project). During later phases, the focus will shift to development and planning for missions and to exploration mission operations. Then, the Digital Astronaut system will enable evaluation of the effectiveness of multiple, simultaneously applied countermeasures (a task made difficult by the many-system physiological effects of individual countermeasures) and allow for the prescription of

  2. Discovery of predictive models in an injury surveillance database: an application of data mining in clinical research.

    PubMed

    Holmes, J H; Durbin, D R; Winston, F K

    2000-01-01

    A new, evolutionary computation-based approach to discovering prediction models in surveillance data was developed and evaluated. This approach was operationalized in EpiCS, a type of learning classifier system specially adapted to model clinical data. In applying EpiCS to a large, prospective injury surveillance database, EpiCS was found to create accurate predictive models quickly that were highly robust, being able to classify > 99% of cases early during training. After training, EpiCS classified novel data more accurately (p < 0.001) than either logistic regression or decision tree induction (C4.5), two traditional methods for discovering or building predictive models. PMID:11079905

  3. A Model for Implementing E-Learning in Iranian Organizations

    ERIC Educational Resources Information Center

    Ghaeni, Emad; Abdehagh, Babak

    2010-01-01

    This article reviews the current status of information and communications technology (ICT) usage and provides a comprehensive outlook on e-learning in both virtual universities and organizations in Iran. A model for e-learning implementation is presented. This model tries to address specific issues in Iranian organizations. (Contains 1 table and 2…

  4. Modeling the Explicit Chemistry of Anthropogenic and Biogenic Organic Aerosols

    SciTech Connect

    Madronich, Sasha

    2015-12-09

    The atmospheric burden of Secondary Organic Aerosols (SOA) remains one of the most important yet uncertain aspects of the radiative forcing of climate. This grant focused on improving our quantitative understanding of SOA formation and evolution, by developing, applying, and improving a highly detailed model of atmospheric organic chemistry, the Generation of Explicit Chemistry and Kinetics of Organics in the Atmosphere (GECKO-A) model. Eleven (11) publications have resulted from this grant.

  5. Functional genomics and proteomics of the cellular osmotic stress response in 'non-model' organisms.

    PubMed

    Kültz, Dietmar; Fiol, Diego; Valkova, Nelly; Gomez-Jimenez, Silvia; Chan, Stephanie Y; Lee, Jinoo

    2007-05-01

    All organisms are adapted to well-defined extracellular salinity ranges. Osmoregulatory mechanisms spanning all levels of biological organization, from molecules to behavior, are central to salinity adaptation. Functional genomics and proteomics approaches represent powerful tools for gaining insight into the molecular basis of salinity adaptation and euryhalinity in animals. In this review, we discuss our experience in applying such tools to so-called 'non-model' species, including euryhaline animals that are well-suited for studies of salinity adaptation. Suppression subtractive hybridization, RACE-PCR and mass spectrometry-driven proteomics can be used to identify genes and proteins involved in salinity adaptation or other environmental stress responses in tilapia, sharks and sponges. For protein identification in non-model species, algorithms based on sequence homology searches such as MSBLASTP2 are most powerful. Subsequent gene ontology and pathway analysis can then utilize sets of identified genes and proteins for modeling molecular mechanisms of environmental adaptation. Current limitations for proteomics in non-model species can be overcome by improving sequence coverage, N- and C-terminal sequencing and analysis of intact proteins. Dependence on information about biochemical pathways and gene ontology databases for model species represents a more severe barrier for work with non-model species. To minimize such dependence, focusing on a single biological process (rather than attempting to describe the system as a whole) is key when applying 'omics' approaches to non-model organisms. PMID:17449824

  6. International Comparisons Database.

    National Institute of Standards and Technology Data Gateway

    International Comparisons Database (Web, free access)   The International Comparisons Database (ICDB) serves the U.S. and the Inter-American System of Metrology (SIM) with information based on Appendices B (International Comparisons), C (Calibration and Measurement Capabilities) and D (List of Participating Countries) of the Comité International des Poids et Mesures (CIPM) Mutual Recognition Arrangement (MRA). The official source of the data is The BIPM key comparison database. The ICDB provides access to results of comparisons of measurements and standards organized by the consultative committees of the CIPM and the Regional Metrology Organizations.

  7. Populating a Control Point Database: A cooperative effort between the USGS, Grand Canyon Monitoring and Research Center and the Grand Canyon Youth Organization

    NASA Astrophysics Data System (ADS)

    Brown, K. M.; Fritzinger, C.; Wharton, E.

    2004-12-01

    The Grand Canyon Monitoring and Research Center measures the effects of Glen Canyon Dam operations on the resources along the Colorado River from Glen Canyon Dam to Lake Mead in support of the Grand Canyon Adaptive Management Program. Control points are integral for geo-referencing the myriad of data collected in the Grand Canyon, including aerial photography and topographic and bathymetric data used for classification and change-detection analysis of physical, biologic and cultural resources. The survey department has compiled a list of 870 control points installed by various organizations needing to establish a consistent reference for data collected at field sites along the 240 mile stretch of the Colorado River in the Grand Canyon. This list is the foundation for the Control Point Database, established primarily for researchers to locate control points and independently geo-reference collected field data. The database has the potential to be a valuable mapping tool, assisting researchers to easily locate a control point and reducing the occurrence of unknowingly installing new control points within close proximity of an existing control point. The database is missing photographs and accurate site description information. Current site descriptions do not accurately define the location of the point but refer to the project that used the point, or some other interesting fact associated with the point. The Grand Canyon Monitoring and Research Center (GCMRC) resolved this problem by turning the data collection effort into an educational exercise for the participants of the Grand Canyon Youth organization. Grand Canyon Youth is a non-profit organization providing experiential education for middle and high school aged youth. GCMRC and the Grand Canyon Youth formed a partnership in which GCMRC provided the logistical support, equipment, and training to conduct the field work, and the Grand Canyon Youth provided the time and personnel to complete the field work. Two data

  8. A Holocene Database of Relative Sea Levels for North America and the Caribbean: Implications for Geophysical Models

    NASA Astrophysics Data System (ADS)

    Engelhart, S. E.; Peltier, W. R.; Horton, B. P.; Khan, N. S.; Liu, S.; Vacchi, M.

    2011-12-01

    We have expanded the previously available quality-controlled database of relative sea-level (RSL) observations for the U.S. Atlantic coast with data from the Atlantic coast of Canada, the Pacific coast of North America and the Caribbean. The Holocene sea-level database for the U.S. Atlantic coast consisted of 836 sea-level indicators. The database documented a decreasing rate of relative sea-level (RSL) rise through time with no evidence of sea level being above present in the middle to late Holocene. The highest rates of rise were found in the mid-Atlantic region. We employed the database to constrain an ensemble of glacial isostatic adjustment (GIA) models using two ice models (ICE-5G and ICE-6G) and two mantle viscosity profiles (VM5a and VM5b). We identified significant misfits between observations and predictions using ICE-5G with the VM5a viscosity profile. ICE-6G provides some improvement for the northern Atlantic region, but misfits remain elsewhere. Decreasing the upper mantle and transition zone viscosity from 0.5 × 10^21 Pa s (VM5a) to 0.25 × 10^21 Pa s (VM5b) removed significant discrepancies between observations and predictions along the mid-Atlantic coastline, although misfits remained in the southern Atlantic region. The addition of new data from areas more proximal and distal to Laurentide ice loading has allowed us to further investigate the VM5b mantle viscosity profile.

  9. Exploring Organic Mechanistic Puzzles with Molecular Modeling

    ERIC Educational Resources Information Center

    Horowitz, Gail; Schwartz, Gary

    2004-01-01

    Molecular modeling was used to reinforce more general skills such as deducing and drawing reaction mechanisms, analyzing reaction kinetics and thermodynamics, and drawing reaction coordinate energy diagrams. This modeling was done through the design of mechanistic puzzles involving reactions not familiar to the students.

  10. The Cardiac Atlas Project—an imaging database for computational modeling and statistical atlases of the heart

    PubMed Central

    Fonseca, Carissa G.; Backhaus, Michael; Bluemke, David A.; Britten, Randall D.; Chung, Jae Do; Cowan, Brett R.; Dinov, Ivo D.; Finn, J. Paul; Hunter, Peter J.; Kadish, Alan H.; Lee, Daniel C.; Lima, Joao A. C.; Medrano-Gracia, Pau; Shivkumar, Kalyanam; Suinesiaputra, Avan; Tao, Wenchao; Young, Alistair A.

    2011-01-01

    Motivation: Integrative mathematical and statistical models of cardiac anatomy and physiology can play a vital role in understanding cardiac disease phenotype and planning therapeutic strategies. However, the accuracy and predictive power of such models is dependent upon the breadth and depth of noninvasive imaging datasets. The Cardiac Atlas Project (CAP) has established a large-scale database of cardiac imaging examinations and associated clinical data in order to develop a shareable, web-accessible, structural and functional atlas of the normal and pathological heart for clinical, research and educational purposes. A goal of CAP is to facilitate collaborative statistical analysis of regional heart shape and wall motion and characterize cardiac function among and within population groups. Results: Three main open-source software components were developed: (i) a database with web-interface; (ii) a modeling client for 3D + time visualization and parametric description of shape and motion; and (iii) open data formats for semantic characterization of models and annotations. The database was implemented using a three-tier architecture utilizing MySQL, JBoss and Dcm4chee, in compliance with the DICOM standard to provide compatibility with existing clinical networks and devices. Parts of Dcm4chee were extended to access image specific attributes as search parameters. To date, approximately 3000 de-identified cardiac imaging examinations are available in the database. All software components developed by the CAP are open source and are freely available under the Mozilla Public License Version 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt). Availability: http://www.cardiacatlas.org Contact: a.young@auckland.ac.nz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21737439

  11. Report on Analysis and Database Needs to Implement Model. Project: Multilevel Evaluation System.

    ERIC Educational Resources Information Center

    Keesling, J. Ward

    This paper describes analytical concepts that can be applied to a comprehensive database containing information on students, teachers, and schools. The concepts are presented in terms of displays, potential inferences, and possible difficulties. Statistical techniques are mentioned, but not rigorously developed. Focus is on displays that can be…

  12. Application of Knowledge Discovery in Databases Methodologies for Predictive Models for Pregnancy Adverse Events

    ERIC Educational Resources Information Center

    Taft, Laritza M.

    2010-01-01

    In its report "To Err is Human", The Institute of Medicine recommended the implementation of internal and external voluntary and mandatory automatic reporting systems to increase detection of adverse events. Knowledge Discovery in Databases (KDD) allows the detection of patterns and trends that would be hidden or less detectable if analyzed by…

  13. A relational database of transcription factors.

    PubMed Central

    Ghosh, D

    1990-01-01

    Recent advances in the understanding of eukaryotic gene regulation have produced an extensive body of transcriptionally-related sequence information in the biological literature, and have created a need for computing structures that organize and manage this information. The 'relational model' represents an approach that is finding increasing application in the design of biological databases. This report describes the compilation of information regarding eukaryotic transcription factors, the organization of this information into five tables, the computational applications of the resultant relational database that are of theoretical as well as experimental interest, and possible avenues of further development. PMID:2186365

  14. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  15. An Assessment of Thermodynamic Database Effects on Reactive Transport Models' Predictions of Permeability Fields: Insights from CO2/Brine Experiments

    NASA Astrophysics Data System (ADS)

    Tutolo, B. M.; Seyfried, W. E.; Saar, M. O.

    2011-12-01

    Numerical modeling software such as TOUGHREACT, ECLIPSE, and Geochemist's Workbench provides modules that couple mineral reactive chemistry with porosity and permeability modifications to predict the behavior of energy byproducts, such as carbon dioxide, in the subsurface. Developers have already incorporated increasingly complex equations describing natural systems (e.g., mineral dissolution/precipitation kinetic parameters and porosity/permeability functions) into these and other software applications. Generally, these computer models use the bulk volumetric changes predicted by geochemical calculations to infer porosity changes, and subsequently use highly simplified porosity/permeability correlation functions, such as the Carman-Kozeny equation, to modify permeability fields. In spite of the computational complexity provided in these models, they require, as a foundation, fundamental information on the thermodynamic stability of minerals and aqueous species at a wide range of temperatures and pressures to produce accurate predictions of the geochemistry of long-term energy byproduct storage in the subsurface, even in the simplest cases. With improvements in geochemical thermodynamic databases, researchers may begin to produce more realistic simulations of the complex interactions between fluid and heat flow and geological systems. Unfortunately, the requisite thermodynamic data are often lacking, or inaccurate. In this study, therefore, we provide a discussion of geochemical thermodynamic databases, discuss the synthesis and reconciliation of the databases used in this study, and compare predictions from reactive transport software with single-phase CO2/brine experiments performed at temperatures and pressures applicable to geologic storage conditions.
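
    For reference, the Carman-Kozeny style porosity-permeability update mentioned above is typically applied in the normalized form below, where k0 and phi0 are the initial permeability and porosity; exact implementations vary between the packages named.

```latex
\frac{k}{k_{0}} = \left(\frac{\phi}{\phi_{0}}\right)^{3}
\left(\frac{1-\phi_{0}}{1-\phi}\right)^{2}
```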

  16. NERVE AS MODEL TEMPERATURE END ORGAN

    PubMed Central

    Bernhard, C. G.; Granit, Ragnar

    1946-01-01

    Rapid local cooling of mammalian nerve sets up a discharge which is preceded by a local temperature potential, the cooled region being electronegative relative to a normal portion of the nerve. Heating the nerve locally above its normal temperature similarly makes the heated region electronegative relative to a region at normal temperature, and again a discharge is set up from the heated region. These local temperature potentials, set up by the nerve itself, are held to serve as "generator potentials" and the mechanism found is regarded as the prototype for temperature end organs. PMID:19873460

  17. Modeling mineral dust emissions from the Sahara desert using new surface properties and soil database

    NASA Astrophysics Data System (ADS)

    Laurent, B.; Marticorena, B.; Bergametti, G.; Léon, J. F.; Mahowald, N. M.

    2008-07-01

    The present study investigates the mineral dust emissions and the occurrence of dust emission events over the Sahara desert from 1996 to 2001. Mineral dust emissions are simulated over a region extending from 16°N to 38°N and from 19°W to 40°E with a ?° × ?° spatial resolution. The input parameters required by the dust emission model are surface features data (aerodynamic roughness length, dry soil size distribution and texture for erodible soils), and meteorological surface data (mainly surface wind velocity and soil moisture). A map of the aerodynamic roughness lengths is established based on a composition of protrusion coefficients derived from the POLDER-1 surface products. Soil dry size distribution and texture are derived from measurements performed on soil samples from desert areas, and from a soil map derived from a geomorphologic analysis of desert landscapes. Surface re-analyzed meteorological databases (ERA-40) of the European Centre for Medium range Weather Forecasts (ECMWF) are used. The influence of soil moisture on simulated dust emissions is quantified. The main Saharan dust sources identified during the 6-year simulated period are in agreement with the previous studies based on in situ or satellite observations. The relevance of the simulated large dust sources and point sources ("hot spots") is tested using aerosol indexes derived from satellite observations (TOMS Absorbing Aerosol Index and Infrared Dust Difference Index Meteosat). The Saharan dust emissions simulated from 1996 to 2001 range from 585 to 759 Tg a-1. The simulations show marked seasonal cycles with a maximum in summer for the western Sahara and in spring for the eastern Sahara. The interannual variability of dust emissions is pronounced in the eastern part of the Sahara while the emissions from the western Sahara are more regular over the studied period. The soil moisture does not noticeably affect the Saharan dust emissions, their seasonal cycle or their interannual

  18. Modeling the influence of organic acids on soil weathering

    USGS Publications Warehouse

    Lawrence, Corey R.; Harden, Jennifer W.; Maher, Kate

    2014-01-01

    Biological inputs and organic matter cycling have long been regarded as important factors in the physical and chemical development of soils. In particular, the extent to which low molecular weight organic acids, such as oxalate, influence geochemical reactions has been widely studied. Although the effects of organic acids are diverse, there is strong evidence that organic acids accelerate the dissolution of some minerals. However, the influence of organic acids at the field-scale and over the timescales of soil development has not been evaluated in detail. In this study, a reactive-transport model of soil chemical weathering and pedogenic development was used to quantify the extent to which organic acid cycling controls mineral dissolution rates and long-term patterns of chemical weathering. Specifically, oxalic acid was added to simulations of soil development to investigate a well-studied chronosequence of soils near Santa Cruz, CA. The model formulation includes organic acid input, transport, decomposition, organic-metal aqueous complexation and mineral surface complexation in various combinations. Results suggest that although organic acid reactions accelerate mineral dissolution rates near the soil surface, the net response is an overall decrease in chemical weathering. Model results demonstrate the importance of organic acid input concentrations, fluid flow, decomposition and secondary mineral precipitation rates on the evolution of mineral weathering fronts. In particular, model soil profile evolution is sensitive to kaolinite precipitation and oxalate decomposition rates. The soil profile-scale modeling presented here provides insights into the influence of organic carbon cycling on soil weathering and pedogenesis and supports the need for further field-scale measurements of the flux and speciation of reactive organic compounds.

  19. Electronic Databases.

    ERIC Educational Resources Information Center

    Williams, Martha E.

    1985-01-01

    Presents examples of bibliographic, full-text, and numeric databases. Also discusses how to access these databases online, aids to online retrieval, and several issues and trends (including copyright and downloading, transborder data flow, use of optical disc/videodisc technology, and changing roles in database generation and processing). (JN)

  20. An Organic Model for Detecting Cyber Events

    SciTech Connect

    Oehmen, Christopher S.; Peterson, Elena S.; Dowson, Scott T.

    2010-04-21

    Cyber entities in many ways mimic the behavior of organic systems. Individuals or groups compete for limited resources using a variety of strategies, and effective strategies are re-used and refined in later ‘generations’. Traditionally this drift has made detection of malicious entities very difficult because 1) recognition systems are often built on exact matching to a pattern that can only be ‘learned’ after a malicious entity reveals itself, and 2) the enormous volume of and variation in benign entities is an overwhelming source of previously unseen entities that often confound detectors. To turn the tables of complexity on would-be attackers, we have developed a method for mapping the sequences of behaviors in which cyber entities engage to strings of text and analyzing these strings using modified bioinformatics algorithms. Bioinformatics algorithms optimize the alignment between text strings even in the presence of mismatches, insertions or deletions, and they require neither an a priori definition of the patterns one is seeking nor any type of exact matching. This allows the data itself to suggest meaningful patterns that are conserved between cyber entities. We demonstrate this method on data generated from network traffic. The impact of this approach is that it can rapidly calculate similarity measures of previously unseen cyber entities in terms of well-characterized entities. These measures may also be used to organize large collections of data into families, making it possible to identify motifs indicative of each family.
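
    A minimal sketch of the kind of alignment scoring described above: a local (Smith-Waterman-style) score computed over strings of behavior tokens that tolerates mismatches and gaps. The token alphabet and scoring parameters here are hypothetical and are not those of the cited work.

        def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
            """Best local alignment score between two behavior strings, tolerating
            mismatches, insertions and deletions (no exact matching required)."""
            cols = len(b) + 1
            prev, best = [0] * cols, 0
            for i in range(1, len(a) + 1):
                curr = [0] * cols
                for j in range(1, cols):
                    diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
                    best = max(best, curr[j])
                prev = curr
            return best

        # Hypothetical behavior sequences encoded as letters (e.g., C=connect, A=auth, T=transfer).
        print(smith_waterman_score("CATTC", "CTTC"))  # similar despite one deleted token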

  1. Collaborative multi organ segmentation by integrating deformable and graphical models.

    PubMed

    Uzunbaş, Mustafa Gökhan; Chen, Chao; Zhang, Shaoting; Poh, Kilian M; Li, Kang; Metaxas, Dimitris

    2013-01-01

    Organ segmentation is a challenging problem on which significant progress has been made. Deformable models (DM) and graphical models (GM) are two important categories of optimization-based image segmentation methods. Efforts have been made to integrate the two types of models into one framework. However, previous methods are not designed for segmenting multiple organs simultaneously and accurately. In this paper, we propose a hybrid multi-organ segmentation approach that integrates DM and GM in a coupled optimization framework. Specifically, we show that region-based deformable models can be integrated with Markov Random Fields (MRF), such that the evolution of multiple models is driven by maximum a posteriori (MAP) inference. This brings global and local deformation constraints into a unified framework for simultaneous segmentation of multiple objects in an image. We validate the proposed method on two challenging multi-organ segmentation problems, and the results are promising. PMID:24579136

  2. Public Opinion Poll Question Databases: An Evaluation

    ERIC Educational Resources Information Center

    Woods, Stephen

    2007-01-01

    This paper evaluates five polling resource: iPOLL, Polling the Nations, Gallup Brain, Public Opinion Poll Question Database, and Polls and Surveys. Content was evaluated on disclosure standards from major polling organizations, scope on a model for public opinion polls, and presentation on a flow chart discussing search limitations and usability.

  3. The MEXICO project (Model Experiments in Controlled Conditions): The database and first results of data processing and interpretation

    NASA Astrophysics Data System (ADS)

    Snel, H.; Schepers, J. G.; Montgomerie, B.

    2007-07-01

    The MEXICO project (Model Experiments in Controlled Conditions) was an FP5 project, partly financed by the European Commission. The main objective was to create a database of detailed aerodynamic and load measurements on a wind turbine model, in a large and high-quality wind tunnel, to be used for model validation and improvement. Here "model" stands both for the extended BEM modelling used in state-of-the-art design and certification software, and for CFD modelling of the rotor and near-wake flow. For this purpose a three-bladed, 4.5 m diameter wind tunnel model was built and instrumented. The wind tunnel experiments were carried out in the open section (9.5 × 9.5 m2) of the Large Scale Facility of the DNW (German-Dutch Wind Tunnels) during a six-day campaign in December 2006. The measurement conditions cover three operational tip speed ratios, many blade pitch angles, three yaw misalignment angles and a small number of unsteady cases in the form of pitch ramps and rotor speed ramps. One of the most important features of the measurement program was the flow field mapping with stereo PIV techniques. Overall the measurement campaign was very successful. The paper describes the now existing database and discusses a number of highlights from early data processing and interpretation. It should be stressed that all results are first results: no tunnel correction has been applied so far, nor has the necessary checking of data quality been completed.

  4. Elimination kinetic model for organic chemicals in earthworms.

    PubMed

    Dimitrova, N; Dimitrov, S; Georgieva, D; Van Gestel, C A M; Hankard, P; Spurgeon, D; Li, H; Mekenyan, O

    2010-08-15

    Mechanistic understanding of bioaccumulation in different organisms and environments should take into account the influence of organism- and chemical-dependent factors on the uptake and elimination kinetics of chemicals. Lipophilicity, metabolism, sorption (bioavailability) and biodegradation of chemicals are among the important factors that may significantly affect the bioaccumulation process in soil organisms. This study attempts to model the elimination kinetics of organic chemicals in earthworms by accounting for the effects of both chemical and biological properties, including metabolism. The modeling approach is based on the concept for simulating metabolism used in the BCF base-line model developed for predicting bioaccumulation in fish. Metabolism was explicitly accounted for by making use of the TIMES engine for simulation of metabolism and a set of principal transformations. Kinetic characteristics of the transformations were estimated on the basis of observed kinetic data for the elimination of organic chemicals from earthworms. PMID:20185163
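
    Elimination from an organism is typically described by first-order kinetics, C(t) = C0·exp(-k·t). The sketch below fits an elimination rate constant to synthetic depuration data purely as an illustration; the concentrations and time points are made up and are not taken from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def first_order(t, c0, k):
            """First-order elimination: concentration C(t) = C0 * exp(-k * t)."""
            return c0 * np.exp(-k * t)

        # Synthetic (hypothetical) depuration data: time in days, concentration in mg/kg.
        t = np.array([0.0, 1.0, 2.0, 4.0, 7.0, 14.0])
        c = np.array([10.0, 7.9, 6.1, 3.8, 1.9, 0.4])

        (c0_fit, k_fit), _ = curve_fit(first_order, t, c, p0=(10.0, 0.2))
        print(f"fitted C0 = {c0_fit:.2f} mg/kg, k = {k_fit:.3f} d^-1, "
              f"half-life = {np.log(2) / k_fit:.2f} d")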

  5. Personality organization, five-factor model, and mental health.

    PubMed

    Laverdière, Olivier; Gamache, Dominick; Diguer, Louis; Hébert, Etienne; Larochelle, Sébastien; Descôteaux, Jean

    2007-10-01

    Otto Kernberg has developed a model of personality and psychological functioning centered on the concept of personality organization. The purpose of this study is to empirically examine the relationships between this model, the five-factor model, and mental health. The Personality Organization Diagnostic Form (Diguer et al., The Personality Organization Diagnostic Form-II (PODF-II), 2001), the NEO Five-Factor Inventory (Costa and McCrae, Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) Professional Manual. 1992a), and the Health-Sickness Rating Scale (Luborsky, Arch Gen Psychiatry. 1962;7:407-417) were used to assess these constructs. Results show that personality organization and personality factors are distinct but interrelated constructs and that both contribute in similar proportion to mental health. Results also suggest that the integration of personality organization and factors can provide clinicians and researchers with an enriched understanding of psychological functioning. PMID:18043522

  6. Lattice animal model of chromosome organization

    NASA Astrophysics Data System (ADS)

    Iyer, Balaji V. S.; Arya, Gaurav

    2012-07-01

    Polymer models tied together by constraints of looping and confinement have been used to explain many of the observed organizational characteristics of interphase chromosomes. Here we introduce a simple lattice animal representation of interphase chromosomes that combines the features of looping and confinement constraints into a single framework. We show through Monte Carlo simulations that this model qualitatively captures both the leveling off in the spatial distance between genomic markers observed in fluorescent in situ hybridization experiments and the inverse decay in the looping probability as a function of genomic separation observed in chromosome conformation capture experiments. The model also suggests that the collapsed state of chromosomes and their segregation into territories with distinct looping activities might be a natural consequence of confinement.

  7. Acoustic modeling of the speech organ

    NASA Astrophysics Data System (ADS)

    Kacprowski, J.

    The state of research on acoustic modeling of the phonatory and articulatory elements of speech production is reviewed. Consistent with the physical interpretation of the speech production process, the acoustic theory of speech production is expressed as the product of three factors: laryngeal excitation, vocal tract transmission, and sound radiation from the mouth and/or nose. Each of these factors is presented in the form of a simplified mathematical description that provides the theoretical basis for physical models of the corresponding functional members of this complex biocybernetic system. Vocal tract wall impedance, vocal tract synthesizers, laryngeal dysfunction, vowel nasalization, resonance circuits, and sound wave propagation are discussed.

  8. UGTA Photograph Database

    SciTech Connect

    NSTec Environmental Restoration

    2009-04-20

    One of the advantages of the Nevada Test Site (NTS) is that most of the geologic and hydrologic features such as hydrogeologic units (HGUs), hydrostratigraphic units (HSUs), and faults, which are important aspects of flow and transport modeling, are exposed at the surface somewhere in the vicinity of the NTS and thus are available for direct observation. However, due to access restrictions and the remote locations of many of the features, most Underground Test Area (UGTA) participants cannot observe these features directly in the field. Fortunately, National Security Technologies, LLC, geologists and their predecessors have photographed many of these features through the years. During fiscal year 2009, work was done to develop an online photograph database for use by the UGTA community. Photographs were organized, compiled, and imported into Adobe® Photoshop® Elements 7. The photographs were then assigned keyword tags such as alteration type, HGU, HSU, location, rock feature, rock type, and stratigraphic unit. Some fully tagged photographs were then selected and uploaded to the UGTA website. This online photograph database provides easy access for all UGTA participants and can help “ground truth” their analytical and modeling tasks. It also provides new participants a resource to more quickly learn the geology and hydrogeology of the NTS.

  9. PhasePlot: A Software Program for Visualizing Phase Relations Computed Using Thermochemical Models and Databases

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.

    2011-12-01

    A new software program has been developed for Macintosh computers that permits the visualization of phase relations calculated from thermodynamic data-model collections. The data-model collections of MELTS (Ghiorso and Sack, 1995, CMP 119, 197-212), pMELTS (Ghiorso et al., 2002, G-cubed 3, 10.1029/2001GC000217) and the deep mantle database of Stixrude and Lithgow-Bertelloni (2011, GJI 184, 1180-1213) are currently implemented. The software allows users to enter a system bulk composition and a range of reference conditions and then calculate a grid of phase relations. These relations may be visualized in a variety of ways, including phase diagrams, phase proportion plots, and contour diagrams of phase compositions and abundances. Results may be exported into Excel or similar spreadsheet applications. Flexibility in stipulating reference conditions permits the construction of temperature-pressure, temperature-volume, entropy-pressure, or entropy-volume display grids. Calculations on the grid are performed for fixed bulk composition or in open systems governed by user-specified constraints on component chemical potentials (e.g., specified oxygen fugacity buffers). The calculation engine for the software is optimized for multi-core compute architectures and is very fast, allowing a typical grid of 64 points to be calculated in under 10 seconds on a dual-core laptop/iMac. The underlying computational thermodynamic algorithms have been optimized for speed and robust behavior. Taken together, both of these advances facilitate classroom demonstrations and permit novice users to work with the program effectively, focusing on problem specification and interpretation of results rather than on manipulation and mechanics of computation - a key feature of an effective instructional tool. The emphasis in this software package is graphical visualization, which aids in better comprehension of complex phase relations in multicomponent systems. Anecdotal experience in using Phase

  10. Nematodes: Model Organisms in High School Biology

    ERIC Educational Resources Information Center

    Bliss, TJ; Anderson, Margery; Dillman, Adler; Yourick, Debra; Jett, Marti; Adams, Byron J.; Russell, RevaBeth

    2007-01-01

    In a collaborative effort between university researchers and high school science teachers, an inquiry-based laboratory module was designed using two species of insecticidal nematodes to help students apply scientific inquiry and elements of thoughtful experimental design. The learning experience and model are described in this article. (Contains 4…

  11. Expatriate Training in International Nongovernmental Organizations: A Model for Research

    ERIC Educational Resources Information Center

    Chang, Wei-Wen

    2005-01-01

    In light of the massive tsunami relief efforts that were still being carried out by humanitarian organizations around the world when this article went to press, this article points out a lack of human resources development research in international nongovernmental organizations (INGOs) and proposes a conceptual model for future empirical research.…

  12. Resilient organizations: matrix model and service line management.

    PubMed

    Westphal, Judith A

    2005-09-01

    Resilient organizations modify structures to meet the demands of the marketplace. The author describes a structure that enables multihospital organizations to innovate and rapidly adapt to changes. Service line management within a matrix model is an evolving organizational structure for complex systems in which nurses are pivotal members. PMID:16200010

  13. Representational Translation with Concrete Models in Organic Chemistry

    ERIC Educational Resources Information Center

    Stull, Andrew T.; Hegarty, Mary; Dixon, Bonnie; Stieff, Mike

    2012-01-01

    In representation-rich domains such as organic chemistry, students must be facile and accurate when translating between different 2D representations, such as diagrams. We hypothesized that translating between organic chemistry diagrams would be more accurate when concrete models were used because difficult mental processes could be augmented by…

  14. A Model Linking the Learning Organization and Performance Job Satisfaction

    ERIC Educational Resources Information Center

    Dirani, Khalil M.

    2006-01-01

    The underlying theories of learning and performance are quite complex. This paper proposes a model that links the learning organization theory as a process with job satisfaction as a performance theory outcome. The literature reviewed considered three process levels of learning within the learning organization and three outcome levels of job…

  15. Volatile organic compounds (VOCs): Remediation for groundwater. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-11-01

    The bibliography contains citations concerning groundwater contamination by volatile organic compounds (VOCs) and treatment technology for reclamation. Citations discuss treatments such as activated carbon, biological degradation, stripping, aeration, and catalytic oxidation. Articles discuss applications of these techniques to landfills, hazardous waste sites, and Superfund sites. (Contains a minimum of 201 citations and includes a subject term index and title list.)

  16. Crystallography Open Database – an open-access collection of crystal structures

    PubMed Central

    Gražulis, Saulius; Chateigner, Daniel; Downs, Robert T.; Yokochi, A. F. T.; Quirós, Miguel; Lutterotti, Luca; Manakova, Elena; Butkus, Justas; Moeck, Peter; Le Bail, Armel

    2009-01-01

    The Crystallography Open Database (COD), which is a project that aims to gather all available inorganic, metal–organic and small organic molecule structural data in one database, is described. The database adopts an open-access model. The COD currently contains ∼80 000 entries in crystallographic information file format, with nearly full coverage of the International Union of Crystallography publications, and is growing in size and quality. PMID:22477773

  17. Nearly data-based optimal control for linear discrete model-free systems with delays via reinforcement learning

    NASA Astrophysics Data System (ADS)

    Zhang, Jilie; Zhang, Huaguang; Wang, Binrui; Cai, Tiaoyang

    2016-05-01

    In this paper, a nearly data-based optimal control scheme is proposed for linear discrete-time model-free systems with delays. The nearly optimal control can be obtained using only measured input/output data from the system, by a reinforcement learning technique that combines Q-learning with a value iteration algorithm. First, we construct a state estimator using the measured input/output data. Second, a quadratic functional is used to approximate the value function at each point in the state space, and the data-based control is designed by the Q-learning method using the obtained state estimator. The paper then describes how to solve for the optimal inner kernel matrix ? in the least-squares sense by the value iteration algorithm. Finally, numerical examples are given to illustrate the effectiveness of our approach.
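
    For a linear system with quadratic costs, the Q-function is quadratic in the state-input pair, Q(x,u) = [x;u]' H [x;u], and value iteration updates the kernel matrix H until it converges to the optimal one. The sketch below illustrates that kernel iteration using a known (A, B) model and invented weight matrices purely for clarity; the cited approach instead estimates the kernel from measured input/output data in the least-squares sense, which is not reproduced here.

        import numpy as np

        # Hypothetical second-order discrete-time system and quadratic cost weights.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])
        Q = np.eye(2)
        R = np.array([[1.0]])

        P = np.zeros((2, 2))                     # value function kernel, V_k(x) = x' P x
        for _ in range(500):
            # Q-function kernel for V_k:  Q_k(x, u) = [x; u]' H [x; u]
            H = np.block([[Q + A.T @ P @ A, A.T @ P @ B],
                          [B.T @ P @ A,     R + B.T @ P @ B]])
            Hxx, Hxu = H[:2, :2], H[:2, 2:]
            Hux, Huu = H[2:, :2], H[2:, 2:]
            # Value iteration: minimize Q_k over u to obtain the next value kernel.
            P = Hxx - Hxu @ np.linalg.solve(Huu, Hux)

        K = np.linalg.solve(Huu, Hux)            # greedy feedback gain, u = -K x
        print("converged feedback gain K:", K)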

  18. Using infrared thermography for the creation of a window surface temperature database to validate computer heat transfer models

    SciTech Connect

    Beck, F.A.; Griffith, B.T.; Tuerler, D.; Arasteh, D.

    1995-04-01

    IR thermography is well suited for resolving small differences in the thermal performance of highly insulating window systems. Infrared thermographic measurements made in conjunction with reference emitter techniques in a controlled and characterized laboratory setting can have an absolute accuracy of ±0.5°C. Quantitative infrared thermography requires that a number of sources of error related to measurement accuracy and test environmental conditions be quantified and minimized to the extent possible. Laboratory-based infrared thermography can be used to generate window surface temperature profile databases which can be used to direct the development of 2-D and 3-D finite element and finite difference method fenestration heat transfer simulation codes, identify their strengths and weaknesses, set research priorities, and validate finished modeling tools. Development of such a database is under way at Lawrence Berkeley Laboratory, and it will be made available for public use.

  19. YMDB: the Yeast Metabolome Database.

    PubMed

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated 'metabolomic' database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound descriptions, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation sector, and the beer, wine and spirit industries. PMID:22064855

  20. A Comprehensive Opacities/Atomic Database for the Analysis of Astrophysical Spectra and Modeling

    NASA Technical Reports Server (NTRS)

    Pradhan, Anil K. (Principal Investigator)

    1997-01-01

    The main goals of this ADP award have been accomplished. The electronic database TOPBASE, consisting of the large volume of atomic data from the Opacity Project, has been installed and is operative at a NASA site at the Laboratory for High Energy Astrophysics Science Research Center (HEASRC) at the Goddard Space Flight Center. The database will be continually maintained and updated by the PI and collaborators. TOPBASE is publicly accessible from IP: topbase.gsfc.nasa.gov. During the last six months (since the previous progress report), considerable work has been carried out to: (1) put in the new data for low ionization stages of iron: Fe I - V, beginning with Fe II, (2) high-energy photoionization cross sections computed by Dr. Hong Lin Zhang (consultant on the Project) were 'merged' with the current Opacity Project data and input into TOPbase; (3) plans laid out for a further extension of TOPbase to include TIPbase, the database for collisional data to complement the radiative data in TOPbase.

  1. VERIFICATION OF A TOXIC ORGANIC SUBSTANCE TRANSPORT AND BIOACCUMULATION MODEL

    EPA Science Inventory

    A field verification of the Toxic Organic Substance Transport and Bioaccumulation Model (TOXIC) was conducted using the insecticide dieldrin and the herbicides alachlor and atrazine as the test compounds. The test sites were two Iowa reservoirs. The verification procedure include...

  2. A REVIEW OF BIOACCUMULATION MODELING APPROACHES FOR PERSISTENT ORGANIC POLLUTANTS

    EPA Science Inventory

    Persistent organic pollutants and mercury are likely to bioaccumulate in biological components of the environment, including fish and wildlife. The complex and long-term dynamics involved with bioaccumulation are often represented with models. Current scientific developments in t...

  3. Immediate Dissemination of Student Discoveries to a Model Organism Database Enhances Classroom-Based Research Experiences

    ERIC Educational Resources Information Center

    Wiley, Emily A.; Stover, Nicholas A.

    2014-01-01

    Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have…

  4. MaizeGDB: The Maize Model Organism Database for Basic, Translational, and Applied Research

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In 2001, maize became the number one production crop in the world (with over 614 million tons produced; http://faostat.fao.org). Its success is due to the high productivity per acre in tandem with a wide variety of commercial uses: not only is maize an excellent source of food, feed, and fuel, its...

  5. Knowledge-based model of hydrogen-bonding propensity in organic crystals.

    PubMed

    Galek, Peter T A; Fábián, László; Motherwell, W D Samuel; Allen, Frank H; Feeder, Neil

    2007-10-01

    A new method is presented to predict which donors and acceptors form hydrogen bonds in a crystal structure, based on the statistical analysis of hydrogen bonds in the Cambridge Structural Database (CSD). The method is named the logit hydrogen-bonding propensity (LHP) model. The approach has a potential application in identifying both likely and unusual hydrogen bonding, which can help to rationalize stable and metastable crystalline forms, of relevance to drug development in the pharmaceutical industry. Whilst polymorph prediction techniques are widely used, the LHP model is knowledge-based and is not restricted by the computational issues of polymorph prediction, and as such may form a valuable precursor to polymorph screening. Model construction applies logistic regression, using training data obtained with a new survey method based on the CSD system. The survey categorizes the hydrogen bonds and extracts model parameter values using descriptive structural and chemical properties from three-dimensional organic crystal structures. LHP predictions from a fitted model are made using two-dimensional observables alone. In the initial cases analysed, the model is highly accurate, achieving approximately 90% correct classification of both observed hydrogen bonds and non-interacting donor-acceptor pairs. Extensive statistical validation shows the LHP model to be robust across a range of small-molecule organic crystal structures. PMID:17873446
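
    The LHP model is, at its core, a logistic regression of hydrogen-bond formation (formed / not formed) on descriptors of donor-acceptor pairs. The sketch below shows the general shape of such a model using three entirely hypothetical descriptors and random placeholder data; it is not the published parameterization.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Placeholder training set: each row is a donor-acceptor pair described by three
        # hypothetical descriptors (e.g., donor acidity, acceptor basicity, steric crowding).
        X = rng.normal(size=(200, 3))
        # Placeholder outcomes: 1 = hydrogen bond observed in the crystal, 0 = not observed.
        y = (X @ np.array([1.5, 1.0, -0.8]) + rng.normal(scale=0.5, size=200) > 0).astype(int)

        model = LogisticRegression().fit(X, y)
        propensity = model.predict_proba(X[:5])[:, 1]   # fitted propensities for the first pairs
        print("example hydrogen-bonding propensities:", np.round(propensity, 2))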

  6. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  7. NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL (NUDAPT): FACILITATING ADVANCEMENTS IN URBAN METEOROLOGY AND CLIMATE MODELING WITH COMMUNITY-BASED URBAN DATABASES

    EPA Science Inventory

    We discuss the initial design and application of the National Urban Database and Access Portal Tool (NUDAPT). This new project is sponsored by the USEPA and involves collaborations and contributions from many groups from federal and state agencies, and from private and academic i...

  8. Integrating ecological risk assessments across levels of organization using the Franklin-Noss model of biodiversity

    SciTech Connect

    Brugger, K.E.; Tiebout, H.M. III

    1994-12-31

    Wildlife toxicologists pioneered methodologies for assessing ecological risk to nontarget species. Historically, ecological risk assessments (ERAs) focused on a limited array of species and were based on relatively few population-level endpoints (mortality, reproduction). Currently, risk assessment models are becoming increasingly complex, factoring in multi-species interactions (across trophic levels) and utilizing an increasingly diverse set of ecologically significant endpoints. This trend suggests the increasing importance of safeguarding not only populations of individual species, but also the overall integrity of the larger biotic systems that support them. In this sense, ERAs are aligned with conservation biology, an applied science that uses ecological knowledge to conserve biodiversity. A theoretical conservation biology model could be incorporated into ERAs to quantify impacts to biodiversity (structure, function or composition across levels of biological organization). The authors suggest that the Franklin-Noss model for evaluating biodiversity, with its nested, hierarchical approach, may provide a suitable paradigm for assessing and integrating the ecological risks that chemical contaminants pose to biological systems, from the simplest levels (genotypes, individual organisms) to the most complex levels of organization (communities and ecosystems). The Franklin-Noss model can accommodate the existing ecotoxicological database and, perhaps more importantly, indicate new areas in which critical endpoints should be identified and investigated.

  9. MouseNet v2: a database of gene networks for studying the laboratory mouse and eight other model vertebrates

    PubMed Central

    Kim, Eiru; Hwang, Sohyun; Kim, Hyojin; Shim, Hongseok; Kang, Byunghee; Yang, Sunmo; Shim, Jae Ho; Shin, Seung Yeon; Marcotte, Edward M.; Lee, Insuk

    2016-01-01

    The laboratory mouse, Mus musculus, is one of the most important animal tools in biomedical research. Functional characterization of mouse genes has therefore been a long-standing goal in mammalian and human genetics. Although large-scale knockout phenotyping is in progress through international collaborative efforts, a large portion of the mouse genome is still poorly characterized with respect to cellular functions and associations with disease phenotypes. A genome-scale functional network of mouse genes, MouseNet, was previously developed in the context of the MouseFunc competition, which allowed only limited input data for network inference. Here, we present an improved mouse co-functional network, MouseNet v2 (available at http://www.inetbio.org/mousenet), which covers 17 714 genes (>88% of the coding genome) with 788 080 links, along with a companion web server for network-assisted functional hypothesis generation. The network database has been substantially improved by a large expansion of genomics data. For example, the MouseNet v2 database contains 183 co-expression networks inferred from 8154 public microarray samples. We demonstrate that MouseNet v2 is predictive for mammalian phenotypes as well as human diseases, which suggests its usefulness for the discovery of novel disease genes and the dissection of disease pathways. Furthermore, the MouseNet v2 database provides functional networks for eight other vertebrate models used in various research fields. PMID:26527726

  10. HITEMP derived spectral database for the prediction of jet engine exhaust infrared emission using a statistical band model

    NASA Astrophysics Data System (ADS)

    Lindermeir, E.; Beier, K.

    2012-08-01

    The spectroscopic database HITEMP 2010 is used to upgrade the parameters of the statistical molecular band model which is part of the infrared signature prediction code NIRATAM (NATO InfraRed Air TArget Model). This band model was recommended by NASA and is applied in several codes that determine the infrared emission of combustion gases. The upgrade concerns spectral absorption coefficients and line densities of the gases H2O, CO2, and CO in the spectral region 400-5000 cm-1 (2-25 μm) with a spectral resolution of 5 cm-1. The temperature range 100-3000 K is covered. Two methods to update the database are presented: the method usually applied in the literature and an alternative, more laborious procedure that employs least-squares fitting. The improvements achieved by both methods are demonstrated by comparing radiance spectra obtained from the band model with line-by-line results. The performance in a realistic scenario is investigated on the basis of measured and predicted spectra of a jet aircraft plume in afterburner mode.

  11. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  12. Improved AIOMFAC model parameterisation of the temperature dependence of activity coefficients for aqueous organic mixtures

    NASA Astrophysics Data System (ADS)

    Ganbavale, G.; Zuend, A.; Marcolli, C.; Peter, T.

    2015-01-01

    This study presents a new, improved parameterisation of the temperature dependence of activity coefficients in the AIOMFAC (Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients) model applicable for aqueous as well as water-free organic solutions. For electrolyte-free organic and organic-water mixtures the AIOMFAC model uses a group-contribution approach based on UNIFAC (UNIversal quasi-chemical Functional-group Activity Coefficients). This group-contribution approach explicitly accounts for interactions among organic functional groups and between organic functional groups and water. The previous AIOMFAC version uses a simple parameterisation of the temperature dependence of activity coefficients, aimed to be applicable in the temperature range from ~ 275 to ~ 400 K. With the goal to improve the description of a wide variety of organic compounds found in atmospheric aerosols, we extend the AIOMFAC parameterisation for the functional groups carboxyl, hydroxyl, ketone, aldehyde, ether, ester, alkyl, aromatic carbon-alcohol, and aromatic hydrocarbon to atmospherically relevant low temperatures. To this end we introduce a new parameterisation for the temperature dependence. The improved temperature dependence parameterisation is derived from classical thermodynamic theory by describing effects from changes in molar enthalpy and heat capacity of a multi-component system. Thermodynamic equilibrium data of aqueous organic and water-free organic mixtures from the literature are carefully assessed and complemented with new measurements to establish a comprehensive database, covering a wide temperature range (~ 190 to ~ 440 K) for many of the functional group combinations considered. Different experimental data types and their processing for the estimation of AIOMFAC model parameters are discussed. The new AIOMFAC parameterisation for the temperature dependence of activity coefficients from low to high temperatures shows an overall improvement of 28% in

  13. Dietary Uptake Models Used for Modeling the Bioaccumulation of Organic Contaminants in Fish

    EPA Science Inventory

    Numerous models have been developed to predict the bioaccumulation of organic chemicals in fish. Although chemical dietary uptake can be modeled using assimilation efficiencies, bioaccumulation models fall into two distinct groups. The first group implicitly assumes that assimila...

  14. Spatial Arrangement of Organic Compounds on a Model Mineral Surface: Implications for Soil Organic Matter Stabilization

    SciTech Connect

    Petridis, Loukas; Ambaye, Haile Arena; Jagadamma, Sindhu; Kilbey, S. Michael; Lokitz, Bradley S; Lauter, Valeria; Mayes, Melanie

    2014-01-01

    The complexity of the mineral-organic carbon interface may influence the extent of stabilization of organic carbon compounds in soils, which is important for global climate futures. The nanoscale structure of a model interface was examined here by depositing films of organic carbon compounds of contrasting chemical character, hydrophilic glucose and amphiphilic stearic acid, onto a soil mineral analogue (Al2O3). Neutron reflectometry, a technique which provides depth-sensitive insight into the organization of the thin films, indicates that glucose molecules reside in a layer between Al2O3 and stearic acid, a result that was verified by water contact angle measurements. Molecular dynamics simulations reveal the thermodynamic driving force behind glucose partitioning at the mineral interface: the entropic penalty of confining the less mobile glucose on the mineral surface is lower than for stearic acid. The fundamental information obtained here helps rationalize how complex arrangements of organic carbon on soil mineral surfaces may arise.

  15. Principles of chromatin organization in yeast: relevance of polymer models to describe nuclear organization and dynamics.

    PubMed

    Wang, Renjie; Mozziconacci, Julien; Bancaud, Aurélien; Gadal, Olivier

    2015-06-01

    Nuclear organization can impact all aspects of the genome life cycle. This organization is thoroughly investigated by advanced imaging and chromosome conformation capture techniques, providing considerable amounts of data describing the spatial organization of chromosomes. In this review, we focus on polymer models describing chromosome statics and dynamics in the yeast Saccharomyces cerevisiae. We suggest that the equilibrium configuration of a polymer chain tethered at both ends and placed in a confined volume is consistent with the current literature, implying that local chromatin interactions play a secondary role in yeast nuclear organization. A future challenge is to reach an integrated multi-scale description of yeast chromosome organization, which is crucially needed to improve our understanding of the regulation of genomic transactions. PMID:25956973

  16. VibrioBase: a model for next-generation genome and annotation database development.

    PubMed

    Choo, Siew Woh; Heydari, Hamed; Tan, Tze King; Siow, Cheuk Chuen; Beh, Ching Yew; Wee, Wei Yee; Mutha, Naresh V R; Wong, Guat Jah; Ang, Mia Yang; Yazdi, Amir Hessam

    2014-01-01

    To facilitate ongoing research on Vibrio spp., a dedicated platform for the Vibrio research community is needed to host the fast-growing amount of genomic data and to facilitate the analysis of these data. We present VibrioBase, a useful resource platform providing all basic features of a sequence database together with unique analysis tools that could be valuable for the Vibrio research community. VibrioBase currently houses a total of 252 Vibrio genomes and was developed in a user-friendly manner to enable the analysis of these genomic data, particularly in the field of comparative genomics. Besides general data browsing features, VibrioBase offers analysis tools such as BLAST interfaces and the JBrowse genome browser. Other important features of this platform include our newly developed in-house tools: the pairwise genome comparison (PGC) tool and the pathogenomics profiling tool (PathoProT). The PGC tool is useful for the identification and comparative analysis of two genomes, whereas PathoProT is designed for comparative pathogenomics analysis of Vibrio strains. Both of these tools will enable researchers with little experience in bioinformatics to obtain meaningful information from Vibrio genomes with ease. We have tested the validity and suitability of these tools and features for use in next-generation database development. PMID:25243218

  17. VibrioBase: A Model for Next-Generation Genome and Annotation Database Development

    PubMed Central

    Choo, Siew Woh; Tan, Tze King; Mutha, Naresh V. R.; Wong, Guat Jah

    2014-01-01

    To facilitate ongoing research on Vibrio spp., a dedicated platform for the Vibrio research community is needed to host the fast-growing amount of genomic data and to facilitate the analysis of these data. We present VibrioBase, a useful resource platform providing all basic features of a sequence database together with unique analysis tools that could be valuable for the Vibrio research community. VibrioBase currently houses a total of 252 Vibrio genomes and was developed in a user-friendly manner to enable the analysis of these genomic data, particularly in the field of comparative genomics. Besides general data browsing features, VibrioBase offers analysis tools such as BLAST interfaces and the JBrowse genome browser. Other important features of this platform include our newly developed in-house tools: the pairwise genome comparison (PGC) tool and the pathogenomics profiling tool (PathoProT). The PGC tool is useful for the identification and comparative analysis of two genomes, whereas PathoProT is designed for comparative pathogenomics analysis of Vibrio strains. Both of these tools will enable researchers with little experience in bioinformatics to obtain meaningful information from Vibrio genomes with ease. We have tested the validity and suitability of these tools and features for use in next-generation database development. PMID:25243218

  18. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative with the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to arrive at a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.
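
    A minimal sketch of the kind of Bayesian network reasoning described above, using a toy three-node network (treatment choice, occult nodal disease, regional control) with made-up conditional probabilities; inference is done by brute-force enumeration rather than a dedicated library. Neither the structure nor the numbers come from the cited models.

        from itertools import product

        # Toy network: Occult disease O -> Control C <- Treatment T (all binary).
        p_occult = {True: 0.25, False: 0.75}              # P(O): prior on occult nodal disease
        p_treat = {True: 0.5, False: 0.5}                 # P(T): elective nodal irradiation or not
        # P(C = controlled | O, T) -- hypothetical numbers for illustration only.
        p_control = {(True, True): 0.85, (True, False): 0.55,
                     (False, True): 0.97, (False, False): 0.95}

        def joint(o, t, c):
            pc = p_control[(o, t)]
            return p_occult[o] * p_treat[t] * (pc if c else 1.0 - pc)

        def prob_control_given_treatment(t):
            """P(C = controlled | T = t) by enumerating over the unobserved node O."""
            num = sum(joint(o, t, True) for o in (True, False))
            den = sum(joint(o, t, c) for o, c in product((True, False), repeat=2))
            return num / den          # den equals P(T = t), so this conditions on the treatment

        for t in (True, False):
            print(f"P(control | irradiate={t}) = {prob_control_given_treatment(t):.3f}")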

  19. Mutant mice: experimental organisms as materialised models in biomedicine.

    PubMed

    Huber, Lara; Keuck, Lara K

    2013-09-01

    Animal models have received particular attention as key examples of material models. In this paper, we argue that the specificities of establishing animal models-acknowledging their status as living beings and as epistemological tools-necessitate a more complex account of animal models as materialised models. This becomes particularly evident in animal-based models of diseases that only occur in humans: in these cases, the representational relation between animal model and human patient needs to be generated and validated. The first part of this paper presents an account of how disease-specific animal models are established by drawing on the example of transgenic mice models for Alzheimer's disease. We will introduce an account of validation that involves a three-fold process including (1) from human being to experimental organism; (2) from experimental organism to animal model; and (3) from animal model to human patient. This process draws upon clinical relevance as much as scientific practices and results in disease-specific, yet incomplete, animal models. The second part of this paper argues that the incompleteness of models can be described in terms of multi-level abstractions. We qualify this notion by pointing to different experimental techniques and targets of modelling, which give rise to a plurality of models for a specific disease. PMID:23545252

  20. A Workforce Design Model: Providing Energy to Organizations in Transition

    ERIC Educational Resources Information Center

    Halm, Barry J.

    2011-01-01

    The purpose of this qualitative study was to examine the change in performance realized by a professional services organization, which resulted in the Life Giving Workforce Design (LGWD) model through a grounded theory research design. This study produced a workforce design model characterized as an organizational blueprint that provides virtuous…

  1. Institutionalizing Innovation in an Organization: A Model and Case Study.

    ERIC Educational Resources Information Center

    Slawski, Carl

    A policy systems theoretical analysis of the problem of institutionalizing innovation in an organization is summarized in a flow diagram. The model is presented in terms of specific hypotheses, and then illustrated with a case of frustrated innovation, the 1968-69 crisis and strike at San Francisco State College. The model is set up (1) to help…

  2. Simple model of self-organized biological evolution

    SciTech Connect

    de Boer, J.; Derrida, B.; Flyvbjerg, H.; Jackson, A.D.; Wettig, T.

    1994-08-08

    We give an exact solution of a recently proposed self-organized critical model of biological evolution. We show that the model has a power-law distribution of durations of "coevolutionary avalanches" with a mean-field exponent 3/2. We also calculate analytically the finite-size effects which cut off this power law at times of the order of the system size.
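
    For readers unfamiliar with this class of models, the sketch below simulates the standard Bak-Sneppen evolution model (random fitnesses on a ring; at each step the minimum-fitness site and its two neighbours are replaced), which is the kind of self-organized critical dynamics such exact solutions address. System size and step count are arbitrary choices for the sketch.

        import numpy as np

        rng = np.random.default_rng(1)

        N, steps = 200, 20000
        fitness = rng.random(N)               # random initial fitnesses on a ring of N species

        for _ in range(steps):
            i = int(np.argmin(fitness))       # the least-fit species mutates...
            for j in (i - 1, i, (i + 1) % N): # ...with its two neighbours (i - 1 wraps via negative indexing)
                fitness[j] = rng.random()

        # In the self-organized critical state most fitnesses accumulate above a threshold
        # (roughly 2/3 for this one-dimensional variant), so the mean ends up well above 0.5.
        print(f"mean fitness after {steps} updates: {fitness.mean():.3f}")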

  3. Mechanism for production of secondary organic aerosols and their representation in atmospheric models. Final report

    SciTech Connect

    Seinfeld, J.H.; Flagan, R.C.

    1999-06-07

    This document contains the following: organic aerosol formation from the oxidation of biogenic hydrocarbons; gas/particle partitioning of semivolatile organic compounds to model inorganic, organic, and ambient smog aerosols; and representation of secondary organic aerosol formation in atmospheric models.

  4. A case study for a digital seabed database: Bohai Sea engineering geology database

    NASA Astrophysics Data System (ADS)

    Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang

    2006-07-01

    This paper discusses the design of an ORACLE-based Bohai Sea engineering geology database, covering requirements analysis, conceptual structure design, logical structure design, physical structure design and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database and used the powerful data management capabilities provided by the object-relational database ORACLE to organize and manage the storage space and improve its security. In this way, the database can provide rapid and highly effective performance in data storage, maintenance and query to satisfy the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.
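
    To give a flavour of the relational organization such a seabed engineering geology database involves, here is a minimal SQLite sketch with hypothetical tables for survey stations and sediment samples; the actual Bohai Sea schema, table names, and fields are not given in this abstract and are not reproduced here.

        import sqlite3

        conn = sqlite3.connect(":memory:")   # throwaway in-memory database for the sketch
        conn.executescript("""
        CREATE TABLE station (               -- hypothetical survey station table
            station_id    TEXT PRIMARY KEY,
            longitude     REAL NOT NULL,
            latitude      REAL NOT NULL,
            water_depth_m REAL
        );
        CREATE TABLE sediment_sample (       -- hypothetical sample table linked to stations
            sample_id               TEXT PRIMARY KEY,
            station_id              TEXT REFERENCES station(station_id),
            depth_below_seafloor_m  REAL,
            grain_size_class        TEXT,
            shear_strength_kpa      REAL
        );
        """)
        conn.execute("INSERT INTO station VALUES ('BH-001', 120.15, 38.52, 24.3)")
        conn.execute("INSERT INTO sediment_sample VALUES ('BH-001-S1', 'BH-001', 1.5, 'silty clay', 18.0)")

        for row in conn.execute("""SELECT s.sample_id, st.water_depth_m, s.shear_strength_kpa
                                   FROM sediment_sample s JOIN station st USING (station_id)"""):
            print(row)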

  5. The expanding epigenetic landscape of non-model organisms

    PubMed Central

    Bonasio, Roberto

    2015-01-01

    Epigenetics studies the emergence of different phenotypes from a single genotype. Although these processes are essential to cellular differentiation and transcriptional memory, they are also widely used in all branches of the tree of life by organisms that require plastic but stable adaptation to their physical and social environment. Because of the inherent flexibility of epigenetic regulation, a variety of biological phenomena can be traced back to evolutionary adaptations of few conserved molecular pathways that converge on chromatin. For these reasons chromatin biology and epigenetic research have a rich history of chasing discoveries in a variety of model organisms, including yeast, flies, plants and humans. Many more fascinating examples of epigenetic plasticity lie outside the realm of model organisms and have so far been only sporadically investigated at a molecular level; however, recent progress on sequencing technology and genome editing tools have begun to blur the lines between model and non-model organisms, opening numerous new avenues for investigation. Here, I review examples of epigenetic phenomena in non-model organisms that have emerged as potential experimental systems, including social insects, fish and flatworms, and are becoming accessible to molecular approaches. PMID:25568458

  6. Maize databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  7. Making Organisms Model Human Behavior: Situated Models in North-American Alcohol Research, 1950-onwards

    PubMed Central

    Leonelli, Sabina; Ankeny, Rachel A.; Nelson, Nicole C.; Ramsden, Edmund

    2014-01-01

    Argument We examine the criteria used to validate the use of nonhuman organisms in North-American alcohol addiction research from the 1950s to the present day. We argue that this field, where the similarities between behaviors in humans and non-humans are particularly difficult to assess, has addressed questions of model validity by transforming the situatedness of non-human organisms into an experimental tool. We demonstrate that model validity does not hinge on the standardization of one type of organism in isolation, as often the case with genetic model organisms. Rather, organisms are viewed as necessarily situated: they cannot be understood as a model for human behavior in isolation from their environmental conditions. Hence the environment itself is standardized as part of the modeling process; and model validity is assessed with reference to the environmental conditions under which organisms are studied. PMID:25233743

  8. Model Organisms Fact Sheet: Using Model Organisms to Study Health and Disease

    MedlinePlus

    ... for the related ones, as well. What about computer models? Computers serve as virtual laboratories where scientists ... scientists have more confidence in the predictions. Can computer models replace animal models in research? Even though ...

  9. Modelling of organic matter dynamics during the composting process.

    PubMed

    Zhang, Y; Lashermes, G; Houot, S; Doublet, J; Steyer, J P; Zhu, Y G; Barriuso, E; Garnier, P

    2012-01-01

    Composting urban organic wastes enables the recycling of their organic fraction in agriculture. The objective of this new composting model was to gain a clearer understanding of the dynamics of organic fractions during composting and to predict the final quality of composts. Organic matter was split into different compartments according to its degradability. The nature and size of these compartments were studied using a biochemical fractionation method. The evolution of each compartment and of the microbial biomass was simulated, as was the total organic carbon loss corresponding to organic carbon mineralisation into CO2. Twelve composting experiments with different feedstocks were used to calibrate and validate our model. We obtained a unique set of estimated parameters. Good agreement was achieved between the simulated and experimental results describing the evolution of the different organic fractions, with the exception of some composts, for which the cellulosic and soluble pools were poorly simulated. The degradation rate of the cellulosic fraction appeared to be highly variable and dependent on the origin of the feedstocks. The initial soluble fraction could contain both degradable and recalcitrant elements that are not easily accessible experimentally. PMID:21978424
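
    Models of this type typically treat each organic pool as decaying first-order, with a fraction of the degraded carbon assimilated into microbial biomass and the remainder mineralised to CO2. The sketch below is a generic two-pool illustration with invented pool sizes, rate constants, and yield coefficient; it does not reproduce the calibrated parameter set of the cited model.

        # Hypothetical initial pools (g C) and first-order rate constants (d^-1).
        pools = {"readily_degradable": 40.0, "cellulosic": 50.0}
        rates = {"readily_degradable": 0.08, "cellulosic": 0.02}
        biomass, co2 = 2.0, 0.0
        yield_coeff = 0.4          # fraction of degraded C assimilated into biomass
        k_biomass_decay = 0.01     # biomass turnover, recycled to CO2 here for simplicity

        dt, days = 0.5, 120
        for _ in range(int(days / dt)):
            degraded = {name: rates[name] * pools[name] * dt for name in pools}
            for name, d in degraded.items():
                pools[name] -= d
                biomass += yield_coeff * d
                co2 += (1.0 - yield_coeff) * d
            dead = k_biomass_decay * biomass * dt
            biomass -= dead
            co2 += dead            # total carbon (pools + biomass + CO2) is conserved

        print({k: round(v, 1) for k, v in pools.items()}, f"biomass={biomass:.1f}", f"CO2={co2:.1f}")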

  10. Data-based magnetic field models: Present status and future prospects

    NASA Technical Reports Server (NTRS)

    Pulkkinen, T. I.; Koskinen, H. E. J.; Pellinen, R. J.; Sergeev, V. A.; Tsyganenko, N. A.; Opgenoorth, H. J.; Donovan, E.

    1997-01-01

    Empirical magnetic field models are discussed in terms of their use in multi-instrument data analysis. The variety of previous applications of field models is demonstrated. Problems encountered when using data-based models are addressed, and the prospects for their future development are outlined. Some issues related to the time dependence of the field configuration are presented.

  11. Precisely parameterized experimental and computational models of tissue organization.

    PubMed

    Molitoris, Jared M; Paliwal, Saurabh; Sekar, Rajesh B; Blake, Robert; Park, JinSeok; Trayanova, Natalia A; Tung, Leslie; Levchenko, Andre

    2016-02-01

    Patterns of cellular organization in diverse tissues frequently display a complex geometry and topology tightly related to the tissue function. Progressive disorganization of tissue morphology can lead to pathologic remodeling, necessitating the development of experimental and theoretical methods for analyzing how tolerant normal tissue function is to structural alterations. A systematic way to investigate the relationship of diverse cell organization to tissue function is to engineer two-dimensional cell monolayers replicating key aspects of the in vivo tissue architecture. However, it is still not clear how this can be accomplished at a tissue-level scale in a parameterized fashion, allowing for a mathematically precise definition of the model tissue organization and properties down to the cellular scale, with a parameter-dependent, gradual change in model tissue organization. Here, we describe and use a method of designing precisely parameterized, geometrically complex patterns that are then used to control cell alignment and communication in model tissues. We demonstrate direct application of this method to guiding the growth of cardiac cell cultures and developing mathematical models of cell function that correspond to the underlying experimental patterns. Several anisotropic patterned cultures spanning a broad range of multicellular organization, mimicking the cardiac tissue organization of different regions of the heart, were found to be similar to each other and to isotropic cell monolayers in terms of local cell-cell interactions, reflected in similar confluency, morphology and connexin-43 expression. However, in agreement with the model predictions, different anisotropic patterns of cell organization, paralleling in vivo alterations of cardiac tissue morphology, resulted in variable and novel functional responses with important implications for the initiation and maintenance of cardiac arrhythmias. We conclude that variations of tissue geometry and topology

  12. The PIR-International databases.

    PubMed Central

    Barker, W C; George, D G; Mewes, H W; Pfeiffer, F; Tsugita, A

    1993-01-01

    PIR-International is an association of macromolecular sequence data collection centers dedicated to fostering international cooperation as an essential element in the development of scientific databases. PIR-International is most noted for the Protein Sequence Database. This database originated in the early 1960s with the pioneering work of the late Margaret Dayhoff as a research tool for the study of protein evolution and intersequence relationships; it is maintained as a scientific resource, organized by biological concepts, using sequence homology as a guiding principle. PIR-International also maintains a number of other genomic, protein sequence, and sequence-related databases. The databases of PIR-International are made widely available. This paper briefly describes the architecture of the Protein Sequence Database, a number of other PIR-International databases, and mechanisms for providing access to and for distribution of these databases. PMID:8332528

  13. Electrochemical model of the polyaniline based organic memristive device

    NASA Astrophysics Data System (ADS)

    Demin, V. A.; Erokhin, V. V.; Kashkarov, P. K.; Kovalchuk, M. V.

    2014-08-01

    The electrochemical organic memristive device with a polyaniline active layer is a stand-alone device designed and realized to reproduce some synaptic properties in innovative electronic circuits, including neuromorphic networks capable of learning. In this work, a new theoretical model of the polyaniline memristive device is presented. The developed model of organic memristive functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of this device. Results of the calculation demonstrate not only a qualitative explanation of the characteristics observed in the experiment but also quantitative agreement with the measured current values. It is shown how the memristive device can behave at zero potential difference relative to the reference electrode. This improved model can establish a basis for the design and prediction of the properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices.

  14. Electrochemical model of the polyaniline based organic memristive device

    SciTech Connect

    Demin, V. A. E-mail: victor.erokhin@fis.unipr.it; Erokhin, V. V. E-mail: victor.erokhin@fis.unipr.it; Kashkarov, P. K.; Kovalchuk, M. V.

    2014-08-14

    The electrochemical organic memristive device with a polyaniline active layer is a stand-alone device designed and realized to reproduce some synaptic properties in innovative electronic circuits, including neuromorphic networks capable of learning. In this work, a new theoretical model of the polyaniline memristive device is presented. The developed model of organic memristive functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of this device. Results of the calculation demonstrate not only a qualitative explanation of the characteristics observed in the experiment but also quantitative agreement with the measured current values. It is shown how the memristive device can behave at zero potential difference relative to the reference electrode. This improved model can establish a basis for the design and prediction of the properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices.

  15. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems

    PubMed Central

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K.; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C.; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com PMID:25887162
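
    The abstract notes that the network models are stored as JSON documents in a MongoDB instance and can be searched by gene, process or keyword. The sketch below shows how such a store could be queried with pymongo, assuming a local MongoDB and hypothetical database, collection and field names ("cbn", "networks", "nodes.label", "description"); the public site exposes its own web interface rather than this raw schema.

      # Hedged sketch: querying a local MongoDB holding network models stored as JSON
      # documents, in the spirit described above. The database, collection and field
      # names ("cbn", "networks", "nodes.label", "description") are hypothetical.
      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")
      networks = client["cbn"]["networks"]

      # find network models whose node list mentions a gene of interest
      for doc in networks.find({"nodes.label": "NFE2L2"}, {"name": 1, "version": 1}):
          print(doc.get("name"), doc.get("version"))

      # keyword search over free-text descriptions (assumes a text index exists)
      networks.create_index([("description", "text")])
      for doc in networks.find({"$text": {"$search": "angiogenesis"}}, {"name": 1}):
          print(doc.get("name"))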

  16. An isolated line-shape model to go beyond the Voigt profile in spectroscopic databases and radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Ngo, N. H.; Lisak, D.; Tran, H.; Hartmann, J.-M.

    2013-11-01

    We demonstrate that a previously proposed model opens the route for the inclusion of refined non-Voigt profiles in spectroscopic databases and atmospheric radiative transfer codes. Indeed, this model fulfills many essential requirements: (i) it takes both velocity changes and the speed dependences of the pressure-broadening and -shifting coefficients into account. (ii) It leads to accurate descriptions of the line shapes of very different molecular systems. Tests made for pure H2, CO2 and O2 and for H2O diluted in N2 show that residuals are down to ≃0.2% of the peak absorption, (except for the untypical system of H2 where a maximum residual of ±3% is reached), thus fulfilling the precision requirements of the most demanding remote sensing experiments. (iii) It is based on a limited set of parameters for each absorption line that have known dependences on pressure and can thus be stored in databases. (iv) Its calculation requires very reasonable computer costs, only a few times higher than that of a usual Voigt profile. Its inclusion in radiative transfer codes will thus induce bearable CPU time increases. (v) It can be extended in order to take line-mixing effects into account, at least within the so-called first-order approximation.
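
    For reference, the baseline that such refined profiles extend is the Voigt profile, the convolution of a Doppler (Gaussian) and a pressure-broadened (Lorentzian) component; the model discussed above adds speed-dependent broadening/shifting and velocity-changing collisions on top of this. The standard textbook form (not reproduced from the paper itself) is

      V(\nu) \;=\; \int_{-\infty}^{+\infty} G(u)\,L(\nu - u)\,du,
      \qquad
      G(u) \;=\; \sqrt{\frac{\ln 2}{\pi\,\Gamma_D^{2}}}\; e^{-\ln 2\, u^{2}/\Gamma_D^{2}},
      \qquad
      L(\nu) \;=\; \frac{1}{\pi}\,\frac{\Gamma_L}{(\nu-\nu_0-\Delta)^{2} + \Gamma_L^{2}},

    with Doppler half-width \Gamma_D, Lorentzian half-width \Gamma_L and pressure shift \Delta about the line centre \nu_0.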

  17. ECMDB: the E. coli Metabolome Database.

    PubMed

    Guo, An Chi; Jewison, Timothy; Wilson, Michael; Liu, Yifeng; Knox, Craig; Djoumbou, Yannick; Lo, Patrick; Mandal, Rupasri; Krishnamurthy, Ram; Wishart, David S

    2013-01-01

    The Escherichia coli Metabolome Database (ECMDB, http://www.ecmdb.ca) is a comprehensively annotated metabolomic database containing detailed information about the metabolome of E. coli (K-12). Modelled closely on the Human and Yeast Metabolome Databases, the ECMDB contains >2600 metabolites with links to ∼1500 different genes and proteins, including enzymes and transporters. The information in the ECMDB has been collected from dozens of textbooks, journal articles and electronic databases. Each metabolite entry in the ECMDB contains an average of 75 separate data fields, including comprehensive compound descriptions, names and synonyms, chemical taxonomy, compound structural and physicochemical data, bacterial growth conditions and substrates, reactions, pathway information, enzyme data, gene/protein sequence data and numerous hyperlinks to images, references and other public databases. The ECMDB also includes an extensive collection of intracellular metabolite concentration data compiled from our own work as well as other published metabolomic studies. This information is further supplemented with thousands of fully assigned reference nuclear magnetic resonance and mass spectrometry spectra obtained from pure E. coli metabolites that we (and others) have collected. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of E. coli's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers but also to molecular biologists, systems biologists and individuals in the biotechnology industry. PMID:23109553

  18. A Generalized Relational Schema for an Integrated Clinical Patient Database

    PubMed Central

    Friedman, Carol; Hripcsak, George; Johnson, Stephen B.; Cimino, James J.; Clayton, Paul D.

    1990-01-01

    Patient data is central to a Clinical Information System (CIS). The organization of the data in a patient database is essential to the functioning of the system. If the CIS contains a medical decision support component, further requirements are imposed on the database. It must be capable of accurately representing a broad range of clinical information in coded form, and be organized for efficient retrievals by patient, time, and type of clinical term. This paper presents a generalized schema for a clinical patient database within the relational database model. The general design makes it possible to represent diverse clinical data in a standard structure and to organize the data so that it is densely clustered by patient and time.
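
    A hedged sqlite3 sketch of the kind of generalized, term-coded and time-stamped clinical event table the abstract describes, indexed for retrieval by patient, time and type of clinical term, is shown below. The table and column names are illustrative placeholders, not the published schema.

      # Hedged sketch of a generalized clinical-event table in the spirit described
      # above: every observation is a coded term with a value, time-stamped and keyed
      # by patient, and indexed for retrieval by patient, time and term type.
      # Table and column names are illustrative, not the published schema.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE clinical_event (
          patient_id  INTEGER NOT NULL,
          event_time  TEXT    NOT NULL,   -- ISO-8601 timestamp
          term_code   TEXT    NOT NULL,   -- coded clinical term (vocabulary-specific)
          term_type   TEXT    NOT NULL,   -- e.g. 'lab', 'diagnosis', 'medication'
          value_num   REAL,
          value_text  TEXT
      );
      CREATE INDEX ix_event_patient_time ON clinical_event (patient_id, event_time);
      CREATE INDEX ix_event_term_type    ON clinical_event (term_type, term_code);
      """)

      conn.execute(
          "INSERT INTO clinical_event VALUES (?, ?, ?, ?, ?, ?)",
          (42, "1990-06-01T08:30:00", "GLU", "lab", 5.4, None),
      )
      rows = conn.execute(
          "SELECT event_time, term_code, value_num FROM clinical_event "
          "WHERE patient_id = ? AND term_type = 'lab' ORDER BY event_time",
          (42,),
      ).fetchall()
      print(rows)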

  19. A geographic information system on the potential distribution and abundance of Fasciola hepatica and F. gigantica in east Africa based on Food and Agriculture Organization databases.

    PubMed

    Malone, J B; Gommes, R; Hansen, J; Yilma, J M; Slingenberg, J; Snijders, F; Nachtergaele, F; Ataman, E

    1998-07-31

    An adaptation of a previously developed climate forecast computer model and digital agroecologic database resources available from FAO for developing countries were used to develop a geographic information system risk assessment model for fasciolosis in East Africa, a region where both F. hepatica and F. gigantica occur as a cause of major economic losses in livestock. Regional F. hepatica and F. gigantica forecast index maps were created. Results were compared to environmental data parameters, known life cycle micro-environment requirements and to available Fasciola prevalence survey data and distribution patterns reported in the literature for each species (F. hepatica above 1200 m elevation, F. gigantica below 1800 m, both at 1200-1800 m). The greatest risk, for both species, occurred in areas of extended high annual rainfall associated with high soil moisture and surplus water, with risk diminishing in areas of shorter wet season and/or lower temperatures. Arid areas were generally unsuitable (except where irrigation, water bodies or floods occur) due to soil moisture deficit and/or, in the case of F. hepatica, high average annual mean temperature >23 degrees C. Regions in the highlands of Ethiopia and Kenya were identified as unsuitable for F. gigantica due to inadequate thermal regime, below the 600 growing degree days required for completion of the life cycle in a single year. The combined forecast index (F. hepatica+F. gigantica) was significantly correlated to prevalence data available for 260 of the 1220 agroecologic crop production system zones (CPSZ) and to average monthly normalized difference vegetation index (NDVI) values derived from the advanced very high resolution radiometer (AVHRR) sensor on board the NOAA polar-orbiting satellites. For use in Fasciola control programs, results indicate that monthly forecast parameters, developed in a GIS with digital agroecologic zone databases and monthly climate databases, can be used to define the
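
    The 600 growing-degree-day life-cycle threshold mentioned above is a cumulative thermal sum. A common way to compute it from daily temperature records is sketched below; the base (developmental threshold) temperature is an explicit assumption, since the abstract does not state the value used.

      # Hedged sketch of a growing-degree-day (thermal sum) calculation of the kind
      # used for the 600 GDD life-cycle threshold above. BASE_TEMP_C is an assumed
      # developmental threshold; the abstract does not specify the value used.
      BASE_TEMP_C = 10.0

      def growing_degree_days(daily_tmax_c, daily_tmin_c, base=BASE_TEMP_C):
          """Sum of max(0, mean daily temperature - base) over the record."""
          return sum(
              max(0.0, (tmax + tmin) / 2.0 - base)
              for tmax, tmin in zip(daily_tmax_c, daily_tmin_c)
          )

      # toy example: a 30-day spell of 24/14 degree C days contributes 270 GDD
      print(growing_degree_days([24.0] * 30, [14.0] * 30))  # -> 270.0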

  20. The models for assessment of chemopreventive agents: single organ models.

    PubMed

    Das, Sukta; Banerjee, Sarmistha; Saha, Prosenjit

    2004-01-01

    Research in cancer chemoprevention involves a number of activities, the first and foremost of which is acquisition of detailed knowledge concerning the process of carcinogenesis and identification of points of intervention whereby the process can be reversed or stalled. Parallel to this is the search for ideal chemopreventive agents--natural or synthetic--and screening for their activity and efficacy in vitro and in vivo. For ethical reasons it is not possible to test new agents on humans, so preclinical studies are dependent on results first being obtained with suitable animal models. Since it is not possible for a single model to reflect the diversity and heterogeneity of human cancers, it is necessary to have as many different models as possible, depending on the requirement of the studies on different aspects of cancer biology. Advances in research on carcinogenesis and chemoprevention therefore have to be accompanied by development of appropriate laboratory animal models using a variety of carcinogens that produce tumours at different sites. Animal models have contributed significantly to our understanding of carcinogenesis and ways to intervene in the underlying processes. Many animal carcinogenesis and tumour models have been found to mirror corresponding human cancers with respect to cell of origin, morphogenesis, phenotype markers and genetic alteration. In spite of the fact that interpolation of data from animal studies to humans is difficult for various reasons, animal models are widely used for assessment of new compounds with cancer chemopreventive potential and for preclinical trials. So despite the movements of animal rights activists, animal models will continue to be used for biomedical research for saving human lives. In doing so, care should be taken to treat and handle the animals with minimal discomfort to them and ensuring that alternatives are used whenever possible. PMID:15074999

  1. The power of an ontology-driven developmental toxicity database for data mining and computational modeling

    EPA Science Inventory

    Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...

  2. An Online Database and User Community for Physical Models in the Engineering Classroom

    ERIC Educational Resources Information Center

    Welch, Robert W.; Klosky, J. Ledlie

    2007-01-01

    This paper will present information about the Web site--www.handsonmechanics.com, the process to develop the Web site, the vetting and management process for inclusion of physical models by the faculty at West Point, and how faculty at other institutions can add physical models and participate in the site as it grows. Each physical model has a…

  3. European Community Databases: Online to Europe.

    ERIC Educational Resources Information Center

    Hensley, Colin

    1989-01-01

    Describes three groups of databases sponsored by the European Communities Commission: Eurobases, a textual database of the contents of the "Official Journal" of the European Community; the European Community Host Organization (ECHO) databases, which offer multilingual information about Europe; and statistical databases. Information on access and…

  4. EPIC modeling of soil organic carbon sequestration in croplands of Iowa.

    PubMed

    Causarano, Hector J; Doraiswamy, Paul C; McCarty, Gregory W; Hatfield, Jerry L; Milak, Sushil; Stern, Alan J

    2008-01-01

    Depending on management, soil organic carbon (SOC) is a potential source or sink for atmospheric CO2. We used the EPIC model to study impacts of soil and crop management on SOC in corn (Zea mays L.) and soybean (Glycine max L. Merr.) croplands of Iowa. The National Agricultural Statistics Service crops classification maps were used to identify corn-soybean areas. Soil properties were obtained from a combination of SSURGO and STATSGO databases. Daily weather variables were obtained from first order meteorological stations in Iowa and neighboring states. Data on crop management, fertilizer application and tillage were obtained from publicly available databases maintained by the NRCS, USDA-Economic Research Service (ERS), and Conservation Technology Information Center. The EPIC model accurately simulated state averages of crop yields during 1970-2005 (R2 = 0.87). Simulated SOC explained 75% of the variation in measured SOC. With current trends in conservation tillage adoption, total stock of SOC (0-20 cm) is predicted to reach 506 Tg by 2019, representing an increase of 28 Tg with respect to 1980. In contrast, when the whole soil profile was considered, EPIC estimated a decrease of SOC stocks with time, from 1835 Tg in 1980 to 1771 Tg in 2019. Hence, soil depth considered for calculations is an important factor that needs further investigation. Soil organic C sequestration rates (0-20 cm) were estimated at 0.50 to 0.63 Mg ha⁻¹ yr⁻¹ depending on climate and soil conditions. Overall, combining land use maps with EPIC proved valid for predicting impacts of management practices on SOC. However, more data on spatial and temporal variation in SOC are needed to improve model calibration and validation. PMID:18574164

  5. Research on an expert system for database operation of simulation-emulation math models. Volume 1, Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.

    1985-01-01

    The results of the first phase of Research on an Expert System for Database Operation of Simulation/Emulation Math Models are described. Techniques from artificial intelligence (AI) were brought to bear on task domains of interest to NASA Marshall Space Flight Center. One such domain is simulation of spacecraft attitude control systems. Two related software systems were developed and delivered to NASA. One was a generic simulation model for spacecraft attitude control, written in FORTRAN. The second was an expert system which understands the usage of a class of spacecraft attitude control simulation software and can assist the user in running the software. This NASA Expert Simulation System (NESS), written in LISP, contains general knowledge about digital simulation, specific knowledge about the simulation software, and self knowledge.

  6. Organic carbon cycling in landfills: Model for a continuum approach

    SciTech Connect

    Bogner, J.; Lagerkvist, A.

    1997-09-01

    Organic carbon cycling in landfills can be addressed through a continuum model where the end-points are conventional anaerobic digestion of organic waste (short-term analogue) and geologic burial of organic material (long-term analogue). Major variables influencing status include moisture state, temperature, organic carbon loading, nutrient status, and isolation from the surrounding environment. Bioreactor landfills which are engineered for rapid decomposition approach (but cannot fully attain) the anaerobic digester end-point and incur higher unit costs because of their high degree of environmental isolation and control. At the other extreme, uncontrolled land disposal of organic waste materials is similar to geologic burial where organic carbon may be aerobically recycled to atmospheric CO2, anaerobically converted to CH4 and CO2 during early diagenesis, or maintained as intermediate or recalcitrant forms into geologic time (> 1,000 years) for transformations via kerogen pathways. A family of improved landfill models is needed at several scales (molecular to landscape) that realistically address landfill processes and can be validated with field data.

  7. Driving risk assessment using near-crash database through data mining of tree-based model.

    PubMed

    Wang, Jianqiang; Zheng, Yang; Li, Xiaofei; Yu, Chenfei; Kodaka, Kenji; Li, Keqiang

    2015-11-01

    This paper presents a comprehensive naturalistic driving experiment that collected driving data under potential threats on actual Chinese roads. Using the acquired real-world naturalistic driving data, a near-crash database was built that contains vehicle status, potential crash objects, driving environment and road type, weather conditions, and driver information and actions. The aims of this study are twofold: (1) to cluster the driving-risk levels involved in near-crashes, and (2) to unveil the factors that most strongly influence the driving-risk level. A novel method to quantify the driving-risk level of a near-crash scenario is proposed by clustering the braking process characteristics, namely maximum deceleration, average deceleration, and percentage reduction in vehicle kinetic energy. A classification and regression tree (CART) is employed to unveil the relationship among driving risk, driver/vehicle characteristics, and road environment. The results indicate that the velocity when braking, triggering factors, potential object type, and potential crash type exerted the greatest influence on the driving-risk levels in near-crashes. PMID:26319604
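
    The two-step pipeline outlined above, clustering braking-process features into risk levels and then fitting a tree on explanatory factors, can be sketched as follows. The data, cluster count and tree depth are synthetic placeholders; the paper's own feature engineering and CART settings may differ.

      # Hedged sketch of the two-step pipeline described above: (1) cluster braking
      # features (max deceleration, mean deceleration, % kinetic-energy reduction)
      # into driving-risk levels, (2) fit a classification tree relating risk level
      # to explanatory factors. Data and parameter choices are synthetic placeholders.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      n = 300
      braking = np.column_stack([
          rng.uniform(1, 9, n),      # maximum deceleration (m/s^2)
          rng.uniform(0.5, 5, n),    # average deceleration (m/s^2)
          rng.uniform(5, 90, n),     # % reduction in kinetic energy
      ])
      factors = np.column_stack([
          rng.uniform(10, 110, n),   # velocity when braking (km/h)
          rng.integers(0, 3, n),     # triggering factor (coded)
          rng.integers(0, 4, n),     # potential object type (coded)
      ])

      risk_level = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(braking)
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(factors, risk_level)
      print(export_text(tree, feature_names=["speed", "trigger", "object_type"]))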

  8. Implementing marine organic aerosols into the GEOS-Chem model

    DOE PAGESBeta

    Gantt, B.; Johnson, M. S.; Crippa, M.; Prévôt, A. S. H.; Meskhidze, N.

    2014-09-09

    Marine organic aerosols (MOA) have been shown to play an important role in tropospheric chemistry by impacting surface mass, cloud condensation nuclei, and ice nuclei concentrations over remote marine and coastal regions. In this work, an online marine primary organic aerosol emission parameterization, designed to be used for both global and regional models, was implemented into the GEOS-Chem model. The implemented emission scheme improved the large underprediction of organic aerosol concentrations in clean marine regions (normalized mean bias decreases from -79% when using the default settings to -12% when marine organic aerosols are added). Model predictions were also in good agreement (correlation coefficient of 0.62 and normalized mean bias of -36%) with hourly surface concentrations of MOA observed during the summertime at an inland site near Paris, France. Our study shows that MOA have weaker coastal-to-inland concentration gradients than sea-salt aerosols, leading to several inland European cities having > 10% of their surface submicron organic aerosol mass concentration with a marine source. The addition of MOA tracers to GEOS-Chem enabled us to identify the regions with large contributions of freshly-emitted or aged aerosol having distinct physicochemical properties, potentially indicating optimal locations for future field studies.
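
    For reference, the normalized mean bias quoted above is conventionally defined (a standard metric definition, not reproduced from the paper) as

      \mathrm{NMB} \;=\; \frac{\sum_{i=1}^{N}\left(M_i - O_i\right)}{\sum_{i=1}^{N} O_i}\times 100\%,

    where M_i are modelled and O_i observed concentrations at the N paired points.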

  9. Stochastic models for plant microtubule self-organization and structure.

    PubMed

    Eren, Ezgi C; Dixit, Ram; Gautam, Natarajan

    2015-12-01

    One of the key enablers of shape and growth in plant cells is the cortical microtubule (CMT) system, which is a polymer array that forms an appropriately-structured scaffolding in each cell. Plant biologists have shown that stochastic dynamics and simple rules of interactions between CMTs can lead to a coaligned CMT array structure. However, the mechanisms and conditions that cause CMT arrays to become organized are not well understood. It is prohibitively time-consuming to use actual plants to study the effect of various genetic mutations and environmental conditions on CMT self-organization. In fact, even computer simulations with multiple replications are not fast enough due to the spatio-temporal complexity of the system. To redress this shortcoming, we develop analytical models and methods for expeditiously computing CMT system metrics that are related to self-organization and array structure. In particular, we formulate a mean-field model to derive sufficient conditions for the organization to occur. We show that growth-prone dynamics itself is sufficient to lead to organization in presence of interactions in the system. In addition, for such systems, we develop predictive methods for estimation of system metrics such as expected average length and number of CMTs over time, using a stochastic fluid-flow model, transient analysis, and approximation algorithms tailored to our problem. We illustrate the effectiveness of our approach through numerical test instances and discuss biological insights. PMID:25700800

  10. Compartmental model for organic matter digestion in facultative ponds.

    PubMed

    Giraldo, E; Garzón, A

    2002-01-01

    A model has been developed for the digestion of organic matter in facultative ponds in tropical regions. Complete mixing has been assumed for the aerobic and anaerobic compartments. Settling, aerobic layer oxidation, and anaerobic layer methanogenesis are the main processes for organic matter removal in the water column. Exchange processes between layers are dispersive or soluble exchange, solubilization and transport of organic matter from sediments to water column are also taken into account. Degradation of organic matter in the sediments produces gaseous emissions to the water column. The exchange between bubbles ascending and the water column was measured. The model was calibrated with data obtained from a pilot facultative pond built in Muña Reservoir in Bogotá. The pond was sampled during 4 months to compare data between its water hyacinth covered section and uncovered section. The results clearly show the relative importance of different BOD removal processes in facultative ponds and suggest modifications to further improve performance. The results from the model suggest that internal loadings to facultative ponds due to solubilization and return of organic matter from the sediments to the aerobic layer greatly influence the soluble BOD effluent concentration. Aerobic degradation activity in the facultative pond does not affect significantly the effluent concentration. Anaerobic degradation activity in the facultative pond can more easily achieve increases in the removal efficiencies of BOD. PMID:11833730

  11. Implementing marine organic aerosols into the GEOS-Chem model

    NASA Astrophysics Data System (ADS)

    Gantt, B.; Johnson, M. S.; Crippa, M.; Prévôt, A. S. H.; Meskhidze, N.

    2014-09-01

    Marine organic aerosols (MOA) have been shown to play an important role in tropospheric chemistry by impacting surface mass, cloud condensation nuclei, and ice nuclei concentrations over remote marine and coastal regions. In this work, an online marine primary organic aerosol emission parameterization, designed to be used for both global and regional models, was implemented into the GEOS-Chem model. The implemented emission scheme improved the large underprediction of organic aerosol concentrations in clean marine regions (normalized mean bias decreases from -79% when using the default settings to -12% when marine organic aerosols are added). Model predictions were also in good agreement (correlation coefficient of 0.62 and normalized mean bias of -36%) with hourly surface concentrations of MOA observed during the summertime at an inland site near Paris, France. Our study shows that MOA have weaker coastal-to-inland concentration gradients than sea-salt aerosols, leading to several inland European cities having > 10% of their surface submicron organic aerosol mass concentration with a marine source. The addition of MOA tracers to GEOS-Chem enabled us to identify the regions with large contributions of freshly-emitted or aged aerosol having distinct physicochemical properties, potentially indicating optimal locations for future field studies.

  12. Implementing marine organic aerosols into the GEOS-Chem model

    NASA Astrophysics Data System (ADS)

    Gantt, B.; Johnson, M. S.; Crippa, M.; Prévôt, A. S. H.; Meskhidze, N.

    2015-03-01

    Marine-sourced organic aerosols (MOAs) have been shown to play an important role in tropospheric chemistry by impacting surface mass, cloud condensation nuclei, and ice nuclei concentrations over remote marine and coastal regions. In this work, an online marine primary organic aerosol emission parameterization, designed to be used for both global and regional models, was implemented into the GEOS-Chem (Global Earth Observing System Chemistry) model. The implemented emission scheme improved the large underprediction of organic aerosol concentrations in clean marine regions (normalized mean bias decreases from -79% when using the default settings to -12% when marine organic aerosols are added). Model predictions were also in good agreement (correlation coefficient of 0.62 and normalized mean bias of -36%) with hourly surface concentrations of MOAs observed during the summertime at an inland site near Paris, France. Our study shows that MOAs have weaker coastal-to-inland concentration gradients than sea-salt aerosols, leading to several inland European cities having >10% of their surface submicron organic aerosol mass concentration with a marine source. The addition of MOA tracers to GEOS-Chem enabled us to identify the regions with large contributions of freshly emitted or aged aerosol having distinct physicochemical properties, potentially indicating optimal locations for future field studies.

  13. Model of electrodialysis process associated with organic adsorption

    SciTech Connect

    Chatchupong, T.; Murphy, R.J.

    1996-02-01

    A convective-diffusion model was developed to predict the performance of both electrodialysis (ED) and adsorption on species in an aqueous solution. The quasi-steady-state model was solved by finite difference to assess the effects of a packed bed of graphite on separation of a univalent electrolyte and an organic compound in an ED cell. A sensitivity analysis of parameters in the model was also performed. Comparison of simulation results with experimental data for 2-naphthol (2-C10H7OH) in sodium chloride solution was used for this case study. The model satisfactorily predicts 2-naphthol removal at the 95% confidence level.
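
    A generic 1-D convection-diffusion transport equation solved with an explicit upwind finite-difference scheme, the numerical approach named above, is sketched below. The geometry, parameter values and boundary conditions are placeholders and do not reproduce the ED-cell model of the paper.

      # Hedged sketch: explicit finite-difference solution of a generic 1-D
      # convection-diffusion equation dC/dt = D d2C/dx2 - u dC/dx, the type of
      # transport model referred to above. Geometry, parameters and boundary
      # conditions are placeholders, not the published ED-cell model.
      import numpy as np

      D, u = 1.0e-9, 1.0e-5          # diffusivity (m^2/s) and velocity (m/s), assumed
      L, nx = 1.0e-3, 101            # domain length (m) and grid points
      dx = L / (nx - 1)
      dt = 0.25 * min(dx * dx / D, dx / u)   # stable explicit time step

      c = np.zeros(nx)
      c[0] = 1.0                     # fixed inlet concentration (Dirichlet)

      for _ in range(20000):
          diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
          conv = -u * (c[1:-1] - c[:-2]) / dx        # first-order upwind (u > 0)
          c[1:-1] += dt * (diff + conv)
          c[-1] = c[-2]              # zero-gradient outlet

      print(c[::20])                 # coarse concentration profile along the channel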

  14. Financial incentives: alternatives to the altruistic model of organ donation.

    PubMed

    Siminoff, L A; Leonard, M D

    1999-12-01

    Improvements in transplantation techniques have resulted in a demand for transplantable organs that far outpaces supply. Present efforts to secure organs use an altruistic system designed to appeal to a public that will donate organs because they are needed. Efforts to secure organs under this system have not been as successful as hoped. Many refinements to the altruistic model have been or are currently being proposed, such as "required request," "mandated choice," "routine notification," and "presumed consent." Recent calls for market approaches to organ procurement reflect growing doubts about the efficacy of these refinements. Market approaches generally use a "futures market," with benefits payable either periodically or when or if organs are procured. Lump-sum arrangements could include donations to surviving family or contributions to charities or to funeral costs. Possibilities for a periodic system of payments include reduced premiums for health or life insurance, or a reciprocity system whereby individuals who periodically reaffirm their willingness to donate are given preference if they require a transplant. Market approaches do raise serious ethical issues, including potential exploitation of the poor. Such approaches may also be effectively proscribed by the 1984 National Organ Transplant Act. PMID:10889698

  15. Regeneration across metazoan phylogeny: lessons from model organisms.

    PubMed

    Li, Qiao; Yang, Hao; Zhong, Tao P

    2015-02-20

    Comprehending the diversity of the regenerative potential across metazoan phylogeny represents a fundamental challenge in biology. Invertebrates like Hydra and planarians exhibit amazing feats of regeneration, in which an entire organism can be restored from minute body segments. Vertebrates like teleost fish and amphibians can also regrow large sections of the body. While this regenerative capacity is greatly attenuated in mammals, there are portions of major organs that remain regenerative. Regardless of the extent, there are common basic strategies to regeneration, including activation of adult stem cells and proliferation of differentiated cells. Here, we discuss the cellular features and molecular mechanisms that are involved in regeneration in different model organisms, including Hydra, planarians, zebrafish and newts as well as in several mammalian organs. PMID:25697100

  16. Green Algae as Model Organisms for Biological Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Goldstein, Raymond E.

    2015-01-01

    In the past decade, the volvocine green algae, spanning from the unicellular Chlamydomonas to multicellular Volvox, have emerged as model organisms for a number of problems in biological fluid dynamics. These include flagellar propulsion, nutrient uptake by swimming organisms, hydrodynamic interactions mediated by walls, collective dynamics and transport within suspensions of microswimmers, the mechanism of phototaxis, and the stochastic dynamics of flagellar synchronization. Green algae are well suited to the study of such problems because of their range of sizes (from 10 μm to several millimeters), their geometric regularity, the ease with which they can be cultured, and the availability of many mutants that allow for connections between molecular details and organism-level behavior. This review summarizes these recent developments and highlights promising future directions in the study of biological fluid dynamics, especially in the context of evolutionary biology, that can take advantage of these remarkable organisms.

  17. Green Algae as Model Organisms for Biological Fluid Dynamics*

    PubMed Central

    Goldstein, Raymond E.

    2015-01-01

    In the past decade the volvocine green algae, spanning from the unicellular Chlamydomonas to multicellular Volvox, have emerged as model organisms for a number of problems in biological fluid dynamics. These include flagellar propulsion, nutrient uptake by swimming organisms, hydrodynamic interactions mediated by walls, collective dynamics and transport within suspensions of microswimmers, the mechanism of phototaxis, and the stochastic dynamics of flagellar synchronization. Green algae are well suited to the study of such problems because of their range of sizes (from 10 μm to several millimetres), their geometric regularity, the ease with which they can be cultured and the availability of many mutants that allow for connections between molecular details and organism-level behavior. This review summarizes these recent developments and highlights promising future directions in the study of biological fluid dynamics, especially in the context of evolutionary biology, that can take advantage of these remarkable organisms. PMID:26594068

  18. The Society of Thoracic Surgeons Congenital Heart Surgery Database Mortality Risk Model: Part 2—Clinical Application

    PubMed Central

    Jacobs, Jeffrey P.; O’Brien, Sean M.; Pasquali, Sara K.; Gaynor, J. William; Mayer, John E.; Karamlou, Tara; Welke, Karl F.; Filardo, Giovanni; Han, Jane M.; Kim, Sunghee; Quintessenza, James A.; Pizarro, Christian; Tchervenkov, Christo I.; Lacour-Gayet, Francois; Mavroudis, Constantine; Backer, Carl L.; Austin, Erle H.; Fraser, Charles D.; Tweddell, James S.; Jonas, Richard A.; Edwards, Fred H.; Grover, Frederick L.; Prager, Richard L.; Shahian, David M.; Jacobs, Marshall L.

    2016-01-01

    Background The empirically derived 2014 Society of Thoracic Surgeons Congenital Heart Surgery Database Mortality Risk Model incorporates adjustment for procedure type and patient-specific factors. The purpose of this report is to describe this model and its application in the assessment of variation in outcomes across centers. Methods All index cardiac operations in The Society of Thoracic Surgeons Congenital Heart Surgery Database (January 1, 2010, to December 31, 2013) were eligible for inclusion. Isolated patent ductus arteriosus closures in patients weighing less than or equal to 2.5 kg were excluded, as were centers with more than 10% missing data and patients with missing data for key variables. The model includes the following covariates: primary procedure, age, any prior cardiovascular operation, any noncardiac abnormality, any chromosomal abnormality or syndrome, important preoperative factors (mechanical circulatory support, shock persisting at time of operation, mechanical ventilation, renal failure requiring dialysis or renal dysfunction (or both), and neurological deficit), any other preoperative factor, prematurity (neonates and infants), and weight (neonates and infants). Variation across centers was assessed. Centers for which the 95% confidence interval for the observed-to-expected mortality ratio does not include unity are identified as lower-performing or higher-performing programs with respect to operative mortality. Results Included were 52,224 operations from 86 centers. Overall discharge mortality was 3.7% (1,931 of 52,224). Discharge mortality by age category was: neonates, 10.1% (1,129 of 11,144); infants, 3.0% (564 of 18,554); children, 0.9% (167 of 18,407); and adults, 1.7% (71 of 4,119). For all patients, 12 of 86 centers (14%) were lower-performing programs, 67 (78%) were not outliers, and 7 (8%) were higher-performing programs. Conclusions The 2014 Society of Thoracic Surgeons Congenital Heart Surgery Database Mortality Risk Model
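
    One conventional way to attach a 95% confidence interval to an observed-to-expected (O/E) mortality ratio is to treat the observed count as Poisson and use exact Poisson limits, as sketched below. This is a generic textbook construction for outlier screening; the STS model's own interval estimation may differ.

      # Hedged sketch: a conventional 95% CI for an observed-to-expected (O/E)
      # mortality ratio, treating the observed count as Poisson. This is a generic
      # construction for outlier screening, not necessarily the interval estimation
      # used by the STS risk model itself.
      from scipy.stats import chi2

      def oe_ratio_ci(observed, expected, alpha=0.05):
          """Exact Poisson CI for the observed count, divided by the expected count."""
          lower = 0.0 if observed == 0 else chi2.ppf(alpha / 2, 2 * observed) / 2
          upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
          return observed / expected, lower / expected, upper / expected

      # e.g. a center with 12 observed deaths against 20.5 expected
      ratio, lo, hi = oe_ratio_ci(12, 20.5)
      print(round(ratio, 2), round(lo, 2), round(hi, 2))  # a CI excluding 1 flags an outlier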

  19. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    NASA Astrophysics Data System (ADS)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied to 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg⁻¹ for mineral soils and of 287 g kg⁻¹ for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg⁻¹, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral. Finally, average
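
    A hedged sketch of fitting a generalized additive model with smooth terms for continuous covariates and a factor term for land cover, in the spirit of the approach above, is given below. The synthetic data and the pyGAM library are assumptions; the published model was calibrated on the LUCAS samples with its own covariates and tooling.

      # Hedged sketch: a generalized additive model with smooth terms for continuous
      # covariates and a factor term for land cover, in the spirit of the approach
      # described above. Synthetic data and the pyGAM library are assumptions; the
      # published model was calibrated on the LUCAS samples with its own tooling.
      import numpy as np
      from pygam import LinearGAM, s, f

      rng = np.random.default_rng(1)
      n = 2000
      slope = rng.uniform(0, 30, n)            # terrain slope (degrees)
      temperature = rng.uniform(-2, 18, n)     # mean annual temperature (deg C)
      land_cover = rng.integers(0, 5, n)       # coded land-cover class
      oc = (60 - 2.5 * temperature + 0.4 * slope
            + 8 * (land_cover == 3) + rng.normal(0, 10, n))   # toy topsoil OC (g/kg)

      X = np.column_stack([slope, temperature, land_cover])
      gam = LinearGAM(s(0) + s(1) + f(2)).fit(X, oc)
      gam.summary()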

  20. Review of existing terrestrial bioaccumulation models and terrestrial bioaccumulation modeling needs for organic chemicals

    EPA Science Inventory

    Protocols for terrestrial bioaccumulation assessments are far less-developed than for aquatic systems. This manuscript reviews modeling approaches that can be used to assess the terrestrial bioaccumulation potential of commercial organic chemicals. Models exist for plant, inver...

  1. A Comparative Data-Based Modeling Study on Respiratory CO2 Gas Exchange during Mechanical Ventilation

    PubMed Central

    Kim, Chang-Sei; Ansermino, J. Mark; Hahn, Jin-Oh

    2016-01-01

    The goal of this study is to derive a minimally complex but credible model of respiratory CO2 gas exchange that may be used in systematic design and pilot testing of closed-loop end-tidal CO2 controllers in mechanical ventilation. We first derived a candidate model that captures the essential mechanisms involved in the respiratory CO2 gas exchange process. Then, we simplified the candidate model to derive two lower-order candidate models. We compared these candidate models for predictive capability and reliability using experimental data collected from 25 pediatric subjects undergoing dynamically varying mechanical ventilation during surgical procedures. A two-compartment model equipped with transport delay to account for CO2 delivery between the lungs and the tissues showed modest but statistically significant improvement in predictive capability over the same model without transport delay. Aggregating the lungs and the tissues into a single compartment further degraded the predictive fidelity of the model. In addition, the model equipped with transport delay demonstrated superior reliability to the one without transport delay. Further, the respiratory parameters derived from the model equipped with transport delay, but not the one without transport delay, were physiologically plausible. The results suggest that gas transport between the lungs and the tissues must be taken into account to accurately reproduce the respiratory CO2 gas exchange process under conditions of wide-ranging and dynamically varying mechanical ventilation conditions. PMID:26870728
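
    One generic way to write a two-compartment (lung-tissue) CO2 balance with a circulatory transport delay, illustrating the model class compared above, is

      V_L\,\frac{dC_L}{dt} \;=\; Q\,\bigl[C_T(t-\tau) - C_L(t)\bigr] \;-\; \dot V_A\,\bigl[C_L(t) - C_I\bigr],
      \qquad
      V_T\,\frac{dC_T}{dt} \;=\; \dot V_{\mathrm{CO_2}} \;-\; Q\,\bigl[C_T(t) - C_L(t-\tau)\bigr],

    where V_L and V_T are effective compartment volumes, Q the blood flow, \dot V_A the alveolar ventilation, C_I the inspired CO2 level, \dot V_{CO_2} the metabolic production rate and \tau the lung-tissue transport delay. The symbols and structure are illustrative, not the paper's exact equations.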

  2. BIOMARKERS DATABASE

    EPA Science Inventory

    This database was developed by assembling and evaluating the literature relevant to human biomarkers. It catalogues and evaluates the usefulness of biomarkers of exposure, susceptibility and effect which may be relevant for a longitudinal cohort study. In addition to describing ...

  3. A dynamical phyllotaxis model to determine floral organ number.

    PubMed

    Kitazawa, Miho S; Fujimoto, Koichi

    2015-05-01

    How organisms determine particular organ numbers is a fundamental key to the development of precise body structures; however, the developmental mechanisms underlying organ-number determination are unclear. In many eudicot plants, the primordia of sepals and petals (the floral organs) first arise sequentially at the edge of a circular, undifferentiated region called the floral meristem, and later transition into a concentric arrangement called a whorl, which includes four or five organs. The properties controlling the transition to whorls comprising particular numbers of organs are little explored. We propose a development-based model of floral organ-number determination, improving upon earlier models of plant phyllotaxis that assumed two developmental processes: the sequential initiation of primordia in the least crowded space around the meristem and the constant growth of the tip of the stem. By introducing mutual repulsion among primordia into the growth process, we numerically and analytically show that the whorled arrangement emerges spontaneously from the sequential initiation of primordia. Moreover, by allowing the strength of the inhibition exerted by each primordium to decrease as the primordium ages, we show that pentamerous whorls, in which the angular and radial positions of the primordia are consistent with those observed in sepal and petal primordia in Silene coeli-rosa, Caryophyllaceae, become the dominant arrangement. The organ number within the outermost whorl, corresponding to the sepals, takes a value of four or five in a much wider parameter space than that in which it takes a value of six or seven. These results suggest that mutual repulsion among primordia during growth and a temporal decrease in the strength of the inhibition during initiation are required for the development of the tetramerous and pentamerous whorls common in eudicots. PMID:25950739

  4. A Dynamical Phyllotaxis Model to Determine Floral Organ Number

    PubMed Central

    Kitazawa, Miho S.; Fujimoto, Koichi

    2015-01-01

    How organisms determine particular organ numbers is a fundamental key to the development of precise body structures; however, the developmental mechanisms underlying organ-number determination are unclear. In many eudicot plants, the primordia of sepals and petals (the floral organs) first arise sequentially at the edge of a circular, undifferentiated region called the floral meristem, and later transition into a concentric arrangement called a whorl, which includes four or five organs. The properties controlling the transition to whorls comprising particular numbers of organs are little explored. We propose a development-based model of floral organ-number determination, improving upon earlier models of plant phyllotaxis that assumed two developmental processes: the sequential initiation of primordia in the least crowded space around the meristem and the constant growth of the tip of the stem. By introducing mutual repulsion among primordia into the growth process, we numerically and analytically show that the whorled arrangement emerges spontaneously from the sequential initiation of primordia. Moreover, by allowing the strength of the inhibition exerted by each primordium to decrease as the primordium ages, we show that pentamerous whorls, in which the angular and radial positions of the primordia are consistent with those observed in sepal and petal primordia in Silene coeli-rosa, Caryophyllaceae, become the dominant arrangement. The organ number within the outermost whorl, corresponding to the sepals, takes a value of four or five in a much wider parameter space than that in which it takes a value of six or seven. These results suggest that mutual repulsion among primordia during growth and a temporal decrease in the strength of the inhibition during initiation are required for the development of the tetramerous and pentamerous whorls common in eudicots. PMID:25950739
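
    The two rules named above, sequential initiation of each new primordium at the least-inhibited position on the meristem rim and an inhibitory strength that decays with primordium age, can be reproduced in a minimal simulation such as the sketch below. All parameter values are illustrative assumptions, not those of the published model.

      # Hedged sketch of the two rules described above: each new primordium arises at
      # the angular position on the meristem rim where the summed inhibition from
      # existing primordia is lowest, and a primordium's inhibitory strength decays
      # with its age. Parameter values are illustrative, not the published ones.
      import numpy as np

      DECAY = 0.15          # per-step decay of inhibitory strength (assumed)
      RADIAL_GROWTH = 0.05  # outward drift of older primordia per step (assumed)
      N_PRIMORDIA = 12

      angles, radii, strengths = [], [], []
      candidates = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

      for step in range(N_PRIMORDIA):
          if angles:
              # inhibition at each candidate rim position: strength / distance to each primordium
              dx = np.cos(candidates)[:, None] - (np.array(radii) + 1.0) * np.cos(angles)
              dy = np.sin(candidates)[:, None] - (np.array(radii) + 1.0) * np.sin(angles)
              inhibition = (np.array(strengths) / np.hypot(dx, dy)).sum(axis=1)
              new_angle = candidates[np.argmin(inhibition)]
          else:
              new_angle = 0.0
          angles.append(new_angle)
          radii.append(0.0)
          strengths.append(1.0)
          radii = [r + RADIAL_GROWTH for r in radii]          # stem-tip growth moves old primordia outward
          strengths = [s_ * (1.0 - DECAY) for s_ in strengths]  # ageing primordia inhibit less

      print(np.degrees(np.unwrap(angles)).round(1))           # initiation angles over time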

  5. Incorporating organic soil into a global climate model

    NASA Astrophysics Data System (ADS)

    Lawrence, David M.; Slater, Andrew G.

    2008-02-01

    Organic matter significantly alters a soil's thermal and hydraulic properties but is not typically included in land-surface schemes used in global climate models. This omission has consequences for ground thermal and moisture regimes, particularly in the high latitudes where soil carbon content is generally high. Global soil carbon data are used to build a geographically distributed, profiled soil carbon density dataset for the Community Land Model (CLM). CLM parameterizations for soil thermal and hydraulic properties are modified to accommodate both mineral and organic soil matter. Offline simulations including organic soil are characterized by cooler annual mean soil temperatures (up to ~2.5 °C cooler for regions of high soil carbon content). Cooling is strong in summer due to modulation of early and mid-summer soil heat flux. Winter temperatures are slightly warmer as organic soils do not cool as efficiently during fall and winter. High porosity and hydraulic conductivity of organic soil lead to a wetter soil column but with comparatively low surface layer saturation levels and correspondingly low soil evaporation. When CLM is coupled to the Community Atmosphere Model, the reduced latent heat flux drives deeper boundary layers, associated reductions in low cloud fraction, and warmer summer air temperatures in the Arctic. Lastly, the insulative properties of organic soil reduce interannual soil temperature variability, but only marginally. This result suggests that, although the mean soil temperature cooling will delay the simulated date at which frozen soil begins to thaw, organic matter may provide only limited insulation from surface warming.
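
    A common way to blend mineral and organic soil properties, shown here only as a generic illustration of the parameterization style described (not the exact CLM formulation), is a linear weighting by the organic-matter fraction of each layer:

      \chi_i \;=\; \bigl(1 - f_{\mathrm{om},i}\bigr)\,\chi_{\mathrm{min},i} \;+\; f_{\mathrm{om},i}\,\chi_{\mathrm{om}},
      \qquad
      f_{\mathrm{om},i} \;=\; \rho_{\mathrm{c},i}\,/\,\rho_{\mathrm{c,max}},

    where \chi stands for a thermal or hydraulic property (porosity, saturated conductivity, heat capacity), f_{om,i} is the organic fraction of layer i, \rho_{c,i} the layer's soil carbon density, and \rho_{c,max} the carbon density of pure organic soil (peat).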

  6. Modeling organic matter stabilization during windrow composting of livestock effluents.

    PubMed

    Oudart, D; Paul, E; Robin, P; Paillat, J M

    2012-01-01

    Composting is a complex bioprocess, requiring many empirical experiments to optimize the process. A dynamic mathematical model for the biodegradation of organic matter during the composting process has been developed. The initial organic matter, expressed as chemical oxygen demand (COD), is decomposed into rapidly and slowly degraded compartments and an inert one. The biodegradable COD is hydrolysed and consumed by microorganisms, producing metabolic water and carbon dioxide. This model links a biochemical characterization of the organic matter, obtained by Van Soest fractionation, with COD. The comparison of experimental and simulation results for carbon dioxide emission, dry matter and carbon content balance showed good correlation. The initial sizes of the biodegradable COD compartments are explained by the soluble, hemicellulose-like and lignin fractions. Their sizes influence the amplitude of the carbon dioxide emission peak. The initial biomass is a sensitive variable too, influencing the time at which the emission peak occurs. PMID:23393964

  7. An Ontology for Modeling Complex Inter-relational Organizations

    NASA Astrophysics Data System (ADS)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

    This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that play several roles simultaneously. In such a context, traditional approaches focus on the macro-analytic level of transactions; this is supplemented here with a micro-analytic study of the actors' rationale. The paper first reviews the enterprise-ontology literature to position our proposal and to expose its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  8. Unified electronic charge transport model for organic solar cells

    NASA Astrophysics Data System (ADS)

    Mottaghian, Seyyed Sadegh; Biesecker, Matt; Bayat, Khadijeh; Farrokh Baroughi, Mahdi

    2013-07-01

    This paper provides a comprehensive modeling approach for the simulation of electronic charge transport in excitonic solar cells with organic and organic/inorganic structures. The interactions of energy-carrying particles (electrons, holes, singlet excitons, and triplet excitons) with each other, and their transformations in the bulk of the donor and acceptor media as well as at the donor/acceptor interfaces, are incorporated in the form of coupling matrices into the continuity equations and interface boundary conditions. As a case study, the model is applied to simulate an organic bilayer photovoltaic (PV) device to quantify the effects of photogeneration, recombination coefficient, carrier mobility, and electrode work function on its PV characteristics. The study shows that electron-hole recombination at the donor/acceptor interface is the dominant mechanism limiting the open-circuit voltage of the device.
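
    The generic form of the coupled continuity equations that such models solve is, for electrons, holes and singlet excitons,

      \frac{\partial n}{\partial t} = \frac{1}{q}\,\nabla\!\cdot\!\mathbf{J}_n + G_n - R_{np},
      \qquad
      \frac{\partial p}{\partial t} = -\frac{1}{q}\,\nabla\!\cdot\!\mathbf{J}_p + G_p - R_{np},
      \qquad
      \frac{\partial S}{\partial t} = D_S\,\nabla^{2} S + G_S - \frac{S}{\tau_S} - k_{\mathrm{diss}}\,S,

    where n, p and S are the electron, hole and singlet-exciton densities, J_n and J_p the drift-diffusion current densities, G and R generation and recombination terms, and k_diss the interfacial exciton dissociation rate. The specific coupling matrices and interface boundary conditions of the paper are not reproduced here.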

  9. The MIntAct Project and Molecular Interaction Databases.

    PubMed

    Licata, Luana; Orchard, Sandra

    2016-01-01

    Molecular interaction databases collect, organize, and enable the analysis of the increasing amounts of molecular interaction data being produced and published as we move towards a more complete understanding of the interactomes of key model organisms. The organization of these data in a structured format supports analyses such as the modeling of pairwise relationships between interactors into interaction networks and is a powerful tool for understanding the complex molecular machinery of the cell. This chapter gives an overview of the principal molecular interaction databases, in particular the IMEx databases, and their curation policies, use of standardized data formats and quality control rules. Special attention is given to the MIntAct project, in which IntAct and MINT joined forces to create a single resource to improve curation and software development efforts. This is exemplified as a model for the future of molecular interaction data collation and dissemination. PMID:27115627

  10. Modelling the fate of organic micropollutants in stormwater ponds.

    PubMed

    Vezzaro, Luca; Eriksson, Eva; Ledin, Anna; Mikkelsen, Peter S

    2011-06-01

    Urban water managers need to estimate the potential removal of organic micropollutants (MP) in stormwater treatment systems to support MP pollution control strategies. This study documents how the potential removal of organic MP in stormwater treatment systems can be quantified by using multimedia models. The fate of four different MP in a stormwater retention pond was simulated by applying two steady-state multimedia fate models (EPI Suite and SimpleBox) commonly applied in chemical risk assessment and a dynamic multimedia fate model (Stormwater Treatment Unit Model for Micro Pollutants--STUMP). The four simulated organic stormwater MP (iodopropynyl butylcarbamate--IPBC, benzene, glyphosate and pyrene) were selected according to their different urban sources and environmental fate. This ensures that the results can be extended to other relevant stormwater pollutants. All three models use substance inherent properties to calculate MP fate but differ in their ability to represent the small physical scale and high temporal variability of stormwater treatment systems. Therefore the three models generate different results. A Global Sensitivity Analysis (GSA) highlighted that settling/resuspension of particulate matter was the most sensitive process for the dynamic model. The uncertainty of the estimated MP fluxes can be reduced by calibrating the dynamic model against total suspended solids data. This reduction in uncertainty was more significant for the substances with a strong tendency to sorb, i.e. glyphosate and pyrene, and less significant for substances with a smaller tendency to sorb, i.e. IPBC and benzene. The results support the development of MP pollution control strategies by limiting the need for extensive and complex monitoring campaigns targeting the wide range of specific organic MP found in stormwater runoff. PMID:21496881
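
    A variance-based global sensitivity analysis of the kind referred to above (Sobol indices over uncertain model parameters) can be sketched with SALib as follows. The toy model function, parameter names and ranges are placeholders, and the paper's own GSA procedure may differ.

      # Hedged sketch: variance-based global sensitivity analysis (Sobol indices) of
      # a toy pollutant-removal model, illustrating the kind of GSA referred to above.
      # The model function, parameter names and ranges are placeholders; the paper's
      # own GSA procedure may differ.
      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["settling_rate", "sorption_coeff", "degradation_rate"],
          "bounds": [[0.1, 5.0], [1.0, 1000.0], [0.001, 0.1]],
      }

      def removal_fraction(x):
          settling, kd, kdeg = x
          return 1.0 - np.exp(-(settling * 0.02 + kd * 1e-4 + kdeg * 2.0))

      params = saltelli.sample(problem, 1024)
      y = np.apply_along_axis(removal_fraction, 1, params)
      indices = sobol.analyze(problem, y)
      print(dict(zip(problem["names"], indices["S1"].round(3))))  # first-order indices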

  11. Atmospheric modeling of air pollution. (Latest citations from the NTIS database). Published Search

    SciTech Connect

    Not Available

    1993-02-01

    The bibliography contains citations concerning the development, validation, and application of mathematical models for air pollution studies of mobile and stationary pollution sources. The models cover a wide range of mathematical complexity, utilizing factors such as terrain features, wake effects, diffusion, atmospheric stability, atmospheric wind, precipitation scavenging, gravitational deposition, atmospheric photochemistry, and urban heat islands. The models are used to support environmental impact studies and effects of proposed emission control strategies. Excluded are models of stratospheric pollution behavior, as applied to high flying aircraft. (Contains 250 citations and includes a subject term index and title list.)

  12. Atmospheric modeling of air pollution. (Latest citations from the NTIS bibliographic database). NewSearch

    SciTech Connect

    Not Available

    1994-11-01

    The bibliography contains citations concerning the development, validation, and application of mathematical models for air pollution studies of mobile and stationary pollution sources. The models cover a wide range of mathematical complexity, utilizing factors such as terrain features, wake effects, diffusion, atmospheric stability, atmospheric wind, precipitation scavenging, gravitational deposition, atmospheric photochemistry, and urban heat islands. The models are used to support environmental impact studies and effects of proposed emission control strategies. Excluded are models of stratospheric pollution behavior, as applied to high flying aircraft. (Contains 250 citations and includes a subject term index and title list.)

  13. Atmospheric modeling of air pollution. (Latest citations from the NTIS Bibliographic database). Published Search

    SciTech Connect

    Not Available

    1993-11-01

    The bibliography contains citations concerning the development, validation, and application of mathematical models for air pollution studies of mobile and stationary pollution sources. The models cover a wide range of mathematical complexity, utilizing factors such as terrain features, wake effects, diffusion, atmospheric stability, atmospheric wind, precipitation scavenging, gravitational deposition, atmospheric photochemistry, and urban heat islands. The models are used to support environmental impact studies and effects of proposed emission control strategies. Excluded are models of stratospheric pollution behavior, as applied to high flying aircraft. (Contains 250 citations and includes a subject term index and title list.)

  14. Atmospheric modeling of air pollution. (Latest citations from the NTIS bibliographic database). Published Search

    SciTech Connect

    1996-03-01

    The bibliography contains citations concerning the development, validation, and application of mathematical models for air pollution studies of mobile and stationary pollution sources. The models cover a wide range of mathematical complexity, utilizing factors such as terrain features, wake effects, diffusion, atmospheric stability, atmospheric wind, precipitation scavenging, gravitational deposition, atmospheric photochemistry, and urban heat islands. The models are used to support environmental impact studies and effects of proposed emission control strategies. Excluded are models of stratospheric pollution behavior, as applied to high flying aircraft. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  15. MODELING THE FATE OF TOXIC ORGANIC MATERIALS IN AQUATIC ENVIRONMENTS

    EPA Science Inventory

    Documentation is given for PEST, a dynamic simulation model for evaluating the fate of toxic organic materials (TOM) in freshwater environments. PEST represents the time-varying concentration (in ppm) of a given TOM in each of as many as 16 carrier compartments; it also computes ...

  16. Modeling emissions of volatile organic compounds from silage

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Volatile organic compounds (VOCs), necessary reactants for photochemical smog formation, are emitted from numerous sources. Limited available data suggest that dairy farms emit VOCs with cattle feed, primarily silage, being the primary source. Process-based models of VOC transfer within and from si...

  17. An Integrated Model for Effective Knowledge Management in Chinese Organizations

    ERIC Educational Resources Information Center

    An, Xiaomi; Deng, Hepu; Wang, Yiwen; Chao, Lemen

    2013-01-01

    Purpose: The purpose of this paper is to provide organizations in the Chinese cultural context with a conceptual model for an integrated adoption of existing knowledge management (KM) methods and to improve the effectiveness of their KM activities. Design/methodology/approaches: A comparative analysis is conducted between China and the western…

  18. VOC (VOLATILE ORGANIC COMPOUND) FUGITIVE EMISSION PREDICTIVE MODEL - USER'S GUIDE

    EPA Science Inventory

    The report discusses a mathematical model that can be used to evaluate the effectiveness of various leak detection and repair (LDAR) programs on controlling volatile organic compound (VOC) fugitive emissions from chemical, petroleum, and other process units. The report also descr...

  19. Functional genomics of the chicken - a model organism

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The chicken has reached model organism status after genome sequencing and development of high-throughput tools for the exploration of functional elements of the genome. Functional genomics focuses on understanding the function and regulation of genes and gene products on a global or genome-wide scal...

  20. MODELING MULTIPHASE ORGANIC CHEMICAL TRANSPORT IN SOILS AND GROUND WATER

    EPA Science Inventory

    Subsurface contamination due to immiscible organic liquids is a widespread problem which poses a serious threat to ground-water resources. n order to understand the movement of such materials in the subsurface, a mathematical model was developed for multiphase flow and multicompo...

  1. A Process Model for the Comprehension of Organic Chemistry Notation

    ERIC Educational Resources Information Center

    Havanki, Katherine L.

    2012-01-01

    This dissertation examines the cognitive processes individuals use when reading organic chemistry equations and factors that affect these processes, namely, visual complexity of chemical equations and participant characteristics (expertise, spatial ability, and working memory capacity). A six stage process model for the comprehension of organic…

  2. A database of wavefront measurements for laser system modeling, optical component development and fabrication process qualification

    SciTech Connect

    Wolfe, C.R.; Lawson, J.K.; Aikens, D.M.; English, R.E.

    1995-04-12

    In the second half of the 1990s, LLNL and others anticipate designing and beginning construction of the National Ignition Facility (NIF). The NIF will be capable of producing the world's first laboratory-scale fusion ignition and burn reaction by imploding a small target. The NIF will utilize approximately 192 simultaneous laser beams for this purpose. The laser will be capable of producing a shaped energy pulse of at least 1.8 million joules (MJ) with peak power of at least 500 trillion watts (TW). In total, the facility will require more than 7,000 large optical components. The performance of a high-power laser of this kind can be seriously degraded by the presence of low-amplitude, periodic modulations in the surface and transmitted wavefronts of the optics used. At high peak power, these phase modulations can convert into large intensity modulations by non-linear optical processes. This in turn can lead to loss of energy on target via many well-known mechanisms. In some cases, laser damage to the optics downstream of the source of the phase modulation can occur. The database described here contains wavefront phase maps of early prototype optical components for the NIF. It has only recently become possible to map the wavefront of these large-aperture components with high spatial resolution. Modern large-aperture static-fringe and phase-shifting interferometers equipped with large-area solid state detectors have made this possible. In a series of measurements with these instruments, wide spatial bandwidth can be detected in the wavefront.
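    A typical first look at such maps is a power spectral density along one axis of the measured wavefront, which exposes the low-amplitude periodic modulations the abstract refers to. The sketch below uses a synthetic line-out rather than real interferometer data; the ripple amplitude and period are arbitrary.

      # 1-D power spectral density of a (synthetic) wavefront line-out.
      import numpy as np

      nx, pitch = 512, 1.0e-3                       # samples, sample spacing (m)
      x = np.arange(nx) * pitch
      rng = np.random.default_rng(0)
      wavefront = 5e-9 * np.sin(2 * np.pi * x / 0.02) + 1e-9 * rng.standard_normal(nx)

      line = wavefront - wavefront.mean()
      psd = np.abs(np.fft.rfft(line * np.hanning(nx))) ** 2
      freqs = np.fft.rfftfreq(nx, d=pitch)          # spatial frequency (cycles/m)

      peak = freqs[np.argmax(psd[1:]) + 1]          # skip the DC bin
      print("dominant ripple period ~", round(1.0 / peak * 1000, 1), "mm")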

  3. Supramolecular organization of functional organic materials in the bulk and at organic/organic interfaces: a modeling and computer simulation approach.

    PubMed

    Muccioli, Luca; D'Avino, Gabriele; Berardi, Roberto; Orlandi, Silvia; Pizzirusso, Antonio; Ricci, Matteo; Roscioni, Otello Maria; Zannoni, Claudio

    2014-01-01

    The molecular organization of functional organic materials is one of the research areas where the combination of theoretical modeling and experimental determinations is most fruitful. Here we present a brief summary of the simulation approaches used to investigate the inner structure of organic materials with semiconducting behavior, paying special attention to applications in organic photovoltaics and clarifying the often obscure jargon hindering the access of newcomers to the literature of the field. Special attention is paid to the choice of the computational "engine" (Monte Carlo or Molecular Dynamics) used to generate equilibrium configurations of the molecular system under investigation and, more importantly, to the choice of the chemical details in describing the molecular interactions. Recent literature dealing with the simulation of organic semiconductors is critically reviewed in order of increasing complexity of the system studied, from low molecular weight molecules to semiflexible polymers, including the challenging problem of determining the morphology of heterojunctions between two different materials. PMID:24322782

  4. Unexpected capacity for organic carbon assimilation by Thermosynechococcus elongatus, a crucial photosynthetic model organism.

    PubMed

    Zilliges, Yvonne; Dau, Holger

    2016-04-01

    Genetic modification of key residues of photosystems is essential to identify functionally crucial processes by spectroscopic and crystallographic investigation; the required protein stability favours use of thermophilic species. The currently unique thermophilic photosynthetic model organism is the cyanobacterial genus Thermosynechococcus. We report the ability of Thermosynechococcus elongatus to assimilate organic carbon, specifically D-fructose. Growth in the presence of a photosynthesis inhibitor opens the door towards crucial amino acid substitutions in photosystems by the rescue of otherwise lethal mutations. Yet depression of batch-culture growth after 7 days implies that additional developments are needed. PMID:26935247

  5. Lamination of organic solar cells and organic light emitting devices: Models and experiments

    SciTech Connect

    Oyewole, O. K.; Yu, D.; Du, J.; Asare, J.; Fashina, A.; Anye, V. C.; Zebaze Kana, M. G.; Soboyejo, W. O.

    2015-08-21

    In this paper, a combined experimental, computational, and analytical approach is used to provide new insights into the lamination of organic solar cells and light emitting devices at macro- and micro-scales. First, the effects of applied lamination force (on contact between the laminated layers) are studied. The crack driving forces associated with the interfacial cracks (at the bi-material interfaces) are estimated along with the critical interfacial crack driving forces associated with the separation of thin films, after layer transfer. The conditions for successful lamination are predicted using a combination of experiments and computational models. Guidelines are developed for the lamination of low-cost organic electronic structures.

  6. A general framework for modelling the vertical organic matter profile in mineral and organic soils

    NASA Astrophysics Data System (ADS)

    Braakhekke, Maarten; Ahrens, Bernhard

    2016-04-01

    The vertical distribution of soil organic matter (SOM) within the mineral soil and surface organic layer is an important property of terrestrial ecosystems that affects carbon and nutrient cycling and soil heat and moisture transport. The overwhelming majority of models of SOM dynamics are zero-dimensional, i.e. they do not resolve heterogeneity of SOM concentration along the vertical profile. In recent years, however, a number of new vertically explicit SOM models or vertically explicit versions of existing models have been published. These models describe SOM in units of concentration (mass per unit volume) by means of a reactive-transport model that includes diffusion and/or advection terms for SOM transport, and vertically resolves SOM inputs and factors that influence decomposition. An important assumption behind these models is that the volume of soil elements is constant over time, i.e. not affected by SOM dynamics. This assumption only holds if the SOM content is negligible compared to the mineral content. When this is not the case, SOM input or loss in a soil element may cause a change in volume of the element rather than a change in SOM concentration. Furthermore, these volume changes can cause vertical shifts of material relative to the surface. This generally causes material in an organic layer to gradually move downward, even in absence of mixing processes. Since the classical reactive-transport model of the SOM profile can only be applied to the mineral soil, the surface organic layer is usually either treated separately or not explicitly considered. We present a new and elegant framework that treats the surface organic layer and mineral soil as one continuous whole. It explicitly accounts for volume changes due to SOM dynamics and changes in bulk density. The vertical shifts resulting from these volume changes are included in an Eulerian representation as an additional advective transport flux. Our approach offers a more elegant and realistic
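    For comparison, a minimal version of the classical reactive-transport profile model that the authors generalize (dC/dt = D d2C/dz2 - v dC/dz - k C + I(z), with fixed soil volume and no bulk-density feedback) can be written as follows; all parameter values are illustrative.

      # Classical 1-D reactive-transport model of a SOM profile, explicit scheme.
      # Volume changes and organic-layer dynamics from the abstract are NOT included.
      import numpy as np

      nz, dz, dt, years = 100, 0.01, 0.001, 50          # 1 m profile, 1 cm cells, time in years
      D, v, k = 5e-4, 2e-3, 0.02                        # mixing (m2/yr), advection (m/yr), decay (1/yr)
      z = (np.arange(nz) + 0.5) * dz
      inputs = 0.5 * np.exp(-z / 0.1)                   # litter/root input, kg C m^-3 yr^-1

      C = np.zeros(nz)                                  # SOM concentration profile
      for _ in range(int(years / dt)):
          Cp = np.pad(C, 1, mode="edge")                # zero-gradient boundaries
          diff = D * (Cp[2:] - 2 * Cp[1:-1] + Cp[:-2]) / dz**2
          adv  = -v * (Cp[1:-1] - Cp[:-2]) / dz         # upwind scheme, downward transport
          C += dt * (diff + adv - k * C + inputs)

      print("surface C:", round(C[0], 2), " at 50 cm:", round(C[nz // 2], 2), "kg C m^-3")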

  7. [Simulation of methane emissions from rice fields in the Taihu Lake region, China by using different unit of soil database with the DNDC model].

    PubMed

    Zhang, Li-ming; Yu, Dong-sheng; Shi, Xue-zheng; Zhao, Li-min; Ding, Wei-xin; Wang, Hong-jie; Pan, Jian-jun

    2009-08-15

    Application of a biogeochemical model, DeNitrification and DeComposition (DNDC), was discussed to assess the impact of different soil databases on simulated CH4 emissions from rice fields in the Taihu Lake region of China. The results showed that CH4 emissions based on the polygon-based soil database of 1:50,000, which contained 52034 polygons of paddy soils representing 1107 paddy soil profiles extracted from the latest national soil map (1:50,000), were located within the ranges produced by the county-based soil database of 1:50,000. However, total emissions for the whole area differed by about 1680 Gg CH4-C. Moreover, CH4 emissions estimated with the polygon-based soil database of 1:50,000 and the county-based soil database of 1:4,000,000, which was the most popular data source when the DNDC model was applied in China, showed large discrepancies for individual county-based units, even though total emissions for the whole area differed by only about 180 Gg CH4-C. This indicated that a more precise soil database is necessary to better simulate CH4 emissions from rice fields in the Taihu Lake region using the DNDC model. PMID:19799272

  8. MOAtox: A Comprehensive Mode of Action and Acute Aquatic Toxicity Database for Predictive Model Development

    EPA Science Inventory

    The mode of toxic action (MOA) has been recognized as a key determinant of chemical toxicity and as an alternative to chemical class-based predictive toxicity modeling. However, the development of quantitative structure activity relationship (QSAR) and other models has been limited...

  9. Millennial Students' Mental Models of Search: Implications for Academic Librarians and Database Developers

    ERIC Educational Resources Information Center

    Holman, Lucy

    2011-01-01

    Today's students exhibit generational differences in the way they search for information. Observations of first-year students revealed a proclivity for simple keyword or phrases searches with frequent misspellings and incorrect logic. Although no students had strong mental models of search mechanisms, those with stronger models did construct more…

  10. Modeling aspects of estuarine eutrophication. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-05-01

    The bibliography contains citations concerning mathematical modeling of existing water quality stresses in estuaries, harbors, bays, and coves. Both physical hydraulic and numerical models for estuarine circulation are discussed. (Contains a minimum of 96 citations and includes a subject term index and title list.)

  11. Fractured rock hydrogeology: Modeling studies. (Latest citations from the Selected Water Resources Abstracts database). Published Search

    SciTech Connect

    Not Available

    1993-07-01

    The bibliography contains citations concerning the use of mathematical and conceptual models in describing the hydraulic parameters of fluid flow in fractured rock. Topics include the use of tracers, solute and mass transport studies, and slug test analyses. The use of modeling techniques in injection well performance prediction is also discussed. (Contains 250 citations and includes a subject term index and title list.)

  12. Construction and analysis of a human hepatotoxicity database suitable for QSAR modeling using post-market safety data.

    PubMed

    Zhu, Xiao; Kruhlak, Naomi L

    2014-07-01

    Drug-induced liver injury (DILI) is one of the most common drug-induced adverse events (AEs) leading to life-threatening conditions such as acute liver failure. It has also been recognized as the single most common cause of safety-related post-market withdrawals or warnings. Efforts to develop new predictive methods to assess the likelihood of a drug being a hepatotoxicant have been challenging due to the complexity and idiosyncrasy of clinical manifestations of DILI. The FDA adverse event reporting system (AERS) contains post-market data that depict the morbidity of AEs. Here, we developed a scalable approach to construct a hepatotoxicity database using post-market data for the purpose of quantitative structure-activity relationship (QSAR) modeling. A set of 2029 unique and modelable drug entities with 13,555 drug-AE combinations was extracted from the AERS database using 37 hepatotoxicity-related query preferred terms (PTs). In order to determine the optimal classification scheme to partition positive from negative drugs, a manually-curated DILI calibration set composed of 105 negatives and 177 positives was developed based on the published literature. The final classification scheme combines hepatotoxicity-related PT data with supporting information that optimize the predictive performance across the calibration set. Data for other toxicological endpoints related to liver injury such as liver enzyme abnormalities, cholestasis, and bile duct disorders, were also extracted and classified. Collectively, these datasets can be used to generate a battery of QSAR models that assess a drug's potential to cause DILI. PMID:24721472
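    The classification step can be pictured with a small sketch: count hepatotoxicity-related preferred terms (PTs) per drug and label a drug positive when their share of its reports exceeds a cut-off. The CSV layout, column names and threshold below are hypothetical and are not the scheme optimized in the paper.

      # Toy positive/negative partition of drugs from drug-AE report pairs.
      # File name, columns and the 5% cut-off are hypothetical placeholders.
      import csv
      from collections import Counter

      hep_counts, total_counts = Counter(), Counter()
      with open("aers_drug_ae_pairs.csv") as fh:
          for row in csv.DictReader(fh):                # columns: drug, pt, is_hepatotoxic_pt
              total_counts[row["drug"]] += 1
              if row["is_hepatotoxic_pt"] == "1":
                  hep_counts[row["drug"]] += 1

      THRESHOLD = 0.05
      labels = {d: ("positive" if hep_counts[d] / total_counts[d] >= THRESHOLD else "negative")
                for d in total_counts}
      print(Counter(labels.values()))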

  13. Developing Interpretive Turbulence Models from a Database with Applications to Wind Farms and Shipboard Operations

    NASA Astrophysics Data System (ADS)

    Schau, Kyle A.

    This thesis presents a complete method of modeling the autospectra of turbulence in closed form via an expansion series using the von Karman model as a basis function. It is capable of modeling turbulence in all three directions of fluid flow: longitudinal, lateral, and vertical, separately, thus eliminating the assumption of homogeneous, isotropic flow. A thorough investigation into the expansion series is presented, with the strengths and weaknesses highlighted. Furthermore, numerical aspects and theoretical derivations are provided. This method is then tested against three highly complex flow fields: wake turbulence inside wind farms, helicopter downwash, and helicopter downwash coupled with turbulence shed from a ship superstructure. These applications demonstrate that this method is remarkably robust, that the developed autospectral models are virtually tailored to the design of white noise driven shaping filters, and that these models in closed form facilitate a greater understanding of complex flow fields in wind engineering.
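    The core idea can be sketched by fitting a measured (here synthetic) autospectrum with a weighted sum of von Karman-shaped basis terms at several length scales; the spectral form below is the commonly used longitudinal von Karman shape, and the scales, weights and synthetic data are illustrative rather than the thesis' calibrated models.

      # Least-squares expansion of an autospectrum in von Karman basis functions.
      import numpy as np

      def von_karman(omega, sigma2, L, U=1.0):
          # Commonly used longitudinal von Karman autospectrum shape.
          return sigma2 * (2.0 * L / np.pi) / (1.0 + (1.339 * L * omega / U) ** 2) ** (5.0 / 6.0)

      omega = np.logspace(-2, 2, 200)                   # frequency grid (rad/s)
      scales = [0.5, 5.0, 50.0]                         # assumed basis length scales
      basis = np.column_stack([von_karman(omega, 1.0, L) for L in scales])

      # Synthetic "measurement": a mixture of two von Karman spectra.
      target = von_karman(omega, 2.0, 8.0) + 0.3 * von_karman(omega, 1.0, 0.8)
      weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
      print("fitted expansion weights:", np.round(weights, 3))

    A white-noise-driven shaping filter can then be designed to reproduce the fitted spectrum, which is the application highlighted in the abstract.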

  14. An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design

    NASA Technical Reports Server (NTRS)

    Lin, Risheng; Afjeh, Abdollah A.

    2003-01-01

    Crucial to an efficient aircraft simulation-based design is a robust data modeling methodology for both recording the information and providing data transfer readily and reliably. To meet this goal, data modeling issues involved in multidisciplinary aircraft design are first analyzed in this study. Next, an XML-based, extensible data object model for multidisciplinary aircraft design is constructed and implemented. The implementation of the model through aircraft data binding allows the design applications to access and manipulate any disciplinary data with a lightweight and easy-to-use API. In addition, language-independent representation of aircraft disciplinary data in the model fosters interoperability amongst heterogeneous systems, thereby facilitating data sharing and exchange between various design tools and systems.
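    The data-binding idea can be illustrated with a few lines of Python: disciplinary values live in an XML document and are read or updated through a small wrapper API. The element names and structure below are hypothetical and do not reproduce the schema developed in the paper.

      # Toy XML-backed data object for aircraft disciplinary data.
      import xml.etree.ElementTree as ET

      XML = (
          '<aircraft name="concept-A">'
          '<aerodynamics><wingArea unit="m2">124.6</wingArea></aerodynamics>'
          '<propulsion><engineCount>2</engineCount></propulsion>'
          '</aircraft>'
      )

      class AircraftModel:
          def __init__(self, xml_text):
              self._root = ET.fromstring(xml_text)

          def get(self, path):                          # e.g. "aerodynamics/wingArea"
              node = self._root.find(path)
              return node.text, node.attrib.get("unit")

          def set(self, path, value):
              self._root.find(path).text = str(value)

      model = AircraftModel(XML)
      print(model.get("aerodynamics/wingArea"))         # ('124.6', 'm2')
      model.set("propulsion/engineCount", 3)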

  15. Data-base development for water-quality modeling of the Patuxent River basin, Maryland

    USGS Publications Warehouse

    Fisher, G.T.; Summers, R.M.

    1987-01-01

    Procedures and rationale used to develop a data base and data management system for the Patuxent Watershed Nonpoint Source Water Quality Monitoring and Modeling Program of the Maryland Department of the Environment and the U.S. Geological Survey are described. A detailed data base and data management system has been developed to facilitate modeling of the watershed for water quality planning purposes; statistical analysis; plotting of meteorologic, hydrologic and water quality data; and geographic data analysis. The system is Maryland 's prototype for development of a basinwide water quality management program. A key step in the program is to build a calibrated and verified water quality model of the basin using the Hydrological Simulation Program--FORTRAN (HSPF) hydrologic model, which has been used extensively in large-scale basin modeling. The compilation of the substantial existing data base for preliminary calibration of the basin model, including meteorologic, hydrologic, and water quality data from federal and state data bases and a geographic information system containing digital land use and soils data is described. The data base development is significant in its application of an integrated, uniform approach to data base management and modeling. (Lantz-PTT)

  16. Investigation of realistic PET simulations incorporating tumor patient's specificity using anthropomorphic models: Creation of an oncology database

    SciTech Connect

    Papadimitroulas, Panagiotis; Efthimiou, Nikos; Nikiforidis, George C.; Kagadis, George C.; Loudos, George; Le Maitre, Amandine; Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris

    2013-11-15

    Purpose: The GATE Monte Carlo simulation toolkit is used for the implementation of realistic PET simulations incorporating tumor heterogeneous activity distributions. The reconstructed patient images include noise from the acquisition process, imaging system's performance restrictions and have limited spatial resolution. For those reasons, the measured intensity cannot be simply introduced in GATE simulations, to reproduce clinical data. Investigation of the heterogeneity distribution within tumors applying partial volume correction (PVC) algorithms was assessed. The purpose of the present study was to create a simulated oncology database based on clinical data with realistic intratumor uptake heterogeneity properties.Methods: PET/CT data of seven oncology patients were used in order to create a realistic tumor database investigating the heterogeneity activity distribution of the simulated tumors. The anthropomorphic models (NURBS based cardiac torso and Zubal phantoms) were adapted to the CT data of each patient, and the activity distribution was extracted from the respective PET data. The patient-specific models were simulated with the Monte Carlo Geant4 application for tomography emission (GATE) in three different levels for each case: (a) using homogeneous activity within the tumor, (b) using heterogeneous activity distribution in every voxel within the tumor as it was extracted from the PET image, and (c) using heterogeneous activity distribution corresponding to the clinical image following PVC. The three different types of simulated data in each case were reconstructed with two iterations and filtered with a 3D Gaussian postfilter, in order to simulate the intratumor heterogeneous uptake. Heterogeneity in all generated images was quantified using textural feature derived parameters in 3D according to the ground truth of the simulation, and compared to clinical measurements. Finally, profiles were plotted in central slices of the tumors, across lines with

  17. Energy supply and demand modeling. (Latest citations from the NTIS bibliographic database). Published Search

    SciTech Connect

    Not Available

    1994-01-01

    The bibliography contains citations concerning the use of mathematical models in trend analysis and forecasting of energy supply and demand factors. Models are presented for the industrial, transportation, and residential sectors. Aspects of long term energy strategies and markets are discussed at the global, national, state, and regional levels. Energy demand and pricing, and econometrics of energy, are explored for electric utilities and natural resources, such as coal, oil, and natural gas. Energy resources are modeled both for fuel usage and for reserves. (Contains 250 citations and includes a subject term index and title list.)

  18. Energy supply and demand modeling. (Latest citations from the NTIS bibliographic database). Published Search

    SciTech Connect

    Not Available

    1994-12-01

    The bibliography contains citations concerning the use of mathematical models in trend analysis and forecasting of energy supply and demand factors. Models are presented for the industrial, transportation, and residential sectors. Aspects of long term energy strategies and markets are discussed at the global, national, state, and regional levels. Energy demand and pricing, and econometrics of energy, are explored for electric utilities and natural resources, such as coal, oil, and natural gas. Energy resources are modeled both for fuel usage and for reserves. (Contains 250 citations and includes a subject term index and title list.)

  19. Implementing marine organic aerosols into the GEOS-Chem model

    DOE PAGESBeta

    Gantt, B.; Johnson, M. S.; Crippa, M.; Prévôt, A. S. H.; Meskhidze, N.

    2015-03-17

    Marine-sourced organic aerosols (MOAs) have been shown to play an important role in tropospheric chemistry by impacting surface mass, cloud condensation nuclei, and ice nuclei concentrations over remote marine and coastal regions. In this work, an online marine primary organic aerosol emission parameterization, designed to be used for both global and regional models, was implemented into the GEOS-Chem (Global Earth Observing System Chemistry) model. The implemented emission scheme improved the large underprediction of organic aerosol concentrations in clean marine regions (normalized mean bias decreases from -79% when using the default settings to -12% when marine organic aerosols are added). Model predictions were also in good agreement (correlation coefficient of 0.62 and normalized mean bias of -36%) with hourly surface concentrations of MOAs observed during the summertime at an inland site near Paris, France. Our study shows that MOAs have weaker coastal-to-inland concentration gradients than sea-salt aerosols, leading to several inland European cities having >10% of their surface submicron organic aerosol mass concentration with a marine source. The addition of MOA tracers to GEOS-Chem enabled us to identify the regions with large contributions of freshly emitted or aged aerosol having distinct physicochemical properties, potentially indicating optimal locations for future field studies.
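    The normalized mean bias quoted above is simply 100 x sum(model - observed) / sum(observed); a quick sketch with made-up surface organic aerosol concentrations:

      # Normalized mean bias (NMB) and correlation for illustrative OA data.
      import numpy as np

      obs   = np.array([0.42, 0.55, 0.31, 0.60, 0.48])   # observed OA, ug/m3 (made up)
      model = np.array([0.30, 0.41, 0.25, 0.50, 0.39])   # modelled OA (made up)

      nmb = 100.0 * (model - obs).sum() / obs.sum()
      r = np.corrcoef(model, obs)[0, 1]
      print("NMB =", round(nmb), "%, r =", round(r, 2))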

  20. Accounting for natural organic matter in aqueous chemical equilibrium models: a review of the theories and applications

    NASA Astrophysics Data System (ADS)

    Dudal, Yves; Gérard, Frédéric

    2004-08-01

    Soil organic matter consists of a highly complex and diversified blend of organic molecules, ranging from low molecular weight organic acids (LMWOAs), sugars, amines, alcohols, etc., to high apparent molecular weight fulvic and humic acids. The presence of a wide range of functional groups on these molecules makes them very reactive and influential in soil chemistry, in regards to acid-base chemistry, metal complexation, precipitation and dissolution of minerals and microbial reactions. Of these functional groups, the carboxylic and phenolic ones are the most abundant and most influential with regard to metal complexation. Therefore, chemical equilibrium models have progressively dealt with organic matter in their calculations. This paper presents a review of six chemical equilibrium models, namely NICA-Donnan, EQ3/6, GEOCHEM, MINTEQA2, PHREEQC and WHAM, in light of the account they make of natural organic matter (NOM), with the objective of helping potential users in choosing a modelling approach. The account has taken various forms, mainly by adding specific molecules within the existing model databases (EQ3/6, GEOCHEM, and PHREEQC) or by using either a discrete (WHAM) or a continuous (NICA-Donnan and MINTEQA2) distribution of the deprotonated carboxylic and phenolic groups. The different ways in which soil organic matter has been integrated into these models are discussed in regards to the model-experiment comparisons that were found in the literature, concerning applications to either laboratory or natural systems. Much of the attention has been focused on the two most advanced models, WHAM and NICA-Donnan, which are able to reasonably describe most of the experimental results. Nevertheless, a better knowledge of the metal-binding properties of humic substances is needed to better constrain model inputs with site-specific parameter values. This represents the main axis of research that needs to be carried out to improve the models. In addition to
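    The discrete treatment of the carboxylic and phenolic groups can be pictured with a handful of sites, each with its own pKa, whose deprotonated fractions are summed as a function of pH. The site densities and pKa values below are illustrative placeholders, not the parameter set of WHAM or any other model listed above.

      # Toy discrete-site proton binding for humic material.
      import numpy as np

      pKa   = np.array([3.0, 4.5, 6.0, 8.5, 9.5, 10.5])    # carboxylic- and phenolic-type sites
      sites = np.array([1.5, 1.5, 1.0, 0.8, 0.8, 0.4])     # site densities (mol/kg), assumed

      def net_charge(pH):
          # Negative charge developed as each discrete site deprotonates
          # (Henderson-Hasselbalch applied site by site).
          frac_deprot = 1.0 / (1.0 + 10.0 ** (pKa - pH))
          return -(sites * frac_deprot).sum()

      for pH in (4, 6, 8, 10):
          print("pH", pH, "charge =", round(net_charge(pH), 2), "mol(c)/kg")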