Sample records for the Comparative Toxicogenomics Database

  1. Workshop report: Identifying opportunities for global integration of toxicogenomics databases, 26-27 June 2013, Research Triangle Park, NC, USA.

    PubMed

    Hendrickx, Diana M; Boyles, Rebecca R; Kleinjans, Jos C S; Dearry, Allen

    2014-12-01

    A joint US-EU workshop on enhancing data sharing and exchange in toxicogenomics was held at the National Institute for Environmental Health Sciences. Currently, efficient reuse of data is hampered by problems related to public data availability, data quality, database interoperability (the ability to exchange information), standardization and sustainability. At the workshop, experts from universities and research institutes presented databases, studies, organizations and tools that attempt to deal with these problems. Furthermore, a case study showing that combining toxicogenomics data from multiple resources leads to more accurate predictions in risk assessment was presented. All participants agreed that there is a need for a web portal describing the diverse, heterogeneous data resources relevant for toxicogenomics research. Furthermore, there was agreement that linking more data resources would improve toxicogenomics data analysis. To outline a roadmap to enhance interoperability between data resources, the participants recommend collecting user stories from the toxicogenomics research community on barriers in data sharing and exchange currently hampering answering to certain research questions. These user stories may guide the prioritization of steps to be taken for enhancing integration of toxicogenomics databases.

  2. The Comparative Toxicogenomics Database: update 2017.

    PubMed

    Davis, Allan Peter; Grondin, Cynthia J; Johnson, Robin J; Sciaky, Daniela; King, Benjamin L; McMorran, Roy; Wiegers, Jolene; Wiegers, Thomas C; Mattingly, Carolyn J

    2017-01-04

    The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) provides information about interactions between chemicals and gene products, and their relationships to diseases. Core CTD content (chemical-gene, chemical-disease and gene-disease interactions manually curated from the literature) is integrated internally as well as with select external datasets to generate expanded networks and predict novel associations. Today, core CTD includes more than 30.5 million toxicogenomic connections relating chemicals/drugs, genes/proteins, diseases, taxa, Gene Ontology (GO) annotations, pathways, and gene interaction modules. In this update, we report a 33% increase in our core data content since 2015, describe our new exposure module (that harmonizes exposure science information with core toxicogenomic data) and introduce a novel dataset of GO-disease inferences (that identify common molecular underpinnings for seemingly unrelated pathologies). These advancements centralize and contextualize real-world chemical exposures with molecular pathways to help scientists generate testable hypotheses in an effort to understand the etiology and mechanisms underlying environmentally influenced diseases. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Release of (and lessons learned from mining) a pioneering large toxicogenomics database.

    PubMed

    Sandhu, Komal S; Veeramachaneni, Vamsi; Yao, Xiang; Nie, Alex; Lord, Peter; Amaratunga, Dhammika; McMillian, Michael K; Verheyen, Geert R

    2015-07-01

    We release the Janssen Toxicogenomics database. This rat liver gene-expression database was generated using Codelink microarrays and has been used in recent years within Janssen to derive signatures for multiple end points and to classify proprietary compounds. The release consists of gene-expression responses to 124 compounds, selected to give broad coverage of liver-active compounds. A selection of the compounds was also analyzed on Affymetrix microarrays. The release includes the results of an in-house reannotation pipeline that maps probes to Entrez gene annotations and classifies them into different confidence classes. High-confidence, unambiguously annotated probes were used to create gene-level data, which served as the starting point for cross-platform comparisons. Connectivity map-based similarity methods show excellent agreement between Codelink and Affymetrix runs of the same samples. We also compared our dataset with the Japanese Toxicogenomics Project and observed reasonable agreement, especially for compounds with stronger gene signatures. We describe an R package containing the gene-level data and show how it can be used for expression-based similarity searches. Comparing the same biological samples run on the Affymetrix and Codelink platforms, good correspondence is observed using connectivity mapping approaches. As expected, this correspondence is smaller when the data are compared with an independent dataset such as TG-GATEs. We hope that this collection of gene-expression profiles will be incorporated into users' toxicogenomics pipelines.

  4. The Comparative Toxicogenomics Database (CTD): A Resource for Comparative Toxicological Studies

    PubMed Central

    Mattingly, CJ; Rosenstein, MC; Colby, GT; Forrest, JN; Boyer, JL

    2006-01-01

    The etiology of most chronic diseases involves interactions between environmental factors and genes that modulate important biological processes (Olden and Wilson, 2000). We are developing the publicly available Comparative Toxicogenomics Database (CTD) to promote understanding about the effects of environmental chemicals on human health. CTD identifies interactions between chemicals and genes and facilitates cross-species comparative studies of these genes. The use of diverse animal models and cross-species comparative sequence studies has been critical for understanding basic physiological mechanisms and gene and protein functions. Similarly, these approaches will be valuable for exploring the molecular mechanisms of action of environmental chemicals and the genetic basis of differential susceptibility. PMID:16902965

  5. A DATABASE FOR TRACKING REPRODUCTIVE TOXICOGENOMIC DATA

    EPA Science Inventory

    A Database for Tracking Reproductive Toxicogenomic Data
    Wenjun Bao, Judy Schmid, Amber Goetz, Hongzu Ren and David Dix
    Reproductive Toxicology Division, National Health and Environmental Effects Research Laboratory, Office of Research and Development, U.S. Environmental Pr...

  6. Using binary classification to prioritize and curate articles for the Comparative Toxicogenomics Database.

    PubMed

    Vishnyakova, Dina; Pasche, Emilie; Ruch, Patrick

    2012-01-01

    We report on the original integration of an automatic text categorization pipeline, so-called ToxiCat (Toxicogenomic Categorizer), that we developed to perform biomedical document classification and prioritization in order to speed up the curation of the Comparative Toxicogenomics Database (CTD). The task can essentially be described as a binary classification task, where a scoring function is used to rank a selected set of articles. Components of a question-answering system are then used to extract CTD-specific annotations from the ranked list of articles. The ranking function is generated using a Support Vector Machine, which combines three main modules: an information retrieval engine for MEDLINE (EAGLi), a gene normalization service (NormaGene) developed for a previous BioCreative campaign and, finally, a set of answering components and an entity recognizer for diseases and chemicals. The main components of the pipeline are publicly available both as a web application and as web services. The specific integration performed for the BioCreative competition is available via a web user interface at http://pingu.unige.ch:8080/Toxicat.
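
    To make the ranking step concrete, here is a minimal Python sketch of SVM-based abstract triage: a linear SVM is trained on labelled abstracts and its decision scores are used to order new candidates. The toy abstracts, labels and TF-IDF features are invented for illustration; this is not the ToxiCat pipeline, which also relies on EAGLi, NormaGene and question-answering components.

      # Minimal sketch: rank candidate abstracts by a linear SVM's decision
      # score (toy data and features; not the ToxiCat implementation).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC

      train_docs = [
          "cadmium exposure increased MT1A expression in rat liver",
          "novel synthesis route for aromatic polymers",
          "arsenic altered TP53 and induced hepatocellular carcinoma",
          "survey of library cataloguing practices",
      ]
      train_labels = [1, 0, 1, 0]  # 1 = curatable for CTD, 0 = not relevant

      vectorizer = TfidfVectorizer(ngram_range=(1, 2))
      X_train = vectorizer.fit_transform(train_docs)

      clf = LinearSVC()  # the learned scoring function
      clf.fit(X_train, train_labels)

      new_docs = [
          "benzene exposure modulates CYP2E1 in mouse hepatocytes",
          "a study of medieval manuscripts",
      ]
      scores = clf.decision_function(vectorizer.transform(new_docs))
      # Higher-scoring abstracts would be sent to biocurators first.
      for doc, score in sorted(zip(new_docs, scores), key=lambda t: -t[1]):
          print(f"{score:+.2f}  {doc}")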

  7. A DATABASE FOR TRACKING TOXICOGENOMIC SAMPLES AND PROCEDURES WITH GENOMIC, PROTEOMIC AND METABONOMIC COMPONENTS

    EPA Science Inventory

    A Database for Tracking Toxicogenomic Samples and Procedures with Genomic, Proteomic and Metabonomic Components
    Wenjun Bao1, Jennifer Fostel2, Michael D. Waters2, B. Alex Merrick2, Drew Ekman3, Mitchell Kostich4, Judith Schmid1, David Dix1
    Office of Research and Developmen...

  8. The curation paradigm and application tool used for manual curation of the scientific literature at the Comparative Toxicogenomics Database

    PubMed Central

    Davis, Allan Peter; Wiegers, Thomas C.; Murphy, Cynthia G.; Mattingly, Carolyn J.

    2011-01-01

    The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and convert free-text information into a structured format using official nomenclature, integrating third party controlled vocabularies for chemicals, genes, diseases and organisms, and a novel controlled vocabulary for molecular interactions. Manual curation produces a robust, richly annotated dataset of highly accurate and detailed information. Currently, CTD describes over 349 000 molecular interactions between 6800 chemicals, 20 900 genes (for 330 organisms) and 4300 diseases that have been manually curated from over 25 400 peer-reviewed articles. These manually curated data are further integrated with other third party data (e.g. Gene Ontology, KEGG and Reactome annotations) to generate a wealth of toxicogenomic relationships. Here, we describe our approach to manual curation that uses a powerful and efficient paradigm involving mnemonic codes. This strategy allows biocurators to quickly capture detailed information from articles by generating simple statements using codes to represent the relationships between data types. The paradigm is versatile, expandable, and able to accommodate new data challenges that arise. We have incorporated this strategy into a web-based curation tool to further increase efficiency and productivity, implement quality control in real-time and accommodate biocurators working remotely. Database URL: http://ctd.mdibl.org PMID:21933848
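
    As a rough illustration of the mnemonic-code idea, the Python sketch below parses shorthand curator statements into structured chemical-gene records. The code vocabulary ('exp+', 'act+', etc.) and statement syntax are invented for this example; CTD's actual controlled vocabulary and web-based curation tool are far more elaborate.

      # Hypothetical mnemonic curation codes mapped to interaction phrases.
      # CTD's real controlled vocabulary of action codes is larger and richer.
      ACTION_CODES = {
          "exp+": "increases expression of",
          "exp-": "decreases expression of",
          "act+": "increases activity of",
          "b": "binds to",
      }

      def parse_statement(statement):
          """Turn curator shorthand like 'cadmium exp+ MT1A' into a record."""
          chemical, code, gene = statement.split()
          return {"chemical": chemical, "gene": gene,
                  "interaction": ACTION_CODES[code]}

      for line in ["cadmium exp+ MT1A", "benzene act+ CYP2E1", "aspirin b PTGS2"]:
          record = parse_statement(line)
          print(f"{record['chemical']} {record['interaction']} {record['gene']}")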

  9. A DATABASE FOR TRACKING TOXICOGENOMIC SAMPLES AND PROCEDURES

    EPA Science Inventory

    Reproductive toxicogenomic studies generate large amounts of toxicological and genomic data. On the toxicology side, a substantial quantity of data accumulates from conventional endpoints such as histology, reproductive physiology and biochemistry. The largest source of genomics...

  10. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uehara, Takeki, E-mail: takeki.uehara@shionogi.co.jp; Toxicogenomics Informatics Project, National Institute of Biomedical Innovation, 7-6-8 Asagi, Ibaraki, Osaka 567-0085; Minowa, Yohsuke

    2011-09-15

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. Support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: (1) We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. (2) The optimized model consisting of 9 probes had 99% sensitivity and 97% specificity. (3) This model enables us to detect genotoxic as well as non-genotoxic hepatocarcinogens.
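
    A heavily simplified Python sketch of the modeling strategy (a support vector machine with wrapper-type gene selection, here via recursive feature elimination) is shown below. The synthetic expression matrix, gene counts and parameters are placeholders, not the TG-GATEs data or the published nine-probe model.

      # Sketch: SVM classifier with wrapper-style gene selection (RFE) on
      # synthetic expression data standing in for TG-GATEs profiles.
      import numpy as np
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_samples, n_genes = 60, 200                 # placeholder dimensions
      X = rng.normal(size=(n_samples, n_genes))
      y = rng.integers(0, 2, size=n_samples)       # 1 = hepatocarcinogen group
      X[y == 1, :9] += 1.5                         # pretend 9 genes carry signal

      selector = RFE(SVC(kernel="linear"), n_features_to_select=9, step=10)
      selector.fit(X, y)
      selected = np.flatnonzero(selector.support_)
      print("selected gene indices:", selected.tolist())

      # Real studies report sensitivity/specificity on training and
      # independent validation sets; here a cross-validated accuracy suffices.
      acc = cross_val_score(SVC(kernel="linear"), X[:, selected], y, cv=5).mean()
      print(f"cross-validated accuracy: {acc:.2f}")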

  11. Prioritizing PubMed articles for the Comparative Toxicogenomic Database utilizing semantic information

    PubMed Central

    Wilbur, W. John

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) contains manually curated literature that describes chemical–gene interactions, chemical–disease relationships and gene–disease relationships. Finding articles containing this information is the first and an important step toward efficient manual curation. However, the complex nature of named entities and their relationships makes it challenging to choose relevant articles. In this article, we introduce a machine learning framework for prioritizing CTD-relevant articles based on our prior system for the protein–protein interaction article classification task in BioCreative III. To address new challenges in the CTD task, we explore a new entity identification method for genes, chemicals and diseases. In addition, latent topics are analyzed and used as a feature type to overcome the small size of the training set. Applied to the BioCreative 2012 Triage dataset, our method achieved 0.8030 mean average precision (MAP) in the official runs, resulting in the top MAP system among participants. Integrated with PubTator, a Web interface for annotating biomedical literature, the proposed system also received a positive review from the CTD curation team. PMID:23160415

  12. Prioritizing PubMed articles for the Comparative Toxicogenomic Database utilizing semantic information.

    PubMed

    Kim, Sun; Kim, Won; Wei, Chih-Hsuan; Lu, Zhiyong; Wilbur, W John

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) contains manually curated literature that describes chemical-gene interactions, chemical-disease relationships and gene-disease relationships. Finding articles containing this information is the first and an important step toward efficient manual curation. However, the complex nature of named entities and their relationships makes it challenging to choose relevant articles. In this article, we introduce a machine learning framework for prioritizing CTD-relevant articles based on our prior system for the protein-protein interaction article classification task in BioCreative III. To address new challenges in the CTD task, we explore a new entity identification method for genes, chemicals and diseases. In addition, latent topics are analyzed and used as a feature type to overcome the small size of the training set. Applied to the BioCreative 2012 Triage dataset, our method achieved 0.8030 mean average precision (MAP) in the official runs, resulting in the top MAP system among participants. Integrated with PubTator, a Web interface for annotating biomedical literature, the proposed system also received a positive review from the CTD curation team.
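
    Both records above report performance as mean average precision (MAP), so a compact Python reference implementation may be helpful; the ranked relevance lists are toy examples, not BioCreative runs.

      # Mean average precision (MAP) for ranked retrieval, with toy judgements.
      def average_precision(relevance):
          hits, score = 0, 0.0
          for rank, rel in enumerate(relevance, start=1):
              if rel:
                  hits += 1
                  score += hits / rank
          return score / max(hits, 1)

      def mean_average_precision(runs):
          return sum(average_precision(r) for r in runs) / len(runs)

      runs = [
          [1, 0, 1, 1, 0],   # ranked 0/1 relevance for query 1
          [0, 1, 0, 0, 1],   # ranked 0/1 relevance for query 2
      ]
      print(f"MAP = {mean_average_precision(runs):.4f}")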

  13. Use of genomic data in risk assessment case study: I. Evaluation of the dibutyl phthalate male reproductive development toxicity data set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makris, Susan L., E-mail: makris.susan@epa.gov; Euling, Susan Y.; Gray, L. Earl

    2013-09-15

    A case study was conducted, using dibutyl phthalate (DBP), to explore an approach to using toxicogenomic data in risk assessment. The toxicity and toxicogenomic data sets relative to DBP-related male reproductive developmental outcomes were considered conjointly to derive information about mode and mechanism of action. In this manuscript, we describe the case study evaluation of the toxicological database for DBP, focusing on identifying the full spectrum of male reproductive developmental effects. The data were assessed to 1) evaluate low dose and low incidence findings and 2) identify male reproductive toxicity endpoints without well-established modes of action (MOAs). These efforts led to the characterization of data gaps and research needs for the toxicity and toxicogenomic studies in a risk assessment context. Further, the identification of endpoints with unexplained MOAs in the toxicity data set was useful in the subsequent evaluation of the mechanistic information that the toxicogenomic data set evaluation could provide. The extensive analysis of the toxicology data set within the MOA context provided a resource of information for DBP in attempts to hypothesize MOAs (for endpoints without a well-established MOA) and to phenotypically anchor toxicogenomic and other mechanistic data both to toxicity endpoints and to available toxicogenomic data. This case study serves as an example of the steps that can be taken to develop a toxicological data source for a risk assessment, both in general and especially for risk assessments that include toxicogenomic data.

  14. Integrative data mining of high-throughput in vitro screens, in vivo data, and disease information to identify Adverse Outcome Pathway (AOP) signatures:ToxCast high-throughput screening data and Comparative Toxicogenomics Database (CTD) as a case study.

    EPA Science Inventory

    The Adverse Outcome Pathway (AOP) framework provides a systematic way to describe linkages between molecular and cellular processes and organism or population level effects. The current AOP assembly methods however, are inefficient. Our goal is to generate computationally-pr...

  15. Similar compounds searching system by using the gene expression microarray database.

    PubMed

    Toyoshiba, Hiroyoshi; Sawada, Hiroshi; Naeshiro, Ichiro; Horinouchi, Akira

    2009-04-10

    Large numbers of microarrays have been examined, and several public and commercial databases have been developed. However, it is not easy to compare in-house microarray data with those in a database because of insufficient reproducibility due to differences in experimental conditions. As one approach to using these databases, we developed a similar compounds searching system (SCSS) on a toxicogenomics database. Datasets for 55 compounds administered to rats in the Toxicogenomics Project (TGP) database in Japan were used in this study. Using the fold-change ranking method developed by Lamb et al. [Lamb, J., Crawford, E.D., Peck, D., Modell, J.W., Blat, I.C., Wrobel, M.J., Lerner, J., Brunet, J.P., Subramanian, A., Ross, K.N., Reich, M., Hieronymus, H., Wei, G., Armstrong, S.A., Haggarty, S.J., Clemons, P.A., Wei, R., Carr, S.A., Lander, E.S., Golub, T.R., 2006. The connectivity map: using gene-expression signatures to connect small molecules, genes, and disease. Science 313, 1929-1935] and a criterion called the hit ratio, the system lets us compare in-house microarray data with those in the database. In-house data generated for clofibrate, phenobarbital, and a proprietary compound were used to evaluate the performance of the SCSS method. Phenobarbital and clofibrate, which were included in the TGP database, scored highest by the SCSS method. Other high-scoring compounds had effects similar to either phenobarbital (a cytochrome P450 inducer) or clofibrate (a peroxisome proliferator). Some of the high-scoring compounds identified using rats administered the proprietary compound are known to cause similar toxicological changes in different species. Our results suggest that the SCSS method could be used in drug discovery and development. Moreover, this method may be a powerful tool for understanding the mechanisms by which biological systems respond to various chemical compounds and may also predict adverse effects of new compounds.
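
    The fold-change ranking idea behind the SCSS can be illustrated with a small Python sketch that scores reference compounds by how well a query signature overlaps the extremes of their ranked gene lists. The gene signatures and the 'hit ratio' definition here are simplified stand-ins for the published method.

      # Rank-based signature matching, loosely in the spirit of connectivity
      # mapping. Gene lists and the scoring rule are illustrative only.
      def hit_ratio(query_up, query_down, reference_ranked, top_n=3):
          """Fraction of query genes found at the expected extreme of a
          reference compound's fold-change ranking (toy definition)."""
          top = set(reference_ranked[:top_n])        # most up-regulated genes
          bottom = set(reference_ranked[-top_n:])    # most down-regulated genes
          hits = len(set(query_up) & top) + len(set(query_down) & bottom)
          return hits / (len(query_up) + len(query_down))

      # Reference compounds: genes ordered from most up- to most down-regulated.
      database = {
          "clofibrate":    ["Acox1", "Cyp4a1", "Ehhadh", "Fasn", "Cyp7a1", "Mt1a"],
          "phenobarbital": ["Cyp2b1", "Cyp3a1", "Aldh1a1", "Ehhadh", "Acox1", "Scd1"],
      }
      query_up, query_down = ["Acox1", "Cyp4a1"], ["Mt1a"]

      scores = {name: hit_ratio(query_up, query_down, ranked)
                for name, ranked in database.items()}
      for compound, score in sorted(scores.items(), key=lambda t: -t[1]):
          print(f"{compound:>13}: {score:.2f}")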

  16. Predicting Drug-induced Hepatotoxicity Using QSAR and Toxicogenomics Approaches

    PubMed Central

    Low, Yen; Uehara, Takeki; Minowa, Yohsuke; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro; Sedykh, Alexander; Muratov, Eugene; Fourches, Denis; Zhu, Hao; Rusyn, Ivan; Tropsha, Alexander

    2014-01-01

    Quantitative Structure-Activity Relationship (QSAR) modeling and toxicogenomics are used independently as predictive tools in toxicology. In this study, we evaluated the power of several statistical models for predicting drug hepatotoxicity in rats using different descriptors of drug molecules, namely their chemical descriptors and toxicogenomic profiles. The records were taken from the Toxicogenomics Project rat liver microarray database containing information on 127 drugs (http://toxico.nibio.go.jp/datalist.html). The model endpoint was hepatotoxicity in the rat following 28 days of exposure, established by liver histopathology and serum chemistry. First, we developed multiple conventional QSAR classification models using a comprehensive set of chemical descriptors and several classification methods (k nearest neighbor, support vector machines, random forests, and distance weighted discrimination). With chemical descriptors alone, external predictivity (Correct Classification Rate, CCR) from 5-fold external cross-validation was 61%. Next, the same classification methods were employed to build models using only toxicogenomic data (24h after a single exposure) treated as biological descriptors. The optimized models used only 85 selected toxicogenomic descriptors and had CCR as high as 76%. Finally, hybrid models combining both chemical descriptors and transcripts were developed; their CCRs were between 68 and 77%. Although the accuracy of hybrid models did not exceed that of the models based on toxicogenomic data alone, the use of both chemical and biological descriptors enriched the interpretation of the models. In addition to finding 85 transcripts that were predictive and highly relevant to the mechanisms of drug-induced liver injury, chemical structural alerts for hepatotoxicity were also identified. These results suggest that concurrent exploration of the chemical features and acute treatment-induced changes in transcript levels will both enrich the mechanistic understanding of sub-chronic liver injury and afford models capable of accurate prediction of hepatotoxicity from chemical structure and short-term assay results. PMID:21699217
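
    A minimal Python sketch of the evaluation scheme described above follows: chemical-only, toxicogenomic-only and hybrid descriptor blocks are compared by cross-validated correct classification rate (approximated here by balanced accuracy). The descriptors, dimensions and random forest classifier are assumptions for illustration, not the published QSAR workflow.

      # Compare chemical-only, toxicogenomic-only and hybrid descriptor blocks
      # by cross-validated correct classification rate (balanced accuracy used
      # as a stand-in). All data are synthetic placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n = 120
      chem = rng.normal(size=(n, 50))       # chemical descriptors (placeholder)
      tgx = rng.normal(size=(n, 85))        # transcript descriptors (placeholder)
      y = rng.integers(0, 2, size=n)        # 1 = hepatotoxic, 0 = non-hepatotoxic
      tgx[y == 1, :10] += 1.0               # inject a weak biological signal

      blocks = {"chemical": chem, "toxicogenomic": tgx,
                "hybrid": np.hstack([chem, tgx])}
      for name, X in blocks.items():
          ccr = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                                cv=5, scoring="balanced_accuracy").mean()
          print(f"{name:>13} descriptors: CCR ~ {ccr:.2f}")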

  17. Entitymetrics: Measuring the Impact of Entities

    PubMed Central

    Ding, Ying; Song, Min; Han, Jia; Yu, Qi; Yan, Erjia; Lin, Lili; Chambers, Tamy

    2013-01-01

    This paper proposes entitymetrics to measure the impact of knowledge units. Entitymetrics highlight the importance of entities embedded in scientific literature for further knowledge discovery. In this paper, we use Metformin, a drug for diabetes, as an example to form an entity-entity citation network based on literature related to Metformin. We then calculate the network features and compare the centrality ranks of biological entities with results from Comparative Toxicogenomics Database (CTD). The comparison demonstrates the usefulness of entitymetrics to detect most of the outstanding interactions manually curated in CTD. PMID:24009660
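
    A toy Python version of the entity-entity network analysis can be built with networkx: co-occurring entities become nodes and degree centrality ranks them. The edge list below is fabricated for illustration and is not the Metformin literature network from the paper.

      # Rank biomedical entities by centrality in a toy entity-entity network.
      # The edges are invented examples, not the Metformin citation network.
      import networkx as nx

      edges = [
          ("metformin", "AMPK"), ("metformin", "diabetes mellitus"),
          ("AMPK", "mTOR"), ("metformin", "mTOR"),
          ("diabetes mellitus", "insulin"), ("metformin", "insulin"),
      ]
      G = nx.Graph(edges)

      centrality = nx.degree_centrality(G)
      for entity, score in sorted(centrality.items(), key=lambda t: -t[1]):
          print(f"{entity:>18}: {score:.2f}")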

  18. Comparison of toxicogenomics and traditional approaches to inform mode of action and points of departure in human health risk assessment of benzo[a]pyrene in drinking water

    PubMed Central

    Labib, Sarah; Bourdon-Lacombe, Julie; Kuo, Byron; Buick, Julie K.; Lemieux, France; Williams, Andrew; Halappanavar, Sabina; Malik, Amal; Luijten, Mirjam; Aubrecht, Jiri; Hyduke, Daniel R.; Fornace, Albert J.; Swartz, Carol D.; Recio, Leslie; Yauk, Carole L.

    2015-01-01

    Toxicogenomics is proposed to be a useful tool in human health risk assessment. However, a systematic comparison of traditional risk assessment approaches with those applying toxicogenomics has never been done. We conducted a case study to evaluate the utility of toxicogenomics in the risk assessment of benzo[a]pyrene (BaP), a well-studied carcinogen, for drinking water exposures. Our study was intended to compare methodologies, not to evaluate drinking water safety. We compared traditional (RA1), genomics-informed (RA2) and genomics-only (RA3) approaches. RA2 and RA3 applied toxicogenomics data from human cell cultures and mice exposed to BaP to determine if these data could provide insight into BaP's mode of action (MOA) and derive tissue-specific points of departure (POD). Our global gene expression analysis supported that BaP is genotoxic in mice and allowed the development of a detailed MOA. Toxicogenomics analysis in human lymphoblastoid TK6 cells demonstrated a high degree of consistency in perturbed pathways with animal tissues. Quantitatively, the PODs for traditional and transcriptional approaches were similar (liver 1.2 vs. 1.0 mg/kg-bw/day; lung 0.8 vs. 3.7 mg/kg-bw/day; forestomach 0.5 vs. 7.4 mg/kg-bw/day). RA3, which applied toxicogenomics in the absence of apical toxicology data, demonstrates that this approach provides useful information in data-poor situations. Overall, our study supports the use of toxicogenomics as a relatively fast and cost-effective tool for hazard identification, preliminary evaluation of potential carcinogens, and carcinogenic potency, in addition to identifying current limitations and practical questions for future work. PMID:25605026

  19. Toward a public toxicogenomics capability for supporting predictive toxicology: survey of current resources and chemical indexing of experiments in GEO and ArrayExpress.

    PubMed

    Williams-Devane, ClarLynda R; Wolf, Maritja A; Richard, Ann M

    2009-06-01

    A publicly available toxicogenomics capability for supporting predictive toxicology and meta-analysis depends on availability of gene expression data for chemical treatment scenarios, the ability to locate and aggregate such information by chemical, and broad data coverage within chemical, genomics, and toxicological information domains. This capability also depends on common genomics standards, protocol description, and functional linkages of diverse public Internet data resources. We present a survey of public genomics resources from these vantage points and conclude that, despite progress in many areas, the current state of the majority of public microarray databases is inadequate for supporting these objectives, particularly with regard to chemical indexing. To begin to address these inadequacies, we focus chemical annotation efforts on experimental content contained in the two primary public genomic resources: ArrayExpress and Gene Expression Omnibus. Automated scripts and extensive manual review were employed to transform free-text experiment descriptions into a standardized, chemically indexed inventory of experiments in both resources. These files, which include top-level summary annotations, allow for identification of current chemical-associated experimental content, as well as chemical-exposure-related (or "Treatment") content of greatest potential value to toxicogenomics investigation. With these chemical-index files, it is possible for the first time to assess the breadth and overlap of chemical study space represented in these databases, and to begin to assess the sufficiency of data with shared protocols for chemical similarity inferences. Chemical indexing of public genomics databases is a first important step toward integrating chemical, toxicological and genomics data into predictive toxicology.
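
    At its simplest, the chemical-indexing step amounts to matching a controlled list of chemical names and synonyms against free-text experiment descriptions, as in the Python sketch below. The synonym table and sample records are hypothetical; the actual effort combined automated scripts with extensive manual review.

      # Index free-text experiment descriptions by chemical treatment.
      # The synonym dictionary and sample records are hypothetical.
      import re

      CHEMICAL_SYNONYMS = {
          "benzo[a]pyrene": ["benzo[a]pyrene", "b[a]p", "bap"],
          "acetaminophen": ["acetaminophen", "paracetamol", "apap"],
          "bisphenol A": ["bisphenol a", "bpa"],
      }

      def index_experiment(description):
          """Return canonical chemical names mentioned in a description."""
          text = description.lower()
          return [canonical for canonical, synonyms in CHEMICAL_SYNONYMS.items()
                  if any(re.search(rf"\b{re.escape(s)}\b", text) for s in synonyms)]

      records = [
          "GSE-0000: rat liver 24 h after oral BaP treatment",
          "Expression profiling of HepG2 cells exposed to paracetamol",
          "Baseline atlas of mouse kidney transcriptomes",
      ]
      for rec in records:
          print(index_experiment(rec) or "no chemical index", "<-", rec)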

  20. Reconciled Rat and Human Metabolic Networks for Comparative Toxicogenomics and Biomarker Predictions

    DTIC Science & Technology

    2017-02-08

    Excerpt fragments from the article record (received 29 Jan 2016; accepted 13 Dec 2016; published 8 Feb 2017): the reconciled rat and human metabolic networks were applied to comparative toxicogenomics and biomarker predictions in response to 76 drugs, and comparative predictions for xanthine derivatives were validated with new experimental data and literature-based evidence. The record also notes a consensus-based approach for filtering orthology annotations and a comparison with the original human GPR rules (Supplementary Fig. 3).

  21. Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database

    PubMed Central

    Davis, Allan Peter; Johnson, Robin J.; Lennon-Hopkins, Kelley; Sciaky, Daniela; Rosenstein, Michael C.; Wiegers, Thomas C.; Mattingly, Carolyn J.

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and manually curate a triad of chemical–gene, chemical–disease and gene–disease interactions. Typically, articles for CTD are selected using a chemical-centric approach by querying PubMed to retrieve a corpus containing the chemical of interest. Although this technique ensures adequate coverage of knowledge about the chemical (i.e. data completeness), it does not necessarily reflect the most current state of all toxicological research in the community at large (i.e. data currency). Keeping databases current with the most recent scientific results, as well as providing a rich historical background from legacy articles, is a challenging process. To address this issue of data currency, CTD designed and tested a journal-centric approach of curation to complement our chemical-centric method. We first identified priority journals based on defined criteria. Next, over 7 weeks, three biocurators reviewed 2425 articles from three consecutive years (2009–2011) of three targeted journals. From this corpus, 1252 articles contained relevant data for CTD and 52 752 interactions were manually curated. Here, we describe our journal selection process, two methods of document delivery for the biocurators and the analysis of the resulting curation metrics, including data currency, and both intra-journal and inter-journal comparisons of research topics. Based on our results, we expect that curation by select journals can (i) be easily incorporated into the curation pipeline to complement our chemical-centric approach; (ii) build content more evenly for chemicals, genes and diseases in CTD (rather than biasing data by chemicals-of-interest); (iii) reflect developing areas in environmental health and (iv) improve overall data currency for chemicals, genes and diseases. Database URL: http://ctdbase.org/ PMID:23221299

  22. The ToxCast Pathway Database for Identifying Toxicity Signatures and Potential Modes of Action from Chemical Screening Data

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA), through its ToxCast program, is developing predictive toxicity approaches that will use in vitro high-throughput screening (HTS), high-content screening (HCS) and toxicogenomic data to predict in vivo toxicity phenotypes. There are ...

  23. Effect of the difference in vehicles on gene expression in the rat liver--analysis of the control data in the Toxicogenomics Project Database.

    PubMed

    Takashima, Kayoko; Mizukawa, Yumiko; Morishita, Katsumi; Okuyama, Manabu; Kasahara, Toshihiko; Toritsuka, Naoki; Miyagishima, Toshikazu; Nagao, Taku; Urushidani, Tetsuro

    2006-05-08

    The Toxicogenomics Project is a 5-year collaborative project launched by the Japanese government and pharmaceutical companies in 2002. Its aim is to construct a large-scale toxicology database of 150 compounds orally administered to rats. The test consists of a single-administration study (3, 6, 9 and 24 h) and a repeated-administration study (3, 7, 14 and 28 days), and conventional toxicology data are being accumulated together with liver gene expression data analyzed using Affymetrix GeneChips. In the project, either methylcellulose or corn oil is employed as the vehicle. We examined whether the vehicle itself affects the analysis of gene expression and found that corn oil alone affected food consumption and biochemical parameters mainly related to lipid metabolism, accompanied by typical changes in gene expression. Most of the genes modulated by corn oil were related to cholesterol or fatty acid metabolism (e.g., CYP7A1, CYP8B1, 3-hydroxy-3-methylglutaryl-Coenzyme A reductase, squalene epoxidase, angiopoietin-like protein 4, fatty acid synthase, fatty acid binding proteins), suggesting that the response was a physiological one to the oil intake. Many of the lipid-related genes showed circadian rhythm within a day, but the expression patterns of general clock genes (e.g., period 2, arylhydrocarbon nuclear receptor translocator-like, D site albumin promoter binding protein) were unaffected by corn oil, suggesting that the effects are specific to lipid metabolism. These results should be useful to users of the database, especially when drugs tested with different vehicle controls are compared.

  24. Text Mining Effectively Scores and Ranks the Literature for Improving Chemical-Gene-Disease Curation at the Comparative Toxicogenomics Database

    PubMed Central

    Johnson, Robin J.; Lay, Jean M.; Lennon-Hopkins, Kelley; Saraceni-Richards, Cynthia; Sciaky, Daniela; Murphy, Cynthia Grondin; Mattingly, Carolyn J.

    2013-01-01

    The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) is a public resource that curates interactions between environmental chemicals and gene products, and their relationships to diseases, as a means of understanding the effects of environmental chemicals on human health. CTD provides a triad of core information in the form of chemical-gene, chemical-disease, and gene-disease interactions that are manually curated from scientific articles. To increase the efficiency, productivity, and data coverage of manual curation, we have leveraged text mining to help rank and prioritize the triaged literature. Here, we describe our text-mining process that computes and assigns each article a document relevancy score (DRS), wherein a high DRS suggests that an article is more likely to be relevant for curation at CTD. We evaluated our process by first text mining a corpus of 14,904 articles triaged for seven heavy metals (cadmium, cobalt, copper, lead, manganese, mercury, and nickel). Based upon initial analysis, a representative subset corpus of 3,583 articles was then selected from the 14,904 articles and sent to five CTD biocurators for review. The resulting curation of these 3,583 articles was analyzed for a variety of parameters, including article relevancy, novel data content, interaction yield rate, mean average precision, and biological and toxicological interpretability. We show that for all measured parameters, the DRS is an effective indicator for scoring and improving the ranking of literature for the curation of chemical-gene-disease information at CTD. Here, we demonstrate how fully incorporating text mining-based DRS scoring into our curation pipeline enhances manual curation by prioritizing more relevant articles, thereby increasing data content, productivity, and efficiency. PMID:23613709
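
    The document relevancy score idea can be caricatured in Python as a weighted count of recognized chemical, gene and disease mentions; the term lists and weights below are placeholders, not CTD's text-mining pipeline.

      # A toy document relevancy score (DRS) rewarding abstracts that mention
      # chemicals, genes and diseases. Term lists and weights are invented.
      TERM_WEIGHTS = {
          "chemical": ({"cadmium", "nickel", "lead", "mercury"}, 1.0),
          "gene":     ({"mt1a", "tp53", "cyp1a1", "hmox1"}, 1.5),
          "disease":  ({"carcinoma", "nephrotoxicity", "anemia"}, 2.0),
      }

      def document_relevancy_score(abstract):
          tokens = set(abstract.lower().replace(",", " ").split())
          return sum(weight * len(tokens & terms)
                     for terms, weight in TERM_WEIGHTS.values())

      corpus = {
          "doc-1": "Cadmium induces MT1A and HMOX1 in renal carcinoma cells",
          "doc-2": "A historical review of mining towns",
      }
      ranked = sorted(corpus, key=lambda d: document_relevancy_score(corpus[d]),
                      reverse=True)
      for doc in ranked:
          print(f"DRS={document_relevancy_score(corpus[doc]):4.1f}  {doc}  {corpus[doc]}")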

  25. Generating Gene Ontology-Disease Inferences to Explore Mechanisms of Human Disease at the Comparative Toxicogenomics Database

    PubMed Central

    Davis, Allan Peter; Wiegers, Thomas C.; King, Benjamin L.; Wiegers, Jolene; Grondin, Cynthia J.; Sciaky, Daniela; Johnson, Robin J.; Mattingly, Carolyn J.

    2016-01-01

    Strategies for discovering common molecular events among disparate diseases hold promise for improving understanding of disease etiology and expanding treatment options. One technique is to leverage curated datasets found in the public domain. The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) manually curates chemical-gene, chemical-disease, and gene-disease interactions from the scientific literature. The use of official gene symbols in CTD interactions enables this information to be combined with the Gene Ontology (GO) file from NCBI Gene. By integrating these GO-gene annotations with CTD’s gene-disease dataset, we produce 753,000 inferences between 15,700 GO terms and 4,200 diseases, providing opportunities to explore presumptive molecular underpinnings of diseases and identify biological similarities. Through a variety of applications, we demonstrate the utility of this novel resource. As a proof-of-concept, we first analyze known repositioned drugs (e.g., raloxifene and sildenafil) and see that their target diseases have a greater degree of similarity when comparing GO terms vs. genes. Next, a computational analysis predicts seemingly non-intuitive diseases (e.g., stomach ulcers and atherosclerosis) as being similar to bipolar disorder, and these are validated in the literature as reported co-diseases. Additionally, we leverage other CTD content to develop testable hypotheses about thalidomide-gene networks to treat seemingly disparate diseases. Finally, we illustrate how CTD tools can rank a series of drugs as potential candidates for repositioning against B-cell chronic lymphocytic leukemia and predict cisplatin and the small molecule inhibitor JQ1 as lead compounds. The CTD dataset is freely available for users to navigate pathologies within the context of extensive biological processes, molecular functions, and cellular components conferred by GO. This inference set should aid researchers, bioinformaticists, and pharmaceutical drug makers in finding commonalities in disease mechanisms, which in turn could help identify new therapeutics, new indications for existing pharmaceuticals, potential disease comorbidities, and alerts for side effects. PMID:27171405

  26. Generating Gene Ontology-Disease Inferences to Explore Mechanisms of Human Disease at the Comparative Toxicogenomics Database.

    PubMed

    Davis, Allan Peter; Wiegers, Thomas C; King, Benjamin L; Wiegers, Jolene; Grondin, Cynthia J; Sciaky, Daniela; Johnson, Robin J; Mattingly, Carolyn J

    2016-01-01

    Strategies for discovering common molecular events among disparate diseases hold promise for improving understanding of disease etiology and expanding treatment options. One technique is to leverage curated datasets found in the public domain. The Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) manually curates chemical-gene, chemical-disease, and gene-disease interactions from the scientific literature. The use of official gene symbols in CTD interactions enables this information to be combined with the Gene Ontology (GO) file from NCBI Gene. By integrating these GO-gene annotations with CTD's gene-disease dataset, we produce 753,000 inferences between 15,700 GO terms and 4,200 diseases, providing opportunities to explore presumptive molecular underpinnings of diseases and identify biological similarities. Through a variety of applications, we demonstrate the utility of this novel resource. As a proof-of-concept, we first analyze known repositioned drugs (e.g., raloxifene and sildenafil) and see that their target diseases have a greater degree of similarity when comparing GO terms vs. genes. Next, a computational analysis predicts seemingly non-intuitive diseases (e.g., stomach ulcers and atherosclerosis) as being similar to bipolar disorder, and these are validated in the literature as reported co-diseases. Additionally, we leverage other CTD content to develop testable hypotheses about thalidomide-gene networks to treat seemingly disparate diseases. Finally, we illustrate how CTD tools can rank a series of drugs as potential candidates for repositioning against B-cell chronic lymphocytic leukemia and predict cisplatin and the small molecule inhibitor JQ1 as lead compounds. The CTD dataset is freely available for users to navigate pathologies within the context of extensive biological processes, molecular functions, and cellular components conferred by GO. This inference set should aid researchers, bioinformaticists, and pharmaceutical drug makers in finding commonalities in disease mechanisms, which in turn could help identify new therapeutics, new indications for existing pharmaceuticals, potential disease comorbidities, and alerts for side effects.
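
    The inference mechanism described in the two records above is essentially a join through shared official gene symbols: GO terms annotated to a gene are transferred to the diseases curated for that gene. The toy annotation tables in the Python sketch below are invented to show the join and are not actual CTD or NCBI Gene content.

      # GO-disease inference: join GO-gene annotations to curated gene-disease
      # interactions via shared gene symbols. Both tables are toy placeholders.
      from collections import defaultdict

      go_gene = [
          ("GO:0006954 inflammatory response", "TNF"),
          ("GO:0006954 inflammatory response", "IL6"),
          ("GO:0006915 apoptotic process", "TP53"),
      ]
      gene_disease = [
          ("TNF", "rheumatoid arthritis"),
          ("IL6", "atherosclerosis"),
          ("TP53", "hepatocellular carcinoma"),
      ]

      genes_by_go = defaultdict(set)
      for go_term, gene in go_gene:
          genes_by_go[go_term].add(gene)

      inferences = defaultdict(set)   # (GO term, disease) -> supporting genes
      for gene, disease in gene_disease:
          for go_term, genes in genes_by_go.items():
              if gene in genes:
                  inferences[(go_term, disease)].add(gene)

      for (go_term, disease), genes in sorted(inferences.items()):
          print(f"{go_term}  <->  {disease}   (via {', '.join(sorted(genes))})")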

  27. Transcriptional Responses Reveal Similarities Between Preclinical Rat Liver Testing Systems.

    PubMed

    Liu, Zhichao; Delavan, Brian; Roberts, Ruth; Tong, Weida

    2018-01-01

    Toxicogenomics (TGx) is an important tool to gain an enhanced understanding of toxicity at the molecular level. Previously, we developed a pair ranking (PRank) method to assess in vitro to in vivo extrapolation (IVIVE) using toxicogenomic datasets from the Open Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System (TG-GATEs) database. With this method, we investigated three important questions that were not addressed in our previous study: (1) is a 1-day in vivo short-term assay able to replace the 28-day standard and expensive toxicological assay? (2) are some biological processes more conserved across different preclinical testing systems than others? and (3) do these preclinical testing systems have similar resolution in differentiating drugs by their therapeutic uses? For question 1, a high similarity was noted (PRank score = 0.90), indicating the potential utility of shorter-term in vivo studies to predict outcomes in longer-term and more expensive in vivo model systems. There was a moderate similarity between rat primary hepatocytes and in vivo repeat-dose studies (PRank score = 0.71) but a low similarity (PRank score = 0.56) between rat primary hepatocytes and in vivo single-dose studies. To address question 2, we limited the analysis to gene sets relevant to specific toxicogenomic pathways and found that pathways such as lipid metabolism were consistently over-represented in all three assay systems. For question 3, all three preclinical assay systems could distinguish compounds from different therapeutic categories. This suggests that any noted differences between assay systems were biological-process dependent and, furthermore, that all three systems have utility in assessing drug responses within a certain drug class. In conclusion, this comparison of three commonly used rat TGx systems provides useful information on the utility and application of TGx assays.

  28. Genomic Models of Short-Term Exposure Accurately Predict Long-Term Chemical Carcinogenicity and Identify Putative Mechanisms of Action

    PubMed Central

    Gusenleitner, Daniel; Auerbach, Scott S.; Melia, Tisha; Gómez, Harold F.; Sherr, David H.; Monti, Stefano

    2014-01-01

    Background: Despite an overall decrease in incidence of and mortality from cancer, about 40% of Americans will be diagnosed with the disease in their lifetime, and around 20% will die of it. Current approaches to test carcinogenic chemicals adopt the 2-year rodent bioassay, which is costly and time-consuming. As a result, fewer than 2% of the chemicals on the market have actually been tested. However, evidence accumulated to date suggests that gene expression profiles from model organisms exposed to chemical compounds reflect underlying mechanisms of action, and that these toxicogenomic models could be used in the prediction of chemical carcinogenicity. Results: In this study, we used a rat-based microarray dataset from the NTP DrugMatrix Database to test the ability of toxicogenomics to model carcinogenicity. We analyzed 1,221 gene-expression profiles obtained from rats treated with 127 well-characterized compounds, including genotoxic and non-genotoxic carcinogens. We built a classifier that predicts a chemical's carcinogenic potential with an AUC of 0.78, and validated it on an independent dataset from the Japanese Toxicogenomics Project consisting of 2,065 profiles from 72 compounds. Finally, we identified differentially expressed genes associated with chemical carcinogenesis, and developed novel data-driven approaches for the molecular characterization of the response to chemical stressors. Conclusion: Here, we validate a toxicogenomic approach to predict carcinogenicity and provide strong evidence that, with a larger set of compounds, we should be able to improve the sensitivity and specificity of the predictions. We found that the prediction of carcinogenicity is tissue-dependent and that the results also confirm and expand upon previous studies implicating DNA damage, the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and regenerative pathology in the response to carcinogen exposure. PMID:25058030
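
    A minimal Python sketch of the train-then-validate-externally workflow described above is given below, using synthetic expression profiles and ROC AUC as the performance measure. The dataset sizes, the injected signal and the random forest classifier are assumptions, not the DrugMatrix or TG-GATEs analysis.

      # Train a carcinogenicity classifier on one synthetic expression dataset
      # and report ROC AUC on an independent synthetic dataset (stand-ins for
      # the DrugMatrix training data and the external validation data).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(42)

      def make_dataset(n_profiles, n_genes=300, shift=1.0):
          X = rng.normal(size=(n_profiles, n_genes))
          y = rng.integers(0, 2, size=n_profiles)   # 1 = carcinogen exposure
          X[y == 1, :20] += shift                   # shared synthetic signal
          return X, y

      X_train, y_train = make_dataset(400)              # "training" compendium
      X_valid, y_valid = make_dataset(150, shift=0.7)   # "independent" compendium

      model = RandomForestClassifier(n_estimators=200, random_state=0)
      model.fit(X_train, y_train)
      auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
      print(f"external-validation AUC: {auc:.2f}")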

  29. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kienhuis, Anne S., E-mail: anne.kienhuis@rivm.nl; RIKILT, Institute of Food Safety, Wageningen UR, PO Box 230, 6700 AE, Wageningen; Netherlands Toxicogenomics Centre

    Hepatic systems toxicology is the integrative analysis of toxicogenomic technologies, e.g., transcriptomics, proteomics, and metabolomics, in combination with traditional toxicology measures to improve the understanding of mechanisms of hepatotoxic action. Hepatic toxicology studies that have employed toxicogenomic technologies to date have already provided a proof of principle for the value of hepatic systems toxicology in hazard identification. In the present review, acetaminophen is used as a model compound to discuss the application of toxicogenomics in hepatic systems toxicology for its potential role in the risk assessment process, to progress from hazard identification towards hazard characterization. The toxicogenomics-based parallelogram is used to identify current achievements and limitations of acetaminophen toxicogenomic in vivo and in vitro studies for in vitro-to-in vivo and interspecies comparisons, with the ultimate aim to extrapolate animal studies to humans in vivo. This article provides a model for comparison of more species and more in vitro models enhancing the robustness of common toxicogenomic responses and their relevance to human risk assessment. To progress to quantitative dose-response analysis needed for hazard characterization, in hepatic systems toxicology studies, generation of toxicogenomic data of multiple doses/concentrations and time points is required. Newly developed bioinformatics tools for quantitative analysis of toxicogenomic data can aid in the elucidation of dose-responsive effects. The challenge herein is to assess which toxicogenomic responses are relevant for induction of the apical effect and whether perturbations are sufficient for the induction of downstream events, eventually causing toxicity.

  30. TOXICOGENOMICS DRUG DISCOVERY AND THE PATHOLOGIST

    EPA Science Inventory

    Toxicogenomics, drug discovery, and the pathologist.

    The field of toxicogenomics, which currently focuses on the application of large-scale differential gene expression (DGE) data to toxicology, is starting to influence drug discovery and development in the pharmaceutical indu...

  31. NPACT: Naturally Occurring Plant-based Anti-cancer Compound-Activity-Target database

    PubMed Central

    Mangal, Manu; Sagar, Parul; Singh, Harinder; Raghava, Gajendra P. S.; Agarwal, Subhash M.

    2013-01-01

    Plant-derived molecules have been highly valued by biomedical researchers and pharmaceutical companies for developing drugs, as they are thought to be optimized during evolution. Therefore, we have collected and compiled a central resource Naturally Occurring Plant-based Anti-cancer Compound-Activity-Target database (NPACT, http://crdd.osdd.net/raghava/npact/) that gathers the information related to experimentally validated plant-derived natural compounds exhibiting anti-cancerous activity (in vitro and in vivo), to complement the other databases. It currently contains 1574 compound entries, and each record provides information on their structure, manually curated published data on in vitro and in vivo experiments along with reference for users referral, inhibitory values (IC50/ED50/EC50/GI50), properties (physical, elemental and topological), cancer types, cell lines, protein targets, commercial suppliers and drug likeness of compounds. NPACT can easily be browsed or queried using various options, and an online similarity tool has also been made available. Further, to facilitate retrieval of existing data, each record is hyperlinked to similar databases like SuperNatural, Herbal Ingredients’ Targets, Comparative Toxicogenomics Database, PubChem and NCI-60 GI50 data. PMID:23203877

  32. A CTD–Pfizer collaboration: manual curation of 88 000 scientific articles text mined for drug–disease and drug–phenotype interactions

    PubMed Central

    Davis, Allan Peter; Wiegers, Thomas C.; Roberts, Phoebe M.; King, Benjamin L.; Lay, Jean M.; Lennon-Hopkins, Kelley; Sciaky, Daniela; Johnson, Robin; Keating, Heather; Greene, Nigel; Hernandez, Robert; McConnell, Kevin J.; Enayetallah, Ahmed E.; Mattingly, Carolyn J.

    2013-01-01

    Improving the prediction of chemical toxicity is a goal common to both environmental health research and pharmaceutical drug development. To improve safety detection assays, it is critical to have a reference set of molecules with well-defined toxicity annotations for training and validation purposes. Here, we describe a collaboration between safety researchers at Pfizer and the research team at the Comparative Toxicogenomics Database (CTD) to text mine and manually review a collection of 88 629 articles relating over 1 200 pharmaceutical drugs to their potential involvement in cardiovascular, neurological, renal and hepatic toxicity. In 1 year, CTD biocurators curated 254 173 toxicogenomic interactions (152 173 chemical–disease, 58 572 chemical–gene, 5 345 gene–disease and 38 083 phenotype interactions). All chemical–gene–disease interactions are fully integrated with public CTD, and phenotype interactions can be downloaded. We describe Pfizer’s text-mining process to collate the articles, and CTD’s curation strategy, performance metrics, enhanced data content and new module to curate phenotype information. As well, we show how data integration can connect phenotypes to diseases. This curation can be leveraged for information about toxic endpoints important to drug safety and help develop testable hypotheses for drug–disease events. The availability of these detailed, contextualized, high-quality annotations curated from seven decades’ worth of the scientific literature should help facilitate new mechanistic screening assays for pharmaceutical compound survival. This unique partnership demonstrates the importance of resource sharing and collaboration between public and private entities and underscores the complementary needs of the environmental health science and pharmaceutical communities. Database URL: http://ctdbase.org/ PMID:24288140

  33. A CTD-Pfizer collaboration: manual curation of 88,000 scientific articles text mined for drug-disease and drug-phenotype interactions.

    PubMed

    Davis, Allan Peter; Wiegers, Thomas C; Roberts, Phoebe M; King, Benjamin L; Lay, Jean M; Lennon-Hopkins, Kelley; Sciaky, Daniela; Johnson, Robin; Keating, Heather; Greene, Nigel; Hernandez, Robert; McConnell, Kevin J; Enayetallah, Ahmed E; Mattingly, Carolyn J

    2013-01-01

    Improving the prediction of chemical toxicity is a goal common to both environmental health research and pharmaceutical drug development. To improve safety detection assays, it is critical to have a reference set of molecules with well-defined toxicity annotations for training and validation purposes. Here, we describe a collaboration between safety researchers at Pfizer and the research team at the Comparative Toxicogenomics Database (CTD) to text mine and manually review a collection of 88,629 articles relating over 1,200 pharmaceutical drugs to their potential involvement in cardiovascular, neurological, renal and hepatic toxicity. In 1 year, CTD biocurators curated 254,173 toxicogenomic interactions (152,173 chemical-disease, 58,572 chemical-gene, 5,345 gene-disease and 38,083 phenotype interactions). All chemical-gene-disease interactions are fully integrated with public CTD, and phenotype interactions can be downloaded. We describe Pfizer's text-mining process to collate the articles, and CTD's curation strategy, performance metrics, enhanced data content and new module to curate phenotype information. As well, we show how data integration can connect phenotypes to diseases. This curation can be leveraged for information about toxic endpoints important to drug safety and help develop testable hypotheses for drug-disease events. The availability of these detailed, contextualized, high-quality annotations curated from seven decades' worth of the scientific literature should help facilitate new mechanistic screening assays for pharmaceutical compound survival. This unique partnership demonstrates the importance of resource sharing and collaboration between public and private entities and underscores the complementary needs of the environmental health science and pharmaceutical communities. Database URL: http://ctdbase.org/

  34. Utilizing toxicogenomic data to understand chemical mechanism of action in risk assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Vickie S., E-mail: wilson.vickie@epa.gov; Keshava, Nagalakshmi; Hester, Susan

    2013-09-15

    The predominant role of toxicogenomic data in risk assessment, thus far, has been one of augmentation of more traditional in vitro and in vivo toxicology data. This article focuses on the currently available examples of instances where toxicogenomic data has been evaluated in human health risk assessment (e.g., acetochlor and arsenicals), which have been limited to the application of toxicogenomic data to inform mechanism of action. This article reviews the regulatory policy backdrop and highlights important efforts to ultimately achieve regulatory acceptance. A number of research efforts on specific chemicals that were designed for risk assessment purposes have employed mechanism or mode of action hypothesis testing and generating strategies. The strides made by large scale efforts to utilize toxicogenomic data in screening, testing, and risk assessment are also discussed. These efforts include both the refinement of methodologies for performing toxicogenomics studies and analysis of the resultant data sets. The current issues limiting the application of toxicogenomics to define mode or mechanism of action in risk assessment are discussed together with interrelated research needs. In summary, as chemical risk assessment moves away from a single mechanism of action approach toward a toxicity pathway-based paradigm, we envision that toxicogenomic data from multiple technologies (e.g., proteomics, metabolomics, transcriptomics, supportive RT-PCR studies) can be used in conjunction with one another to understand the complexities of multiple, and possibly interacting, pathways affected by chemicals which will impact human health risk assessment.

  35. Asymmetric author-topic model for knowledge discovering of big data in toxicogenomics.

    PubMed

    Chung, Ming-Hua; Wang, Yuping; Tang, Hailin; Zou, Wen; Basinger, John; Xu, Xiaowei; Tong, Weida

    2015-01-01

    The advancement of high-throughput screening technologies facilitates the generation of massive amounts of biological data, a big-data phenomenon in biomedical science. Yet researchers still rely heavily on keyword search and/or literature review to navigate the databases, and analyses are often done at a rather small scale. As a result, the rich information in a database has not been fully utilized, particularly the information embedded in the interactions between data points, which is largely ignored and buried. For the past 10 years, probabilistic topic modeling has been recognized as an effective machine learning approach for annotating the hidden thematic structure of massive collections of documents. The analogy between a text corpus and large-scale genomic data enables the application of text-mining tools, such as probabilistic topic models, to explore hidden patterns in genomic data and, by extension, altered biological functions. In this paper, we developed a generalized probabilistic topic model to analyze a toxicogenomics dataset consisting of a large number of gene expression profiles from rat livers treated with drugs at multiple doses and time points. We discovered hidden patterns in gene expression associated with the effects of dose and time point of treatment. Finally, we illustrated the ability of our model to identify evidence supporting a potential reduction in animal use.
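
    To make the text-corpus analogy concrete, the Python sketch below treats each treatment condition as a 'document' whose 'words' are gene-level counts and fits a standard latent Dirichlet allocation model. The synthetic count matrix and parameters are toy assumptions; the paper's asymmetric author-topic model is a richer, customized formulation.

      # Treat each treatment condition as a "document" whose "words" are gene
      # counts and fit a standard LDA topic model. The count matrix is
      # synthetic; the paper's asymmetric author-topic model is richer.
      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation

      rng = np.random.default_rng(7)
      n_treatments, n_genes, n_topics = 40, 100, 3

      # Pretend counts encoding how strongly each gene responds per treatment.
      counts = rng.poisson(lam=1.0, size=(n_treatments, n_genes))
      counts[:20, :30] += rng.poisson(lam=3.0, size=(20, 30))  # topic-like block

      lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
      doc_topics = lda.fit_transform(counts)   # treatment-by-topic weights

      for k, component in enumerate(lda.components_):
          top_genes = np.argsort(component)[::-1][:5]
          print(f"topic {k}: top gene indices {top_genes.tolist()}")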

  16. Integrating toxicogenomics into human health risk assessment: lessons learned from the benzo[a]pyrene case study.

    PubMed

    Chepelev, Nikolai L; Moffat, Ivy D; Labib, Sarah; Bourdon-Lacombe, Julie; Kuo, Byron; Buick, Julie K; Lemieux, France; Malik, Amal I; Halappanavar, Sabina; Williams, Andrew; Yauk, Carole L

    2015-01-01

    The use of short-term toxicogenomic tests to predict cancer (or other health effects) offers considerable advantages relative to traditional toxicity testing methods. The advantages include increased throughput, increased mechanistic data, and significantly reduced costs. However, precisely how toxicogenomics data can be used to support human health risk assessment (RA) is unclear. In a companion paper ( Moffat et al. 2014 ), we present a case study evaluating the utility of toxicogenomics in the RA of benzo[a]pyrene (BaP), a known human carcinogen. The case study is meant as a proof-of-principle exercise using a well-established mode of action (MOA) that impacts multiple tissues, which should provide a best case example. We found that toxicogenomics provided rich mechanistic data applicable to hazard identification, dose-response analysis, and quantitative RA of BaP. Based on this work, here we share some useful lessons for both research and RA, and outline our perspective on how toxicogenomics can benefit RA in the short- and long-term. Specifically, we focus on (1) obtaining biologically relevant data that are readily suitable for establishing an MOA for toxicants, (2) examining the human relevance of an MOA from animal testing, and (3) proposing appropriate quantitative values for RA. We describe our envisioned strategy on how toxicogenomics can become a tool in RA, especially when anchored to other short-term toxicity tests (apical endpoints) to increase confidence in the proposed MOA, and emphasize the need for additional studies on other MOAs to define the best practices in the application of toxicogenomics in RA.

  17. Complementary roles for toxicologic pathology and mathematics in toxicogenomics, with special reference to data interpretation and oscillatory dynamics.

    PubMed

    Morgan, Kevin T; Pino, Michael; Crosby, Lynn M; Wang, Min; Elston, Timothy C; Jayyosi, Zaid; Bonnefoi, Marc; Boorman, Gary

    2004-01-01

    Toxicogenomics is an emerging multidisciplinary science that will profoundly impact the practice of toxicology. New generations of biologists, using evolving toxicogenomics tools, will generate massive data sets in need of interpretation. Mathematical tools are necessary to cluster and otherwise find meaningful structure in such data. The linking of this structure to gene functions and disease processes, and finally the generation of useful data interpretation remains a significant challenge. The training and background of pathologists make them ideally suited to contribute to the field of toxicogenomics, from experimental design to data interpretation. Toxicologic pathology, a discipline based on pattern recognition, requires familiarity with the dynamics of disease processes and interactions between organs, tissues, and cell populations. Optimal involvement of toxicologic pathologists in toxicogenomics requires that they communicate effectively with the many other scientists critical for the effective application of this complex discipline to societal problems. As noted by Petricoin III et al (Nature Genetics 32, 474-479, 2002), cooperation among regulators, sponsors and experts will be essential for realizing the potential of microarrays for public health. Following a brief introduction to the role of mathematics in toxicogenomics, "data interpretation" from the perspective of a pathologist is briefly discussed. Based on oscillatory behavior in the liver, the importance of an understanding of mathematics is addressed, and an approach to learning mathematics "later in life" is provided. An understanding of pathology by mathematicians involved in toxicogenomics is equally critical, as both mathematics and pathology are essential for transforming toxicogenomics data sets into useful knowledge.

  18. Comparison of MeHg-induced toxicogenomic responses across in vivo and in vitro models used in developmental toxicology.

    PubMed

    Robinson, Joshua F; Theunissen, Peter T; van Dartel, Dorien A M; Pennings, Jeroen L; Faustman, Elaine M; Piersma, Aldert H

    2011-09-01

    Toxicogenomic evaluations may improve toxicity prediction of in vitro-based developmental models, such as whole embryo culture (WEC) and embryonic stem cells (ESC), by providing a robust mechanistic marker which can be linked with responses associated with developmental toxicity in vivo. While promising in theory, toxicogenomic comparisons between in vivo and in vitro models are complex due to inherent differences in model characteristics and experimental design. Determining factors which influence these global comparisons are critical in the identification of reliable mechanistic-based markers of developmental toxicity. In this study, we compared available toxicogenomic data assessing the impact of the known teratogen, methylmercury (MeHg) across a diverse set of in vitro and in vivo models to investigate the impact of experimental variables (i.e. model, dose, time) on our comparative assessments. We evaluated common and unique aspects at both the functional (Gene Ontology) and gene level of MeHg-induced response. At the functional level, we observed stronger similarity in MeHg-response between mouse embryos exposed in utero (2 studies), ESC, and WEC as compared to liver, brain and mouse embryonic fibroblast MeHg studies. These findings were strongly correlated to the presence of a MeHg-induced developmentally related gene signature. In addition, we identified specific MeHg-induced gene expression alterations associated with developmental signaling and heart development across WEC, ESC and in vivo systems. However, the significance of overlap between studies was highly dependent on traditional experimental variables (i.e. dose, time). In summary, we identify promising examples of unique gene expression responses which show in vitro-in vivo similarities supporting the relevance of in vitro developmental models for predicting in vivo developmental toxicity. Copyright © 2011 Elsevier Inc. All rights reserved.
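
    One simple way to quantify the kind of gene-level overlap significance discussed above is a Fisher's exact test on two responsive-gene lists against a shared background. The sketch below uses invented counts purely for illustration; it is not the statistic applied in the study.

    ```python
    from scipy.stats import fisher_exact

    # Invented counts: MeHg-responsive genes in whole embryo culture (WEC) and
    # in embryos exposed in utero, from a shared background of assayed genes.
    background = 12000
    in_wec, in_vivo, shared = 800, 600, 120

    table = [[shared, in_wec - shared],
             [in_vivo - shared, background - in_wec - in_vivo + shared]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    print(f"overlap odds ratio {odds_ratio:.2f}, one-sided p = {p_value:.2e}")
    ```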

  19. Signal transduction disturbance related to hepatocarcinogenesis in mouse by prolonged exposure to Nanjing drinking water.

    PubMed

    Zhang, Rui; Sun, Jie; Zhang, Yan; Cheng, Shupei; Zhang, Xiaowei

    2013-09-01

    Toxicogenomic approaches were used to investigate the potential hepatocarcinogenic effects on mice of oral exposure to Nanjing drinking water (NJDW). Changes in the hepatic transcriptome of 3-week-old male mice (Mus musculus) were monitored and dissected after oral exposure to NJDW for 90 days. No preneoplastic or neoplastic lesions were observed in the hepatic tissue by the end of NJDW exposure. However, a total of 746 genes were transcriptionally changed. Thirty-one percent of the differentially expressed genes (DEGs) were associated with the functional categories of cell cycle regulation, adhesion, growth, apoptosis, and signal transduction, which are closely implicated in tumorigenesis and progression. Interrogation of the Kyoto Encyclopedia of Genes and Genomes revealed that 43 DEGs mapped to several crucial signaling pathways implicated in the pathogenesis of hepatocellular carcinoma (HCC). In the signal transduction network constructed with the Genes2Networks software, Egfr, Akt1, Atf2, Ctnnb1, Hras, Mapk1, Smad2, and Ccnd1 were hubs. Direct gene-disease relationships obtained from the Comparative Toxicogenomics Database and the scientific literature revealed that the hubs have direct mechanistic or biomarker relationships with hepatocellular preneoplastic lesions or hepatocarcinogenesis. Therefore, prolonged intake of NJDW without employing any indoor water treatment strategy might predispose mice to HCC. Furthermore, Egfr, Akt1, Ctnnb1, Hras, Mapk1, Smad2, and Ccnd1 were identified as promising biomarkers of the potential combined hepatocarcinogenicity.
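
    As a rough illustration of how hub genes can be flagged in a reconstructed signaling network, the sketch below ranks nodes by degree with networkx. The edges are invented placeholders, not the actual Genes2Networks output.

    ```python
    import networkx as nx

    # Invented edges among the hub genes named above; the study's network
    # came from Genes2Networks, not from this list.
    edges = [("Egfr", "Hras"), ("Hras", "Mapk1"), ("Mapk1", "Atf2"),
             ("Egfr", "Akt1"), ("Akt1", "Ccnd1"), ("Ctnnb1", "Ccnd1"),
             ("Smad2", "Ctnnb1"), ("Atf2", "Ccnd1"), ("Egfr", "Mapk1")]
    network = nx.Graph(edges)

    # Rank nodes by degree; the highest-degree nodes are candidate hubs.
    hubs = sorted(network.degree, key=lambda node_deg: node_deg[1], reverse=True)
    print(hubs[:5])
    ```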

  20. Use of Genomic Data in Risk Assessment Case Study: II. Evaluation of the Dibutyl Phthalate Toxicogenomic Dataset

    EPA Science Inventory

    An evaluation of the toxicogenomic data set for dibutyl phthalate (DBP) and male reproductive developmental effects was performed as part of a larger case study to test an approach for incorporating genomic data in risk assessment. The DBP toxicogenomic data set is composed of ni...

  1. Meeting Report: Validation of Toxicogenomics-Based Test Systems: ECVAM–ICCVAM/NICEATM Considerations for Regulatory Use

    PubMed Central

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H.; Clerici, Libero; Coecke, Sandra; Douglas, George R.; Gribaldo, Laura; Groten, John P.; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R.; Toda, Eisaku; Tong, Weida; van Delft, Joost H.; Weis, Brenda; Schechtman, Leonard M.

    2006-01-01

    This is the report of the first workshop “Validation of Toxicogenomics-Based Test Systems” held 11–12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities. PMID:16507466

  2. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use.

    PubMed

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H; Clerici, Libero; Coecke, Sandra; Douglas, George R; Gribaldo, Laura; Groten, John P; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R; Toda, Eisaku; Tong, Weida; van Delft, Joost H; Weis, Brenda; Schechtman, Leonard M

    2006-03-01

    This is the report of the first workshop "Validation of Toxicogenomics-Based Test Systems" held 11-12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities.

  3. Connection Map for Compounds (CMC): A Server for Combinatorial Drug Toxicity and Efficacy Analysis.

    PubMed

    Liu, Lei; Tsompana, Maria; Wang, Yong; Wu, Dingfeng; Zhu, Lixin; Zhu, Ruixin

    2016-09-26

    Drug discovery and development is a costly and time-consuming process with a high risk for failure resulting primarily from a drug's associated clinical safety and efficacy potential. Identifying and eliminating inapt candidate drugs as early as possible is an effective way to reduce unnecessary costs, but limited analytical tools are currently available for this purpose. Recent growth in the area of toxicogenomics and pharmacogenomics has provided a vast amount of drug expression microarray data. Web servers such as CMap and LTMap have used this information to evaluate drug toxicity and mechanisms of action independently; however, their wider applicability has been limited by the lack of a combinatorial drug-safety type of analysis. Using available genome-wide drug transcriptional expression profiles, we developed the first web server for combinatorial evaluation of toxicity and efficacy of candidate drugs named "Connection Map for Compounds" (CMC). Using CMC, researchers can initially compare their query drug gene signatures with prebuilt gene profiles generated from two large-scale toxicogenomics databases, and subsequently perform a drug efficacy analysis for identification of known mechanisms of drug action or generation of new predictions. CMC provides a novel approach for drug repositioning and early evaluation in drug discovery with its unique combination of toxicity and efficacy analyses, expansibility of data and algorithms, and customization of reference gene profiles. CMC can be freely accessed at http://cadd.tongji.edu.cn/webserver/CMCbp.jsp.
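
    The core operation behind servers of this kind is comparing a query gene signature against a library of reference profiles. The sketch below scores a query differential-expression vector against invented reference profiles with a rank correlation; it is a simplified stand-in, not the CMC scoring algorithm.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n_genes = 1000

    # Invented reference profiles (compound -> gene log-ratios) and a query signature.
    reference = {f"compound_{k}": rng.normal(size=n_genes) for k in range(50)}
    query = rng.normal(size=n_genes)

    # Rank correlation as a crude stand-in for a connectivity score.
    scores = {}
    for name, profile in reference.items():
        rho, _ = spearmanr(query, profile)
        scores[name] = rho

    best = sorted(scores.items(), key=lambda item: item[1], reverse=True)[:5]
    for name, score in best:
        print(name, round(score, 3))
    ```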

  4. Toward a Public Toxicogenomics Capability for Supporting ...

    EPA Pesticide Factsheets

    A publicly available toxicogenomics capability for supporting predictive toxicology and meta-analysis depends on availability of gene expression data for chemical treatment scenarios, the ability to locate and aggregate such information by chemical, and broad data coverage within chemical, genomics, and toxicological information domains. This capability also depends on common genomics standards, protocol description, and functional linkages of diverse public Internet data resources. We present a survey of public genomics resources from these vantage points and conclude that, despite progress in many areas, the current state of the majority of public microarray databases is inadequate for supporting these objectives, particularly with regard to chemical indexing. To begin to address these inadequacies, we focus chemical annotation efforts on experimental content contained in the two primary public genomic resources: ArrayExpress and Gene Expression Omnibus. Automated scripts and extensive manual review were employed to transform free-text experiment descriptions into a standardized, chemically indexed inventory of experiments in both resources. These files, which include top-level summary annotations, allow for identification of current chemical-associated experimental content, as well as chemical-exposure–related (or
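
    The simplest piece of the annotation workflow described above, matching chemical-name synonyms against free-text experiment descriptions, can be sketched as follows. The synonym dictionary and experiment records are invented examples, not ArrayExpress or GEO content.

    ```python
    import re

    # Invented synonym dictionary (name -> canonical chemical) and free-text records.
    synonyms = {"dibutyl phthalate": "DBP",
                "benzo[a]pyrene": "benzo[a]pyrene",
                "BaP": "benzo[a]pyrene",
                "fenofibrate": "fenofibrate"}
    experiments = {"E-1": "Rat liver response to fenofibrate, 2-day exposure",
                   "E-2": "Testis transcriptome after in utero dibutyl phthalate treatment"}

    chemical_index = {}
    for experiment_id, description in experiments.items():
        for name, canonical in synonyms.items():
            if re.search(re.escape(name), description, flags=re.IGNORECASE):
                chemical_index.setdefault(canonical, set()).add(experiment_id)
    print(chemical_index)
    ```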

  5. The Disease Portals, disease-gene annotation and the RGD disease ontology at the Rat Genome Database.

    PubMed

    Hayman, G Thomas; Laulederkind, Stanley J F; Smith, Jennifer R; Wang, Shur-Jen; Petri, Victoria; Nigam, Rajni; Tutaj, Marek; De Pons, Jeff; Dwinell, Melinda R; Shimoyama, Mary

    2016-01-01

    The Rat Genome Database (RGD;http://rgd.mcw.edu/) provides critical datasets and software tools to a diverse community of rat and non-rat researchers worldwide. To meet the needs of the many users whose research is disease oriented, RGD has created a series of Disease Portals and has prioritized its curation efforts on the datasets important to understanding the mechanisms of various diseases. Gene-disease relationships for three species, rat, human and mouse, are annotated to capture biomarkers, genetic associations, molecular mechanisms and therapeutic targets. To generate gene-disease annotations more effectively and in greater detail, RGD initially adopted the MEDIC disease vocabulary from the Comparative Toxicogenomics Database and adapted it for use by expanding this framework with the addition of over 1000 terms to create the RGD Disease Ontology (RDO). The RDO provides the foundation for, at present, 10 comprehensive disease area-related dataset and analysis platforms at RGD, the Disease Portals. Two major disease areas are the focus of data acquisition and curation efforts each year, leading to the release of the related Disease Portals. Collaborative efforts to realize a more robust disease ontology are underway. Database URL:http://rgd.mcw.edu. © The Author(s) 2016. Published by Oxford University Press.

  6. Use of genomic data in risk assessment case study: II. Evaluation of the dibutyl phthalate toxicogenomic data set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Euling, Susan Y., E-mail: euling.susan@epa.gov; White, Lori D.; Kim, Andrea S.

    An evaluation of the toxicogenomic data set for dibutyl phthalate (DBP) and male reproductive developmental effects was performed as part of a larger case study to test an approach for incorporating genomic data in risk assessment. The DBP toxicogenomic data set is composed of nine in vivo studies from the published literature that exposed rats to DBP during gestation and evaluated gene expression changes in testes or Wolffian ducts of male fetuses. The exercise focused on qualitative evaluation, based on a lack of available dose–response data, of the DBP toxicogenomic data set to postulate modes and mechanisms of action for the male reproductive developmental outcomes, which occur in the lower dose range. A weight-of-evidence evaluation was performed on the eight DBP toxicogenomic studies of the rat testis at the gene and pathway levels. The results showed relatively strong evidence of DBP-induced downregulation of genes in the steroidogenesis pathway and lipid/sterol/cholesterol transport pathway as well as effects on immediate early gene/growth/differentiation, transcription, peroxisome proliferator-activated receptor signaling and apoptosis pathways in the testis. Since two established modes of action (MOAs), reduced fetal testicular testosterone production and Insl3 gene expression, explain some but not all of the testis effects observed in rats after in utero DBP exposure, other MOAs are likely to be operative. A reanalysis of one DBP microarray study identified additional pathways within cell signaling, metabolism, hormone, disease, and cell adhesion biological processes. These putative new pathways may be associated with DBP effects on the testes that are currently unexplained. This case study on DBP identified data gaps and research needs for the use of toxicogenomic data in risk assessment. Furthermore, this study demonstrated an approach for evaluating toxicogenomic data in human health risk assessment that could be applied to future chemicals. - Highlights: ► We evaluate the dibutyl phthalate toxicogenomic data for use in risk assessment. ► We focus on information about the mechanism of action for the developing testis. ► Multiple studies report effects on testosterone and insl3-related pathways. ► We identify additional affected pathways that may explain some testis effects. ► The case study is a template for evaluating toxicogenomic data in risk assessment.

  7. Integrating genetic and toxicogenomic information for determining underlying susceptibility to developmental disorders.

    PubMed

    Robinson, Joshua F; Port, Jesse A; Yu, Xiaozhong; Faustman, Elaine M

    2010-10-01

    To understand the complex etiology of developmental disorders, an understanding of both genetic and environmental risk factors is needed. Human and rodent genetic studies have identified a multitude of gene candidates for specific developmental disorders such as neural tube defects (NTDs). With the emergence of toxicogenomic-based assessments, scientists now also have the ability to compare and understand the expression of thousands of genes simultaneously across strain, time, and exposure in developmental models. Using a systems-based approach in which we are able to evaluate information from various parts and levels of the developing organism, we propose a framework for integrating genetic information with toxicogenomic-based studies to better understand gene-environmental interactions critical for developmental disorders. This approach has allowed us to characterize candidate genes in the context of variables critical for determining susceptibility such as strain, time, and exposure. Using a combination of toxicogenomic studies and complementary bioinformatic tools, we characterize NTD candidate genes during normal development by function (gene ontology), linked phenotype (disease outcome), location, and expression (temporally and strain-dependent). In addition, we show how environmental exposures (cadmium, methylmercury) can influence expression of these genes in a strain-dependent manner. Using NTDs as an example of developmental disorder, we show how simple integration of genetic information from previous studies into the standard microarray design can enhance analysis of gene-environment interactions to better define environmental exposure-disease pathways in sensitive and resistant mouse strains. © Wiley-Liss, Inc.

  8. TOXICOGENOMIC STUDY OF TRIAZOLE FUNGICIDES AND PERFLUOROALKYL ACIDS

    EPA Science Inventory

    Toxicogenomic analysis of five environmental contaminants was performed to investigate the ability of genomics to categorize chemicals and elucidate mechanisms of toxicity. Three triazole antifungals (myclobutanil, propiconazole and triadimefon) and two perfluorinated compounds (...

  9. Disease model curation improvements at Mouse Genome Informatics

    PubMed Central

    Bello, Susan M.; Richardson, Joel E.; Davis, Allan P.; Wiegers, Thomas C.; Mattingly, Carolyn J.; Dolan, Mary E.; Smith, Cynthia L.; Blake, Judith A.; Eppig, Janan T.

    2012-01-01

    Optimal curation of human diseases requires an ontology or structured vocabulary that contains terms familiar to end users, is robust enough to support multiple levels of annotation granularity, is limited to disease terms and is stable enough to avoid extensive reannotation following updates. At Mouse Genome Informatics (MGI), we currently use disease terms from Online Mendelian Inheritance in Man (OMIM) to curate mouse models of human disease. While OMIM provides highly detailed disease records that are familiar to many in the medical community, it lacks structure to support multilevel annotation. To improve disease annotation at MGI, we evaluated the merged Medical Subject Headings (MeSH) and OMIM disease vocabulary created by the Comparative Toxicogenomics Database (CTD) project. Overlaying MeSH onto OMIM provides hierarchical access to broad disease terms, a feature missing from the OMIM. We created an extended version of the vocabulary to meet the genetic disease-specific curation needs at MGI. Here we describe our evaluation of the CTD application, the extensions made by MGI and discuss the strengths and weaknesses of this approach. Database URL: http://www.informatics.jax.org/ PMID:22434831

  10. Toxicogenomics and the Regulatory Framework

    EPA Science Inventory

    Toxicogenomics presents regulatory agencies with the opportunity to revolutionize their analyses by enabling the collection of information on a broader range of responses than currently considered in traditional regulatory decision making. Analyses of genomic responses are expec...

  11. EPA'S TOXICOGENOMICS PARTNERSHIPS ACROSS GOVERNMENT, ACADEMIA AND INDUSTRY

    EPA Science Inventory

    Genomics, proteomics and metabonomics technologies are transforming the science of toxicology, and concurrent advances in computing and informatics are providing management and analysis solutions for this onslaught of toxicogenomic data. EPA has been actively developing an intra...

  12. TOXICOGENOMICS AND HUMAN DISEASE RISK ASSESSMENT

    EPA Science Inventory

    Toxicogenomics and Human Disease Risk Assessment. Complete sequencing of human and other genomes, availability of large-scale gene expression arrays with ever-increasing numbers of genes displayed, and steady improvements in protein expression technology can hav...

  13. Integrating toxicogenomics data into cancer adverse outcome pathways

    EPA Science Inventory

    Integrating toxicogenomics data into adverse outcome pathways for cancer. J. Christopher Corton, NHEERL/ORD, EPA, Research Triangle Park, NC. As the toxicology field continues to move towards a new paradigm in toxicity testing and safety assessment, there is the expectation that model...

  14. EPA SCIENCE FORUM - EPA'S TOXICOGENOMICS PARTNERSHIPS ACROSS GOVERNMENT, ACADEMIA AND INDUSTRY

    EPA Science Inventory

    Over the past decade genomics, proteomics and metabonomics technologies have transformed the science of toxicology, and concurrent advances in computing and informatics have provided management and analysis solutions for this onslaught of toxicogenomic data. EPA has been actively...

  15. Toxicogenomics concepts and applications to study hepatic effects of food additives and chemicals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stierum, Rob; Heijne, Wilbert; Kienhuis, Anne

    2005-09-01

    Transcriptomics, proteomics and metabolomics are genomics technologies with great potential in toxicological sciences. Toxicogenomics involves the integration of conventional toxicological examinations with gene, protein or metabolite expression profiles. An overview together with selected examples of the possibilities of genomics in toxicology is given. The expectations raised by toxicogenomics are earlier and more sensitive detection of toxicity. Furthermore, toxicogenomics will provide a better understanding of the mechanism of toxicity and may facilitate the prediction of toxicity of unknown compounds. Mechanism-based markers of toxicity can be discovered and improved interspecies and in vitro-in vivo extrapolations will drive model developments in toxicology. Toxicological assessment of chemical mixtures will benefit from the new molecular biological tools. In our laboratory, toxicogenomics is predominantly applied for elucidation of mechanisms of action and discovery of novel pathway-supported mechanism-based markers of liver toxicity. In addition, we aim to integrate transcriptome, proteome and metabolome data, supported by bioinformatics to develop a systems biology approach for toxicology. Transcriptomics and proteomics studies on bromobenzene-mediated hepatotoxicity in the rat are discussed. Finally, an example is shown in which gene expression profiling together with conventional biochemistry led to the discovery of novel markers for the hepatic effects of the food additives butylated hydroxytoluene, curcumin, propyl gallate and thiabendazole.

  16. Evaluation of sequencing approaches for high-throughput toxicogenomics (SOT)

    EPA Science Inventory

    Whole-genome in vitro transcriptomics has shown the capability to identify mechanisms of action and estimates of potency for chemical-mediated effects in a toxicological framework, but with limited throughput and high cost. We present the evaluation of three toxicogenomics platfo...

  17. Developing Computational Tools for Application of Toxicogenomics to Environmental Regulations and Risk Assessment

    EPA Science Inventory

    Toxicogenomics is the study of changes in gene expression, protein, and metabolite profiles within cells and tissues, complementary to more traditional toxicological methods. Genomics tools provide detailed molecular data about the underlying biochemical mechanisms of toxicity, a...

  18. Toxicogenomics and cancer risk assessment: a framework for key event analysis and dose-response assessment for nongenotoxic carcinogens.

    PubMed

    Bercu, Joel P; Jolly, Robert A; Flagella, Kelly M; Baker, Thomas K; Romero, Pedro; Stevens, James L

    2010-12-01

    In order to determine a threshold for nongenotoxic carcinogens, the traditional risk assessment approach has been to identify a mode of action (MOA) with a nonlinear dose-response. The dose-response for one or more key event(s) linked to the MOA for carcinogenicity allows a point of departure (POD) to be selected from the most sensitive effect dose or no-effect dose. However, this can be challenging because multiple MOAs and key events may exist for carcinogenicity and oftentimes extensive research is required to elucidate the MOA. In the present study, a microarray analysis was conducted to determine if a POD could be identified following short-term oral rat exposure with two nongenotoxic rodent carcinogens, fenofibrate and methapyrilene, using a benchmark dose analysis of genes aggregated in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways and Gene Ontology (GO) biological processes, which likely encompass key event(s) for carcinogenicity. The gene expression response for fenofibrate given to rats for 2 days was consistent with its MOA and known key events linked to PPARα activation. The temporal response from daily dosing with methapyrilene demonstrated biological complexity with waves of pathways/biological processes occurring over 1, 3, and 7 days; nonetheless, the benchmark dose values were consistent over time. When comparing the dose-response of toxicogenomic data to tumorigenesis or precursor events, the toxicogenomics POD was slightly below any effect level. Our results suggest that toxicogenomic analysis using short-term studies can be used to identify a threshold for nongenotoxic carcinogens based on evaluation of potential key event(s) which then can be used within a risk assessment framework. Copyright © 2010 Elsevier Inc. All rights reserved.
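
    A benchmark dose for a gene- or pathway-level response can be obtained by fitting a dose-response curve and solving for the dose that yields a chosen benchmark response. The sketch below fits a Hill model with SciPy and uses a 10% benchmark response; the numbers are illustrative, and this is not the benchmark dose software used in the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    def hill(dose, bottom, top, ec50, n):
        """Four-parameter Hill dose-response curve."""
        return bottom + (top - bottom) * dose**n / (ec50**n + dose**n)

    # Illustrative pathway-level response (e.g. mean |log2 fold change|) by dose.
    dose = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0])
    resp = np.array([0.02, 0.05, 0.15, 0.60, 1.10, 1.30])

    params, _ = curve_fit(hill, dose, resp,
                          p0=[0.01, 1.3, 1.0, 1.5],
                          bounds=([0.0, 0.0, 1e-3, 0.5], [1.0, 5.0, 100.0, 10.0]))
    bottom, top = params[0], params[1]
    bmr = 0.1 * (top - bottom)                      # 10% of the fitted dynamic range
    bmd = brentq(lambda d: hill(d, *params) - (bottom + bmr), 1e-6, dose.max())
    print(f"benchmark dose ~ {bmd:.3f} (illustrative units)")
    ```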

  19. CONCEPTUAL FRAMEWORK FOR THE CHEMICAL EFFECTS IN BIOLOGICAL SYSTEMS (CEBS) TOXICOGENOMICS KNOWLEDGE BASE

    EPA Science Inventory

    Conceptual Framework for the Chemical Effects in Biological Systems (CEBS) Toxicogenomics Knowledge Base. Toxicogenomics studies how the genome is involved in responses to environmental stressors or toxicants. It combines genetics, genome-scale mRNA expressio...

  20. TOXICOGENOMIC STUDY OF TRIAZOLE FUNGICIDES AND PERFLUOROALKYL ACIDS IN RAT LIVERS ACCURATELY CATEGORIZES CHEMICALS AND IDENTIFIES MECHANISMS OF TOXICITY

    EPA Science Inventory

    Toxicogenomic analysis of five environmental chemicals was performed to investigate the ability of genomics to predict toxicity, categorize chemicals, and elucidate mechanisms of toxicity. Three triazole antifungals (myclobutanil, propiconazole, and triadimefon) and two perfluori...

  1. Comparative analysis of predictive models for nongenotoxic hepatocarcinogenicity using both toxicogenomics and quantitative structure-activity relationships.

    PubMed

    Liu, Zhichao; Kelly, Reagan; Fang, Hong; Ding, Don; Tong, Weida

    2011-07-18

    The primary testing strategy to identify nongenotoxic carcinogens largely relies on the 2-year rodent bioassay, which is time-consuming and labor-intensive. There is an increasing effort to develop alternative approaches to prioritize chemicals for the cancer bioassay, to supplement it, or even to replace it. In silico approaches based on quantitative structure-activity relationships (QSAR) are rapid and inexpensive and thus have been investigated for such purposes. A slightly more expensive approach based on short-term animal studies with toxicogenomics (TGx) represents another attractive option for this application. Thus, the primary questions are how much better predictive performance using short-term TGx models can be achieved compared to that of QSAR models, and what length of exposure is sufficient for high-quality prediction based on TGx. In this study, we developed predictive models for rodent liver carcinogenicity using gene expression data generated from short-term animal models at different time points and QSAR. The study was focused on the prediction of nongenotoxic carcinogenicity since the genotoxic chemicals can be inexpensively removed from further development using various in vitro assays individually or in combination. We identified 62 chemicals whose hepatocarcinogenic potential was available from the National Center for Toxicological Research liver cancer database (NCTRlcdb). The gene expression profiles of liver tissue obtained from rats treated with these chemicals at different time points (1 day, 3 days, and 5 days) are available from the Gene Expression Omnibus (GEO) database. Both TGx and QSAR models were developed on the basis of the same set of chemicals using the same modeling approach, a nearest-centroid method with minimum redundancy-maximum relevancy-based feature selection, with performance assessed using compound-based 5-fold cross-validation. We found that the TGx models outperformed QSAR in every aspect of modeling. For example, the TGx models' predictive accuracy (0.77, 0.77, and 0.82 for the 1-day, 3-day, and 5-day models, respectively) was much higher for an independent validation set than that of a QSAR model (0.55). Permutation tests confirmed the statistical significance of the model's prediction performance. The study concluded that a short-term 5-day TGx animal model holds the potential to predict nongenotoxic hepatocarcinogenicity. © 2011 American Chemical Society.
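
    The modeling approach named above, nearest-centroid classification with feature selection and 5-fold cross-validation, can be sketched with scikit-learn as below. Synthetic data stand in for the NCTRlcdb/GEO profiles, and a univariate ANOVA filter stands in for the mRMR feature selection used in the study.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neighbors import NearestCentroid
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    # Synthetic stand-in: 62 compounds x 2000 gene-expression features,
    # with a binary nongenotoxic-hepatocarcinogenicity label.
    X = rng.normal(size=(62, 2000))
    y = rng.integers(0, 2, size=62)
    X[y == 1, :25] += 0.8                          # plant a weak signal in 25 genes

    # ANOVA filter stands in for mRMR; nearest-centroid is the classifier named above.
    model = make_pipeline(SelectKBest(f_classif, k=50), NearestCentroid())
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv)
    print("5-fold accuracies:", scores.round(2), "mean:", round(scores.mean(), 2))
    ```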

  2. SOURCES OF VARIATION IN BASELINE GENE EXPRESSION LEVELS FROM TOXICOGENOMIC STUDY CONTROL ANIMALS ACROSS MULTIPLE LABORATORIES

    EPA Science Inventory

    Variations in study design are typical for toxicogenomic studies, but their impact on gene expression in control animals has not been well characterized. A dataset of control animal microarray expression data was assembled by a working group of the Health and Environmental Scienc...

  3. Pathway Analysis Revealed Potential Diverse Health Impacts of Flavonoids that Bind Estrogen Receptors

    PubMed Central

    Ye, Hao; Ng, Hui Wen; Sakkiah, Sugunadevi; Ge, Weigong; Perkins, Roger; Tong, Weida; Hong, Huixiao

    2016-01-01

    Flavonoids are frequently used as dietary supplements in the absence of research evidence regarding health benefits or toxicity. Furthermore, ingested doses could far exceed those received from diet in the course of normal living. Some flavonoids exhibit binding to estrogen receptors (ERs) with consequential vigilance by regulatory authorities at the U.S. EPA and FDA. Regulatory authorities must consider both beneficial claims and potential adverse effects, warranting the increase in research that has spanned almost two decades. Here, we report pathway enrichment of 14 targets from the Comparative Toxicogenomics Database (CTD) and the Herbal Ingredients’ Targets (HIT) database for 22 flavonoids that bind ERs. The selected flavonoids are confirmed ER binders from our earlier studies, and were found here to be mainly involved in three types of biological processes: ER regulation, estrogen metabolism and synthesis, and apoptosis. Besides cancers, we conjecture that the flavonoids may affect several diseases via apoptosis pathways. Diseases such as amyotrophic lateral sclerosis, viral myocarditis and non-alcoholic fatty liver disease could be implicated. More generally, apoptosis processes may be importantly evolved biological functions of flavonoids that bind ERs, and high-dose ingestion of those flavonoids could adversely disrupt the cellular apoptosis process. PMID:27023590
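
    Pathway enrichment of a small target list is commonly tested with the hypergeometric distribution. The sketch below shows the calculation for one hypothetical pathway; the counts are invented and do not come from CTD or HIT.

    ```python
    from scipy.stats import hypergeom

    # Invented counts: N background genes, K of them in one pathway,
    # n target genes drawn from CTD/HIT, k of those targets in the pathway.
    N, K, n, k = 20000, 90, 14, 3

    # P(X >= k) when drawing n genes without replacement from the background.
    p_enrichment = hypergeom.sf(k - 1, N, K, n)
    print(f"enrichment p-value = {p_enrichment:.3e}")
    ```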

  4. Toward a Public Toxicogenomics Capability for Supporting Predictive Toxicology: Survey of Current Resources and Chemical Indexing of Experiments in GEO and ArrayExpress

    EPA Science Inventory

    A publicly available toxicogenomics capability for supporting predictive toxicology and meta-analysis depends on availability of gene expression data for chemical treatment scenarios, the ability to locate and aggregate such information by chemical, and broad data coverage within...

  5. US FDA and USA EPA Voluntary Submission of Genomic Data Guidance: Current and Future Use of Genomics in Decision Making

    EPA Science Inventory

    Appropriate utilization of data from toxicogenomic studies is an ongoing concern of the regulated industries and the agencies charged with assessing safety or risk. An area of current interest is the potential for toxicogenomics to enhance our ability to develop higher or high-...

  6. Reconciled rat and human metabolic networks for comparative toxicogenomics and biomarker predictions

    PubMed Central

    Blais, Edik M.; Rawls, Kristopher D.; Dougherty, Bonnie V.; Li, Zhuo I.; Kolling, Glynis L.; Ye, Ping; Wallqvist, Anders; Papin, Jason A.

    2017-01-01

    The laboratory rat has been used as a surrogate to study human biology for more than a century. Here we present the first genome-scale network reconstruction of Rattus norvegicus metabolism, iRno, and a significantly improved reconstruction of human metabolism, iHsa. These curated models comprehensively capture metabolic features known to distinguish rats from humans including vitamin C and bile acid synthesis pathways. After reconciling network differences between iRno and iHsa, we integrate toxicogenomics data from rat and human hepatocytes, to generate biomarker predictions in response to 76 drugs. We validate comparative predictions for xanthine derivatives with new experimental data and literature-based evidence delineating metabolite biomarkers unique to humans. Our results provide mechanistic insights into species-specific metabolism and facilitate the selection of biomarkers consistent with rat and human biology. These models can serve as powerful computational platforms for contextualizing experimental data and making functional predictions for clinical and basic science applications. PMID:28176778
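
    Reconstructions such as iRno and iHsa are typically distributed as SBML files and explored with constraint-based tools. The sketch below shows a minimal flux balance analysis with COBRApy, assuming a local SBML export saved as "iRno.xml" (a placeholder path, not a documented file name).

    ```python
    # Minimal flux balance analysis sketch; "iRno.xml" is a placeholder path for a
    # local SBML export of the rat reconstruction, not a documented file name.
    import cobra

    model = cobra.io.read_sbml_model("iRno.xml")
    print(len(model.reactions), "reactions,", len(model.metabolites), "metabolites")

    # Optimize the model's default objective to check that the network carries flux.
    solution = model.optimize()
    print("objective value:", solution.objective_value)
    ```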

  7. Using Bioinformatic Approaches to Identify Pathways Targeted by Human Leukemogens

    PubMed Central

    Thomas, Reuben; Phuong, Jimmy; McHale, Cliona M.; Zhang, Luoping

    2012-01-01

    We have applied bioinformatic approaches to identify pathways common to chemical leukemogens and to determine whether leukemogens could be distinguished from non-leukemogenic carcinogens. From all known and probable carcinogens classified by IARC and NTP, we identified 35 carcinogens that were associated with leukemia risk in human studies and 16 non-leukemogenic carcinogens. Using data on gene/protein targets available in the Comparative Toxicogenomics Database (CTD) for 29 of the leukemogens and 11 of the non-leukemogenic carcinogens, we analyzed for enrichment of all 250 human biochemical pathways in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The top pathways targeted by the leukemogens included metabolism of xenobiotics by cytochrome P450, glutathione metabolism, neurotrophin signaling pathway, apoptosis, MAPK signaling, Toll-like receptor signaling and various cancer pathways. The 29 leukemogens formed 18 distinct clusters comprising 1 to 3 chemicals that did not correlate with known mechanism of action or with structural similarity as determined by 2D Tanimoto coefficients in the PubChem database. Unsupervised clustering and one-class support vector machines, based on the pathway data, were unable to distinguish the 29 leukemogens from 11 non-leukemogenic known and probable IARC carcinogens. However, using two-class random forests to estimate leukemogen and non-leukemogen patterns, we estimated a 76% chance of distinguishing a random leukemogen/non-leukemogen pair from each other. PMID:22851955
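
    The pairwise-discrimination figure quoted above corresponds to an area under the ROC curve. The sketch below estimates such an AUC with a two-class random forest on synthetic pathway-score features; the class sizes mirror the 29 leukemogens and 11 non-leukemogens, but the data are random stand-ins.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Synthetic stand-in: 29 leukemogens and 11 non-leukemogens x 250 pathway scores.
    X = rng.normal(size=(40, 250))
    y = np.array([1] * 29 + [0] * 11)
    X[y == 1, :10] += 0.7                          # plant a weak class signal

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

    # AUC = probability of ranking a random leukemogen above a random non-leukemogen.
    print("cross-validated AUC:", round(roc_auc_score(y, proba), 2))
    ```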

  8. USE OF TOXICOGENOMICS DATA IN RISK ASSESSMENT: CASE STUDY FOR A CHEMICAL IN THE ANDROGEN-MEDIATED MALE REPRODUCTIVE DEVELOPMENT TOXICITY PATHWAY

    EPA Science Inventory

    The goal of this project is to address the question, “Can existing toxicogenomics (TG) data improve Environmental Protection Agency (EPA) chemical health or risk assessments?” Although genomics data promises to impact multiple areas of science, medicine, law, and policy, there ar...

  9. 75 FR 1770 - An Approach to Using Toxicogenomic Data in U.S. EPA Human Health Risk Assessments: A Dibutyl...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-13

    ... qualitative aspects of the risk assessment because of the type of genomic data available for DBP. It is... Assessment (NCEA) within EPA's Office of Research and Development (ORD). Toxicogenomics is the application of... exploratory methods for analyzing genomic data for application to risk assessment and some preliminary results...

  10. An Approach to Using Toxicogenomic Data in US EPA Human ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, An Approach to Using Toxicogenomic Data in U.S. EPA Human Health Risk Assessments: A Dibutyl Phthalate Case Study. This report outlines an approach to evaluate genomic data for use in risk assessment and a case study to illustrate the approach. The dibutyl phthalate (DBP) case study example focuses on male reproductive developmental effects and the qualitative application of genomic data because of the available data on DBP. The case study presented in this report is a separate activity from any of the ongoing IRIS human health assessments for the phthalates. The National Center for Environmental Assessment (NCEA) prepared this document for the purpose of describing and illustrating an approach for using toxicogenomic data in risk assessment.

  11. Toxicogenomics to Evaluate Endocrine Disrupting Effects of Environmental Chemicals Using the Zebrafish Model

    PubMed Central

    Caballero-Gallardo, Karina; Olivero-Verbel, Jesus; Freeman, Jennifer L.

    2016-01-01

    The extent of our knowledge on the number of chemical compounds related to anthropogenic activities that can cause damage to the environment and to organisms is increasing. Endocrine disrupting chemicals (EDCs) are one group of potentially hazardous substances that include natural and synthetic chemicals and have the ability to mimic endogenous hormones, interfering with their biosynthesis, metabolism, and normal functions. Adverse effects associated with EDC exposure have been documented in aquatic biota and there is widespread interest in the characterization and understanding of their modes of action. Fish are considered one of the primary risk organisms for EDCs. Zebrafish (Danio rerio) are increasingly used as an animal model to study the effects of endocrine disruptors, due to their advantages compared to other model organisms. One approach to assess the toxicity of a compound is to identify those patterns of gene expression found in a tissue or organ exposed to particular classes of chemicals, through new technologies in genomics (toxicogenomics), such as microarrays or whole-genome sequencing. Application of these technologies permit the quantitative analysis of thousands of gene expression changes simultaneously in a single experiment and offer the opportunity to use transcript profiling as a tool to predict toxic outcomes of exposure to particular compounds. The application of toxicogenomic tools for identification of chemicals with endocrine disrupting capacity using the zebrafish model system is reviewed. PMID:28217008

  12. Natural Variation in Fish Transcriptomes: Comparative Analysis of the Fathead Minnow (Pimephales promelas) and Zebrafish (Danio rerio)

    EPA Science Inventory

    Fathead minnow and zebrafish are among the most intensively studied fish species in environmental toxicogenomics. To aid the assessment and interpretation of subtle transcriptomic effects from treatment conditions of interest, there needs to be a better characterization and unde...

  13. An Approach to Using Toxicogenomic Data in U.S. EPA Human Health Risk Assessments: A Dibutyl Phthalate Case Study (Final Report, 2010)

    EPA Science Inventory

    EPA announced the availability of the final report, An Approach to Using Toxicogenomic Data in U.S. EPA Human Health Risk Assessments: A Dibutyl Phthalate Case Study. This report outlines an approach to evaluate genomic data for use in risk assessment and a case study to ...

  14. Lynx web services for annotations and systems analysis of multi-gene disorders.

    PubMed

    Sulakhe, Dinanath; Taylor, Andrew; Balasubramanian, Sandhya; Feng, Bo; Xie, Bingqing; Börnigen, Daniela; Dave, Utpal J; Foster, Ian T; Gilliam, T Conrad; Maltsev, Natalia

    2014-07-01

    Lynx is a web-based integrated systems biology platform that supports annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Lynx has integrated multiple classes of biomedical data (genomic, proteomic, pathways, phenotypic, toxicogenomic, contextual and others) from various public databases as well as manually curated data from our group and collaborators (LynxKB). Lynx provides tools for gene list enrichment analysis using multiple functional annotations and network-based gene prioritization. Lynx provides access to the integrated database and the analytical tools via REST based Web Services (http://lynx.ci.uchicago.edu/webservices.html). This comprises data retrieval services for specific functional annotations, services to search across the complete LynxKB (powered by Lucene), and services to access the analytical tools built within the Lynx platform. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Inferring drug-disease associations based on known protein complexes.

    PubMed

    Yu, Liang; Huang, Jianbin; Ma, Zhixin; Zhang, Jing; Zou, Yapeng; Gao, Lin

    2015-01-01

    Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as in discovering novel functions of available drugs, or drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discard much important information, since genes execute their functions through interactions with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where we assign weights to drug-disease associations by using probability. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases. The larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that our proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html.
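
    Propagating weights from drugs to diseases through shared protein complexes amounts to combining two weighted bipartite relations. The sketch below does this with a simple matrix product on invented weights; the published method's exact probability weighting is not reproduced here.

    ```python
    import numpy as np

    # Invented weighted bipartite relations:
    # rows of drug_complex are drugs, columns are protein complexes;
    # rows of complex_disease are the same complexes, columns are diseases.
    drug_complex = np.array([[0.8, 0.0, 0.3],
                             [0.0, 0.6, 0.5]])
    complex_disease = np.array([[0.9, 0.1],
                                [0.2, 0.7],
                                [0.4, 0.4]])

    # Indirect drug-disease weights accumulate over all shared complexes.
    drug_disease = drug_complex @ complex_disease
    print(drug_disease)   # larger entries = better-supported associations
    ```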

  16. Inferring drug-disease associations based on known protein complexes

    PubMed Central

    2015-01-01

    Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as in discovering novel functions of available drugs, or drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discard much important information, since genes execute their functions through interactions with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where we assign weights to drug-disease associations by using probability. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases. The larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that our proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html. PMID:26044949

  17. Intersection of toxicogenomics and high throughput screening in the Tox21 program: an NIEHS perspective.

    PubMed

    Merrick, B Alex; Paules, Richard S; Tice, Raymond R

    Humans are exposed to thousands of chemicals with inadequate toxicological data. Advances in computational toxicology, robotic high throughput screening (HTS), and genome-wide expression have been integrated into the Tox21 program to better predict the toxicological effects of chemicals. Tox21 is a collaboration among US government agencies initiated in 2008 that aims to shift chemical hazard assessment from traditional animal toxicology to target-specific, mechanism-based, biological observations using in vitro assays and lower organism models. HTS uses biocomputational methods for probing thousands of chemicals in in vitro assays for gene-pathway response patterns predictive of adverse human health outcomes. In 1999, NIEHS began exploring the application of toxicogenomics to toxicology and recent advances in NextGen sequencing should greatly enhance the biological content obtained from HTS platforms. We foresee an intersection of new technologies in toxicogenomics and HTS as an innovative development in Tox21. Tox21 goals, priorities, progress, and challenges will be reviewed.

  18. The Metamorphosis of Amphibian Toxicogenomics

    PubMed Central

    Helbing, Caren C.

    2012-01-01

    Amphibians are important vertebrates in toxicology often representing both aquatic and terrestrial forms within the life history of the same species. Of the thousands of species, only two have substantial genomics resources: the recently published genome of the Pipid, Xenopus (Silurana) tropicalis, and transcript information (and ongoing genome sequencing project) of Xenopus laevis. However, many more species representative of regional ecological niches and life strategies are used in toxicology worldwide. Since Xenopus species diverged from the most populous frog family, the Ranidae, ~200 million years ago, there are notable differences between them and the even more distant Caudates (salamanders) and Caecilians. These differences include genome size, gene composition, and extent of polyploidization. Application of toxicogenomics to amphibians requires the mobilization of resources and expertise to develop de novo sequence assemblies and analysis strategies for a broader range of amphibian species. The present mini-review will present the advances in toxicogenomics as pertains to amphibians with particular emphasis upon the development and use of genomic techniques (inclusive of transcriptomics, proteomics, and metabolomics) and the challenges inherent therein. PMID:22435070

  19. Yeast Toxicogenomics: Genome-Wide Responses to Chemical Stresses with Impact in Environmental Health, Pharmacology, and Biotechnology

    PubMed Central

    dos Santos, Sandra C.; Teixeira, Miguel Cacho; Cabrito, Tânia R.; Sá-Correia, Isabel

    2012-01-01

    The emerging transdisciplinary field of Toxicogenomics aims to study the cell response to a given toxicant at the genome, transcriptome, proteome, and metabolome levels. This approach is expected to provide earlier and more sensitive biomarkers of toxicological responses and help in the delineation of regulatory risk assessment. The use of model organisms to gather such genomic information, through the exploitation of Omics and Bioinformatics approaches and tools, together with more focused molecular and cellular biology studies are rapidly increasing our understanding and providing an integrative view on how cells interact with their environment. The use of the model eukaryote Saccharomyces cerevisiae in the field of Toxicogenomics is discussed in this review. Despite the limitations intrinsic to the use of such a simple single cell experimental model, S. cerevisiae appears to be very useful as a first screening tool, limiting the use of animal models. Moreover, it is also one of the most interesting systems to obtain a truly global understanding of the toxicological response and resistance mechanisms, being in the frontline of systems biology research and developments. The impact of the knowledge gathered in the yeast model, through the use of Toxicogenomics approaches, is highlighted here by its use in prediction of toxicological outcomes of exposure to pesticides and pharmaceutical drugs, but also by its impact in biotechnology, namely in the development of more robust crops and in the improvement of yeast strains as cell factories. PMID:22529852

  20. Impact of Genomics Platform and Statistical Filtering on Transcriptional Benchmark Doses (BMD) and Multiple Approaches for Selection of Chemical Point of Departure (PoD)

    PubMed Central

    Webster, A. Francina; Chepelev, Nikolai; Gagné, Rémi; Kuo, Byron; Recio, Leslie; Williams, Andrew; Yauk, Carole L.

    2015-01-01

    Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day, mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modelling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses. PMID:26313361
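
    The "statistical filtering before BMD modeling" step described above can be sketched as a per-gene dose-effect test with multiple-testing correction, after which only the flagged genes would be passed on to dose-response modeling. The data below are synthetic; the dose groups follow the abstract, but the group sizes and gene counts are invented, and the filter shown (ANOVA with Benjamini-Hochberg correction) is one reasonable stand-in rather than the study's exact procedure.

    ```python
    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    doses = [0, 2, 4, 8]            # mg/kg/day furan groups, as in the abstract
    n_per_group, n_genes = 4, 3000  # group size and gene count are invented

    # Synthetic expression: one (genes x replicates) block per dose group.
    data = {d: rng.normal(size=(n_genes, n_per_group)) for d in doses}
    data[8][:100] += 1.0            # plant a dose effect in the first 100 genes

    pvals = np.array([f_oneway(*(data[d][g] for d in doses)).pvalue
                      for g in range(n_genes)])
    keep = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]
    print("genes passing the filter (candidates for BMD modeling):", int(keep.sum()))
    ```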

  1. Discriminating between adaptive and carcinogenic liver hypertrophy in rat studies using logistic ridge regression analysis of toxicogenomic data: The mode of action and predictive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Shujie; Kawamoto, Taisuke; Morita, Osamu

    Chemical exposure often results in liver hypertrophy in animal tests, characterized by increased liver weight, hepatocellular hypertrophy, and/or cell proliferation. While most of these changes are considered adaptive responses, there is concern that they may be associated with carcinogenesis. In this study, we have employed a toxicogenomic approach using a logistic ridge regression model to identify genes responsible for liver hypertrophy and hypertrophic hepatocarcinogenesis and to develop a predictive model for assessing hypertrophy-inducing compounds. Logistic regression models have previously been used in the quantification of epidemiological risk factors. DNA microarray data from the Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System were used to identify hypertrophy-related genes that are expressed differently in hypertrophy induced by carcinogens and non-carcinogens. Data were collected for 134 chemicals (72 non-hypertrophy-inducing chemicals, 27 hypertrophy-inducing non-carcinogenic chemicals, and 15 hypertrophy-inducing carcinogenic compounds). After applying logistic ridge regression analysis, 35 genes for liver hypertrophy (e.g., Acot1 and Abcc3) and 13 genes for hypertrophic hepatocarcinogenesis (e.g., Asns and Gpx2) were selected. The predictive models built using these genes were 94.8% and 82.7% accurate, respectively. Pathway analysis of the genes indicates that, aside from a xenobiotic metabolism-related pathway as an adaptive response for liver hypertrophy, amino acid biosynthesis and oxidative responses appear to be involved in hypertrophic hepatocarcinogenesis. Early detection and toxicogenomic characterization of liver hypertrophy using our models may be useful for predicting carcinogenesis. In addition, the identified genes provide novel insight into discrimination between adverse hypertrophy associated with carcinogenesis and adaptive hypertrophy in risk assessment. - Highlights: • Hypertrophy (H) and hypertrophic carcinogenesis (C) were studied by toxicogenomics. • Important genes for H and C were selected by logistic ridge regression analysis. • Amino acid biosynthesis and oxidative responses may be involved in C. • Predictive models for H and C provided 94.8% and 82.7% accuracy, respectively. • The identified genes could be useful for assessment of liver hypertrophy.
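
    To make the classification idea above concrete, the sketch below applies L2-penalized ("ridge") logistic regression to a synthetic compound-by-gene expression matrix using scikit-learn. The matrix, the labels and the penalty strength are invented placeholders; the original analysis was performed on TG-GATEs microarray profiles with a dedicated gene selection procedure, not this script.

        # Hedged sketch of ridge (L2-penalized) logistic regression for classifying
        # hypertrophy-inducing compounds from expression data. All data are synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_compounds, n_genes = 134, 500                  # mirrors the scale of the study
        X = rng.normal(size=(n_compounds, n_genes))      # log-expression matrix (synthetic)
        y = rng.integers(0, 2, size=n_compounds)         # 1 = hypertrophy-inducing (synthetic)

        model = make_pipeline(
            StandardScaler(),
            LogisticRegression(penalty="l2", C=0.1, max_iter=5000),  # L2 = ridge penalty
        )
        print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

        # Genes with the largest absolute coefficients would be candidate markers.
        model.fit(X, y)
        coefs = model.named_steps["logisticregression"].coef_.ravel()
        print("Top candidate gene indices:", np.argsort(np.abs(coefs))[::-1][:10])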

  2. Human cell toxicogenomic analysis of bromoacetic acid: a regulated drinking water disinfection by-product.

    PubMed

    Muellner, Mark G; Attene-Ramos, Matias S; Hudson, Matthew E; Wagner, Elizabeth D; Plewa, Michael J

    2010-04-01

    The disinfection of drinking water is a major achievement in protecting the public health. However, current disinfection methods also generate disinfection by-products (DBPs). Many DBPs are cytotoxic, genotoxic, teratogenic, and carcinogenic and represent an important class of environmentally hazardous chemicals that may carry long-term human health implications. The objective of this research was to integrate in vitro toxicology with focused toxicogenomic analysis of the regulated DBP, bromoacetic acid (BAA) and to evaluate modulation of gene expression involved in DNA damage/repair and toxic responses, with nontransformed human cells. We generated transcriptome profiles for 168 genes with 30 min and 4 hr exposure times that did not induce acute cytotoxicity. Using qRT-PCR gene arrays, the levels of 25 transcripts were modulated to a statistically significant degree in response to a 30 min treatment with BAA (16 transcripts upregulated and nine downregulated). The largest changes were observed for RAD9A and BRCA1. The majority of the altered transcript profiles are genes involved in DNA repair, especially the repair of double strand DNA breaks, and in cell cycle regulation. With 4 hr of treatment the expression of 28 genes was modulated (12 upregulated and 16 downregulated); the largest fold changes were in HMOX1 and FMO1. This work represents the first nontransformed human cell toxicogenomic study with a regulated drinking water disinfection by-product. These data implicate double strand DNA breaks as a feature of BAA exposure. Future toxicogenomic studies of DBPs will further strengthen our limited knowledge in this growing area of drinking water research. Copyright 2009 Wiley-Liss, Inc.

  3. Systems toxicology of chemically induced liver and kidney injuries: histopathology‐associated gene co‐expression modules

    PubMed Central

    Te, Jerez A.; AbdulHameed, Mohamed Diwan M.

    2016-01-01

    Abstract Organ injuries caused by environmental chemical exposures or use of pharmaceutical drugs pose a serious health risk that may be difficult to assess because of a lack of non‐invasive diagnostic tests. Mapping chemical injuries to organ‐specific histopathology outcomes via biomarkers will provide a foundation for designing precise and robust diagnostic tests. We identified co‐expressed genes (modules) specific to injury endpoints using the Open Toxicogenomics Project‐Genomics Assisted Toxicity Evaluation System (TG‐GATEs) – a toxicogenomics database containing organ‐specific gene expression data matched to dose‐ and time‐dependent chemical exposures and adverse histopathology assessments in Sprague–Dawley rats. We proposed a protocol for selecting gene modules associated with chemical‐induced injuries that classify 11 liver and eight kidney histopathology endpoints based on dose‐dependent activation of the identified modules. We showed that the activation of the modules for a particular chemical exposure condition, i.e., chemical‐time‐dose combination, correlated with the severity of histopathological damage in a dose‐dependent manner. Furthermore, the modules could distinguish different types of injuries caused by chemical exposures as well as determine whether the injury module activation was specific to the tissue of origin (liver and kidney). The generated modules provide a link between toxic chemical exposures, different molecular initiating events among underlying molecular pathways and resultant organ damage. Published 2016. This article is a U.S. Government work and is in the public domain in the USA. Journal of Applied Toxicology published by John Wiley & Sons, Ltd. PMID:26725466

  4. Transcriptomic Dose-Response Analysis for Mode of Action ...

    EPA Pesticide Factsheets

    Microarray and RNA-seq technologies can play an important role in assessing the health risks associated with environmental exposures. The utility of gene expression data to predict hazard has been well documented. Early toxicogenomics studies used relatively high, single doses with minimal replication. Thus, they were not useful in understanding health risks at environmentally relevant doses. Until the past decade, application of toxicogenomics in dose-response assessment and determination of chemical mode of action was limited. New transcriptomic biomarkers have evolved to detect chemical hazards in multiple tissues, together with pathway methods to study biological effects across the full dose-response range and critical time course. Comprehensive low-dose datasets are now available, and with the use of transcriptomic benchmark dose estimation techniques within a mode of action framework, the ability to incorporate informative genomic data into human health risk assessment has substantially improved. The key advantages of applying transcriptomic technology to risk assessment are its sensitivity and its comprehensive examination of the direct and indirect molecular changes that lead to adverse outcomes. This book chapter covers future applications of toxicogenomics technologies for MoA determination and risk assessment.

  5. Sieve-based coreference resolution enhances semi-supervised learning model for chemical-induced disease relation extraction.

    PubMed

    Le, Hoang-Quynh; Tran, Mai-Vu; Dang, Thanh Hai; Ha, Quang-Thuy; Collier, Nigel

    2016-07-01

    The BioCreative V chemical-disease relation (CDR) track was proposed to accelerate the progress of text mining in facilitating integrative understanding of chemicals, diseases and their relations. In this article, we describe an extension of our system (namely UET-CAM) that participated in the BioCreative V CDR. The original UET-CAM system's performance was ranked fourth among 18 participating systems by the BioCreative CDR track committee. In the Disease Named Entity Recognition and Normalization (DNER) phase, our system employed joint inference (decoding) with a perceptron-based named entity recognizer (NER) and a back-off model with Semantic Supervised Indexing and Skip-gram for named entity normalization. In the chemical-induced disease (CID) relation extraction phase, we proposed a pipeline that includes a coreference resolution module and a Support Vector Machine relation extraction model. The former module utilized a multi-pass sieve to extend entity recall. In this article, the UET-CAM system was improved by adding a 'silver' CID corpus to train the prediction model. This silver-standard corpus of more than 50 thousand sentences was automatically built from the Comparative Toxicogenomics Database (CTD). We evaluated our method on the CDR test set. Results showed that our system reaches state-of-the-art performance, with F1 scores of 82.44 for the DNER task and 58.90 for the CID task. Analysis demonstrated substantial benefits of both the multi-pass sieve coreference resolution method (F1 +4.13%) and the silver CID corpus (F1 +7.3%). Database URL: SilverCID (the silver-standard corpus for CID relation extraction) is freely available online at: https://zenodo.org/record/34530 (doi:10.5281/zenodo.34530). © The Author(s) 2016. Published by Oxford University Press.

  6. Discriminating between adaptive and carcinogenic liver hypertrophy in rat studies using logistic ridge regression analysis of toxicogenomic data: The mode of action and predictive models.

    PubMed

    Liu, Shujie; Kawamoto, Taisuke; Morita, Osamu; Yoshinari, Kouichi; Honda, Hiroshi

    2017-03-01

    Chemical exposure often results in liver hypertrophy in animal tests, characterized by increased liver weight, hepatocellular hypertrophy, and/or cell proliferation. While most of these changes are considered adaptive responses, there is concern that they may be associated with carcinogenesis. In this study, we have employed a toxicogenomic approach using a logistic ridge regression model to identify genes responsible for liver hypertrophy and hypertrophic hepatocarcinogenesis and to develop a predictive model for assessing hypertrophy-inducing compounds. Logistic regression models have previously been used in the quantification of epidemiological risk factors. DNA microarray data from the Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System were used to identify hypertrophy-related genes that are expressed differently in hypertrophy induced by carcinogens and non-carcinogens. Data were collected for 134 chemicals (72 non-hypertrophy-inducing chemicals, 27 hypertrophy-inducing non-carcinogenic chemicals, and 15 hypertrophy-inducing carcinogenic compounds). After applying logistic ridge regression analysis, 35 genes for liver hypertrophy (e.g., Acot1 and Abcc3) and 13 genes for hypertrophic hepatocarcinogenesis (e.g., Asns and Gpx2) were selected. The predictive models built using these genes were 94.8% and 82.7% accurate, respectively. Pathway analysis of the genes indicates that, aside from a xenobiotic metabolism-related pathway as an adaptive response for liver hypertrophy, amino acid biosynthesis and oxidative responses appear to be involved in hypertrophic hepatocarcinogenesis. Early detection and toxicogenomic characterization of liver hypertrophy using our models may be useful for predicting carcinogenesis. In addition, the identified genes provide novel insight into discrimination between adverse hypertrophy associated with carcinogenesis and adaptive hypertrophy in risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Toxicogenomic analysis in the combined effect of tributyltin and benzo[a]pyrene on the development of zebrafish embryos.

    PubMed

    Huang, Lixing; Zuo, Zhenghong; Zhang, Youyu; Wang, Chonggang

    2015-01-01

    There is growing recognition that the toxic effects of chemical mixtures have become an important issue in the toxicological sciences. Tributyltin (TBT) and benzo[a]pyrene (BaP) are widespread pollutants that occur simultaneously in aquatic environments. This study was designed to comprehensively examine the combined effects of TBT and BaP on zebrafish (Danio rerio) embryos using a toxicogenomic approach combined with biochemical detection and morphological analysis, and to gain insight into the mechanisms underlying their combined effects. The toxicogenomic data indicated that: (1) TBT cotreatment rescued the embryos from the decreased hatching ratio caused by BaP alone, while the alteration of gene expression (in this article the phrase gene expression is used as a synonym for gene transcription, although it is acknowledged that gene expression can also be regulated by, e.g., translation and mRNA or protein stability) related to zebrafish hatching in the BaP groups was restored by cotreatment with TBT; (2) BaP cotreatment decreased TBT-mediated dorsal curvature and alleviated the perturbation of the Notch pathway caused by TBT alone; (3) cotreatment with TBT decreased BaP-mediated bradycardia, possibly because TBT cotreatment alleviated the perturbation in expression of genes related to cardiac muscle cell development and calcium handling caused by BaP alone; (4) TBT cotreatment exerted an antagonistic effect on BaP-mediated oxidative stress and DNA damage. These results suggest that a toxicogenomic approach can analyze combined toxicity with high sensitivity and accuracy, which may improve our understanding and prediction of the combined effects of chemicals. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. The extraction of drug-disease correlations based on module distance in incomplete human interactome.

    PubMed

    Yu, Liang; Wang, Bingbo; Ma, Xiaoke; Gao, Lin

    2016-12-23

    Extracting drug-disease correlations is crucial for unveiling disease mechanisms, as well as for discovering new indications of available drugs (drug repositioning). Both the interactome and the knowledge of disease-associated and drug-associated genes remain incomplete. We present a new method to predict the associations between drugs and diseases. Our method is based on a module distance, originally proposed to calculate distances between modules in the incomplete human interactome. We first map all the disease genes and drug genes to a combined protein interaction network. Then, based on the module distance, we calculate the distances between drug gene sets and disease gene sets, and take these distances as the relationships of drug-disease pairs. We also filter possible false-positive drug-disease correlations by p-value. Finally, we validate the top-100 drug-disease associations related to six drugs in the predicted results. The overlap between our predicted correlations and those reported in the Comparative Toxicogenomics Database (CTD) and the literature, together with their enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, demonstrates that our approach can not only effectively identify new drug indications but also provide new insight into drug-disease discovery.
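
    As a concrete illustration of the module-distance idea described above, the sketch below computes a simple closest-distance measure between a drug gene set and a disease gene set on a toy protein-protein interaction network with networkx. The network, gene identifiers and gene sets are placeholders, and the exact distance definition used by the authors may differ.

        # Hedged sketch: network-based distance between a drug gene set and a
        # disease gene set. Toy interactome and gene names; illustrative only.
        import networkx as nx

        def closest_distance(graph, set_a, set_b):
            """Mean, over genes in set_a, of the shortest-path distance to the
            nearest gene in set_b (genes absent from the network are skipped)."""
            dists = []
            for a in set_a:
                if a not in graph:
                    continue
                lengths = nx.single_source_shortest_path_length(graph, a)
                d = min((lengths[b] for b in set_b if b in lengths), default=None)
                if d is not None:
                    dists.append(d)
            return sum(dists) / len(dists) if dists else float("inf")

        # Toy protein-protein interaction network and gene sets.
        ppi = nx.Graph([("G1", "G2"), ("G2", "G3"), ("G3", "G4"), ("G4", "G5"), ("G2", "G5")])
        drug_genes = {"G1", "G2"}
        disease_genes = {"G4", "G5"}
        print("Drug-disease module distance:", closest_distance(ppi, drug_genes, disease_genes))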

  9. COMPUTING THERAPY FOR PRECISION MEDICINE: COLLABORATIVE FILTERING INTEGRATES AND PREDICTS MULTI-ENTITY INTERACTIONS.

    PubMed

    Regenbogen, Sam; Wilkins, Angela D; Lichtarge, Olivier

    2016-01-01

    Biomedicine produces copious information it cannot fully exploit. Specifically, there is considerable need to integrate knowledge from disparate studies to discover connections across domains. Here, we used a Collaborative Filtering approach, inspired by online recommendation algorithms, in which non-negative matrix factorization (NMF) predicts interactions among chemicals, genes, and diseases only from pairwise information about their interactions. Our approach, applied to matrices derived from the Comparative Toxicogenomics Database, successfully recovered Chemical-Disease, Chemical-Gene, and Disease-Gene networks in 10-fold cross-validation experiments. Additionally, we could predict each of these interaction matrices from the other two. Integrating all three CTD interaction matrices with NMF led to good predictions of STRING, an independent, external network of protein-protein interactions. Finally, this approach could integrate the CTD and STRING interaction data to improve Chemical-Gene cross-validation performance significantly, and, in a time-stamped study, it predicted information added to CTD after a given date, using only data prior to that date. We conclude that collaborative filtering can integrate information across multiple types of biological entities, and that as a first step towards precision medicine it can compute drug repurposing hypotheses.

  10. COMPUTING THERAPY FOR PRECISION MEDICINE: COLLABORATIVE FILTERING INTEGRATES AND PREDICTS MULTI-ENTITY INTERACTIONS

    PubMed Central

    REGENBOGEN, SAM; WILKINS, ANGELA D.; LICHTARGE, OLIVIER

    2015-01-01

    Biomedicine produces copious information it cannot fully exploit. Specifically, there is considerable need to integrate knowledge from disparate studies to discover connections across domains. Here, we used a Collaborative Filtering approach, inspired by online recommendation algorithms, in which non-negative matrix factorization (NMF) predicts interactions among chemicals, genes, and diseases only from pairwise information about their interactions. Our approach, applied to matrices derived from the Comparative Toxicogenomics Database, successfully recovered Chemical-Disease, Chemical-Gene, and Disease-Gene networks in 10-fold cross-validation experiments. Additionally, we could predict each of these interaction matrices from the other two. Integrating all three CTD interaction matrices with NMF led to good predictions of STRING, an independent, external network of protein-protein interactions. Finally, this approach could integrate the CTD and STRING interaction data to improve Chemical-Gene cross-validation performance significantly, and, in a time-stamped study, it predicted information added to CTD after a given date, using only data prior to that date. We conclude that collaborative filtering can integrate information across multiple types of biological entities, and that as a first step towards precision medicine it can compute drug repurposing hypotheses. PMID:26776170
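
    The collaborative-filtering step described in the two records above can be sketched with off-the-shelf non-negative matrix factorization: factorize an observed interaction matrix into low-rank factors and use the reconstruction to score unobserved pairs. The sketch below uses scikit-learn on a random binary matrix; the matrix size, rank and sparsity are arbitrary assumptions, not the studies' settings.

        # Hedged sketch of NMF-based collaborative filtering over a chemical-by-gene
        # interaction matrix. The matrix here is random; the studies above used
        # matrices derived from CTD.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(42)
        interactions = (rng.random((50, 80)) < 0.1).astype(float)   # chemicals x genes

        model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(interactions)   # chemical factors
        H = model.components_                   # gene factors
        scores = W @ H                          # reconstructed interaction strengths

        # Rank previously unobserved pairs by their predicted score.
        unobserved = np.argwhere(interactions == 0)
        order = np.argsort(scores[interactions == 0])[::-1]
        print("Top predicted novel chemical-gene pairs (row, col):")
        print(unobserved[order][:5])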

  11. High-Density Real-Time PCR-Based in Vivo Toxicogenomic Screen to Predict Organ-Specific Toxicity

    PubMed Central

    Fabian, Gabriella; Farago, Nora; Feher, Liliana Z.; Nagy, Lajos I.; Kulin, Sandor; Kitajka, Klara; Bito, Tamas; Tubak, Vilmos; Katona, Robert L.; Tiszlavicz, Laszlo; Puskas, Laszlo G.

    2011-01-01

    Toxicogenomics, based on the temporal effects of drugs on gene expression, is able to predict toxic effects earlier than traditional technologies by analyzing changes in genomic biomarkers that could precede subsequent protein translation and initiation of histological organ damage. In the present study our objective was to extend in vivo toxicogenomic screening from analyzing one or a few tissues to multiple organs, including heart, kidney, brain, liver and spleen. Nanocapillary quantitative real-time PCR (QRT-PCR) was used in the study, due to its higher throughput, sensitivity and reproducibility, and larger dynamic range compared to DNA microarray technologies. Based on previous data, 56 marker genes were selected coding for proteins with different functions, such as proteins for acute phase response, inflammation, oxidative stress, metabolic processes, heat-shock response, cell cycle/apoptosis regulation and enzymes involved in detoxification. Some of the marker genes are specific to certain organs, and some of them are general indicators of toxicity in multiple organs. The utility of the nanocapillary QRT-PCR platform was demonstrated by screening reference compounds, as well as drug-like discovery compounds, for their gene expression profiles in different organs of treated mice in an acute experiment. For each compound, 896 QRT-PCR reactions were performed: four organs were used from each of the four treated animals to monitor the relative expression of 56 genes. Based on the expression data for this discovery set of toxicology biomarkers, the cardio- and nephrotoxicity of doxorubicin and sulfasalazine, the hepato- and nephrotoxicity of rotenone, dihydrocoumarin and aniline, and the liver toxicity of 2,4-diaminotoluene could be confirmed. The acute heart and kidney toxicity of the active metabolite SN-38 could be differentiated from that of its less toxic prodrug, irinotecan, and two novel gene markers for hormone replacement therapy were identified, namely fabp4 and pparg, which were down-regulated by estradiol treatment. PMID:22016648
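
    For context on how "relative expression" values like those above are typically derived from QRT-PCR measurements, the sketch below applies the common delta-delta Ct calculation to invented Ct values for one target gene and one reference gene; the actual normalization scheme used in the study may differ.

        # Sketch of relative expression from qPCR Ct values via the delta-delta Ct
        # method. Gene, Ct values and the reference gene are hypothetical.
        def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
            """2^-(ddCt): fold change of the target gene in treated vs. control
            samples, normalized to a reference (housekeeping) gene."""
            d_ct_treated = ct_target_treated - ct_ref_treated
            d_ct_control = ct_target_control - ct_ref_control
            return 2.0 ** (-(d_ct_treated - d_ct_control))

        # Hypothetical Ct values for a down-regulated target after treatment.
        fc = fold_change(ct_target_treated=27.5, ct_ref_treated=18.0,
                         ct_target_control=25.0, ct_ref_control=18.1)
        print(f"Fold change (treated/control): {fc:.2f}")   # < 1 means down-regulated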

  12. Mixture toxicity revisited from a toxicogenomic perspective.

    PubMed

    Altenburger, Rolf; Scholz, Stefan; Schmitt-Jansen, Mechthild; Busch, Wibke; Escher, Beate I

    2012-03-06

    The advent of new genomic techniques has raised expectations that central questions of mixture toxicology, such as the mechanisms of low-dose interactions, can now be answered. This review provides an overview of experimental studies from the past decade that address diagnostic and/or mechanistic questions regarding the combined effects of chemical mixtures using toxicogenomic techniques. From 2002 to 2011, 41 studies were published with a focus on mixture toxicity assessment. Primarily multiplexed quantification of gene transcripts was performed, though metabolomic and proteomic analyses of joint exposures have also been undertaken. It is now standard to explicitly state criteria for selecting concentrations and to provide insight into data transformation and statistical treatment with respect to minimizing sources of undue variability. Bioinformatic analysis of toxicogenomic data, by contrast, is still a field with diverse and rapidly evolving tools. The reported combined effect assessments are discussed in the light of established toxicological dose-response and mixture toxicity models. Receptor-based assays seem to be the most advanced toward establishing quantitative relationships between exposure and biological responses. Often transcriptomic responses are discussed based on the presence or absence of signals, where the interpretation may remain ambiguous due to methodological problems. The majority of mixture studies are designed to compare the recorded mixture outcome against responses for individual components only. This stands in stark contrast to our existing understanding of joint biological activity at the levels of chemical target interactions and apical combined effects. By joining established mixture effect models with toxicokinetic and -dynamic thinking, we suggest a conceptual framework that may help to overcome the current limitation of providing mainly anecdotal evidence on mixture effects. To achieve this, we suggest (i) designing studies that establish quantitative relationships between the dose and time dependency of responses and (ii) adopting mixture toxicity models. Moreover, (iii) utilization of novel bioinformatic tools and (iv) stress response concepts could be productive to translate multiple responses into hypotheses on the relationships between general stress and specific toxicity reactions of organisms.
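
    One of the established mixture toxicity models the review refers to, concentration addition (Loewe additivity), can be written down compactly: the predicted effect concentration of a mixture follows from the components' individual effect concentrations and their concentration fractions. The sketch below shows that calculation for a hypothetical binary mixture; the values are invented.

        # Worked sketch of the concentration addition (Loewe additivity) prediction:
        # 1 / ECx_mix = sum_i (p_i / ECx_i), with p_i the concentration fraction of
        # component i. All numbers are hypothetical.
        def ca_mixture_ec50(fractions, ec50s):
            """Predicted mixture EC50 under concentration addition."""
            return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

        fractions = [0.5, 0.5]      # 1:1 mixture by concentration
        ec50s = [2.0, 8.0]          # single-compound EC50s, same units (e.g., uM)
        print(f"Predicted mixture EC50: {ca_mixture_ec50(fractions, ec50s):.2f} uM")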

  13. The fragility of omics risk and benefit perceptions.

    PubMed

    Börner, Franziska U; Schütz, Holger; Wiedemann, Peter

    2011-03-25

    How do individuals judge the risks and benefits of toxicogenomics, an emerging field of research that is completely unfamiliar to them? The hypothesis is that individuals' perceptions of the risks and benefits of toxicogenomics are fragile and can be influenced by different issue and context framings of the technology. The researchers expected that the effects on risk and benefit judgements would differ between lay individuals and experts in toxicogenomics. A 2×2×2 experiment encompassing three factors was conducted. The first factor, issue framing, incorporated the field of application for the technology (therapy vs. diagnosis setting). The second factor, context framing, included the organisations and institutions that would profit from the technology (companies vs. regulatory agencies), and the third factor encompassed the quality of individuals' level of knowledge, i.e., lay vs. expert knowledge. The results suggest the differential power of framing effects. It seems that the cues provided by context frames - but not by issue frames - are able to influence the ways in which lay people and experts process information. The findings are interpreted in line with fuzzy trace theory, which predicts reliance on fuzzy gist representations formed by stereotypes on a wide range of judgement problems, including risk and benefit perceptions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Toxicogenomics in regulatory ecotoxicology

    USGS Publications Warehouse

    Ankley, Gerald T.; Daston, George P.; Degitz, Sigmund J.; Denslow, Nancy D.; Hoke, Robert A.; Kennedy, Sean W.; Miracle, Ann L.; Perkins, Edward J.; Snape, Jason; Tillitt, Donald E.; Tyler, Charles R.; Versteeg, Donald

    2006-01-01

    Recently, we have witnessed an explosion of different genomic approaches that, through a combination of advanced biological, instrumental, and bioinformatic techniques, can yield a previously unparalleled amount of data concerning the molecular and biochemical status of organisms. Fueled partially by large, well-publicized efforts such as the Human Genome Project, genomic research has become a rapidly growing topical area in multiple biological disciplines. Since 1999, when the term “toxicogenomics” was coined to describe the application of genomics to toxicology (1), a rapid increase in publications on the topic has occurred (Figure 1). The potential utility of toxicogenomics in toxicological research and regulatory activities has been the subject of scientific discussions and, as with any new technology, has evoked a wide range of opinion (2–6).

  15. In Silico Computational Transcriptomics Reveals Novel Endocrine Disruptors in Largemouth Bass (Micropterus salmoides).

    PubMed

    Basili, Danilo; Zhang, Ji-Liang; Herbert, John; Kroll, Kevin; Denslow, Nancy D; Martyniuk, Christopher J; Falciani, Francesco; Antczak, Philipp

    2018-06-15

    In recent years, decreases in fish populations have been attributed, in part, to the effect of environmental chemicals on ovarian development. To understand the underlying molecular events, we developed a dynamic model of ovary development linking gene transcription to key physiological end points, such as gonadosomatic index (GSI) and plasma levels of estradiol (E2) and vitellogenin (VTG), in largemouth bass (Micropterus salmoides). We were able to identify specific clusters of genes that are affected at different stages of ovarian development. A subnetwork was identified that closely linked gene expression and physiological end points, and by interrogating the Comparative Toxicogenomics Database (CTD), quercetin and tretinoin (ATRA) were identified as two potential candidates that may perturb this system. Predictions were validated by investigating reproduction-associated transcripts using qPCR in the ovary and in the liver of both male and female largemouth bass treated with a single injection of quercetin and tretinoin (10 and 100 μg/kg). Both compounds were found to significantly alter the expression of some of these genes. Our findings support the use of omics and online repositories for the identification of novel, yet untested, compounds. This is the first study of a dynamic model that links gene expression patterns across stages of ovarian development.

  16. Application of toxicogenomic profiling to evaluate effects of benzene and formaldehyde: from yeast to human

    PubMed Central

    McHale, Cliona M.; Smith, Martyn T.; Zhang, Luoping

    2014-01-01

    Genetic variation underlies a significant proportion of the individual variation in human susceptibility to toxicants. The primary current approaches to identify gene–environment (GxE) associations, genome-wide association studies (GWAS) and candidate gene association studies, require large exposed and control populations and an understanding of toxicity genes and pathways, respectively. This limits their application in the study of GxE associations for the leukemogens benzene and formaldehyde, whose toxicity has long been a focus of our research. As an alternative approach, we applied innovative in vitro functional genomics testing systems, including unbiased functional screening assays in yeast and a near-haploid human bone marrow cell line (KBM7). Through comparative genomic and computational analyses of the resulting data, we have identified human genes and pathways that may modulate susceptibility to benzene and formaldehyde. We have validated the roles of several genes in mammalian cell models. In populations occupationally exposed to low levels of benzene, we applied peripheral blood mononuclear cell transcriptomics and chromosome-wide aneuploidy studies (CWAS) in lymphocytes. In this review of the literature, we describe our comprehensive toxicogenomic approach and the potential mechanisms of toxicity and susceptibility genes identified for benzene and formaldehyde, as well as related studies conducted by other researchers. PMID:24571325

  17. Genotoxicity Assessment of Drinking Water Disinfection Byproducts by DNA Damage and Repair Pathway Profiling Analysis.

    PubMed

    Lan, Jiaqi; Rahman, Sheikh Mokhlesur; Gou, Na; Jiang, Tao; Plewa, Micheal J; Alshawabkeh, Akram; Gu, April Z

    2018-06-05

    Genotoxicity is considered a major concern for drinking water disinfection byproducts (DBPs). Of the over 700 DBPs identified to date, only a small number have been assessed, and information on DBP genotoxicity mechanisms is limited. In this study, we evaluated the genotoxicity of 20 regulated and unregulated DBPs by applying a quantitative toxicogenomics approach. We used GFP-fused yeast strains to profile the expression of 38 proteins indicative of all known DNA damage and repair pathways. The toxicogenomics assay detected the genotoxicity potential of these DBPs, in agreement with conventional genotoxicity assay end points. Furthermore, the high-resolution, real-time pathway activation and protein expression profiling, in combination with clustering analysis, revealed molecular-level details of the genotoxicity mechanisms of different DBPs and enabled classification of DBPs based on their distinct DNA damage effects and repair mechanisms. Oxidative DNA damage and base alkylation were confirmed to be the main molecular mechanisms of DBP genotoxicity. An initial exploration of QSAR modeling using molecular genotoxicity end points (PELI) suggested that the genotoxicity of the DBPs in this study correlated with topological and quantum chemical descriptors. This study presents a toxicogenomics-based assay for fast and efficient mechanistic genotoxicity screening and assessment of a large number of DBPs. The results help to fill the knowledge gap in the understanding of the molecular mechanisms of DBP genotoxicity.
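
    The clustering analysis mentioned above is, in essence, hierarchical clustering of chemicals by their reporter-protein expression profiles. The sketch below shows that step on a random profile matrix with SciPy; the matrix dimensions echo the study (20 DBPs, 38 reporters), but the data, distance metric and cluster count are assumptions.

        # Hedged sketch: hierarchical clustering of chemicals by protein-expression
        # profiles. The profile matrix is random; illustrative only.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(7)
        profiles = rng.normal(size=(20, 38))        # chemicals x reporter proteins

        distances = pdist(profiles, metric="correlation")
        tree = linkage(distances, method="average")
        clusters = fcluster(tree, t=4, criterion="maxclust")
        print("Cluster assignment per chemical:", clusters)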

  18. Using Domestic and Free-Ranging Arctic Canid Models for Environmental Molecular Toxicology Research.

    PubMed

    Harley, John R; Bammler, Theo K; Farin, Federico M; Beyer, Richard P; Kavanagh, Terrance J; Dunlap, Kriya L; Knott, Katrina K; Ylitalo, Gina M; O'Hara, Todd M

    2016-02-16

    The use of sentinel species for population and ecosystem health assessments has been advocated as part of a One Health perspective. The Arctic is experiencing rapid change, including climate and environmental shifts, as well as increased resource development, which will alter exposure of biota to environmental agents of disease. Arctic canid species have wide geographic ranges and feeding ecologies and are often exposed to high concentrations of both terrestrial and marine-based contaminants. The domestic dog (Canis lupus familiaris) has been used in biomedical research for a number of years and has been advocated as a sentinel for human health due to its proximity to humans and, in some instances, similar diet. Exploiting the potential of molecular tools for describing the toxicogenomics of Arctic canids is critical for their development as biomedical models as well as environmental sentinels. Here, we present three approaches analyzing toxicogenomics of Arctic contaminants in both domestic and free-ranging canids (Arctic fox, Vulpes lagopus). We describe a number of confounding variables that must be addressed when conducting toxicogenomics studies in canid and other mammalian models. The ability for canids to act as models for Arctic molecular toxicology research is unique and significant for advancing our understanding and expanding the tool box for assessing the changing landscape of environmental agents of disease in the Arctic.

  19. Risk assessment of Soulatrolide and Mammea (A/BA+A/BB) coumarins from Calophyllum brasiliense by a toxicogenomic and toxicological approach.

    PubMed

    Gomez-Verjan, J C; Estrella-Parra, E; Vazquez-Martinez, E R; Gonzalez-Sanchez, I; Guerrero-Magos, G; Mendoza-Villanueva, D; Isus, L; Alfaro, A; Cerbón-Cervantes, M; Aloy, P; Reyes-Chilpa, R

    2016-05-01

    Calophyllum brasiliense (Calophyllaceae) is a tropical rain forest tree distributed in Central and South America. It is an important source of tetracyclic dipyrano coumarins (Soulatrolide) and Mammea-type coumarins. Soulatrolide is a potent inhibitor of HIV-1 reverse transcriptase and displays activity against Mycobacterium tuberculosis. Meanwhile, Mammea A/BA and A/BB, pure or as a mixture, are highly active against several human leukemia cell lines, Trypanosoma cruzi and Leishmania amazonensis. Nevertheless, there are few studies evaluating their safety profile. In the present work we performed toxicogenomic and toxicological analyses of both types of compounds. Soulatrolide and the Mammea A/BA + A/BB mixture (2.1) were slightly toxic according to the Lorke assay classification (LD50 > 3000 mg/kg). After short-term administration (100 mg/kg/day, orally, 1 week), liver toxicogenomic analysis revealed 46 up- and 72 downregulated genes for the Mammea coumarins, and 665 up- and 1077 downregulated genes for Soulatrolide. Gene enrichment analysis identified transcripts involved in drug metabolism for both compounds. In addition, network analysis through protein-protein interactions, tissue evaluation by TUNEL assay, and histological examination revealed no tissue damage in liver, kidney or spleen after treatment. Our results indicate that both types of coumarins display a favorable safety profile, supporting their use in further preclinical studies to determine their therapeutic potential. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. The eNanoMapper database for nanomaterial safety information

    PubMed Central

    Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR). PMID:26425413

  1. The eNanoMapper database for nanomaterial safety information.

    PubMed

    Jeliazkova, Nina; Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon

    2015-01-01

    The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms. We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the "representational state transfer" (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure-activity relationships for nanomaterials (NanoQSAR).

  2. MicroRNA regulatory networks reflective of polyhexamethylene guanidine phosphate-induced fibrosis in A549 human alveolar adenocarcinoma cells.

    PubMed

    Shin, Da Young; Jeong, Mi Ho; Bang, In Jae; Kim, Ha Ryong; Chung, Kyu Hyuck

    2018-05-01

    Polyhexamethylene guanidine phosphate (PHMG-phosphate), an active component of humidifier disinfectant, is suspected to be a major cause of pulmonary fibrosis. Fibrosis, induced by recurrent epithelial damage, is significantly affected by epigenetic regulation, including microRNAs (miRNAs). The aim of this study was to investigate the fibrogenic mechanisms of PHMG-phosphate through the profiling of miRNAs and their target genes. A549 cells were treated with 0.75 μg/mL PHMG-phosphate for 24 and 48 h, and miRNA microarray expression analysis was conducted. The putative mRNA targets of the miRNAs were identified and subjected to Gene Ontology analysis. After exposure to PHMG-phosphate for 24 and 48 h, 46 and 33 miRNAs, respectively, showed a significant change in expression of more than 1.5-fold compared with the control. The integrated analysis of miRNA and mRNA microarray results revealed that the prominently enriched putative targets were associated with the epithelial-mesenchymal transition (EMT), cell cycle changes, and apoptosis. The dose-dependent induction of EMT by PHMG-phosphate exposure was confirmed by western blot. We identified 13 putative EMT-related targets that may play a role in PHMG-phosphate-induced fibrosis according to the Comparative Toxicogenomics Database. Our findings contribute to the comprehension of the fibrogenic mechanism of PHMG-phosphate and will aid further study of PHMG-phosphate-induced toxicity. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. BioCreative V CDR task corpus: a resource for chemical disease relation extraction.

    PubMed

    Li, Jiao; Sun, Yueping; Johnson, Robin J; Sciaky, Daniela; Wei, Chih-Hsuan; Leaman, Robert; Davis, Allan Peter; Mattingly, Carolyn J; Wiegers, Thomas C; Lu, Zhiyong

    2016-01-01

    Community-run, formal evaluations and manually annotated text corpora are critically important for advancing biomedical text-mining research. Recently in BioCreative V, a new challenge was organized for the tasks of disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. Given the nature of both tasks, a test collection is required to contain both disease/chemical annotations and relation annotations in the same set of articles. Despite previous efforts in biomedical corpus construction, none was found to be sufficient for the task. Thus, we developed our own corpus called BC5CDR during the challenge by inviting a team of Medical Subject Headings (MeSH) indexers for disease/chemical entity annotation and Comparative Toxicogenomics Database (CTD) curators for CID relation annotation. To ensure high annotation quality and productivity, detailed annotation guidelines and automatic annotation tools were provided. The resulting BC5CDR corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. Each entity annotation includes both the mention text spans and normalized concept identifiers, using MeSH as the controlled vocabulary. To ensure accuracy, the entities were first captured independently by two annotators, followed by a consensus annotation: the average inter-annotator agreement (IAA) scores were 87.49% and 96.05% for diseases and chemicals, respectively, in the test set, according to the Jaccard similarity coefficient. Our corpus was successfully used for the BioCreative V challenge tasks and should serve as a valuable resource for the text-mining research community. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the United States.

  4. A Network Pharmacology Approach to Determine the Active Components and Potential Targets of Curculigo Orchioides in the Treatment of Osteoporosis.

    PubMed

    Wang, Nani; Zhao, Guizhi; Zhang, Yang; Wang, Xuping; Zhao, Lisha; Xu, Pingcui; Shou, Dan

    2017-10-27

    BACKGROUND Osteoporosis is a complex bone disorder with a genetic predisposition, and is a cause of health problems worldwide. In China, Curculigo orchioides (CO) has been widely used as a herbal medicine in the prevention and treatment of osteoporosis. However, research on the mechanism of action of CO is still lacking. The aim of this study was to identify the absorbable components, potential targets, and associated treatment pathways of CO using a network pharmacology approach. MATERIAL AND METHODS We explored the chemical components of CO and used the five main principles of drug absorption to identify absorbable components. Targets for the therapeutic actions of CO were obtained from the PharmMapper server database. Pathway enrichment analysis was performed using the Comparative Toxicogenomics Database (CTD). Cytoscape was used to visualize the multiple components-multiple target-multiple pathways-multiple disease network for CO. RESULTS We identified 77 chemical components of CO, of which 32 components could be absorbed in the blood. These potential active components of CO regulated 83 targets and affected 58 pathways. Data analysis showed that the genes for estrogen receptor alpha (ESR1) and beta (ESR2), and the gene for 11 beta-hydroxysteroid dehydrogenase type 1, or cortisone reductase (HSD11B1) were the main targets of CO. Endocrine regulatory factors and factors regulating calcium reabsorption, steroid hormone biosynthesis, and metabolic pathways were related to these main targets and to ten corresponding compounds. CONCLUSIONS The network pharmacology approach used in our study has attempted to explain the mechanisms for the effects of CO in the prevention and treatment of osteoporosis, and provides an alternative approach to the investigation of the effects of this complex compound.
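
    The pathway enrichment step described above is usually an over-representation test: given a list of targets and a pathway's gene set, a hypergeometric test asks whether the overlap is larger than expected by chance. The sketch below illustrates that calculation; the background size, pathway size and overlap are invented, and only the 83-target count is taken from the record above.

        # Hedged sketch of a hypergeometric over-representation (enrichment) test.
        # Counts other than the 83 targets are hypothetical.
        from scipy.stats import hypergeom

        background_genes = 20000   # genes in the annotation universe (assumed)
        pathway_genes = 150        # genes annotated to the pathway (assumed)
        target_genes = 83          # targets of the absorbable components (from the record)
        overlap = 9                # targets falling in the pathway (assumed)

        # P(X >= overlap) under the hypergeometric null.
        p_value = hypergeom.sf(overlap - 1, background_genes, pathway_genes, target_genes)
        print(f"Enrichment p-value: {p_value:.2e}")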

  5. Enabling online studies of conceptual relationships between medical terms: developing an efficient web platform.

    PubMed

    Albin, Aaron; Ji, Xiaonan; Borlawsky, Tara B; Ye, Zhan; Lin, Simon; Payne, Philip Ro; Huang, Kun; Xiang, Yang

    2014-10-07

    The Unified Medical Language System (UMLS) contains many important ontologies in which terms are connected by semantic relations. For many studies on the relationships between biomedical concepts, the use of transitively associated information from ontologies and the UMLS has been shown to be effective. Although there are a few tools and methods available for extracting transitive relationships from the UMLS, they usually have major restrictions on the length of transitive relations or on the number of data sources. Our goal was to design an online platform that enables efficient studies of the conceptual relationships between any medical terms. To overcome the restrictions of available methods and to facilitate studies on the conceptual relationships between medical terms, we developed a Web platform, onGrid, that supports efficient transitive queries and conceptual relationship studies using the UMLS. This framework uses the latest technique in converting natural language queries into UMLS concepts, performs efficient transitive queries, and visualizes the result paths. It also dynamically builds a relationship matrix for two sets of input biomedical terms. We are thus able to perform effective studies on conceptual relationships between medical terms based on their relationship matrix. The advantage of onGrid is that it can be applied to study any two sets of biomedical concept relations and the relations within one set of biomedical concepts. We use onGrid to study the disease-disease relationships in the Online Mendelian Inheritance in Man (OMIM). By cross-validating our results with an external database, the Comparative Toxicogenomics Database (CTD), we demonstrated that onGrid is effective for the study of conceptual relationships between medical terms. onGrid is an efficient tool for querying the UMLS for transitive relations, studying the relationship between medical terms, and generating hypotheses.

  6. Autism genes are selectively targeted by environmental pollutants including pesticides, heavy metals, bisphenol A, phthalates and many others in food, cosmetics or household products.

    PubMed

    Carter, C J; Blizard, R A

    2016-10-27

    The increasing incidence of autism suggests a major environmental influence. Epidemiology has implicated many candidates and genetics many susceptibility genes. Gene/environment interactions in autism were analysed using 206 autism susceptibility genes (ASGs) from the Autworks database to interrogate ∼1 million chemical/gene interactions in the Comparative Toxicogenomics Database. Any bias towards ASGs was statistically determined for each chemical. Many suspect compounds identified in epidemiology, including tetrachlorodibenzodioxin, pesticides, particulate matter, benzo(a)pyrene, heavy metals, valproate, acetaminophen, SSRIs, cocaine, bisphenol A, phthalates, polyhalogenated biphenyls, flame retardants, diesel constituents, terbutaline and oxytocin, inter alia, showed a significant degree of bias towards ASGs, as did relevant endogenous agents (retinoids, sex steroids, thyroxine, melatonin, folate, dopamine, serotonin). Numerous other suspected endocrine disruptors (over 100) selectively targeted ASGs, including paraquat, atrazine and other pesticides not yet studied in autism, and many compounds used in food, cosmetics or household products, including tretinoin, soy phytoestrogens, aspartame, titanium dioxide and sodium fluoride. Autism polymorphisms influence the sensitivity to some of these chemicals, and these same genes play an important role in barrier function and in the control of respiratory cilia sweeping particulate matter from the airways. Pesticides, heavy metals and pollutants also disrupt barrier and/or ciliary function, which is regulated by sex steroids and by bitter/sweet taste receptors. Further epidemiological studies and neurodevelopmental and behavioural research are warranted to determine the relevance of the large number of suspect candidates whose addition to the environment, household products, food and cosmetics might be fuelling the autism epidemic in a gene-dependent manner. Copyright © 2016. Published by Elsevier Ltd.
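
    The "bias towards ASGs" test described above can be illustrated as a 2x2 contingency test: genes targeted by a chemical versus genes in the susceptibility set, evaluated with a one-sided Fisher's exact test. In the sketch below only the 206-gene ASG count comes from the record; the genome size, target count and overlap are invented.

        # Hedged sketch of testing whether a chemical's CTD gene interactions are
        # enriched for a susceptibility gene set, via Fisher's exact test.
        from scipy.stats import fisher_exact

        genome_size = 20000    # protein-coding genes considered (assumed)
        asg_count = 206        # autism susceptibility genes (from the record)
        chem_targets = 400     # genes interacting with the chemical (hypothetical)
        overlap = 18           # chemical targets that are also ASGs (hypothetical)

        table = [
            [overlap, chem_targets - overlap],
            [asg_count - overlap, genome_size - asg_count - (chem_targets - overlap)],
        ]
        odds_ratio, p_value = fisher_exact(table, alternative="greater")
        print(f"Odds ratio: {odds_ratio:.2f}, one-sided p-value: {p_value:.2e}")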

  7. Case study on the utility of hepatic global gene expression profiling in the risk assessment of the carcinogen furan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, Anna Francina, E-mail: Francina.Jackson@hc-sc.gc.ca; Department of Biology, Carleton University, 1125 Colonel By Drive, Ottawa K1S 5B6; Williams, Andrew, E-mail: Andrew.Williams@hc-sc.gc.ca

    2014-01-01

    Furan is a chemical hepatocarcinogen in mice and rats. Its previously postulated cancer mode of action (MOA) is chronic cytotoxicity followed by sustained regenerative proliferation; however, its molecular basis is unknown. To this end, we conducted toxicogenomic analysis of B6C3F1 mouse livers following three-week exposures to non-carcinogenic (0, 1, 2 mg/kg bw) or carcinogenic (4 and 8 mg/kg bw) doses of furan. We saw enrichment for pathways responsible for cytotoxicity: stress-activated protein kinase (SAPK) and death receptor (DR5 and TNF-alpha) signaling, and proliferation: extracellular signal-regulated kinases (ERKs) and TNF-alpha. We also noted the involvement of NF-kappaB and c-Jun in response to furan, which are genes that are known to be required for liver regeneration. Furan metabolism by CYP2E1 produces cis-2-butene-1,4-dial (BDA), which is required for ensuing cytotoxicity and oxidative stress. NRF2 is a master regulator of gene expression during oxidative stress, and we suggest that chronic NRF2 activity and chronic inflammation may represent critical transition events between the adaptive (regeneration) and adverse (cancer) outcomes. Another objective of this study was to demonstrate the applicability of toxicogenomics data in quantitative risk assessment. We modeled benchmark doses for our transcriptional data and previously published cancer data, and observed consistency between the two. Margin of exposure values for both transcriptional and cancer endpoints were also similar. In conclusion, using furan as a case study we have demonstrated the value of toxicogenomics data in elucidating dose-dependent MOA transitions and in quantitative risk assessment. - Highlights: • Global gene expression changes in furan-exposed mouse livers were analyzed. • A molecular mode of action for furan-induced hepatocarcinogenesis is proposed. • Key pathways include NRF2, SAPK, ERK and death receptor signaling. • Important roles for TNF-alpha, c-Jun, and NF-κB in tumorigenesis are proposed. • BMD and MoE values from transcriptional and apical data are compared.

  8. Toxicogenomic analysis of N-nitrosomorpholine induced changes in rat liver: Comparison of genomic and proteomic responses and anchoring to histopathological parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oberemm, A., E-mail: axel.oberemm@bfr.bund.d; Ahr, H.-J.; Bannasch, P.

    2009-12-01

    A common animal model of chemical hepatocarcinogenesis was used to examine the utility of transcriptomic and proteomic data to identify early biomarkers related to chemically induced carcinogenesis. N-nitrosomorpholine, a frequently used genotoxic model carcinogen, was applied via drinking water at 120 mg/L to male Wistar rats for 7 weeks followed by an exposure-free period of 43 weeks. Seven specimens of each treatment group (untreated control and 120 mg/L N-nitrosomorpholine in drinking water) were sacrificed at nine time points during and after N-nitrosomorpholine treatment. Individual samples from the liver were prepared for histological and toxicogenomic analyses. For histological detection of preneoplastic and neoplastic tissue areas, sections were stained using antibodies against the placental form of glutathione-S-transferase (GST-P). Gene and protein expression profiles of liver tissue homogenates were analyzed using RG-U34A Affymetrix rat gene chips and two-dimensional gel electrophoresis-based proteomics, respectively. In order to compare results obtained by histopathology, transcriptomics and proteomics, GST-P-stained liver sections were evaluated morphometrically, which revealed a parallel time course of the area fraction of preneoplastic lesions and gene plus protein expression patterns. On the transcriptional level, an increase of hepatic GST-P expression was detectable as early as 3 weeks after study onset. Comparing deregulated genes and proteins, eight species were identified which showed a corresponding expression profile on both expression levels. Functional analysis suggests that these genes and corresponding proteins may be useful as biomarkers of early hepatocarcinogenesis.

  9. Advances in Toxico-Cheminformatics: Supporting a New ...

    EPA Pesticide Factsheets

    EPA’s National Center for Computational Toxicology is building capabilities to support a new paradigm for toxicity screening and prediction through the harnessing of legacy toxicity data, creation of data linkages, and generation of new high-throughput screening (HTS) data. The DSSTox project is working to improve public access to quality structure-annotated chemical toxicity information in less summarized forms than traditionally employed in SAR modeling, and in ways that facilitate both data-mining and read-across. Both DSSTox Structure-Files and the dedicated on-line DSSTox Structure-Browser are enabling seamless structure-based searching and linkages to and from previously isolated, chemically indexed public toxicity data resources (e.g., NTP, EPA IRIS, CPDB). Most recently, structure-enabled search capabilities have been extended to chemical exposure-related microarray experiments in the public EBI Array Express database, additionally linking this resource to the NIEHS CEBS toxicogenomics database. The public DSSTox chemical and bioassay inventory has been recently integrated into PubChem, allowing a user to take full advantage of PubChem structure-activity and bioassay clustering features. The DSSTox project is providing cheminformatics support for EPA’s ToxCast™ project, as well as supporting collaborations with the National Toxicology Program (NTP) HTS and the NIH Chemical Genomics Center (NCGC). Phase I of the ToxCast™ project is generating HT

  10. Cardiovascular Outcomes and the Physical and Chemical Properties of Metal Ions Found in Particulate Matter Air Pollution: A QICAR Study

    PubMed Central

    Meng, Qingyu; Lu, Shou-En; Buckley, Barbara; Welsh, William J.; Whitsel, Eric A.; Hanna, Adel; Yeatts, Karin B.; Warren, Joshua; Herring, Amy H.; Xiu, Aijun

    2013-01-01

    Background: This paper presents an application of quantitative ion character–activity relationships (QICAR) to estimate associations of human cardiovascular (CV) diseases (CVDs) with a set of metal ion properties commonly observed in ambient air pollutants. QICAR has previously been used to predict ecotoxicity of inorganic metal ions based on ion properties. Objectives: The objective of this work was to examine potential associations of biological end points with a set of physical and chemical properties describing inorganic metal ions present in exposures using QICAR. Methods: Chemical and physical properties of 17 metal ions were obtained from peer-reviewed publications. Associations of cardiac arrhythmia, myocardial ischemia, myocardial infarction, stroke, and thrombosis with exposures to metal ions (measured as inference scores) were obtained from the Comparative Toxicogenomics Database (CTD). Robust regressions were applied to estimate the associations of CVDs with ion properties. Results: CVD was statistically significantly associated (Bonferroni-adjusted significance level of 0.003) with many ion properties reflecting ion size, solubility, oxidation potential, and abilities to form covalent and ionic bonds. The properties are relevant for reactive oxygen species (ROS) generation, which has been identified as a possible mechanism leading to CVDs. Conclusion: QICAR has the potential to complement existing epidemiologic methods for estimating associations between CVDs and air pollutant exposures by providing clues about the underlying mechanisms that may explain these associations. PMID:23462649
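
    A hedged sketch of a QICAR-style robust regression of a CTD-derived cardiovascular inference score on one ion property; the property values, the scores, and the choice of statsmodels' Huber M-estimator are illustrative assumptions rather than the authors' exact model.

        import pandas as pd
        import statsmodels.api as sm

        # One row per metal ion: a hypothetical ion property (Xm) and a hypothetical
        # CTD inference score for a cardiovascular endpoint.
        ions = pd.DataFrame({
            "Xm":        [3.9, 4.4, 4.5, 4.0, 5.3, 4.6],
            "cvd_score": [12.0, 25.0, 31.0, 18.0, 55.0, 29.0],
        })

        X = sm.add_constant(ions[["Xm"]])              # intercept + single ion property
        fit = sm.RLM(ions["cvd_score"], X, M=sm.robust.norms.HuberT()).fit()
        print(fit.params)                              # slope would be compared against a
        print(fit.pvalues)                             # Bonferroni-adjusted alpha (0.003 above)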

  11. Integrating genome-wide association study summaries and element-gene interaction datasets identified multiple associations between elements and complex diseases.

    PubMed

    He, Awen; Wang, Wenyu; Prakash, N Tejo; Tinkov, Alexey A; Skalny, Anatoly V; Wen, Yan; Hao, Jingcan; Guo, Xiong; Zhang, Feng

    2018-03-01

    Chemical elements are closely related to human health. Extensive genomic profile data of complex diseases offer us a good opportunity to systemically investigate the relationships between elements and complex diseases/traits. In this study, we applied the gene set enrichment analysis (GSEA) approach to detect associations between elements and complex diseases/traits through integrating element-gene interaction datasets and genome-wide association study (GWAS) data of complex diseases/traits. To illustrate the performance of GSEA, the element-gene interaction datasets of 24 elements were extracted from the comparative toxicogenomics database (CTD). GWAS summary datasets of 24 complex diseases or traits were downloaded from the dbGaP or GEFOS websites. We observed significant associations between 7 elements and 13 complex diseases or traits (all false discovery rate (FDR) < 0.05), including reported relationships such as aluminum vs. Alzheimer's disease (FDR = 0.042), calcium vs. bone mineral density (FDR = 0.031), magnesium vs. systemic lupus erythematosus (FDR = 0.012) as well as novel associations, such as nickel vs. hypertriglyceridemia (FDR = 0.002) and bipolar disorder (FDR = 0.027). Our study results are consistent with previous biological studies, supporting the good performance of GSEA. Our results based on the GSEA framework provide novel clues for discovering causal relationships between elements and complex diseases. © 2017 WILEY PERIODICALS, INC.
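
    As a simplified stand-in for the GSEA step, the sketch below tests the overlap between a CTD element-gene set and GWAS-significant genes with a hypergeometric test; the gene symbols and background size are hypothetical.

        from scipy.stats import hypergeom

        def overlap_enrichment(element_genes, gwas_genes, n_background=20000):
            element_genes, gwas_genes = set(element_genes), set(gwas_genes)
            k = len(element_genes & gwas_genes)        # observed overlap
            # P(overlap >= k) under random draws from the background
            p = hypergeom.sf(k - 1, n_background, len(element_genes), len(gwas_genes))
            return k, p

        nickel_set = {"IL6", "TNF", "HMOX1", "SLC39A14"}     # illustrative CTD element-gene set
        gwas_hits  = {"HMOX1", "APOA5", "SLC39A14", "LPL"}   # illustrative GWAS-significant genes
        print(overlap_enrichment(nickel_set, gwas_hits))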

  12. An Integrative data mining approach to identifying Adverse ...

    EPA Pesticide Factsheets

    The Adverse Outcome Pathway (AOP) framework is a tool for making biological connections and summarizing key information across different levels of biological organization to connect biological perturbations at the molecular level to adverse outcomes for an individual or population. Computational approaches to explore and determine these connections can accelerate the assembly of AOPs. By leveraging the wealth of publicly available data covering chemical effects on biological systems, computationally predicted AOPs (cpAOPs) were assembled via data mining of high-throughput screening (HTS) in vitro data, in vivo data and other disease phenotype information. Frequent Itemset Mining (FIM) was used to find associations between the gene targets of ToxCast HTS assays and disease data from the Comparative Toxicogenomics Database (CTD) by using the chemicals as the common aggregators between datasets. The method was also used to map gene expression data to disease data from CTD. A cpAOP network was defined by considering genes and diseases as nodes and FIM associations as edges. This network contained 18,283 gene to disease associations for the ToxCast data and 110,253 for CTD gene expression. Two case studies show the value of the cpAOP network by extracting subnetworks focused either on fatty liver disease or the Aryl Hydrocarbon Receptor (AHR). The subnetwork surrounding fatty liver disease included many genes known to play a role in this disease. When querying the cpAOP
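
    A minimal sketch of the chemical-as-aggregator idea (a plain co-occurrence count rather than the published FIM workflow), using hypothetical chemical-to-gene and chemical-to-disease mappings to build a small gene-disease network.

        from collections import Counter
        from itertools import product
        import networkx as nx

        chem_genes = {"chemA": {"AHR", "CYP1A1"}, "chemB": {"PPARA"}, "chemC": {"AHR"}}
        chem_dis   = {"chemA": {"fatty liver"}, "chemB": {"fatty liver"}, "chemC": {"porphyria"}}

        # Count how many chemicals link each (gene, disease) pair.
        pair_counts = Counter()
        for chem in set(chem_genes) & set(chem_dis):
            for gene, disease in product(chem_genes[chem], chem_dis[chem]):
                pair_counts[(gene, disease)] += 1

        g = nx.Graph()
        min_support = 1                                # would be raised on real data
        for (gene, disease), n in pair_counts.items():
            if n >= min_support:
                g.add_edge(gene, disease, support=n)
        print(list(g.edges(data=True)))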

  13. EPA CHEMICAL PRIORITIZATION COMMUNITY OF PRACTICE.

    EPA Science Inventory

    IN 2005 THE NATIONAL CENTER FOR COMPUTATIONAL TOXICOLOGY (NCCT) ORGANIZED THE EPA CHEMICAL PRIORITIZATION COMMUNITY OF PRACTICE (CPCP) TO PROVIDE A FORUM FOR DISCUSSING THE UTILITY OF COMPUTATIONAL CHEMISTRY, HIGH-THROUGHPUT SCREENING (HTS) AND VARIOUS TOXICOGENOMIC TECHNOLOGIES FOR CH...

  14. ToxCast: Developing Predictive Signatures of Chemically Induced Toxicity (S)

    EPA Science Inventory

    ToxCast, the United States Environmental Protection Agency’s chemical prioritization research program, is developing methods for utilizing computational chemistry, bioactivity profiling and toxicogenomic data to predict potential for toxicity and prioritize limited testing resour...

  15. Leveraging toxicogenomics data to build predictive biomarkers supporting AOP assessment

    EPA Science Inventory

    Chemicals induce liver cancer in rodents through well characterized adverse outcome pathways (AOPs) that include molecular initiating events (MIEs). In addition to genotoxicity, these include nongenotoxic mechanisms of cytotoxicity and receptor activation (aryl hydrocarbon recept...

  16. THE TOXCAST PROGRAM FOR PRIORITIZING TOXICITY TESTING OF ENVIRONMENTAL CHEMICALS

    EPA Science Inventory

    The United States Environmental Protection Agency (EPA) is developing methods for utilizing computational chemistry, high-throughput screening (HTS) and various toxicogenomic technologies to predict potential for toxicity and prioritize limited testing resources towards chemicals...

  17. APPROACHES IN PROTEOMICS AND GENOMICS FOR ECO-TOXICOLOGY

    EPA Science Inventory

    A new area of scientific investigation, coined toxicogenomics, enables researchers to understand and study the interaction between the environment and inherited genetic characteristics. This understanding will be critical to fully appreciate the response of organisms to environm...

  18. Cross-Platform Toxicogenomics for the Prediction of Non-Genotoxic Hepatocarcinogenesis in Rat

    PubMed Central

    Metzger, Ute; Templin, Markus F.; Plummer, Simon; Ellinger-Ziegelbauer, Heidrun; Zell, Andreas

    2014-01-01

    In the area of omics profiling in toxicology, i.e. toxicogenomics, characteristic molecular profiles have previously been incorporated into prediction models for early assessment of a carcinogenic potential and mechanism-based classification of compounds. Traditionally, the biomarker signatures used for model construction were derived from individual high-throughput techniques, such as microarrays designed for monitoring global mRNA expression. In this study, we built predictive models by integrating omics data across complementary microarray platforms and introduced new concepts for modeling of pathway alterations and molecular interactions between multiple biological layers. We trained and evaluated diverse machine learning-based models, differing in the incorporated features and learning algorithms on a cross-omics dataset encompassing mRNA, miRNA, and protein expression profiles obtained from rat liver samples treated with a heterogeneous set of substances. Most of these compounds could be unambiguously classified as genotoxic carcinogens, non-genotoxic carcinogens, or non-hepatocarcinogens based on evidence from published studies. Since mixed characteristics were reported for the compounds Cyproterone acetate, Thioacetamide, and Wy-14643, we reclassified these compounds as either genotoxic or non-genotoxic carcinogens based on their molecular profiles. Evaluating our toxicogenomics models in a repeated external cross-validation procedure, we demonstrated that the prediction accuracy of our models could be increased by joining the biomarker signatures across multiple biological layers and by adding complex features derived from cross-platform integration of the omics data. Furthermore, we found that adding these features resulted in a better separation of the compound classes and a more confident reclassification of the three undefined compounds as non-genotoxic carcinogens. PMID:24830643
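
    A hedged sketch of the general modeling idea, not the published models: concatenate features from several omics layers per sample and score a classifier by repeated cross-validation; the matrices and labels below are random placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)
        n = 30                                          # samples with matching order across layers
        mrna  = rng.normal(size=(n, 50))
        mirna = rng.normal(size=(n, 20))
        prot  = rng.normal(size=(n, 10))
        y = np.array([0] * 10 + [1] * 10 + [2] * 10)    # e.g. genotoxic / non-genotoxic / non-carcinogen

        X = np.hstack([mrna, mirna, prot])              # simple cross-platform feature join
        cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
        scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)
        print(scores.mean())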

  19. Toxicogenomic analysis of the hepatic effects of perfluorooctanoic acid on rare minnows (Gobiocypris rarus)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei Yanhong; Graduate School of the Chinese Academy of Sciences, Beijing, 100080; Liu Yang

    2008-02-01

    Perfluorooctanoic acid (PFOA) is a ubiquitous environmental contaminant that has been detected in a variety of terrestrial and aquatic organisms. To assess the effects of PFOA in fish and predict its potential mode of action, a toxicogenomic approach was applied to hepatic gene expression profile analysis in male and female rare minnows (Gobiocypris rarus) using a custom cDNA microarray containing 1773 unique genes. Rare minnows were treated with continuous flow-through exposure to PFOA at concentrations of 3, 10, and 30 mg/L for 28 days. Based on the observed histopathological changes, the livers from fish exposed to 10 mg/L PFOA were selected for further hepatic gene expression analysis. A total of 124 and 171 genes were significantly altered by PFOA in males and females, respectively, of which 43 were regulated in both sexes. The affected genes are involved in multiple biological processes, including lipid metabolism and transport, hormone action, immune responses, and mitochondrial functions. PFOA exposure significantly suppressed genes involved in fatty acid biosynthesis and transport but induced genes associated with intracellular trafficking of cholesterol. Alterations in expression of genes associated with mitochondrial fatty acid β-oxidation were only observed in female rare minnows. In addition, PFOA inhibited genes responsible for thyroid hormone biosynthesis and significantly induced estrogen-responsive genes. These findings implicate PFOA in endocrine disruption. This work contributes not only to the elucidation of the potential mode of toxicity of PFOA to aquatic organisms but also to the use of toxicogenomic approaches to address issues in environmental toxicology.

  20. ARRAYS FOR BIOMONITORING ENVIRONMENTAL AND REPRODUCTIVE TOXICOLOGY

    EPA Science Inventory

    DNA arrays are receiving increasing interest as a tool for monitoring the developmental and reproductive impact of xenobiotics and other hazardous materials on human and wildlife populations. The primary tenet of toxicogenomics is that effects of environmental exposure on cellul...

  1. DEVELOPMENTAL TOXICOGENOMIC STUDIES OF PFOA AND PFOS IN MICE.

    EPA Science Inventory

    Perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS) are developmentally toxic in rodents. To better understand the mechanism(s) associated with this toxicity, we have conducted transcript profiling in mice. In an initial study, pregnant animals were dosed througho...

  2. Toxicogenomics Applied to Ecotoxicology

    EPA Science Inventory

    This chapter focuses on evaluation of the current practice of ecotoxicogenomics, less than a decade after the term was coined, as the field continues to evolve. We describe major applications of genomic approaches to define modes/mechanisms of action and derive biologically-base...

  3. Isoflurane is a suitable alternative to ether for anesthetizing rats prior to euthanasia for gene expression analysis.

    PubMed

    Nakatsu, Noriyuki; Igarashi, Yoshinobu; Aoshi, Taiki; Hamaguchi, Isao; Saito, Masumichi; Mizukami, Takuo; Momose, Haruka; Ishii, Ken J; Yamada, Hiroshi

    2017-01-01

    Diethyl ether (ether) had been widely used in Japan for anesthesia, despite its explosive properties and toxicity to both humans and animals. We had also used ether as an anesthetic when euthanizing rats for research in the Toxicogenomics Project (TGP). Because the use of ether for these purposes will likely cease, it is necessary to select an alternative anesthetic validated for consistency with existing TGP data acquired under ether anesthesia. We therefore compared two alternative anesthetic candidates, isoflurane and pentobarbital, with ether in terms of hematological findings, serum biochemical parameters, and gene expression. As a result, few differences among the three agents were observed. In hematological and serum biochemistry analyses, no significant changes were found. In gene expression analysis, four known genes were identified as differentially expressed in the liver of rats anesthetized with ether, isoflurane, or pentobarbital. However, no significant relationships were detected using gene ontology, pathway, or gene enrichment analyses in DAVID and TargetMine. Surprisingly, although the lung was expected to be affected by administration via inhalation, only one differentially expressed gene was detected in the lung. Taken together, our data indicate that there are no significant differences among ether, isoflurane, and pentobarbital with respect to effects on hematological parameters, serum biochemistry parameters, and gene expression. Based on its minimal effect on existing data and its safety profile for humans and animals, we suggest isoflurane as a suitable alternative anesthetic for rat euthanasia in toxicogenomics analysis.

  4. CELLULAR UPTAKE AND TOXICITY OF DENDRITIC NANOMATERIALS: AN INTEGRATED PHYSICOCHEMICAL AND TOXICOGENOMICS STUDY

    EPA Science Inventory

    The successful completion of this project is expected to provide industry with critical data and predictive tools needed to assess the health and environmental impact of dendritic nanomaterials such as EDA core PAMAM dendrimers.

  5. EPA'S TOXCAST PROGRAM FOR PREDICTING HAZARD AND PRIORITIZING TOXICITY TESTING OF ENVIRONMENTAL CHEMICALS

    EPA Science Inventory

    EPA is developing methods for utilizing computational chemistry, high-throughput screening (HTS) and various toxicogenomic technologies to predict potential for toxicity and prioritize limited testing resources towards chemicals that likely represent the greatest hazard to human ...

  6. Pathway-Based Concentration Response Profiles from Toxicogenomics Data

    EPA Science Inventory

    Microarray analysis of gene expression of in vitro systems could be a powerful tool for assessing chemical hazard. Differentially expressed genes specific to cells, chemicals, and concentrations can be organized into molecular pathways that inform mode of action. An important par...

  7. TOXICOGENOMIC DISSECTION OF RODENT LIVER TRANSCRIPT PROFILES AFTER EXPOSURE TO PERFLUOROALKYL ACIDS

    EPA Science Inventory

    Exposure to peroxisome proliferator chemicals (PPC) leads to alterations in the balance between hepatocyte growth and apoptosis, increases in liver to body weight ratios and liver tumors. The perfluoroalkyl acids including perfluorooctanoate (PFOA) and perfluorooctane sulfonate (...

  8. Toxicogenomic profiling of perfluorononanoic acid in wild-type and PPARa-null mice

    EPA Science Inventory

    Perfluorononanoic acid (PFNA) is a ubiquitous environmental contaminant and a developmental toxicant in laboratory animals. Like other perfluoroalkyl acids (PFAAs) such as perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS), PFNA is a known activator of peroxisome prol...

  9. Toxicogenomic Effects Common to Triazole Antifungals and Conserved Between Rats and Humans

    EPA Science Inventory

    The triazole antifungals myclobutanil, propiconazole and triadimefon cause varying degrees of hepatic toxicity and disrupt steroid hormone homeostasis in rodent in vivo models. To identify biological pathways consistently modulated across multiple time-points and various study d...

  10. THE MAQC PROJECT: ESTABLISHING QC METRICS AND THRESHOLDS FOR MICROARRAY QUALITY CONTROL

    EPA Science Inventory

    Microarrays represent a core technology in pharmacogenomics and toxicogenomics; however, before this technology can successfully and reliably be applied in clinical practice and regulatory decision-making, standards and quality measures need to be developed. The Microarray Qualit...

  11. DEVELOPMENT OF MICROARRAYS AS A TOOL FOR DISCOVERING ENVIRONMENTAL EXPOSURE INDICATORS

    EPA Science Inventory

    Toxicogenomics includes research to identify differential gene expression in laboratory and field animals exposed to toxicants, and ultimately, to link the earliest indicators of exposure to adverse effects in organisms and populations. The USEPA National Exposure Research Labor...

  12. Toxicogenomic identification of biomarkers of acute respiratory exposure to sensitizing agents

    EPA Science Inventory

    Allergy induction requires multiple exposures to an agent. Therefore the development of high-throughput or in vitro assays for effective screening of potential sensitizers will require the identification of biomarkers. The goal of this preliminary study was to identify potential ...

  13. Toxicogenomic identification of biomarkers of acute respiratory exposure to sensitizing agents

    EPA Science Inventory

    Allergy induction requires multiple exposures to an agent. Therefore the development of high-throughput or in vitro assays for effective screening of potential sensitizers will require the identification of biomarkers. The goal of this preliminary study was to identify potential ...

  14. APPLICATION OF GENOMICS TO REPRODUCTIVE TOXICOLOGY: WORKING FROM RESEARCH TOWARDS RISK ASSESSMENT

    EPA Science Inventory

    Genomic technologies are available to examine the expression of thousands of genes simultaneously. These technologies represent a paradigm shift from single-gene approaches fundamentally altering the practice of toxicology. The goal of toxicogenomic studies is to improve human ...

  15. Temporal and Dose-response Pathway Analysis for Predicting Chronic Chemical Toxicity

    EPA Science Inventory

    Current challenges facing chemical risk assessment are the time and resources required to meet the data standards necessary for a published assessment and the incorporation of modern biological information. The integration of toxicogenomics into the risk assessment paradigm may ...

  16. THE FUTURE OF TOXICOGENOMICS

    EPA Science Inventory

    Toxicology has classically been seen as the science of poisons. In the modern world, however, it has evolved into a composite of related, but distinct disciplines, which together seek to understand how chemicals of all kinds - both man-made and natural - affect human health and t...

  17. THE MAQC (MICROARRAY QUALITY CONTROL) PROJECT: CALIBRATED RNA SAMPLES, REFERENCE DATASETS, AND QC METRICS AND THRESHOLDS

    EPA Science Inventory

    FDA's Critical Path Initiative identifies pharmacogenomics and toxicogenomics as key opportunities in advancing medical product development and personalized medicine, and the Guidance for Industry: Pharmacogenomic Data Submissions has been released. Microarrays represent a co...

  18. ExpoCast: Exposure Science for Prioritization and Toxicity Testing (S)

    EPA Science Inventory

    The US EPA is completing the Phase I pilot for a chemical prioritization research program, called ToxCast. Here EPA is developing methods for using computational chemistry, high-throughput screening, and toxicogenomic technologies to predict potential toxicity and prioritize limi...

  19. ExpoCast: Exposure Science for Prioritization and Toxicity Testing

    EPA Science Inventory

    The US EPA is completing the Phase I pilot for a chemical prioritization research program, called ToxCast™. Here EPA is developing methods for using computational chemistry, high-throughput screening, and toxicogenomic technologies to predict potential toxicity and prioritize l...

  20. Customizing the Connectivity Map Approach for Functional Evaluation in Toxicogenomics Studies (SOT)

    EPA Science Inventory

    Evaluating effects on the transcriptome can provide insight on putative chemical-specific mechanisms of action (MOAs). With whole genome transcriptomics technologies becoming more amenable to high-throughput screening, libraries of chemicals can be evaluated in vitro to produce l...

  1. Application of Toxicogenomics in Decision Making in Ecological and Human Health Risk Assessment

    EPA Science Inventory

    Uncertainties in risk assessment arise from sparse or inadequate data including gaps in our understanding of mode of action, the exposure-dose-response pathway, cross-species toxicokinetic and toxicodynamic information, and/or exposure data. There is an expectation that toxicogen...

  2. ToxCast: Developing Predictive Signatures of Chemically Induced Toxicity (Developing Predictive Bioactivity Signatures from ToxCast's HTS Data)

    EPA Science Inventory

    ToxCast, the United States Environmental Protection Agency’s chemical prioritization research program, is developing methods for utilizing computational chemistry, bioactivity profiling and toxicogenomic data to predict potential for toxicity and prioritize limited testing resour...

  3. Toward a Checklist for Exchange and Interpretation of Data froma Toxicology Study

    EPA Science Inventory

    With the advent of toxicogenomics came the need to share data across interdisciplinary teams and to deposit data associated with publications into public data repositories. Within a single institution, many variables associated with a study are standardized, for instance diet, an...

  4. Sources of variation in baseline gene expression levels from toxicogenomics study control animals

    EPA Science Inventory

    The use of gene expression profiling in both clinical and laboratory settings would be enhanced by better characterization of variance due to individual, environmental, and technical factors. Meta-analysis of microarray data from untreated or vehicle-treated animals within the con...

  5. Gene Expression Profiling in Liver and Testis of Rats to Characterize the Toxicity of Triazole Fungicides

    EPA Science Inventory

    Four triazole fungicides were studied using toxicogenomic techniques to identify potential mechanisms of action. Adult male Sprague-Dawley rats were dosed for 14 days by gavage with fluconazole, myclobutanil, propiconazole, or triadimefon. Following exposure, serum was collected ...

  6. GENE EXPRESSION PROFILING IN LIVER AND TESTIS OF RATS TO CHARACTERIZE THE TOXICITY OF TRIAZOLE FUNGICIDES.

    EPA Science Inventory

    Four triazole fungicides were studied using toxicogenomic techniques to identify potential mechanisms of action. Adult male Sprague-Dawley rats were dosed for 14 days by gavage with fluconazole, myclobutanil, propiconazole, or triadimefon. Following exposure, serum was collected ...

  7. Transcriptomic Dose-Response Analysis for Mode of Action and Risk Assessment

    EPA Science Inventory

    Microarray and RNA-seq technologies can play an important role in assessing the health risks associated with environmental exposures. The utility of gene expression data to predict hazard has been well documented. Early toxicogenomics studies used relatively high, single doses w...

  8. TOXICOGENOMICS AS A TOOL TO ASSESS EXPOSURE OF FISH TO ENVIRONMENTAL POLLUTANTS

    EPA Science Inventory

    Molecular biological techniques such as gene arrays and quantitative real-time PCR are becoming important tools to study alterations in normal gene expression in fish and other wildlife exposed to such pollutants as endocrine disrupting chemicals (EDCs). An important function fo...

  9. Toxicogenomic responses of nanotoxicity in Daphnia magna exposed to silver nitrate and coated silver nanoparticles

    EPA Science Inventory

    Applications for silver nanomaterials in consumer products are rapidly expanding, creating an urgent need for toxicological examination of the exposure potential and ecological effects of silver nanoparticles (AgNPs). The integration of genomic techniques into environmental toxic...

  10. Data mining reveals a network of early-response genes as a consensus signature of drug-induced in vitro and in vivo toxicity.

    PubMed

    Zhang, J D; Berntenis, N; Roth, A; Ebeling, M

    2014-06-01

    Gene signatures of drug-induced toxicity are of broad interest, but they are often identified from small-scale, single-time point experiments, and are therefore of limited applicability. To address this issue, we performed multivariate analysis of gene expression, cell-based assays, and histopathological data in the TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation system) database. Data mining highlights four genes-EGR1, ATF3, GDF15 and FGF21-that are induced 2 h after drug administration in human and rat primary hepatocytes poised to eventually undergo cytotoxicity-induced cell death. Modelling and simulation reveals that these early stress-response genes form a functional network with evolutionarily conserved structure and intrinsic dynamics. This is underlined by the fact that early induction of this network in vivo predicts drug-induced liver and kidney pathology with high accuracy. Our findings demonstrate the value of early gene-expression signatures in predicting and understanding compound-induced toxicity. The identified network can empower first-line tests that reduce animal use and costs of safety evaluation.
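
    A minimal sketch of scoring samples on the four early stress-response genes named above and using that score to predict later pathology; the expression values and labels are synthetic placeholders, not TG-GATEs data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        genes = ["EGR1", "ATF3", "GDF15", "FGF21"]      # column order of expr_2h
        expr_2h = np.array([[2.1, 1.8, 1.5, 1.2],       # rows = samples, columns = the four genes
                            [0.1, 0.2, 0.0, 0.3],       # (hypothetical log2 fold changes at 2 h)
                            [1.9, 2.2, 1.1, 0.9],
                            [0.0, 0.1, 0.2, 0.1]])
        pathology = np.array([1, 0, 1, 0])              # 1 = liver/kidney pathology seen later

        z = (expr_2h - expr_2h.mean(axis=0)) / expr_2h.std(axis=0)
        signature_score = z.mean(axis=1).reshape(-1, 1) # one summary score per sample
        clf = LogisticRegression().fit(signature_score, pathology)
        print(clf.predict(signature_score))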

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirode, Mitsuhiro; Ono, Atsushi; Miyagishima, Toshikazu

    We have constructed a large-scale transcriptome database of rat liver treated with various drugs. In an effort to identify a biomarker for diagnosis of hepatic phospholipidosis, we extracted 78 probe sets of rat hepatic genes from data of 5 drugs, amiodarone, amitriptyline, clomipramine, imipramine, and ketoconazole, which actually induced this phenotype. Principal component analysis (PCA) using these probes clearly separated dose- and time-dependent clusters of treated groups from their controls. Moreover, 6 drugs (chloramphenicol, chlorpromazine, gentamicin, perhexiline, promethazine, and tamoxifen), which were reported to cause phospholipidosis but judged as negative by histopathological examination, were designated as positive by PCA using these probe sets. Eight drugs (carbon tetrachloride, coumarin, tetracycline, metformin, hydroxyzine, diltiazem, 2-bromoethylamine, and ethionamide), which showed phospholipidosis-like vacuolar formation in the histopathology, could be distinguished from the typical drugs causing phospholipidosis. Moreover, the possible induction of phospholipidosis was predictable by the expression of these genes 24 h after a single administration for some of the drugs. We conclude that these identified 78 probe sets could be useful for diagnosis of phospholipidosis, and that toxicogenomics would be a promising approach for prediction of this type of toxicity.
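
    A minimal sketch of the PCA step, assuming a samples-by-probes matrix restricted to the 78 diagnostic probe sets; the expression values here are synthetic.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        control = rng.normal(0.0, 0.3, size=(8, 78))
        treated = rng.normal(1.0, 0.3, size=(8, 78))    # shifted to mimic a phospholipidosis signature
        expr = np.vstack([control, treated])
        labels = ["control"] * 8 + ["treated"] * 8

        scores = PCA(n_components=2).fit_transform(expr)
        for lab, (pc1, pc2) in zip(labels, scores):
            print(f"{lab:8s} PC1={pc1:7.2f} PC2={pc2:7.2f}")   # groups separate along PC1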

  12. Collaborative biocuration--text-mining development task for document prioritization for curation.

    PubMed

    Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J

    2012-01-01

    The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation is a community-wide effort for evaluating text mining and information extraction systems for the biological domain. The 'BioCreative Workshop 2012' subcommittee identified three areas, or tracks, that comprised independent, but complementary aspects of data curation in which they sought community input: literature triage (Track I); curation workflow (Track II) and text mining/natural language processing (NLP) systems (Track III). Track I participants were invited to develop tools or systems that would effectively triage and prioritize articles for curation and present results in a prototype web interface. Training and test datasets were derived from the Comparative Toxicogenomics Database (CTD; http://ctdbase.org) and consisted of manuscripts from which chemical-gene-disease data were manually curated. A total of seven groups participated in Track I. For the triage component, the effectiveness of participant systems was measured by aggregate gene, disease and chemical 'named-entity recognition' (NER) across articles; the effectiveness of 'information retrieval' (IR) was also measured based on 'mean average precision' (MAP). Top recall scores for gene, disease and chemical NER were 49, 65 and 82%, respectively; the top MAP score was 80%. Each participating group also developed a prototype web interface; these interfaces were evaluated based on functionality and ease-of-use by CTD's biocuration project manager. In this article, we present a detailed description of the challenge and a summary of the results.
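
    A short sketch of the mean average precision (MAP) metric used to score the ranked article lists; the per-query relevance flags below are hypothetical.

        def average_precision(relevant_flags):
            # relevant_flags: ranked list of booleans, True = article was truly curatable
            hits, precisions = 0, []
            for i, rel in enumerate(relevant_flags, start=1):
                if rel:
                    hits += 1
                    precisions.append(hits / i)
            return sum(precisions) / hits if hits else 0.0

        def mean_average_precision(rankings):
            return sum(average_precision(r) for r in rankings.values()) / len(rankings)

        rankings = {"urethane":   [True, False, True, True, False],
                    "phenacetin": [False, True, False, False, True]}
        print(round(mean_average_precision(rankings), 3))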

  13. Integrating Omic Technologies into Aquatic Ecological Risk Assessment and Environmental Monitoring: Hurdles, Achievements and Future Outlook

    EPA Science Inventory

    In this commentary we present the findings from an international consortium on fish toxicogenomics sponsored by the UK Natural Environment Research Council (NERC) with an objective of moving omic technologies into chemical risk assessment and environmental monitoring. Objectiv...

  14. Integrating Omic Technologies into Aquatic Ecological Risk Assessment and Environmental Monitoring: Hurdles, Achievements and Future Outlook

    EPA Science Inventory

    Background: In this commentary we present the findings from an international consortium on fish toxicogenomics sponsored by the UK Natural Environment Research Council (NERC) with a remit of moving omic technologies into chemical risk assessment and environmental monitoring. Obj...

  15. Ecotoxicogenomics to Support Ecological Risk Assessment: A Case Study with Bisphenol A in Fish

    EPA Science Inventory

    Toxicogenomic approaches are being increasingly applied in the field of ecotoxicology. Given the growing availability of ecotoxicogenomic data, the Agency and the broader scientific community are actively engaged in considering how best to use those data to support ecological ris...

  16. Utilizing Toxicogenomic Data to Understand Chemical Mechanism of Action in Risk Assessment

    EPA Science Inventory

    A recent National Academy of Sciences report pointed to the strong potential for genomic technologies to contribute to the risk assessment process. The report, however, also acknowledged that neither has the full impact of genomic technology been realized nor has it been broadly ...

  17. SOURCES OF VARIABILITY IN BASELINE GENE EXPRESSION IN RAT LIVER AND KIDNEY

    EPA Science Inventory

    Toxicogenomic studies are typically variable in design, but the impact of variations in study design and conduct on control animal gene expression has not been well characterized. A working group of the Health and Environmental Sciences Institute (HESI) Technical Committee on the...

  18. Wiki-based Data Management System for Toxicogenomics

    EPA Science Inventory

    We are developing a data management system to enable systems-based toxicology at the US EPA. This is built upon the WikiLIMS platform and is capable of housing not just genomics data but also a wide variety of toxicology data and associated experimental design information. Thi...

  19. Characteristics of genomic signatures derived using univariate methods and mechanistically anchored functional descriptors for predicting drug- and xenobiotic-induced nephrotoxicity.

    PubMed

    Shi, Weiwei; Bugrim, Andrej; Nikolsky, Yuri; Nikolskya, Tatiana; Brennan, Richard J

    2008-01-01

    The ideal toxicity biomarker combines prediction (it is detected prior to traditional pathological signs of injury), accuracy (high sensitivity and specificity), and a mechanistic relationship to the endpoint measured (biological relevance). Gene expression-based toxicity biomarkers ("signatures") have shown good predictive power and accuracy, but are difficult to interpret biologically. We have compared different statistical methods of feature selection with knowledge-based approaches, using GeneGo's database of canonical pathway maps, to generate gene sets for the classification of renal tubule toxicity. The gene set selection algorithms include four univariate analyses: t-statistics, fold-change, B-statistics, and RankProd, and their combination and overlap for the identification of differentially expressed probes. Enrichment analysis following the results of the four univariate analyses, Hotelling T-square test, and, finally, out-of-bag selection, a variant of cross-validation, were used to identify canonical pathway maps (sets of genes coordinately involved in key biological processes) with classification power. Differentially expressed genes identified by the different statistical univariate analyses all generated reasonably performing classifiers of tubule toxicity. Maps identified by enrichment analysis or Hotelling T-square had lower classification power, but highlighted perturbed lipid homeostasis as a common discriminator of nephrotoxic treatments. The out-of-bag method yielded the best functionally integrated classifier. The map "ephrins signaling" performed comparably to a classifier derived using sparse linear programming, a machine learning algorithm, and represents a signaling network specifically involved in renal tubule development and integrity. Such functional descriptors of toxicity promise to better integrate predictive toxicogenomics with mechanistic analysis, facilitating the interpretation and risk assessment of predictive genomic investigations.
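
    A hedged sketch of two of the univariate selection steps mentioned above (t-statistic and fold change) and their overlap, run on synthetic expression matrices.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(2)
        toxic, nontoxic = rng.normal(size=(6, 200)), rng.normal(size=(6, 200))
        toxic[:, :20] += 1.5                            # spike in 20 "responsive" probes

        t, p = ttest_ind(toxic, nontoxic, axis=0)
        log_fc = toxic.mean(axis=0) - nontoxic.mean(axis=0)   # data assumed already log-scaled

        by_p  = set(np.where(p < 0.01)[0])
        by_fc = set(np.where(np.abs(log_fc) > 1.0)[0])
        selected = sorted(by_p & by_fc)                 # probes passing both filters
        print(len(selected), selected[:10])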

  20. An Approach for Integrating Toxicogenomic Data in Risk Assessment: The Dibutyl Phthalate Case Study

    EPA Science Inventory

    An approach for evaluating and integrating genomic data in chemical risk assessment was developed based on the lessons learned from performing a case study for the chemical dibutyl phthalate. A case study prototype approach was first developed in accordance with EPA guidance and ...

  1. TOXICOGENOMIC ANALYSIS OF TOLUENE EXPOSURE AT 3 AGES IN BROWN NORWAY RATS.

    EPA Science Inventory

    A major concern in assessing toxicity to environmental exposures is differential susceptibility in subsets of the population. Aging adults, who comprise the fastest growing segment of the population, may possess a greater sensitivity due to changes in metabol...

  2. NEW TECHNOLOGIES TO SOLVE OLD PROBLEMS AND ADDRESS ISSUES IN RISK ASSESSMENT

    EPA Science Inventory

    Appropriate utilization of data is an ongoing concern of the regulated industries and the agencies charged with assessing safety or risk. An area of current interest is the possibility that toxicogenomics will enhance our ability to develop higher or high-throughput models for pr...

  3. Comparison of L1000 and Affymetrix Microarray for In Vitro Concentration-Response Gene Expression Profiling (SOT)

    EPA Science Inventory

    Advances in high-throughput screening technologies and in vitro systems have opened doors for cost-efficient evaluation of chemical effects on a diversity of biological endpoints. However, toxicogenomics platforms remain too costly to evaluate large libraries of chemicals in conc...

  4. Unlocking the ‘Omics Archive: Enabling Toxicogenomic/Proteomic Investigation from Archival Samples

    EPA Science Inventory

    Formalin fixation and paraffin embedding (FFPE) is a cross-industry gold standard for preparing nonclinical and clinical samples for histopathological assessment which preserves tissue architecture and enables storage of tissue in archival banks. These archival banks are an untap...

  5. BIOMONITORING THE TOXICOGENOMIC RESPONSE TO ENDOCRINE DISRUPTING CHEMICALS IN HUMANS, LABORATORY SPECIES AND WILDLIFE

    EPA Science Inventory

    With the advent of sequence information for entire eukaryotic genomes, it is now possible to analyze gene expression on a genomic scale. The primary tool for genomic analysis of gene expression is the gene microarray. We have used commercially available and custom cDNA microarray...

  6. KIDNEY TOXICOGENOMICS OF CHRONIC POTASSIUM BROMATE EXPOSURE IN F344 MALE RAT

    EPA Science Inventory

    Potassium bromate (KBrO3), used in both the food and cosmetics industries and formed as a drinking water disinfection by-product, is a nephrotoxic compound and rodent carcinogen. To gain insight into the carcinogenic mechanism of action and provide possible biomarkers of KBrO3 exposure, the...

  7. KIDNEY TOXICOGENOMICS OF ACUTE SODIUM AND POTASSIUM BROMATE EXPOSURE IN F344 MALE RAT

    EPA Science Inventory

    Bromate, used in both the food and cosmetics industry, is a drinking water disinfection by-product that is nephrotoxic and carcinogenic to rodents. To gain insight into the carcinogenic mechanism of action, identify possible biomarkers of exposure, and determine if the cation, po...

  8. DIFFERENTIAL EXPRESSION OF RETINOIC ACID BIOSYNTHETIC AND METABOLISM GENES IN LIVERS FROM MICE TREATED WITH HEPATOTUMORIGENIC AND NON-HEPATOTUMORIGENIC CONAZOLES

    EPA Science Inventory

    Conazoles are fungicides used in crop protection and as pharmaceuticals. Triadimefon and propiconazole are hepatotumorigenic in mice, while myclobutanil is not. Previous toxicogenomic studies suggest that alteration of the retinoic acid metabolism pathway may be a key event in ...

  9. Three Conazoles Increase Hepatic Microsomal Retinoic Acid Metabolism and Decrease Mouse Hepatic Retinoic Acid Levels In Vivo

    EPA Science Inventory

    Conazoles are fungicides used in agriculture and as pharmaceuticals. In a previous toxicogenomic study of triazole-containing conazoles we found gene expression changes consistent with the alteration of the metabolism of all trans-retinoic acid (atRA), a vitamin A metabolite with...

  10. Recommended approaches in the application of toxicogenomics to derive points of departure for chemical risk assessment

    EPA Science Inventory

    Only a fraction of chemicals in commerce have been fully assessed for their potential hazards to human health due to difficulties involved in conventional regulatory tests. It has recently been proposed that quantitative transcriptomic data can be used to determine bench...

  11. Use of Toxicogenomic Data at the US EPA to Inform the Cancer Assessment of the Fungicide Propiconazole

    EPA Science Inventory

    The Office of Pesticide Programs (OPP) routinely utilizes mode of action (MOA) data when available for pesticide cancer risk assessment. A MOA analysis incorporates data from required toxicology studies and supplemental mechanistic data. These data are evaluated to identify a ...

  12. Application of Toxicogenomics to Develop a Mode of Action for a Carcinogenic Conazole Fungicide

    EPA Science Inventory

    Conazoles are a common class of fungicides used to control fungal growth in the environment and in humans. Some of these agents have adverse toxicological outcomes in mammals as carcinogens, reproductive toxins, and hepatotoxins. We coupled the results from genomic analyses with ...

  13. CHARACTERIZATION OF CYPS IN THE METABOLISM OF ALL TRANS RETINOIC ACID BY LIVER MICROSOMES FROM MICE TREATED WITH CONAZOLES

    EPA Science Inventory

    Conazoles are fungicides used in crop protection and as pharmaceuticals. Triadimefon and propiconazole are hepatotumorigenic in mice, while myclobutanil is not. Previous toxicogenomic studies suggest that alteration of the retinoic acid metabolism pathway may be involved in conazole-...

  14. Toxicogenomic Dissection of the Perfluorooctanoic Acid Transcript Profile in Mouse Liver: Evidence for the Involvement of Nuclear Receptors PPARα and CAR

    EPA Science Inventory

    A number of perfluorinated alkyl acids including perfluorooctanoic acid (PFOA) elicit effects similar to peroxisome proliferator chemicals (PPC) in mouse and rat liver. There is strong evidence that PPC cause many of their effects linked to liver cancer through the nuclear recep...

  15. Toxicogenomic Dissection of the Perfluorooctanoic Acid Transcript Profile in Mouse Liver: Evidence for Involvement of the Nuclear Receptors PPARα and CAR

    EPA Science Inventory

    A number of perfluorinated alkyl acids including perfluorooctanoic acid (PFOA) elicit effects similar to peroxisome proliferator chemicals (PPC) in mouse and rat liver. There is strong evidence that PPC cause many of their effects related to liver carcinogenesis through the nucle...

  16. Analysis of baseline gene expression levels from toxicogenomics study control animals to identify sources of variation and predict responses to chemicals

    EPA Science Inventory

    The use of gene expression profiling to predict chemical mode of action would be enhanced by better characterization of variance due to individual, environmental, and technical factors. Meta-analysis of microarray data from untreated or vehicle-treated animals within the control ...

  17. Toxicogenomic assessment of 6-OH-BDE47 induced developmental toxicity in chicken embryo

    EPA Science Inventory

    Hydroxylated and methoxylated polybrominated diphenyl ethers (OH-/MeO-PBDEs) are analogs of PBDEs with hundreds of possible structures, and many of them can activate the aryl hydrocarbon receptor (AhR); however, in vivo evidence on the toxicity of OH-/MeO-PBDEs is still very limi...

  18. Functional toxicogenomic assessment of triclosan in human HepG2 cells using genome-wide CRISPR-Cas9 screen

    EPA Science Inventory

    Thousands of chemicals for which limited toxicological data are available are used and then detected in humans and the environment. Rapid and cost-effective approaches for assessing the toxicological properties of chemicals are needed. We used CRISPR-Cas9 functional genomic scree...

  19. ALTERATIONS IN ALL TRANS RETINOIC ACID METABOLISM IN LIVER MICROSOMES FROM MICE TREATED WITH HEPATOTUMORIGENIC AND NON-HEPATOTUMORIGENIC CONAZOLES

    EPA Science Inventory

    Conazoles are fungicides used in crop protection and as pharmaceuticals. Triadimefon and propiconazole are hepatotumorigenic in mice, while myclobutanil is not. Previous toxicogenomic studies suggest that alteration of the retinoic acid metabolism pathway may be a key event in co...

  20. TOXICOGENOMIC ANALYSIS INCORPORATING OPERON-TRANSCRIPTIONAL COUPLING AND TOXICANT CONCENTRATION-EXPRESSION RESPONSE: Analysis of MX-Treated Salmonella

    EPA Science Inventory

    What is the study? This study is the first to use microarray analysis in the Ames strains of Salmonella. The microarray chips were custom-designed for this study and are not commercially available, and we evaluated the well-studied drinking water mutagen, MX. Because much inform...

  1. Editor's Highlight: Off-Target Effects of Neuroleptics and Antidepressants on Saccharomyces cerevisiae.

    PubMed

    Caldara, Marina; Graziano, Sara; Gullì, Mariolina; Cadonici, Stefania; Marmiroli, Nelson

    2017-04-01

    Over the past years, the use of antidepressants and neuroleptics has steadily increased. Although incredibly useful to treat disorders like depression, schizophrenia, epilepsy, or mental retardation, these drugs display many side effects. Toxicogenomic studies aim to limit this problem by trying to identify cellular targets and off-targets of medical compounds. The baker's yeast Saccharomyces cerevisiae has been shown to be a key player in this approach, as it represents an incredible toolbox for the dissection of complex biological processes. Moreover, the evolutionary conservation of many pathways allows the translation of yeast data to the human system. In this paper, particular attention was paid to chlorpromazine, as it is still one of the most widely used drugs in therapy. The results of a toxicogenomic screening performed on a yeast mutant collection treated with chlorpromazine were instrumental in identifying a set of genes for further analyses. For this purpose, a multidisciplinary approach was used based on growth phenotype identification, Gene Ontology search, and network analysis. Then, the impacts of three antidepressants (imipramine, doxepin, and nortriptyline) and three neuroleptics (promazine, chlorpromazine, and promethazine) on S. cerevisiae were compared through physiological analyses, microscopy characterization, and transcriptomic studies. The data highlight key differences between neuroleptics and antidepressants, but also between the individual molecules. By performing a network analysis on the human homologous genes, it emerged that genes and proteins involved in the Notch pathway are possible off-targets of these molecules, along with key regulatory proteins. © The Author 2017. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. An approach for integrating toxicogenomic data in risk assessment: The dibutyl phthalate case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Euling, Susan Y., E-mail: euling.susan@epa.gov; Thompson, Chad M.; Chiu, Weihsueh A.

    An approach for evaluating and integrating genomic data in chemical risk assessment was developed based on the lessons learned from performing a case study for the chemical dibutyl phthalate. A case study prototype approach was first developed in accordance with EPA guidance and recommendations of the scientific community. Dibutyl phthalate (DBP) was selected for the case study exercise. The scoping phase of the dibutyl phthalate case study was conducted by considering the available DBP genomic data, taken together with the entire data set, for whether they could inform various risk assessment aspects, such as toxicodynamics, toxicokinetics, and dose–response. A description of weighing the available dibutyl phthalate data set for utility in risk assessment provides an example for considering genomic data for future chemical assessments. As a result of conducting the scoping process, two questions—Do the DBP toxicogenomic data inform 1) the mechanisms or modes of action?, and 2) the interspecies differences in toxicodynamics?—were selected to focus the case study exercise. Principles of the general approach include considering the genomics data in conjunction with all other data to determine their ability to inform the various qualitative and/or quantitative aspects of risk assessment, and evaluating the relationship between the available genomic and toxicity outcome data with respect to study comparability and phenotypic anchoring. Based on experience from the DBP case study, recommendations and a general approach for integrating genomic data in chemical assessment were developed to advance the broader effort to utilize 21st century data in risk assessment. - Highlights: • Performed DBP case study for integrating genomic data in risk assessment • Present approach for considering genomic data in chemical risk assessment • Present recommendations for use of genomic data in chemical risk assessment.

  3. Discovery of Transcriptional Targets Regulated by Nuclear Receptors Using a Probabilistic Graphical Model

    PubMed Central

    Lee, Mikyung; Huang, Ruili; Tong, Weida

    2016-01-01

    Nuclear receptors (NRs) are ligand-activated transcriptional regulators that play vital roles in key biological processes such as growth, differentiation, metabolism, reproduction, and morphogenesis. Disruption of NRs can result in adverse health effects such as NR-mediated endocrine disruption. A comprehensive understanding of core transcriptional targets regulated by NRs helps to elucidate their key biological processes in both toxicological and therapeutic aspects. In this study, we applied a probabilistic graphical model to identify the transcriptional targets of NRs and the biological processes they govern. The Tox21 program profiled a collection of approximately 10,000 environmental chemicals and drugs against a panel of human NRs in a quantitative high-throughput screening format for their NR disruption potential. The Japanese Toxicogenomics Project, one of the most comprehensive efforts in the field of toxicogenomics, generated large-scale gene expression profiles on the effects of 131 compounds (in its first phase of study) at various doses and treatment durations, and their combinations. We applied an author-topic model to these two toxicological datasets, which together comprise 11 NRs run in agonist and/or antagonist mode (18 assays total) and 203 in vitro human gene expression profiles connected by 52 shared drugs. As a result, a set of clusters (topics), each consisting of a set of NRs and their associated target genes, was determined. Various transcriptional targets of the NRs were identified by assays run in either agonist or antagonist mode. Our results were validated by functional analysis and compared with TRANSFAC data. In summary, our approach resulted in effective identification of associated/affected NRs and their target genes, providing biologically meaningful hypotheses embedded in their relationships. PMID:26643261
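
    A heavily hedged sketch of the author-topic idea, using gensim's AuthorTopicModel as a stand-in; the published analysis may differ in preprocessing and parameters, and all gene and assay names below are illustrative.

        from gensim.corpora import Dictionary
        from gensim.models import AuthorTopicModel

        # Each "document" is the bag of genes responsive to one compound; each "author"
        # is an NR assay in which that compound was active.
        docs = [["CYP1A1", "CYP1B1", "AHRR"],           # compound 1
                ["CYP3A4", "ABCB1"],                    # compound 2
                ["CYP1A1", "AHRR", "TIPARP"]]           # compound 3
        author2doc = {"AhR_agonist": [0, 2], "PXR_agonist": [1]}

        dictionary = Dictionary(docs)
        corpus = [dictionary.doc2bow(d) for d in docs]
        model = AuthorTopicModel(corpus=corpus, num_topics=2, id2word=dictionary,
                                 author2doc=author2doc)
        print(model.get_author_topics("AhR_agonist"))   # topic weights for this NR "author"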

  4. Developing Toxicogenomics as a Research Tool by Applying Benchmark Dose-Response Modeling to inform Chemical Mode of Action and Tumorigenic Potency

    EPA Science Inventory

    Results of global gene expression profiling after short-term exposures can be used to inform tumorigenic potency and chemical mode of action (MOA) and thus serve as a strategy to prioritize future or data-poor chemicals for further evaluation. This compilation of cas...

  5. An Approach to Using Toxicogenomic Data in U.S. EPA Human Health Risk Assessments: A Dibutyl Phthalate (Dbp) Case Study (External Review Draft)

    EPA Science Inventory

    This draft report is a description of an approach to evaluate genomic data for use in risk assessment and a case study to illustrate the approach. The dibutyl phthalate (DBP) case study example focuses on male reproductive developmental effects and the qualitative application of...

  6. Evaluation of In Vitro Biotransformation Using HepaRG Cells to Improve High-Throughput Chemical Hazard Prediction: A Toxicogenomics Analysis (SOT)

    EPA Science Inventory

    The US EPA’s ToxCast program has generated a wealth of data in >600 in vitro assays on a library of 1060 environmentally relevant chemicals and failed pharmaceuticals to facilitate hazard identification. An inherent criticism of many in vitro-based strategies is the inability of a...

  7. CHEMICAL EFFECTS IN BIOLOGICAL SYSTEMS – DATA DICTIONARY (CEBS-DD): A COMPENDIUM OF TERMS FOR THE CAPTURE AND INTEGRATION OF BIOLOGICAL STUDY DESIGN DESCRIPTION, CONVENTIONAL PHENOTYPES AND ‘OMICS’ DATA

    EPA Science Inventory

    A critical component in the design of the Chemical Effects in Biological Systems (CEBS) Knowledgebase is a strategy to capture toxicogenomics study protocols and the toxicity endpoint data (clinical pathology and histopathology). A Study is generally an experiment carried out du...

  8. The toxicological application of transcriptomics and epigenomics in zebrafish and other teleosts.

    PubMed

    Williams, Tim D; Mirbahai, Leda; Chipman, J Kevin

    2014-03-01

    Zebrafish (Danio rerio) is one of a number of teleost fish species frequently employed in toxicology. Toxicogenomics determines global transcriptomic responses to chemical exposures and can predict their effects. It has been applied successfully within aquatic toxicology to assist in chemical testing, determination of mechanisms and environmental monitoring. Moreover, the related field of toxico-epigenomics, which determines chemical-induced changes in DNA methylation, histone modifications and micro-RNA expression, is emerging as a valuable contribution to understanding mechanisms of both adaptive and adverse responses. Zebrafish has proven a useful and convenient model species for both transcriptomic and epigenetic toxicological studies. Despite zebrafish's dominance in other areas of fish biology, alternative fish species are used extensively in toxicogenomics. The main reason for this is that environmental monitoring generally focuses on species native to the region of interest. We are starting to see advances in the integration of high-throughput screening, omics techniques and bioinformatics together with more traditional indicator endpoints that are relevant to regulators. Integration of such approaches with high-throughput testing of zebrafish embryos, leading to the discovery of adverse outcome pathways, promises to make a major contribution to ensuring the safety of chemicals in the environment.

  9. A Pipeline for High-Throughput Concentration Response Modeling of Gene Expression for Toxicogenomics

    PubMed Central

    House, John S.; Grimm, Fabian A.; Jima, Dereje D.; Zhou, Yi-Hui; Rusyn, Ivan; Wright, Fred A.

    2017-01-01

    Cell-based assays are an attractive option to measure gene expression response to exposure, but the cost of whole-transcriptome RNA sequencing has been a barrier to the use of gene expression profiling for in vitro toxicity screening. In addition, standard RNA sequencing adds variability due to variable transcript length and amplification. Targeted probe-sequencing technologies such as TempO-Seq, with transcriptomic representation that can vary from hundreds of genes to the entire transcriptome, may reduce some components of variation. Analyses of high-throughput toxicogenomics data require renewed attention to read-calling algorithms and simplified dose–response modeling for datasets with relatively few samples. Using data from induced pluripotent stem cell-derived cardiomyocytes treated with chemicals at varying concentrations, we describe here and make available a pipeline for handling expression data generated by TempO-Seq to align reads, clean and normalize raw count data, identify differentially expressed genes, and calculate transcriptomic concentration–response points of departure. The methods are extensible to other forms of concentration–response gene-expression data, and we discuss the utility of the methods for assessing variation in susceptibility and the diseased cellular state. PMID:29163636
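
    A hedged sketch of the concentration-response step only (alignment, count cleaning and differential expression calling are omitted): fit a Hill curve to normalized expression of one gene and report an EC10-style point of departure; all values are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30])        # uM, hypothetical
        resp = np.array([1.0, 1.02, 1.1, 1.4, 2.0, 2.6, 2.9, 3.0])   # normalized expression

        def hill(c, bottom, top, ec50, n):
            return bottom + (top - bottom) * c**n / (ec50**n + c**n)

        params, _ = curve_fit(hill, conc, resp, p0=[1.0, 3.0, 1.0, 1.0], maxfev=10000)
        bottom, top, ec50, n = params
        ec10 = ec50 * (0.1 / 0.9) ** (1.0 / n)          # 10% of the maximal response change
        print(round(float(ec10), 3))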

  10. Mechanism-based risk assessment strategy for drug-induced cholestasis using the transcriptional benchmark dose derived by toxicogenomics.

    PubMed

    Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi

    2017-01-01

    Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. In order to develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats that had been orally administered with 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principal component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; these groups of genes were mainly involved in inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no observed adverse effect level. These results indicate that our novel strategy for drug safety evaluation using mechanism-based classification and tBMDL would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.
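
    A hedged sketch of deriving a transcriptional benchmark dose (tBMD) and a bootstrap lower bound (tBMDL) for a single gene with a deliberately simple linear trend model; the data are synthetic and this is not the authors' procedure.

        import numpy as np

        rng = np.random.default_rng(3)
        dose = np.repeat([0, 10, 30, 100], 5)                     # mg/kg/day, hypothetical
        expr = 5.0 + 0.01 * dose + rng.normal(0, 0.2, dose.size)  # log2 expression

        def bmd(dose, expr, bmr_in_sd=1.0):
            control_sd = expr[dose == 0].std(ddof=1)
            slope, _ = np.polyfit(dose, expr, 1)
            return bmr_in_sd * control_sd / abs(slope)            # dose giving a 1-SD shift

        def stratified_resample(dose, expr, rng):
            # Resample animals within each dose group to preserve the design.
            idx = np.concatenate([rng.choice(np.where(dose == d)[0],
                                             size=(dose == d).sum(), replace=True)
                                  for d in np.unique(dose)])
            return dose[idx], expr[idx]

        boots = [bmd(*stratified_resample(dose, expr, rng)) for _ in range(500)]
        print(round(bmd(dose, expr), 1), round(float(np.percentile(boots, 5)), 1))   # tBMD, tBMDL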

  11. Web services-based text-mining demonstrates broad impacts for interoperability and process simplification.

    PubMed

    Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J

    2014-01-01

    The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/ © The Author(s) 2014. Published by Oxford University Press.
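
    A hedged illustration of the Web-service pattern described above: a BioC-style XML document is POSTed over HTTP to a remote NER endpoint, which returns annotated BioC. The URL, document ID, and text below are placeholders rather than CTD's or any participant's actual service, and the payload is a minimal BioC-like example rather than a schema-complete document.

```python
# Placeholder illustration of calling a remote, BioC-compliant NER Web service.
# The endpoint URL is hypothetical; it only shows the HTTP/BioC interaction pattern.
import requests

BIOC_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<collection>
  <source>example</source>
  <document>
    <id>PMC0000000</id>
    <passage>
      <offset>0</offset>
      <text>Acetaminophen overdose causes hepatotoxicity via CYP2E1.</text>
    </passage>
  </document>
</collection>"""

resp = requests.post(
    "https://example.org/ner/chemicals",        # placeholder REST endpoint
    data=BIOC_DOC.encode("utf-8"),
    headers={"Content-Type": "application/xml"},
    timeout=60,
)
resp.raise_for_status()
print(resp.text)   # annotated BioC XML with recognized chemical mentions
```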

  12. Web services-based text-mining demonstrates broad impacts for interoperability and process simplification

    PubMed Central

    Wiegers, Thomas C.; Davis, Allan Peter; Mattingly, Carolyn J.

    2014-01-01

    The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/ PMID:24919658

  13. Toxicological responses of environmental mixtures: Environmental metal mixtures display synergistic induction of metal-responsive and oxidative stress genes in placental cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adebambo, Oluwadamilare A.; Ray, Paul D.; Shea, Damian

    Exposure to elevated levels of the toxic metals inorganic arsenic (iAs) and cadmium (Cd) represents a major global health problem. These metals often occur as mixtures in the environment, creating the potential for interactive or synergistic biological effects different from those observed in single exposure conditions. In the present study, environmental mixtures collected from two waste sites in China and comparable mixtures prepared in the laboratory were tested for toxicogenomic response in placental JEG-3 cells. These cells serve as a model for evaluating cellular responses to exposures during pregnancy. One of the mixtures was predominated by iAs and one by Cd. Six gene biomarkers were measured in dose and time-course experiments in order to evaluate the effects of the metal mixtures: heme oxygenase 1 (HO-1) and the metallothionein isoforms (MT1A, MT1F and MT1G), previously shown to be preferentially induced by exposure to either iAs or Cd, and the metal transporter genes aquaporin-9 (AQP9) and ATPase, Cu²⁺ transporting, beta polypeptide (ATP7B). There was a significant increase in the mRNA expression levels of ATP7B, HO-1, MT1A, MT1F, and MT1G in mixture-treated cells compared to the iAs or Cd only-treated cells. Notably, the genomic responses were observed at concentrations significantly lower than levels found at the environmental collection sites. These data demonstrate that metal mixtures increase the expression of gene biomarkers in placental JEG-3 cells in a synergistic manner. Taken together, the data suggest that toxic metals that co-occur may induce detrimental health effects that are currently underestimated when analyzed as single metals. Highlights: • Toxicogenomic responses of environmental metal mixtures assessed. • Induction of ATP7B, HO-1, MT1A, MT1F and MT1G by metal mixtures observed in placental cells. • Higher gene induction in response to metal mixtures versus single metal treatments.
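
    As a rough illustration of how such qPCR biomarker data can be compared across single and mixture treatments (not the authors' analysis), the sketch below computes 2^-ΔΔCt fold changes for a hypothetical HO-1 measurement and checks the mixture response against a simple additive expectation; all Ct values are invented.

```python
# Rough sketch: qPCR fold changes via the 2^-ddCt method and a simple
# additivity check for a mixture versus the single metals (invented Ct values).
def fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    ddct = (ct_gene_treated - ct_ref_treated) - (ct_gene_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for HO-1 normalized to a reference gene
fc_as  = fold_change(24.0, 18.0, 26.5, 18.0)   # iAs alone
fc_cd  = fold_change(23.5, 18.0, 26.5, 18.0)   # Cd alone
fc_mix = fold_change(21.0, 18.0, 26.5, 18.0)   # iAs + Cd mixture

additive_expectation = fc_as + fc_cd - 1.0     # one simple reference model for induction
print(f"iAs: {fc_as:.1f}x, Cd: {fc_cd:.1f}x, mixture: {fc_mix:.1f}x")
print("greater than additive:", fc_mix > additive_expectation)
```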

  14. Bisphenol A-associated epigenomic changes in prepubescent girls: a cross-sectional study in Gharbiah, Egypt

    PubMed Central

    2013-01-01

    Background There is now compelling evidence that epigenetic modifications link adult disease susceptibility to environmental exposures during specific life stages, including pre-pubertal development. Animal studies indicate that bisphenol A (BPA), the monomer used in epoxy resins and polycarbonate plastics, may impact health through epigenetic mechanisms, and epidemiological data associate BPA levels with metabolic disorders, behavior changes, and reproductive effects. Thus, we conducted an environmental epidemiology study of BPA exposure and CpG methylation in pre-adolescent girls from Gharbiah, Egypt hypothesizing that methylation profiles exhibit exposure-dependent trends. Methods Urinary concentrations of total (free plus conjugated) species of BPA in spot samples were quantified for 60 girls aged 10 to 13. Genome-wide CpG methylation was concurrently measured in bisulfite-converted saliva DNA using the Infinium HumanMethylation27 BeadChip (N = 46). CpG sites from four candidate genes were validated via quantitative bisulfite pyrosequencing. Results CpG methylation varied widely among girls, and higher urinary BPA concentrations were generally associated with less genomic methylation. Based on pathway analyses, genes exhibiting reduced methylation with increasing urinary BPA were involved in immune function, transport activity, metabolism, and caspase activity. In particular, hypomethylation of CpG targets on chromosome X was associated with higher urinary BPA. Using the Comparative Toxicogenomics Database, we identified a number of candidate genes in our sample that previously have been associated with BPA-related expression change. Conclusions These data indicate that BPA may affect human health through specific epigenomic modification of genes in relevant pathways. Thus, epigenetic epidemiology holds promise for the identification of biomarkers from previous exposures and the development of epigenetic-based diagnostic strategies. PMID:23590724

  15. Predicting Rat and Human Pregnane X Receptor Activators Using Bayesian Classification Models.

    PubMed

    AbdulHameed, Mohamed Diwan M; Ippolito, Danielle L; Wallqvist, Anders

    2016-10-17

    The pregnane X receptor (PXR) is a ligand-activated transcription factor that acts as a master regulator of metabolizing enzymes and transporters. To avoid adverse drug-drug interactions and diseases such as steatosis and cancers associated with PXR activation, identifying drugs and chemicals that activate PXR is of crucial importance. In this work, we developed ligand-based predictive computational models for both rat and human PXR activation, which allowed us to identify potentially harmful chemicals and evaluate species-specific effects of a given compound. We utilized a large publicly available data set of nearly 2000 compounds screened in cell-based reporter gene assays to develop Bayesian quantitative structure-activity relationship models using physicochemical properties and structural descriptors. Our analysis showed that PXR activators tend to be hydrophobic and significantly different from nonactivators in terms of their physicochemical properties such as molecular weight, logP, number of rings, and solubility. Our Bayesian models, evaluated by using 5-fold cross-validation, displayed a sensitivity of 75% (76%), specificity of 76% (75%), and accuracy of 89% (89%) for human (rat) PXR activation. We identified structural features shared by rat and human PXR activators as well as those unique to each species. We compared rat in vitro PXR activation data to in vivo data by using DrugMatrix, a large toxicogenomics database with gene expression data obtained from rats after exposure to diverse chemicals. Although in vivo gene expression data pointed to cross-talk between nuclear receptor activators that is captured only by in vivo assays, overall we found broad agreement between in vitro and in vivo PXR activation. Thus, the models developed here serve primarily as efficient initial high-throughput in silico screens of in vitro activity.
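
    A minimal sketch of the modeling idea, not the published models: a naive Bayes ("Bayesian") classifier trained on binary structural fingerprints and evaluated with 5-fold cross-validation. The fingerprint matrix and activity labels are randomly generated stand-ins for real PXR screening data.

```python
# Sketch of a Bayesian QSAR-style classifier with 5-fold cross-validation.
# Features and labels are synthetic placeholders for descriptor/fingerprint data.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
n_compounds = 500
fingerprints = rng.integers(0, 2, size=(n_compounds, 256))     # hypothetical structural bits
labels = (fingerprints[:, :10].sum(axis=1) > 5).astype(int)    # fake "PXR activator" label

model = BernoulliNB()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, fingerprints, labels, cv=cv, scoring="balanced_accuracy")
print("5-fold balanced accuracy:", np.round(scores, 2), "mean =", scores.mean().round(2))
```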

  16. Assessment of the DNA damaging potential of environmental chemicals using a quantitative high-throughput screening approach to measure p53 activation.

    PubMed

    Witt, Kristine L; Hsieh, Jui-Hua; Smith-Roe, Stephanie L; Xia, Menghang; Huang, Ruili; Zhao, Jinghua; Auerbach, Scott S; Hur, Junguk; Tice, Raymond R

    2017-08-01

    Genotoxicity potential is a critical component of any comprehensive toxicological profile. Compounds that induce DNA or chromosomal damage often activate p53, a transcription factor essential to cell cycle regulation. Thus, within the US Tox21 Program, we screened a library of ∼10,000 (∼8,300 unique) environmental compounds and drugs for activation of the p53-signaling pathway using a quantitative high-throughput screening assay employing HCT-116 cells (p53+/+) containing a stably integrated β-lactamase reporter gene under control of the p53 response element (p53RE). Cells were exposed (-S9) for 16 hr at 15 concentrations (generally 1.2 nM to 92 μM) three times, independently. Excluding compounds that failed analytical chemistry analysis or were suspected of inducing assay interference, 365 (4.7%) of 7,849 unique compounds were concluded to activate p53. As part of an in-depth characterization of our results, we first compared them with results from traditional in vitro genotoxicity assays (bacterial mutation, chromosomal aberration); ∼15% of known, direct-acting genotoxicants in our library activated the p53RE. Mining the Comparative Toxicogenomics Database revealed that these p53 actives were significantly associated with increased expression of p53 downstream genes involved in DNA damage responses. Furthermore, 53 chemical substructures associated with genotoxicity were enriched in certain classes of p53 actives, for example, anthracyclines (antineoplastics) and vinca alkaloids (tubulin disruptors). Interestingly, the tubulin disruptors manifested unusual nonmonotonic concentration response curves suggesting activity through a unique p53 regulatory mechanism. Through the analysis of our results, we aim to define a role for this assay as one component of a comprehensive toxicological characterization of large compound libraries. Environ. Mol. Mutagen. 58:494-507, 2017. © 2017 Wiley Periodicals, Inc.

  17. Functional toxicogenomic assessment of triclosan in human ...

    EPA Pesticide Factsheets

    Thousands of chemicals for which limited toxicological data are available are used and then detected in humans and the environment. Rapid and cost-effective approaches for assessing the toxicological properties of chemicals are needed. We used CRISPR-Cas9 functional genomic screening to identify the potential molecular mechanism of the widely used antimicrobial triclosan (TCS) in HepG2 cells. Resistant genes (whose knockout gives potential resistance) at the IC50 (50% inhibition concentration of cell viability) were significantly enriched in the adherens junction pathway, MAPK signaling pathway and PPAR signaling pathway, suggesting a potential molecular mechanism of TCS-induced cytotoxicity. Evaluation of the top-ranked resistant genes, FTO (encoding an mRNA demethylase) and MAP2K3 (a MAP kinase kinase family gene), revealed that their loss conferred resistance to TCS. In contrast, sensitive genes (whose knockout enhances potential sensitivity) at IC10 and IC20 were specifically enriched in pathways involved with immune responses, which was concordant with the transcriptomic profiling of TCS at concentrations

  18. Analysis of baseline gene expression levels from ...

    EPA Pesticide Factsheets

    The use of gene expression profiling to predict chemical mode of action would be enhanced by better characterization of variance due to individual, environmental, and technical factors. Meta-analysis of microarray data from untreated or vehicle-treated animals within the control arm of toxicogenomics studies has yielded useful information on baseline fluctuations in gene expression. A dataset of control animal microarray expression data was assembled by a working group of the Health and Environmental Sciences Institute's Technical Committee on the Application of Genomics in Mechanism Based Risk Assessment in order to provide a public resource for assessments of variability in baseline gene expression. Data from over 500 Affymetrix microarrays from control rat liver and kidney were collected from 16 different institutions. Thirty-five biological and technical factors were obtained for each animal, describing a wide range of study characteristics, and a subset were evaluated in detail for their contribution to total variability using multivariate statistical and graphical techniques. The study factors that emerged as key sources of variability included gender, organ section, strain, and fasting state. These and other study factors were identified as key descriptors that should be included in the minimal information about a toxicogenomics study needed for interpretation of results by an independent source. Genes that are the most and least variable, gender-selectiv
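
    The kind of factor analysis described above can be sketched, on toy data, as a linear model that partitions baseline expression variance across study factors; the factors, effect sizes, and data frame below are synthetic and do not reproduce the working group's analysis.

```python
# Illustrative sketch: for a single gene, partition variance in baseline
# expression across study factors with a linear model / ANOVA (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], n),
    "strain": rng.choice(["SD", "Wistar", "F344"], n),
    "fasted": rng.choice(["yes", "no"], n),
})
# Simulated expression with sex and fasting effects plus noise
df["expr"] = (
    8.0
    + 0.6 * (df["sex"] == "M")
    + 0.3 * (df["fasted"] == "yes")
    + rng.normal(scale=0.4, size=n)
)

fit = smf.ols("expr ~ C(sex) + C(strain) + C(fasted)", data=df).fit()
print(anova_lm(fit, typ=2))   # sums of squares indicate each factor's contribution
```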

  19. Toxicogenomic analysis identifies the apoptotic pathway as the main cause of hepatotoxicity induced by tributyltin.

    PubMed

    Zhou, Mi; Feng, Mei; Fu, Ling-Ling; Ji, Lin-Dan; Zhao, Jin-Shun; Xu, Jin

    2016-11-01

    Tributyltin (TBT) is one of the most widely used organotin biocides, which has severe endocrine-disrupting effects on marine species and mammals. Given that TBT accumulates at higher levels in the liver than in any other organ, and it acts mainly as a hepatotoxic agent, it is important to clearly delineate the hepatotoxicity of TBT. However, most of the available studies on TBT have focused on observations at the cellular level, while studies at the level of genes and proteins are limited; therefore, the molecular mechanisms of TBT-induced hepatotoxicity remain largely unclear. In the present study, we applied a toxicogenomic approach to investigate the effects of TBT on gene expression in the human normal liver cell line HL7702. Gene expression profiling identified the apoptotic pathway as the major cause of hepatotoxicity induced by TBT. Flow cytometry assays confirmed that medium- and high-dose TBT treatments significantly increased the number of apoptotic cells, and more cells underwent late apoptosis in the high-dose TBT group. The genes encoding heat shock proteins (HSPs), kinases and tumor necrosis factor receptors mediated TBT-induced apoptosis. These findings revealed novel molecular mechanisms of TBT-induced hepatotoxicity, and the current microarray data may also provide clues for future studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Toxicogenomics and Cancer Susceptibility: Advances with Next-Generation Sequencing

    PubMed Central

    Ning, Baitang; Su, Zhenqiang; Mei, Nan; Hong, Huixiao; Deng, Helen; Shi, Leming; Fuscoe, James C.; Tolleson, William H.

    2017-01-01

    The aim of this review is to comprehensively summarize the recent achievements in the field of toxicogenomics and cancer research regarding genetic-environmental interactions in carcinogenesis and detection of genetic aberrations in cancer genomes by next-generation sequencing technology. Cancer is primarily a genetic disease in which genetic factors and environmental stimuli interact to cause genetic and epigenetic aberrations in human cells. Mutations in the germline act as either high-penetrance alleles that strongly increase the risk of cancer development, or as low-penetrance alleles that mildly change an individual’s susceptibility to cancer. Somatic mutations, resulting from either DNA damage induced by exposure to environmental mutagens or from spontaneous errors in DNA replication or repair are involved in the development or progression of the cancer. Induced or spontaneous changes in the epigenome may also drive carcinogenesis. Advances in next-generation sequencing technology provide us opportunities to accurately, economically, and rapidly identify genetic variants, somatic mutations, gene expression profiles, and epigenetic alterations with single-base resolution. Whole genome sequencing, whole exome sequencing, and RNA sequencing of paired cancer and adjacent normal tissue present a comprehensive picture of the cancer genome. These new findings should benefit public health by providing insights in understanding cancer biology, and in improving cancer diagnosis and therapy. PMID:24875441

  1. Alternatives to animal testing: research, trends, validation, regulatory acceptance.

    PubMed

    Huggins, Jane

    2003-01-01

    Current trends and issues in the development of alternatives to the use of animals in biomedical experimentation are discussed in this position paper. Eight topics are considered and include refinement of acute toxicity assays; eye corrosion/irritation alternatives; skin corrosion/irritation alternatives; contact sensitization alternatives; developmental/reproductive testing alternatives; genetic engineering (transgenic) assays; toxicogenomics; and validation of alternative methods. The discussion of refinement of acute toxicity assays is focused primarily on developments with regard to reduction of the number of animals used in the LD50 assay. However, the substitution of humane endpoints such as clinical signs of toxicity for lethality in these assays is also evaluated. Alternative assays for eye corrosion/irritation as well as those for skin corrosion/irritation are described with particular attention paid to the outcomes, both successful and unsuccessful, of several validation efforts. Alternative assays for contact sensitization and developmental/reproductive toxicity are presented as examples of methods designed for the examination of interactions between toxins and somewhat more complex physiological systems. Moreover, genetic engineering and toxicogenomics are discussed with an eye toward the future of biological experimentation in general. The implications of gene manipulation for research animals, specifically, are also examined. Finally, validation methods are investigated as to their effectiveness, or lack thereof, and suggestions for their standardization, improvement, and implementation are reviewed.

  2. Application of dynamic topic models to toxicogenomics data.

    PubMed

    Lee, Mikyung; Liu, Zhichao; Huang, Ruili; Tong, Weida

    2016-10-06

    All biological processes are inherently dynamic. Biological systems evolve transiently or sustainably according to sequential time points after perturbation by environmental insults, drugs and chemicals. Investigating the temporal behavior of molecular events has been an important subject for understanding the underlying mechanisms governing the biological system in response to perturbations such as drug treatment. The intrinsic complexity of time series data requires appropriate computational algorithms for data interpretation. In this study, we propose, for the first time, the application of dynamic topic models (DTM) for analyzing time-series gene expression data. A large time-series toxicogenomics dataset was studied. It contains over 3144 microarrays of gene expression data corresponding to rat livers treated with 131 compounds (most are drugs) at two doses (control and high dose) in a repeated schedule containing four separate time points (4-, 8-, 15- and 29-day). We analyzed, with DTM, the topics (consisting of a set of genes) and their biological interpretations over these four time points. We identified hidden patterns embedded in these time-series gene expression profiles. From the topic distribution for each compound-time condition, a number of drugs were successfully clustered by their shared mode-of-action such as PPARα agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information such as functional analysis of the pathways and therapeutic uses of the drugs. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories when compared to traditional clustering algorithms. We demonstrated that DTM, a text mining technique, can be a powerful computational approach for clustering time-series gene expression profiles with the probabilistic representation of their dynamic features along sequential time frames. The method offers an alternative way for uncovering hidden patterns embedded in time series gene expression profiles to gain enhanced understanding of dynamic behavior of gene regulation in the biological system.
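
    A conceptual sketch of the approach, assuming gensim's dynamic topic model implementation and toy data: each compound-time sample is treated as a "document" whose "words" are up-regulated genes, and a model is fitted across four time slices. The gene symbols, counts, and two-topic setting are illustrative and do not reproduce the study's preprocessing.

```python
# Conceptual sketch only: a dynamic topic model over toy "gene documents".
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

# Toy documents: lists of gene symbols, with repetition as a crude stand-in
# for discretized expression level
samples = [
    ["Cyp1a1", "Cyp1a1", "Gsta2"], ["Hmox1", "Gsta2"],          # day 4
    ["Cyp1a1", "Gsta2", "Gsta2"], ["Hmox1", "Hmox1", "Nqo1"],   # day 8
    ["Cyp1a1", "Nqo1"], ["Hmox1", "Nqo1", "Nqo1"],              # day 15
    ["Nqo1", "Gsta2"], ["Hmox1", "Cyp1a1", "Nqo1"],             # day 29
]
dictionary = Dictionary(samples)
corpus = [dictionary.doc2bow(doc) for doc in samples]

# Two documents per time slice, four slices (4-, 8-, 15- and 29-day)
dtm = LdaSeqModel(corpus=corpus, id2word=dictionary, time_slice=[2, 2, 2, 2], num_topics=2)
print(dtm.print_topics(time=0))   # topic composition at the first time point
```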

  3. Toxicogenomic effects common to triazole antifungals and conserved between rats and humans.

    PubMed

    Goetz, Amber K; Dix, David J

    2009-07-01

    The triazole antifungals myclobutanil, propiconazole and triadimefon cause varying degrees of hepatic toxicity and disrupt steroid hormone homeostasis in rodent in vivo models. To identify biological pathways consistently modulated across multiple timepoints and various study designs, gene expression profiling was conducted on rat livers from three separate studies with triazole treatment groups ranging from 6 h after a single oral gavage exposure, to prenatal to adult exposures via feed. To explore conservation of responses across species, gene expression from the rat liver studies was compared to in vitro data from rat and human primary hepatocytes exposed to the triazoles. Toxicogenomic data on triazoles from 33 different treatment groups and 135 samples (microarrays) identified thousands of probe sets and dozens of pathways differentially expressed across time, dose, and species; many of these were common to all three triazoles, or conserved between rodents and humans. Common and conserved pathways included androgen and estrogen metabolism, xenobiotic metabolism signaling through CAR and PXR, and CYP mediated metabolism. Differentially expressed genes included the Phase I xenobiotic, fatty acid, sterol and steroid metabolism genes Cyp2b2 and CYP2B6, Cyp3a1 and CYP3A4, and Cyp4a22 and CYP4A11; Phase II conjugation enzyme genes Ugt1a1 and UGT1A1; and Phase III ABC transporter genes Abcb1 and ABCB1. Gene expression changes caused by all three triazoles in liver and hepatocytes were concentrated in biological pathways regulating lipid, sterol and steroid homeostasis, identifying a potential common mode of action conserved between rodents and humans. Modulation of hepatic sterol and steroid metabolism is a plausible mode of action for changes in serum testosterone and adverse reproductive outcomes observed in rat studies, and may be relevant to human risk assessment.

  4. Genome-wide Gene Expression Profiling of Acute Metal Exposures in Male Zebrafish

    DTIC Science & Technology

    2014-10-23

    Data in Brief: Genome-wide gene expression profiling of acute metal exposures in male zebrafish. Baer, Christine E.; Ippolito, Danielle L.; Naissan... Keywords: zebrafish, whole organism, nickel, chromium, cobalt, toxicogenomics. To capture global responses to metal poisoning and mechanistic insights into metal toxicity, gene expression changes were evaluated in whole adult male zebrafish following acute 24 h high-dose exposure to three metals with known human

  5. Bio Warfare and Terrorism: Toxins and Other Mid-Spectrum Agents

    DTIC Science & Technology

    2005-01-01

    Keywords: biotechnology, toxicogenomics, toxin, tetrodotoxin, and others. ... genomics and proteomics may also help to open the door to ... interferon gamma, interleukin-6, and tumor necrosis factor ... produced by the mold Aspergillus flavus ... no toxoid or antitoxin is available ... the new sciences of genomics and proteomics to alter genetic code and to affect the expression of

  6. Biomarkers of Exposure to Toxic Substances. Volume 2: Genomics: Unique Patterns of Differential Gene Expression and Pathway Perturbation Resulting from Exposure to Nephrotoxins with Regional Specific Toxicity

    DTIC Science & Technology

    2009-05-01

    ... of chemical agents. Changes in gene expression are among the most sensitive indicators of chemical exposure. Toxicogenomics, which is based on DNA ... assessing gene expression changes and subsequently the mechanism of renal injury following exposure to nephrotoxins selected for their regional-specific toxicity ...

  7. Way forward in case of a false positive in vitro genotoxicity result for a cosmetic substance?

    PubMed

    Doktorova, Tatyana Y; Ates, Gamze; Vinken, Mathieu; Vanhaecke, Tamara; Rogiers, Vera

    2014-02-01

    The currently used regulatory in vitro mutagenicity/genotoxicity test battery has a high sensitivity for detecting genotoxicants, but it suffers from a large number of irrelevant positive results (i.e. low specificity), thereby imposing the need for additional follow-up by in vitro and/or in vivo genotoxicity tests. This could have a major impact on the cosmetic industry in Europe, given the animal testing and marketing bans imposed on cosmetics and their ingredients. Flagged but safe substances could therefore be lost. Using the example of triclosan, a cosmetic preservative, we describe here the potential applicability of a human toxicogenomics-based in vitro assay as a mechanistically based follow-up test for positive in vitro genotoxicity results. Triclosan shows a positive in vitro chromosomal aberration test, but is negative during in vivo follow-up tests. Toxicogenomics analysis unequivocally shows that triclosan is identified as a compound acting through non-DNA-reactive mechanisms. This proof-of-principle study illustrates the potential of genome-wide transcriptomics data in combination with in vitro experimentation as a possible weight-of-evidence follow-up approach for de-risking a positive outcome in a standard mutagenicity/genotoxicity battery. In this way, a substantial number of cosmetic compounds wrongly identified as genotoxicants could be retained for future use. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Response of human renal tubular cells to cyclosporine and sirolimus: A toxicogenomic study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pallet, Nicolas; Rabant, Marion; Xu-Dubois, Yi-Chun

    The molecular mechanisms involved in the potentially nephrotoxic response of tubular cells to immunosuppressive drugs remain poorly understood. Transcriptional profiles of human proximal tubular cells exposed to cyclosporine A (CsA), sirolimus (SRL) or their combination were established using oligonucleotide microarrays. Hierarchical clustering of genes implicated in fibrotic processes showed a clear distinction between expression profiles with CsA and CsA + SRL treatments on the one hand and SRL treatment on the other. Functional analysis found that CsA and CsA + SRL treatments preferentially alter biological processes located at the cell membrane, such as ion transport or signal transduction, whereas SRL modifies biological processes within the nucleus and related to transcriptional activity. Genome-wide expression analysis suggested that CsA may induce an endoplasmic reticulum (ER) stress in tubular cells in vitro. Moreover, we found that CsA exposure in vivo is associated with the upregulation of the ER stress marker BIP in kidney transplant biopsies. In conclusion, this toxicogenomic study highlights the molecular interaction networks that may contribute to the tubular response to CsA and SRL. These results may also offer a new working hypothesis for future research in the field of CsA nephrotoxicity. Further studies are needed to evaluate whether ER stress detection in tubular cells in human biopsies can predict CsA nephrotoxicity.

  9. Functional Toxicogenomic Assessment of Triclosan in Human HepG2 Cells Using Genome-Wide CRISPR-Cas9 Screening.

    PubMed

    Xia, Pu; Zhang, Xiaowei; Xie, Yuwei; Guan, Miao; Villeneuve, Daniel L; Yu, Hongxia

    2016-10-04

    There are thousands of chemicals used by humans and detected in the environment for which limited or no toxicological data are available. Rapid and cost-effective approaches for assessing the toxicological properties of chemicals are needed. We used CRISPR-Cas9 functional genomic screening to identify the potential molecular mechanism of a widely used antimicrobial triclosan (TCS) in HepG2 cells. Resistant genes at IC50 (the concentration causing a 50% reduction in cell viability) were significantly enriched in the adherens junction pathway, MAPK signaling pathway, and PPAR signaling pathway, suggesting a potential role in the molecular mechanism of TCS-induced cytotoxicity. Evaluation of the top-ranked resistant genes, FTO (encoding an mRNA demethylase) and MAP2K3 (a MAP kinase kinase family gene), revealed that their loss conferred resistance to TCS. In contrast, sensitive genes at IC10 and IC20 were specifically enriched in pathways involved with immune responses, which was concordant with transcriptomic profiling of TCS at concentrations of

  10. Next-generation text-mining mediated generation of chemical response-specific gene sets for interpretation of gene expression data.

    PubMed

    Hettne, Kristina M; Boorsma, André; van Dartel, Dorien A M; Goeman, Jelle J; de Jong, Esther; Piersma, Aldert H; Stierum, Rob H; Kleinjans, Jos C; Kors, Jan A

    2013-01-29

    Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set analysis (GSA) methods for chemical treatment identification, for pharmacological mechanism elucidation, and for comparing compound toxicity profiles. We created 30,211 chemical response-specific gene sets for human and mouse by next-gen TM, and derived 1,189 (human) and 588 (mouse) gene sets from the Comparative Toxicogenomics Database (CTD). We tested for significant differential expression (SDE) (false discovery rate -corrected p-values < 0.05) of the next-gen TM-derived gene sets and the CTD-derived gene sets in gene expression (GE) data sets of five chemicals (from experimental models). We tested for SDE of gene sets for six fibrates in a peroxisome proliferator-activated receptor alpha (PPARA) knock-out GE dataset and compared to results from the Connectivity Map. We tested for SDE of 319 next-gen TM-derived gene sets for environmental toxicants in three GE data sets of triazoles, and tested for SDE of 442 gene sets associated with embryonic structures. We compared the gene sets to triazole effects seen in the Whole Embryo Culture (WEC), and used principal component analysis (PCA) to discriminate triazoles from other chemicals. Next-gen TM-derived gene sets matching the chemical treatment were significantly altered in three GE data sets, and the corresponding CTD-derived gene sets were significantly altered in five GE data sets. Six next-gen TM-derived and four CTD-derived fibrate gene sets were significantly altered in the PPARA knock-out GE dataset. None of the fibrate signatures in cMap scored significant against the PPARA GE signature. 33 environmental toxicant gene sets were significantly altered in the triazole GE data sets. 21 of these toxicants had a similar toxicity pattern as the triazoles. We confirmed embryotoxic effects, and discriminated triazoles from other chemicals. Gene set analysis with next-gen TM-derived chemical response-specific gene sets is a scalable method for identifying similarities in gene responses to other chemicals, from which one may infer potential mode of action and/or toxic effect.
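
    As a simplified stand-in for the gene set analysis described above (the study's own GSA method is not reproduced here), the sketch below tests each chemical response-specific gene set for over-representation among differentially expressed genes with Fisher's exact test and applies false discovery rate correction; all gene identifiers are invented.

```python
# Simplified over-representation test of chemical response-specific gene sets
# against a list of differentially expressed genes (invented identifiers).
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

universe = {f"GENE{i}" for i in range(1, 2001)}                  # all measured genes
degs = {f"GENE{i}" for i in range(1, 151)}                       # differentially expressed genes
gene_sets = {                                                    # e.g. CTD- or TM-derived sets
    "chemicalA_response": {f"GENE{i}" for i in range(100, 200)},
    "chemicalB_response": {f"GENE{i}" for i in range(900, 1000)},
}

pvals = []
for name, members in gene_sets.items():
    members = members & universe
    a = len(degs & members)                     # DEG and in set
    b = len(members) - a                        # in set, not DEG
    c = len(degs) - a                           # DEG, not in set
    d = len(universe) - a - b - c               # neither
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    pvals.append((name, p))

rejected, p_adj, _, _ = multipletests([p for _, p in pvals], method="fdr_bh")
for (name, _), q, sig in zip(pvals, p_adj, rejected):
    print(f"{name}: FDR-adjusted p = {q:.3g}, significant = {sig}")
```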

  11. Next-generation text-mining mediated generation of chemical response-specific gene sets for interpretation of gene expression data

    PubMed Central

    2013-01-01

    Background Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set analysis (GSA) methods for chemical treatment identification, for pharmacological mechanism elucidation, and for comparing compound toxicity profiles. Methods We created 30,211 chemical response-specific gene sets for human and mouse by next-gen TM, and derived 1,189 (human) and 588 (mouse) gene sets from the Comparative Toxicogenomics Database (CTD). We tested for significant differential expression (SDE) (false discovery rate -corrected p-values < 0.05) of the next-gen TM-derived gene sets and the CTD-derived gene sets in gene expression (GE) data sets of five chemicals (from experimental models). We tested for SDE of gene sets for six fibrates in a peroxisome proliferator-activated receptor alpha (PPARA) knock-out GE dataset and compared to results from the Connectivity Map. We tested for SDE of 319 next-gen TM-derived gene sets for environmental toxicants in three GE data sets of triazoles, and tested for SDE of 442 gene sets associated with embryonic structures. We compared the gene sets to triazole effects seen in the Whole Embryo Culture (WEC), and used principal component analysis (PCA) to discriminate triazoles from other chemicals. Results Next-gen TM-derived gene sets matching the chemical treatment were significantly altered in three GE data sets, and the corresponding CTD-derived gene sets were significantly altered in five GE data sets. Six next-gen TM-derived and four CTD-derived fibrate gene sets were significantly altered in the PPARA knock-out GE dataset. None of the fibrate signatures in cMap scored significant against the PPARA GE signature. 33 environmental toxicant gene sets were significantly altered in the triazole GE data sets. 21 of these toxicants had a similar toxicity pattern as the triazoles. We confirmed embryotoxic effects, and discriminated triazoles from other chemicals. Conclusions Gene set analysis with next-gen TM-derived chemical response-specific gene sets is a scalable method for identifying similarities in gene responses to other chemicals, from which one may infer potential mode of action and/or toxic effect. PMID:23356878

  12. [The prospect of application of toxicogenetics/pharmacogenetics theory and methods in forensic practice].

    PubMed

    Shen, Dan-na; Yi, Xu-fu; Chen, Xiao-gang; Xu, Tong-li; Cui, Li-juan

    2007-10-01

    Individual response to drugs, toxicants, environmental chemicals and allergens varies with genotype. Some respond well to these substances without significant consequences, while others may respond strongly with severe consequences and even death. Toxicogenetics and toxicogenomics as well as pharmacogenetics explain the genetic basis for the variations of individual response to toxicants by sequencing the human genome and large-scale identification of genome polymorphism. The new disciplines will provide a new route for forensic specialists to determine the cause of death.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulsen, Sarah S.; Saber, Anne T.

    Multi-walled carbon nanotubes (MWCNTs) are an inhomogeneous group of nanomaterials that vary in lengths, shapes and types of metal contamination, which makes hazard evaluation difficult. Here we present a toxicogenomic analysis of female C57BL/6 mouse lungs following a single intratracheal instillation of 0, 18, 54 or 162 μg/mouse of a small, curled MWCNT (CNT-Small, 0.8 ± 0.1 μm in length) or a large, thick MWCNT (CNT-Large, 4 ± 0.4 μm in length). The two MWCNTs were extensively characterized by SEM and TEM imaging, thermogravimetric analysis, and Brunauer–Emmett–Teller surface area analysis. Lung tissues were harvested 24 h, 3 days and 28 days post-exposure. DNA microarrays were used to analyze gene expression, in parallel with analysis of bronchoalveolar lavage fluid, lung histology, DNA damage (comet assay) and the presence of reactive oxygen species (dichlorodihydrofluorescein assay), to profile and characterize related pulmonary endpoints. Overall changes in global transcription following exposure to CNT-Small or CNT-Large were similar. Both MWCNTs elicited strong acute phase and inflammatory responses that peaked at day 3, persisted up to 28 days, and were characterized by increased cellular influx in bronchoalveolar lavage fluid, interstitial pneumonia and gene expression changes. However, CNT-Large elicited an earlier onset of inflammation and DNA damage, and induced more fibrosis and a unique fibrotic gene expression signature at day 28, compared to CNT-Small. The results indicate that the extent of change at the molecular level during early response phases following an acute exposure is greater in mice exposed to CNT-Large, which may eventually lead to the different responses observed at day 28. Highlights: • We evaluate the toxicogenomic response in mice following MWCNT instillation. • Two MWCNTs of different properties were examined and thoroughly characterized. • MWCNT exposure leads to increased pulmonary inflammation and acute phase response. • The thick and straight MWCNT induced transcriptional and histological pulmonary fibrotic changes. • This was not observed following exposure to the thinner and curled MWCNT.

  14. Toxicogenomic assessment of 6-OH-BDE47 induced ...

    EPA Pesticide Factsheets

    Hydroxylated and methoxylated polybrominated diphenyl ethers (OH-/MeO-PBDEs) are analogs of PBDEs with hundreds of possible structures, and many of them can activate the aryl hydrocarbon receptor (AhR); however, in vivo evidence on the toxicity of OH-/MeO-PBDEs is still very limited. 6-OH-BDE47 is a relatively potent AhR activator and a predominant congener of the OH-PBDEs detected in the environment. Here the developmental toxicity of 6-OH-BDE47 in chicken embryos was assessed using a toxicogenomic approach. Fertilized chicken eggs were dosed via in ovo administration of 0.006 to 0.474 nmol 6-OH-BDE47/g egg followed by 18 days of incubation. Significant embryo mortality (LD50 = 0.294 pmol/g egg) and an increased hepatic somatic index (HSI) were caused by 6-OH-BDE47 exposure. The functional enrichment of differentially expressed genes (DEGs) associated with oxidative phosphorylation, generation of precursor metabolites and energy, and the electron transport chain suggests that 6-OH-BDE47 exposure may disrupt embryo development by altering the function of energy production in mitochondria. Moreover, AhR-mediated responses, including up-regulation of CYP1A4, were observed in the livers of embryos exposed to 6-OH-BDE47. Overall, this study confirmed the prediction of embryo lethality by 6-OH-BDE47, consistent with an adverse outcome pathway (AOP) linking AhR activation to embryo lethality. The results provide an example of the application of AOPs in hazard and ecological risk assessment

  15. Human cell toxicogenomic analysis links reactive oxygen species to the toxicity of monohaloacetic acid drinking water disinfection byproducts

    PubMed Central

    Pals, Justin; Attene-Ramos, Matias S.; Xia, Menghang; Wagner, Elizabeth D.; Plewa, Michael J.

    2014-01-01

    Chronic exposure to drinking water disinfection byproducts has been linked to adverse health risks. The monohaloacetic acids (monoHAAs) are generated as byproducts during the disinfection of drinking water and are cytotoxic, genotoxic, mutagenic, and teratogenic. Iodoacetic acid toxicity was mitigated by antioxidants, suggesting the involvement of oxidative stress. Other monoHAAs may share a similar mode of action. Each monoHAA generated a significant concentration-response increase in the expression of a β-lactamase reporter under the control of the Antioxidant Response Element (ARE). The monoHAAs generated oxidative stress with a rank order of IAA > BAA >> CAA; this rank order was observed with other toxicological endpoints. Toxicogenomic analysis was conducted with a non-transformed human intestinal epithelial cell line (FHs 74 Int). Exposure to the monoHAAs altered the transcription levels of multiple oxidative stress responsive genes, indicating that each exposure generated oxidative stress. The transcriptome profiles showed an increase in TXNRD1 and SRXN1, suggesting peroxiredoxin proteins had been oxidized during monoHAA exposures. Three sources of reactive oxygen species were identified, the hypohalous acid generating peroxidase enzymes LPO and MPO, NADPH-dependent oxidase NOX5, and PTGS2 (COX-2) mediated arachidonic acid metabolism. Each monoHAA exposure caused an increase in COX-2 mRNA levels. These data provide a functional association between monoHAA exposure and adverse health outcomes such as oxidative stress, inflammation, and cancer. PMID:24050308

  16. Cord blood gene expression supports that prenatal exposure to perfluoroalkyl substances causes depressed immune functionality in early childhood.

    PubMed

    Pennings, Jeroen L A; Jennen, Danyel G J; Nygaard, Unni C; Namork, Ellen; Haug, Line S; van Loveren, Henk; Granum, Berit

    2016-01-01

    Perfluoroalkyl and polyfluoroalkyl substances (PFAS) are a class of synthetic compounds that have widespread use in consumer and industrial applications. PFAS are considered environmental pollutants that have various toxic properties, including effects on the immune system. Recent human studies indicate that prenatal exposure to PFAS leads to suppressed immune responses in early childhood. In this study, data from the Norwegian BraMat cohort was used to investigate transcriptomics profiles in neonatal cord blood and their association with maternal PFAS exposure, anti-rubella antibody levels at 3 years of age and the number of common cold episodes until 3 years. Genes associated with PFAS exposure showed enrichment for immunological and developmental functions. The analyses identified a toxicogenomics profile of 52 PFAS exposure-associated genes that were in common with genes associated with rubella titers and/or common cold episodes. This gene set contains several immunomodulatory genes (CYTL1, IL27) as well as other immune-associated genes (e.g. EMR4P, SHC4, ADORA2A). In addition, this study identified PPARD as a PFAS toxicogenomics marker. These markers can serve as the basis for further mechanistic or epidemiological studies. This study provides a transcriptomics connection between prenatal PFAS exposure and impaired immune function in early childhood and supports current views on PPAR- and NF-κB-mediated modes of action. The findings add to the available evidence that PFAS exposure is immunotoxic in humans and support regulatory policies to phase out these substances.

  17. Contribution of new technologies to characterization and prediction of adverse effects.

    PubMed

    Rouquié, David; Heneweer, Marjoke; Botham, Jane; Ketelslegers, Hans; Markell, Lauren; Pfister, Thomas; Steiling, Winfried; Strauss, Volker; Hennes, Christa

    2015-02-01

    Identification of the potential hazards of chemicals has traditionally relied on studies in laboratory animals where changes in clinical pathology and histopathology compared to untreated controls defined an adverse effect. In the past decades, increased consistency in the definition of adversity with chemically-induced effects in laboratory animals, as well as in the assessment of human relevance has been reached. More recently, a paradigm shift in toxicity testing has been proposed, mainly driven by concerns over animal welfare but also thanks to the development of new methods. Currently, in vitro approaches, toxicogenomic technologies and computational tools, are available to provide mechanistic insight in toxicological Mode of Action (MOA) of the adverse effects observed in laboratory animals. The vision described as Tox21c (Toxicity Testing in the 21st century) aims at predicting in vivo toxicity using a bottom-up-approach, starting with understanding of MOA based on in vitro data to ultimately predict adverse effects in humans. At present, a practical application of the Tox21c vision is still far away. While moving towards toxicity prediction based on in vitro data, a stepwise reduction of in vivo testing is foreseen by combining in vitro with in vivo tests. Furthermore, newly developed methods will also be increasingly applied, in conjunction with established methods in order to gain trust in these new methods. This confidence is based on a critical scientific prerequisite: the establishment of a causal link between data obtained with new technologies and adverse effects manifested in repeated-dose in vivo toxicity studies. It is proposed to apply the principles described in the WHO/IPCS framework of MOA to obtain this link. Finally, an international database of known MOAs obtained in laboratory animals using data-rich chemicals will facilitate regulatory acceptance and could further help in the validation of the toxicity pathway and adverse outcome pathway concepts.

  18. An integrative data mining approach to identifying adverse outcome pathway signatures.

    PubMed

    Oki, Noffisat O; Edwards, Stephen W

    2016-03-28

    The Adverse Outcome Pathway (AOP) framework is a tool for making biological connections and summarizing key information across different levels of biological organization to connect biological perturbations at the molecular level to adverse outcomes for an individual or population. Computational approaches to explore and determine these connections can accelerate the assembly of AOPs. By leveraging the wealth of publicly available data covering chemical effects on biological systems, computationally-predicted AOPs (cpAOPs) were assembled via data mining of high-throughput screening (HTS) in vitro data, in vivo data and other disease phenotype information. Frequent Itemset Mining (FIM) was used to find associations between the gene targets of ToxCast HTS assays and disease data from Comparative Toxicogenomics Database (CTD) by using the chemicals as the common aggregators between datasets. The method was also used to map gene expression data to disease data from CTD. A cpAOP network was defined by considering genes and diseases as nodes and FIM associations as edges. This network contained 18,283 gene to disease associations for the ToxCast data and 110,253 for CTD gene expression. Two case studies show the value of the cpAOP network by extracting subnetworks focused either on fatty liver disease or the Aryl Hydrocarbon Receptor (AHR). The subnetwork surrounding fatty liver disease included many genes known to play a role in this disease. When querying the cpAOP network with the AHR gene, an interesting subnetwork including glaucoma was identified. While substantial literature exists to support the potential for AHR ligands to elicit glaucoma, it was not explicitly captured in the public annotation information in CTD. The subnetwork from this analysis suggests a cpAOP that includes changes in CYP1B1 expression, which has been previously established in the literature as a primary cause of glaucoma. These case studies highlight the value in integrating multiple data sources when defining cpAOPs for HTS data. Copyright © 2016. Published by Elsevier Ireland Ltd.
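
    A hedged sketch of the frequent-itemset idea, not the study's implementation: chemicals act as transactions whose items are affected gene targets and associated diseases, and frequently co-occurring gene-disease itemsets suggest candidate cpAOP edges. The transactions below are invented, and the mlxtend apriori routine stands in for whatever FIM implementation the authors used.

```python
# Toy frequent-itemset mining over chemical "transactions" of gene targets and diseases.
import pandas as pd
from mlxtend.frequent_patterns import apriori
from mlxtend.preprocessing import TransactionEncoder

transactions = [
    ["gene:PPARA", "gene:CYP4A11", "disease:fatty liver"],
    ["gene:PPARA", "disease:fatty liver"],
    ["gene:AHR", "gene:CYP1B1", "disease:glaucoma"],
    ["gene:AHR", "gene:CYP1B1"],
    ["gene:PPARA", "gene:CYP4A11", "disease:fatty liver"],
]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.4, use_colnames=True)
# Keep itemsets pairing at least one gene with at least one disease (candidate cpAOP edges)
mixed = itemsets[itemsets["itemsets"].apply(
    lambda s: any(i.startswith("gene:") for i in s) and any(i.startswith("disease:") for i in s))]
print(mixed)
```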

  19. A Drosophila model for toxicogenomics: Genetic variation in susceptibility to heavy metal exposure

    PubMed Central

    Luoma, Sarah E.; St. Armour, Genevieve E.; Thakkar, Esha

    2017-01-01

    The genetic factors that give rise to variation in susceptibility to environmental toxins remain largely unexplored. Studies on genetic variation in susceptibility to environmental toxins are challenging in human populations, due to the variety of clinical symptoms and difficulty in determining which symptoms causally result from toxic exposure; uncontrolled environments, often with exposure to multiple toxicants; and difficulty in relating phenotypic effect size to toxic dose, especially when symptoms become manifest with a substantial time lag. Drosophila melanogaster is a powerful model that enables genome-wide studies for the identification of allelic variants that contribute to variation in susceptibility to environmental toxins, since the genetic background, environmental rearing conditions and toxic exposure can be precisely controlled. Here, we used extreme QTL mapping in an outbred population derived from the D. melanogaster Genetic Reference Panel to identify alleles associated with resistance to lead and/or cadmium, two ubiquitous environmental toxins that present serious health risks. We identified single nucleotide polymorphisms (SNPs) associated with variation in resistance to both heavy metals as well as SNPs associated with resistance specific to each of them. The effects of these SNPs were largely sex-specific. We applied mutational and RNAi analyses to 33 candidate genes and functionally validated 28 of them. We constructed networks of candidate genes as blueprints for orthologous networks of human genes. The latter not only provided functional contexts for known human targets of heavy metal toxicity, but also implicated novel candidate susceptibility genes. These studies validate Drosophila as a translational toxicogenomics gene discovery system. PMID:28732062

  20. Predictive Modeling of Chemical Hazard by Integrating Numerical Descriptors of Chemical Structures and Short-term Toxicity Assay Data

    PubMed Central

    Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander

    2012-01-01

    Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746
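
    The hybrid idea can be sketched, with placeholder data, as the concatenation of chemical descriptors and short-term bioassay readouts into a single feature matrix for one classifier; the descriptor counts, random forest choice, and labels below are assumptions for illustration only.

```python
# Sketch of a hybrid QSAR-like model: chemical descriptors plus bioassay features
# concatenated into one matrix (all values are random placeholders).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
chem_descriptors = rng.normal(size=(n, 50))     # e.g. computed physicochemical descriptors
bioassay_features = rng.normal(size=(n, 20))    # e.g. in vitro screening / toxicogenomics summaries
X = np.hstack([chem_descriptors, bioassay_features])
y = (chem_descriptors[:, 0] + bioassay_features[:, 0] > 0).astype(int)  # hypothetical toxicity calls

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).round(2))
```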

  1. A review of toxicity and mechanisms of individual and mixtures of heavy metals in the environment.

    PubMed

    Wu, Xiangyang; Cobbina, Samuel J; Mao, Guanghua; Xu, Hai; Zhang, Zhen; Yang, Liuqing

    2016-05-01

    The rationale for this study was to review the literature on the toxicity and corresponding mechanisms associated with lead (Pb), mercury (Hg), cadmium (Cd), and arsenic (As), individually and as mixtures, in the environment. Heavy metals are ubiquitous and generally persist in the environment, enabling them to biomagnify in the food chain. Living systems most often interact with a cocktail of heavy metals in the environment. Heavy metal exposure of biological systems may lead to oxidative stress, which may induce DNA damage, protein modification, lipid peroxidation, and other effects. In this review, the major mechanism associated with toxicities of individual metals was the generation of reactive oxygen species (ROS). Additionally, toxicities were expressed through depletion of glutathione and bonding to sulfhydryl groups of proteins. Interestingly, a metal like Pb becomes toxic to organisms through the depletion of antioxidants, while Cd indirectly generates ROS by its ability to replace iron and copper. ROS generated through exposure to arsenic were associated with many modes of action, and heavy metal mixtures were found to have varied effects on organisms. Many models based on concentration addition (CA) and independent action (IA) have been introduced to help predict toxicities and mechanisms associated with metal mixtures. An integrated model which combines CA and IA was further proposed for evaluating toxicities of non-interactive mixtures. In cases where there are molecular interactions, the toxicogenomic approach was used to predict toxicities. High-throughput toxicogenomics combines studies in genetics, genome-scale expression, cell and tissue expression, metabolite profiling, and bioinformatics.
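
    The two reference models mentioned above can be written down concretely. Assuming hypothetical Hill concentration-response curves for two metals, concentration addition (CA) finds the mixture effect at which the toxic units sum to one, while independent action (IA) predicts the mixture effect as one minus the product of the single-metal non-effects; all parameters below are illustrative.

```python
# Worked sketch of concentration addition (CA) and independent action (IA)
# mixture predictions for two hypothetical metals with Hill curves.
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, slope):
    """Fractional effect (0-1) of a single metal at concentration c (Hill model)."""
    return c ** slope / (c ** slope + ec50 ** slope)

def ecx(x, ec50, slope):
    """Concentration of a single metal producing fractional effect x."""
    return ec50 * (x / (1.0 - x)) ** (1.0 / slope)

# Hypothetical single-metal parameters and a mixture of fixed composition
metals = [{"ec50": 2.0, "slope": 1.5}, {"ec50": 10.0, "slope": 1.0}]
conc = [1.0, 4.0]   # mixture concentrations of metal 1 and metal 2

# Independent action: E_mix = 1 - prod_i (1 - E_i)
e_ia = 1.0 - np.prod([1.0 - effect(c, m["ec50"], m["slope"]) for c, m in zip(conc, metals)])

# Concentration addition: find the effect x where the toxic units sum to one
f = lambda x: sum(c / ecx(x, m["ec50"], m["slope"]) for c, m in zip(conc, metals)) - 1.0
e_ca = brentq(f, 1e-6, 1 - 1e-6)

print(f"IA predicted effect: {e_ia:.2f}, CA predicted effect: {e_ca:.2f}")
```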

  2. Human cell toxicogenomic analysis linking reactive oxygen species to the toxicity of monohaloacetic acid drinking water disinfection byproducts.

    PubMed

    Pals, Justin; Attene-Ramos, Matias S; Xia, Menghang; Wagner, Elizabeth D; Plewa, Michael J

    2013-01-01

    Chronic exposure to drinking water disinfection byproducts has been linked to adverse health risks. The monohaloacetic acids (monoHAAs) are generated as byproducts during the disinfection of drinking water and are cytotoxic, genotoxic, mutagenic, and teratogenic. Iodoacetic acid toxicity was mitigated by antioxidants, suggesting the involvement of oxidative stress; other monoHAAs may share a similar mode of action. Each monoHAA generated a significant concentration-response increase in the expression of a β-lactamase reporter under the control of the antioxidant response element (ARE). The monoHAAs generated oxidative stress with a rank order of iodoacetic acid (IAA) > bromoacetic acid (BAA) ≫ chloroacetic acid (CAA); this rank order was observed with other toxicological end points. Toxicogenomic analysis was conducted with a nontransformed human intestinal epithelial cell line (FHs 74 Int). Exposure to the monoHAAs altered the transcription levels of multiple oxidative stress responsive genes, indicating that each exposure generated oxidative stress. The transcriptome profiles showed an increase in thioredoxin reductase 1 (TXNRD1) and sulfiredoxin (SRXN1), suggesting that peroxiredoxin proteins had been oxidized during monoHAA exposures. Three possible sources of reactive oxygen species were identified: the hypohalous acid-generating peroxidase enzymes lactoperoxidase (LPO) and myeloperoxidase (MPO); nicotinamide adenine dinucleotide phosphate (NADPH)-dependent oxidase 5 (NOX5); and PTGS2 (COX-2)-mediated arachidonic acid metabolism. Each monoHAA exposure caused an increase in COX-2 mRNA levels. These data provide a functional association between monoHAA exposure and adverse health outcomes such as oxidative stress, inflammation, and cancer.

  3. Evaluation of low doses BPA-induced perturbation of glycemia by toxicogenomics points to a primary role of pancreatic islets and to the mechanism of toxicity.

    PubMed

    Carchia, E; Porreca, I; Almeida, P J; D'Angelo, F; Cuomo, D; Ceccarelli, M; De Felice, M; Mallardo, M; Ambrosino, C

    2015-10-29

    Epidemiologic and experimental studies have associated changes in blood glucose homeostasis with Bisphenol A (BPA) exposure. We took a toxicogenomic approach to investigate the mechanisms of low-dose (1 × 10⁻⁹ M) BPA toxicity in ex vivo cultures of primary murine pancreatic islets and hepatocytes. Twenty-nine inhibited genes were identified in islets and none in exposed hepatocytes. Although their expression was only slightly altered, their impaired cellular levels, as a whole, resulted in specific phenotypic changes. Impairment of mitochondrial function and metabolism, as predicted by bioinformatics analyses, was observed: BPA exposure led to a time-dependent decrease in mitochondrial membrane potential, to an increase in cellular ROS levels and, finally, to an induction of apoptosis, attributable to an increased Bax/Bcl-2 ratio owing to activation of the NF-κB pathway. Our data suggest a multifactorial mechanism for BPA toxicity in pancreatic islets, with emphasis on mitochondrial dysfunction and NF-κB activation. Finally, we assessed in vitro the viability of BPA-treated islets under stress conditions, such as exposure to high glucose, finding a reduced ability of the exposed islets to respond to further damage. This result was confirmed in vivo by evaluating the reduction of glycemia in hyperglycemic mice transplanted with control or BPA-treated pancreatic islets. These findings identify the pancreatic islet as the main target of BPA toxicity in impairing glycemia and suggest that BPA exposure can weaken the response of pancreatic islets to damage. The latter observation points to a broader concept that should be considered when designing experimental plans that better reproduce multiple-exposure conditions.

  4. Effects of lithium on growth, maturation, reproduction and gene expression in the nematode Caenorhabditis elegans.

    PubMed

    Inokuchi, Ayako; Yamamoto, Ryoko; Morita, Fumiyo; Takumi, Shota; Matsusaki, Hiromi; Ishibashi, Hiroshi; Tominaga, Nobuaki; Arizono, Koji

    2015-09-01

    Lithium (Li) has been widely used to treat bipolar disorder, and industrial use of Li has been increasing; thus, environmental pollution and the ecological impacts of Li have become a concern. This study was conducted to clarify the potential biological effects of LiCl and Li2CO3 on the nematode Caenorhabditis elegans as a model system for evaluating soil contaminated with Li. Exposure of C. elegans to LiCl and Li2CO3 decreased growth/maturation and reproduction. The lowest observed effect concentrations for growth, maturation and reproduction were 1250, 313 and 10 000 µM, respectively, for LiCl and 750, 750 and 3000 µM, respectively, for Li2CO3. We also investigated the physiological effects of LiCl and Li2CO3 in C. elegans using DNA microarray analysis as an eco-toxicogenomic approach. Among approximately 300 unique genes, including metabolic genes, exposure to 78 µM LiCl downregulated the expression of 36 cytochrome P450, 16 ABC transporter, 10 glutathione S-transferase, 16 lipid metabolism and two vitellogenin genes. On the other hand, exposure to 375 µM Li2CO3 downregulated the expression of 11 cytochrome P450, 13 ABC transporter, 13 lipid metabolism and one vitellogenin gene. No gene was upregulated by LiCl or Li2CO3. These results suggest that LiCl and Li2CO3 potentially affect biological and physiological functions in C. elegans through alteration of the expression of genes such as metabolic genes. Our data also provide experimental support for the utility of toxicogenomics by integrating gene expression profiling into a toxicological study of an environmentally important organism such as C. elegans. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Phenobarbital and propiconazole toxicogenomic profiles in mice show major similarities consistent with the key role that constitutive androstane receptor (CAR) activation plays in their mode of action

    PubMed Central

    Currie, Richard A.; Peffer, Richard C.; Goetz, Amber K.; Omiecinski, Curtis J.; Goodman, Jay I.

    2014-01-01

    Toxicogenomics (TGx) is employed frequently to investigate the underlying molecular mechanisms of a compound of interest and, thus, has become an aid to mode-of-action determination. However, the results and interpretation of a TGx dataset are influenced by the experimental design and the methods of analysis employed. This article describes an evaluation and reanalysis, by two independent laboratories, of previously published TGx mouse liver microarray data for a triazole fungicide, propiconazole (PPZ), and the anticonvulsant drug phenobarbital (PB). Propiconazole produced an increased incidence of liver tumors in male CD-1 mice only at a dose that exceeded the maximum tolerated dose (2500 ppm). Firstly, we illustrate how experimental design differences between two in vivo studies with PPZ and PB may affect the comparison of TGx results. Secondly, we demonstrate that different researchers using different pathway analysis tools can come to different conclusions on specific mechanistic pathways, even when using the same datasets. Finally, despite these differences, the results across the three analyses show a striking degree of similarity between PPZ- and PB-treated livers when the expression data are viewed as the major signaling pathways and cell processes affected. Additional studies described here show that the postulated key event of hepatocellular proliferation was observed in CD-1 mice for both PPZ and PB, and that PPZ is also a potent activator of the mouse CAR nuclear receptor. Thus, with regard to the hallmarks of CAR-induced effects that are key events in the mode of action (MOA) of PB-induced mouse liver carcinogenesis, PPZ-induced tumors can be viewed as being promoted by a similar PB-like, CAR-dependent MOA. PMID:24675475

  6. A novel genotoxin-specific qPCR array based on the metabolically competent human HepaRG™ cell line as a rapid and reliable tool for improved in vitro hazard assessment.

    PubMed

    Ates, Gamze; Mertens, Birgit; Heymans, Anja; Verschaeve, Luc; Milushev, Dimiter; Vanparys, Philippe; Roosens, Nancy H C; De Keersmaecker, Sigrid C J; Rogiers, Vera; Doktorova, Tatyana Y

    2018-04-01

    Although the value of the regulatory-accepted batteries for in vitro genotoxicity testing is recognized, they result in a high number of false positives. This has a major impact on society and on industries developing novel compounds for pharmaceutical, chemical, and consumer products, as afflicted compounds have to be (prematurely) abandoned or further tested on animals. Using the metabolically competent human HepaRG™ cell line and toxicogenomics approaches, we have developed an upgraded, innovative, and proprietary gene classifier. This gene classifier is based on transcriptomic changes induced by 12 genotoxic and 12 non-genotoxic reference compounds tested at sub-cytotoxic concentrations, i.e., IC10 concentrations as determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The resulting gene classifier was translated into an easy-to-handle qPCR array that, as shown by pathway analysis, covers several different cellular processes related to genotoxicity. To further assess the predictivity of the tool, a set of 5 known positive and 5 known negative test compounds for genotoxicity was evaluated. In addition, 2 compounds with debatable genotoxicity data were tested to explore how the qPCR array would classify these. With an accuracy of 100%, when equivocal results were considered positive, the results showed that combining HepaRG™ cells with a genotoxin-specific qPCR array can improve (geno)toxicological hazard assessment. In addition, the developed qPCR array was able to provide additional information on compounds for which so far only debatable genotoxicity data are available. The results indicate that the new in vitro tool can improve human safety assessment of chemicals in general by basing predictions on mechanistic toxicogenomics information.

  7. Discovering functional modules by topic modeling RNA-Seq based toxicogenomic data.

    PubMed

    Yu, Ke; Gong, Binsheng; Lee, Mikyung; Liu, Zhichao; Xu, Joshua; Perkins, Roger; Tong, Weida

    2014-09-15

    Toxicogenomics (TGx) endeavors to elucidate underlying molecular mechanisms by exploring gene expression profiles in response to toxic substances. Recently, RNA-Seq has increasingly been regarded as a more powerful alternative to microarrays in TGx studies. However, realizing RNA-Seq's full potential requires novel approaches to extracting information from the complex TGx data. If read counts are treated as the number of times a word occurs in a document, gene expression profiles from RNA-Seq are analogous to the word-by-document matrix used in text mining. Topic modeling, which aims to discover the latent structures in text corpora, should therefore be helpful for exploring RNA-Seq-based TGx data. In this study, topic modeling was applied to a typical RNA-Seq-based TGx data set to discover hidden functional modules. The RNA-Seq-based gene expression profiles were transformed into "documents", on which latent Dirichlet allocation (LDA) was used to build a topic model. We found that samples treated with compounds sharing the same modes of action (MoAs) could be clustered based on topic similarities. The topic most relevant to each cluster was identified as a "marker" topic, which was interpreted by gene enrichment analysis with MoAs and then confirmed by compound and pathway associations mined from the literature. To further validate the "marker" topics, we tested topic transferability from RNA-Seq to microarrays. The RNA-Seq-based gene expression profile of a topic specifically associated with the peroxisome proliferator-activated receptor (PPAR) signaling pathway was used to query samples with similar expression profiles in two different microarray data sets, yielding an accuracy of about 85%. This proof-of-concept study demonstrates the applicability of topic modeling to discover functional modules in RNA-Seq data and suggests a valuable computational tool for leveraging information within TGx data in the RNA-Seq era.
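    As a rough illustration of the sample-as-document analogy described above, the sketch below fits an LDA topic model to a synthetic sample-by-gene count matrix with scikit-learn. The matrix, the number of topics, and the "marker gene" selection are placeholder assumptions for illustration, not the study's actual pipeline.

    ```python
    # Minimal sketch of topic modeling on RNA-Seq counts: each sample is a
    # "document" and each gene is a "word" whose count is the read count.
    # The count matrix below is synthetic; a real analysis would load a
    # sample x gene read-count table.
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.metrics.pairwise import cosine_similarity

    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=5, size=(60, 1000))  # 60 samples x 1000 genes

    lda = LatentDirichletAllocation(n_components=10, random_state=0)
    doc_topic = lda.fit_transform(counts)   # per-sample topic proportions
    topic_gene = lda.components_            # per-topic gene loadings

    # Samples with similar topic mixtures can be clustered by topic similarity,
    # analogous to grouping compounds that share a mode of action.
    similarity = cosine_similarity(doc_topic)
    top_genes_topic0 = np.argsort(topic_gene[0])[::-1][:20]  # candidate "marker" genes for topic 0

    print("sample x topic matrix:", doc_topic.shape)
    print("top genes for topic 0:", top_genes_topic0[:5])
    ```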

  8. Gene expression profiling in liver and testis of rats to characterize the toxicity of triazole fungicides.

    PubMed

    Tully, Douglas B; Bao, Wenjun; Goetz, Amber K; Blystone, Chad R; Ren, Hongzu; Schmid, Judith E; Strader, Lillian F; Wood, Carmen R; Best, Deborah S; Narotsky, Michael G; Wolf, Douglas C; Rockett, John C; Dix, David J

    2006-09-15

    Four triazole fungicides were studied using toxicogenomic techniques to identify potential mechanisms of action. Adult male Sprague-Dawley rats were dosed for 14 days by gavage with fluconazole, myclobutanil, propiconazole, or triadimefon. Following exposure, serum was collected for hormone measurements, and liver and testes were collected for histology, enzyme biochemistry, or gene expression profiling. Body and testis weights were unaffected, but liver weights were significantly increased by all four triazoles, and hepatocytes exhibited centrilobular hypertrophy. Myclobutanil exposure increased serum testosterone and decreased sperm motility, but no treatment-related testis histopathology was observed. We hypothesized that gene expression profiles would identify potential mechanisms of toxicity and used DNA microarrays and quantitative real-time PCR (qPCR) to generate profiles. Triazole fungicides are designed to inhibit fungal cytochrome P450 (CYP) 51 enzyme but can also modulate the expression and function of mammalian CYP genes and enzymes. Triazoles affected the expression of numerous CYP genes in rat liver and testis, including multiple Cyp2c and Cyp3a isoforms as well as other xenobiotic metabolizing enzyme (XME) and transporter genes. For some genes, such as Ces2 and Udpgtr2, all four triazoles had similar effects on expression, suggesting possible common mechanisms of action. Many of these CYP, XME and transporter genes are regulated by xeno-sensing nuclear receptors, and hierarchical clustering of CAR/PXR-regulated genes demonstrated the similarities of toxicogenomic responses in liver between all four triazoles and in testis between myclobutanil and triadimefon. Triazoles also affected expression of multiple genes involved in steroid hormone metabolism in the two tissues. Thus, gene expression profiles helped identify possible toxicological mechanisms of the triazole fungicides.

  9. Gene expression analysis in rat lungs after intratracheal exposure to nanoparticles doped with cadmium

    NASA Astrophysics Data System (ADS)

    Coccini, Teresa; Fabbri, Marco; Roda, Elisa; Grazia Sacco, Maria; Manzo, Luigi; Gribaldo, Laura

    2011-07-01

    Silica nanoparticles (NPs) incorporating cadmium (Cd) have been developed for a range of potential applications including drug delivery devices. Occupational Cd inhalation has been associated with emphysema, pulmonary fibrosis and lung tumours. Mechanistically, Cd can induce oxidative stress and mediate cell-signalling pathways that are involved in inflammation. This in vivo study aimed at investigating the pulmonary molecular effects of NPs doped with Cd (NP-Cd, 1 mg/animal) compared to soluble CdCl2 (400 μg/animal) in Sprague Dawley rats treated intra-tracheally, evaluated 7 and 30 days after administration. Silica NPs containing Cd salt were prepared starting from a commercial nano-size silica powder (HiSil™ T700, Degussa) with an average pore size of 20 nm and a surface area of 240 m²/g. Toxicogenomic analysis was performed with DNA microarray technology (Agilent Whole Rat Genome Microarray 4×44K) to evaluate changes in gene expression across the entire genome. These findings indicate that whole-genome analysis may represent a valuable approach to assess the full spectrum of biological responses to cadmium-containing nanomaterials.

  10. LiverTox: Advanced QSAR and Toxicogenomic Software for Hepatotoxicity Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, P-Y.; Yuracko, K.

    2011-02-25

    YAHSGS LLC and Oak Ridge National Laboratory (ORNL) established a CRADA in an attempt to develop a predictive system using a pre-existing ORNL computational neural network and wavelets format. The work addressed the national need for a toxicity prediction system to help reduce the significant resources (money and time) being directed toward developing chemical agents for commerce. The research project was supported through an STTR mechanism and funded by the National Institute of Environmental Health Sciences, beginning with Phase I in 2004 (CRADA No. ORNL-04-0688) and extending through Phase II to 2007 (ORNL NFE-06-00020). To pursue the research objectives and aims outlined under this CRADA, state-of-the-art computational neural network and wavelet methods were used in an effort to design a predictive toxicity system based on two independent areas: quantitative structure-activity relationships and gene-expression data obtained from microarrays. A third area, using the then-new Massively Parallel Signature Sequencing (MPSS) technology to assess gene expression, was also attempted but had to be dropped because the company holding the rights to this promising MPSS technology went out of business. A research-scale predictive toxicity database system called the Multi-Intelligent System for Toxicogenomic Applications (MISTA) was developed and its feasibility for use as a predictor of toxicological activity was tested. The fundamental focus of the CRADA was the effort to operate the MISTA database using the ORNL neural network. This effort indicated the potential that such a fully developed system might be used to assist in predicting biological endpoints such as hepatotoxicity and neurotoxicity. The MISTA/LiverTox approach, if eventually fully developed, might also be useful for automatic processing of microarray data to predict modes of action. A technical paper describing the methods and technology used in the CRADA, entitled "A Toxicity Evaluation and Predictive System Based on Neural Networks and Wavelets", has been published in an American Chemical Society peer-reviewed journal (J. Chem. Inf. Model. 47: 676-685, 2007). A patent application was filed but later abandoned.

  11. Petri net-based prediction of therapeutic targets that recover abnormally phosphorylated proteins in muscle atrophy.

    PubMed

    Jung, Jinmyung; Kwon, Mijin; Bae, Sunghwa; Yim, Soorin; Lee, Doheon

    2018-03-05

    Muscle atrophy, an involuntary loss of muscle mass, is involved in various diseases and sometimes leads to mortality. However, therapeutics for muscle atrophy have thus far had limited effects. Here, we present a new approach for therapeutic target prediction using Petri net simulation of phosphorylation status, under the reasonable assumption that recovery of abnormally phosphorylated proteins can be a treatment for muscle atrophy. The Petri net model was employed to simulate phosphorylation status in three states, i.e. the reference, atrophic and each gene-inhibited state, based on the myocyte-specific phosphorylation network. We devised a phosphorylation-specific Petri net that involves two types of transitions (phosphorylation or de-phosphorylation) and two types of places (activation with or without phosphorylation). Before predicting therapeutic targets, the simulation results in the reference and atrophic states were validated by Western blotting experiments detecting five marker proteins, i.e. RELA, SMAD2, SMAD3, FOXO1 and FOXO3. Finally, we determined 37 potential therapeutic targets whose inhibition recovers the phosphorylation status from an atrophic state as indicated by the five validated marker proteins. In the evaluation, we confirmed that the 37 potential targets were enriched for muscle atrophy-related terms such as actin and muscle contraction processes, and that they overlapped significantly with the genes associated with muscle atrophy reported in the Comparative Toxicogenomics Database (p-value < 0.05). Furthermore, we noticed that they included several proteins that could not be characterized by shortest path analysis. Three potential targets, i.e. BMPR1B, ROCK, and LEPR, were manually validated against the literature. In this study, we suggest a new approach to predict potential therapeutic targets of muscle atrophy based on an analysis of phosphorylation status simulated by a Petri net. We generated a list of potential therapeutic targets whose inhibition recovers abnormally phosphorylated proteins in the atrophic state. These targets were evaluated by various approaches, such as Western blotting, GO terms, the literature, known muscle atrophy-related genes and shortest path analysis. We expect the proposed strategy to provide an understanding of phosphorylation status in muscle atrophy and to assist in identifying new therapies.
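    For readers unfamiliar with Petri nets, the sketch below shows a toy net in the spirit described above, with places representing activation states and a phosphorylation-type transition that fires when its input places hold tokens. The network, place names, and firing rule are simplified assumptions and not the authors' myocyte-specific model.

    ```python
    # Minimal sketch of a Petri net with transitions labeled by type
    # (phosphorylation / de-phosphorylation) over places that represent
    # protein activation states. The tiny network below is illustrative only.
    class PetriNet:
        def __init__(self):
            self.marking = {}        # place -> token count
            self.transitions = []    # (input places, output places, kind)

        def add_place(self, name, tokens=0):
            self.marking[name] = tokens

        def add_transition(self, inputs, outputs, kind):
            self.transitions.append((inputs, outputs, kind))

        def step(self):
            for inputs, outputs, kind in self.transitions:
                if all(self.marking[p] > 0 for p in inputs):  # transition is enabled
                    for p in inputs:
                        self.marking[p] -= 1
                    for p in outputs:
                        self.marking[p] += 1

    net = PetriNet()
    net.add_place("kinase_active", tokens=1)
    net.add_place("target_unphosphorylated", tokens=1)
    net.add_place("target_phosphorylated")
    # The kinase is both consumed and re-produced, modeling a read arc.
    net.add_transition(["kinase_active", "target_unphosphorylated"],
                       ["kinase_active", "target_phosphorylated"],
                       kind="phosphorylation")

    for _ in range(3):
        net.step()
    print(net.marking)
    ```

    In the same spirit, simulating the net with a given place knocked out (never given tokens) is one way to mimic the gene-inhibited states described above.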

  12. Morphological and cytohistochemical evaluation of renal effects of cadmium-doped silica nanoparticles given intratracheally to rat

    NASA Astrophysics Data System (ADS)

    Coccini, T.; Roda, E.; Barni, S.; Manzo, L.

    2013-04-01

    Renal morphological parameters were determined in rats intratracheally instilled with model cadmium-containing silica nanoparticles (Cd-SiNPs, 1 mg/rat), also exploring whether their potential modifications would be associated with toxicogenomic changes. Cd-SiNP effects, evaluated 7 and 30 days post-exposure, were assessed by (i) histopathology (haematoxylin/eosin staining) and (ii) characterization of apoptotic features by TUNEL staining. Data were compared with those obtained with CdCl2 (400 μg/rat), SiNPs (600 μg/rat), or 0.1 mL saline. Area-specific cell apoptosis was observed in all treatment groups: the cortex and inner medulla were the most affected regions. Apoptotic changes were apparent at 7 days post-exposure in both areas, and were still observable in the inner medulla 30 days after treatment. The increase in apoptotic frequency was more pronounced in Cd-SiNP-treated animals compared to either CdCl2 or SiNPs. Histological findings showed comparable alterations in the renal glomerular (cortex) architecture occurring in all treatment groups at both time-points considered. The glomeruli often appeared collapsed, showing condensed, packed mesangial and endothelial cells. Oedematous haemorrhagic glomeruli were also observed in Cd-SiNP-treated animals. Bare SiNPs caused morphological and apoptotic changes without modifying the renal gene expression profile. These findings support the concept that multiple assays and an integrated testing strategy should be recommended to characterize toxicological responses to nanoparticles in mammalian systems.

  13. De novo transcriptome assembly and differential gene expression analysis of the calanoid copepod Acartia tonsa exposed to nickel nanoparticles.

    PubMed

    Zhou, Chao; Carotenuto, Ylenia; Vitiello, Valentina; Wu, Changwen; Zhang, Jianshe; Buttino, Isabella

    2018-06-14

    The calanoid copepod Acartia tonsa is a reference species in standardized ecotoxicology bioassays. Despite this interest, there is a lack of knowledge on the molecular responses of A. tonsa to contaminants. We generated a de novo assembled transcriptome of A. tonsa exposed for 4 days to 8.5 and 17 mg/L nickel nanoparticles (NiNPs), which have been shown to reduce egg hatching success and larval survival but had no effects on the adults. The aims of our study were to 1) improve the knowledge on the molecular responses of the A. tonsa copepod and 2) increase the genomic resources of this copepod for further identification of potential biomarkers of NP exposure. The de novo assembled transcriptome of A. tonsa consisted of 53,619 unigenes, which were further annotated against the nr, GO, KOG and KEGG databases. In particular, most unigenes were assigned to Metabolic and Cellular processes (34-45%) GO terms, and to Human disease (28%) and Organismal systems (23%) KEGG categories. Comparison among treatments showed that 373 unigenes were differentially expressed in A. tonsa exposed to NiNPs at 8.5 and 17 mg/L with respect to controls. Most of these genes were downregulated and took part in ribosome biogenesis, translation and protein turnover, suggesting that NiNPs could affect the copepod's ribosome synthesis machinery and functioning. Overall, our study highlights the potential of a toxicogenomic approach for gaining more mechanistic and functional information about the mode of action of emerging compounds on marine organisms and for biomarker discovery in crustaceans. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Toxicogenomic outcomes predictive of forestomach carcinogenesis following exposure to benzo(a)pyrene: Relevance to human cancer risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labib, Sarah, E-mail: Sarah.Labib@hc-sc.gc.ca; Guo, Charles H., E-mail: Charles.Guo@hc-sc.gc.ca; Williams, Andrew, E-mail: Andrew.Williams@hc-sc.gc.ca

    2013-12-01

    Forestomach tumors are observed in mice exposed to environmental carcinogens. However, the relevance of these data to humans is controversial because humans lack a forestomach. We hypothesize that an understanding of early molecular changes after exposure to a carcinogen in the forestomach will provide mode-of-action information to evaluate the applicability of forestomach cancers to human cancer risk assessment. In the present study we exposed mice to benzo(a)pyrene (BaP), an environmental carcinogen commonly associated with tumors of the rodent forestomach. Toxicogenomic tools were used to profile the gene expression response in the forestomach. Adult Muta™Mouse males were orally exposed to 25, 50, and 75 mg BaP/kg body weight/day for 28 consecutive days. The forestomach was collected three days post-exposure. DNA microarrays, real-time RT-qPCR arrays, and protein analyses were employed to characterize responses in the forestomach. Microarray results showed altered expression of 414 genes across all treatment groups (± 1.5 fold; false discovery rate adjusted P ≤ 0.05). Significant downregulation of genes associated with phase II xenobiotic metabolism and increased expression of genes implicated in antigen processing and presentation, immune response, chemotaxis, and keratinocyte differentiation were observed in treated groups in a dose-dependent manner. A systematic comparison of the differentially expressed genes in the forestomach from the present study to differentially expressed genes identified in human diseases, including human gastrointestinal tract cancers, using the NextBio Human Disease Atlas showed significant commonalities between the two models. Our results provide molecular evidence supporting the use of the mouse forestomach model to evaluate chemically induced gastrointestinal carcinogenesis in humans. - Highlights: • Benzo(a)pyrene-mediated transcriptomic response in the forestomach was examined. • The immunoproteosome subunits and MHC class I pathway were the most affected. • Keratinocyte differentiation associated gene expression changes were dose-dependent. • Molecular similarities exist between cancers of the forestomach and human stomach.
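    The differential-expression criterion quoted above (at least ±1.5-fold change with a false discovery rate adjusted P ≤ 0.05) can be written compactly. The sketch below applies that style of filter to synthetic log2 expression data using a t-test with Benjamini-Hochberg correction, which is only one of several ways such an analysis might be run and is not the study's actual statistical pipeline.

    ```python
    # Minimal sketch of a fold-change + FDR filter on synthetic log2 data.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    control = rng.normal(loc=8, scale=1, size=(5, 1000))  # 5 control arrays x 1000 genes (log2 scale)
    treated = rng.normal(loc=8, scale=1, size=(5, 1000))  # 5 treated arrays x 1000 genes

    t_stat, p_values = stats.ttest_ind(treated, control, axis=0)
    fdr = multipletests(p_values, method="fdr_bh")[1]        # Benjamini-Hochberg adjusted p-values
    log2_fc = treated.mean(axis=0) - control.mean(axis=0)    # log2 fold change

    significant = (np.abs(log2_fc) >= np.log2(1.5)) & (fdr <= 0.05)
    print(f"{significant.sum()} genes pass the +/-1.5-fold, FDR <= 0.05 filter")
    ```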

  15. Toxicogenomics: the challenges and opportunities to identify biomarkers, signatures and thresholds to support mode-of-action.

    PubMed

    Currie, Richard A

    2012-08-15

    Toxicogenomics (TGx) can be defined as the application of "omics" techniques to toxicology and risk assessment. By identifying molecular changes associated with toxicity, TGx data can assist hazard identification and the investigation of causes. Early technical challenges were evaluated and addressed by consortia (e.g. ILSI/HESI and the MicroArray Quality Control (MAQC) consortium), which demonstrated that TGx gives reliable and reproducible information. The MAQC also produced "best practice on signature generation" after conducting an extensive evaluation of different methods on common datasets. Two findings of note were the need for methods that control batch variability, and that the predictive ability of a signature changes in concert with the variability of the endpoint. The key challenge remaining is data interpretation, because TGx can identify molecular changes that are causal, associated with, or incidental to toxicity. Application of Bradford Hill's tests for causation, which are used to build mode of action (MOA) arguments, can produce reasonable hypotheses linking altered pathways to phenotypic changes. However, challenges in interpretation remain: are all pathway changes equal, and which are most important and plausibly linked to toxicity? The expert judgement of the toxicologist is therefore still needed. There are theoretical reasons why consistent alterations across a metabolic pathway are important, but similar changes in signalling pathways may not alter information flow. At the molecular level, thresholds may be due to the inherent properties of the regulatory network, for example switch-like behaviours arising from some network motifs (e.g. positive feedback) in the perturbed pathway leading to the toxicity. The application of systems biology methods to TGx data can generate hypotheses that explain why a threshold response exists. However, are we adequately trained to make these judgments? There is a need for collaborative efforts between regulators, industry and academia to properly define how these technologies can be applied, using appropriate case studies. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Identification of potential genomic biomarkers of hepatotoxicity caused by reactive metabolites of N-methylformamide: Application of stable isotope labeled compounds in toxicogenomic studies.

    PubMed

    Mutlib, Abdul; Jiang, Ping; Atherton, Jim; Obert, Leslie; Kostrubsky, Seva; Madore, Steven; Nelson, Sidney

    2006-10-01

    The inability to predict whether a metabolically bioactivated compound will cause toxicity in later stages of drug development or post-marketing is of serious concern. One approach for improving the prediction of compound toxicity has been to compare the gene expression profile in preclinical models dosed with novel compounds to a gene expression database generated from compounds with known toxicity. While this guilt-by-association approach can be useful, it is often difficult to elucidate gene expression changes that may be related to the generation of reactive metabolites. In an effort to address this issue, we compared the gene expression profiles obtained from animals treated with a soft-electrophile-producing hepatotoxic compound against the corresponding deuterium-labeled analogues that are resistant to metabolic processing. Our aim was to identify a subset of potential biomarker genes for hepatotoxicity caused by soft-electrophile-producing compounds. The current study utilized the known hepatotoxic compound N-methylformamide (NMF) and two analogues labeled with deuterium at different positions to block metabolic oxidation at the formyl (d1) and methyl (d3) moieties. Groups of mice were dosed with each compound, and their livers were harvested at different time intervals. RNA was prepared and analyzed on Affymetrix GeneChip arrays. RNA transcripts showing statistically significant changes were identified, and selected changes were confirmed using TaqMan RT-PCR. Serum clinical chemistry and histopathologic evaluations were performed on selected samples as well. The data set generated from the different groups of animals enabled us to determine which gene expression changes were attributable to the bioactivating pathway. We were able to selectively modulate the metabolism of NMF by labeling various positions of the molecule with a stable isotope, allowing us to monitor gene changes specifically due to a particular metabolic pathway. Two groups of genes were identified that were associated with the metabolism of a particular part of the NMF molecule. The metabolic pathway leading to the production of reactive methyl isocyanate resulted in distinct expression patterns that correlated with histopathologic findings. There was a clear correlation between the expression of certain genes involved in the cell cycle/apoptosis and inflammatory pathways and the presence of the reactive metabolite. These genes may serve as potential genomic biomarkers of hepatotoxicity induced by soft-electrophile-producing compounds. However, the robustness of these potential genomic biomarkers will need to be validated using other hepatotoxicants (both soft- and hard-electrophile-producing agents) and compounds known to cause idiosyncratic liver toxicity before being adopted into the drug discovery screening process.

  17. An Approach to Using Toxicogenomic Data in US EPA Human ...

    EPA Pesticide Factsheets

    This draft report is a description of an approach to evaluate genomic data for use in risk assessment and a case study to illustrate the approach. The dibutyl phthalate (DBP) case study example focuses on male reproductive developmental effects and the qualitative application of the available genomic data. The case study presented in this draft document is a separate activity from any of the ongoing IRIS human health assessments for the phthalates.

  18. A transcriptomic analysis of turmeric: Curcumin represses the expression of cholesterol biosynthetic genes and synergizes with simvastatin.

    PubMed

    Einbond, Linda Saxe; Manservisi, Fabiana; Wu, Hsan-Au; Balick, Michael; Antonetti, Victoria; Vornoli, Andrea; Menghetti, Ilaria; Belpoggi, Fiorella; Redenti, Stephen; Roter, Alan

    2018-06-01

    The spice turmeric (Curcuma longa L.) has a long history of use as an anti-inflammatory agent. The active component curcumin induces a variety of diverse biological effects and forms a series of degradation and metabolic products in vivo. Our hypothesis is that the field of toxicogenomics provides tools that can be used to characterize the mode of action and toxicity of turmeric components and to predict turmeric-drug interactions. Male Sprague-Dawley rats were treated for 4 days with turmeric root containing about 3% curcumin (comparable to what people consume in the fresh or dried root) or a fraction of turmeric enriched for curcumin (∼74%), and liver tissue was collected for gene expression analysis. Two doses of each agent were added to the diet, corresponding to 540 and 2700 mg/kg body weight/day of turmeric. The transcriptomic effects of turmeric on rat liver tissue were examined using three programs: the ToxFx Analysis Suite (in the context of a large drug database), Ingenuity Pathway Analysis, and NextBio. ToxFx analysis indicates that turmeric containing about 3% or 74% curcumin represses the expression of cholesterol biosynthetic genes. The dose of 400 mg/kg b.w./day curcumin induced the Drug Signature associated with hepatic inflammatory infiltrate. Ingenuity analysis confirmed that all 4 turmeric treatments had a significant effect on cholesterol biosynthesis, specifically the Cholesterol biosynthesis superpathway and Cholesterol biosynthesis 1 and 2. Among the top 10 up- or downregulated genes, all 4 treatments downregulated PDK4, while 3 treatments downregulated ANGPTL4 or FASN. These findings suggest curcumin may enhance the anticancer effects of certain classes of statins, which we confirmed with biological assays. Given this enhancement, lower levels of statins may be required, and may even be desirable. Our findings also warn of possible safety issues, such as potential inflammatory liver effects, for patients who ingest a combination of certain classes of statins and curcumin. Transcriptomic analysis suggests that turmeric is worth studying for the prevention and treatment of cancer and lipid disorders. Our approach lays new groundwork for studies of the mode of action and safety of herbal medicines and can also be used to develop a methodology to standardize herbal medicines. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. DNA content alterations in Tetrahymena pyriformis macronucleus after exposure to food preservatives sodium nitrate and sodium benzoate.

    PubMed

    Loutsidou, Ariadni C; Hatzi, Vasiliki I; Chasapis, C T; Terzoudi, Georgia I; Spiliopoulou, Chara A; Stefanidou, Maria E

    2012-12-01

    The toxicity, in terms of changes in DNA content, of two food preservatives, sodium nitrate and sodium benzoate, was studied in the protozoan Tetrahymena pyriformis using DNA image analysis technology. For this purpose, selected doses of both food additives were administered for 2 h to protozoa cultures and DNA image analysis of T. pyriformis nuclei was performed. The analysis was based on the measurement of the mean optical density (MOD), which represents the cellular DNA content. The results showed that after exposure of the protozoan cultures to doses equivalent to the acceptable daily intake (ADI), a statistically significant increase in macronuclear DNA content was observed compared to the unexposed control samples. The observed increase in macronuclear DNA content is indicative of stimulation of the mitotic process, and the accompanying stimulation of protozoan proliferative activity is consistent with this interpretation. Since alterations at the DNA level, such as changes in DNA content and uncontrolled mitogenic stimulation, have been linked with chemical carcinogenesis, the results of the present study add information on the toxicogenomic profile of the selected chemicals and may potentially lead to reconsideration of the excessive use of nitrates, with the aim of protecting public health.

  20. A test strategy for the assessment of additive attributed toxicity of tobacco products.

    PubMed

    Kienhuis, Anne S; Staal, Yvonne C M; Soeteman-Hernández, Lya G; van de Nobelen, Suzanne; Talhout, Reinskje

    2016-08-01

    The new EU Tobacco Product Directive (TPD) prohibits tobacco products containing additives that are toxic in unburnt form or that increase the overall toxicity of the product. This paper proposes a strategy to assess additive-attributed toxicity in the context of the TPD. The literature was searched for toxicity testing strategies used for regulatory purposes by the tobacco industry and governmental institutes. Although mainly traditional in vivo testing strategies have been applied to assess the toxicity of unburnt additives and increases in the overall toxicity of tobacco products due to additives, in vitro tests combined with toxicogenomics and validated using biomarkers of exposure and disease are the most promising in this respect. As such, tests are needed that are sensitive enough to detect additive-attributed toxicity above the overall toxicity of tobacco products and that can associate assay outcomes with human risk and exposure. In conclusion, new, sensitive in vitro assays are needed to determine whether comparable testing allows for the assessment of small changes in overall toxicity attributed to additives. A more pragmatic approach for implementation in the short term is mandated lowering of toxic emission components. Combined with risk assessment, this approach allows assessment of the effectiveness of harm reduction strategies, including banning or reducing additives. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Can different primary care databases produce comparable estimates of burden of disease: results of a study exploring venous leg ulceration.

    PubMed

    Petherick, Emily S; Pickett, Kate E; Cullum, Nicky A

    2015-08-01

    Primary care databases from the UK have been widely used to produce evidence on the epidemiology and health service usage of a wide range of conditions. To date, there have been few evaluations of the comparability of estimates between different sources of these data. The aim was to estimate the comparability of two widely used primary care databases, the Health Improvement Network (THIN) database and the General Practice Research Database (GPRD), using venous leg ulceration as an exemplar condition. The design was a cross prospective cohort comparison of the GPRD and THIN databases using data from 1998 to 2006. A data set was extracted from both databases containing all cases of persons aged 20 years or greater with a database diagnosis of venous leg ulceration recorded for the period 1998-2006. Annual rates of incidence and prevalence of venous leg ulceration were calculated within each database, standardized to the European standard population, and compared using standardized rate ratios. Comparable estimates of venous leg ulcer incidence from the GPRD and THIN databases could be obtained using data from 2000 to 2006, and of prevalence using data from 2001 to 2006. Recent data collected by these two databases are more likely to produce comparable results on the burden of venous leg ulceration. These results require confirmation in other disease areas to enable researchers to have confidence in the comparability of findings from these two widely used primary care research resources. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Comparative toxicogenomic analysis of oral Cr(VI) exposure effects in rat and mouse small intestinal epithelia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopec, Anna K.; Thompson, Chad M.; Kim, Suntae

    2012-07-15

    Continuous exposure to high concentrations of hexavalent chromium [Cr(VI)] in drinking water results in intestinal tumors in mice but not rats. Concentration-dependent gene expression effects were evaluated in female F344 rat duodenal and jejunal epithelia following 7 and 90 days of exposure to 0.3–520 mg/L (as sodium dichromate dihydrate, SDD) in drinking water. Whole-genome microarrays identified 3269 and 1815 duodenal, and 4557 and 1534 jejunal, differentially expressed genes at 8 and 91 days, respectively, with significant overlaps between the intestinal segments. Functional annotation identified gene expression changes associated with oxidative stress, cell cycle, cell death, and immune response that were consistent with reported changes in redox status and histopathology. Comparative analysis with B6C3F1 mouse data from a similarly designed study identified 2790 differentially expressed rat orthologs in the duodenum compared to 5013 mouse orthologs at day 8, and only 1504 rat and 3484 mouse orthologs at day 91. Automated dose-response modeling resulted in similar median EC50s in the rodent duodenal and jejunal mucosae. Comparative examination of differentially expressed genes also identified divergently regulated orthologs. Comparable numbers of differentially expressed genes were observed at equivalent Cr concentrations (μg Cr/g duodenum). However, mice accumulated higher Cr levels than rats at ≥ 170 mg/L SDD, resulting in a ∼2-fold increase in the number of differentially expressed genes. These qualitative and quantitative differences in differential gene expression, which correlate with differences in tissue dose, likely contribute to the disparate intestinal tumor outcomes. -- Highlights: ► Cr(VI) elicits dose-dependent changes in gene expression in rat intestine. ► Cr(VI) elicits less differential gene expression in rats compared to mice. ► Cr(VI) gene expression can be phenotypically anchored to intestinal changes. ► Species-specific and divergent changes are consistent with species-specific tumors.
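    The automated dose-response modeling mentioned above typically amounts to fitting a sigmoidal model per gene and extracting an EC50. The sketch below fits a four-parameter Hill function to a hypothetical fold-change series spanning the reported 0.3–520 mg/L SDD range; the dose points, response values, and starting parameters are illustrative assumptions, not data or methods from the study.

    ```python
    # Minimal sketch of per-gene dose-response (Hill) fitting to estimate an EC50.
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(dose, bottom, top, log_ec50, slope):
        # Four-parameter logistic curve with EC50 parameterized on a log10 scale
        return bottom + (top - bottom) / (1.0 + (10 ** log_ec50 / dose) ** slope)

    doses = np.array([0.3, 4.0, 14.0, 60.0, 170.0, 520.0])   # hypothetical mg/L SDD series
    response = np.array([1.0, 1.1, 1.4, 2.2, 3.0, 3.2])      # hypothetical fold-change values

    params, _ = curve_fit(hill, doses, response,
                          p0=[1.0, 3.0, np.log10(50.0), 1.0])
    print(f"estimated EC50 ~ {10 ** params[2]:.0f} mg/L")
    ```

    In a real workflow, a loop over all differentially expressed genes would yield the per-gene EC50 distribution whose median is compared across tissues.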

  3. Toxicogenomics in the 3T3-L1 cell line, a new approach for screening of obesogenic compounds.

    PubMed

    Pereira-Fernandes, Anna; Vanparys, Caroline; Vergauwen, Lucia; Knapen, Dries; Jorens, Philippe Germaines; Blust, Ronny

    2014-08-01

    The obesogen hypothesis states that together with an energy imbalance between calories consumed and calories expended, exposure to environmental compounds early in life or throughout lifetime might have an influence on obesity development. In this work, we propose a new approach for obesogen screening, i.e., the use of transcriptomics in the 3T3-L1 pre-adipocyte cell line. Based on the data from a previous study of our group using a lipid accumulation based adipocyte differentiation assay, several human-relevant obesogenic compounds were selected: reference obesogens (Rosiglitazone, Tributyltin), test obesogens (Butylbenzyl phthalate, butylparaben, propylparaben, Bisphenol A), and non-obesogens (Ethylene Brassylate, Bis (2-ethylhexyl)phthalate). The high stability and reproducibility of the 3T3-L1 gene transcription patterns over different experiments and cell batches is demonstrated by this study. Obesogens and non-obesogen gene transcription profiles were clearly distinguished using hierarchical clustering. Furthermore, a gradual distinction corresponding to differences in induction of lipid accumulation could be made between test and reference obesogens based on transcription patterns, indicating the potential use of this strategy for classification of obesogens. Marker genes that are able to distinguish between non, test, and reference obesogens were identified. Well-known genes involved in adipocyte differentiation as well as genes with unknown functions were selected, implying a potential adipocyte-related function of the latter. Cell-physiological lipid accumulation was well estimated based on transcription levels of the marker genes, indicating the biological relevance of omics data. In conclusion, this study shows the high relevance and reproducibility of this 3T3-L1 based in vitro toxicogenomics tool for classification of obesogens and biomarker discovery. Although the results presented here are promising, further confirmation of the predictive value of the set of candidate biomarkers identified as well as the validation of their clinical role will be needed. © The Author 2014. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
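    The classification step described above rests on hierarchical clustering of treatment transcription profiles. The sketch below clusters synthetic profiles for the named compounds using correlation distance and average linkage; the profile matrix and chosen cluster count are placeholder assumptions rather than the study's data or exact method.

    ```python
    # Minimal sketch of hierarchical clustering of treatment transcription profiles.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    labels = ["rosiglitazone", "tributyltin", "BBP", "butylparaben",
              "propylparaben", "BPA", "ethylene_brassylate", "DEHP"]
    profiles = rng.normal(size=(len(labels), 500))  # treatments x genes (synthetic log ratios)

    distances = pdist(profiles, metric="correlation")  # 1 - Pearson correlation between treatments
    tree = linkage(distances, method="average")        # average-linkage dendrogram
    clusters = fcluster(tree, t=3, criterion="maxclust")
    print(dict(zip(labels, clusters)))
    ```

    With real 3T3-L1 data, reference and test obesogens would be expected to fall into clusters separate from the non-obesogens, mirroring the separation reported above.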

  4. Toxicogenomics analysis of mouse lung responses following exposure to titanium dioxide nanomaterials reveal their disease potential at high doses

    PubMed Central

    Rahman, Luna; Wu, Dongmei; Johnston, Michael; William, Andrew; Halappanavar, Sabina

    2017-01-01

    Titanium dioxide nanoparticles (TiO2NPs) induce lung inflammation in experimental animals. In this study, we conducted a comprehensive toxicogenomic analysis of lung responses in mice exposed to six individual TiO2NPs exhibiting different sizes (8, 20 and 300 nm), crystalline structures (anatase, rutile or anatase/rutile) and surface modifications (hydrophobic or hydrophilic) to investigate whether the mechanisms leading to TiO2NP-induced lung inflammation are property specific. A detailed histopathological analysis was conducted to investigate the long-term disease implications of acute exposure to TiO2NPs. C57BL/6 mice were exposed to 18, 54, 162 or 486 µg of TiO2NPs/mouse via single intratracheal instillation. Controls were exposed to dispersion medium only. Bronchoalveolar lavage fluid (BALF) and lung tissue were sampled on 1, 28 and 90 days post-exposure. Although all TiO2NPs induced lung inflammation as measured by the neutrophil influx in BALF, rutile-type TiO2NPs induced higher inflammation, with the hydrophilic rutile TiO2NP showing the maximum increase. Accordingly, the rutile TiO2NPs induced a higher number of differentially expressed genes. Histopathological analysis of lung sections on day 90 post-exposure showed increased collagen staining and fibrosis-like changes following exposure to the rutile TiO2NPs at the highest dose tested. Among the anatase particles, the smallest TiO2NP of 8 nm showed the maximum response. The anatase TiO2NP of 300 nm was the least responsive of all. The results suggest that the severity of lung inflammation is property specific; however, the underlying mechanisms (genes and pathways perturbed) leading to inflammation were the same for all particle types. While particle size clearly influenced the overall acute lung responses, a combination of small size, crystalline structure and hydrophilic surface contributed to the long-term pathological effects observed at the highest dose (486 µg/mouse). Although the dose at which the pathological changes were observed is considered physiologically high, the study highlights the disease potential of certain TiO2NPs with specific properties. PMID:27760801

  5. 76 FR 67732 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-02

    ... proposed information collection project: ``Nursing Home Survey on Patient Safety Culture Comparative... Nursing Home Survey on Patient Safety Culture Comparative Database The Agency for Healthcare Research and... Culture (Nursing Home SOPS) Comparative Database. The Nursing Home SOPS Comparative Database consists of...

  6. 78 FR 46338 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-31

    ... Quality's (AHRQ) Hospital Survey on Patient Safety Culture Comparative Database.'' In accordance with the... Safety Culture Comparative Database Request for information collection approval. The Agency for... on Patient Safety Culture (Hospital SOPS) Comparative Database; OMB NO. 0935-0162, last approved on...

  7. Toxicogenomic response of Mycobacterium bovis BCG to peracetic acid and a comparative analysis of the M. bovis BCG response to three oxidative disinfectants.

    PubMed

    Nde, Chantal W; Toghrol, Freshteh; Jang, Hyeung-Jin; Bentley, William E

    2011-04-01

    Tuberculosis is a leading cause of death worldwide and infects thousands of Americans annually. Mycobacterium bovis causes tuberculosis in humans and several animal species. Peracetic acid is an approved tuberculocide in hospital and domestic environments. This study presents for the first time the transcriptomic changes in M. bovis BCG after treatment with 0.1 mM peracetic acid for 10 and 20 min. This study also presents for the first time a comparison among the transcriptomic responses of M. bovis BCG to three oxidative disinfectants: peracetic acid, sodium hypochlorite, and hydrogen peroxide after 10 min of treatment. Results indicate that arginine biosynthesis, virulence, and oxidative stress response genes were upregulated after both peracetic acid treatment times. Three DNA repair genes were downregulated after 10 and 20 min and cell wall component genes were upregulated after 20 min. The devR-devS signal transduction system was upregulated after 10 min, suggesting a role in the protection against peracetic acid treatment. Results also suggest that peracetic acid and sodium hypochlorite both induce the expression of the ctpF gene which is upregulated in hypoxic environments. Further, this study reveals that in M. bovis BCG, hydrogen peroxide and peracetic acid both induce the expression of katG involved in oxidative stress response and the mbtD and mbtI genes involved in iron regulation/virulence.

  8. Comparative Analysis of Toxic Responses of Organic Extracts from Diesel and Selected Alternative Fuels Engine Emissions in Human Lung BEAS-2B Cells.

    PubMed

    Libalova, Helena; Rossner, Pavel; Vrbova, Kristyna; Brzicova, Tana; Sikorova, Jitka; Vojtisek-Lom, Michal; Beranek, Vit; Klema, Jiri; Ciganek, Miroslav; Neca, Jiri; Pencikova, Katerina; Machala, Miroslav; Topinka, Jan

    2016-11-03

    This study used toxicogenomics to identify the complex biological response of human lung BEAS-2B cells treated with organic components of particulate matter in the exhaust of a diesel engine. First, we characterized particles from standard diesel (B0), biodiesel (methylesters of rapeseed oil) in its neat form (B100) and in a 30% by volume blend with diesel fuel (B30), and neat hydrotreated vegetable oil (NEXBTL100). The concentration of polycyclic aromatic hydrocarbons (PAHs) and their derivatives in the organic extracts was lowest for NEXBTL100 and higher for biodiesel. We further analyzed global gene expression changes in BEAS-2B cells following 4 h and 24 h treatment with the extracts. A concentration of 50 µg extract/mL induced similar molecular responses across the extracts. The common processes induced after 4 h treatment included antioxidant defense, metabolism of xenobiotics and lipids, suppression of pro-apoptotic stimuli, and induction of the plasminogen activating cascade; 24 h treatment affected fewer processes, particularly those involved in detoxification of xenobiotics, including PAHs. The majority of distinctively deregulated genes detected after both 4 h and 24 h treatment were induced by NEXBTL100; these deregulated genes included, e.g., those involved in antioxidant defense and cell cycle regulation and proliferation. The B100 extract, with the highest PAH concentrations, additionally affected several cell cycle regulatory genes and p38 signaling.

  9. Evaluation of Database Coverage: A Comparison of Two Methodologies.

    ERIC Educational Resources Information Center

    Tenopir, Carol

    1982-01-01

    Describes experiment which compared two techniques used for evaluating and comparing database coverage of a subject area, e.g., "bibliography" and "subject profile." Differences in time, cost, and results achieved are compared by applying techniques to field of volcanology using two databases, Geological Reference File and GeoArchive. Twenty…

  10. Inter-Annual Variability of the Acoustic Propagation in the Mediterranean Sea Identified from a Synoptic Monthly Gridded Database as Compared with GDEM

    DTIC Science & Technology

    2016-12-01

    Inter-annual variability of acoustic propagation in the Mediterranean Sea is identified from temperature profiles obtained from the synoptic monthly gridded World Ocean Database (SMD-WOD) and compared with the Generalized Digital Environmental Model (GDEM).

  11. Computational Approaches to Chemical Hazard Assessment

    PubMed Central

    Luechtefeld, Thomas; Hartung, Thomas

    2018-01-01

    Summary Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration Evaluation Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769

  12. Toxicogenomics and clinical toxicology: an example of the connection between basic and applied sciences.

    PubMed

    Ferrer-Dufol, Ana; Menao-Guillen, Sebastian

    2009-04-10

    The relationship between basic research and its potential clinical applications is often a difficult subject. Clinical toxicology has always been very dependent on experimental research, whose usefulness has been impaired by the existence of large inter- and intra-species differences in the toxic effects of different substances, which make it difficult to predict clinical effects in humans. The new methods in molecular biology developed in recent decades are furnishing very useful tools to study some of the more relevant molecules implicated in toxicokinetic and toxicodynamic processes. We aim to show some meaningful examples of how recent research developments involving genes and proteins have clear applications for understanding significant clinical matters, such as inter-individual variations in susceptibility to chemicals, and other phenomena related to the way some substances act to induce variations in the expression and functionality of these targets.

  13. Diagnostic performance of traditional hepatobiliary biomarkers of drug-induced liver injury in the rat.

    PubMed

    Ennulat, Daniela; Magid-Slav, Michal; Rehm, Sabine; Tatsuoka, Kay S

    2010-08-01

    Nonclinical studies provide the opportunity to anchor biochemical with morphologic findings; however, liver injury is often complex and heterogeneous, confounding the ability to relate biochemical changes with specific patterns of injury. The aim of the current study was to compare diagnostic performance of hepatobiliary markers for specific manifestations of drug-induced liver injury in rat using data collected in a recent hepatic toxicogenomics initiative in which rats (n = 3205) were given 182 different treatments for 4 or 14 days. Diagnostic accuracy of alanine aminotransferase (ALT), aspartate aminotransferase (AST), total bilirubin (Tbili), serum bile acids (SBA), alkaline phosphatase (ALP), gamma glutamyl transferase (GGT), total cholesterol (Chol), and triglycerides (Trig) was evaluated for specific types of liver histopathology by Receiver Operating Characteristic (ROC) analysis. To assess the relationship between biochemical and morphologic changes in the absence of hepatocellular necrosis, a second ROC analysis was performed on a subset of rats (n = 2504) given treatments (n = 152) that did not cause hepatocellular necrosis. In the initial analysis, ALT, AST, Tbili, and SBA had the greatest diagnostic utility for manifestations of hepatocellular necrosis and biliary injury, with comparable magnitude of area under the ROC curve and serum hepatobiliary marker changes for both. In the absence of hepatocellular necrosis, ALT increases were observed with biochemical or morphologic evidence of cholestasis. In both analyses, diagnostic utility of ALP and GGT for biliary injury was limited; however, ALP had modest diagnostic value for peroxisome proliferation, and ALT, AST, and total Chol had moderate diagnostic utility for phospholipidosis. None of the eight markers evaluated had diagnostic value for manifestations of hypertrophy, cytoplasmic rarefaction, inflammation, or lipidosis.
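
    As a rough illustration of the ROC approach described above (not the study's code), the sketch below computes an area under the ROC curve for a single hypothetical serum marker against a binary histopathology finding; the values and group sizes are invented.

```python
# Illustrative sketch only: ROC analysis of one serum marker against a binary
# histopathology label, in the spirit of the analysis described above.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: ALT fold-change over control for rats without and with
# hepatocellular necrosis on histopathology.
alt_fold_change = np.concatenate([rng.lognormal(0.0, 0.3, 200),   # no necrosis
                                  rng.lognormal(1.0, 0.5, 50)])   # necrosis
necrosis = np.concatenate([np.zeros(200), np.ones(50)])

auc = roc_auc_score(necrosis, alt_fold_change)
print(f"AUC for ALT vs hepatocellular necrosis: {auc:.2f}")
```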

  14. Comparative Toxicogenomic Responses to the Flame Retardant mITP in Developing Zebrafish.

    PubMed

    Haggard, Derik E; Das, Siba R; Tanguay, Robert L

    2017-02-20

    Monosubstituted isopropylated triaryl phosphate (mITP) is a major component of Firemaster 550, an additive flame retardant mixture commonly used in polyurethane foams. Developmental toxicity studies in zebrafish established mITP as the most toxic component of FM 550, which causes pericardial edema and heart looping failure. Mechanistic studies showed that mITP is an aryl hydrocarbon receptor (AhR) ligand; however, the cardiotoxic effects of mITP were independent of the AhR. We performed comparative whole genome transcriptomics in wild-type and ahr2 hu3335 zebrafish, which lack functional ahr2, to identify transcriptional signatures causally involved in the mechanism of mITP-induced cardiotoxicity. Regardless of ahr2 status, mITP exposure resulted in decreased expression of transcripts related to the synthesis of all-trans-retinoic acid and a host of Hox genes. Clustered gene ontology enrichment analysis showed unique enrichment in biological processes related to xenobiotic metabolism and response to external stimuli in wild-type samples. Transcript enrichments overlapping both genotypes involved the retinoid metabolic process and sensory/visual perception biological processes. Examination of the gene-gene interaction network of the differentially expressed transcripts in both genetic backgrounds demonstrated a strong AhR interaction network specific to wild-type samples, with overlapping genes regulated by retinoic acid receptors (RARs). A transcriptome analysis of control ahr2-null zebrafish identified potential cross-talk among AhR, Nrf2, and Hif1α. Collectively, we confirmed that mITP is an AhR ligand and present evidence in support of our hypothesis that mITP's developmental cardiotoxic effects are mediated by inhibition at the RAR level.

  15. Toxicological Responses of Environmental Mixtures: Environmental Metals Mixtures Display Synergistic Induction of Metal-Responsive and Oxidative Stress Genes in Placental Cells

    PubMed Central

    Adebambo, Oluwadamilare A.; Ray, Paul D.; Shea, Damian; Fry, Rebecca C.

    2016-01-01

    Exposure to elevated levels of the toxic metals inorganic arsenic (iAs) and cadmium (Cd) represents a major global health problem. These metals often occur as mixtures in the environment, creating the potential for interactive or synergistic biological effects different from those observed in single exposure conditions. In the present study, environmental mixtures collected from two waste sites in China and comparable mixtures prepared in the laboratory were tested for toxicogenomic response in placental JEG-3 cells. These cells serve as a model for evaluating cellular responses to exposures during pregnancy. One of the mixtures was predominated by iAs and one by Cd. Six gene biomarkers were measured in order to evaluate the effects from the metals mixtures using dose and time-course experiments including: heme oxygenase 1 (HO-1) and metallothionein isoforms (MT1A, MT1F and MT1G) previously shown to be preferentially induced by exposure to either iAs or Cd, and metal transporter genes aquaporin-9 (AQP9) and ATPase, Cu2+ transporting, beta polypeptide (ATP7B). There was a significant increase in the mRNA expression levels of ATP7B, HO-1, MT1A, MT1F, and MT1G in mixture-treated cells compared to the iAs or Cd only-treated cells. Notably, the genomic responses were observed at concentrations significantly lower than levels found at the environmental collection sites. These data demonstrate that metal mixtures increase the expression of gene biomarkers in placental JEG-3 cells in a synergistic manner. Taken together, the data suggest that toxic metals that co-occur may induce detrimental health effects that are currently underestimated when analyzed as single metals. PMID:26472158

  16. Building An Integrated Neurodegenerative Disease Database At An Academic Health Center

    PubMed Central

    Xie, Sharon X.; Baek, Young; Grossman, Murray; Arnold, Steven E.; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M.-Y.; Trojanowski, John Q.

    2010-01-01

Background It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer’s disease (AD), Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS), and frontotemporal lobar degeneration (FTLD). These comparative studies rely on powerful database tools to quickly generate data sets which match diverse and complementary criteria set by the studies. Methods In this paper, we present a novel Integrated NeuroDegenerative Disease (INDD) database developed at the University of Pennsylvania (Penn) through a consortium of Penn investigators. Since these investigators work on AD, PD, ALS and FTLD, this allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform with built-in “backwards” functionality to provide Access as a front-end client to interface with the database. We used PHP Hypertext Preprocessor to create the “front end” web interface and then integrated individual neurodegenerative disease databases using a master lookup table. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Results We compare the results of a biomarker study using the INDD database to those using an alternative approach by querying individual databases separately. Conclusions We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies across several neurodegenerative diseases. PMID:21784346
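
    The master-lookup-table integration described above can be sketched in a few lines; the example below is a conceptual toy using SQLite from Python, with hypothetical table and column names (the actual INDD system runs on Microsoft SQL Server with Access and PHP front ends).

```python
# Conceptual sketch only: integrating disease-specific tables through a master
# lookup table, in the spirit of the INDD design described above. Table and
# column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE master_lookup (global_id INTEGER PRIMARY KEY, ad_id TEXT, pd_id TEXT);
CREATE TABLE ad_clinical   (ad_id TEXT, mmse INTEGER);
CREATE TABLE pd_clinical   (pd_id TEXT, updrs INTEGER);
INSERT INTO master_lookup VALUES (1, 'AD001', 'PD117');
INSERT INTO ad_clinical   VALUES ('AD001', 24);
INSERT INTO pd_clinical   VALUES ('PD117', 31);
""")

# One query spanning two disease databases via the master lookup table.
rows = cur.execute("""
    SELECT m.global_id, a.mmse, p.updrs
    FROM master_lookup m
    LEFT JOIN ad_clinical a ON a.ad_id = m.ad_id
    LEFT JOIN pd_clinical p ON p.pd_id = m.pd_id
""").fetchall()
print(rows)  # [(1, 24, 31)]
```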

  17. Incidence of Appendicitis over Time: A Comparative Analysis of an Administrative Healthcare Database and a Pathology-Proven Appendicitis Registry

    PubMed Central

    Clement, Fiona; Zimmer, Scott; Dixon, Elijah; Ball, Chad G.; Heitman, Steven J.; Swain, Mark; Ghosh, Subrata

    2016-01-01

Importance At the turn of the 21st century, studies evaluating the change in incidence of appendicitis over time have reported inconsistent findings. Objectives We compared the differences in the incidence of appendicitis derived from a pathology registry versus an administrative database in order to validate coding in administrative databases and establish temporal trends in the incidence of appendicitis. Design We conducted a population-based comparative cohort study to identify all individuals with appendicitis from 2000 to 2008. Setting & Participants Two population-based data sources were used to identify cases of appendicitis: 1) a pathology registry (n = 8,822); and 2) a hospital discharge abstract database (n = 10,453). Intervention & Main Outcome The administrative database was compared to the pathology registry for the following a priori analyses: 1) to calculate the positive predictive value (PPV) of administrative codes; 2) to compare the annual incidence of appendicitis; and 3) to assess differences in temporal trends. Temporal trends were assessed using a generalized linear model that assumed a Poisson distribution and reported as an annual percent change (APC) with 95% confidence intervals (CI). Analyses were stratified by perforated and non-perforated appendicitis. Results The administrative database (PPV = 83.0%) overestimated the incidence of appendicitis (100.3 per 100,000) when compared to the pathology registry (84.2 per 100,000). Codes for perforated appendicitis were not reliable (PPV = 52.4%), leading to overestimation of the incidence of perforated appendicitis in the administrative database (34.8 per 100,000) as compared to the pathology registry (19.4 per 100,000). The incidence of appendicitis significantly increased over time in both the administrative database (APC = 2.1%; 95% CI: 1.3, 2.8) and the pathology registry (APC = 4.1%; 95% CI: 3.1, 5.0). Conclusion & Relevance The administrative database overestimated the incidence of appendicitis, particularly for perforated appendicitis. Therefore, studies utilizing administrative data to analyze perforated appendicitis should be interpreted cautiously. PMID:27820826
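
    A minimal sketch of the temporal-trend analysis named above: a Poisson generalized linear model of annual case counts with a person-time offset, from which the annual percent change is exp(beta) - 1. The counts and population below are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch of the Poisson-GLM trend analysis described above.
import numpy as np
import statsmodels.api as sm

years = np.arange(2000, 2009)
population = np.full(years.shape, 1_000_000)                        # hypothetical person-years
cases = np.array([840, 860, 875, 905, 930, 950, 985, 1010, 1040])   # hypothetical counts

X = sm.add_constant(years - years[0])
model = sm.GLM(cases, X, family=sm.families.Poisson(),
               offset=np.log(population)).fit()

beta = model.params[1]
ci_lo, ci_hi = model.conf_int()[1]
print(f"APC = {100*(np.exp(beta)-1):.1f}% "
      f"(95% CI {100*(np.exp(ci_lo)-1):.1f}, {100*(np.exp(ci_hi)-1):.1f})")
```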

  18. Building an integrated neurodegenerative disease database at an academic health center.

    PubMed

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match diverse and complementary criteria set by these studies. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, it allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used the Microsoft SQL server as a platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those using an alternative approach by querying individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  19. Validity of cancer diagnosis in the National Health Insurance database compared with the linked National Cancer Registry in Taiwan.

    PubMed

    Kao, Wei-Heng; Hong, Ji-Hong; See, Lai-Chu; Yu, Huang-Ping; Hsu, Jun-Te; Chou, I-Jun; Chou, Wen-Chi; Chiou, Meng-Jiun; Wang, Chun-Chieh; Kuo, Chang-Fu

    2017-08-16

We aimed to evaluate the validity of cancer diagnosis in the National Health Insurance (NHI) database, which has routinely collected the health information of almost the entire Taiwanese population since 1995, compared with the Taiwan National Cancer Registry (NCR). There were 26,542,445 active participants registered in the NHI database between 2001 and 2012. National Cancer Registry and NHI database records were compared for cancer diagnosis; date of cancer diagnosis; and 1, 2, and 5 year survival. In addition, the 10 leading causes of cancer deaths in Taiwan were analyzed. There were 908,986 cancer diagnoses across the NCR and the NHI database, of which 782,775 (86.1%) appeared in both, 53,192 (5.9%) in the NHI database only, and 73,019 (8.0%) in the NCR only. The positive predictive value of the NHI database cancer diagnoses was 94% for all cancers; the positive predictive value for the 10 specific cancers ranged from 95% (lung cancer) to 82% (cervical cancer). The date of diagnosis in the NHI database was generally delayed by a median of 15 days (interquartile range 8-18) compared with the NCR. The 1, 2, and 5 year survival rates were 71.21%, 60.85%, and 47.44% using the NHI database and were 71.18%, 60.17%, and 46.09% using NCR data. Recording of cancer diagnoses and survival estimates based on these diagnosis codes in the NHI database are generally consistent with the NCR. Studies using NHI database data must pay careful attention to eligibility and record linkage; use of both sources is recommended. Copyright © 2017 John Wiley & Sons, Ltd.
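
    As a quick arithmetic check of the positive predictive value reported above, using only the counts given in the abstract (782,775 diagnoses confirmed in both sources and 53,192 present in the NHI database only):

```python
# Back-of-the-envelope PPV check using the counts from the abstract above.
in_both = 782_775    # NHI diagnoses confirmed in the NCR (true positives)
nhi_only = 53_192    # NHI diagnoses absent from the NCR (false positives)

ppv = in_both / (in_both + nhi_only)
print(f"PPV of NHI cancer diagnoses ≈ {ppv:.1%}")  # ≈ 93.6%, roughly the 94% reported
```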

  20. 76 FR 72929 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-28

Notice of a proposed information collection project: the Agency for Healthcare Research and Quality (AHRQ) Medical Office Survey on Patient Safety Culture (Medical Office SOPS) Comparative Database.

  1. Comparing surgical infections in National Surgical Quality Improvement Project and an Institutional Database.

    PubMed

    Selby, Luke V; Sjoberg, Daniel D; Cassella, Danielle; Sovel, Mindy; Weiser, Martin R; Sepkowitz, Kent; Jones, David R; Strong, Vivian E

    2015-06-15

Surgical quality improvement requires accurate tracking and benchmarking of postoperative adverse events. We track surgical site infections (SSIs) with two systems: our in-house surgical secondary events (SSE) database and the National Surgical Quality Improvement Project (NSQIP). The SSE database, a modification of the Clavien-Dindo classification, categorizes SSIs by their anatomic site, whereas NSQIP categorizes by their level. Our aim was to directly compare these different definitions. NSQIP and the SSE database entries for all surgeries performed in 2011 and 2012 were compared. To match NSQIP definitions, and while blinded to NSQIP results, entries in the SSE database were categorized as either incisional (superficial or deep) or organ space infections. These categorizations were compared with NSQIP records; agreement was assessed with Cohen kappa. The 5028 patients in our cohort had a 6.5% SSI rate in the SSE database and a 4% rate in NSQIP, with an overall agreement of 95% (kappa = 0.48, P < 0.0001). The rates of categorized infections were similarly well matched; incisional rates of 4.1% and 2.7% for the SSE database and NSQIP and organ space rates of 2.6% and 1.5%. Overall agreements were 96% (kappa = 0.36, P < 0.0001) and 98% (kappa = 0.55, P < 0.0001), respectively. Over 80% of cases recorded by the SSE database but not NSQIP did not meet NSQIP criteria. The SSE database is an accurate, real-time record of postoperative SSIs. Institutional databases that capture all surgical cases can be used in conjunction with NSQIP with excellent concordance. Copyright © 2015 Elsevier Inc. All rights reserved.
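
    A minimal sketch of the agreement statistic used above, Cohen's kappa between two per-patient SSI classifications; the label vectors are hypothetical stand-ins, not the study's records.

```python
# Minimal sketch of Cohen's kappa between two SSI classification systems.
from sklearn.metrics import cohen_kappa_score

# 0 = no SSI, 1 = incisional SSI, 2 = organ-space SSI, one entry per patient
sse_db = [0, 0, 1, 2, 0, 1, 0, 0, 2, 0]
nsqip  = [0, 0, 1, 0, 0, 1, 0, 0, 2, 0]

kappa = cohen_kappa_score(sse_db, nsqip)
print(f"Cohen's kappa = {kappa:.2f}")
```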

  2. MicroRNA Responses to the Genotoxic Carcinogens Aflatoxin B1 and Benzo[a]pyrene in Human HepaRG Cells.

    PubMed

    Marrone, April K; Tryndyak, Volodymyr; Beland, Frederick A; Pogribny, Igor P

    2016-02-01

Recent advances in toxicogenomics present an opportunity to develop new in vitro testing methodologies to identify human carcinogens. We have investigated microRNA expression responses to the treatment of human liver HepaRG cells with the human genotoxic carcinogens aflatoxin B1 (AFB1) and benzo[a]pyrene (B[a]P), and the structurally similar compounds aflatoxin B2 (AFB2) and benzo[e]pyrene (B[e]P) that exhibit minimal carcinogenic potential. We demonstrate that treatment of HepaRG cells with AFB1 or B[a]P resulted in specific changes in the expression of miRNAs as compared with their non-carcinogenic analogues, particularly in a marked over-expression of miR-410. An additional novel finding is the dose- and time-dependent inhibition of miR-122 in AFB1-treated HepaRG cells. Mechanistically, the AFB1-induced down-regulation of miR-122 was attributed to inhibition of the HNF4A/miR-122 regulatory pathway. These results demonstrate that HepaRG cells can be used to investigate miRNA responses to xenobiotic exposure, and illustrate the existence of early non-genotoxic events, in addition to well-established genotoxic mode-of-action changes, in the mechanism of AFB1 and B[a]P carcinogenicity. Published by Oxford University Press on behalf of the Society of Toxicology 2015. This work is written by US Government employees and is in the public domain in the US.

  3. Comparative Analysis of Toxic Responses of Organic Extracts from Diesel and Selected Alternative Fuels Engine Emissions in Human Lung BEAS-2B Cells

    PubMed Central

Libalova, Helena; Rossner, Pavel; Vrbova, Kristyna; Brzicova, Tana; Sikorova, Jitka; Vojtisek-Lom, Michal; Beranek, Vit; Klema, Jiri; Ciganek, Miroslav; Neca, Jiri; Pencikova, Katerina; Machala, Miroslav; Topinka, Jan

    2016-01-01

    This study used toxicogenomics to identify the complex biological response of human lung BEAS-2B cells treated with organic components of particulate matter in the exhaust of a diesel engine. First, we characterized particles from standard diesel (B0), biodiesel (methylesters of rapeseed oil) in its neat form (B100) and 30% by volume blend with diesel fuel (B30), and neat hydrotreated vegetable oil (NEXBTL100). The concentration of polycyclic aromatic hydrocarbons (PAHs) and their derivatives in organic extracts was the lowest for NEXBTL100 and higher for biodiesel. We further analyzed global gene expression changes in BEAS-2B cells following 4 h and 24 h treatment with extracts. The concentrations of 50 µg extract/mL induced a similar molecular response. The common processes induced after 4 h treatment included antioxidant defense, metabolism of xenobiotics and lipids, suppression of pro-apoptotic stimuli, or induction of plasminogen activating cascade; 24 h treatment affected fewer processes, particularly those involved in detoxification of xenobiotics, including PAHs. The majority of distinctively deregulated genes detected after both 4 h and 24 h treatment were induced by NEXBTL100; the deregulated genes included, e.g., those involved in antioxidant defense and cell cycle regulation and proliferation. B100 extract, with the highest PAH concentrations, additionally affected several cell cycle regulatory genes and p38 signaling. PMID:27827897

  4. Choosing a Database for Social Work: A Comparison of Social Work Abstracts and Social Service Abstracts

    ERIC Educational Resources Information Center

    Flatley, Robert K.; Lilla, Rick; Widner, Jack

    2007-01-01

    This study compared Social Work Abstracts and Social Services Abstracts databases in terms of indexing, journal coverage, and searches. The authors interviewed editors, analyzed journal coverage, and compared searches. It was determined that the databases complement one another more than compete. The authors conclude with some considerations.

  5. The intelligent database machine

    NASA Technical Reports Server (NTRS)

    Yancey, K. E.

    1985-01-01

The IDM 500 database machine was compared with the Oracle database to determine which would better serve the needs of the MSFC database management system. The performance of the IDM was studied, and the implementations that work best on each database are indicated. The choice is left to the database administrator.

  6. 78 FR 28848 - Information Collection Activities; Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

Notice of proposed information collection activities, in accordance with the Paperwork Reduction Act, concerning the Agency for Healthcare Research and Quality's (AHRQ) Hospital Survey on Patient Safety Culture (Hospital SOPS) Comparative Database.

  7. Subject searching of monographs online in the medical literature.

    PubMed

    Brahmi, F A

    1988-01-01

Searching by subject for monographic information online in the medical literature is a challenging task. The NLM database of choice is CATLINE. Other NLM databases of interest are BIOETHICSLINE, CANCERLIT, HEALTH, POPLINE, and TOXLINE. Ten BRS databases are also discussed. Of these, Books in Print, Bookinfo, and OCLC are explored further. The databases are compared as to the total number of records and the number and percentage of monographs. Three topics were searched on CROSS to compare hits on BBIP, BOOK, and OCLC. The same searches were run on CATLINE. The parameters of time coverage and language were equalized, and the resulting citations were compared and analyzed for duplication and uniqueness. With the input of CATLINE tapes into OCLC, OCLC has become the database of choice for searching by subject for medical monographs.

  8. ODG: Omics database generator - a tool for generating, querying, and analyzing multi-omics comparative databases to facilitate biological understanding.

    PubMed

    Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D

    2017-08-10

Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate it to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that utilize published genomics data integrated with experimental data which can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, and rapidly give additional layers of annotation to predicted genes. In better studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user-interface for configuring the data import and for querying the database. Queries can also be run from the command-line and the database can be queried directly through programming language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database. ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or understudied species. For species for which more data are available, ODG can be used to conduct complex multi-omics, pattern-matching queries.
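
    The sort of multi-omics graph query the abstract describes might look roughly like the toy below; this is not ODG's actual API, just an illustration with networkx and hypothetical node names.

```python
# Rough illustration (not ODG's API) of a multi-omics graph query: genes linked
# to annotations and experimental hits, stored as a graph and queried by traversal.
import networkx as nx

g = nx.Graph()
g.add_edge("GeneA", "GO:0006979", relation="annotated_with")   # response to oxidative stress
g.add_edge("GeneB", "GO:0006979", relation="annotated_with")
g.add_edge("GeneA", "GWAS_hit_chr5_12345", relation="near")

# "Which genes near an association-study hit share a GO term with GeneB?"
hits = set(g.neighbors("GWAS_hit_chr5_12345"))
shared = [gene for gene in hits
          if set(g.neighbors(gene)) & set(g.neighbors("GeneB"))]
print(shared)  # ['GeneA']
```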

  9. 77 FR 5023 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-01

Notice of a proposed information collection project: the AHRQ Medical Office Survey on Patient Safety Culture (Medical Office SOPS) Comparative Database.

  10. Toxicogenomic analysis of the particle dose- and size-response relationship of silica particles-induced toxicity in mice

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoyan; Jin, Tingting; Jin, Yachao; Wu, Leihong; Hu, Bin; Tian, Yu; Fan, Xiaohui

    2013-01-01

This study investigated the relationship between particle size and toxicity of silica particles (SP) with diameters of 30, 70, and 300 nm, which is essential to the safe design and application of SP. Data obtained from histopathological examinations suggested that SP of these sizes can all induce acute inflammation in the liver. In vivo imaging showed that intravenously administered SP are mainly present in the liver, spleen and intestinal tract. Interestingly, in gene expression analysis, the cellular response pathways activated in the liver are predominantly conserved independently of particle dose when the same size SP are administered, or are conserved independently of particle size, surface area and particle number when nano- or submicro-sized SP are administered at their toxic doses. Meanwhile, integrated analysis of transcriptomics, previous metabonomics and conventional toxicological results supports the view that SP can induce inflammation and oxidative stress, generate mitochondrial dysfunction, and eventually cause hepatocyte necrosis through neutrophil-mediated liver injury.

  11. The toxicogenomic multiverse: convergent recruitment of proteins into animal venoms.

    PubMed

    Fry, Bryan G; Roelants, Kim; Champagne, Donald E; Scheib, Holger; Tyndall, Joel D A; King, Glenn F; Nevalainen, Timo J; Norman, Janette A; Lewis, Richard J; Norton, Raymond S; Renjifo, Camila; de la Vega, Ricardo C Rodríguez

    2009-01-01

    Throughout evolution, numerous proteins have been convergently recruited into the venoms of various animals, including centipedes, cephalopods, cone snails, fish, insects (several independent venom systems), platypus, scorpions, shrews, spiders, toxicoferan reptiles (lizards and snakes), and sea anemones. The protein scaffolds utilized convergently have included AVIT/colipase/prokineticin, CAP, chitinase, cystatin, defensins, hyaluronidase, Kunitz, lectin, lipocalin, natriuretic peptide, peptidase S1, phospholipase A(2), sphingomyelinase D, and SPRY. Many of these same venom protein types have also been convergently recruited for use in the hematophagous gland secretions of invertebrates (e.g., fleas, leeches, kissing bugs, mosquitoes, and ticks) and vertebrates (e.g., vampire bats). Here, we discuss a number of overarching structural, functional, and evolutionary generalities of the protein families from which these toxins have been frequently recruited and propose a revised and expanded working definition for venom. Given the large number of striking similarities between the protein compositions of conventional venoms and hematophagous secretions, we argue that the latter should also fall under the same definition.

  12. Transcriptomic resources for environmental risk assessment: a case study in the Venice lagoon.

    PubMed

    Milan, M; Pauletto, M; Boffo, L; Carrer, C; Sorrentino, F; Ferrari, G; Pavan, L; Patarnello, T; Bargelloni, L

    2015-02-01

The development of new resources to evaluate environmental status is becoming increasingly important, representing a key challenge for ocean and coastal management. Recently, the employment of transcriptomics in aquatic toxicology has led to increasing initiatives proposing to integrate eco-toxicogenomics in the evaluation of marine ecosystem health. However, several technical issues need to be addressed before introducing genomics as a reliable tool in regulatory ecotoxicology. The Venice lagoon constitutes an excellent case, in which the assessment of environmental risks derived from the nearby industrial activities represents a crucial task. In this context, the potential role of genomics in assisting environmental monitoring was investigated through the definition of reliable gene expression markers associated with chemical contamination in Manila clams, and their subsequent employment for the classification of Venice lagoon areas. Overall, the present study addresses key issues in evaluating the future outlook of genomics in environmental monitoring and risk assessment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. DNA microarray technology in nutraceutical and food safety.

    PubMed

    Liu-Stratton, Yiwen; Roy, Sashwati; Sen, Chandan K

    2004-04-15

    The quality and quantity of diet is a key determinant of health and disease. Molecular diagnostics may play a key role in food safety related to genetically modified foods, food-borne pathogens and novel nutraceuticals. Functional outcomes in biology are determined, for the most part, by net balance between sets of genes related to the specific outcome in question. The DNA microarray technology offers a new dimension of strength in molecular diagnostics by permitting the simultaneous analysis of large sets of genes. Automation of assay and novel bioinformatics tools make DNA microarrays a robust technology for diagnostics. Since its development a few years ago, this technology has been used for the applications of toxicogenomics, pharmacogenomics, cell biology, and clinical investigations addressing the prevention and intervention of diseases. Optimization of this technology to specifically address food safety is a vast resource that remains to be mined. Efforts to develop diagnostic custom arrays and simplified bioinformatics tools for field use are warranted.

  14. Construction of a robust microarray from a non-model species (largemouth bass) using pyrosequencing technology

    PubMed Central

    Garcia-Reyero, Natàlia; Griffitt, Robert J.; Liu, Li; Kroll, Kevin J.; Farmerie, William G.; Barber, David S.; Denslow, Nancy D.

    2009-01-01

A novel custom microarray for largemouth bass (Micropterus salmoides) was designed with sequences obtained from a normalized cDNA library using the 454 Life Sciences GS-20 pyrosequencer. This approach yielded in excess of 58 million bases of high-quality sequence. The sequence information was combined with 2,616 reads obtained by traditional suppressive subtractive hybridizations to derive a total of 31,391 unique sequences. Annotation and coding sequences were predicted for these transcripts where possible. 16,350 annotated transcripts were selected as target sequences for the design of the custom largemouth bass oligonucleotide microarray. The microarray was validated by examining the transcriptomic response in male largemouth bass exposed to 17β-estradiol. Transcriptomic responses were assessed in liver and gonad, and indicated gene expression profiles typical of exposure to estradiol. The results demonstrate the potential to rapidly create the tools necessary to assess large scale transcriptional responses in non-model species, paving the way for expanded impact of toxicogenomics in ecotoxicology. PMID:19936325

  15. A transcriptomics data-driven gene space accurately predicts liver cytopathology and drug-induced liver injury

    PubMed Central

    Kohonen, Pekka; Parkkinen, Juuso A.; Willighagen, Egon L.; Ceder, Rebecca; Wennerberg, Krister; Kaski, Samuel; Grafström, Roland C.

    2017-01-01

Predicting unanticipated harmful effects of chemicals and drug molecules is a difficult and costly task. Here we utilize a ‘big data compacting and data fusion’ concept to capture diverse adverse outcomes on cellular and organismal levels. The approach generates, from a transcriptomics data set, a ‘predictive toxicogenomics space’ (PTGS) tool composed of 1,331 genes distributed over 14 overlapping cytotoxicity-related gene space components. Involving ∼2.5 × 10⁸ data points and 1,300 compounds to construct and validate the PTGS, the tool serves to: explain dose-dependent cytotoxicity effects, provide a virtual cytotoxicity probability estimate intrinsic to omics data, predict chemically-induced pathological states in liver resulting from repeated dosing of rats, and furthermore, predict human drug-induced liver injury (DILI) from hepatocyte experiments. Analysing 68 DILI-annotated drugs, the PTGS tool outperforms and complements existing tests, leading to a hereto-unseen level of DILI prediction accuracy. PMID:28671182

  16. Identification of consensus biomarkers for predicting non-genotoxic hepatocarcinogens

    PubMed Central

    Huang, Shan-Han; Tung, Chun-Wei

    2017-01-01

The assessment of non-genotoxic hepatocarcinogens (NGHCs) currently relies on two-year rodent bioassays. Toxicogenomics biomarkers provide a potential alternative method for the prioritization of NGHCs that could be useful for risk assessment. However, previous studies using inconsistently classified chemicals as the training set and a single microarray dataset found no consensus biomarkers. In this study, four consensus biomarkers (A2m, Ca3, Cxcl1, and Cyp8b1) were identified from four large-scale microarray datasets of the one-day single maximum tolerated dose and a large set of chemicals without inconsistent classifications. Machine learning techniques were subsequently applied to develop prediction models for NGHCs. The final bagging decision tree models were constructed with an average AUC performance of 0.803 for an independent test. A set of 16 chemicals with controversial classifications were reclassified according to the consensus biomarkers. The developed prediction models and identified consensus biomarkers are expected to be potential alternative methods for prioritization of NGHCs for further experimental validation. PMID:28117354
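
    A hedged sketch of the modelling strategy named above, bagged decision trees evaluated by cross-validated AUC; the biomarker expression matrix and NGHC labels are synthetic stand-ins for the study's data.

```python
# Illustrative sketch: bagged decision trees scored by cross-validated AUC.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))          # synthetic expression of 4 biomarkers
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=120) > 0).astype(int)

clf = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                        n_estimators=50, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC = {auc.mean():.2f}")
```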

  17. A Comparative Analysis Among the SRS M&M, NIS, and KID Databases for the Adolescent Idiopathic Scoliosis.

    PubMed

    Lee, Nathan J; Guzman, Javier Z; Kim, Jun; Skovrlj, Branko; Martin, Christopher T; Pugely, Andrew J; Gao, Yubo; Caridi, John M; Mendoza-Lattes, Sergio; Cho, Samuel K

    2016-11-01

    Retrospective cohort analysis. A growing number of publications have utilized the Scoliosis Research Society (SRS) Morbidity and Mortality (M&M) database, but none have compared it to other large databases. The objective of this study was to compare SRS complications with those in administrative databases. The Nationwide Inpatient Sample (NIS) and Kid's Inpatient Database (KID) captured a greater number of overall complications while the SRS M&M data provided a greater incidence of spine-related complications following adolescent idiopathic scoliosis (AIS) surgery. Chi-square was used to obtain statistical significance, with p < .05 considered significant. The SRS 2004-2007 (9,904 patients), NIS 2004-2007 (20,441 patients) and KID 2003-2006 (10,184 patients) databases were analyzed for AIS patients who underwent fusion. Comparable variables were queried in all three databases, including patient demographics, surgical variables, and complications. Patients undergoing AIS in the SRS database were slightly older (SRS 14.4 years vs. NIS 13.8 years, p < .0001; KID 13.9 years, p < .0001) and less likely to be male (SRS 18.5% vs. NIS 26.3%, p < .0001; KID 24.8%, p < .0001). Revision surgery (SRS 3.3% vs. NIS 2.4%, p < .0001; KID 0.9%, p < .0001) and osteotomy (SRS 8% vs. NIS 2.3%, p < .0001; KID 2.4%, p < .0001) were more commonly reported in the SRS database. The SRS database reported fewer overall complications (SRS 3.9% vs. NIS 7.3%, p < .0001; KID 6.6%, p < .0001). However, when respiratory complications (SRS 0.5% vs. NIS 3.7%, p < .0001; KID 4.4%, p < .0001) were excluded, medical complication rates were similar across databases. In contrast, SRS reported higher spine-specific complication rates. Mortality rates were similar between SRS versus NIS (p = .280) and SRS versus KID (p = .08) databases. There are similarities and differences between the three databases. These discrepancies are likely due to the varying data-gathering methods each organization uses to collect their morbidity data. Level IV. Copyright © 2016 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.
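
    One of the comparisons above can be re-computed approximately from the reported figures (overall complication rates of 3.9% in SRS, n = 9,904, versus 7.3% in NIS, n = 20,441) with a chi-square test on the implied 2x2 table; the counts are rounded reconstructions, not the original tallies.

```python
# Approximate re-computation of one database comparison via a chi-square test.
from scipy.stats import chi2_contingency

srs_total, nis_total = 9_904, 20_441
srs_compl = round(0.039 * srs_total)    # ≈ 386 complications
nis_compl = round(0.073 * nis_total)    # ≈ 1492 complications

table = [[srs_compl, srs_total - srs_compl],
         [nis_compl, nis_total - nis_compl]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```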

  18. Does an Otolaryngology-Specific Database Have Added Value? A Comparative Feasibility Analysis.

    PubMed

    Bellmunt, Angela M; Roberts, Rhonda; Lee, Walter T; Schulz, Kris; Pynnonen, Melissa A; Crowson, Matthew G; Witsell, David; Parham, Kourosh; Langman, Alan; Vambutas, Andrea; Ryan, Sheila E; Shin, Jennifer J

    2016-07-01

    There are multiple nationally representative databases that support epidemiologic and outcomes research, and it is unknown whether an otolaryngology-specific resource would prove indispensable or superfluous. Therefore, our objective was to determine the feasibility of analyses in the National Ambulatory Medical Care Survey (NAMCS) and National Hospital Ambulatory Medical Care Survey (NHAMCS) databases as compared with the otolaryngology-specific Creating Healthcare Excellence through Education and Research (CHEER) database. Parallel analyses in 2 data sets. Ambulatory visits in the United States. To test a fixed hypothesis that could be directly compared between data sets, we focused on a condition with expected prevalence high enough to substantiate availability in both. This query also encompassed a broad span of diagnoses to sample the breadth of available information. Specifically, we compared an assessment of suspected risk factors for sensorineural hearing loss in subjects 0 to 21 years of age, according to a predetermined protocol. We also assessed the feasibility of 6 additional diagnostic queries among all age groups. In the NAMCS/NHAMCS data set, the number of measured observations was not sufficient to support reliable numeric conclusions (percentage standard error among risk factors: 38.6-92.1). Analysis of the CHEER database demonstrated that age, sex, meningitis, and cytomegalovirus were statistically significant factors associated with pediatric sensorineural hearing loss (P < .01). Among the 6 additional diagnostic queries assessed, NAMCS/NHAMCS usage was also infeasible; the CHEER database contained 1585 to 212,521 more observations per annum. An otolaryngology-specific database has added utility when compared with already available national ambulatory databases. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  19. 77 FR 4038 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

Notice of a proposed information collection project: the Agency for Healthcare Research and Quality (AHRQ) Nursing Home Survey on Patient Safety Culture (Nursing Home SOPS) Comparative Database.

  20. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  1. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  2. MIPS PlantsDB: a database framework for comparative plant genome research.

    PubMed

    Nussbaumer, Thomas; Martis, Mihaela M; Roessner, Stephan K; Pfeifer, Matthias; Bader, Kai C; Sharma, Sapna; Gundlach, Heidrun; Spannagl, Manuel

    2013-01-01

    The rapidly increasing amount of plant genome (sequence) data enables powerful comparative analyses and integrative approaches and also requires structured and comprehensive information resources. Databases are needed for both model and crop plant organisms and both intuitive search/browse views and comparative genomics tools should communicate the data to researchers and help them interpret it. MIPS PlantsDB (http://mips.helmholtz-muenchen.de/plant/genomes.jsp) was initially described in NAR in 2007 [Spannagl,M., Noubibou,O., Haase,D., Yang,L., Gundlach,H., Hindemitt, T., Klee,K., Haberer,G., Schoof,H. and Mayer,K.F. (2007) MIPSPlantsDB-plant database resource for integrative and comparative plant genome research. Nucleic Acids Res., 35, D834-D840] and was set up from the start to provide data and information resources for individual plant species as well as a framework for integrative and comparative plant genome research. PlantsDB comprises database instances for tomato, Medicago, Arabidopsis, Brachypodium, Sorghum, maize, rice, barley and wheat. Building up on that, state-of-the-art comparative genomics tools such as CrowsNest are integrated to visualize and investigate syntenic relationships between monocot genomes. Results from novel genome analysis strategies targeting the complex and repetitive genomes of triticeae species (wheat and barley) are provided and cross-linked with model species. The MIPS Repeat Element Database (mips-REdat) and Catalog (mips-REcat) as well as tight connections to other databases, e.g. via web services, are further important components of PlantsDB.

  3. MIPS PlantsDB: a database framework for comparative plant genome research

    PubMed Central

    Nussbaumer, Thomas; Martis, Mihaela M.; Roessner, Stephan K.; Pfeifer, Matthias; Bader, Kai C.; Sharma, Sapna; Gundlach, Heidrun; Spannagl, Manuel

    2013-01-01

    The rapidly increasing amount of plant genome (sequence) data enables powerful comparative analyses and integrative approaches and also requires structured and comprehensive information resources. Databases are needed for both model and crop plant organisms and both intuitive search/browse views and comparative genomics tools should communicate the data to researchers and help them interpret it. MIPS PlantsDB (http://mips.helmholtz-muenchen.de/plant/genomes.jsp) was initially described in NAR in 2007 [Spannagl,M., Noubibou,O., Haase,D., Yang,L., Gundlach,H., Hindemitt, T., Klee,K., Haberer,G., Schoof,H. and Mayer,K.F. (2007) MIPSPlantsDB–plant database resource for integrative and comparative plant genome research. Nucleic Acids Res., 35, D834–D840] and was set up from the start to provide data and information resources for individual plant species as well as a framework for integrative and comparative plant genome research. PlantsDB comprises database instances for tomato, Medicago, Arabidopsis, Brachypodium, Sorghum, maize, rice, barley and wheat. Building up on that, state-of-the-art comparative genomics tools such as CrowsNest are integrated to visualize and investigate syntenic relationships between monocot genomes. Results from novel genome analysis strategies targeting the complex and repetitive genomes of triticeae species (wheat and barley) are provided and cross-linked with model species. The MIPS Repeat Element Database (mips-REdat) and Catalog (mips-REcat) as well as tight connections to other databases, e.g. via web services, are further important components of PlantsDB. PMID:23203886

  4. Correlation between Self-Citation and Impact Factor in Iranian English Medical Journals in WoS and ISC: A Comparative Approach.

    PubMed

    Ghazi Mirsaeid, Seyed Javad; Motamedi, Nadia; Ramezan Ghorbani, Nahid

    2015-09-01

In this study, the impact of self-citation (journal and author) on the impact factor of Iranian English medical journals in two international citation databases, Web of Science (WoS) and the Islamic World Science Citation Center (ISC), was compared by citation analysis. Twelve journals in WoS and 26 journals in ISC indexed between 2006 and 2009 were selected and compared. For comparison of self-citation rates in the two databases, we used the Wilcoxon and Mann-Whitney tests. We used the Pearson test for the correlation of self-citation and impact factor in WoS, and Spearman's correlation coefficient for the ISC database. Analysis of covariance was used to compare the two correlation tests, and the significance level was set at 0.05 for all tests. There was no significant difference between self-citation rates in the two databases (P>0.05). Findings also showed no significant difference between the correlation of journal self-citation and impact factor in the two databases (P=0.526); however, there was a significant difference for author self-citation and impact factor in these databases (P<0.001). The impact of author self-citation on the impact factor was higher in WoS than in ISC.
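
    The statistical tests named above can be sketched as follows; the self-citation rates and impact factors are made-up values used only to show the calls (Mann-Whitney for the rate comparison, Pearson for WoS, Spearman for ISC).

```python
# Illustrative sketch of the statistical comparisons named above, with made-up data.
import numpy as np
from scipy.stats import mannwhitneyu, pearsonr, spearmanr

rng = np.random.default_rng(2)
self_cite_wos = rng.uniform(0, 0.3, 12)   # hypothetical rates, 12 WoS journals
self_cite_isc = rng.uniform(0, 0.3, 26)   # hypothetical rates, 26 ISC journals
print(mannwhitneyu(self_cite_wos, self_cite_isc))   # compare self-citation rates

if_wos = 0.5 + 2.0 * self_cite_wos + rng.normal(scale=0.2, size=12)
if_isc = 0.4 + 1.5 * self_cite_isc + rng.normal(scale=0.2, size=26)
print(pearsonr(self_cite_wos, if_wos))    # parametric correlation, as used for WoS
print(spearmanr(self_cite_isc, if_isc))   # rank-based correlation, as used for ISC
```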

  5. Toward unification of taxonomy databases in a distributed computer environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified in a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results, and investigating future research directions from existent research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existent taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existent taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.
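
    A toy sketch of the mismatch-detection idea described above: compare the lineage recorded for the same taxon in two data banks and flag disagreements for reconciliation. The lineages are illustrative, not drawn from the actual data banks.

```python
# Toy mismatch detection between two taxonomy data banks (illustrative data).
bank_a = {"Homo sapiens": ("Eukaryota", "Chordata", "Mammalia", "Primates"),
          "Mus musculus": ("Eukaryota", "Chordata", "Mammalia", "Rodentia")}
bank_b = {"Homo sapiens": ("Eukaryota", "Chordata", "Mammalia", "Primates"),
          "Mus musculus": ("Eukaryota", "Chordata", "Mammalia", "Glires")}

# Flag taxa whose recorded lineages disagree between the two data banks.
mismatches = {taxon: (bank_a[taxon], bank_b[taxon])
              for taxon in bank_a.keys() & bank_b.keys()
              if bank_a[taxon] != bank_b[taxon]}
print(mismatches)   # flags Mus musculus for manual reconciliation
```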

  6. Digital Dental X-ray Database for Caries Screening

    NASA Astrophysics Data System (ADS)

    Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila

    2016-06-01

A standard database is an essential requirement for comparing the performance of image analysis techniques; hence, a main obstacle in dental image analysis has been the lack of an available image database, which this paper provides. Periapical dental X-ray images that are suitable for analysis and approved by many dental experts were collected. This type of dental radiography is common and inexpensive and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 periapical X-ray images covering the upper and lower jaws. This digital dental database is constructed to provide a source that researchers can use to compare image analysis techniques and to improve or refine the performance of each technique.

  7. CyanoClust: comparative genome resources of cyanobacteria and plastids.

    PubMed

    Sasaki, Naobumi V; Sato, Naoki

    2010-01-01

    Cyanobacteria, which perform oxygen-evolving photosynthesis as do chloroplasts of plants and algae, are one of the best-studied prokaryotic phyla and one from which many representative genomes have been sequenced. Lack of a suitable comparative genomic database has been a problem in cyanobacterial genomics because many proteins involved in physiological functions such as photosynthesis and nitrogen fixation are not catalogued in commonly used databases, such as Clusters of Orthologous Proteins (COG). CyanoClust is a database of homolog groups in cyanobacteria and plastids that are produced by the program Gclust. We have developed a web-server system for the protein homology database featuring cyanobacteria and plastids. Database URL: http://cyanoclust.c.u-tokyo.ac.jp/.

  8. Content based information retrieval in forensic image databases.

    PubMed

    Geradts, Zeno; Bijhold, Jurrien

    2002-03-01

This paper gives an overview of the various available image databases and of ways of searching these databases by image content. Developments among research groups working on image database searching are evaluated and compared with the forensic databases that exist. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drug tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the MPEG-7 framework, in which searching in image databases is standardized. In the future, combining these databases (including DNA databases) can result in stronger forensic evidence.

  9. 75 FR 3908 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-25

Notice of a proposed information collection project: the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Health Plan Survey Comparative Database. In accordance with the Paperwork Reduction Act, 44 U.S.C. 3501-3520, AHRQ invites public comment on this database, which is used with the Centers for Medicare & Medicaid Services (CMS) to provide comparative data to support public reporting.

  10. 75 FR 16134 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-31

Notice of a proposed information collection project for a survey comparative database. In accordance with the Paperwork Reduction Act, 44 U.S.C. 3501-3520, the Agency for Healthcare Research and Quality (AHRQ) requests approval from the Office of Management and Budget; the database supplies comparative data to purchasers and the Centers for Medicare & Medicaid Services (CMS).

  11. 78 FR 69088 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-18

Request for information collection approval for a comparative database. In accordance with the Paperwork Reduction Act, 44 U.S.C. 3501-3521, the Agency for Healthcare Research and Quality (AHRQ) invites public comment; the database supplies comparative data to purchasers and the Centers for Medicare & Medicaid Services (CMS).

  12. 78 FR 49518 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

Request for information collection approval: the Consumer Assessment of Healthcare Providers and Systems (CAHPS) Health Plan Survey Comparative Database. In accordance with the Paperwork Reduction Act, 44 U.S.C. 3501-3521, AHRQ invites public comment on the database, which is used with the Centers for Medicare & Medicaid Services (CMS) to provide comparative data supporting public reporting of health plan information.

  13. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

The objective of this research is to compare the relational and non-relational (NoSQL) database systems approaches in order to store, recover, query and persist standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes have been created in order to evaluate and compare the response times (algorithmic complexity) of six queries of increasing complexity, which have been performed on them. Similar appropriate results available in the literature have also been considered. Relational and non-relational NoSQL database systems show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks more appropriate to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
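
    The comparison methodology described above (timing the same query at several database sizes and comparing the fitted linear slopes of the response-time curves) can be sketched generically as below; the query functions are placeholders, not the databases used in the study.

```python
# Generic sketch: time a query at several database sizes and compare the
# fitted linear slopes of the response-time curves for two backends.
import time
import numpy as np

def run_query(backend, size):
    # Placeholder standing in for a real SQL or NoSQL query at a given DB size.
    time.sleep(size * (1e-4 if backend == "relational" else 2e-5))

sizes = np.array([100, 200, 400, 800])
slopes = {}
for backend in ("relational", "document_nosql"):
    times = []
    for n in sizes:
        t0 = time.perf_counter()
        run_query(backend, n)
        times.append(time.perf_counter() - t0)
    slopes[backend] = np.polyfit(sizes, times, 1)[0]  # seconds per record

print(slopes)  # both roughly linear, but with very different slopes
```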

  14. Clinical decision support tools: performance of personal digital assistant versus online drug information databases.

    PubMed

    Clauson, Kevin A; Polen, Hyla H; Marsh, Wallace A

    2007-12-01

    To evaluate personal digital assistant (PDA) drug information databases used to support clinical decision-making, and to compare the performance of PDA databases with their online versions. Prospective evaluation with descriptive analysis. Five drug information databases available for PDAs and online were evaluated according to their scope (inclusion of correct answers), completeness (on a 3-point scale), and ease of use; 158 question-answer pairs across 15 weighted categories of drug information essential to health care professionals were used to evaluate these databases. An overall composite score integrating these three measures was then calculated. Scores for the PDA databases and for each PDA-online pair were compared. Among the PDA databases, composite rankings, from highest to lowest, were as follows: Lexi-Drugs, Clinical Pharmacology OnHand, Epocrates Rx Pro, mobileMicromedex (now called Thomson Clinical Xpert), and Epocrates Rx free version. When we compared database pairs, online databases that had greater scope than their PDA counterparts were Clinical Pharmacology (137 vs 100 answers, p<0.001), Micromedex (132 vs 96 answers, p<0.001), Lexi-Comp Online (131 vs 119 answers, p<0.001), and Epocrates Online Premium (103 vs 98 answers, p=0.001). Only Micromedex online was more complete than its PDA version (p=0.008). Regarding ease of use, the Lexi-Drugs PDA database was superior to Lexi-Comp Online (p<0.001); however, Epocrates Online Premium, Epocrates Online Free, and Micromedex online were easier to use than their PDA counterparts (p<0.001). In terms of composite scores, only the online versions of Clinical Pharmacology and Micromedex demonstrated superiority over their PDA versions (p>0.01). Online and PDA drug information databases assist practitioners in improving their clinical decision-making. Lexi-Drugs performed significantly better than all of the other PDA databases evaluated. No PDA database demonstrated superiority to its online counterpart; however, the online versions of Clinical Pharmacology and Micromedex were superior to their PDA versions in answering questions.
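
    A hedged sketch of how a weighted composite across scope, completeness, and ease of use might be assembled; the weights, category scores, and database names below are invented, not those used in the evaluation.

```python
# Illustrative weighted composite across three evaluation dimensions.
weights = {"scope": 0.5, "completeness": 0.3, "ease_of_use": 0.2}   # hypothetical weights

databases = {
    "Database A (PDA)":    {"scope": 0.75, "completeness": 0.80, "ease_of_use": 0.90},
    "Database A (online)": {"scope": 0.87, "completeness": 0.82, "ease_of_use": 0.70},
}

for name, scores in databases.items():
    composite = sum(weights[k] * scores[k] for k in weights)
    print(f"{name}: composite = {composite:.2f}")
```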

  15. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases.

    PubMed

    Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob

    2015-01-01

    Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more databases, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general.

  16. Information Literacy Skills: Comparing and Evaluating Databases

    ERIC Educational Resources Information Center

    Grismore, Brian A.

    2012-01-01

    The purpose of this database comparison is to express the importance of teaching information literacy skills and to apply those skills to commonly used Internet-based research tools. This paper includes a comparison and evaluation of three databases (ProQuest, ERIC, and Google Scholar). It includes strengths and weaknesses of each database based…

  17. Comparing Top-Down with Bottom-Up Approaches: Teaching Data Modeling

    ERIC Educational Resources Information Center

    Kung, Hsiang-Jui; Kung, LeeAnn; Gardiner, Adrian

    2013-01-01

    Conceptual database design is a difficult task for novice database designers, such as students, and is also therefore particularly challenging for database educators to teach. In the teaching of database design, two general approaches are frequently emphasized: top-down and bottom-up. In this paper, we present an empirical comparison of students'…

  18. Mammography status using patient self-reports and computerized radiology database.

    PubMed

    Thompson, B; Taylor, V; Goldberg, H; Mullen, M

    1999-10-01

    This study sought to compare self-reported mammography use by low-income women utilizing an inner-city public hospital with a computerized hospital database for tracking mammography use. A survey of all age-eligible women using the hospital's internal medicine clinic was done; responses were matched with the radiology database. We examined concordance between the two data sources. Concordance between self-report and the database was high (82%) when using "ever had a mammogram at the hospital," but low (58%) when comparing self-reported last mammogram with the information contained in the database. Disagreements existed between self-reports and the database. Because the survey included a picture of a woman having a mammogram to ensure that respondents knew exactly what a mammogram entailed, it is possible that women's responses were accurate, raising concerns that the discrepancies lie in the database. Physicians and staff must ensure that they understand the full history of a woman's experience with mammography before recommending for or against the procedure.

  19. Recent updates and developments to plant genome size databases

    PubMed Central

    Garcia, Sònia; Leitch, Ilia J.; Anadon-Rosell, Alba; Canela, Miguel Á.; Gálvez, Francisco; Garnatje, Teresa; Gras, Airy; Hidalgo, Oriane; Johnston, Emmeline; Mas de Xaxars, Gemma; Pellicer, Jaume; Siljak-Yakovlev, Sonja; Vallès, Joan; Vitales, Daniel; Bennett, Michael D.

    2014-01-01

    Two plant genome size databases have been recently updated and/or extended: the Plant DNA C-values database (http://data.kew.org/cvalues), and GSAD, the Genome Size in Asteraceae database (http://www.asteraceaegenomesize.com). While the first provides information on nuclear DNA contents across land plants and some algal groups, the second is focused on one of the largest and most economically important angiosperm families, Asteraceae. Genome size data have numerous applications: they can be used in comparative studies on genome evolution, or as a tool to appraise the cost of whole-genome sequencing programs. The growing interest in genome size and increasing rate of data accumulation has necessitated the continued update of these databases. Currently, the Plant DNA C-values database (Release 6.0, Dec. 2012) contains data for 8510 species, while GSAD has 1219 species (Release 2.0, June 2013), representing increases of 17 and 51%, respectively, in the number of species with genome size data, compared with previous releases. Here we provide overviews of the most recent releases of each database, and outline new features of GSAD. The latter include (i) a tool to visually compare genome size data between species, (ii) the option to export data and (iii) a webpage containing information about flow cytometry protocols. PMID:24288377

  20. Correlation between Self-Citation and Impact Factor in Iranian English Medical Journals in WoS and ISC: A Comparative Approach

    PubMed Central

    GHAZI MIRSAEID, Seyed Javad; MOTAMEDI, Nadia; RAMEZAN GHORBANI, Nahid

    2015-01-01

    Background: In this study, the impact of self-citation (journal and author) on the impact factor of Iranian English-language medical journals in two international citation databases, Web of Science (WoS) and the Islamic World Science Citation Center (ISC), was compared by citation analysis. Methods: Twelve journals in WoS and 26 journals in ISC indexed between 2006 and 2009 were selected and compared. For comparison of self-citation rates in the two databases, we used Wilcoxon and Mann-Whitney tests. We used the Pearson test for the correlation between self-citation and IF in WoS, and Spearman's correlation coefficient for the ISC database. Covariance analysis was used to compare the two correlation tests. The significance level was set at 0.05 for all tests. Results: There was no significant difference between self-citation rates in the two databases (P>0.05). Findings also showed no significant difference between the correlations of journal self-citation and impact factor in the two databases (P=0.526); however, there was a significant difference for author self-citation and impact factor between the databases (P<0.001). Conclusion: The impact of author self-citation on the impact factor was higher in WoS than in ISC. PMID:26587498
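
    As an illustration of the two correlation tests named above, the following sketch runs Pearson and Spearman correlations on hypothetical self-citation and impact-factor values; the numbers are invented for demonstration and are not the study's data.

```python
# Pearson and Spearman correlation on hypothetical journal-level values
# (illustrative only; not the study's data).
from scipy.stats import pearsonr, spearmanr

self_citation_rate = [0.05, 0.12, 0.20, 0.08, 0.15, 0.30]   # hypothetical
impact_factor      = [0.8,  1.1,  1.9,  0.9,  1.4,  2.3]    # hypothetical

r, p_pearson = pearsonr(self_citation_rate, impact_factor)
rho, p_spearman = spearmanr(self_citation_rate, impact_factor)
print(f"Pearson  r   = {r:.2f} (p = {p_pearson:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.3f})")
```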

  1. Discrepancies among Scopus, Web of Science, and PubMed coverage of funding information in medical journal articles.

    PubMed

    Kokol, Peter; Vošner, Helena Blažun

    2018-01-01

    The overall aim of the present study was to compare the coverage of existing research funding information for articles indexed in Scopus, Web of Science, and PubMed databases. The numbers of articles with funding information published in 2015 were identified in the three selected databases and compared using bibliometric analysis of a sample of twenty-eight prestigious medical journals. Frequency analysis of the number of articles with funding information showed statistically significant differences between Scopus, Web of Science, and PubMed databases. The largest proportion of articles with funding information was found in Web of Science (29.0%), followed by PubMed (14.6%) and Scopus (7.7%). The results show that coverage of funding information differs significantly among Scopus, Web of Science, and PubMed databases in a sample of the same medical journals. Moreover, we found that, currently, funding data in PubMed is more difficult to obtain and analyze compared with that in the other two databases.

  2. National Administrative Databases in Adult Spinal Deformity Surgery: A Cautionary Tale.

    PubMed

    Buckland, Aaron J; Poorman, Gregory; Freitag, Robert; Jalai, Cyrus; Klineberg, Eric O; Kelly, Michael; Passias, Peter G

    2017-08-15

    Comparison between national administrative databases and a prospective multicenter physician-managed database. This study aims to assess the applicability of National Administrative Databases (NADs) in adult spinal deformity (ASD). Our hypothesis is that NADs do not include patients comparable to those in a physician-managed database (PMD) for surgical outcomes in adult spinal deformity. NADs such as the National Inpatient Sample (NIS) and the National Surgical Quality Improvement Program (NSQIP) generate large numbers of publications owing to ease of data access and lack of IRB approval requirements. These databases utilize billing codes, not clinical inclusion criteria, and have not been validated against PMDs in ASD surgery. The NIS was searched for years 2002 to 2012 and NSQIP for years 2006 to 2013 using validated spinal deformity diagnostic codes. Procedural codes (ICD-9 and CPT) were then applied to each database. A multicenter PMD including years 2008 to 2015 was used for comparison. Databases were assessed for levels fused, osteotomies, decompressed levels, and invasiveness. Database comparisons for surgical details were made in all patients, and also for patients with ≥ 5 level spinal fusions. In total, 37,368 NIS, 1291 NSQIP, and 737 PMD patients were identified. NADs showed an increased use of deformity billing codes over the study period (NIS doubled, NSQIP increased 68-fold; P < 0.001), but ASD volume remained stable in the PMD. Surgical invasiveness, levels fused, and use of 3-column osteotomy (3-CO) were significantly lower for all patients in the NIS (11.4-13.7) and NSQIP databases (6.4-12.7) compared with the PMD (27.5-32.3). When limited to patients with ≥5 levels, invasiveness, levels fused, and use of 3-CO remained significantly higher in the PMD compared with NADs (P < 0.001). The national databases NIS and NSQIP do not capture the same patient population as is captured in PMDs in ASD. Physicians should remain cautious in interpreting conclusions drawn from these databases. Level of Evidence: 4.

  3. Biomedical Requirements for High Productivity Computing Systems

    DTIC Science & Technology

    2005-04-01

    server at http://www.ncbi.nlm.nih.gov/BLAST/. There are many variants of BLAST, including: 1. BLASTN - Compares a DNA query to a DNA database. Searches ... database (3 reading frames from each strand of the DNA) searching ... 4. TBLASTN - Compares a protein query to a DNA database, in the 6 possible ... the molecular during this phase. After eliminating molecules that could not match the query, an atom-by-atom search for the molecules is conducted
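
    The BLAST variants listed in this record differ only in whether the query and the database are nucleotide, protein, or translated sequences. As an illustration (not part of the report above), the sketch below runs one such comparison through Biopython's NCBI web BLAST interface; it assumes Biopython is installed and network access to NCBI, and the query sequence is a placeholder.

```python
# Illustrative only: run a BLASTN search (DNA query vs DNA database) via NCBI's
# web service using Biopython. Requires Biopython and network access; the query
# sequence below is a placeholder, not data from the report.
from Bio.Blast import NCBIWWW, NCBIXML

VARIANTS = {                      # program -> (query type, database type)
    "blastn":  ("DNA",            "DNA"),
    "blastp":  ("protein",        "protein"),
    "blastx":  ("translated DNA", "protein"),
    "tblastn": ("protein",        "translated DNA"),
    "tblastx": ("translated DNA", "translated DNA"),
}

query_seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"   # placeholder DNA sequence
handle = NCBIWWW.qblast("blastn", "nt", query_seq)       # DNA query vs nt database
record = NCBIXML.read(handle)
for alignment in record.alignments[:3]:
    print(alignment.title)
```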

  4. Improved orthologous databases to ease protozoan targets inference.

    PubMed

    Kotowski, Nelson; Jardim, Rodrigo; Dávila, Alberto M R

    2015-09-29

    Homology inference helps in identifying similarities, as well as differences, among organisms, which provides better insight into how closely related one might be to another. In addition, comparative genomics pipelines are widely adopted tools designed using different bioinformatics applications and algorithms. In this article, we propose a methodology to build improved orthologous databases with the potential to aid in protozoan target identification, one of the many tasks which benefit from comparative genomics tools. Our analyses are based on OrthoSearch, a comparative genomics pipeline originally designed to infer orthologs through protein-profile comparison, supported by an HMM-based, reciprocal-best-hits approach. Our methodology allows OrthoSearch to confront two orthologous databases and to generate an improved new one. This can later be used to infer potential protozoan targets through a similarity analysis against the human genome. The protein sequences of the Cryptosporidium hominis, Entamoeba histolytica and Leishmania infantum genomes were comparatively analyzed against three orthologous databases: (i) EggNOG KOG, (ii) ProtozoaDB and (iii) KEGG Orthology (KO). That allowed us to create two new orthologous databases, "KO + EggNOG KOG" and "KO + EggNOG KOG + ProtozoaDB", with 16,938 and 27,701 orthologous groups, respectively. These new orthologous databases were used for a regular OrthoSearch run. By confronting the "KO + EggNOG KOG" and "KO + EggNOG KOG + ProtozoaDB" databases with the protozoan species, we detected the following totals of orthologous groups and coverage (the ratio between the inferred orthologous groups and the species' total number of proteins): Cryptosporidium hominis: 1,821 (11 %) and 3,254 (12 %); Entamoeba histolytica: 2,245 (13 %) and 5,305 (19 %); Leishmania infantum: 2,702 (16 %) and 4,760 (17 %). Using our HMM-based methodology and the largest created orthologous database, it was possible to infer 13 orthologous groups that represent potential protozoan targets; these were found because of our distant-homology approach. We also provide the numbers of species-specific, pair-to-pair and core groups from these analyses, depicted in Venn diagrams. The orthologous databases generated by our HMM-based methodology provide a broader dataset, with larger numbers of orthologous groups than the original databases used as input. These may be used for several homology inference analyses, annotation tasks and protozoan target identification.
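
    A minimal sketch of the reciprocal-best-hits idea underlying the pipeline described above (not OrthoSearch's actual implementation): a pair of proteins is accepted as orthologous only when each is the other's best hit. The mappings are hypothetical stand-ins for parsed search results.

```python
# Reciprocal best hits (RBH): keep (a, b) only if b is a's best hit in proteome B
# AND a is b's best hit in proteome A. Identifiers below are made up.
def reciprocal_best_hits(best_a_to_b, best_b_to_a):
    return {(a, b) for a, b in best_a_to_b.items() if best_b_to_a.get(b) == a}

best_a_to_b = {"chom_001": "ehis_310", "chom_002": "ehis_420", "chom_003": "ehis_500"}
best_b_to_a = {"ehis_310": "chom_001", "ehis_420": "chom_999", "ehis_500": "chom_003"}

print(reciprocal_best_hits(best_a_to_b, best_b_to_a))
# {('chom_001', 'ehis_310'), ('chom_003', 'ehis_500')}
```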

  5. rCAD: A Novel Database Schema for the Comparative Analysis of RNA.

    PubMed

    Ozer, Stuart; Doshi, Kishore J; Xu, Weijia; Gutell, Robin R

    2011-12-31

    Beyond its direct involvement in protein synthesis with mRNA, tRNA, and rRNA, RNA is now being appreciated for its significance in the overall metabolism and regulation of the cell. Comparative analysis has been very effective in the identification and characterization of RNA molecules, including the accurate prediction of their secondary structure. We are developing an integrative, scalable data management and analysis system, the RNA Comparative Analysis Database (rCAD), implemented with SQL Server to support RNA comparative analysis. The platform-agnostic database schema of rCAD captures the essential relationships between the different dimensions of information for RNA comparative analysis datasets. The rCAD implementation enables a variety of comparative analysis manipulations with multiple integrated data dimensions for advanced RNA comparative analysis workflows. In this paper, we describe details of the rCAD schema design and illustrate its usefulness with two usage scenarios.
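
    The abstract does not reproduce the rCAD schema itself; the sketch below shows, with hypothetical table names, the kind of relational decomposition such a comparative-analysis database might use (sequences, alignments, and per-column alignment cells that can be joined for column-composition queries). SQLite is used only so the sketch runs self-contained; rCAD itself is implemented on SQL Server.

```python
# Hypothetical relational schema in the spirit of a comparative-analysis database
# (NOT the published rCAD schema). SQLite keeps the sketch self-contained.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sequence (
    seq_id   INTEGER PRIMARY KEY,
    taxon    TEXT NOT NULL,
    residues TEXT NOT NULL
);
CREATE TABLE alignment (
    aln_id INTEGER PRIMARY KEY,
    name   TEXT NOT NULL
);
CREATE TABLE alignment_cell (      -- one row per (alignment, sequence, column)
    aln_id  INTEGER REFERENCES alignment(aln_id),
    seq_id  INTEGER REFERENCES sequence(seq_id),
    col     INTEGER NOT NULL,
    residue TEXT NOT NULL,
    PRIMARY KEY (aln_id, seq_id, col)
);
""")

# Example query: residue composition of one alignment column across taxa.
composition = con.execute("""
    SELECT residue, COUNT(*) FROM alignment_cell
    WHERE aln_id = ? AND col = ?
    GROUP BY residue
""", (1, 42)).fetchall()
print(composition)
```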

  6. rCAD: A Novel Database Schema for the Comparative Analysis of RNA

    PubMed Central

    Ozer, Stuart; Doshi, Kishore J.; Xu, Weijia; Gutell, Robin R.

    2013-01-01

    Beyond its direct involvement in protein synthesis with mRNA, tRNA, and rRNA, RNA is now being appreciated for its significance in the overall metabolism and regulation of the cell. Comparative analysis has been very effective in the identification and characterization of RNA molecules, including the accurate prediction of their secondary structure. We are developing an integrative, scalable data management and analysis system, the RNA Comparative Analysis Database (rCAD), implemented with SQL Server to support RNA comparative analysis. The platform-agnostic database schema of rCAD captures the essential relationships between the different dimensions of information for RNA comparative analysis datasets. The rCAD implementation enables a variety of comparative analysis manipulations with multiple integrated data dimensions for advanced RNA comparative analysis workflows. In this paper, we describe details of the rCAD schema design and illustrate its usefulness with two usage scenarios. PMID:24772454

  7. 76 FR 72931 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-28

    ... Systems (CAHPS) Clinician and Group Survey Comparative Database.'' In accordance with the Paperwork... Providers and Systems (CAHPS) Clinician and Group Survey Comparative Database The Agency for Healthcare..., and provided critical data illuminating key aspects of survey design and administration. In July 2007...

  8. For 481 biomedical open access journals, articles are not searchable in the Directory of Open Access Journals nor in conventional biomedical databases

    PubMed Central

    Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob

    2015-01-01

    Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases’ criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ’s coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more databases, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases. However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general. PMID:26038727

  9. Comparison of conventional and cadmium-zinc-telluride single-photon emission computed tomography for analysis of thallium-201 myocardial perfusion imaging: an exploratory study in normal databases for different ethnicities.

    PubMed

    Ishihara, Masaru; Onoguchi, Masahisa; Taniguchi, Yasuyo; Shibutani, Takayuki

    2017-12-01

    The aim of this study was to clarify the differences in thallium-201-chloride (thallium-201) myocardial perfusion imaging (MPI) scans evaluated by conventional Anger-type single-photon emission computed tomography (conventional SPECT) versus cadmium-zinc-telluride SPECT (CZT SPECT) imaging in normal databases for different ethnic groups. MPI scans from 81 consecutive Japanese patients were examined using conventional SPECT and CZT SPECT and analyzed with the pre-installed quantitative perfusion SPECT (QPS) software. We compared the summed stress score (SSS), summed rest score (SRS), and summed difference score (SDS) for the two SPECT devices. For a normal MPI reference, we usually use Japanese databases for MPI created by the Japanese Society of Nuclear Medicine, which can be used with conventional SPECT but not with CZT SPECT. In this study, we used new Japanese normal databases constructed in our institution to compare conventional and CZT SPECT. Compared with conventional SPECT, CZT SPECT showed lower SSS (p < 0.001), SRS (p = 0.001), and SDS (p = 0.189) using the pre-installed SPECT database. In contrast, CZT SPECT showed no significant difference from conventional SPECT in QPS analysis using the normal databases from our institution. Myocardial perfusion analyses by CZT SPECT should be evaluated using normal databases based on the ethnic group being evaluated.

  10. PIGD: a database for intronless genes in the Poaceae.

    PubMed

    Yan, Hanwei; Jiang, Cuiping; Li, Xiaoyu; Sheng, Lei; Dong, Qing; Peng, Xiaojian; Li, Qian; Zhao, Yang; Jiang, Haiyang; Cheng, Beijiu

    2014-10-01

    Intronless genes are a feature of prokaryotes; however, they are widespread and unequally distributed among eukaryotes and represent an important resource to study the evolution of gene architecture. Although many databases on exons and introns exist, there is currently no cohesive resource that collects plant intronless genes into a single database. In this study, we present the Poaceae Intronless Genes Database (PIGD), a user-friendly web interface to explore information on intronless genes from different plants. Five Poaceae species, Sorghum bicolor, Zea mays, Setaria italica, Panicum virgatum and Brachypodium distachyon, are included in the current release of PIGD. Gene annotations and sequence data were collected and integrated from different databases. The primary focus of this study was to provide gene descriptions and gene product records. In addition, functional annotations, subcellular localization prediction and taxonomic distribution are reported. PIGD allows users to readily browse, search and download data. BLAST and comparative analyses are also provided through this online database, which is available at http://pigd.ahau.edu.cn/. PIGD provides a solid platform for the collection, integration and analysis of intronless genes in the Poaceae. As such, this database will be useful for subsequent bio-computational analysis in comparative genomics and evolutionary studies.

  11. Evaluation of a toxicogenomic approach to the local lymph node assay (LLNA).

    PubMed

    Boverhof, Darrell R; Gollapudi, B Bhaskar; Hotchkiss, Jon A; Osterloh-Quiroz, Mandy; Woolhiser, Michael R

    2009-02-01

    Genomic technologies have the potential to enhance and complement existing toxicology endpoints; however, assessment of these approaches requires a systematic evaluation including a robust experimental design with genomic endpoints anchored to traditional toxicology endpoints. The present study was conducted to assess the sensitivity of genomic responses when compared with the traditional local lymph node assay (LLNA) endpoint of lymph node cell proliferation and to evaluate the responses for their ability to provide insights into mode of action. Female BALB/c mice were treated with the sensitizer trimellitic anhydride (TMA), following the standard LLNA dosing regimen, at doses of 0.1, 1, or 10%, and traditional tritiated thymidine ((3)HTdR) incorporation and gene expression responses were monitored in the auricular lymph nodes. Additional mice, dosed with either vehicle or 10% TMA and sacrificed on day 4 or 10, were also included to examine temporal effects on gene expression. Analysis of (3)HTdR incorporation revealed TMA-induced stimulation indices of 2.8, 22.9, and 61.0 relative to vehicle, with an EC(3) of 0.11%. Examination of the dose-response gene expression responses identified 9, 833, and 2122 differentially expressed genes relative to vehicle for the 0.1, 1, and 10% TMA dose groups, respectively. Calculation of EC(3) values for differentially expressed genes did not identify a response that was more sensitive than the (3)HTdR value, although a number of genes displayed comparable sensitivity. Examination of temporal responses revealed 1760, 1870, and 953 differentially expressed genes at the 4-, 6-, and 10-day time points, respectively. Functional analysis revealed that many responses displayed dose- and time-specific induction patterns within the functional categories of cellular proliferation and immune response, including numerous immunoglobulin genes which were highly induced at the day 10 time point. Overall, these experiments have systematically illustrated the potential utility of genomic endpoints to enhance the LLNA and support further exploration of this approach through examination of a more diverse array of chemicals.
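
    The reported EC(3) of 0.11% can be reproduced with the conventional LLNA calculation, linear interpolation between the two doses whose stimulation indices bracket 3, using the dose/stimulation-index pairs given above:

```python
# EC3 by linear interpolation between the doses bracketing a stimulation index of 3,
# the conventional LLNA calculation. Dose/SI pairs are taken from the abstract above.
def ec3(doses_si):
    """doses_si: list of (dose, stimulation_index) pairs, sorted by dose."""
    for (c, d), (a, b) in zip(doses_si, doses_si[1:]):
        if d < 3 <= b:
            return c + (3 - d) / (b - d) * (a - c)
    return None  # stimulation index never crosses 3

tma = [(0.1, 2.8), (1.0, 22.9), (10.0, 61.0)]
print(f"EC3 = {ec3(tma):.2f}%")   # -> EC3 = 0.11%
```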

  12. Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.

    ERIC Educational Resources Information Center

    Pieska, K. A. O.

    1986-01-01

    Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)

  13. Coverage and overlaps in bibliographic databases relevant to forensic medicine: a comparative analysis of MEDLINE.

    PubMed Central

    Yonker, V A; Young, K P; Beecham, S K; Horwitz, S; Cousin, K

    1990-01-01

    This study was designed to make a comparative evaluation of the performance of MEDLINE in covering serial literature. Forensic medicine was chosen because it is an interdisciplinary subject area that would test MEDLARS at the periphery of the system. The evaluation of database coverage was based upon articles included in the bibliographies of scholars in the field of forensic medicine. This method was considered appropriate for characterizing work used by researchers in this field. The results of comparing MEDLINE to other databases evoked some concerns about the selective indexing policy of MEDLINE in serving the interests of those working in forensic medicine. PMID:2403829

  14. Scopus database: a review.

    PubMed

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.

  15. Scopus database: a review

    PubMed Central

    Burnham, Judy F

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs. PMID:16522216

  16. A review of accessibility of administrative healthcare databases in the Asia-Pacific region.

    PubMed

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3-6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted based on requirements by data custodians. Compared with previous research, this study describes the landscape of databases in the selected countries with more granularity using an assessment tool developed for this purpose. A high number of databases were identified but most had restricted access, preventing their potential use to support research. We hope that this study helps to improve the understanding of the AHDB landscape, increase data sharing and database research in Asia-Pacific countries.

  17. The comparative effectiveness of conventional and digital image libraries.

    PubMed

    McColl, R I; Johnson, A

    2001-03-01

    Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.

  18. There Is a Significant Discrepancy Between "Big Data" Database and Original Research Publications on Hip Arthroscopy Outcomes: A Systematic Review.

    PubMed

    Sochacki, Kyle R; Jack, Robert A; Safran, Marc R; Nho, Shane J; Harris, Joshua D

    2018-06-01

    The purpose of this study was to compare (1) major complication, (2) revision, and (3) conversion to arthroplasty rates following hip arthroscopy between database studies and original research peer-reviewed publications. A systematic review was performed using PRISMA guidelines. PubMed, SCOPUS, SportDiscus, and Cochrane Central Register of Controlled Trials were searched for studies that investigated major complication (dislocation, femoral neck fracture, avascular necrosis, fluid extravasation, septic arthritis, death), revision, and hip arthroplasty conversion rates following hip arthroscopy. Major complication, revision, and conversion to hip arthroplasty rates were compared between original research (single- or multicenter therapeutic studies) and database (insurance databases using ICD-9/10 and/or Current Procedural Terminology coding) publishing studies. Two hundred seven studies (201 original research publications [15,780 subjects; 54% female] and 6 database studies [20,825 subjects; 60% female]) were analyzed (mean age, 38.2 ± 11.6 years old; mean follow-up, 2.7 ± 2.9 years). The database studies had a significantly higher age (40.6 ± 2.8 vs 35.4 ± 11.6), body mass index (27.4 ± 5.6 vs 24.9 ± 3.1), percentage of females (60.1% vs 53.8%), and longer follow-up (3.1 ± 1.6 vs 2.7 ± 3.0) compared with original research (P < .0001 for all). Ninety-seven (0.6%) major complications occurred in the individual studies, and 95 (0.8%) major complications occurred in the database studies (P = .029; relative risk [RR], 1.3). There was a significantly higher rate of femoral neck fracture (0.24% vs 0.03%; P < .0001; RR, 8.0) and hip dislocation (0.17% vs 0.06%; P = .023; RR, 2.2) in the database studies. Reoperations occurred at a significantly higher rate in the database studies (11.1% vs 7.3%; P < .001; RR, 1.5). There was a significantly higher rate of conversion to arthroplasty in the database studies (8.0% vs 3.7%; P < .001; RR, 2.2). Database studies report significantly increased major complication, revision, and conversion to hip arthroplasty rates compared with original research investigations of hip arthroscopy outcomes. Level IV, systematic review of Level I-IV studies. Copyright © 2018 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  19. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.

    PubMed

    Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.
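
    As a sketch of what a linear data-based input-output model can look like in practice, the snippet below estimates a finite-impulse-response kernel from synthetic pressure/flow-like signals by least squares. It illustrates the general approach only; it is not the authors' compartmental or kernel-estimation machinery, and all signals are simulated.

```python
# Illustrative only: estimate a linear FIR kernel h so that y[t] ~ sum_k h[k] x[t-k],
# the simplest form of a data-based input-output model. All signals are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, memory = 2000, 20
x = rng.standard_normal(n)                                 # stand-in "input" signal
true_h = np.exp(-np.arange(memory) / 5.0)                  # hypothetical impulse response
y = np.convolve(x, true_h)[:n] + 0.05 * rng.standard_normal(n)   # stand-in "output" signal

# Lagged-input regression matrix: row t holds x[t], x[t-1], ..., x[t-memory+1].
X = np.column_stack([x[memory - 1 - k : n - k] for k in range(memory)])
h_hat, *_ = np.linalg.lstsq(X, y[memory - 1:], rcond=None)
print("estimated:", np.round(h_hat[:5], 2), " true:", np.round(true_h[:5], 2))
```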

  20. Coverage and quality: A comparison of Web of Science and Scopus databases for reporting faculty nursing publication metrics.

    PubMed

    Powell, Kimberly R; Peterson, Shenita R

    Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in journal coverage and quality. This study presents a comparative analysis of the reported impact of nursing publications. Journal coverage by each database for the field of nursing was compared. Additionally, publications by 2014 nursing faculty were collected in both databases and compared for overall coverage and reported quality, as modeled by SCImago Journal Rank, peer-review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated by each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus, whereas Web of Science covered 82%. No significant difference was found in the quality of reported journals. Author h-index was found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
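
    The h-index mentioned above is straightforward to compute: an author has index h when h of their papers have at least h citations each. A minimal sketch with hypothetical citation counts:

```python
# h-index: the largest h such that at least h papers have >= h citations each.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))   # -> 4  (hypothetical citation counts)
```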

  1. Big Data and Total Hip Arthroplasty: How Do Large Databases Compare?

    PubMed

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael A; Lux, Nathan R; Bozic, Kevin J; Callaghan, John J

    2018-01-01

    Use of large databases for orthopedic research has become extremely popular in recent years. Each database varies in the methods used to capture data and the population it represents. The purpose of this study was to evaluate how these databases differed in reported demographics, comorbidities, and postoperative complications for primary total hip arthroplasty (THA) patients. Primary THA patients were identified within the National Surgical Quality Improvement Program (NSQIP), the Nationwide Inpatient Sample (NIS), Medicare Standard Analytic Files (MED), and the Humana administrative claims database (HAC). NSQIP definitions for comorbidities and complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes to query the other databases. Demographics, comorbidities, and postoperative complications were compared. The number of patients from each database was 22,644 in HAC, 371,715 in MED, 188,779 in NIS, and 27,818 in NSQIP. Age and gender distribution were clinically similar. Overall, there was variation in prevalence of comorbidities and rates of postoperative complications between databases. As an example, NSQIP had more than twice the prevalence of obesity reported in NIS, and HAC and MED had more than twice the prevalence of diabetes reported in NSQIP. Rates of deep infection and stroke 30 days after THA differed more than 2-fold across databases. Among databases commonly used in orthopedic research, there is considerable variation in complication rates following THA depending upon the database used for analysis. It is important to consider these differences when critically evaluating database research. Additionally, with the advent of bundled payments, these differences must be considered in risk adjustment models. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Differences in the Reporting of Racial and Socioeconomic Disparities among Three Large National Databases for Breast Reconstruction.

    PubMed

    Kamali, Parisa; Zettervall, Sara L; Wu, Winona; Ibrahim, Ahmed M S; Medin, Caroline; Rakhorst, Hinne A; Schermerhorn, Marc L; Lee, Bernard T; Lin, Samuel J

    2017-04-01

    Research derived from large-volume databases plays an increasing role in the development of clinical guidelines and health policy. In breast cancer research, the Surveillance, Epidemiology and End Results, National Surgical Quality Improvement Program, and Nationwide Inpatient Sample databases are widely used. This study aims to compare the trends in immediate breast reconstruction and identify the drawbacks and benefits of each database. Patients with invasive breast cancer and ductal carcinoma in situ were identified from each database (2005-2012). Trends of immediate breast reconstruction over time were evaluated. Patient demographics and comorbidities were compared. Subgroup analysis of immediate breast reconstruction use per race was conducted. Within the three databases, 1.2 million patients were studied. Immediate breast reconstruction in invasive breast cancer patients increased significantly over time in all databases. A similar significant upward trend was seen in ductal carcinoma in situ patients. Significant differences in immediate breast reconstruction rates were seen among races; and the disparity differed among the three databases. Rates of comorbidities were similar among the three databases. There has been a significant increase in immediate breast reconstruction; however, the extent of the reporting of overall immediate breast reconstruction rates and of racial disparities differs significantly among databases. The Nationwide Inpatient Sample and the National Surgical Quality Improvement Program report similar findings, with the Surveillance, Epidemiology and End Results database reporting results significantly lower in several categories. These findings suggest that use of the Surveillance, Epidemiology and End Results database may not be universally generalizable to the entire U.S.

  3. 77 FR 5021 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-01

    ...) Clinician and Group Survey Comparative Database.'' In accordance with the Paperwork Reduction Act, 44 U.S.C... Providers and Systems (CAHPS) Clinician and Group Survey Comparative Database The Agency for Healthcare..., and provided critical data illuminating key aspects of survey design and administration. In July 2007...

  4. 78 FR 73540 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-06

    ... proposed information collection project: ``Pharmacy Survey on Patient Safety Culture Comparative Database... Project Pharmacy Survey on Patient Safety Culture Comparative Database In 1999, the Institute of Medicine... approval (OMB NO. 0935-0183; Approved 08/12/2011). The survey is designed to enable pharmacies to assess...

  5. The zebrafish embryo model in toxicology and teratology, September 2–3, 2010, Karlsruhe, Germany.

    PubMed

    Busch, Wibke; Duis, Karen; Fenske, Martina; Maack, Gerd; Legler, Juliette; Padilla, Stephanie; Strähle, Uwe; Witters, Hilda; Scholz, Stefan

    2011-05-01

    The use of fish embryos is gaining popularity for research in the area of toxicology and teratology. In particular, zebrafish embryos offer an array of different applications ranging from regulatory testing to mechanistic research. For this reason, a consortium of two research centres and a company, with the support of the COST Action EuFishBiomed, organised the workshop “The zebrafish embryo model in toxicology and teratology” in Karlsruhe, Germany, 2nd–3rd September 2010. The workshop aimed at bringing together experts from different areas of toxicology using the (zebra)fish embryo and at stimulating networking between scientists and representatives from regulatory bodies, research institutions and industry. Recent findings presented in the platform presentations, covering regulatory toxicity, high-throughput screening, toxicogenomics, and environmental and human risk assessment, are highlighted in this meeting report. Furthermore, the constraints and possibilities of the model as discussed at the workshop are described. A follow-up meeting was welcomed by the approximately 120 participants and is planned for 2012.

  6. Novel approaches to improving the chemical safety of the meat chain towards toxicants.

    PubMed

    Engel, E; Ratel, J; Bouhlel, J; Planche, C; Meurillon, M

    2015-11-01

    In addition to microbiological issues, meat chemical safety is a growing concern for the public authorities, chain stakeholders and consumers. Meat may be contaminated by various chemical toxicants originating from the environment, treatments of agricultural production or food processing. Generally found at trace levels in meat, these toxicants may harm human health during chronic exposure. This paper overviews the key issues to be considered to ensure better control of their occurrence in meat and assessment of the related health risk. We first describe potential contaminants of meat products. Strategies to move towards a more efficient and systematic control of meat chemical safety are then presented in a second part, with a focus on emerging approaches based on toxicogenomics. The third part presents mitigation strategies to limit the impact of process-induced toxicants in meat. Finally, the last part introduces methodological advances to refine chemical risk assessment related to the occurrence of toxicants in meat by quantifying the influence of digestion on the fraction of food contaminants that may be assimilated by the human body. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A genome-wide nanotoxicology screen of Saccharomyces cerevisiae mutants reveals the basis for cadmium sulphide quantum dot tolerance and sensitivity.

    PubMed

    Marmiroli, M; Pagano, L; Pasquali, F; Zappettini, A; Tosato, V; Bruschi, C V; Marmiroli, N

    2016-01-01

    The use of cadmium sulphide quantum dots (CdS QDs) is increasing, particularly in the electronics industry. Their size (1-10 nm in diameter) is, however, such that they can be taken up by living cells. Here, a bakers' yeast (Saccharomyces cerevisiae) deletion mutant collection has been exploited to provide a high-throughput means of revealing the genetic basis for tolerance/susceptibility to CdS QD exposure. The deletion of 112 genes, some associated with the abiotic stress response, some with various metabolic processes, some with mitochondrial organization, some with transport and some with DNA repair, reduced the level of tolerance to CdS QDs. A gene ontology analysis highlighted the role of oxidative stress in determining the cellular response. The transformation of sensitive mutants with centromeric plasmids harbouring DNA from a wild type strain restored the wild type growth phenotype when the complemented genes encoded either HSC82, DSK2 or ALD3. The use of these simple eukaryote knock-out mutants for functional toxicogenomic analysis will inform studies focusing on higher organisms.

  8. Testing chemical carcinogenicity by using a transcriptomics HepaRG-based model?

    PubMed Central

    Doktorova, T. Y.; Yildirimman, Reha; Ceelen, Liesbeth; Vilardell, Mireia; Vanhaecke, Tamara; Vinken, Mathieu; Ates, Gamze; Heymans, Anja; Gmuender, Hans; Bort, Roque; Corvi, Raffaella; Phrakonkham, Pascal; Li, Ruoya; Mouchet, Nicolas; Chesne, Christophe; van Delft, Joost; Kleinjans, Jos; Castell, Jose; Herwig, Ralf; Rogiers, Vera

    2014-01-01

    The EU FP6 project carcinoGENOMICS explored the combination of toxicogenomics and in vitro cell culture models for identifying organotypical genotoxic- and non-genotoxic carcinogen-specific gene signatures. Here, the performance of its gene classifier, derived from exposure of metabolically competent human HepaRG cells to prototypical non-carcinogens (10 compounds) and hepatocarcinogens (20 compounds), is reported. Analysis of the data at the gene and the pathway level by using independent biostatistical approaches showed a distinct separation of genotoxic from non-genotoxic hepatocarcinogens and non-carcinogens (up to 88 % correct prediction). The most characteristic pathway responding to genotoxic exposure was DNA damage. Interlaboratory reproducibility was assessed by blind testing of three compounds, from the set of 30 compounds, by three independent laboratories. Subsequent classification of these compounds resulted in correct prediction of the genotoxicants. As expected, results on the non-genotoxic carcinogens and the non-carcinogens were less predictive. In conclusion, the combination of transcriptomics with the HepaRG in vitro cell model provides a potential weight-of-evidence approach for the evaluation of the genotoxic potential of chemical substances. PMID:26417288

  9. Comparison of Ethnic-specific Databases in Heidelberg Retina Tomography-3 to Discriminate Between Early Glaucoma and Normal Chinese Eyes.

    PubMed

    Tan, Xiu Ling; Yap, Sae Cheong; Li, Xiang; Yip, Leonard W

    2017-01-01

    To compare the diagnostic accuracy of the 3 race-specific normative databases in Heidelberg Retina Tomography (HRT)-3, in differentiating between early glaucomatous and healthy normal Chinese eyes. 52 healthy volunteers and 25 glaucoma patients were recruited for this prospective cross-sectional study. All underwent standardized interviews, ophthalmic examination, perimetry and HRT optic disc imaging. Area under the curve (AUC) receiver operating characteristics, sensitivity and specificity were derived to assess the discriminating abilities of the 3 normative databases, for both Moorfields Regression Analysis (MRA) and Glaucoma Probability Score (GPS). A significantly higher percentage (65%) of patients were classified as "within normal limits" using the MRA-Indian database, as compared to the MRA-Caucasian and MRA-African-American databases. However, for GPS, this was observed using the African-American database. For MRA, the highest sensitivity was obtained with both Caucasian and African-American databases (68%), while the highest specificity was from the Indian database (94%). The AUC for discrimination between glaucomatous and normal eyes by MRA-Caucasian, MRA-African-American and MRA-Indian databases were 0.77 (95% CI, 0.67-0.88), 0.79 (0.69-0.89) and 0.73 (0.63-0.84) respectively. For GPS, the highest sensitivity was obtained using either Caucasian or Indian databases (68%). The highest specificity was seen with the African-American database (98%). The AUC for GPS-Caucasian, GPS-African-American and GPS-Indian databases were 0.76 (95% CI, 0.66-0.87), 0.77 (0.67-0.87) and 0.76 (0.66-0.87) respectively. Comparison of the 3 ethnic databases did not reveal significant differences to differentiate early glaucomatous from normal Chinese eyes.

  10. SALAD database: a motif-based database of protein annotations for plant comparative genomics

    PubMed Central

    Mihara, Motohiro; Itoh, Takeshi; Izawa, Takeshi

    2010-01-01

    Proteins often have several motifs with distinct evolutionary histories. Proteins with similar motifs have similar biochemical properties and thus related biological functions. We constructed a unique comparative genomics database termed the SALAD database (http://salad.dna.affrc.go.jp/salad/) from plant-genome-based proteome data sets. We extracted evolutionarily conserved motifs by MEME software from 209 529 protein-sequence annotation groups selected by BLASTP from the proteome data sets of 10 species: rice, sorghum, Arabidopsis thaliana, grape, a lycophyte, a moss, 3 algae, and yeast. Similarity clustering of each protein group was performed by pairwise scoring of the motif patterns of the sequences. The SALAD database provides a user-friendly graphical viewer that displays a motif pattern diagram linked to the resulting bootstrapped dendrogram for each protein group. Amino-acid-sequence-based and nucleotide-sequence-based phylogenetic trees for motif combination alignment, a logo comparison diagram for each clade in the tree, and a Pfam-domain pattern diagram are also available. We also developed a viewer named ‘SALAD on ARRAYs’ to view arbitrary microarray data sets of paralogous genes linked to the same dendrogram in a window. The SALAD database is a powerful tool for comparing protein sequences and can provide valuable hints for biological analysis. PMID:19854933

  11. SALAD database: a motif-based database of protein annotations for plant comparative genomics.

    PubMed

    Mihara, Motohiro; Itoh, Takeshi; Izawa, Takeshi

    2010-01-01

    Proteins often have several motifs with distinct evolutionary histories. Proteins with similar motifs have similar biochemical properties and thus related biological functions. We constructed a unique comparative genomics database termed the SALAD database (http://salad.dna.affrc.go.jp/salad/) from plant-genome-based proteome data sets. We extracted evolutionarily conserved motifs by MEME software from 209,529 protein-sequence annotation groups selected by BLASTP from the proteome data sets of 10 species: rice, sorghum, Arabidopsis thaliana, grape, a lycophyte, a moss, 3 algae, and yeast. Similarity clustering of each protein group was performed by pairwise scoring of the motif patterns of the sequences. The SALAD database provides a user-friendly graphical viewer that displays a motif pattern diagram linked to the resulting bootstrapped dendrogram for each protein group. Amino-acid-sequence-based and nucleotide-sequence-based phylogenetic trees for motif combination alignment, a logo comparison diagram for each clade in the tree, and a Pfam-domain pattern diagram are also available. We also developed a viewer named 'SALAD on ARRAYs' to view arbitrary microarray data sets of paralogous genes linked to the same dendrogram in a window. The SALAD database is a powerful tool for comparing protein sequences and can provide valuable hints for biological analysis.
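
    The SALAD abstracts do not specify the pairwise scoring function, so the sketch below uses a plain Jaccard similarity over each protein's set of motif identifiers followed by average-linkage clustering, as a hypothetical stand-in for the motif-pattern scoring and dendrogram construction described above.

```python
# Hypothetical stand-in for pairwise motif-pattern scoring: Jaccard similarity of
# per-protein motif-ID sets, then average-linkage clustering (NOT SALAD's own method).
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import average
from scipy.spatial.distance import squareform

motifs = {                       # protein -> set of MEME-style motif IDs (made up)
    "protA": {"m1", "m2", "m3"},
    "protB": {"m1", "m2"},
    "protC": {"m4", "m5"},
}
names = sorted(motifs)
dist = np.zeros((len(names), len(names)))
for (i, a), (j, b) in combinations(enumerate(names), 2):
    jaccard = len(motifs[a] & motifs[b]) / len(motifs[a] | motifs[b])
    dist[i, j] = dist[j, i] = 1.0 - jaccard

linkage_matrix = average(squareform(dist))   # condensed distances -> dendrogram data
print(linkage_matrix)
```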

  12. NGSmethDB 2017: enhanced methylomes and differential methylation

    PubMed Central

    Lebrón, Ricardo; Gómez-Martín, Cristina; Carpena, Pedro; Bernaola-Galván, Pedro; Barturen, Guillermo; Hackenberg, Michael; Oliver, José L.

    2017-01-01

    The 2017 update of NGSmethDB stores whole-genome methylomes generated from short-read data sets obtained by whole-genome bisulfite sequencing (WGBS). To generate high-quality methylomes, stringent quality controls were integrated with third-party software, and a two-step mapping process was added to exploit the advantages of the new genome assembly models. The samples were all profiled under constant parameter settings, thus enabling comparative downstream analyses. Besides a significant increase in the number of samples, NGSmethDB now includes two additional data types, which are a valuable resource for the discovery of methylation epigenetic biomarkers: (i) differentially methylated single cytosines; and (ii) methylation segments (i.e. genome regions of homogeneous methylation). The NGSmethDB back-end is now based on MongoDB, a NoSQL hierarchical database using JSON-formatted documents and dynamic schemas, thus accelerating sample comparative analyses. Besides conventional database dumps, track hubs were implemented, which improved database access, visualization in genome browsers and comparative analyses against third-party annotations. In addition, the database can also be accessed through a RESTful API. Lastly, a Python client and a multiplatform virtual machine allow for program-driven access from the user's desktop. This way, private methylation data can be compared to NGSmethDB without the need to upload them to public servers. Database website: http://bioinfo2.ugr.es/NGSmethDB. PMID:27794041
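
    As an illustration of the kind of JSON-document storage and querying a MongoDB back-end enables, the sketch below stores and filters per-cytosine records with pymongo. The database, collection, and field names are hypothetical, not NGSmethDB's actual schema, and a local MongoDB server is assumed.

```python
# Illustration of per-cytosine JSON documents in MongoDB. Database/collection/field
# names are hypothetical (NOT NGSmethDB's schema); requires a local MongoDB server.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
col = client["methylomes_demo"]["cytosines"]          # hypothetical names

col.insert_one({
    "assembly": "hg38", "chrom": "chr1", "pos": 10_469,
    "context": "CpG", "sample": "liver_demo",
    "coverage": 34, "meth_ratio": 0.88,
})

# Find highly methylated CpGs in a region: the kind of query a dynamic
# JSON schema makes straightforward.
for doc in col.find({"chrom": "chr1",
                     "pos": {"$gte": 10_000, "$lte": 20_000},
                     "meth_ratio": {"$gte": 0.8}}):
    print(doc["pos"], doc["meth_ratio"])
```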

  13. Six Online Periodical Databases: A Librarian's View.

    ERIC Educational Resources Information Center

    Willems, Harry

    1999-01-01

    Compares the following World Wide Web-based periodical databases, focusing on their usefulness in K-12 school libraries: EBSCO, Electric Library, Facts on File, SIRS, Wilson, and UMI. Search interfaces, display options, help screens, printing, home access, copyright restrictions, database administration, and making a decision are discussed. A…

  14. Orthology for comparative genomics in the mouse genome database.

    PubMed

    Dolan, Mary E; Baldarelli, Richard M; Bello, Susan M; Ni, Li; McAndrews, Monica S; Bult, Carol J; Kadin, James A; Richardson, Joel E; Ringwald, Martin; Eppig, Janan T; Blake, Judith A

    2015-08-01

    The mouse genome database (MGD) is the model organism database component of the mouse genome informatics system at The Jackson Laboratory. MGD is the international data resource for the laboratory mouse and facilitates the use of mice in the study of human health and disease. Since its beginnings, MGD has included comparative genomics data with a particular focus on human-mouse orthology, an essential component of the use of mouse as a model organism. Over the past 25 years, novel algorithms and addition of orthologs from other model organisms have enriched comparative genomics in MGD data, extending the use of orthology data to support the laboratory mouse as a model of human biology. Here, we describe current comparative data in MGD and review the history and refinement of orthology representation in this resource.

  15. Searching the Cambridge Structural Database for polymorphs.

    PubMed

    van de Streek, Jacco; Motherwell, Sam

    2005-10-01

    In order to identify all pairs of polymorphs in the Cambridge Structural Database (CSD), a method was devised to automatically compare two crystal structures. The comparison is based on simulated powder diffraction patterns, but with special provisions to deal with differences in unit-cell volumes caused by temperature or pressure. Among the 325,000 crystal structures in the Cambridge Structural Database, 35,000 pairs of crystal structures of the same chemical compound were identified and compared. A total of 7300 pairs of polymorphs were identified, of which 154 were previously unknown.
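    A rough Python sketch of the underlying idea (comparing simulated powder patterns while tolerating small unit-cell volume differences) is shown below; the rescaling range and correlation score are illustrative choices, not the algorithm used for the CSD survey.

        # Sketch of the core comparison idea: two simulated powder patterns are judged similar
        # if their correlation is high after allowing a small rescaling of the 2-theta axis
        # (a rough stand-in for temperature/pressure-driven unit-cell volume changes).
        import numpy as np

        def pattern_similarity(two_theta, intensity_a, intensity_b, max_scale=0.02, steps=41):
            best = -1.0
            for s in np.linspace(1 - max_scale, 1 + max_scale, steps):
                # stretch pattern B along 2-theta and re-sample it onto the common grid
                b_scaled = np.interp(two_theta, two_theta * s, intensity_b)
                r = np.corrcoef(intensity_a, b_scaled)[0, 1]
                best = max(best, r)
            return best   # close to 1.0 suggests the same polymorph

        # toy usage with synthetic Gaussian "peaks"
        grid = np.linspace(5, 50, 2000)
        peaks = lambda centers: sum(np.exp(-((grid - c) ** 2) / 0.02) for c in centers)
        print(pattern_similarity(grid, peaks([10, 17.5, 24]), peaks([10.1, 17.7, 24.2])))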

  16. Integrating Borrowed Records into a Database: Impact on Thesaurus Development and Retrieval.

    ERIC Educational Resources Information Center

    And Others; Kirtland, Monika

    1980-01-01

    Discusses three approaches to thesaurus and indexing/retrieval language maintenance for combined databases: reindexing, merging, and initial standardization. Two thesauri for a combined database are evaluated in terms of their compatibility, and indexing practices are compared. Tables and figures help illustrate aspects of the comparison. (SW)

  17. Multi-Database Searching in the Behavioral Sciences--Part I: Basic Techniques and Core Databases.

    ERIC Educational Resources Information Center

    Angier, Jennifer J.; Epstein, Barbara A.

    1980-01-01

    Outlines practical searching techniques in seven core behavioral science databases accessing psychological literature: Psychological Abstracts, Social Science Citation Index, Biosis, Medline, Excerpta Medica, Sociological Abstracts, ERIC. Use of individual files is discussed and their relative strengths/weaknesses are compared. Appended is a list…

  18. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  19. The International Space Station Comparative Maintenance Analysis (CMAM)

    DTIC Science & Technology

    2004-09-01

    The CMAM ORU database consists of three tables: an ORU master parts list, an ISS Flight table, and an ISS Subsystem table. The ORU master parts list and the ISS Flight table can be updated or modified from the CMAM user interface.

  20. Thematic video indexing to support video database retrieval and query processing

    NASA Astrophysics Data System (ADS)

    Khoja, Shakeel A.; Hall, Wendy

    1999-08-01

    This paper presents a novel video database system, which caters for complex and long videos, such as documentaries, educational videos, etc. As compared to relatively structured format videos like CNN news or commercial advertisements, this database system has the capacity to work with long and unstructured videos.

  1. Academic Journal Embargoes and Full Text Databases.

    ERIC Educational Resources Information Center

    Brooks, Sam

    2003-01-01

    Documents the reasons for embargoes of academic journals in full text databases (i.e., publisher-imposed delays on the availability of full text content) and provides insight regarding common misconceptions. Tables present data on selected journals covering a cross-section of subjects and publishers and comparing two full text business databases.…

  2. The Israeli National Genetic database: a 10-year experience.

    PubMed

    Zlotogora, Joël; Patrinos, George P

    2017-03-16

    The Israeli National and Ethnic Mutation database ( http://server.goldenhelix.org/israeli ) was launched in September 2006 on the ETHNOS software to include clinically relevant genomic variants reported among Jewish and Arab Israeli patients. In 2016, the database was reviewed and corrected according to ClinVar ( https://www.ncbi.nlm.nih.gov/clinvar ) and ExAC ( http://exac.broadinstitute.org ) database entries. The present article summarizes some key aspects from the development and continuous update of the database over a 10-year period, which could serve as a paradigm of successful database curation for other similar resources. In September 2016, there were 2444 entries in the database, 890 among Jews, 1376 among Israeli Arabs, and 178 entries among Palestinian Arabs, corresponding to an ~4× data content increase compared to when originally launched. While the Israeli Arab population is much smaller than the Jewish population, the number of pathogenic variants causing recessive disorders reported in the database is higher among Arabs (934) than among Jews (648). Nevertheless, the number of pathogenic variants classified as founder mutations in the database is smaller among Arabs (175) than among Jews (192). In 2016, the entire database content was compared to that of other databases such as ClinVar and ExAC. We show that a significant difference in the percentage of pathogenic variants from the Israeli genetic database that were present in ExAC was observed between the Jewish population (31.8%) and the Israeli Arab population (20.6%). The Israeli genetic database was launched in 2006 on the ETHNOS software and has been available online ever since. It allows querying the database by disorder and ethnicity; however, many other features are not available, in particular the possibility to search by gene name. In addition, due to the technical limitations of the previous ETHNOS software, new features and data are not included in the present online version of the database and an upgrade is currently ongoing.

  3. Use of a German longitudinal prescription database (LRx) in pharmacoepidemiology.

    PubMed

    Richter, Hartmut; Dombrowski, Silvia; Hamer, Hajo; Hadji, Peyman; Kostev, Karel

    2015-01-01

    Large epidemiological databases are often used to examine matters pertaining to drug utilization, health services, and drug safety. The major strength of such databases is that they include large sample sizes, which allow precise estimates to be made. The IMS® LRx database has in recent years been used as a data source for epidemiological research. The aim of this paper is to review a number of recent studies published with the aid of this database and compare these with the results of similar studies using independent data published in the literature. In spite of being somewhat limited to studies for which comparative independent results were available, it was possible to include a wide range of possible uses of the LRx database in a variety of therapeutic fields: prevalence/incidence rate determination (diabetes, epilepsy), persistence analyses (diabetes, osteoporosis), use of comedication (diabetes), drug utilization (G-CSF market) and treatment costs (diabetes, G-CSF market). In general, the results of the LRx studies were found to be clearly in line with previously published reports. In some cases, noticeable discrepancies between the LRx results and the literature data were found (e.g. prevalence in epilepsy, persistence in osteoporosis) and these were discussed and possible reasons presented. Overall, it was concluded that the IMS® LRx database forms a suitable database for pharmacoepidemiological studies.

  4. A comparative study of six European databases of medically oriented Web resources.

    PubMed

    Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes

    2005-10-01

    The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.

  5. A comparative cellular and molecular biology of longevity database.

    PubMed

    Stuart, Jeffrey A; Liang, Ping; Luo, Xuemei; Page, Melissa M; Gallagher, Emily J; Christoff, Casey A; Robb, Ellen L

    2013-10-01

    Discovering key cellular and molecular traits that promote longevity is a major goal of aging and longevity research. One experimental strategy is to determine which traits have been selected during the evolution of longevity in naturally long-lived animal species. This comparative approach has been applied to lifespan research for nearly four decades, yielding hundreds of datasets describing aspects of cell and molecular biology hypothesized to relate to animal longevity. Here, we introduce a Comparative Cellular and Molecular Biology of Longevity Database, available at ( http://genomics.brocku.ca/ccmbl/ ), as a compendium of comparative cell and molecular data presented in the context of longevity. This open access database will facilitate the meta-analysis of amalgamated datasets using standardized maximum lifespan (MLSP) data (from AnAge). The first edition contains over 800 data records describing experimental measurements of cellular stress resistance, reactive oxygen species metabolism, membrane composition, protein homeostasis, and genome homeostasis as they relate to vertebrate species MLSP. The purpose of this review is to introduce the database and briefly demonstrate its use in the meta-analysis of combined datasets.

  6. MOSAIC: an online database dedicated to the comparative genomics of bacterial strains at the intra-species level.

    PubMed

    Chiapello, Hélène; Gendrault, Annie; Caron, Christophe; Blum, Jérome; Petit, Marie-Agnès; El Karoui, Meriem

    2008-11-27

    The recent availability of complete sequences for numerous closely related bacterial genomes opens up new challenges in comparative genomics. Several methods have been developed to align complete genomes at the nucleotide level but their use and the biological interpretation of results are not straightforward. It is therefore necessary to develop new resources to access, analyze, and visualize genome comparisons. Here we present recent developments on MOSAIC, a generalist comparative bacterial genome database. This database provides the bacteriologist community with easy access to comparisons of complete bacterial genomes at the intra-species level. The strategy we developed for comparison allows us to define two types of regions in bacterial genomes: backbone segments (i.e., regions conserved in all compared strains) and variable segments (i.e., regions that are either specific to or variable in one of the aligned genomes). Definition of these segments at the nucleotide level allows precise comparative and evolutionary analyses of both coding and non-coding regions of bacterial genomes. Such work is easily performed using the MOSAIC Web interface, which allows browsing and graphical visualization of genome comparisons. The MOSAIC database now includes 493 pairwise comparisons and 35 multiple maximal comparisons representing 78 bacterial species. Genome conserved regions (backbones) and variable segments are presented in various formats for further analysis. A graphical interface allows visualization of aligned genomes and functional annotations. The MOSAIC database is available online at http://genome.jouy.inra.fr/mosaic.

  7. Comparative analysis of perioperative complications between a multicenter prospective cervical deformity database and the Nationwide Inpatient Sample database.

    PubMed

    Passias, Peter G; Horn, Samantha R; Jalai, Cyrus M; Poorman, Gregory; Bono, Olivia J; Ramchandran, Subaraman; Smith, Justin S; Scheer, Justin K; Sciubba, Daniel M; Hamilton, D Kojo; Mundis, Gregory; Oh, Cheongeun; Klineberg, Eric O; Lafage, Virginie; Shaffrey, Christopher I; Ames, Christopher P

    2017-11-01

    Complication rates for adult cervical deformity are poorly characterized given the complexity and heterogeneity of cases. To compare perioperative complication rates following adult cervical deformity corrective surgery between a prospective multicenter database for patients with cervical deformity (PCD) and the Nationwide Inpatient Sample (NIS). Retrospective review of prospective databases. A total of 11,501 adult patients with cervical deformity (11,379 patients from the NIS and 122 patients from the PCD database). Perioperative medical and surgical complications. The NIS was queried (2001-2013) for cervical deformity discharges for patients ≥18 years undergoing cervical fusions using International Classification of Disease, Ninth Revision (ICD-9) coding. Patients ≥18 years from the PCD database (2013-2015) were selected. Equivalent complications were identified and rates were compared. Bonferroni correction (p<.004) was used for Pearson chi-square. Binary logistic regression was used to evaluate differences in complication rates between databases. A total of 11,379 patients from the NIS database and 122 patients from the PCD database were identified. Patients from the PCD database were older (62.49 vs. 55.15, p<.001) but displayed similar gender distribution. Intraoperative complication rate was higher in the PCD (39.3%) group than in the NIS (9.2%, p<.001) database. The PCD database had an increased risk of reporting overall complications than the NIS (odds ratio: 2.81, confidence interval: 1.81-4.38). Only device-related complications were greater in the NIS (7.1% vs. 1.1%, p=.007). Patients from the PCD database displayed higher rates of the following complications: peripheral vascular (0.8% vs. 0.1%, p=.001), gastrointestinal (GI) (2.5% vs. 0.2%, p<.001), infection (8.2% vs. 0.5%, p<.001), dural tear (4.1% vs. 0.6%, p<.001), and dysphagia (9.8% vs. 1.9%, p<.001). Genitourinary, wound, and deep vein thrombosis (DVT) complications were similar between databases (p>.004). Based on surgical approach, the PCD reported higher GI and neurologic complication rates for combined anterior-posterior procedures (p<.001). For posterior-only procedures, the NIS had more device-related complications (12.4% vs. 0.1%, p=.003), whereas PCD had more infections (9.3% vs. 0.7%, p<.001). Analysis of the surgeon-maintained cervical database revealed higher overall and individual complication rates and higher data granularity. The nationwide database may underestimate complications of patients with adult cervical deformity (ACD) particularly in regard to perioperative surgical details owing to coding and deformity generalizations. The surgeon-maintained database captures the surgical details, but may underestimate some medical complications. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. A review of accessibility of administrative healthcare databases in the Asia-Pacific region

    PubMed Central

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    Objective We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. Methods The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Results Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3–6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted based on requirements by data custodians. Conclusions Compared with previous research, this study describes the landscape of databases in the selected countries with more granularity using an assessment tool developed for this purpose. A high number of databases were identified but most had restricted access, preventing their potential use to support research. We hope that this study helps to improve the understanding of the AHDB landscape, increase data sharing and database research in Asia-Pacific countries. PMID:27123180

  9. [Comparison between administrative and clinical databases in the evaluation of cardiac surgery performance].

    PubMed

    Rosato, Stefano; D'Errigo, Paola; Badoni, Gabriella; Fusco, Danilo; Perucci, Carlo A; Seccareccia, Fulvia

    2008-08-01

    The availability of two contemporary sources of information about coronary artery bypass graft (CABG) interventions allowed us 1) to verify the feasibility of performing outcome evaluation studies using administrative data sources, and 2) to compare hospital performance obtained using the CABG Project clinical database with hospital performance derived from current administrative data. Interventions recorded in the CABG Project were linked to the hospital discharge record (HDR) administrative database. Only the linked records were considered for subsequent analyses (46% of the total CABG Project). A new selected population "clinical card-HDR" was then defined. Two independent risk-adjustment models were applied, each of them using information derived from one of the two different sources. Then, HDR information was supplemented with some patient preoperative conditions from the CABG clinical database. The two models were compared in terms of their adaptability to data. Hospital performances that the two models identified as significantly different from the mean were then compared. In only 4 of the 13 hospitals considered for analysis, the results obtained using the HDR model did not completely overlap with those obtained by the CABG model. When comparing statistical parameters of the HDR model and the HDR model + patient preoperative conditions, the latter showed the best adaptability to data. In this "clinical card-HDR" population, hospital performance assessment obtained using information from the clinical database is similar to that derived from the use of current administrative data. However, when risk-adjustment models built on administrative databases are supplemented with a few clinical variables, their statistical parameters improve and hospital performance assessment becomes more accurate.

  10. CycADS: an annotation database system to ease the development and update of BioCyc databases

    PubMed Central

    Vellozo, Augusto F.; Véron, Amélie S.; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E.; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano

    2011-01-01

    In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system that we called Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including for example: KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum) whose genome was recently sequenced. The AcypiCyc database webpage also includes, for comparative analyses, two other metabolic reconstruction BioCyc databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Linked to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited for metabolic gene annotation and network reconstruction in newly sequenced genomes. Because of the uniform annotation used for metabolic network reconstruction, CycADS is particularly useful for comparative analysis of the metabolism of different organisms. Database URL: http://www.cycadsys.org PMID:21474551

  11. Production and distribution of scientific and technical databases - Comparison among Japan, US and Europe

    NASA Astrophysics Data System (ADS)

    Onodera, Natsuo; Mizukami, Masayuki

    This paper estimates several quantitative indices on the production and distribution of scientific and technical databases based on various recent publications and attempts to compare the indices internationally. Raw data used for the estimation are drawn mainly from the Database Directory (published by MITI) for database production and from some domestic and foreign study reports for database revenues. The ratios of the indices among Japan, the US and Europe for database usage are similar to those for general scientific and technical activities such as population and R&D expenditures. However, Japanese contributions to the production, revenue and cross-border distribution of databases are still lower than those of the US and European countries. An international comparison of relative database activities between the public and private sectors is also discussed.

  12. Sodium content of foods contributing to sodium intake: A comparison between selected foods from the CDC Packaged Food Database and the USDA National Nutrient Database for Standard Reference

    USDA-ARS?s Scientific Manuscript database

    The sodium concentration (mg/100g) for 23 of 125 Sentinel Foods was identified in the 2009 CDC Packaged Food Database (PFD) and compared with data in the USDA’s 2013 Standard Reference 26 (SR 26) database. Sentinel Foods are foods and beverages identified by USDA to be monitored as primary indicat...

  13. Interactive toxicity of usnic acid and lipopolysaccharides in human liver HepG2 cells.

    PubMed

    Sahu, Saura C; O'Donnell, Michael W; Sprando, Robert L

    2012-09-01

    Usnic acid (UA), a natural botanical product, is a constituent of some dietary supplements used for weight loss. It has been associated with clinical hepatotoxicity leading to liver failure in humans. The present study was undertaken to evaluate the interactive toxicity, if any, of UA with lipopolysaccharides (LPS), a potential contaminant of food, at low non-toxic concentrations. The human hepatoblastoma HepG2 cells were treated with the vehicle control and test agents, separately and in a binary mixture, for 24 h at 37°C in 5% CO2. After the treatment period, the cells were evaluated by the traditional biochemical endpoints of toxicity in combination with the toxicogenomic endpoints that included cytotoxicity, oxidative stress, mitochondrial injury and changes in pathway-focused gene expression profiles. Compared with the controls, low non-toxic concentrations of UA and LPS separately showed no effect on the cells as determined by the biochemical endpoints. However, the simultaneous mixed exposure of the cells to their binary mixture resulted in increased cytotoxicity, oxidative stress and mitochondrial injury. The pathway-focused gene expression analysis resulted in the altered expression of several genes out of 84 genes examined. Most altered gene expressions induced by the binary mixture of UA and LPS were different from those induced by the individual constituents. The genes affected by the mixture were not modulated by either UA or LPS. The results of the present study suggest that the interactions of low nontoxic concentrations of UA and LPS produce toxicity in HepG2 cells. Published 2012. This article is a US Government work and is in the public domain in the USA.

  14. Compound-specific effects of diverse neurodevelopmental toxicants on global gene expression in the neural embryonic stem cell test (ESTn)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theunissen, P.T., E-mail: Peter.Theunissen@rivm.nl; Department of Toxicogenomics, Maastricht University, Maastricht; Robinson, J.F.

    Alternative assays for developmental toxicity testing are needed to reduce animal use in regulatory toxicology. The in vitro murine neural embryonic stem cell test (ESTn) was designed as an alternative for neurodevelopmental toxicity testing. The integration of toxicogenomic-based approaches may further increase predictivity as well as provide insight into underlying mechanisms of developmental toxicity. In the present study, we investigated concentration-dependent effects of six mechanistically diverse compounds, acetaldehyde (ACE), carbamazepine (CBZ), flusilazole (FLU), monoethylhexyl phthalate (MEHP), penicillin G (PENG) and phenytoin (PHE), on the transcriptome and neural differentiation in the ESTn. All compounds with the exception of PENG altered ESTn morphology (cytotoxicity and neural differentiation) in a concentration-dependent manner. Compound induced gene expression changes and corresponding enriched gene ontology biological processes (GO–BP) were identified after 24 h exposure at equipotent differentiation-inhibiting concentrations of the compounds. Both compound-specific and common gene expression changes were observed between subsets of tested compounds, in terms of significance, magnitude of regulation and functionality. For example, ACE, CBZ and FLU induced robust changes in number of significantly altered genes (≥ 687 genes) as well as a variety of GO–BP, as compared to MEHP, PHE and PENG (≤ 55 genes with no significant changes in GO–BP observed). Genes associated with developmentally related processes (embryonic morphogenesis, neuron differentiation, and Wnt signaling) showed diverse regulation after exposure to ACE, CBZ and FLU. In addition, gene expression and GO–BP enrichment showed concentration dependence, allowing discrimination of non-toxic versus toxic concentrations on the basis of transcriptomics. This information may be used to define adaptive versus toxic responses at the transcriptome level.

  15. Identifying relevant data for a biological database: handcrafted rules versus machine learning.

    PubMed

    Sehgal, Aditya Kumar; Das, Sanmay; Noto, Keith; Saier, Milton H; Elkan, Charles

    2011-01-01

    With well over 1,000 specialized biological databases in use today, the task of automatically identifying novel, relevant data for such databases is increasingly important. In this paper, we describe practical machine learning approaches for identifying MEDLINE documents and Swiss-Prot/TrEMBL protein records, for incorporation into a specialized biological database of transport proteins named TCDB. We show that both learning approaches outperform rules created by hand by a human expert. As one of the first case studies involving two different approaches to updating a deployed database, both the methods compared and the results will be of interest to curators of many specialized databases.
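    A hedged sketch of the general approach follows: a simple TF-IDF plus logistic regression classifier that flags abstracts likely to be relevant to a specialized database. The tiny training set and the model choice are illustrative, not the classifiers evaluated in the paper.

        # Illustrative text classifier for triaging abstracts by relevance to a
        # specialized database; the training examples below are made up.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        train_texts = [
            "Characterization of a sodium/proton antiporter in Escherichia coli membranes",
            "Crystal structure of an ABC transporter bound to its substrate",
            "Transcriptional regulation of flowering time in Arabidopsis",
            "A survey of user satisfaction with library catalogues",
        ]
        train_labels = [1, 1, 0, 0]   # 1 = relevant to a transport-protein database

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), LogisticRegression())
        clf.fit(train_texts, train_labels)

        new_abstract = "Functional analysis of a proton-coupled oligopeptide transporter"
        print(clf.predict_proba([new_abstract])[0, 1])   # estimated probability of relevance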

  16. Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database

    ERIC Educational Resources Information Center

    Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.; Chard, Kyle; Foster, Ian T.; de Pablo, Juan

    2016-01-01

    Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The…

  17. FirstSearch and NetFirst--Web and Dial-up Access: Plus Ca Change, Plus C'est la Meme Chose?

    ERIC Educational Resources Information Center

    Koehler, Wallace; Mincey, Danielle

    1996-01-01

    Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)

  18. A Database Practicum for Teaching Database Administration and Software Development at Regis University

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2013-01-01

    This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers and can inspire students to…

  19. Atomic Spectroscopic Databases at NIST

    NASA Technical Reports Server (NTRS)

    Reader, J.; Kramida, A. E.; Ralchenko, Yu.

    2006-01-01

    We describe recent work at NIST to develop and maintain databases for spectra, transition probabilities, and energy levels of atoms that are astrophysically important. Our programs to critically compile these data as well as to develop a new database to compare plasma calculations for atoms that are not in local thermodynamic equilibrium are also summarized.

  20. A prototypic small molecule database for bronchoalveolar lavage-based metabolomics

    NASA Astrophysics Data System (ADS)

    Walmsley, Scott; Cruickshank-Quinn, Charmion; Quinn, Kevin; Zhang, Xing; Petrache, Irina; Bowler, Russell P.; Reisdorph, Richard; Reisdorph, Nichole

    2018-04-01

    The analysis of bronchoalveolar lavage fluid (BALF) using mass spectrometry-based metabolomics can provide insight into lung diseases, such as asthma. However, the important step of compound identification is hindered by the lack of a small molecule database that is specific for BALF. Here we describe prototypic, small molecule databases derived from human BALF samples (n=117). Human BALF was extracted into lipid and aqueous fractions and analyzed using liquid chromatography mass spectrometry. Following filtering to reduce contaminants and artifacts, the resulting BALF databases (BALF-DBs) contain 11,736 lipid and 658 aqueous compounds. Over 10% of these were found in 100% of samples. Testing the BALF-DBs using nested test sets produced a 99% match rate for lipids and 47% match rate for aqueous molecules. Searching an independent dataset resulted in 45% matching to the lipid BALF-DB compared to <25% when general databases are searched. The BALF-DBs are available for download from MetaboLights. Overall, the BALF-DBs can reduce false positives and improve confidence in compound identification compared to when general databases are used.
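    The compound-identification step described above amounts to matching measured masses against database entries within a mass-accuracy window; a minimal Python sketch, with an assumed record layout and a 10 ppm tolerance, might look like this.

        # Sketch of matching measured m/z values against a small-molecule database within
        # a mass-accuracy window. The tolerance and record layout are assumptions for
        # illustration, not the matching rules used for BALF-DB.
        import bisect

        def match_features(measured_mz, database, ppm=10.0):
            """database: sorted list of (mz, compound_name) tuples; returns hits per feature."""
            db_mz = [mz for mz, _ in database]
            hits = {}
            for mz in measured_mz:
                tol = mz * ppm / 1e6
                lo = bisect.bisect_left(db_mz, mz - tol)
                hi = bisect.bisect_right(db_mz, mz + tol)
                hits[mz] = [database[i][1] for i in range(lo, hi)]
            return hits

        db = sorted([(760.5851, "PC(34:1)"), (496.3398, "LPC(16:0)"), (166.0863, "phenylalanine")])
        print(match_features([760.5856, 300.1234], db))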

  1. Network-based statistical comparison of citation topology of bibliographic databases

    PubMed Central

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies. PMID:25263231
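    The per-network statistics such a comparison relies on can be computed with standard graph libraries; the short Python/networkx sketch below uses a toy citation graph and a few example statistics (degree, clustering, largest strongly connected component), not the paper's full statistic set.

        # Toy example of per-network statistics for comparing citation topologies.
        import networkx as nx

        G = nx.DiGraph()                       # edges point from citing paper to cited paper
        G.add_edges_from([("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p3", "p1"), ("p4", "p2")])

        stats = {
            "papers": G.number_of_nodes(),
            "citations": G.number_of_edges(),
            "mean in-degree": sum(d for _, d in G.in_degree()) / G.number_of_nodes(),
            "avg clustering (undirected)": nx.average_clustering(G.to_undirected()),
            "largest SCC size": len(max(nx.strongly_connected_components(G), key=len)),
        }
        print(stats)   # comparing these dictionaries across databases reveals inconsistencies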

  2. NGSmethDB 2017: enhanced methylomes and differential methylation.

    PubMed

    Lebrón, Ricardo; Gómez-Martín, Cristina; Carpena, Pedro; Bernaola-Galván, Pedro; Barturen, Guillermo; Hackenberg, Michael; Oliver, José L

    2017-01-04

    The 2017 update of NGSmethDB stores whole genome methylomes generated from short-read data sets obtained by bisulfite sequencing (WGBS) technology. To generate high-quality methylomes, stringent quality controls were integrated with third-party software, and a two-step mapping process was added to exploit the advantages of the new genome assembly models. The samples were all profiled under constant parameter settings, thus enabling comparative downstream analyses. Besides a significant increase in the number of samples, NGSmethDB now includes two additional data-types, which are a valuable resource for the discovery of methylation epigenetic biomarkers: (i) differentially methylated single-cytosines; and (ii) methylation segments (i.e. genome regions of homogeneous methylation). The NGSmethDB back-end is now based on MongoDB, a NoSQL hierarchical database using JSON-formatted documents and dynamic schemas, thus accelerating sample comparative analyses. Besides conventional database dumps, track hubs were implemented, which improved database access, visualization in genome browsers and comparative analyses against third-party annotations. In addition, the database can also be accessed through a RESTful API. Lastly, a Python client and a multiplatform virtual machine allow program-driven access from the user's desktop. This way, private methylation data can be compared to NGSmethDB without the need to upload them to public servers. Database website: http://bioinfo2.ugr.es/NGSmethDB. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. SymDex: increasing the efficiency of chemical fingerprint similarity searches for comparing large chemical libraries by using query set indexing.

    PubMed

    Tai, David; Fang, Jianwen

    2012-08-27

    The large sizes of today's chemical databases require efficient algorithms to perform similarity searches. It can be very time consuming to compare two large chemical databases. This paper seeks to build upon existing research efforts by describing a novel strategy for accelerating existing search algorithms for comparing large chemical collections. The quest for efficiency has focused on developing better indexing algorithms by creating heuristics for searching an individual chemical against a chemical library by detecting and eliminating needless similarity calculations. For comparing two chemical collections, these algorithms simply execute searches for each chemical in the query set sequentially. The strategy presented in this paper achieves a speedup over these algorithms by indexing the set of all query chemicals so that redundant calculations that arise in the case of sequential searches are eliminated. We implement this novel algorithm by developing a similarity search program called Symmetric inDexing, or SymDex. SymDex shows over a 232% maximum speedup compared to the state-of-the-art single-query search algorithm over real data for various fingerprint lengths. Considerable speedup is seen even for batch searches where query set sizes are relatively small compared to typical database sizes. To the best of our knowledge, SymDex is the first search algorithm designed specifically for comparing chemical libraries. It can be adapted to most, if not all, existing indexing algorithms and shows potential for accelerating future similarity search algorithms for comparing chemical databases.
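    For orientation, the sketch below illustrates a classic ingredient of fingerprint similarity indexing: pruning candidates by bit-count bounds before computing exact Tanimoto scores. It shows the general idea only and is not SymDex's query-set indexing algorithm.

        # Threshold Tanimoto search with bit-count pruning: for threshold t and a query
        # with nq set bits, only fingerprints whose popcount lies in [t*nq, nq/t] can
        # possibly reach the threshold, so other candidates are skipped entirely.
        from collections import defaultdict

        def popcount(x):
            return bin(x).count("1")

        def tanimoto(a, b):                    # a, b are Python ints used as bit vectors
            union = popcount(a | b)
            return popcount(a & b) / union if union else 0.0

        def build_index(library):              # group library fingerprints by popcount
            index = defaultdict(list)
            for fp in library:
                index[popcount(fp)].append(fp)
            return index

        def search(query, index, t=0.7):
            nq = popcount(query)
            lo, hi = int(t * nq), int(nq / t)
            hits = []
            for n in range(lo, hi + 1):
                for fp in index.get(n, []):
                    if tanimoto(query, fp) >= t:
                        hits.append(fp)
            return hits

        library = [0b1011011011, 0b1011000011, 0b0000111100, 0b1111111111]
        index = build_index(library)
        print([bin(h) for h in search(0b1011011010, index)])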

  4. 75 FR 57272 - The Dun & Bradstreet Corporation; Analysis of Agreement Containing Consent Order to Aid Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-20

    ..., demographic, and other information that allow their customers to market to teachers, administrators, schools... turning to the other company. By contrast, MCH lacked a K-12 database comparable to MDR or QED's..., including the time and cost to develop a database with market coverage and accuracy comparable to MDR or QED...

  5. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  6. Impact of database quality in knowledge-based treatment planning for prostate cancer.

    PubMed

    Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D

    2018-03-13

    This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ at risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all 5 bladder dose-volumes and rectum D50 (P = .004) and D65 (P < .001), whereas CPD predictions for rectum D10 (P = .005) and D30 (P < .001) were significantly less than MCOD predictions. KBP predictions were statistically achievable in the replans for all predicted dose-volumes, excluding D10 of bladder (P = .03) and rectum (P = .04). Compared with clinical plans, replans showed significant average reductions in Dmean for bladder (7.8 Gy; P < .001) and rectum (9.4 Gy; P < .001), while maintaining statistically similar planning target volume, femoral head, and penile bulb dose. KBP dose-volume predictions derived from Pareto plans were more optimal overall than those resulting from manually optimized clinical plans, which significantly improved KBP-assisted plan quality. This work investigates how the plan quality of knowledge databases affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability. Copyright © 2018 Elsevier Inc. All rights reserved.
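    As a rough illustration of the overlap-volume-histogram matching step described above, the Python sketch below summarizes each patient by the fraction of organ-at-risk voxels within a set of expansion distances of the target and matches new patients by Euclidean distance; the distances, features and matching rule are assumptions, not the study's exact method.

        # Overlap-volume-histogram (OVH) style matching, heavily simplified.
        import numpy as np

        EXPANSIONS_MM = np.array([0, 2, 5, 10, 20, 30])

        def ovh_vector(oar_distances_mm):
            """oar_distances_mm: distance of every OAR voxel to the target surface (negative = inside)."""
            d = np.asarray(oar_distances_mm, dtype=float)
            return np.array([(d <= r).mean() for r in EXPANSIONS_MM])

        def best_matches(new_ovh, database_ovhs, k=3):
            """database_ovhs: dict of patient_id -> OVH vector; returns the k closest patients."""
            scored = sorted(database_ovhs.items(), key=lambda kv: np.linalg.norm(kv[1] - new_ovh))
            return [pid for pid, _ in scored[:k]]

        # toy usage with random voxel-to-target distances
        rng = np.random.default_rng(0)
        db = {f"pt{i:03d}": ovh_vector(rng.normal(15, 10, 5000)) for i in range(20)}
        print(best_matches(ovh_vector(rng.normal(12, 8, 5000)), db))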

  7. Critical assessment of human metabolic pathway databases: a stepping stone for future integration

    PubMed Central

    2011-01-01

    Background Multiple pathway databases are available that describe the human metabolic network and have proven their usefulness in many applications, ranging from the analysis and interpretation of high-throughput data to their use as a reference repository. However, so far the various human metabolic networks described by these databases have not been systematically compared and contrasted, nor has the extent to which they differ been quantified. For a researcher using these databases for particular analyses of human metabolism, it is crucial to know the extent of the differences in content and their underlying causes. Moreover, the outcomes of such a comparison are important for ongoing integration efforts. Results We compared the genes, EC numbers and reactions of five frequently used human metabolic pathway databases. The overlap is surprisingly low, especially at the reaction level, where the databases agree on 3% of the 6968 reactions they have combined. Even for the well-established tricarboxylic acid cycle the databases agree on only 5 out of the 30 reactions in total. We identified the main causes for the lack of overlap. Importantly, the databases are partly complementary. Other explanations include the number of steps a conversion is described in and the number of possible alternative substrates listed. Missing metabolite identifiers and ambiguous names for metabolites also affect the comparison. Conclusions Our results show that each of the five networks compared provides us with a valuable piece of the puzzle of the complete reconstruction of the human metabolic network. To enable integration of the networks, in addition to the need to standardize metabolite names and identifiers, the conceptual differences between the databases should be resolved. Considerable manual intervention is required to reach the ultimate goal of a unified and biologically accurate model for studying the systems biology of human metabolism. Our comparison provides a stepping stone for such an endeavor. PMID:21999653
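    The kind of overlap analysis described above can be illustrated with a few lines of Python over placeholder reaction sets: count the reactions all databases agree on and compute pairwise Jaccard overlaps.

        # Minimal overlap analysis over made-up reaction identifiers, not actual database contents.
        from itertools import combinations

        reactions = {
            "DB_A": {"R1", "R2", "R3", "R4"},
            "DB_B": {"R2", "R3", "R5"},
            "DB_C": {"R3", "R6"},
        }

        union = set.union(*reactions.values())
        common = set.intersection(*reactions.values())
        print(f"agreed by all: {len(common)} of {len(union)} combined reactions")

        for a, b in combinations(sorted(reactions), 2):
            jac = len(reactions[a] & reactions[b]) / len(reactions[a] | reactions[b])
            print(f"{a} vs {b}: Jaccard = {jac:.2f}")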

  8. Understanding Differences in Administrative and Audited Patient Data in Cardiac Surgery: Comparison of the University HealthSystem Consortium and Society of Thoracic Surgeons Databases.

    PubMed

    Prasad, Anjali; Helder, Meghana R; Brown, Dwight A; Schaff, Hartzell V

    2016-10-01

    The University HealthSystem Consortium (UHC) administrative database has been used increasingly as a quality indicator for hospitals and even individual surgeons. We aimed to determine the accuracy of cardiac surgical data in the administrative UHC database vs data in the clinical Society of Thoracic Surgeons database. We reviewed demographic and outcomes information of patients with aortic valve replacement (AVR), mitral valve replacement (MVR), and coronary artery bypass grafting (CABG) surgery between January 1, 2012, and December 31, 2013. Data collected in aggregate and compared across the databases included case volume, physician specialty coding, patient age and sex, comorbidities, mortality rate, and postoperative complications. In these 2 years, the UHC database recorded 1,270 AVRs, 355 MVRs, and 1,473 CABGs. The Society of Thoracic Surgeons database case volumes were less by 2% to 12% (1,219 AVRs; 316 MVRs; and 1,442 CABGs). Errors in physician specialty coding occurred in UHC data (AVR, 0.6%; MVR, 0.8%; and CABG, 0.7%). In matched patients from each database, demographic age and sex information was identical. Although definitions differed in the databases, percentages of patients with at least one comorbidity were similar. Hospital mortality rates were similar as well, but postoperative recorded complications differed greatly. In comparing the 2 databases, we found similarity in patient demographic information and percentage of patients with comorbidities. The small difference in volumes of each operation type and the larger disparity in postoperative complications between the databases were related to differences in data definition, data collection, and coding errors. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  9. The mind-set of teens towards food communications revealed by conjoint measurement and multi-food databases.

    PubMed

    Foley, Michele; Beckley, Jacqueline; Ashman, Hollis; Moskowitz, Howard R

    2009-06-01

    We introduce a new type of study that combines self-profile of behaviors and attitudes regarding food together with responses to structured, systematically varied concepts about the food. We deal here with the responses of teens, for 28 different foods and beverages. The study creates a database that reveals how a person responds to different types of messaging about the food. We show how to develop the database for many different foods, from which one can compare foods to each other, or compare the performance of messages within a specific food.

  10. Tautomerism in chemical information management systems

    NASA Astrophysics Data System (ADS)

    Warr, Wendy A.

    2010-06-01

    Tautomerism has an impact on many of the processes in chemical information management systems including novelty checking during registration into chemical structure databases; storage of structures; exact and substructure searching in chemical structure databases; and depiction of structures retrieved by a search. The approaches taken by 27 different software vendors and database producers are compared. It is hoped that this comparison will act as a discussion document that could ultimately improve databases and software for researchers in the future.
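    A small, hedged sketch of why tautomer handling matters at registration time follows: if novelty checking keys on a canonical tautomer rather than on the structure as drawn, two drawn tautomers collapse to a single registry entry. The canonical_tautomer() routine below is a hypothetical placeholder for whatever standardization rules a real registration system applies.

        # Toy registration novelty check keyed on a canonical tautomer.
        def canonical_tautomer(smiles: str) -> str:
            # Placeholder: a real system would apply tautomer canonicalization rules here.
            known_equivalents = {"CC(O)=N": "CC(=O)N", "CC(=O)N": "CC(=O)N"}  # imidic acid -> amide
            return known_equivalents.get(smiles, smiles)

        registry = {}

        def register(smiles: str, name: str) -> str:
            key = canonical_tautomer(smiles)
            if key in registry:
                return f"duplicate of {registry[key]}"   # novelty check fails: same compound
            registry[key] = name
            return f"registered {name}"

        print(register("CC(=O)N", "acetamide"))
        print(register("CC(O)=N", "acetamide (imidic acid tautomer)"))   # caught as a duplicate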

  11. [Review of digital ground object spectral library].

    PubMed

    Zhou, Xiao-Hu; Zhou, Ding-Wu

    2009-06-01

    Higher spectral resolution is the main direction in the development of remote sensing technology, and establishing digital ground object reflectance spectral database libraries is an important fundamental research field for remote sensing applications. Remote sensing applications rely increasingly on ground object spectral characteristics, and quantitative analysis has advanced to a new stage. The present article summarizes and systematically introduces the current status and development trends of digital ground object reflectance spectral libraries in China and abroad in recent years. The spectral libraries that have been established are introduced, including the desertification spectral database library, plant spectral database library, geological spectral database library, soil spectral database library, mineral spectral database library, cloud spectral database library, snow spectral database library, atmosphere spectral database library, rock spectral database library, water spectral database library, meteorite spectral database library, moon rock spectral database library, man-made materials spectral database library, mixture spectral database library, volatile compound spectral database library, and liquid spectral database library. In the process of establishing these spectral libraries, some problems have emerged, such as the lack of a uniform national spectral database standard, the lack of uniform standards for ground object features, and limited comparability between different databases. In addition, data sharing mechanisms have not yet been put in place. This article also puts forward some suggestions regarding these problems.

  12. How does the size and shape of local populations in China compare to general anthropometric surveys currently used for product design?

    PubMed

    Daniell, Nathan; Fraysse, François; Paul, Gunther

    2012-01-01

    Anthropometry has long been used for a range of ergonomic applications & product design. Although products are often designed for specific cohorts, anthropometric data are typically sourced from large scale surveys representative of the general population. Additionally, few data are available for emerging markets like China and India. This study measured 80 Chinese males that were representative of a specific cohort targeted for the design of a new product. Thirteen anthropometric measurements were recorded and compared to two large databases that represented a general population, a Chinese database and a Western database. Substantial differences were identified between the Chinese males measured in this study and both databases. The subjects were substantially taller, heavier and broader than subjects in the older Chinese database. However, they were still substantially smaller, lighter and thinner than Western males. Data from current Western anthropometric surveys are unlikely to accurately represent the target population for product designers and manufacturers in emerging markets like China.

  13. O-GLYCBASE Version 3.0: a revised database of O-glycosylated proteins.

    PubMed Central

    Hansen, J E; Lund, O; Nilsson, J; Rapacki, K; Brunak, S

    1998-01-01

    O-GLYCBASE is a revised database of information on glycoproteins and their O-linked glycosylation sites. Entries are compiled and revised from the literature and from the sequence databases. Entries include information about species, sequence, glycosylation sites and glycan type, and are fully cross-referenced. Compared to version 2.0 the number of entries has increased by 20%. Sequence logos displaying the acceptor specificity patterns for the GalNAc, mannose and GlcNAc transferases are shown. The O-GLYCBASE database is available through the WWW at http://www.cbs.dtu.dk/databases/OGLYCBASE/ PMID:9399880

  14. An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave.

    PubMed

    Silva, Ikaro; Moody, George B

    The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access to over 4 TB of biomedical signals including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.

  15. Analysis of commercial and public bioactivity databases.

    PubMed

    Tiikkainen, Pekka; Franke, Lutz

    2012-02-27

    Activity data for small molecules are invaluable in chemoinformatics. Various bioactivity databases exist containing detailed information of target proteins and quantitative binding data for small molecules extracted from journals and patents. In the current work, we have merged several public and commercial bioactivity databases into one bioactivity metabase. The molecular presentation, target information, and activity data of the vendor databases were standardized. The main motivation of the work was to create a single relational database which allows fast and simple data retrieval by in-house scientists. Second, we wanted to know the amount of overlap between databases by commercial and public vendors to see whether the former contain data complementing the latter. Third, we quantified the degree of inconsistency between data sources by comparing data points derived from the same scientific article cited by more than one vendor. We found that each data source contains unique data which is due to different scientific articles cited by the vendors. When comparing data derived from the same article we found that inconsistencies between the vendors are common. In conclusion, using databases of different vendors is still useful since the data overlap is not complete. It should be noted that this can be partially explained by the inconsistencies and errors in the source data.
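    The cross-vendor consistency check described above can be sketched in a few lines of Python: activity values that two vendors attribute to the same article, compound and target are compared, and pairs differing by more than a tolerance are flagged. The record layout and the 0.3 log-unit tolerance are illustrative assumptions.

        # Flag inconsistent activity values reported by different vendors for the same
        # (article, compound, target) triple; all records below are made up.
        from collections import defaultdict
        from itertools import combinations

        records = [  # (vendor, article_doi, compound_id, target_id, pIC50)
            ("vendor1", "10.1000/j.1", "CHEM-1", "P12345", 7.2),
            ("vendor2", "10.1000/j.1", "CHEM-1", "P12345", 7.2),
            ("vendor1", "10.1000/j.1", "CHEM-2", "P12345", 6.0),
            ("vendor2", "10.1000/j.1", "CHEM-2", "P12345", 6.9),  # inconsistent extraction
        ]

        by_key = defaultdict(list)
        for vendor, doi, compound, target, value in records:
            by_key[(doi, compound, target)].append((vendor, value))

        for key, entries in by_key.items():
            for (va, xa), (vb, xb) in combinations(entries, 2):
                if abs(xa - xb) > 0.3:
                    print(f"inconsistent value for {key}: {va}={xa} vs {vb}={xb}")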

  16. microPIR2: a comprehensive database for human–mouse comparative study of microRNA–promoter interactions

    PubMed Central

    Piriyapongsa, Jittima; Bootchai, Chaiwat; Ngamphiw, Chumpol; Tongsima, Sissades

    2014-01-01

    microRNA (miRNA)–promoter interaction resource (microPIR) is a public database containing over 15 million predicted miRNA target sites located within human promoter sequences. These predicted targets are presented along with their related genomic and experimental data, making the microPIR database the most comprehensive repository of miRNA promoter target sites. Here, we describe major updates of the microPIR database including new target predictions in the mouse genome and revised human target predictions. The updated database (microPIR2) now provides ∼80 million human and 40 million mouse predicted target sites. In addition to being a reference database, microPIR2 is a tool for comparative analysis of target sites on the promoters of human–mouse orthologous genes. In particular, this new feature was designed to identify potential miRNA–promoter interactions conserved between species that could be stronger candidates for further experimental validation. We also incorporated additional supporting information to microPIR2 such as nuclear and cytoplasmic localization of miRNAs and miRNA–disease association. Extra search features were also implemented to enable various investigations of targets of interest. Database URL: http://www4a.biotec.or.th/micropir2 PMID:25425035

  17. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.
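    A small worked example of the relational option discussed above, using SQLite from Python's standard library, is given below; the table and column names are made up for illustration.

        # Minimal relational storage of a strain catalogue with SQLite.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE strains (
                           id INTEGER PRIMARY KEY,
                           name TEXT NOT NULL,
                           gene TEXT,
                           insertion_site INTEGER)""")
        con.executemany(
            "INSERT INTO strains (name, gene, insertion_site) VALUES (?, ?, ?)",
            [("mut-001", "unc-22", 1523001), ("mut-002", "dpy-10", 684201)],
        )
        con.commit()

        # A query that would be painful with flat files once the catalogue grows large.
        for row in con.execute("SELECT name, gene FROM strains WHERE insertion_site > 1e6"):
            print(row)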

  18. Comparing Unique Title Coverage of Web of Science and Scopus in Earth and Atmospheric Sciences

    ERIC Educational Resources Information Center

    Barnett, Philip; Lascar, Claudia

    2012-01-01

    The current journal titles in earth and atmospheric sciences that are unique to each of two databases, Web of Science and Scopus, were identified using different methods. Comparing by subject category shows that Scopus has hundreds of unique titles, and Web of Science just 16. The titles unique to each database have low SCImago Journal Rank…

  19. The assessment and interpretation of Demirjian, Goldstein and Tanner's dental maturity.

    PubMed

    Liversidge, Helen M

    2012-09-01

    A frequently reported advancement in dental maturity compared with the 50th percentile of Demirjian, Goldstein and Tanner (1973, Hum Biol 45:211-27) has been interpreted as a population difference. The aim of this study was to review the assessment and interpretation of Demirjian et al.'s dental maturity. Dental maturity of boys from published reports was compared as maturity curves and as differences from the 50th percentile in terms of chronological age and score. Dental maturity, as well as maturity of individual teeth, was compared in the fastest and slowest maturing groups of boys from the Chaillet database. Maturity curves from published reports by age category were broadly similar, and differences occurred at the steepest part of the curve. These differences were reduced when expressed as score rather than age. Many studies report a higher than expected score for chronological age, and the database contained more children than expected with scores above the 97th percentile. Revised scores for chronological age from this database were calculated (4072 males, 3958 females, aged 2.1-17.9 years). Most published reports were similar to the database's smoothed maturity curve. This method of dental maturity is designed to assess maturity for a single child and is unsuitable for comparing groups.

  20. [Morbidity and drug consumption. Comparison of results between the National Health Survey and electronic medical records].

    PubMed

    Aguilar-Palacio, Isabel; Carrera-Lasfuentes, Patricia; Poblador-Plou, Beatriz; Prados-Torres, Alexandra; Rabanaque-Hernández, M José

    2014-01-01

    To compare the prevalence of disease and drug consumption obtained by using the National Health Survey (NHS) with the information provided by the electronic medical records (EMR) in primary health care and the Pharmaceutical Consumption Registry in Aragon (Farmasalud) in the adult population in the province of Zaragoza. A cross-sectional study was performed to compare the prevalence of diseases in the NHS-2006 and in the EMR. The prevalence of drug consumption was obtained from the NHS-2006 and Farmasalud. Estimations using each database were compared with their 95% confidence intervals (95% CI) and the results were stratified by gender and age groups. The comparability of the databases was tested. According to the NHS, a total of 81.8% of the adults in the province of Zaragoza visited a physician in 2006. According to the EMR, 61.4% of adults visited a primary care physician. The most prevalent disease in both databases was hypertension (NHS: 21.5%, 95% CI: 19.4-23.9; EMR: 21.6%, 95% CI: 21.3-21.8). The greatest differences between the NHS and EMR were observed in the prevalence of depression, anxiety, and other mental illnesses (NHS: 10.9%; EMR: 26.6%). The most widely consumed drugs were analgesics. The prevalence of drug consumption differed in the two databases, with the greatest differences being found in pain medication (NHS: 23.3%; Farmasalud: 63.8%) and antibiotics (NHS: 3.4%; Farmasalud: 41.7%). These differences persisted after we stratified by gender and were especially important in the group aged more than 75 years. The prevalence of morbidity and drug consumption differed depending on the database employed. The use of different databases is recommended to estimate real prevalences. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.

  1. The Dutch Hospital Standardised Mortality Ratio (HSMR) method and cardiac surgery: benchmarking in a national cohort using hospital administration data versus a clinical database

    PubMed Central

    Siregar, S; Pouw, M E; Moons, K G M; Versteegh, M I M; Bots, M L; van der Graaf, Y; Kalkman, C J; van Herwerden, L A; Groenwold, R H H

    2014-01-01

    Objective: To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking of mortality across hospitals. Methods: Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted from The Netherlands Association for Cardio-Thoracic Surgery database and the Hospital Discharge Registry. The number of cardiac surgery interventions was compared between both databases. The European System for Cardiac Operative Risk Evaluation and hospital standardised mortality ratio models were updated in the study population and compared using the C-statistic, calibration plots and the Brier score. Results: The number of cardiac surgery interventions performed could not be assessed using the administrative database as the intervention code was incorrect in 1.4–26.3%, depending on the type of intervention. In 7.3% no intervention code was registered. The updated administrative model was inferior to the updated clinical model with respect to discrimination (C-statistic of 0.77 vs 0.85, p<0.001) and calibration (Brier score of 2.8% vs 2.6%, p<0.001, maximum score 3.0%). Two average performing hospitals according to the clinical model became outliers when benchmarking was performed using the administrative model. Conclusions: In cardiac surgery, administrative data are less suitable than clinical data for the purpose of benchmarking. The use of either administrative or clinical risk-adjustment models can affect the outlier status of hospitals. Risk-adjustment models including procedure-specific clinical risk factors are recommended. PMID:24334377
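
    As a rough illustration of the discrimination and calibration metrics used above, the sketch below computes a C-statistic and a Brier score from made-up predicted risks and outcomes; the numbers bear no relation to the study data.

```python
def c_statistic(y_true, y_prob):
    """Concordance (C-statistic): probability that a randomly chosen death
    receives a higher predicted risk than a randomly chosen survivor."""
    events = [p for y, p in zip(y_true, y_prob) if y == 1]
    non_events = [p for y, p in zip(y_true, y_prob) if y == 0]
    pairs = concordant = ties = 0
    for e in events:
        for n in non_events:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted risk and observed outcome."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# Toy data: 1 = in-hospital death, 0 = survival (illustrative only).
y = [0, 0, 1, 0, 1, 0, 0, 1]
p_clinical = [0.02, 0.05, 0.40, 0.10, 0.55, 0.03, 0.08, 0.30]
p_admin = [0.05, 0.20, 0.25, 0.15, 0.35, 0.10, 0.12, 0.20]

for name, p in [("clinical", p_clinical), ("administrative", p_admin)]:
    print(name, round(c_statistic(y, p), 3), round(brier_score(y, p), 4))
```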

  2. Cost considerations in database selection - A comparison of DIALOG and ESA/IRS

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    It is pointed out that there are many factors which affect the decision-making process in determining which databases should be selected for conducting the online search on a given topic. In many cases, however, the major consideration will be related to cost. The present investigation is concerned with a comparison of the costs involved in making use of DIALOG and the European Space Agency's Information Retrieval Service (ESA/IRS). The two services are very comparable in many respects. Attention is given to pricing structure, telecommunications, the number of databases, prints, time requirements, a table listing online costs for DIALOG and ESA/IRS, and differences in mounting databases. It is found that ESA/IRS is competitively priced when compared to DIALOG, and, despite occasionally higher telecommunications costs, may be even more economical to use in some cases.

  3. Scheduled Civil Aircraft Emission Inventories for 1999: Database Development and Analysis

    NASA Technical Reports Server (NTRS)

    Sutkus, Donald J., Jr.; Baughcum, Steven L.; DuBois, Douglas P.

    2001-01-01

    This report describes the development of a three-dimensional database of aircraft fuel burn and emissions (NOx, CO, and hydrocarbons) for the scheduled commercial aircraft fleet for each month of 1999. Global totals of emissions and fuel burn for 1999 are compared to global totals from 1992 and 2015 databases. 1999 fuel burn, departure and distance totals for selected airlines are compared to data reported on DOT Form 41 to evaluate the accuracy of the calculations. DOT Form T-100 data were used to determine typical payloads for freighter aircraft and this information was used to model freighter aircraft more accurately by using more realistic payloads. Differences in the calculation methodology used to create the 1999 fuel burn and emissions database from the methodology used in previous work are described and evaluated.

  4. Comparison of hospital databases on antibiotic consumption in France, for a single management tool.

    PubMed

    Henard, S; Boussat, S; Demoré, B; Clément, S; Lecompte, T; May, T; Rabaud, C

    2014-07-01

    The surveillance of antibiotic use in hospitals and of data on resistance is an essential measure for antibiotic stewardship. There are 3 national systems in France to collect data on antibiotic use: DREES, ICATB, and ATB RAISIN. We compared these databases and drafted recommendations for the creation of an optimized database of information on antibiotic use, available to all concerned personnel: healthcare authorities, healthcare facilities, and healthcare professionals. We processed and analyzed the 3 databases (2008 data), and surveyed users. The qualitative analysis demonstrated major discrepancies in terms of objectives, healthcare facilities, participation rate, units of consumption, conditions for collection, consolidation, and control of data, and delay before availability of results. The quantitative analysis revealed that the consumption data for a given healthcare facility differed from one database to another, challenging the reliability of data collection. We specified user expectations: to compare consumption and resistance data, to carry out benchmarking, to obtain data on the prescribing habits in healthcare units, or to help understand results. The study results demonstrated the need for a reliable, single, and automated tool to manage data on antibiotic consumption compared with resistance data on several levels (national, regional, healthcare facility, healthcare units), providing rapid local feedback and educational benchmarking. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  5. [Trauma and accident documentation in Germany compared with elsewhere in Europe].

    PubMed

    Probst, C; Richter, M; Haasper, C; Lefering, R; Otte, D; Oestern, H J; Krettek, C; Hüfner, T

    2008-07-01

    The role of trauma documentation has grown continuously since the 1970s. Prevention and management of injuries were adapted according to the results of many analyses. Since 1993 there have been two different trauma databases in Germany: the German trauma registry (TR) and the database of the Accident Research Unit (UFO). Modern computer applications improved the data processing. Our study analysed the pros and cons of each system and compared them with those of our European neighbours. We compared the TR and the UFO databases with respect to aims and goals, advantages and disadvantages, and current status. Results were reported as means +/- standard errors of the mean. The level of significance was set at P<0.05. There were differences between the two databases concerning number and types of items, aims and goals, and demographics. The TR documents care for severely injured patients and the clinical course of different types of accidents. The UFO describes traffic accidents, accident conditions, and interrelations. The German and British systems are similar, and the French system shows interesting differences. The German trauma documentation systems focus on different points. Therefore both can be used for substantiated analyses of different hypotheses. Certain intersections of both databases may help to answer very special questions in the future.

  6. MODBASE, a database of annotated comparative protein structure models

    PubMed Central

    Pieper, Ursula; Eswar, Narayanan; Stuart, Ashley C.; Ilyin, Valentin A.; Sali, Andrej

    2002-01-01

    MODBASE (http://guitar.rockefeller.edu/modbase) is a relational database of annotated comparative protein structure models for all available protein sequences matched to at least one known protein structure. The models are calculated by MODPIPE, an automated modeling pipeline that relies on PSI-BLAST, IMPALA and MODELLER. MODBASE uses the MySQL relational database management system for flexible and efficient querying, and the MODVIEW Netscape plugin for viewing and manipulating multiple sequences and structures. It is updated regularly to reflect the growth of the protein sequence and structure databases, as well as improvements in the software for calculating the models. For ease of access, MODBASE is organized into different datasets. The largest dataset contains models for domains in 304 517 out of 539 171 unique protein sequences in the complete TrEMBL database (23 March 2001); only models based on significant alignments (PSI-BLAST E-value < 10^-4) and models assessed to have the correct fold are included. Other datasets include models for target selection and structure-based annotation by the New York Structural Genomics Research Consortium, models for prediction of genes in the Drosophila melanogaster genome, models for structure determination of several ribosomal particles and models calculated by the MODWEB comparative modeling web server. PMID:11752309

  7. A generic method for improving the spatial interoperability of medical and ecological databases.

    PubMed

    Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M

    2017-10-03

    The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods, while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined. The latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, and containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical database and the ecological database in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than for most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to tackle the change of support problem.
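
    A minimal sketch of the transition-table idea follows. The unit codes, transition tables and birth counts are hypothetical; the real method involves one or more transition matrices per database plus formal spatial continuity and resolution checks that are not reproduced here.

```python
# Hypothetical transition tables linking each source spatial unit to a final
# spatial unit. Unit codes are invented; the paper's actual zoning systems
# (DRG hospital areas, INSEE units) are not reproduced here.
medical_to_final = {"HOSP_A1": "ZONE_01", "HOSP_A2": "ZONE_01", "HOSP_B1": "ZONE_02"}
ecological_to_final = {"IRIS_X": "ZONE_01", "IRIS_Y": "ZONE_02", "IRIS_Z": "ZONE_02"}

births_medical = {"HOSP_A1": 120, "HOSP_A2": 80, "HOSP_B1": 210}
births_ecological = {"IRIS_X": 195, "IRIS_Y": 100, "IRIS_Z": 115}

def aggregate(values, transition):
    """Sum a covariate over the final spatial units defined by a transition table."""
    out = {}
    for unit, value in values.items():
        final_unit = transition[unit]
        out[final_unit] = out.get(final_unit, 0) + value
    return out

med = aggregate(births_medical, medical_to_final)
eco = aggregate(births_ecological, ecological_to_final)

# Validation step: relative difference of the shared covariate (births) per final unit.
for zone in sorted(med):
    rel_diff = abs(med[zone] - eco[zone]) / eco[zone] * 100
    print(f"{zone}: medical={med[zone]}, ecological={eco[zone]}, diff={rel_diff:.1f}%")
```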

  8. Possibility of Database Research as a Means of Pharmacovigilance in Japan Based on a Comparison with Sertraline Postmarketing Surveillance.

    PubMed

    Hirano, Yoko; Asami, Yuko; Kuribayashi, Kazuhiko; Kitazaki, Shigeru; Yamamoto, Yuji; Fujimoto, Yoko

    2018-05-01

    Many pharmacoepidemiologic studies using large-scale databases have recently been conducted to evaluate the safety and effectiveness of drugs in Western countries. In Japan, however, conventional methodology has been applied to postmarketing surveillance (PMS) to collect safety and effectiveness information on new drugs to meet regulatory requirements. Conventional PMS entails enormous costs and resources despite being an uncontrolled observational study method. This study aimed to examine the possibility of database research as a more efficient pharmacovigilance approach by comparing a health care claims database and PMS with regard to the characteristics and safety profiles of sertraline-prescribed patients. The characteristics of sertraline-prescribed patients recorded in a large-scale Japanese health insurance claims database developed by MinaCare Co. Ltd. were scanned and compared with the PMS results. We also explored the possibility of detecting signals indicative of adverse reactions based on the claims database by using sequence symmetry analysis. Diabetes mellitus, hyperlipidemia, and hyperthyroidism served as exploratory events, and their detection criteria for the claims database were those reported by the Pharmaceuticals and Medical Devices Agency in Japan. Most of the characteristics of sertraline-prescribed patients in the claims database did not differ markedly from those in the PMS. There was no tendency for higher risks of the exploratory events after exposure to sertraline, and this was consistent with sertraline's known safety profile. Our results support the concept of using database research as a cost-effective pharmacovigilance tool that is free of selection bias. Further investigation using database research is required to confirm our preliminary observations. Copyright © 2018. Published by Elsevier Inc.
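
    The sequence symmetry analysis mentioned above can be sketched as follows. The patient timelines are invented, and the sketch computes only a crude sequence ratio (events recorded after versus before the first sertraline dispensing); adjustments for prescribing trends are omitted.

```python
from datetime import date

# Hypothetical patient timelines: first sertraline dispensing vs. first date an
# exploratory event (e.g. a diabetes marker) is recorded. Dates are invented.
patients = [
    {"drug": date(2016, 3, 1), "event": date(2016, 9, 10)},   # event after drug
    {"drug": date(2016, 5, 2), "event": date(2016, 1, 15)},   # event before drug
    {"drug": date(2017, 2, 1), "event": date(2017, 6, 20)},   # event after drug
    {"drug": date(2017, 7, 1), "event": None},                # event never recorded
]

def crude_sequence_ratio(records):
    """Crude sequence ratio: patients with the event recorded after the drug
    divided by patients with the event recorded before the drug."""
    after = sum(1 for r in records if r["event"] and r["event"] > r["drug"])
    before = sum(1 for r in records if r["event"] and r["event"] < r["drug"])
    return after / before if before else float("inf")

print(f"crude sequence ratio = {crude_sequence_ratio(patients):.2f}")
```

    A ratio close to 1 is consistent with no signal, which matches the study's negative findings for the exploratory events.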

  9. NUCFRG2: An evaluation of the semiempirical nuclear fragmentation database

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Shinn, J. L.; Badavi, F. F.; Chun, S. Y.; Norbury, J. W.; Zeitlin, C. J.; Heilbronn, L.; Miller, J.

    1995-01-01

    A semiempirical abrasion-ablation model has been successful in generating a large nuclear database for the study of high charge and energy (HZE) ion beams, radiation physics, and galactic cosmic ray shielding. The cross sections that are generated are compared with measured HZE fragmentation data from various experimental groups. A research program for improvement of the database generator is also discussed.

  10. Mapping the literature of transcultural nursing*

    PubMed Central

    Murphy, Sharon C.

    2006-01-01

    Overview: No bibliometric studies of the literature of the field of transcultural nursing have been published. This paper describes a citation analysis as part of the project undertaken by the Nursing and Allied Health Resources Section of the Medical Library Association to map the literature of nursing. Objective: The purpose of this study was to identify the core literature and determine which databases provided the most complete access to the transcultural nursing literature. Methods: Cited references from essential source journals were analyzed for a three-year period. Eight major databases were compared for indexing coverage of the identified core list of journals. Results: This study identifies 138 core journals. Transcultural nursing relies on journal literature from associated health sciences fields in addition to nursing. Books provide an important format. Nearly all cited references were from the previous 18 years. In comparing indexing coverage among 8 major databases, 3 databases rose to the top. Conclusions: No single database can claim comprehensive indexing coverage for this broad field. It is essential to search multiple databases. Based on this study, PubMed/MEDLINE, Social Sciences Citation Index, and CINAHL provide the best coverage. Collections supporting transcultural nursing require robust access to literature beyond nursing publications. PMID:16710461

  11. Privacy-preserving search for chemical compound databases.

    PubMed

    Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi

    2015-01-01

    Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
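
    The protocol itself is not reproduced here, but the additive-homomorphic property it builds on can be demonstrated with a toy Paillier cryptosystem (deliberately tiny primes, for illustration only; real deployments use moduli of 2048 bits or more).

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def paillier_keygen(p=293, q=433):
    """Toy Paillier keypair (tiny primes, illustration only)."""
    n = p * q
    g = n + 1
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^-1 mod n, with L(x) = (x - 1) // n.
    # pow(x, -1, n) needs Python 3.8+ for the modular inverse.
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = paillier_keygen()
c1, c2 = encrypt(pk, 12), encrypt(pk, 30)
# Additive homomorphism: multiplying ciphertexts adds the underlying plaintexts.
assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 42
print("E(12) * E(30) decrypts to 42")
```

    Because ciphertext multiplication corresponds to plaintext addition, encrypted counts or similarity terms can be accumulated server-side without revealing the query, which is the kind of operation an additive-homomorphic search protocol exploits.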

  12. Comparison of glaucoma diagnostic accuracy of macular ganglion cell complex thickness based on nonhighly myopic and highly myopic normative database

    PubMed Central

    Chen, Henry Shen-Lih; Liu, Chun-Hsiu; Lu, Da-Wen

    2016-01-01

    Background/Purpose: To evaluate and compare the diagnostic discriminative ability for detecting glaucoma in highly myopic eyes from a normative database of macular ganglion cell complex (mGCC) thickness based on nonhighly myopic and highly myopic normal eyes. Methods: Forty-nine eyes of 49 participants with high myopia (axial length ≥ 26.0 mm) were enrolled. Spectral-domain optical coherence tomography scans were done using RS-3000, and the mGCC thickness/significance maps within a 9-mm diameter circle were generated using built-in software. We compared the difference of sensitivity, specificity, and diagnostic accuracy between the nonhighly myopic database and the highly myopic database for differentiating the early glaucomatous eyes from the nonglaucomatous eyes. Results: This study enrolled 15 normal eyes and 34 eyes with glaucoma. The mean mGCC thickness of the glaucoma group was significantly less than that of the normal group (p < 0.001). Sensitivity was 96.3%, and the specificity was 50.0% when using the nonhighly myopic normative database. When the highly myopic normative database was used, the sensitivity was 88.9%, and the specificity was 90.0%. The false positive rate was significantly lower when using the highly myopic normative database (p < 0.05). Conclusion: The evaluations of glaucoma in eyes with high myopia using a nonhighly myopic normative database may lead to a frequent misdiagnosis. When evaluating glaucoma in high myopic eyes, the mGCC thickness determined by the long axial length high myopic normative database should be applied. PMID:29018704

  13. Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.

    PubMed

    Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich

    2017-08-01

    Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms, Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated it in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.
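
    The headline metric in this comparison, the recognition rate when reviewing the 10 highest-ranking candidates, can be computed as in the sketch below; the per-query ranks are invented and do not correspond to any of the four algorithms tested.

```python
def recognition_rate(rankings, max_rank=10):
    """Fraction of query images whose true match appears within the top
    `max_rank` candidates returned by the matching algorithm."""
    hits = sum(1 for rank in rankings if rank is not None and rank <= max_rank)
    return hits / len(rankings)

# Hypothetical output of two algorithms: for each query image, the rank at which
# the correct individual was returned (None = not returned at all).
ranks_algorithm_a = [1, 1, 3, 1, 12, None, 2, 1, 7, 1]
ranks_algorithm_b = [1, 4, None, 2, None, None, 1, 9, None, 3]

for name, ranks in [("A", ranks_algorithm_a), ("B", ranks_algorithm_b)]:
    print(f"algorithm {name}: top-10 recognition rate = {recognition_rate(ranks):.0%}")
```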

  14. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650

  15. Survival in commercially insured multiple sclerosis patients and comparator subjects in the U.S.

    PubMed

    Kaufman, D W; Reshef, S; Golub, H L; Peucker, M; Corwin, M J; Goodin, D S; Knappertz, V; Pleimes, D; Cutter, G

    2014-05-01

    Compare survival in patients with multiple sclerosis (MS) from a U.S. commercial health insurance database with a matched cohort of non-MS subjects. 30,402 MS patients and 89,818 non-MS subjects (comparators) in the OptumInsight Research (OIR) database from 1996 to 2009 were included. An MS diagnosis required at least 3 consecutive months of database reporting, with two or more ICD-9 codes of 340 at least 30 days apart, or the combination of 1 ICD-9-340 code and at least 1 MS disease-modifying treatment (DMT) code. Comparators required the absence of ICD-9-340 and DMT codes throughout database reporting. Up to three comparators were matched to each patient for: age in the year of the first relevant code (index year - at least 3 months of reporting in that year were required); sex; region of residence in the index year. Deaths were ascertained from the National Death Index and the Social Security Administration Death Master File. Subjects not identified as deceased were assumed to be alive through the end of 2009. Annual mortality rates were 899/100,000 among MS patients and 446/100,000 among comparators. Standardized mortality ratios compared to the U.S. population were 1.70 and 0.80, respectively. Kaplan-Meier analysis yielded a median survival from birth that was 6 years lower among MS patients than among comparators. The results show, for the first time in a U.S. population, a survival disadvantage for contemporary MS patients compared to non-MS subjects from the same healthcare system. The 6-year decrement in lifespan parallels a recent report from British Columbia. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Clinical decision support tools: personal digital assistant versus online dietary supplement databases.

    PubMed

    Clauson, Kevin A; Polen, Hyla H; Peak, Amy S; Marsh, Wallace A; DiScala, Sandra L

    2008-11-01

    Clinical decision support tools (CDSTs) on personal digital assistants (PDAs) and online databases assist healthcare practitioners who make decisions about dietary supplements. The objective was to assess and compare the content of PDA dietary supplement databases and their online counterparts used as CDSTs. A total of 102 question-and-answer pairs were developed within 10 weighted categories of the most clinically relevant aspects of dietary supplement therapy. PDA versions of AltMedDex, Lexi-Natural, Natural Medicines Comprehensive Database, and Natural Standard and their online counterparts were assessed by scope (percent of correct answers present), completeness (3-point scale), ease of use, and a composite score integrating all 3 criteria. Descriptive statistics and inferential statistics, including a chi-square test, Scheffé's multiple comparison test, McNemar's test, and the Wilcoxon signed-rank test, were used to analyze the data. The scope scores for PDA databases were: Natural Medicines Comprehensive Database 84.3%, Natural Standard 58.8%, Lexi-Natural 50.0%, and AltMedDex 36.3%, with Natural Medicines Comprehensive Database statistically superior (p < 0.01). Completeness scores were: Natural Medicines Comprehensive Database 78.4%, Natural Standard 51.0%, Lexi-Natural 43.5%, and AltMedDex 29.7%. Lexi-Natural was superior in ease of use (p < 0.01). Composite scores for PDA databases were: Natural Medicines Comprehensive Database 79.3, Natural Standard 53.0, Lexi-Natural 48.0, and AltMedDex 32.5, with Natural Medicines Comprehensive Database superior (p < 0.01). There was no difference between the scope for PDA and online database pairs with Lexi-Natural (50.0% and 53.9%, respectively) or Natural Medicines Comprehensive Database (84.3% and 84.3%, respectively) (p > 0.05), whereas differences existed for AltMedDex (36.3% vs 74.5%, respectively) and Natural Standard (58.8% vs 80.4%, respectively) (p < 0.01). For composite scores, AltMedDex and Natural Standard online were better than their PDA counterparts (p < 0.01). Natural Medicines Comprehensive Database achieved significantly higher scope, completeness, and composite scores compared with other dietary supplement PDA CDSTs in this study. There was no difference between the PDA and online databases for Lexi-Natural and Natural Medicines Comprehensive Database, whereas online versions of AltMedDex and Natural Standard were significantly better than their PDA counterparts.

  17. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2002-08-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, and relational databases, as well as ACeDB. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system.

  18. Nursing Child Assessment Satellite Training Parent-Child Interaction Scales: Comparing American and Canadian Normative and High-Risk Samples.

    PubMed

    Letourneau, Nicole L; Tryphonopoulos, Panagiota D; Novick, Jason; Hart, J Martha; Giesbrecht, Gerald; Oxford, Monica L

    Many nurses rely on the American Nursing Child Assessment Satellite Training (NCAST) Parent-Child Interaction (PCI) Teaching and Feeding Scales to identify and target interventions for families affected by severe/chronic stressors (e.g. postpartum depression (PPD), intimate partner violence (IPV), low-income). However, the NCAST Database that provides normative data for comparisons may not apply to Canadian families. The purpose of this study was to compare NCAST PCI scores in Canadian and American samples and to assess the reliability of the NCAST PCI Scales in Canadian samples. This secondary analysis employed independent samples t-tests (p < 0.005) to compare PCI between the American NCAST Database and Canadian high-risk (families with PPD, exposure to IPV or low-income) and community samples. Cronbach's alphas were calculated for the Canadian and American samples. In both American and Canadian samples, belonging to a high-risk population reduced parents' abilities to engage in sensitive and responsive caregiving (i.e. healthy serve and return relationships) as measured by the PCI Scales. NCAST Database mothers were more effective at executing caregiving responsibilities during PCI compared to the Canadian community sample, while infants belonging to the Canadian community sample provided clearer cues to caregivers during PCI compared to those of the NCAST Database. Internal consistency coefficients for the Canadian samples were generally acceptable. The NCAST Database can be reliably used for assessing PCI in normative and high-risk Canadian families. Canadian nurses can be assured that the PCI Scales adequately identify risks and can help target interventions to promote optimal parent-child relationships and ultimately child development. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.

  19. Are Bibliographic Management Software Search Interfaces Reliable?: A Comparison between Search Results Obtained Using Database Interfaces and the EndNote Online Search Function

    ERIC Educational Resources Information Center

    Fitzgibbons, Megan; Meert, Deborah

    2010-01-01

    The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…

  20. OperomeDB: A Database of Condition-Specific Transcription Units in Prokaryotic Genomes.

    PubMed

    Chetal, Kashish; Janga, Sarath Chandra

    2015-01-01

    Background. In prokaryotic organisms, a substantial fraction of adjacent genes are organized into operons: codirectionally organized genes that share a common promoter and terminator. Although several available operon databases provide information with varying levels of reliability, very few resources provide experimentally supported results. Therefore, we believe that the biological community could benefit from having a new operon prediction database with operons predicted using next-generation RNA-seq datasets. Description. We present operomeDB, a database that provides an ensemble of all the predicted operons for bacterial genomes using available RNA-sequencing datasets across a wide range of experimental conditions. Although several studies have recently confirmed that prokaryotic operon structure is dynamic with significant alterations across environmental and experimental conditions, there are no comprehensive databases for studying such variations across prokaryotic transcriptomes. Currently, our database contains nine bacterial organisms and 168 transcriptomes for which we predicted operons. The user interface is simple and easy to use, in terms of visualization, downloading, and querying of data. In addition, because of its ability to load custom datasets, users can also compare their datasets with publicly available transcriptomic data of an organism. Conclusion. OperomeDB as a database should not only aid experimental groups working on transcriptome analysis of specific organisms but also enable studies related to computational and comparative operomics.

  1. MEDLINE versus EMBASE and CINAHL for telemedicine searches.

    PubMed

    Bahaadinbeigy, Kambiz; Yogesan, Kanagasingam; Wootton, Richard

    2010-10-01

    Researchers in the domain of telemedicine throughout the world tend to search multiple bibliographic databases to retrieve the highest possible number of publications when conducting review projects. Medical Literature Analysis and Retrieval System Online (MEDLINE), Excerpta Medica Database (EMBASE), and Cumulative Index to Nursing and Allied Health Literature (CINAHL) are three popular databases in the discipline of biomedicine that are used for conducting reviews. Access to the MEDLINE database is free and easy, whereas EMBASE and CINAHL are not free and sometimes not easy to access for researchers in small research centers. This project sought to compare MEDLINE with EMBASE and CINAHL to estimate what proportion of potentially relevant publications would be missed when only MEDLINE is used in a review project, in comparison to when EMBASE and CINAHL are also used. Twelve simple keywords relevant to 12 different telemedicine applications were searched using all three databases, and the results were compared. About 9%-18% of potentially relevant articles would have been missed if MEDLINE had been the only database used. It is preferable if all three or more databases are used when conducting a review in telemedicine. Researchers from developing countries or small research institutions could rely on only MEDLINE, but they would lose 9%-18% of the potentially relevant publications. Searching MEDLINE alone is not ideal, but in a resource-constrained situation, it is definitely better than nothing.

  2. ERIC: Sphinx or Golden Griffin?

    ERIC Educational Resources Information Center

    Lopez, Manuel D.

    1989-01-01

    Evaluates the Educational Resources Information Center (ERIC) database. Summarizes ERIC's history and organization, and discusses criticisms concerning access, currency, and database content. Reviews role of component clearinghouses, indexing practices, thesaurus structure, international coverage, and comparative studies. Finds ERIC a valuable…

  3. Genotoxicity testing: progress and prospects for the next decade.

    PubMed

    Turkez, Hasan; Arslan, Mehmet E; Ozdemir, Ozlem

    2017-10-01

    Genotoxicity and mutagenicity analyses have a significant role in the identification of the hazardous effects of therapeutic drugs, cosmetics, agrochemicals, industrial compounds, food additives, natural toxins and nanomaterials for regulatory purposes. To evaluate mutagenicity or genotoxicity, different in vitro and in vivo methodologies assess various genotoxicological endpoints, such as point mutations and changes in the number and structure of chromosomes. Areas covered: This review covered the basics of genotoxicity and the in vitro/in vivo methods for determining genetic damage. The limitations that have arisen as a result of the common use of these methods were also discussed. Finally, perspectives on the further use of genotoxicity testing and genotoxic mode of action were emphasized. Expert opinion: Solving the practical problems of genetic toxicology ultimately depends on understanding DNA damage mechanisms at the molecular, subcellular, cellular, organ, system and organism levels. Current strategies to investigate human health risks should be modified to deliver more reliable results, and new techniques such as toxicogenomics, epigenomics and single-cell approaches must be integrated into genetic safety evaluations. New biomarkers identified by these omics techniques will strengthen genotoxicity assessment and help reduce cancer risk.

  4. Alterations of gene expression indicating effects on estrogen signaling and lipid homeostasis in seabream hepatocytes exposed to extracts of seawater sampled from a coastal area of the central Adriatic Sea (Italy).

    PubMed

    Cocci, Paolo; Capriotti, Martina; Mosconi, Gilberto; Campanelli, Alessandra; Frapiccini, Emanuela; Marini, Mauro; Caprioli, Giovanni; Sagratini, Gianni; Aretusi, Graziano; Palermo, Francesco Alessandro

    2017-02-01

    Recent evidence suggests that the toxicological effects of endocrine disrupting chemicals (EDCs) involve multiple nuclear receptor-mediated pathways, including estrogen receptor (ER) and peroxisome proliferator-activated receptor (PPAR) signaling systems. Thus, our objective in this study was to detect the summated endocrine effects of EDCs with metabolic activity in coastal waters of the central Adriatic Sea by means of a toxicogenomic approach using seabream hepatocytes. Gene expression patterns were also correlated with seawater levels of polychlorinated biphenyls (PCBs) and polycyclic aromatic hydrocarbons (PAHs). We found that seawater extracts taken at certain areas induced gene expression profiles of ERα/vitellogenin, PPARα/Stearoyl-CoA desaturase 1A, cytochrome P4501A (CYP1A) and metallothionein. These increased biomarker responses correlated with the spatial distribution of PAH/PCB concentrations observed by chemical analysis in the different study areas. Collectively, our data give a snapshot of the presence of complex EDC mixtures that are able to perturb metabolic signaling in coastal marine waters. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Hydra as a model organism to decipher the toxic effects of copper oxide nanorod: Eco-toxicogenomics approach.

    PubMed

    Murugadas, Anbazhagan; Zeeshan, Mohammed; Thamaraiselvi, Kaliannan; Ghaskadbi, Surendra; Akbarsha, Mohammad Abdulkader

    2016-07-15

    Nanotechnology has emerged as a powerful field of applied research. However, the potential toxicity of nanomaterials is a cause for concern. A thorough toxicological investigation is required before a nanomaterial is evaluated for application of any kind. In this context, there is a concerted effort to find appropriate test systems to assess the toxicity of nanomaterials. The toxicity of a nanomaterial greatly depends on its physicochemical properties and the biological system with which it interacts. The present research was carried out to generate data on the eco-toxicological impacts of copper oxide nanorods (CuO NR) in Hydra magnipapillata 105 at the organismal, cellular and molecular levels. Exposure of hydra to CuO NR resulted in severe morphological alterations in a concentration- as well as duration-dependent manner. Impairment of feeding, population growth, and regeneration was also observed. In vivo and in vitro analyses revealed induction of oxidative stress, genotoxicity, and the molecular machinery of apoptotic cell death, accompanied by disruption of cell cycle progression. Taken together, CuO nanorods are potentially toxic to biological systems. Hydra also shows potential as a convenient model organism for aquatic ecotoxicological risk assessment of nanomaterials.

  6. Hydra as a model organism to decipher the toxic effects of copper oxide nanorod: Eco-toxicogenomics approach

    PubMed Central

    Murugadas, Anbazhagan; Zeeshan, Mohammed; Thamaraiselvi, Kaliannan; Ghaskadbi, Surendra; Akbarsha, Mohammad Abdulkader

    2016-01-01

    Nanotechnology has emerged as a powerful field of applied research. However, the potential toxicity of nanomaterials is a cause for concern. A thorough toxicological investigation is required before a nanomaterial is evaluated for application of any kind. In this context, there is a concerted effort to find appropriate test systems to assess the toxicity of nanomaterials. The toxicity of a nanomaterial greatly depends on its physicochemical properties and the biological system with which it interacts. The present research was carried out to generate data on the eco-toxicological impacts of copper oxide nanorods (CuO NR) in Hydra magnipapillata 105 at the organismal, cellular and molecular levels. Exposure of hydra to CuO NR resulted in severe morphological alterations in a concentration- as well as duration-dependent manner. Impairment of feeding, population growth, and regeneration was also observed. In vivo and in vitro analyses revealed induction of oxidative stress, genotoxicity, and the molecular machinery of apoptotic cell death, accompanied by disruption of cell cycle progression. Taken together, CuO nanorods are potentially toxic to biological systems. Hydra also shows potential as a convenient model organism for aquatic ecotoxicological risk assessment of nanomaterials. PMID:27417574

  7. Toxicogenomic investigation of Tetrahymena thermophila exposed to dichlorodiphenyltrichloroethane (DDT), tributyltin (TBT), and 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD).

    PubMed

    Chang, Yue; Feng, LiFang; Miao, Wei

    2011-07-01

    Dichlorodiphenyltrichloroethane (DDT), tributyltin (TBT), and 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) are persistent in the environment and cause continuous toxic effects in humans and aquatic life. Tetrahymena thermophila has the potential for use as a model for research regarding toxicants. In this study, this organism was used to generate and analyze genome-wide microarray data from cells exposed to DDT, TBT and TCDD. To accomplish this, genes differentially expressed when treated with each toxicant were identified, after which their functions were categorized using GO enrichment analysis. The results suggested that the responses of T. thermophila were similar to those of multicellular organisms. Additionally, the context likelihood of relatedness (CLR) method was applied to construct a TCDD-relevant network. The T-shaped network obtained could be functionally divided into two subnetworks. The general functions of both subnetworks were related to the epigenetic mechanism of TCDD. Based on analysis of the networks, a model of the TCDD effect on T. thermophila was inferred. Thus, Tetrahymena has the potential to be a good unicellular eukaryotic model for toxic mechanism research at the genome level.
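
    The CLR scoring step referred to above is sketched below with toy data: pairwise mutual information is estimated from binned expression profiles and then converted to per-gene background z-scores, which are combined into an edge score. The matrix size, bin count and random data are illustrative only and do not reflect the study's microarray design.

```python
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(6, 40))  # toy matrix: 6 genes x 40 arrays (illustrative only)

def mutual_information(x, y, bins=5):
    """Histogram-based mutual information between two expression profiles."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

n = expr.shape[0]
mi = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        mi[i, j] = mi[j, i] = mutual_information(expr[i], expr[j])

# CLR step: z-score each MI value against the background distribution of its
# two genes, clip at zero, and combine into a joint edge score.
mean, std = mi.mean(axis=1), mi.std(axis=1) + 1e-12
z = np.maximum((mi - mean[:, None]) / std[:, None], 0)
clr_score = np.sqrt(z ** 2 + z.T ** 2)
print(np.round(clr_score, 2))
```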

  8. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    PubMed

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk space and memory. YAdumper was developed as a purpose-specific tool for the complete, structured download of information from relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
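
    YAdumper's own template mechanism is not shown here; the sketch below only illustrates the underlying task of streaming rows from a relational table into a flat XML document, using the Python standard library with an invented table and records.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Build a tiny in-memory relational table standing in for the source database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE protein (id INTEGER PRIMARY KEY, name TEXT, organism TEXT)")
conn.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                 [(1, "TP53", "Homo sapiens"), (2, "Cdkn1a", "Mus musculus")])

# Stream the rows into a flat XML document, one element per record.
root = ET.Element("proteins")
for pid, name, organism in conn.execute("SELECT id, name, organism FROM protein"):
    rec = ET.SubElement(root, "protein", id=str(pid))
    ET.SubElement(rec, "name").text = name
    ET.SubElement(rec, "organism").text = organism

print(ET.tostring(root, encoding="unicode"))
```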

  9. System, method and apparatus for conducting a keyterm search

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    A keyterm search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more keyterms. Next, a gleaning model of the query is created. The gleaning model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
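
    The patented method is only paraphrased in this abstract, so the sketch below is a loose approximation: it stands in for the "relational models" with windowed term co-occurrence counts and compares the query's model to each subset's model using cosine similarity, which is not necessarily the inventor's scoring function. All texts and names are invented.

```python
import math
from collections import Counter

def relational_model(text, window=4):
    """Approximate a 'relational model' as counts of term pairs co-occurring
    within a sliding window (a simplification, not the patented formulation)."""
    terms = text.lower().split()
    pairs = Counter()
    for i in range(len(terms)):
        for j in range(i + 1, min(i + window, len(terms))):
            pairs[tuple(sorted((terms[i], terms[j])))] += 1
    return pairs

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented document subsets standing in for the modeled portions of a database.
subsets = {
    "report_12": "engine fire warning during climb after takeoff",
    "report_37": "hydraulic pressure loss on landing gear extension",
}
models = {name: relational_model(text) for name, text in subsets.items()}

query_model = relational_model("engine fire warning")
ranked = sorted(((cosine(query_model, m), name) for name, m in models.items()), reverse=True)
print(ranked)  # subsets ranked by relevance to the keyterm query
```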

  10. System, method and apparatus for conducting a phrase search

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    A phrase search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more sequences of terms. Next, a relational model of the query is created. The relational model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.

  11. Database for LDV Signal Processor Performance Analysis

    NASA Technical Reports Server (NTRS)

    Baker, Glenn D.; Murphy, R. Jay; Meyers, James F.

    1989-01-01

    A comparative and quantitative analysis of various laser velocimeter signal processors is difficult because standards for characterizing signal bursts have not been established. This leaves the researcher to select a signal processor based only on manufacturers' claims without the benefit of direct comparison. The present paper proposes the use of a database of digitized signal bursts obtained from a laser velocimeter under various configurations as a method for directly comparing signal processors.

  12. Total choline and choline-containing moieties of commercially available pulses.

    PubMed

    Lewis, Erin D; Kosik, Sarah J; Zhao, Yuan-Yuan; Jacobs, René L; Curtis, Jonathan M; Field, Catherine J

    2014-06-01

    Estimating dietary choline intake can be challenging due to missing foods in the current United States Department of Agriculture (USDA) database. The objectives of the study were to quantify the choline-containing moieties and the total choline content of a variety of pulses available in North America and use the expanded compositional database to determine the potential contribution of pulses to dietary choline intake. Commonly consumed pulses (n = 32) were analyzed by hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC LC-MS/MS) and compared to the current USDA database. Cooking was found to reduce the relative contribution of free choline and to increase the contribution of phosphatidylcholine to total choline for most pulses (P < 0.05). Using the expanded database to estimate the choline content of recipes with pulses as meat alternatives resulted in a different estimate of choline content per serving (±30%) compared to the USDA database. These results suggest that when pulses are a large part of a meal or diet, accurate food composition data should be used.

  13. Visualization and manipulating the image of a formal data structure (FDS)-based database

    NASA Astrophysics Data System (ADS)

    Verdiesen, Franc; de Hoop, Sylvia; Molenaar, Martien

    1994-08-01

    A vector map is a terrain representation with a vector-structured geometry. Molenaar formulated an object-oriented formal data structure for 3D single-valued vector maps. This FDS is implemented in a database (Oracle). In this study, we describe a methodology for visualizing an FDS-based database and manipulating the image. A data set retrieved by querying the database is converted into an import file for a drawing application. One objective of this study is to allow an end user to alter and add terrain objects in the image. The drawing application creates an export file, which is compared with the import file. Differences between these files result in updates to the database, which involve consistency checks. In this study, Autocad is used for visualizing and manipulating the image of the data set. A computer program has been written for the data exchange and conversion between Oracle and Autocad. The data structure of the FDS is compared with the data structure of Autocad, and the FDS data are converted into the equivalent Autocad structure.

  14. Nucleotide Sequence Database Comparison for Routine Dermatophyte Identification by Internal Transcribed Spacer 2 Genetic Region DNA Barcoding.

    PubMed

    Normand, A C; Packeu, A; Cassagne, C; Hendrickx, M; Ranque, S; Piarroux, R

    2018-05-01

    Conventional dermatophyte identification is based on morphological features. However, recent studies have proposed to use the nucleotide sequences of the rRNA internal transcribed spacer (ITS) region as an identification barcode of all fungi, including dermatophytes. Several nucleotide databases are available to compare sequences and thus identify isolates; however, these databases often contain mislabeled sequences that impair sequence-based identification. We evaluated five of these databases on a clinical isolate panel. We selected 292 clinical dermatophyte strains that were prospectively subjected to an ITS2 nucleotide sequence analysis. Sequences were analyzed against the databases, and the results were compared to clusters obtained via DNA alignment of sequence segments. The DNA tree served as the identification standard throughout the study. According to the ITS2 sequence identification, the majority of strains (255/292) belonged to the genus Trichophyton, mainly T. rubrum complex (n = 184), T. interdigitale (n = 40), T. tonsurans (n = 26), and T. benhamiae (n = 5). Other genera included Microsporum (e.g., M. canis [n = 21], M. audouinii [n = 10], Nannizzia gypsea [n = 3], and Epidermophyton [n = 3]). Species-level identification of T. rubrum complex isolates was an issue. Overall, ITS DNA sequencing is a reliable tool to identify dermatophyte species given that a comprehensive and correctly labeled database is consulted. Since many inaccurate identification results exist in the DNA databases used for this study, reference databases must be verified frequently and amended in line with the current revisions of fungal taxonomy. Before describing a new species or adding a new DNA reference to the available databases, its position in the phylogenetic tree must be verified. Copyright © 2018 American Society for Microbiology.

  15. Exploring performance issues for a clinical database organized using an entity-attribute-value representation.

    PubMed

    Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L

    2000-01-01

    The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema to store heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventionally based database schemas. To perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. Performance in the EAV/CR model was approximately three to five times less efficient than its conventional counterpart for attribute-centered queries. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated using multiple, simple SQL statements executed in batch were more efficient than single, large SQL statements. This paper describes a pilot project to explore issues in and compare query performance for EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware or more memory, or both.
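
    The performance asymmetry described above comes from how the two schemas answer attribute-centered questions. The sketch below sets up both layouts with hypothetical microbiology-flavored tables (not the study's actual schema) and runs the same question against each; the EAV version needs one self-join per attribute tested.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Conventional (column-per-attribute) layout -- hypothetical, not the paper's schema.
conn.execute("CREATE TABLE culture (id INTEGER, organism TEXT, specimen TEXT)")
conn.executemany("INSERT INTO culture VALUES (?, ?, ?)",
                 [(1, "E. coli", "urine"), (2, "S. aureus", "blood")])

# EAV layout: one row per (entity, attribute, value) triple.
conn.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "organism", "E. coli"), (1, "specimen", "urine"),
    (2, "organism", "S. aureus"), (2, "specimen", "blood"),
])

# Attribute-centered question: which blood cultures grew S. aureus?
conventional = conn.execute(
    "SELECT id FROM culture WHERE organism = 'S. aureus' AND specimen = 'blood'"
).fetchall()

# The same question in EAV requires one self-join per attribute tested.
eav = conn.execute("""
    SELECT o.entity
    FROM eav AS o
    JOIN eav AS s ON s.entity = o.entity
    WHERE o.attribute = 'organism' AND o.value = 'S. aureus'
      AND s.attribute = 'specimen' AND s.value = 'blood'
""").fetchall()

print(conventional, eav)  # both return entity 2
```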

  16. Evaluation of contents-based image retrieval methods for a database of logos on drug tablets

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien

    2001-02-01

    In this research, we evaluated different methods for content-based image retrieval of logos on drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we compared several retrieval methods. Two of these methods were available from the commercial packages QBIC and Imatch, whose content-based image retrieval implementations are not exactly known. We compared the results for this database with the MPEG-7 shape comparison methods, namely the contour-shape, bounding-box and region-based shape methods. In addition, we tested the log-polar method developed in our own research.

  17. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
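
    A minimal sketch (not the authors' implementation) of the two storage styles compared here: a schemaless JSON document, as a document-oriented NoSQL store would persist it, versus the same hierarchical clinical record as XML queried by path. The record fields are invented for illustration.

```python
import json
import xml.etree.ElementTree as ET

# Document-style (NoSQL) record: nested, schemaless, stored and retrieved as a whole.
doc = {"patient_id": "P001",
       "encounters": [{"date": "2012-03-01",
                       "labs": [{"test": "glucose", "value": 5.4, "unit": "mmol/L"}]}]}
blob = json.dumps(doc)                      # what a document store would persist
glucose = json.loads(blob)["encounters"][0]["labs"][0]["value"]

# XML representation of the same hierarchy, queried with an XPath-style path.
xml_record = """
<patient id="P001">
  <encounter date="2012-03-01">
    <lab test="glucose" unit="mmol/L">5.4</lab>
  </encounter>
</patient>"""
root = ET.fromstring(xml_record)
xml_glucose = float(root.find("./encounter/lab[@test='glucose']").text)

print(glucose, xml_glucose)   # both paths reach the same value
```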

  18. PylotDB - A Database Management, Graphing, and Analysis Tool Written in Python

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    2012-01-04

    PylotDB, written completely in Python, provides a user interface (UI) with which to interact with, analyze, graph data from, and manage open source databases such as MySQL. The UI spares the user from needing in-depth knowledge of the database application programming interface (API). PylotDB allows the user to generate various kinds of plots from user-selected data; generate statistical information on text as well as numerical fields; back up and restore databases; compare database tables across different databases as well as across different servers; extract information from any field to create new fields; generate, edit, and delete databases, tables, and fields; generate or read CSV data into a table; and perform similar operations. Since much of the database information is brought under the control of the Python language, PylotDB is not intended for huge databases, for which MySQL and Oracle, for example, are better suited. PylotDB is better suited for the smaller databases typically needed in a small research group. It can also be used as a learning tool for database applications in general.

  19. Molecular Identification and Databases in Fusarium

    USDA-ARS?s Scientific Manuscript database

    DNA sequence-based methods for identifying pathogenic and mycotoxigenic Fusarium isolates have become the gold standard worldwide. Moreover, fusarial DNA sequence data are increasing rapidly in several web-accessible databases for comparative purposes. Unfortunately, the use of Basic Alignment Sea...

  20. The CSB Incident Screening Database: description, summary statistics and uses.

    PubMed

    Gomez, Manuel R; Casper, Susan; Smith, E Allen

    2008-11-15

    This paper briefly describes the Chemical Incident Screening Database currently used by the CSB to identify and evaluate chemical incidents for possible investigations, and summarizes descriptive statistics from this database that can potentially help to estimate the number, character, and consequences of chemical incidents in the US. The report compares some of the information in the CSB database to roughly similar information available from databases operated by EPA and the Agency for Toxic Substances and Disease Registry (ATSDR), and explores the possible implications of these comparisons with regard to the dimension of the chemical incident problem. Finally, the report explores in a preliminary way whether a system modeled after the existing CSB screening database could be developed to serve as a national surveillance tool for chemical incidents.

  1. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    NASA Astrophysics Data System (ADS)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them appropriately. In this submission, descriptions of selected NoSQL databases are presented. Each of the databases is analysed with a primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited references. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, a new approach to the problem of inconsistent references in Big Data storage systems is proposed.
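
    To make the "limited references" point concrete, here is a small sketch (illustrative data, not from the paper) of an application-side join: in a document store, a reference is just a stored key that the application must resolve itself, with no foreign-key constraint keeping it consistent.

```python
# Two "collections" in a document store, modelled here as Python dicts keyed by id.
authors = {"a1": {"name": "Dziedzic"}, "a2": {"name": "Mulawka"}}
papers = {"p1": {"title": "NoSQL overview", "author_id": "a1"},
          "p2": {"title": "Orphaned entry", "author_id": "a9"}}   # dangling reference

def resolve(paper):
    """Application-side join: nothing in the store enforces that author_id exists."""
    author = authors.get(paper["author_id"])        # may be None, unlike a foreign-key join
    return {**paper, "author": author["name"] if author else None}

for pid, paper in papers.items():
    print(pid, resolve(paper))
```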

  2. Database of Novel and Emerging Adsorbent Materials

    National Institute of Standards and Technology Data Gateway

    SRD 205 NIST/ARPA-E Database of Novel and Emerging Adsorbent Materials (Web, free access)   The NIST/ARPA-E Database of Novel and Emerging Adsorbent Materials is a free, web-based catalog of adsorbent materials and measured adsorption properties of numerous materials obtained from article entries from the scientific literature. Search fields for the database include adsorbent material, adsorbate gas, experimental conditions (pressure, temperature), and bibliographic information (author, title, journal), and results from queries are provided as a list of articles matching the search parameters. The database also contains adsorption isotherms digitized from the cataloged articles, which can be compared visually online in the web application or exported for offline analysis.

  3. Comparison of the NCI open database with seven large chemical structural databases.

    PubMed

    Voigt, J H; Bienfait, B; Wang, S; Nicklaus, M C

    2001-01-01

    Eight large chemical databases have been analyzed and compared to each other. Central to this comparison is the open National Cancer Institute (NCI) database, consisting of approximately 250 000 structures. The other databases analyzed are the Available Chemicals Directory ("ACD," from MDL, release 1.99, 3D-version); the ChemACX ("ACX," from CamSoft, Version 4.5); the Maybridge Catalog and the Asinex database (both as distributed by CamSoft as part of ChemInfo 4.5); the Sigma-Aldrich Catalog (CD-ROM, 1999 Version); the World Drug Index ("WDI," Derwent, version 1999.03); and the organic part of the Cambridge Crystallographic Database ("CSD," from Cambridge Crystallographic Data Center, 1999 Version 5.18). The database properties analyzed are internal duplication rates; compounds unique to each database; cumulative occurrence of compounds in an increasing number of databases; overlap of identical compounds between two databases; similarity overlap; diversity; and others. The crystallographic database CSD and the WDI show somewhat less overlap with the other databases than those with each other. In particular the collections of commercial compounds and compilations of vendor catalogs have a substantial degree of overlap among each other. Still, no database is completely a subset of any other, and each appears to have its own niche and thus "raison d'être". The NCI database has by far the highest number of compounds that are unique to it. Approximately 200 000 of the NCI structures were not found in any of the other analyzed databases.
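
    The overlap measures described above (internal duplication, pairwise overlap, compounds unique to one source) reduce to set arithmetic over canonical structure identifiers; a minimal sketch with placeholder identifiers (not real structures) follows.

```python
# Each database is reduced to a set of canonical structure identifiers
# (e.g. canonical SMILES or InChIKeys); the strings below are placeholders.
nci = {"AAA", "BBB", "CCC", "DDD"}
acd = {"BBB", "CCC", "EEE"}
wdi = {"CCC", "FFF"}
databases = {"NCI": nci, "ACD": acd, "WDI": wdi}

# Pairwise overlap of identical compounds between two databases.
for a in databases:
    for b in databases:
        if a < b:
            shared = databases[a] & databases[b]
            print(f"{a} and {b} share {len(shared)} compounds")

# Compounds unique to each database (found in no other analyzed source).
for name, ids in databases.items():
    others = set().union(*(v for k, v in databases.items() if k != name))
    print(name, "unique:", len(ids - others))
```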

  4. Database constraints applied to metabolic pathway reconstruction tools.

    PubMed

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.

  5. A comparative study of satellite estimation for solar insolation in Albania with ground measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitrushi, Driada, E-mail: driadamitrushi@yahoo.com; Berberi, Pëllumb, E-mail: pellumb.berberi@gmail.com; Muda, Valbona, E-mail: vmuda@hotmail.com

    The main objective of this study is to compare data provided by the NASA database with available ground data for regions covered by the national meteorological network. NASA estimates that their measurements of average daily solar radiation have a root-mean-square deviation (RMSD) error of 35 W/m² (roughly 20% inaccuracy). Unfortunately, valid data from meteorological stations for the regions of interest are quite rare in Albania. In these cases, use of the NASA Solar Radiation Database would be a satisfactory solution for different case studies. Using a statistical method allows us to determine the most probable margins between the two sources of data. Comparison of the mean insolation data provided by NASA with the ground data on mean insolation provided by meteorological stations shows that the ground data for mean insolation are, in all cases, underestimated compared with the data provided by the NASA database. The conversion factor is 1.149.
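
    The comparison rests on two simple quantities: the root-mean-square deviation between the two series and a single multiplicative correction (of the kind reported as the factor 1.149 relating NASA to ground values). A minimal sketch with placeholder numbers, not the study's data:

```python
import math

# Placeholder daily mean insolation values (kWh/m^2/day); illustrative only.
nasa = [4.8, 5.1, 5.6, 4.2]
ground = [4.2, 4.4, 4.9, 3.7]

# Root-mean-square deviation between the satellite and ground series.
rmsd = math.sqrt(sum((n - g) ** 2 for n, g in zip(nasa, ground)) / len(nasa))

# A multiplicative correction factor estimated by least squares (NASA ~ factor * ground),
# analogous in spirit to the reported conversion factor.
factor = sum(n * g for n, g in zip(nasa, ground)) / sum(g * g for g in ground)

print(f"RMSD = {rmsd:.3f}, NASA ~= {factor:.3f} x ground")
```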

  6. A development and integration of database code-system with a compilation of comparator, k0 and absolute methods for INAA using microsoft access

    NASA Astrophysics Data System (ADS)

    Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong

    2013-05-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the elemental concentrations of a sample at The National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 that could help INAA users to choose either the comparator method, the k0-method or the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations between the experiments and the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal to epithermal neutron flux ratio (f). Calculations involved in determining the concentration used the net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal to epithermal neutron flux ratio (f), epithermal neutron flux distribution parameter (α) and detection efficiency (ɛp). For the Com-INAA code-system, the certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample. Other CRMs and SRMs were also used in this database code-system. Later, a verification process to examine the effectiveness of the Abs-INAA code-system was carried out by comparing the sample concentrations between the code-system and the experiment. The concentration values obtained with the ECC-UKM database code-system showed good accuracy.
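
    For orientation only: in its simplest form the comparator (relative) method scales the standard's known concentration by the ratio of specific net peak areas, assuming sample and standard are irradiated, decayed and counted under identical conditions. The sketch below uses invented numbers and deliberately omits the decay, geometry and flux-gradient corrections that a full code-system such as ECC-UKM must apply.

```python
def comparator_concentration(np_sample, mass_sample,
                             np_standard, mass_standard,
                             conc_standard):
    """Simplified comparator-method concentration estimate.

    Assumes identical irradiation, decay and counting conditions for
    sample and standard; all correction factors are omitted.
    """
    specific_sample = np_sample / mass_sample          # net peak area per gram
    specific_standard = np_standard / mass_standard
    return conc_standard * specific_sample / specific_standard

# Hypothetical values: 0.152 g sample, 0.100 g standard containing 50 mg/kg of the element.
print(comparator_concentration(12500, 0.152, 9800, 0.100, 50.0), "mg/kg")
```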

  7. Comparison of in vitro bioactivation of flutamide and its cyano analogue: evidence for reductive activation by human NADPH:cytochrome P450 reductase.

    PubMed

    Wen, Bo; Coe, Kevin J; Rademacher, Peter; Fitch, William L; Monshouwer, Mario; Nelson, Sidney D

    2008-12-01

    Flutamide (FLU), a nonsteroidal antiandrogen drug widely used in the treatment of prostate cancer, has been associated with idiosyncratic hepatotoxicity in patients. It is proposed that bioactivation of FLU and subsequent binding of reactive metabolite(s) to cellular proteins play a causative role. A toxicogenomic study comparing FLU and its nitro-to-cyano analogue (CYA) showed that the nitroaromatic group of FLU enhanced cytotoxicity to hepatocytes, indicating that reduction of the nitroaromatic group may represent a potential route of FLU-induced hepatotoxicity [Coe et al. (2007) Chem. Res. Toxicol. 20, 1277-1290]. In the current study, we compared in vitro bioactivation of FLU and CYA in human liver microsomes and cryopreserved human hepatocytes. A nitroreduction metabolite FLU-6 was formed in liver microsomal incubations of FLU under atmospheric oxygen levels and, to a greater extent, under anaerobic conditions. Seven glutathione (GSH) adducts of FLU, FLU-G1-7, were tentatively identified in human liver microsomal incubations using liquid chromatography-tandem mass spectrometry (LC/MS/MS), while CYA formed only four corresponding GSH adducts, CYA-G1-4, under the same conditions. Of particular interest was the formation of FLU-G5-7 from FLU, where the nitroaromatic group of FLU was reduced to an amino group. A tentative pathway is that upon nitroreduction, the para-diamines undergo cytochrome P450 (P450)-catalyzed two-electron oxidations to form corresponding para-diimine intermediates that react with GSH to form GSH adducts FLU-G5-7, respectively. The identities of FLU-G5-7 were further confirmed by LC/MS/MS analyses of microsomal incubations of a synthesized standard FLU-6. In an attempt to identify enzymes involved in the nitroreduction of FLU, NADPH:cytochrome P450 reductase (CPR) was shown to reduce FLU to FLU-6 under both aerobic and anaerobic conditions. Furthermore, the formation of FLU-G5-7 was completely blocked by the addition of a reversible CPR inhibitor, alpha-lipoic acid, to the incubations of FLU under aerobic conditions. In summary, these results clearly demonstrate that nitroreduction of FLU by CPR contributes to bioactivation and potentially to hepatotoxicity of FLU.

  8. Three conazoles increase hepatic microsomal retinoic acid metabolism and decrease mouse hepatic retinoic acid levels in vivo.

    PubMed

    Chen, Pei-Jen; Padgett, William T; Moore, Tanya; Winnik, Witold; Lambert, Guy R; Thai, Sheau-Fung; Hester, Susan D; Nesnow, Stephen

    2009-01-15

    Conazoles are fungicides used in agriculture and as pharmaceuticals. In a previous toxicogenomic study of triazole-containing conazoles, we found gene expression changes consistent with the alteration of the metabolism of all-trans-retinoic acid (atRA), a vitamin A metabolite with cancer-preventative properties (Ward et al., Toxicol. Pathol. 2006; 34:863-78). The goals of this study were to examine the effects of propiconazole, triadimefon, and myclobutanil, three triazole-containing conazoles, on the microsomal metabolism of atRA, the associated hepatic cytochrome P450 (P450) enzyme(s) involved in atRA metabolism, and their effects on hepatic atRA levels in vivo. The in vitro metabolism of atRA was quantitatively measured in liver microsomes from male CD-1 mice following four daily intraperitoneal injections of propiconazole (210 mg/kg/d), triadimefon (257 mg/kg/d) or myclobutanil (270 mg/kg/d). The formation of both 4-hydroxy-atRA and 4-oxo-atRA was significantly increased by all three conazoles. Propiconazole-induced microsomes possessed slightly greater metabolizing activities compared to myclobutanil-induced microsomes. Both propiconazole and triadimefon treatment induced greater formation of 4-hydroxy-atRA compared to myclobutanil treatment. Chemical and immuno-inhibition metabolism studies suggested that Cyp26a1, Cyp2b, and Cyp3a, but not Cyp1a1 proteins were involved in atRA metabolism. Cyp2b10/20 and Cyp3a11 genes were significantly over-expressed in the livers of both triadimefon- and propiconazole-treated mice, while Cyp26a1, Cyp2c65 and Cyp1a2 genes were over-expressed in the livers of either triadimefon- or propiconazole-treated mice, and Cyp2b10/20 and Cyp3a13 genes were over-expressed in the livers of myclobutanil-treated mice. Western blot analyses indicated conazole-induced increases in Cyp2b and Cyp3a proteins. All three conazoles decreased hepatic atRA tissue levels by 45-67%. The possible implications of these changes in hepatic atRA levels on cell proliferation in the mouse tumorigenesis process are discussed.

  9. Enhancing navigation in biomedical databases by community voting and database-driven text classification

    PubMed Central

    Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph

    2009-01-01

    Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well with concurrent change events, and can be adapted to add text classification capability to other biomedical databases. The system can be accessed at . PMID:19799796
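
    A compact scikit-learn sketch in the spirit of the approach described (TF-IDF features, an ensemble of bagged decision trees, and class-probability estimates of the kind that could drive a confidence heat map); the toy abstracts and labels are invented, and this is not the PepBank code.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import BaggingClassifier

# Toy training data: abstract text mapped to a relevant-to-cancer label (invented examples).
abstracts = ["peptide inhibits tumor angiogenesis in mice",
             "novel imaging probe for molecular imaging of plaques",
             "tumor suppressor peptide reduces cancer cell growth",
             "synthesis of fluorescent peptide probes"]
labels = [1, 0, 1, 0]

# BaggingClassifier defaults to an ensemble of bagged decision trees.
model = make_pipeline(TfidfVectorizer(),
                      BaggingClassifier(n_estimators=50, random_state=0))
model.fit(abstracts, labels)

# Class-probability estimates: the kind of score one could render as a heat map
# or use to flag low-confidence entries for community voting.
query = ["peptide probe for imaging tumor growth"]
print(model.predict_proba(query))
```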

  10. Introducing the CPL/MUW proteome database: interpretation of human liver and liver cancer proteome profiles by referring to isolated primary cells.

    PubMed

    Wimmer, Helge; Gundacker, Nina C; Griss, Johannes; Haudek, Verena J; Stättner, Stefan; Mohr, Thomas; Zwickl, Hannes; Paulitschke, Verena; Baron, David M; Trittner, Wolfgang; Kubicek, Markus; Bayer, Editha; Slany, Astrid; Gerner, Christopher

    2009-06-01

    Interpretation of proteome data with a focus on biomarker discovery largely relies on comparative proteome analyses. Here, we introduce a database-assisted interpretation strategy based on proteome profiles of primary cells. Both 2-D-PAGE and shotgun proteomics are applied. We obtain high data concordance with these two different techniques. When applying mass analysis of tryptic spot digests from 2-D gels of cytoplasmic fractions, we typically identify several hundred proteins. Using the same protein fractions, we usually identify more than a thousand proteins by shotgun proteomics. The data consistency obtained when comparing these independent data sets exceeds 99% of the proteins identified in the 2-D gels. Many characteristic differences in protein expression of different cells can thus be independently confirmed. Our self-designed SQL database (CPL/MUW, the database of the Clinical Proteomics Laboratories at the Medical University of Vienna, accessible via www.meduniwien.ac.at/proteomics/database) facilitates (i) quality management of MS-based protein identification data, (ii) the detection of cell type-specific proteins and (iii) the detection of molecular signatures of specific functional cell states. Here, we demonstrate how the interpretation of proteome profiles obtained from human liver tissue and hepatocellular carcinoma tissue is assisted by the CPL/MUW database. Therefore, we suggest that the use of reference experiments supported by a tailored database may substantially facilitate data interpretation of proteome profiling experiments.

  11. Landscape features, standards, and semantics in U.S. national topographic mapping databases

    USGS Publications Warehouse

    Varanka, Dalia

    2009-01-01

    The objective of this paper is to examine the contrast between local, field-surveyed topographical representation and feature representation in digital, centralized databases, and to clarify their ontological implications. The semantics of these two approaches are contrasted by examining the categorization of features by subject domains inherent to national topographic mapping. A comparison of five USGS topographic mapping domain and feature lists indicates that multiple semantic meanings and ontology rules were applied to the initial digital database but were lost as databases became more centralized at national scales, and common semantics were replaced by technological terms.

  12. The Sequenced Angiosperm Genomes and Genome Databases.

    PubMed

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide essential resources for human life, such as food, energy, oxygen, and materials. They have also promoted the evolution of humans, animals, and planet Earth. Despite the numerous advances in genome reports and sequencing technologies, no review covers all the released angiosperm genomes and the genome databases for data sharing. Based on the rapid advances and innovations in database construction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases: databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a timely updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology.

  13. The Sequenced Angiosperm Genomes and Genome Databases

    PubMed Central

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide essential resources for human life, such as food, energy, oxygen, and materials. They have also promoted the evolution of humans, animals, and planet Earth. Despite the numerous advances in genome reports and sequencing technologies, no review covers all the released angiosperm genomes and the genome databases for data sharing. Based on the rapid advances and innovations in database construction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases: databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a timely updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology. PMID:29706973

  14. WebBee: A Platform for Secure Coordination and Communication in Crisis Scenarios

    DTIC Science & Technology

    2008-04-16

    ... implemented through database triggers. The WebBee Database Server contains an Information Server, which is a Postgres database with the PostGIS [5] extension ... sends it to the target user. The heavy lifting for this mechanism is done through an extension of Postgres triggers (Figures 6.1 and 6.2), resulting in fewer queries and better performance. Trigger support in Postgres is table-based and comparatively primitive: with n table triggers, an update ...
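
    As a generic illustration of the table-based trigger mechanism referred to in this fragment (not the WebBee code, and using SQLite rather than Postgres so the sketch stays self-contained), the example below defines a trigger that copies each newly inserted row into a notification queue table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reports (id INTEGER PRIMARY KEY, location TEXT, message TEXT);
CREATE TABLE notifications (report_id INTEGER, message TEXT);

-- A table-based trigger: it fires once per row inserted into this one table.
CREATE TRIGGER notify_on_report AFTER INSERT ON reports
BEGIN
    INSERT INTO notifications VALUES (NEW.id, NEW.message);
END;
""")

con.execute("INSERT INTO reports (location, message) VALUES (?, ?)",
            ("sector 7", "road blocked"))
print(con.execute("SELECT * FROM notifications").fetchall())
```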

  15. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    PubMed

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of the massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model has become a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.

  16. Four Current Awareness Databases: Coverage and Currency Compared.

    ERIC Educational Resources Information Center

    Jaguszewski, Janice M.; Kempf, Jody L.

    1995-01-01

    Discusses the usability and content of the following table of contents (TOC) databases selected by science and engineering librarians at the University of Minnesota Twin Cities: Current Contents on Diskette (CCoD), CARL Uncover2, Inside Information, and Contents1st. (AEF)

  17. The burden of clostridium difficile infection: estimates of the incidence of CDI from U.S. Administrative databases.

    PubMed

    Olsen, Margaret A; Young-Xu, Yinong; Stwalley, Dustin; Kelly, Ciarán P; Gerding, Dale N; Saeed, Mohammed J; Mahé, Cedric; Dubberke, Erik R

    2016-04-22

    Many administrative data sources are available to study the epidemiology of infectious diseases, including Clostridium difficile infection (CDI), but few publications have compared CDI event rates across databases using similar methodology. We used comparable methods with multiple administrative databases to compare the incidence of CDI in older and younger persons in the United States. We performed a retrospective study using three longitudinal data sources (Medicare, OptumInsight LabRx, and Healthcare Cost and Utilization Project State Inpatient Database (SID)), and two hospital encounter-level data sources (Nationwide Inpatient Sample (NIS) and Premier Perspective database) to identify CDI in adults aged 18 and older with calculation of CDI incidence rates/100,000 person-years of observation (pyo) and CDI categorization (onset and association). The incidence of CDI ranged from 66/100,000 in persons under 65 years (LabRx), 383/100,000 in elderly persons (SID), and 677/100,000 in elderly persons (Medicare). Ninety percent of CDI episodes in the LabRx population were characterized as community-onset compared to 41 % in the Medicare population. The majority of CDI episodes in the Medicare and LabRx databases were identified based on only a CDI diagnosis, whereas almost ¾ of encounters coded for CDI in the Premier hospital data were confirmed with a positive test result plus treatment with metronidazole or oral vancomycin. Using only the Medicare inpatient data to calculate encounter-level CDI events resulted in 553 CDI events/100,000 persons, virtually the same as the encounter proportion calculated using the NIS (544/100,000 persons). We found that the incidence of CDI was 35 % higher in the Medicare data and fewer episodes were attributed to hospital acquisition when all medical claims were used to identify CDI, compared to only inpatient data lacking information on diagnosis and treatment in the outpatient setting. The incidence of CDI was 10-fold lower and the proportion of community-onset CDI was much higher in the privately insured younger LabRx population compared to the elderly Medicare population. The methods we developed to identify incident CDI can be used by other investigators to study the incidence of other infectious diseases and adverse events using large generalizable administrative datasets.
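
    Incidence per 100,000 person-years, as used throughout this abstract, is simply the event count divided by the accumulated observation time; a small sketch with invented counts (not the study's data):

```python
def incidence_per_100k(events, person_years):
    """CDI incidence rate per 100,000 person-years of observation."""
    return events / person_years * 100_000

# Hypothetical cohorts; the counts are illustrative only.
print(incidence_per_100k(2_050, 3_100_000))   # younger, privately insured population
print(incidence_per_100k(13_500, 2_000_000))  # elderly population
```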

  18. MIGRATORY IMPLICATIONS FOR CORONARY HEART DISEASE RISK PREVENTION IN ASIAN INDIANS: EVIDENCE FROM THE LEADING HEALTH INDICATORS.

    PubMed

    Fernandez, Ritin; Everett, Bronwyn; Miranda, Charmaine; Rolley, John X; Rajaratnam, Rohan; Davidson, Patricia M

    2015-01-01

    OBJECTIVE: The objectives of this descriptive comparative study were to (1) review data obtained from the World Health Organisation Statistical Information System (WHOSIS) database relating to the prevalence of risk factors for coronary heart disease (CHD) among Indians and Australians and (2) compare these data with published epidemiological studies of CHD risk factors in adult migrant Asian Indians to provide a comprehensive and comparable assessment of risk factors relating to CHD and the mortality attributable to these risk factors. DESIGN: The study was undertaken using a database search and integrative review methodology. Data were obtained for comparison of CHD risk factors between Indians and Australians using the WHOSIS database. For the integrative review, the MEDLINE, CINAHL, EMBASE, and Cochrane databases were searched using the keywords 'Migrants', 'Asian Indian', 'India', 'Migration', 'Immigration', 'Risk factors', and 'coronary heart disease'. Two reviewers independently assessed the eligibility of the studies for inclusion in the review and the methodological quality, and extracted details of eligible studies. Results from the integrative review on CHD risk factors in Asian Indians are presented in a narrative format, along with results from the WHOSIS database. RESULTS: The adjusted mortality for CHD was four times higher in migrant Asian Indians when compared to both the native population of the host country and migrants from other countries. Similarly, when compared to migrants from other countries, migrant Asian Indians had the highest prevalence of overweight individuals. Prevalence rates for hypercholesterolemia were up to 18.5% among migrant Asian Indians, and migrant Asian Indian women had a higher prevalence of hypertriglyceridaemia compared to Caucasian females. Migrant Asian Indians also had a higher incidence of hypertension, and up to 71% of migrant Asian Indian men did not meet current guidelines for participation in physical activity. Ethnic-specific prevalence of diabetes ranged from 6-7% among normal-weight to 19-33% among obese migrant Asian Indians, compared with non-Hispanic whites. CONCLUSION: Asian Indians have an increased risk of CHD. Culturally sensitive strategies that recognise the effects of migration and extend beyond the health sector should be developed to target lifestyle changes in this high-risk population.

  19. Comparison of Conflicts of Interest among Published Hernia Researchers Self-Reported with the Centers for Medicare and Medicaid Services Open Payments Database.

    PubMed

    Olavarria, Oscar A; Holihan, Julie L; Cherla, Deepa; Perez, Cristina A; Kao, Lillian S; Ko, Tien C; Liang, Mike K

    2017-05-01

    Many healthcare providers have financial interests and relationships with healthcare companies. To maintain transparency, investigators are expected to disclose their conflicts of interest (COIs). Recently, the Centers for Medicare and Medicaid Services developed the Open Payment database of COIs reported by industry. We hypothesize that there is discordance between industry-reported and physician self-reported COIs in ventral hernia publications. PubMed was searched for ventral hernia studies accepted for publication between June 2013 and October 2015 and published by authors from the US. Conflicts of interest were defined as payments received as honoraria, consulting fees, compensation for serving as faculty or as a speaker at a venue, research funding payments, or having ownerships/partnerships in companies. Conflicts of interest disclosed on the published articles were compared with the financial relationships in the Open Payments database. A total of 100 studies were selected with 497 participating authors. Information was available from the Open Payments database for 245 (49.2%) authors, of which 134 (26.9%) met the definition for COI. When comparing COIs disclosed by authors and data in the Open Payments database, 81 (16.3%) authors had at least 1 COI but did not declare any, 35 (7.0%) authors had COIs other than what they declared, and 20 (4.0%) declared a COI not listed in the Open Payments database, for a combined discordance rate of 27.3%. There is substantial discordance between self-reported COI in published articles compared with those in the Centers for Medicare and Medicaid Services Open Payments database. Additional studies are needed to determine the reasons for these differences, as COI can influence the validity of the design, conduct, and results of a study. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  20. CSE database: extended annotations and new recommendations for ECG software testing.

    PubMed

    Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie

    2017-08-01

    Nowadays, cardiovascular diseases represent the most common cause of death in western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for diagnostic software quality estimation were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. Such a 4R consensus means a correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the establishment of accuracy recommendations. The accuracy was determined in terms of sensitivity = 79.20-86.81%, positive predictive value = 79.10-87.11%, and the Jaccard coefficient = 72.21-81.14%, respectively. Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. The accuracy quantification of the correct classification is unique. Diagnostic software developers can objectively evaluate the success of their algorithm and promote its further development. The annotations and recommendations proposed in this work will allow for faster development and testing of classification software. As a result, this might facilitate cardiologists' work and lead to faster diagnoses and earlier treatment.
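
    The three accuracy figures quoted (sensitivity, positive predictive value, Jaccard coefficient) can all be computed from true-positive, false-positive and false-negative diagnosis counts; a minimal sketch with illustrative counts, not the study's numbers:

```python
def diagnostic_accuracy(tp, fp, fn):
    """Sensitivity, positive predictive value and Jaccard coefficient
    from per-diagnosis true-positive, false-positive and false-negative counts."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    jaccard = tp / (tp + fp + fn)
    return sensitivity, ppv, jaccard

# Hypothetical counts for one algorithm evaluated against the 4R consensus.
print(diagnostic_accuracy(tp=420, fp=70, fn=80))
```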

  1. Updated folate data in the Dutch Food Composition Database and implications for intake estimates

    PubMed Central

    Westenbrink, Susanne; Jansen-van der Vliet, Martine; van Rossum, Caroline

    2012-01-01

    Background and objective: Nutrient values are influenced by the analytical method used. Food folate measured by high-performance liquid chromatography (HPLC) or by microbiological assay (MA) yields different results, with in general higher results from MA than from HPLC. This leads to the question of how to deal with different analytical methods in compiling standardised and internationally comparable food composition databases. A recent inventory on folate in European food composition databases indicated that currently MA is more widely used than HPLC. Since older Dutch values were produced by HPLC and newer values by MA, the analytical methods and procedures for compiling folate data in the Dutch Food Composition Database (NEVO) were reconsidered and folate values were updated. This article describes the impact of this revision of folate values in the NEVO database as well as the expected impact on the folate intake assessment in the Dutch National Food Consumption Survey (DNFCS). Design: The folate values were revised by replacing HPLC with MA values from recent Dutch analyses. Previously, MA folate values taken from foreign food composition tables had been recalculated to the HPLC level, assuming a 27% lower value from HPLC analyses. These recalculated values were replaced by the original MA values. Dutch HPLC and MA values were compared to each other. Folate intake was assessed for a subgroup within the DNFCS to estimate the impact of the update. Results: In the updated NEVO database nearly all folate values were produced by MA or derived from MA values, which resulted in an average increase of 24%. The median habitual folate intake in young children was increased by 11–15% using the updated folate values. Conclusion: The current approach for folate in NEVO resulted in more transparency in data production and documentation and higher comparability among European databases. Results of food consumption surveys are expected to show higher folate intakes when using the updated values. PMID:22481900

  2. Analysis of Lunar Highland Regolith Samples From Apollo 16 Drive Core 64001/2 and Lunar Regolith Simulants - an Expanding Comparative Database

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Wentworth, Susan; McKay, Dave S.; Botha, Pieter; Butcher, Alan R.; Horsch, Hanna E.; Benedictus, Aukje; Gottlieb, Paul

    2008-01-01

    This slide presentation reviews the work to analyze the lunar highland regolith samples that came from the Apollo 16 core sample 64001/2 and simulants of lunar regolith, and build a comparative database. The work is part of a larger effort to compile an internally consistent database on lunar regolith (Apollo Samples) and lunar regolith simulants. This is in support of a future lunar outpost. The work is to characterize existing lunar regolith and simulants in terms of particle type, particle size distribution, particle shape distribution, bulk density, and other compositional characteristics, and to evaluate the regolith simulants by the same properties in comparison to the Apollo sample lunar regolith.

  3. A review of automatic mass detection and segmentation in mammographic images.

    PubMed

    Oliver, Arnau; Freixenet, Jordi; Martí, Joan; Pérez, Elsa; Pont, Josep; Denton, Erika R E; Zwiggelaar, Reyer

    2010-04-01

    The aim of this paper is to review existing approaches to the automatic detection and segmentation of masses in mammographic images, highlighting the key-points and main differences between the used strategies. The key objective is to point out the advantages and disadvantages of the various approaches. In contrast with other reviews which only describe and compare different approaches qualitatively, this review also provides a quantitative comparison. The performance of seven mass detection methods is compared using two different mammographic databases: a public digitised database and a local full-field digital database. The results are given in terms of Receiver Operating Characteristic (ROC) and Free-response Receiver Operating Characteristic (FROC) analysis. Copyright 2009 Elsevier B.V. All rights reserved.
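
    ROC analysis, as used in the quantitative comparison above, plots the true-positive rate against the false-positive rate as the detector's score threshold varies; a small scikit-learn sketch with synthetic labels and scores (FROC, which allows multiple detections per image, is not shown):

```python
from sklearn.metrics import roc_curve, auc

# Synthetic ground-truth labels (1 = mass present) and detector confidence scores.
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.3, 0.75, 0.6, 0.45, 0.1, 0.55, 0.65]

fpr, tpr, thresholds = roc_curve(labels, scores)
print("FPR:", fpr)
print("TPR:", tpr)
print("AUC:", auc(fpr, tpr))
```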

  4. Using School-Level Student Achievement to Engage in Formative Evaluation: Comparative School-Level Rates of Oral Reading Fluency Growth Conditioned by Initial Skill for Second Grade Students

    ERIC Educational Resources Information Center

    Cummings, Kelli D.; Stoolmiller, Michael L.; Baker, Scott K.; Fien, Hank; Kame'enui, Edward J.

    2015-01-01

    We present a method for data-based decision making at the school level using student achievement data. We demonstrate the potential of a national assessment database [i.e., the University of Oregon DIBELS Data System (DDS)] to provide comparative levels of school-level data on average student achievement gains. Through the DDS as a data source,…

  5. Enhanced annotations and features for comparing thousands of Pseudomonas genomes in the Pseudomonas genome database.

    PubMed

    Winsor, Geoffrey L; Griffiths, Emma J; Lo, Raymond; Dhillon, Bhavjinder K; Shay, Julie A; Brinkman, Fiona S L

    2016-01-04

    The Pseudomonas Genome Database (http://www.pseudomonas.com) is well known for the application of community-based annotation approaches for producing a high-quality Pseudomonas aeruginosa PAO1 genome annotation, and facilitating whole-genome comparative analyses with other Pseudomonas strains. To aid analysis of potentially thousands of complete and draft genome assemblies, this database and analysis platform was upgraded to integrate curated genome annotations and isolate metadata with enhanced tools for larger scale comparative analysis and visualization. Manually curated gene annotations are supplemented with improved computational analyses that help identify putative drug targets and vaccine candidates or assist with evolutionary studies by identifying orthologs, pathogen-associated genes and genomic islands. The database schema has been updated to integrate isolate metadata that will facilitate more powerful analysis of genomes across datasets in the future. We continue to place an emphasis on providing high-quality updates to gene annotations through regular review of the scientific literature and using community-based approaches including a major new Pseudomonas community initiative for the assignment of high-quality gene ontology terms to genes. As we further expand from thousands of genomes, we plan to provide enhancements that will aid data visualization and analysis arising from whole-genome comparative studies including more pan-genome and population-based approaches. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Description of two waterborne disease outbreaks in France: a comparative study with data from cohort studies and from health administrative databases.

    PubMed

    Mouly, D; Van Cauteren, D; Vincent, N; Vaissiere, E; Beaudeau, P; Ducrot, C; Gallay, A

    2016-02-01

    Waterborne disease outbreaks (WBDO) of acute gastrointestinal illness (AGI) are a public health concern in France. Their occurrence is probably underestimated due to the lack of a specific surveillance system. The French health insurance database provides an interesting opportunity to improve the detection of these events. A specific algorithm to identify AGI cases from drug payment reimbursement data in the health insurance database has been developed previously. The purpose of our comparative study was to retrospectively assess the ability of the health insurance data to describe WBDO. Data from the health insurance database were compared with data from cohort studies conducted in two WBDO in 2010 and 2012. The temporal distribution of cases, the day of the peak and the duration of the epidemic, as measured using the health insurance data, were similar to the data from one of the two cohort studies. However, the health insurance data captured 54 cases, compared with an estimated 252 cases in the cohort study. The accuracy of using health insurance data to describe WBDO depends on the medical consultation rate in the impacted population. As this rate is never 100%, the analysis underestimates the total number of AGI cases. However, this data source can be considered for the development of a WBDO detection system in France, given its ability to describe an epidemic signal.

  7. Brassica ASTRA: an integrated database for Brassica genomic research.

    PubMed

    Love, Christopher G; Robinson, Andrew J; Lim, Geraldine A C; Hopkins, Clare J; Batley, Jacqueline; Barker, Gary; Spangenberg, German C; Edwards, David

    2005-01-01

    Brassica ASTRA is a public database for genomic information on Brassica species. The database incorporates expressed sequences with Swiss-Prot and GenBank comparative sequence annotation as well as secondary Gene Ontology (GO) annotation derived from the comparison with Arabidopsis TAIR GO annotations. Simple sequence repeat molecular markers are identified within resident sequences and mapped onto the closely related Arabidopsis genome sequence. Bacterial artificial chromosome (BAC) end sequences derived from the Multinational Brassica Genome Project are also mapped onto the Arabidopsis genome sequence enabling users to identify candidate Brassica BACs corresponding to syntenic regions of Arabidopsis. This information is maintained in a MySQL database with a web interface providing the primary means of interrogation. The database is accessible at http://hornbill.cspp.latrobe.edu.au.
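
    Simple sequence repeats of the kind identified in Brassica ASTRA can be located with a short pattern search; the sketch below is a generic illustration (not the database's pipeline) that reports di- and trinucleotide motifs repeated at least five consecutive times in a toy sequence.

```python
import re

def find_ssrs(sequence, min_repeats=5):
    """Yield (start, motif, copies) for 2-3 bp motifs repeated >= min_repeats times in a row."""
    pattern = re.compile(r"([ACGT]{2,3})\1{%d,}" % (min_repeats - 1))
    for m in pattern.finditer(sequence):
        motif = m.group(1)
        yield m.start(), motif, len(m.group(0)) // len(motif)

# Toy sequence containing an (AT)6 and an (AG)6 repeat.
toy = "GGCA" + "AT" * 6 + "CCG" + "AG" * 6 + "TTT"
for start, motif, copies in find_ssrs(toy):
    print(f"position {start}: ({motif}) x {copies}")
```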

  8. Trials and tribulations: how we established a major incident database.

    PubMed

    Hardy, S E J; Fattah, S

    2017-01-25

    We describe the process of setting up a database of major incident reports and its potential future application. A template for reporting on major incidents was developed using a consensus-based process involving a team of experts in the field. A website was set up as a platform from which to launch the template and as a database of submitted reports. This paper describes the processes involved in setting up a major incident reporting database. It describes how specific difficulties have been overcome and anticipates challenges for the future. We have successfully set up a major incident database, the main purpose of which is to have a repository of standardised major incident reports that can be analysed and compared in order to learn from them.

  9. ProteinWorldDB: querying radical pairwise alignments among protein sets from complete genomes.

    PubMed

    Otto, Thomas Dan; Catanho, Marcos; Tristão, Cristian; Bezerra, Márcia; Fernandes, Renan Mathias; Elias, Guilherme Steinberger; Scaglia, Alexandre Capeletto; Bovermann, Bill; Berstis, Viktors; Lifschitz, Sergio; de Miranda, Antonio Basílio; Degrave, Wim

    2010-03-01

    Many analyses in modern biological research are based on comparisons between biological sequences, resulting in functional, evolutionary and structural inferences. When large numbers of sequences are compared, heuristics are often used resulting in a certain lack of accuracy. In order to improve and validate results of such comparisons, we have performed radical all-against-all comparisons of 4 million protein sequences belonging to the RefSeq database, using an implementation of the Smith-Waterman algorithm. This extremely intensive computational approach was made possible with the help of World Community Grid, through the Genome Comparison Project. The resulting database, ProteinWorldDB, which contains coordinates of pairwise protein alignments and their respective scores, is now made available. Users can download, compare and analyze the results, filtered by genomes, protein functions or clusters. ProteinWorldDB is integrated with annotations derived from Swiss-Prot, Pfam, KEGG, NCBI Taxonomy database and gene ontology. The database is a unique and valuable asset, representing a major effort to create a reliable and consistent dataset of cross-comparisons of the whole protein content encoded in hundreds of completely sequenced genomes using a rigorous dynamic programming approach. The database can be accessed through http://proteinworlddb.org
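
    For reference, the Smith-Waterman recurrence behind such all-against-all comparisons can be written in a few lines; this is a plain textbook scoring-only version with linear gap penalties and toy parameters, not the optimized implementation run on World Community Grid.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local-alignment score between sequences a and b
    (linear gap penalty; textbook dynamic programming, O(len(a) * len(b)))."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```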

  10. IDD Info: a software to manage surveillance data of Iodine Deficiency Disorders.

    PubMed

    Liu, Peng; Teng, Bai-Jun; Zhang, Shu-Bin; Su, Xiao-Hui; Yu, Jun; Liu, Shou-Jun

    2011-08-01

    IDD Info, a new software package for managing survey data on Iodine Deficiency Disorders (IDD), is presented in this paper. IDD Info aims to create IDD project databases, to process and analyze various national or regional surveillance data, and to produce the final report. It provides a series of functions for choosing a database from existing ones, revising it, choosing indicators from a pool to establish a database, and adding indicators to the pool. It also provides simple tools to scan one database and compare two databases, to set IDD standard parameters, to analyze data by single or multiple indicators, and finally to produce a typeset report with customized content. IDD Info was developed using the Chinese national IDD surveillance data of 2005. Its validity was evaluated by comparison with the survey report produced by China CDC. IDD Info is a professional analysis tool that speeds up IDD data analysis by about 14.28% with respect to standard reference routines. It consequently enhances analysis performance and user compliance. IDD Info is a practical and accurate means of managing the multifarious IDD surveillance data and can be widely used by non-statisticians in national and regional IDD surveillance. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Uniform standards for genome databases in forest and fruit trees

    USDA-ARS?s Scientific Manuscript database

    TreeGenes and tfGDR serve the international forestry and fruit tree genomics research communities, respectively. These databases hold similar sequence data and provide resources for the submission and recovery of this information in order to enable comparative genomics research. Large-scale genotype...

  12. Online Reference Service--How to Begin: A Selected Bibliography.

    ERIC Educational Resources Information Center

    Shroder, Emelie J., Ed.

    1982-01-01

    Materials in this bibliography were selected and recommended by members of the Use of Machine-Assisted Reference in Public Libraries Committee, Reference and Adult Services Division, American Library Association. Topics include: financial aspects, equipment and communications considerations, comparing databases and database systems, advertising…

  13. dbHiMo: a web-based epigenomics platform for histone-modifying enzymes

    PubMed Central

    Choi, Jaeyoung; Kim, Ki-Tae; Huh, Aram; Kwon, Seomun; Hong, Changyoung; Asiegbu, Fred O.; Jeon, Junhyun; Lee, Yong-Hwan

    2015-01-01

    Over the past two decades, epigenetics has evolved into a key concept for understanding regulation of gene expression. Among many epigenetic mechanisms, covalent modifications such as acetylation and methylation of lysine residues on core histones emerged as a major mechanism in epigenetic regulation. Here, we present the database for histone-modifying enzymes (dbHiMo; http://hme.riceblast.snu.ac.kr/) aimed at facilitating functional and comparative analysis of histone-modifying enzymes (HMEs). HMEs were identified by applying a search pipeline built upon profile hidden Markov model (HMM) to proteomes. The database incorporates 11 576 HMEs identified from 603 proteomes including 483 fungal, 32 plants and 51 metazoan species. The dbHiMo provides users with web-based personalized data browsing and analysis tools, supporting comparative and evolutionary genomics. With comprehensive data entries and associated web-based tools, our database will be a valuable resource for future epigenetics/epigenomics studies. Database URL: http://hme.riceblast.snu.ac.kr/ PMID:26055100

  14. Acoustic analysis of normal Saudi adult voices.

    PubMed

    Malki, Khalid H; Al-Habib, Salman F; Hagr, Abulrahman A; Farahat, Mohamed M

    2009-08-01

    To determine the acoustic differences between Saudi adult male and female voices, and to compare the acoustic variables of the Multidimensional Voice Program (MDVP) obtained from North American adults to a group of Saudi males and females. A cross-sectional survey of normal adult male and female voices was conducted at King Abdulaziz University Hospital, Riyadh, Kingdom of Saudi Arabia between March 2007 and December 2008. Ninety-five Saudi subjects sustained the vowel /a/ 6 times, and the steady state portion of 3 samples was analyzed and compared with the samples of the KayPentax normative voice database. Significant differences were found between Saudi and North American KayPentax database groups. In the male subjects, 15 of 33 MDVP variables, and 10 of 33 variables in the female subjects were found to be significantly different from the KayPentax database. We conclude that the acoustical differences may reflect laryngeal anatomical or tissue differences between the Saudi and the KayPentax database.

  15. Low Cost Comprehensive Microcomputer-Based Medical History Database Acquisition

    PubMed Central

    Buchan, Robert R. C.

    1980-01-01

    A carefully detailed, comprehensive medical history database is the fundamental essence of patient-physician interaction. Computer-generated medical history acquisition has repeatedly been shown to be highly acceptable to both patient and physician while consistently providing a superior product. Cost justification of machine-derived problem and history databases, however, has in the past been marginal at best. Routine use of the technology has therefore been limited to large clinics, university hospitals and federal installations where feasible volume applications are supported by endowment, research funds or taxes. This paper summarizes the use of a unique low-cost device which marries advanced microprocessor technology with random-access, variable-frame film projection techniques to acquire a detailed, comprehensive medical history database. Preliminary data are presented which compare patient, physician, and machine-generated histories for content, discovery, compliance and acceptability. Results compare favorably with the findings of similar studies by a variety of authors.

  16. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition remains relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that our COX Face DB is a good benchmark database for evaluation.

  17. Meta-Storms: efficient search for similar microbial communities based on a novel indexing scheme and similarity score for metagenomic data.

    PubMed

    Su, Xiaoquan; Xu, Jian; Ning, Kang

    2012-10-01

    It has long intrigued scientists to effectively compare different microbial communities (also referred to as 'metagenomic samples' here) at a large scale: given a set of unknown samples, find similar metagenomic samples from a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest. Any metagenomic sample could then be searched against this database to find the most similar metagenomic sample(s). However, on one hand, current databases with a large number of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; on the other hand, methods to measure the similarity of metagenomic data work well only for small sets of samples by pairwise comparison. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we have proposed a novel method, Meta-Storms, that can systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database by a fast scoring function based on quantitative phylogeny and (iv) managing the database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic datasets from the public domain and in-house facilities, and tested the Meta-Storms method on these datasets. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and it achieves accuracies similar to the current popular significance testing-based methods. The Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. Contact: ningkang@qibebt.ac.cn. Supplementary data are available at Bioinformatics online.
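
    To make the indexing and scoring ideas concrete, here is a minimal sketch that indexes a sample's taxonomic assignments by lineage prefix and scores two samples by their shared abundance at each rank. The rank weighting is an assumption chosen for illustration; it is not the published Meta-Storms quantitative-phylogeny score.

    ```python
    # Minimal sketch (not the published Meta-Storms implementation): index samples by
    # taxonomic lineage prefixes and score similarity as shared abundance per rank.
    from collections import defaultdict

    def index_sample(assignments):
        """assignments: list of (lineage tuple, relative abundance), e.g.
        (("Bacteria", "Firmicutes"), 0.12). Returns abundance summed per rank prefix."""
        index = defaultdict(float)
        for lineage, abundance in assignments:
            for depth in range(1, len(lineage) + 1):
                index[lineage[:depth]] += abundance
        return index

    def similarity(index_a, index_b):
        """Shared abundance over total abundance, with deeper ranks weighted more.
        The depth weighting is a made-up stand-in for the quantitative-phylogeny score."""
        score = norm = 0.0
        for prefix in set(index_a) | set(index_b):
            w = float(len(prefix))
            score += w * min(index_a.get(prefix, 0.0), index_b.get(prefix, 0.0))
            norm += w * max(index_a.get(prefix, 0.0), index_b.get(prefix, 0.0))
        return score / norm if norm else 0.0

    a = index_sample([(("Bacteria", "Firmicutes"), 0.6), (("Bacteria", "Bacteroidetes"), 0.4)])
    b = index_sample([(("Bacteria", "Firmicutes"), 0.5), (("Bacteria", "Proteobacteria"), 0.5)])
    print(round(similarity(a, b), 3))
    ```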

  18. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    PubMed Central

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes. PMID:25202745
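
    As a small illustration of the tuning step, the sketch below connects to a MySQL server and reports a few variables commonly adjusted for read-heavy workloads. The connection details and the choice of variables are assumptions for illustration, not the configuration used for Biblio-MetReS/Homol-MetReS.

    ```python
    # Hedged sketch: inspect a few MySQL server variables that are often tuned.
    # Requires: pip install mysql-connector-python and a reachable MySQL server.
    import mysql.connector

    TUNABLES = ("innodb_buffer_pool_size", "max_connections", "query_cache_size")

    conn = mysql.connector.connect(host="localhost", user="reader",
                                   password="secret", database="organisms")  # hypothetical
    cur = conn.cursor()
    for name in TUNABLES:
        cur.execute(f"SHOW VARIABLES LIKE '{name}'")  # fixed literals, no user input
        for variable, value in cur.fetchall():
            print(f"{variable} = {value}")
    cur.close()
    conn.close()
    ```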

  19. Supporting Social Data Observatory with Customizable Index Structures on HBase - Architecture and Performance

    DTIC Science & Technology

    2013-01-01

    …compare it with Riak, a widely adopted commercial NoSQL database system. The results show that IndexedHBase provides a data loading speed that is 6 times faster than Riak, and is … This chapter describes our research towards building an efficient and scalable storage platform for Truthy. Many existing NoSQL databases …

  20. NeisseriaBase: a specialised Neisseria genomic resource and analysis platform.

    PubMed

    Zheng, Wenning; Mutha, Naresh V R; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah; Choo, Siew Woh

    2016-01-01

    Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation, mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In the present study we present the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI), annotated using the RAST server, and then stored in a MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity and molecular weight (Da) using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house developed bioinformatics tools implemented in NeisseriaBase were developed using the Python, Perl, BioPerl and R languages. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes. NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my.
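
    The per-gene statistics mentioned in the Methods (GC content and molecular weight) reduce to simple calculations; a hedged Python sketch is shown below. The original pipeline uses in-house Perl scripts, and the residue masses here are an illustrative subset rather than a complete table.

    ```python
    # Illustrative per-CDS statistics of the kind NeisseriaBase reports: GC content of a
    # coding sequence and an approximate average mass of the translated protein.
    AVG_RESIDUE_MASS = {  # average residue masses in Da, rounded; illustrative subset only
        "A": 71.08, "G": 57.05, "L": 113.16, "K": 128.17, "S": 87.08, "V": 99.13,
    }
    WATER = 18.02  # mass of the terminal water molecule, Da

    def gc_content(seq: str) -> float:
        seq = seq.upper()
        return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

    def protein_mass(peptide: str) -> float:
        return sum(AVG_RESIDUE_MASS[aa] for aa in peptide.upper()) + WATER

    print(f"GC% = {gc_content('ATGGCGCTGAAA'):.1f}")      # -> 50.0
    print(f"mass = {protein_mass('GALVKS'):.2f} Da")
    ```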

  1. NeisseriaBase: a specialised Neisseria genomic resource and analysis platform

    PubMed Central

    Zheng, Wenning; Mutha, Naresh V.R.; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S.; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah

    2016-01-01

    Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation, mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In the present study we present the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI), annotated using the RAST server, and then stored in a MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity and molecular weight (Da) using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house developed bioinformatics tools implemented in NeisseriaBase were developed using the Python, Perl, BioPerl and R languages. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes. NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my. PMID:27017950

  2. Normative Databases for Imaging Instrumentation.

    PubMed

    Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray

    2015-08-01

    To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments.

  3. Normative Databases for Imaging Instrumentation

    PubMed Central

    Realini, Tony; Zangwill, Linda; Flanagan, John; Garway-Heath, David; Patella, Vincent Michael; Johnson, Chris; Artes, Paul; Ben Gaddie, I.; Fingeret, Murray

    2015-01-01

    Purpose To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. Methods A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Results Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer’s database differs in size, eligibility criteria, and ethnic make-up, among other key features. Conclusions The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments. PMID:25265003

  4. Using SQL Databases for Sequence Similarity Searching and Analysis.

    PubMed

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. Copyright © 2017 John Wiley & Sons, Inc.
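
    The general pattern of loading similarity-search results into a relational table and querying them can be sketched with SQLite, as below. The table layout and example hits are hypothetical and are not the seqdb_demo/search_demo schema described in the unit.

    ```python
    # Hedged sketch of the general idea: load similarity-search hits into a relational
    # table and summarize significant homologs per taxon with a single SQL query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE hits (
        query TEXT, subject TEXT, taxon TEXT, evalue REAL)""")
    conn.executemany("INSERT INTO hits VALUES (?, ?, ?, ?)", [
        ("ecoli_P001", "HUMAN_Q1", "Homo sapiens", 1e-30),   # made-up identifiers
        ("ecoli_P001", "YEAST_Y9", "S. cerevisiae", 2e-12),
        ("ecoli_P002", "YEAST_Y3", "S. cerevisiae", 5e-8),
    ])
    for taxon, n in conn.execute(
            "SELECT taxon, COUNT(*) FROM hits WHERE evalue < 1e-5 GROUP BY taxon"):
        print(taxon, n)
    conn.close()
    ```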

  5. SwePep, a database designed for endogenous peptides and mass spectrometry.

    PubMed

    Fälth, Maria; Sköld, Karl; Norrman, Mathias; Svensson, Marcus; Fenyö, David; Andren, Per E

    2006-06-01

    A new database, SwePep, specifically designed for endogenous peptides, has been constructed to significantly speed up the identification process from complex tissue samples utilizing mass spectrometry. In the identification process, the experimental peptide masses are compared with the peptide masses stored in the database, both with and without possible post-translational modifications. This intermediate identification step is fast and singles out peptides that are potential endogenous peptides, which can later be confirmed with tandem mass spectrometry data. Successful applications of this methodology are presented. The SwePep database is a relational database developed using MySQL and Java. The database contains 4180 annotated endogenous peptides from different tissues originating from 394 different species as well as 50 novel peptides from brain tissue identified in our laboratory. Information about the peptides, including mass, isoelectric point, sequence, and precursor protein, is also stored in the database. This new approach holds great potential for removing the bottleneck that occurs during the identification process in the field of peptidomics. The SwePep database is available to the public.
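
    A toy version of the intermediate matching step is sketched below: an observed peptide mass is compared against stored monoisotopic masses, with and without a few common modifications. The tolerance and modification list are assumptions rather than SwePep's actual parameters.

    ```python
    # Illustrative mass-matching step: compare an experimental mass against database
    # masses, allowing a small set of post-translational modification shifts.
    PPM_TOL = 10.0                                            # assumed tolerance
    MODS = {"none": 0.0, "phospho": 79.9663, "acetyl": 42.0106}  # monoisotopic shifts, Da

    def matches(observed_mass, database, tol_ppm=PPM_TOL):
        hits = []
        for name, db_mass in database.items():
            for mod, shift in MODS.items():
                candidate = db_mass + shift
                if abs(observed_mass - candidate) / candidate * 1e6 <= tol_ppm:
                    hits.append((name, mod))
        return hits

    peptides = {"substance P": 1346.7281, "neurotensin": 1671.9097}  # example masses
    print(matches(1426.6944, peptides))   # matches substance P with a phospho shift
    ```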

  6. The EMBL nucleotide sequence database

    PubMed Central

    Stoesser, Guenter; Baker, Wendy; van den Broek, Alexandra; Camon, Evelyn; Garcia-Pastor, Maria; Kanz, Carola; Kulikova, Tamara; Lombard, Vincent; Lopez, Rodrigo; Parkinson, Helen; Redaschi, Nicole; Sterk, Peter; Stoehr, Peter; Tuli, Mary Ann

    2001-01-01

    The EMBL Nucleotide Sequence Database (http://www.ebi.ac.uk/embl/) is maintained at the European Bioinformatics Institute (EBI) in an international collaboration with the DNA Data Bank of Japan (DDBJ) and GenBank at the NCBI (USA). Data is exchanged amongst the collaborating databases on a daily basis. The major contributors to the EMBL database are individual authors and genome project groups. Webin is the preferred web-based submission system for individual submitters, whilst automatic procedures allow incorporation of sequence data from large-scale genome sequencing centres and from the European Patent Office (EPO). Database releases are produced quarterly. Network services allow free access to the most up-to-date data collection via ftp, email and World Wide Web interfaces. EBI’s Sequence Retrieval System (SRS), a network browser for databanks in molecular biology, integrates and links the main nucleotide and protein databases plus many specialized databases. For sequence similarity searching a variety of tools (e.g. Blitz, Fasta, BLAST) are available which allow external users to compare their own sequences against the latest data in the EMBL Nucleotide Sequence Database and SWISS-PROT. PMID:11125039

  7. Extension of the COG and arCOG databases by amino acid and nucleotide sequences

    PubMed Central

    Meereis, Florian; Kaufmann, Michael

    2008-01-01

    Background The current versions of the COG and arCOG databases, both excellent frameworks for studies in comparative and functional genomics, do not contain the nucleotide sequences corresponding to their protein or protein domain entries. Results Using sequence information obtained from GenBank flat files covering the completely sequenced genomes of the COG and arCOG databases, we constructed NUCOCOG (nucleotide sequences containing COG databases) as an extended version including all nucleotide sequences and, in addition, the amino acid sequences originally utilized to construct the current COG and arCOG databases. We make available three comprehensive single XML files containing the complete databases including all sequence information. In addition, we provide a web interface as a utility suitable to browse the NUCOCOG database for sequence retrieval. The database is accessible at . Conclusion NUCOCOG offers the possibility to analyze any sequence-related property in the context of the COG and arCOG framework simply by using scripting languages such as Perl applied to a single large XML document. PMID:19014535
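
    Analyzing the released XML files with a scripting language, as the authors suggest, might look like the sketch below, which streams through a large file and reports the nucleotide length per entry. The element and attribute names are hypothetical, not the real NUCOCOG schema.

    ```python
    # Sketch of pulling sequences out of a single large XML file; iterparse keeps
    # memory use low. Tag and attribute names below are placeholders, not NUCOCOG's.
    import xml.etree.ElementTree as ET

    def nucleotide_lengths(xml_path):
        for event, elem in ET.iterparse(xml_path, events=("end",)):
            if elem.tag == "entry":                      # hypothetical element name
                cog = elem.get("cog_id")                 # hypothetical attribute name
                seq = elem.findtext("nucleotide_sequence", default="")
                yield cog, len(seq)
                elem.clear()                             # free memory as we go

    # for cog, length in nucleotide_lengths("nucocog.xml"):   # placeholder file name
    #     print(cog, length)
    ```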

  8. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency

    PubMed Central

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. To find an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
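
    A minimal sketch of writing and reading genomic records with the Cassandra driver is shown below; the keyspace, table layout and contact point are assumptions for illustration and do not reproduce the schema evaluated in the paper.

    ```python
    # Hedged sketch of storing processed reads in Cassandra (not the paper's schema).
    # Requires a running cluster and the DataStax driver: pip install cassandra-driver
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])          # contact point is an assumption
    session = cluster.connect()
    session.execute("""CREATE KEYSPACE IF NOT EXISTS genomics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""")
    session.execute("""CREATE TABLE IF NOT EXISTS genomics.reads (
        sample_id text, read_id text, sequence text, quality text,
        PRIMARY KEY (sample_id, read_id))""")
    insert = session.prepare(
        "INSERT INTO genomics.reads (sample_id, read_id, sequence, quality) VALUES (?, ?, ?, ?)")
    session.execute(insert, ("S1", "r0001", "ACGTACGT", "IIIIIIII"))
    rows = session.execute("SELECT COUNT(*) FROM genomics.reads WHERE sample_id = 'S1'")
    print(rows.one()[0])
    cluster.shutdown()
    ```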

  9. Information Retrieval in Telemedicine: a Comparative Study on Bibliographic Databases

    PubMed Central

    Ahmadi, Maryam; Sarabi, Roghayeh Ershad; Orak, Roohangiz Jamshidi; Bahaadinbeigy, Kambiz

    2015-01-01

    Background and Aims: The first step in each systematic review is selection of the most valid database that can provide the highest number of relevant references. This study was carried out to determine the most suitable database for information retrieval in the telemedicine field. Methods: The CINAHL, PubMed, Web of Science and Scopus databases were searched for telemedicine matched with education, cost-benefit and patient satisfaction. After analysis of the obtained results, the accuracy coefficient, sensitivity, uniqueness and overlap of the databases were calculated. Results: The studied databases differed in the number of retrieved articles. PubMed was identified as the most suitable database for retrieving information on the selected topics, with accuracy and sensitivity ratios of 50.7% and 61.4%, respectively. The uniqueness percentage of retrieved articles ranged from 38% for PubMed to 3.0% for CINAHL. The highest overlap rate (18.6%) was found between PubMed and Web of Science. Less than 1% of articles have been indexed in all searched databases. Conclusion: PubMed is suggested as the most suitable database for starting a search in telemedicine. PMID:26236086
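
    The reported metrics (precision/accuracy coefficient, sensitivity, uniqueness and overlap) can be computed from sets of de-duplicated article identifiers, as in the rough sketch below; the identifiers, the gold-standard relevant set, and the choice of denominator for uniqueness are invented for illustration.

    ```python
    # Toy retrieval-metric calculation over made-up article identifiers.
    retrieved = {
        "PubMed":       {"a1", "a2", "a3", "a4", "a5"},
        "Scopus":       {"a2", "a3", "a6"},
        "WebOfScience": {"a3", "a4", "a7"},
    }
    relevant = {"a1", "a2", "a3", "a4", "a6", "a7"}        # gold-standard relevant set
    all_retrieved = set.union(*retrieved.values())

    for db, hits in retrieved.items():
        precision = len(hits & relevant) / len(hits)       # "accuracy coefficient"
        sensitivity = len(hits & relevant) / len(relevant)
        unique = hits - set.union(*(h for d, h in retrieved.items() if d != db))
        print(f"{db}: precision={precision:.2f} sensitivity={sensitivity:.2f} "
              f"unique={len(unique) / len(all_retrieved):.2f}")

    overlap = len(retrieved["PubMed"] & retrieved["WebOfScience"]) / len(all_retrieved)
    print(f"PubMed/WoS overlap = {overlap:.2f}")
    ```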

  10. Information Retrieval in Telemedicine: a Comparative Study on Bibliographic Databases.

    PubMed

    Ahmadi, Maryam; Sarabi, Roghayeh Ershad; Orak, Roohangiz Jamshidi; Bahaadinbeigy, Kambiz

    2015-06-01

    The first step in each systematic review is selection of the most valid database that can provide the highest number of relevant references. This study was carried out to determine the most suitable database for information retrieval in the telemedicine field. The CINAHL, PubMed, Web of Science and Scopus databases were searched for telemedicine matched with education, cost-benefit and patient satisfaction. After analysis of the obtained results, the accuracy coefficient, sensitivity, uniqueness and overlap of the databases were calculated. The studied databases differed in the number of retrieved articles. PubMed was identified as the most suitable database for retrieving information on the selected topics, with accuracy and sensitivity ratios of 50.7% and 61.4%, respectively. The uniqueness percentage of retrieved articles ranged from 38% for PubMed to 3.0% for CINAHL. The highest overlap rate (18.6%) was found between PubMed and Web of Science. Less than 1% of articles have been indexed in all searched databases. PubMed is suggested as the most suitable database for starting a search in telemedicine; after PubMed, Scopus and Web of Science can retrieve about 90% of the relevant articles.

  11. Overlap and diversity in antimicrobial peptide databases: compiling a non-redundant set of sequences.

    PubMed

    Aguilera-Mendoza, Longendri; Marrero-Ponce, Yovani; Tellez-Ibarra, Roberto; Llorente-Quesada, Monica T; Salgado, Jesús; Barigye, Stephen J; Liu, Jun

    2015-08-01

    The large variety of antimicrobial peptide (AMP) databases developed to date are characterized by a substantial overlap of data and similarity of sequences. Our goals are to analyze the levels of redundancy for all available AMP databases and use this information to build a new non-redundant sequence database. For this purpose, a new software tool is introduced. A comparative study of 25 AMP databases reveals the overlap and diversity among them and the internal diversity within each database. The overlap analysis shows that only one database (Peptaibol) contains exclusive data, not present in any other, whereas all sequences in the LAMP_Patent database are included in CAMP_Patent. However, the majority of databases have their own set of unique sequences, as well as some overlap with other databases. The complete set of non-duplicate sequences comprises 16 990 cases, which is almost half of the total number of reported peptides. On the other hand, the diversity analysis identifies the most and least diverse databases and proves that all databases exhibit some level of redundancy. Finally, we present a new parallel-free software, named Dover Analyzer, developed to compute the overlap and diversity between any number of databases and compile a set of non-redundant sequences. These results are useful for selecting or building a suitable representative set of AMPs, according to specific needs. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
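
    In the same spirit as the Dover Analyzer workflow described above, a minimal sketch of compiling a non-redundant set and counting cross-database overlap is shown below, using a handful of example peptide sequences.

    ```python
    # Minimal sketch of building a non-redundant peptide set across several databases
    # and counting how many sequences are shared; entries here are just examples.
    databases = {
        "DB_A": ["GIGKFLHSAKKFGKAFVGEIMNS", "FLPIIAKLLSGLL"],
        "DB_B": ["GIGKFLHSAKKFGKAFVGEIMNS", "KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK"],
    }

    non_redundant = {}                      # sequence -> set of databases containing it
    for db, seqs in databases.items():
        for seq in seqs:
            non_redundant.setdefault(seq.upper(), set()).add(db)

    print(f"{len(non_redundant)} unique sequences")
    shared = sum(1 for dbs in non_redundant.values() if len(dbs) > 1)
    print(f"{shared} sequences shared by more than one database")
    ```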

  12. A database for the analysis of immunity genes in Drosophila: PADMA database.

    PubMed

    Lee, Mark J; Mondal, Ariful; Small, Chiyedza; Paddibhatla, Indira; Kawaguchi, Akira; Govind, Shubha

    2011-01-01

    While microarray experiments generate voluminous data, discerning trends that support an existing or alternative paradigm is challenging. To synergize hypothesis building and testing, we designed the Pathogen Associated Drosophila MicroArray (PADMA) database for easy retrieval and comparison of microarray results from immunity-related experiments (www.padmadatabase.org). PADMA also allows biologists to upload their microarray-results and compare it with datasets housed within PADMA. We tested PADMA using a preliminary dataset from Ganaspis xanthopoda-infected fly larvae, and uncovered unexpected trends in gene expression, reshaping our hypothesis. Thus, the PADMA database will be a useful resource to fly researchers to evaluate, revise, and refine hypotheses.

  13. Evaluation of Electronic Healthcare Databases for Post-Marketing Drug Safety Surveillance and Pharmacoepidemiology in China.

    PubMed

    Yang, Yu; Zhou, Xiaofeng; Gao, Shuangqing; Lin, Hongbo; Xie, Yanming; Feng, Yuji; Huang, Kui; Zhan, Siyan

    2018-01-01

    Electronic healthcare databases (EHDs) are used increasingly for post-marketing drug safety surveillance and pharmacoepidemiology in Europe and North America. However, few studies have examined the potential of these data sources in China. Three major types of EHDs in China (i.e., a regional community-based database, a national claims database, and an electronic medical records [EMR] database) were selected for evaluation. Forty core variables were derived based on the US Mini-Sentinel (MS) Common Data Model (CDM) as well as the data features in China that would be desirable to support drug safety surveillance. An email survey of these core variables and eight general questions as well as follow-up inquiries on additional variables was conducted. These 40 core variables across the three EHDs and all variables in each EHD along with those in the US MS CDM and Observational Medical Outcomes Partnership (OMOP) CDM were compared for availability and labeled based on specific standards. All of the EHDs' custodians confirmed their willingness to share their databases with academic institutions after appropriate approval was obtained. The regional community-based database contained 1.19 million people in 2015 with 85% of core variables. Resampled annually nationwide, the national claims database included 5.4 million people in 2014 with 55% of core variables, and the EMR database included 3 million inpatients from 60 hospitals in 2015 with 80% of core variables. Compared with MS CDM or OMOP CDM, the proportion of variables across the three EHDs available or able to be transformed/derived from the original sources are 24-83% or 45-73%, respectively. These EHDs provide potential value to post-marketing drug safety surveillance and pharmacoepidemiology in China. Future research is warranted to assess the quality and completeness of these EHDs or additional data sources in China.

  14. Inaccurate Ascertainment of Morbidity and Mortality due to Influenza in Administrative Databases: A Population-Based Record Linkage Study

    PubMed Central

    Muscatello, David J.; Amin, Janaki; MacIntyre, C. Raina; Newall, Anthony T.; Rawlinson, William D.; Sintchenko, Vitali; Gilmour, Robin; Thackway, Sarah

    2014-01-01

    Background Historically, counting influenza recorded in administrative health outcome databases has been considered insufficient to estimate influenza attributable morbidity and mortality in populations. We used database record linkage to evaluate whether modern databases have similar limitations. Methods Person-level records were linked across databases of laboratory notified influenza, emergency department (ED) presentations, hospital admissions and death registrations, from the population (∼6.9 million) of New South Wales (NSW), Australia, 2005 to 2008. Results There were 2568 virologically diagnosed influenza infections notified. Among those, 25% of 40 who died, 49% of 1451 with a hospital admission and 7% of 1742 with an ED presentation had influenza recorded on the respective database record. Compared with persons aged ≥65 years and residents of regional and remote areas, respectively, children and residents of major cities were more likely to have influenza coded on their admission record. Compared with older persons and admitted patients, respectively, working age persons and non-admitted persons were more likely to have influenza coded on their ED record. On both ED and admission records, persons with influenza type A infection were more likely than those with type B infection to have influenza coded. Among death registrations, hospital admissions and ED presentations with influenza recorded as a cause of illness, 15%, 28% and 1.4%, respectively, also had laboratory notified influenza. Time trends in counts of influenza recorded on the ED, admission and death databases reflected the trend in counts of virologically diagnosed influenza. Conclusions A minority of the death, hospital admission and ED records for persons with a virologically diagnosed influenza infection identified influenza as a cause of illness. Few database records with influenza recorded as a cause had laboratory confirmation. The databases have limited value for estimating incidence of influenza outcomes, but can be used for monitoring variation in incidence over time. PMID:24875306

  15. Industry ties in otolaryngology: initial insights from the physician payment sunshine act.

    PubMed

    Rathi, Vinay K; Samuel, Andre M; Mehra, Saral

    2015-06-01

    To characterize nonresearch payments made by industry to otolaryngologists in order to explore how the potential for conflicts of interests varies among otolaryngologists and compares between otolaryngologists and other surgical specialists. Retrospective cross-sectional database analysis. Open Payments program database recently released by Centers for Medicare and Medicaid Services. Surgeons nationwide who were identified as receiving nonresearch payment from industry in accordance with the Physician Payment Sunshine Act. The proportion of otolaryngologists receiving payment, the mean payment per otolaryngologist, and the standard deviation thereof were determined using the Open Payments database and compared to other surgical specialties. Otolaryngologists were further compared by specialization, census region, sponsor, and payment amount. Less than half of otolaryngologists (48.1%) were reported as receiving payments over the study period, the second smallest proportion among surgical specialties. Otolaryngologists received the lowest mean payment per compensated individual ($573) compared to other surgical specialties. Although otolaryngology had the smallest variance in payment among surgical specialties (SD, $2806), the distribution was skewed by top earners; the top 10% of earners accounted for 87% ($2,199,254) of all payment to otolaryngologists. Otolaryngologists in the West census region were less likely to receive payments (38.6%, P < .001). Over the study period, otolaryngologists appeared to have more limited financial ties with industry compared to other surgeons, though variation exists within otolaryngology. Further refinement of the Open Payments database is needed to explore differences between otolaryngologists and leverage payment information as a tool for self-regulation. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  16. A systematic review of model-based economic evaluations of diagnostic and therapeutic strategies for lower extremity artery disease.

    PubMed

    Vaidya, Anil; Joore, Manuela A; ten Cate-Hoek, Arina J; Kleinegris, Marie-Claire; ten Cate, Hugo; Severens, Johan L

    2014-01-01

    Lower extremity artery disease (LEAD) is a sign of widespread atherosclerosis, also affecting the coronary, cerebral and renal arteries, and is associated with increased risk of cardiovascular events. Many economic evaluations have been published for LEAD due to its clinical, social and economic importance. The aim of this systematic review was to assess modelling methods used in published economic evaluations in the field of LEAD. Our review appraised and compared the general characteristics, model structure and methodological quality of published models. The electronic databases MEDLINE and EMBASE were searched until February 2013 via the OVID interface. The Cochrane Database of Systematic Reviews, the Health Technology Assessment database hosted by the National Institute for Health Research, and the National Health Service Economic Evaluation Database (NHS EED) were also searched. The methodological quality of the included studies was assessed by using the Philips checklist. Sixteen model-based economic evaluations were identified and included. Eleven models compared therapeutic health technologies; three models compared diagnostic tests and two models compared a combination of diagnostic and therapeutic options for LEAD. Results of this systematic review revealed an acceptable to low methodological quality of the included studies. Methodological diversity and insufficient information posed a challenge for valid comparison of the included studies. In conclusion, there is a need for transparent, methodologically comparable and scientifically credible model-based economic evaluations in the field of LEAD. Future modelling studies should include clinically and economically important cardiovascular outcomes to reflect the wider impact of LEAD on individual patients and on society.

  17. Comparative effectiveness analysis of anticoagulant strategies in a large observational database of percutaneous coronary interventions.

    PubMed

    Wise, Gregory R; Schwartz, Brian P; Dittoe, Nathaniel; Safar, Ammar; Sherman, Steven; Bowdy, Bruce; Hahn, Harvey S

    2012-06-01

    Percutaneous coronary intervention (PCI) is the most commonly used procedure for coronary revascularization. There are multiple adjuvant anticoagulation strategies available. In this era of cost containment, we performed a comparative effectiveness analysis of clinical outcomes and cost of the major anticoagulant strategies across all types of PCI procedures in a large observational database. A retrospective, comparative effectiveness analysis of the Premier observational database was conducted to determine the impact of anticoagulant treatment on outcomes. Multiple linear regression and logistic regression models were used to assess the association of initial antithrombotic treatment with outcomes while controlling for other factors. A total of 458,448 inpatient PCI procedures with known antithrombotic regimen from 299 hospitals between January 1, 2004 and March 31, 2008 were identified. Compared to patients treated with heparin plus glycoprotein IIb/IIIa inhibitor (GPI), bivalirudin was associated with a 41% relative risk reduction (RRR) for inpatient mortality, a 44% RRR for clinically apparent bleeding, and a 37% RRR for any transfusion. Furthermore, treatment with bivalirudin alone resulted in a cost savings of $976 per case. Similar results were seen between bivalirudin and heparin in all end-points. Combined use of both bivalirudin and GPI substantially attenuated the cost benefits demonstrated with bivalirudin alone. Bivalirudin use was associated with both improved clinical outcomes and decreased hospital costs in this large "real-world" database. To our knowledge, this study is the first to demonstrate the ideal comparative effectiveness end-point of both improved clinical outcomes with decreased costs in PCI. ©2012, Wiley Periodicals, Inc.

  18. Field Validation of Food Service Listings: A Comparison of Commercial and Online Geographic Information System Databases

    PubMed Central

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-01-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database; however, when matching criteria were more conservative, there were no observed differences in error between the databases. PMID:23066385
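
    Positional error of the kind summarized above can be computed as the great-circle distance between each database coordinate and its GPS-measured location, then reduced to percentiles. The coordinates in this sketch are invented and the study's matching rules are not reproduced.

    ```python
    # Hedged sketch of the positional-accuracy summary: haversine distance per place,
    # then the 25th/50th/75th percentiles of the error distribution.
    import math
    from statistics import quantiles

    def haversine_m(lat1, lon1, lat2, lon2):
        r = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    pairs = [  # (db_lat, db_lon, gps_lat, gps_lon); invented coordinates
        (44.2312, -76.4860, 44.2313, -76.4858),
        (44.2400, -76.5000, 44.2403, -76.5004),
        (44.2290, -76.4810, 44.2290, -76.4815),
    ]
    errors = [haversine_m(*p) for p in pairs]
    q1, q2, q3 = quantiles(errors, n=4)
    print(f"25%/50%/75% positional error: {q1:.0f} / {q2:.0f} / {q3:.0f} m")
    ```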

  19. Field validation of food service listings: a comparison of commercial and online geographic information system databases.

    PubMed

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-08-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database; however, when matching criteria were more conservative, there were no observed differences in error between the databases.

  20. SNaX: A Database of Supernova X-Ray Light Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Mathias; Dwarkadas, Vikram V., E-mail: Mathias_Ross@msn.com, E-mail: vikram@oddjob.uchicago.edu

    We present the Supernova X-ray Database (SNaX), a compilation of the X-ray data from young supernovae (SNe). The database includes the X-ray fluxes and luminosities of young SNe, from days to years after outburst. The original goal and intent of this study was to present a database of Type IIn SNe (SNe IIn), which we have accomplished. Our ongoing goal is to expand the database to include all SNe for which published data are available. The database interface allows one to search for SNe using various criteria, plot all or selected data points, and download both the data and the plot. The plotting facility allows for significant customization. There is also a facility for the user to submit data that can be directly incorporated into the database. We include an option to fit the decay of any given SN light curve with a power-law. The database includes a conversion of most data points to a common 0.3–8 keV band so that SN light curves may be directly compared with each other. A mailing list has been set up to disseminate information about the database. We outline the structure and function of the database, describe its various features, and outline the plans for future expansion.
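
    The power-law fitting option amounts to a straight-line fit in log-log space, as in the hedged sketch below; the light-curve points are fabricated and the fitting details are not necessarily those used by SNaX.

    ```python
    # Illustrative power-law fit L(t) = A * t**(-b) to a supernova X-ray light curve,
    # done as a linear fit in log-log space with fabricated data points.
    import numpy as np

    t_days = np.array([30.0, 100.0, 300.0, 1000.0])
    lum = np.array([4.0e39, 1.5e39, 6.0e38, 2.2e38])   # 0.3-8 keV luminosities, erg/s

    slope, intercept = np.polyfit(np.log10(t_days), np.log10(lum), 1)
    print(f"decay index b = {-slope:.2f}, A = {10**intercept:.2e}")
    ```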

  1. Comprehensive Assessments of RNA-seq by the SEQC Consortium: FDA-Led Efforts Advance Precision Medicine.

    PubMed

    Xu, Joshua; Gong, Binsheng; Wu, Leihong; Thakkar, Shraddha; Hong, Huixiao; Tong, Weida

    2016-03-15

    Studies on gene expression in response to therapy have led to the discovery of pharmacogenomics biomarkers and advances in precision medicine. Whole transcriptome sequencing (RNA-seq) is an emerging tool for profiling gene expression and has received wide adoption in the biomedical research community. However, its value in regulatory decision making requires rigorous assessment and consensus between various stakeholders, including the research community, regulatory agencies, and industry. The FDA-led SEquencing Quality Control (SEQC) consortium has made considerable progress in this direction, and is the subject of this review. Specifically, three RNA-seq platforms (Illumina HiSeq, Life Technologies SOLiD, and Roche 454) were extensively evaluated at multiple sites to assess cross-site and cross-platform reproducibility. The results demonstrated that relative gene expression measurements were consistently comparable across labs and platforms, but not so for the measurement of absolute expression levels. As part of the quality evaluation several studies were included to evaluate the utility of RNA-seq in clinical settings and safety assessment. The neuroblastoma study profiled tumor samples from 498 pediatric neuroblastoma patients by both microarray and RNA-seq. RNA-seq offers more utilities than microarray in determining the transcriptomic characteristics of cancer. However, RNA-seq and microarray-based models were comparable in clinical endpoint prediction, even when including additional features unique to RNA-seq beyond gene expression. The toxicogenomics study compared microarray and RNA-seq profiles of the liver samples from rats exposed to 27 different chemicals representing multiple toxicity modes of action. Cross-platform concordance was dependent on chemical treatment and transcript abundance. Though both RNA-seq and microarray are suitable for developing gene expression based predictive models with comparable prediction performance, RNA-seq offers advantages over microarray in profiling genes with low expression. The rat BodyMap study provided a comprehensive rat transcriptomic body map by performing RNA-Seq on 320 samples from 11 organs in either sex of juvenile, adolescent, adult and aged Fischer 344 rats. Lastly, the transferability study demonstrated that signature genes of predictive models are reciprocally transferable between microarray and RNA-seq data for model development using a comprehensive approach with two large clinical data sets. This result suggests continued usefulness of legacy microarray data in the coming RNA-seq era. In conclusion, the SEQC project enhances our understanding of RNA-seq and provides valuable guidelines for RNA-seq based clinical application and safety evaluation to advance precision medicine.

  2. Network control processor for a TDMA system

    NASA Astrophysics Data System (ADS)

    Suryadevara, Omkarmurthy; Debettencourt, Thomas J.; Shulman, R. B.

    Two unique aspects of designing a network control processor (NCP) to monitor and control a demand-assigned, time-division multiple-access (TDMA) network are described. The first involves the implementation of redundancy by synchronizing the databases of two geographically remote NCPs. The two sets of databases are kept in synchronization by collecting data on both systems, transferring databases, sending incremental updates, and the parallel updating of databases. A periodic audit compares the checksums of the databases to ensure synchronization. The second aspect involves the use of a tracking algorithm to dynamically reallocate TDMA frame space. This algorithm detects and tracks current and long-term load changes in the network. When some portions of the network are overloaded while others have excess capacity, the algorithm automatically calculates and implements a new burst time plan.
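
    The periodic checksum audit can be illustrated with a small sketch that compares order-independent digests of each table on the two NCPs and flags tables needing an incremental update. The table contents and digest choice are assumptions, not the actual NCP database format or protocol.

    ```python
    # Sketch of a checksum audit between two replicated configuration databases.
    import hashlib
    import json

    def table_checksum(rows):
        """Order-independent checksum over a table's rows."""
        digest = hashlib.sha256()
        for row in sorted(json.dumps(r, sort_keys=True) for r in rows):
            digest.update(row.encode())
        return digest.hexdigest()

    primary = {"bursts": [{"id": 1, "slots": 4}], "terminals": [{"id": "T1", "site": "A"}]}
    backup  = {"bursts": [{"id": 1, "slots": 4}], "terminals": [{"id": "T1", "site": "B"}]}

    for table in primary:
        if table_checksum(primary[table]) != table_checksum(backup.get(table, [])):
            print(f"table '{table}' out of sync; schedule incremental update")
    ```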

  3. Identifying work-related motor vehicle crashes in multiple databases.

    PubMed

    Thomas, Andrea M; Thygerson, Steven M; Merrill, Ray M; Cook, Lawrence J

    2012-01-01

    To compare and estimate the magnitude of work-related motor vehicle crashes in Utah using 2 probabilistically linked statewide databases. Data from 2006 and 2007 motor vehicle crash and hospital databases were joined through probabilistic linkage. Summary statistics and capture-recapture were used to describe occupants injured in work-related motor vehicle crashes and estimate the size of this population. There were 1597 occupants in the motor vehicle crash database and 1673 patients in the hospital database identified as being in a work-related motor vehicle crash. We identified 1443 occupants with at least one record from either the motor vehicle crash or hospital database indicating work-relatedness that linked to any record in the opposing database. We found that 38.7 percent of occupants injured in work-related motor vehicle crashes identified in the motor vehicle crash database did not have a primary payer code of workers' compensation in the hospital database and 40.0 percent of patients injured in work-related motor vehicle crashes identified in the hospital database did not meet our definition of a work-related motor vehicle crash in the motor vehicle crash database. Depending on how occupants injured in work-related motor crashes are identified, we estimate the population to be between 1852 and 8492 in Utah for the years 2006 and 2007. Research on single databases may lead to biased interpretations of work-related motor vehicle crashes. Combining 2 population based databases may still result in an underestimate of the magnitude of work-related motor vehicle crashes. Improved coding of work-related incidents is needed in current databases.
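
    A two-source capture-recapture (Lincoln-Petersen) estimate built from the counts quoted in the abstract is sketched below; treating the 1443 linked occupants as the overlap between sources is a simplification of the study's matching definitions.

    ```python
    # Two-source capture-recapture (Lincoln-Petersen) sketch using the reported counts.
    n_crash = 1597   # work-related occupants identified in the crash database
    n_hosp = 1673    # work-related patients identified in the hospital database
    m_both = 1443    # occupants linked across both sources (simplified overlap)

    n_hat = n_crash * n_hosp / m_both
    print(f"estimated total work-related crash occupants ~= {n_hat:.0f}")
    # -> about 1852, matching the lower end of the 1852-8492 range reported above
    ```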

  4. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  5. Object-oriented structures supporting remote sensing databases

    NASA Technical Reports Server (NTRS)

    Wichmann, Keith; Cromp, Robert F.

    1995-01-01

    Object-oriented databases show promise for modeling the complex interrelationships pervasive in scientific domains. To examine the utility of this approach, we have developed an Intelligent Information Fusion System based on this technology, and applied it to the problem of managing an active repository of remotely-sensed satellite scenes. The design and implementation of the system is compared and contrasted with conventional relational database techniques, followed by a presentation of the underlying object-oriented data structures used to enable fast indexing into the data holdings.

  6. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M; Laska, Jason A

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
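
    A toy version of the "could one model have generated both fields?" test is sketched below, using a symmetric Dirichlet-multinomial marginal likelihood for two categorical columns; this is a didactic stand-in, not the nonparametric Bayesian models of the paper.

    ```python
    # Compare the marginal likelihood of "one shared model generated both columns"
    # against "separate models" for two categorical fields (Dirichlet-multinomial).
    from collections import Counter
    from math import lgamma

    def log_marginal(counts, alpha=1.0, vocab=None):
        vocab = vocab or set(counts)
        k, n = len(vocab), sum(counts.values())
        lm = lgamma(k * alpha) - lgamma(k * alpha + n)
        for v in vocab:
            lm += lgamma(alpha + counts.get(v, 0)) - lgamma(alpha)
        return lm

    field_a = Counter(["NY", "CA", "CA", "TX", "NY"])   # made-up column values
    field_b = Counter(["CA", "CA", "NY", "WA"])
    vocab = set(field_a) | set(field_b)

    shared = log_marginal(field_a + field_b, vocab=vocab)
    separate = log_marginal(field_a, vocab=vocab) + log_marginal(field_b, vocab=vocab)
    print(f"log Bayes factor (shared vs separate) = {shared - separate:.2f}")
    ```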

  7. Inter Annual Variability of the Acoustic Propagation in the Yellow Sea Identified from a Synoptic Monthly Gridded Database as Compared with GDEM

    DTIC Science & Technology

    2016-09-01

    This research investigates the inter-annual acoustic variability in the Yellow Sea identified from a synoptic monthly gridded database as compared with GDEM. The report notes that the world climate is in fact warming due to anthropogenic causes (Anderegg et al. 2010; Solomon et al. 2009). The present research uses a 0.5' resolution, and four openly available sediment databases (Enhanced, Standard, …) are considered.

  8. Clear-water abutment and contraction scour in the Coastal Plain and Piedmont Provinces of South Carolina, 1996-99

    USGS Publications Warehouse

    Benedict, Stephen T.

    2016-01-01

    Data from this study have been compiled into a database that includes photographs, figures, observed scour depths, theoretical scour depths, limited basin characteristics, limited soil data, and theoretical hydraulic data. The database can be used to compare studied sites with unstudied sites to assess the potential for scour at the unstudied sites. In addition, the database can be used to assess the performance of various theoretical methods for predicting clear-water abutment and contraction scour.

  9. Weathering Database Technology

    ERIC Educational Resources Information Center

    Snyder, Robert

    2005-01-01

    Collecting weather data is a traditional part of a meteorology unit at the middle level. However, making connections between the data and weather conditions can be a challenge. One way to make these connections clearer is to enter the data into a database. This allows students to quickly compare different fields of data and recognize which…

  10. Commercial Aircraft Emission Scenario for 2020: Database Development and Analysis

    NASA Technical Reports Server (NTRS)

    Sutkus, Donald J., Jr.; Baughcum, Steven L.; DuBois, Douglas P.; Wey, Chowen C. (Technical Monitor)

    2003-01-01

    This report describes the development of a three-dimensional database of aircraft fuel use and emissions (NO(x), CO, and hydrocarbons) for the commercial aircraft fleet projected to 2020. Global totals of emissions and fuel burn for 2020 are compared to global totals from previous aircraft emission scenario calculations.

  11. Fuzzy Relational Databases: Representational Issues and Reduction Using Similarity Measures.

    ERIC Educational Resources Information Center

    Prade, Henri; Testemale, Claudette

    1987-01-01

    Compares and expands upon two approaches to dealing with fuzzy relational databases. The proposed similarity measure is based on a fuzzy Hausdorff distance and estimates the mismatch between two possibility distributions using a reduction process. The consequences of the reduction process on query evaluation are studied. (Author/EM)
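
    As a very simplified illustration of the mismatch idea, the sketch below computes a crisp Hausdorff distance between the supports of two discrete possibility distributions; the paper's fuzzy Hausdorff measure and reduction process are more refined than this.

    ```python
    # Crisp Hausdorff distance between the supports of two discrete possibility
    # distributions over a numeric domain (a simplification of the fuzzy measure).
    def hausdorff(support_a, support_b):
        d_ab = max(min(abs(a - b) for b in support_b) for a in support_a)
        d_ba = max(min(abs(a - b) for a in support_a) for b in support_b)
        return max(d_ab, d_ba)

    age_about_30 = [28, 29, 30, 31, 32]   # support of the fuzzy value "about 30"
    age_about_35 = [33, 34, 35, 36, 37]   # support of the fuzzy value "about 35"
    print(hausdorff(age_about_30, age_about_35))  # -> 5
    ```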

  12. PIECE 2.0: an update for the plant gene structure comparison and evolution database

    USDA-ARS?s Scientific Manuscript database

    PIECE (Plant Intron Exon Comparison and Evolution) is a web-accessible database that houses intron and exon information of plant genes. PIECE serves as a resource for biologists interested in comparing intron-exon organization and provides valuable insights into the evolution of gene structure in ...

  13. Sports Information Online: Searching the SPORT Database and Tips for Finding Sports Medicine Information Online.

    ERIC Educational Resources Information Center

    Janke, Richard V.; And Others

    1988-01-01

    The first article describes SPORT, a database providing international coverage of athletics and physical education, and compares it to other online services in terms of coverage, thesauri, possible search strategies, and actual usage. The second article reviews available online information on sports medicine. (CLB)

  14. A spatial classification and database for management, research, and policy making: The Great Lakes aquatic habitat framework

    EPA Science Inventory

    Managing the world’s largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical, basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a hierarchi...

  15. New Technology, New Questions: Using an Internet Database in Chemistry.

    ERIC Educational Resources Information Center

    Hayward, Roger

    1996-01-01

    Describes chemistry software that is part of a balanced educational program. Provides several applications including graphs of various relationships among the elements. Includes a brief historical treatment of the periodic table and compares the traditional historical approach with perspectives gained by manipulating an electronic database. (DDR)

  16. IDENTIFICATION OF BIOLOGICALLY RELEVANT GENES USING A DATABASE OF RAT LIVER AND KIDNEY BASELINE GENE EXPRESSION

    EPA Science Inventory

    Microarray data from independent labs and studies can be compared to potentially identify toxicologically and biologically relevant genes. The Baseline Animal Database working group of HESI was formed to assess baseline gene expression from microarray data derived from control or...

  17. Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.

    PubMed

    Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing

    2018-04-06

    Metaproteomics provides a direct measure of functional information by investigating all proteins expressed by a microbiota. However, because of the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. With a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage of all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), which employs a merged database consisting of both taxonomy-guided reference protein sequences from public databases and proteins from the metagenome assembly. Applying the MT strategy to a mock microbial mixture detected about twice as many peptides as the metagenomic database alone. An evaluation of the reliability of taxonomic attribution showed a misassignment rate comparable to that obtained with an a priori matched database. We also evaluated the MT strategy on a human gut microbial sample, where it found 1.7 times as many peptides as a standard metagenomic database. In conclusion, the MT strategy allows the construction of databases that provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.
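
    A rough sketch of the database-construction step behind such an MT-style strategy is shown below: proteins from taxonomy-guided public references and from the metagenome assembly are concatenated into a single FASTA search database. The file names, header prefixes, and the exact-duplicate filter are assumptions for illustration; they are not taken from the paper.

```python
# Hypothetical merge of taxonomy-guided reference proteins with
# metagenome-assembly proteins into one FASTA file for the search engine.

def read_fasta(path):
    """Yield (header, sequence) records from a FASTA file."""
    header, seq = None, []
    with open(path) as handle:
        for line in handle:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            elif line:
                seq.append(line)
        if header is not None:
            yield header, "".join(seq)

def build_merged_database(reference_fasta, assembly_fasta, out_fasta):
    """Merge the two sources, skipping exact duplicate sequences."""
    seen = set()
    with open(out_fasta, "w") as out:
        for source, path in (("ref", reference_fasta), ("asm", assembly_fasta)):
            for header, seq in read_fasta(path):
                if seq in seen:
                    continue
                seen.add(seq)
                out.write(f">{source}|{header}\n{seq}\n")

# Example call (file names are placeholders):
# build_merged_database("taxonomy_guided_refs.fasta",
#                       "metagenome_assembly_proteins.fasta",
#                       "merged_search_db.fasta")
```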

  18. GenColors-based comparative genome databases for small eukaryotic genomes.

    PubMed

    Felder, Marius; Romualdi, Alessandro; Petzold, Andreas; Platzer, Matthias; Sühnel, Jürgen; Glöckner, Gernot

    2013-01-01

    Many sequence data repositories give a quick and easily accessible overview of genomes and their annotations. Less widespread is the possibility to compare related genomes with each other in a common database environment. We have previously described the GenColors database system (http://gencolors.fli-leibniz.de) and its applications to a number of bacterial genomes such as Borrelia, Legionella, Leptospira and Treponema. This system has an emphasis on genome comparison. It combines data from related genomes and provides the user with an extensive set of visualization and analysis tools. Eukaryotic genomes are normally larger than prokaryotic genomes and thus pose additional challenges for such a system. We have, therefore, adapted GenColors to also handle larger datasets of small eukaryotic genomes and to display eukaryotic gene structures. Further recent developments include whole genome views, genome list options and, for bacterial genome browsers, the display of horizontal gene transfer predictions. Two new GenColors-based databases were set up, one for two fungal species (http://fgb.fli-leibniz.de) and one for four social amoebas (http://sacgb.fli-leibniz.de). These resources provide single entry points to related genomes for the fungal and amoebozoa research communities and other interested users, greatly facilitating comparative genomics approaches.

  19. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In the era of high-throughput DNA sequencing, high-performance analysis of DNA sequences is increasingly important, yet computer-supported DNA analysis remains a time-intensive task. In this paper we explore the potential of a new in-memory database technology, SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was also run in memory, using its integrated memory engine for database table creation. The stored procedures perform exact and inexact searches of DNA reads within the reference genome GRCh37. Because of technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on that platform. Performance was therefore compared using the execution times of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which indicates high potential in the new in-memory concepts and motivates further development of DNA analysis procedures.
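
    The abstract does not include the stored procedures themselves, so the sketch below only illustrates the exact-search step being benchmarked, in plain Python rather than SQL: find every position at which a read occurs exactly within a reference sequence. The toy reference and reads are invented; a real pipeline would query an index of GRCh37 (as BWA does) rather than scan the sequence.

```python
# Exact matching of short reads against a reference sequence (toy example).
def exact_matches(reference, read):
    """Return all 0-based start positions where read occurs in reference."""
    positions, start = [], reference.find(read)
    while start != -1:
        positions.append(start)
        start = reference.find(read, start + 1)
    return positions

reference = "ACGTACGTGGACGTTACG"
reads = ["ACGT", "GGA", "TTT"]
for read in reads:
    print(read, exact_matches(reference, read))
# e.g., ACGT -> [0, 4, 10]; GGA -> [8]; TTT -> []
```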

  20. Comparative effectiveness research in hand surgery.

    PubMed

    Johnson, Shepard P; Chung, Kevin C

    2014-08-01

    Comparative effectiveness research (CER) is a concept initiated by the Institute of Medicine and financially supported by the federal government. The primary objective of CER is to improve decision making in medicine. This research is intended to evaluate the effectiveness, benefits, and harmful effects of alternative interventions. CER studies are commonly large, simple, observational, and conducted using electronic databases. To date, there is little comparative effectiveness evidence within hand surgery to guide therapeutic decisions. To draw conclusions on effectiveness through electronic health records, databases must contain clinical information and outcomes relevant to hand surgery interventions, such as patient-related outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.
