Li, Jin; Wang, Limei; Guo, Maozu; Zhang, Ruijie; Dai, Qiguo; Liu, Xiaoyan; Wang, Chunyu; Teng, Zhixia; Xuan, Ping; Zhang, Mingming
2015-01-01
In humans, despite the rapid pace of disease-associated gene discovery, a large proportion of disease-associated genes remain unknown. Many network-based approaches have been used to prioritize disease genes, drawing on networks such as the protein-protein interaction (PPI), KEGG, and gene co-expression networks. Expression quantitative trait loci (eQTLs) have been successfully applied to identify genes associated with several diseases. In this study, we constructed an eQTL-based gene-gene co-regulation network (GGCRN) and used it to mine for disease genes. We adopted the random walk with restart (RWR) algorithm to mine for genes associated with Alzheimer disease. Compared to the Human Protein Reference Database (HPRD) PPI network alone, the integrated HPRD PPI and GGCRN networks provided faster convergence and revealed new disease-related genes. Therefore, applying the RWR algorithm to the integrated PPI and GGCRN networks is an effective method for disease-associated gene mining.
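The random walk with restart used in this record can be sketched in a few lines. The toy network, seed gene, and restart probability below are illustrative assumptions, not data from the study:

```python
import numpy as np

def random_walk_with_restart(W, seeds, r=0.7, tol=1e-10, max_iter=1000):
    """Rank nodes by network proximity to seed (known disease) genes.

    W: (n, n) adjacency matrix of the integrated network.
    seeds: indices of known disease genes.
    r: restart probability.
    """
    # Column-normalize so each column is a transition distribution.
    col_sums = W.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    M = W / col_sums
    # Restart vector: uniform over the seed genes.
    p0 = np.zeros(W.shape[0])
    p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - r) * M @ p + r * p0
        if np.abs(p_next - p).sum() < tol:  # converged
            break
        p = p_next
    return p  # steady-state visiting probabilities; higher = stronger candidate

# Toy 4-gene network in which gene 0 is the only known disease gene.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = random_walk_with_restart(W, seeds=[0])
```

Genes with the highest steady-state probability (other than the seeds themselves) are the prioritized candidates; integrating an additional network such as the GGCRN amounts to changing how W is built.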
Protein annotation from protein interaction networks and Gene Ontology.
Nguyen, Cao D; Gardiner, Katheleen J; Cios, Krzysztof J
2011-10-01
We introduce a novel method for annotating protein function that combines Naïve Bayes and association rules, and takes advantage of the underlying topology in protein interaction networks and the structure of graphs in the Gene Ontology. We apply our method to proteins from the Human Protein Reference Database (HPRD) and show that, in comparison with other approaches, it predicts protein functions with significantly higher recall with no loss of precision. Specifically, it achieves 51% precision and 60% recall versus 45% and 26% for Majority and 24% and 61% for χ²-statistics, respectively. Copyright © 2011 Elsevier Inc. All rights reserved.
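A minimal sketch of the Naïve Bayes component described in this record, assuming hypothetical function labels and hand-set probabilities (the published method additionally integrates association rules and the Gene Ontology graph structure):

```python
import math

def naive_bayes_scores(neighbor_funcs, prior, link_prob, candidate_funcs):
    """Score candidate functions for an unannotated protein from the
    functions observed on its interaction-network neighbors.

    prior[f]: fraction of annotated proteins carrying function f.
    link_prob[f][g]: estimated P(a neighbor carries g | protein carries f).
    """
    scores = {}
    for f in candidate_funcs:
        s = math.log(prior[f])
        for g in neighbor_funcs:
            s += math.log(link_prob[f].get(g, 1e-6))  # smooth unseen pairs
        scores[f] = s
    return scores

# Toy example: two hypothetical functions and three annotated neighbors.
prior = {"kinase": 0.5, "transport": 0.5}
link_prob = {"kinase": {"kinase": 0.8, "transport": 0.2},
             "transport": {"kinase": 0.2, "transport": 0.8}}
scores = naive_bayes_scores(["kinase", "kinase", "transport"],
                            prior, link_prob, ["kinase", "transport"])
```

The "Majority" baseline mentioned in the abstract corresponds to dropping the prior and likelihood estimates and simply voting over neighbor labels; the Bayes version weights neighbors by how informative each label pairing is.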
Hermjakob, Henning; Montecchi-Palazzi, Luisa; Bader, Gary; Wojcik, Jérôme; Salwinski, Lukasz; Ceol, Arnaud; Moore, Susan; Orchard, Sandra; Sarkans, Ugis; von Mering, Christian; Roechert, Bernd; Poux, Sylvain; Jung, Eva; Mersch, Henning; Kersey, Paul; Lappe, Michael; Li, Yixue; Zeng, Rong; Rana, Debashis; Nikolski, Macha; Husi, Holger; Brun, Christine; Shanker, K; Grant, Seth G N; Sander, Chris; Bork, Peer; Zhu, Weimin; Pandey, Akhilesh; Brazma, Alvis; Jacq, Bernard; Vidal, Marc; Sherman, David; Legrain, Pierre; Cesareni, Gianni; Xenarios, Ioannis; Eisenberg, David; Steipe, Boris; Hogue, Chris; Apweiler, Rolf
2004-02-01
A major goal of proteomics is the complete description of the protein interaction network underlying cell physiology. A large number of small scale and, more recently, large-scale experiments have contributed to expanding our understanding of the nature of the interaction network. However, the necessary data integration across experiments is currently hampered by the fragmentation of publicly available protein interaction data, which exists in different formats in databases, on authors' websites or sometimes only in print publications. Here, we propose a community standard data model for the representation and exchange of protein interaction data. This data model has been jointly developed by members of the Proteomics Standards Initiative (PSI), a work group of the Human Proteome Organization (HUPO), and is supported by major protein interaction data providers, in particular the Biomolecular Interaction Network Database (BIND), Cellzome (Heidelberg, Germany), the Database of Interacting Proteins (DIP), Dana Farber Cancer Institute (Boston, MA, USA), the Human Protein Reference Database (HPRD), Hybrigenics (Paris, France), the European Bioinformatics Institute's (EMBL-EBI, Hinxton, UK) IntAct, the Molecular Interactions (MINT, Rome, Italy) database, the Protein-Protein Interaction Database (PPID, Edinburgh, UK) and the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING, EMBL, Heidelberg, Germany).
Puértolas, Jaime; Conesa, María R.; Ballester, Carlos; Dodd, Ian C.
2015-01-01
Patterns of root abscisic acid (ABA) accumulation ([ABA]root), root water potential (Ψroot), and root water uptake (RWU), and their impact on xylem sap ABA concentration ([X-ABA]) were measured under vertical partial root-zone drying (VPRD, upper compartment dry, lower compartment wet) and horizontal partial root-zone drying (HPRD, two lateral compartments: one dry, the other wet) of potato (Solanum tuberosum L.). When water was withheld from the dry compartment for 0–10 d, RWU and Ψroot were similarly lower in the dry compartment when soil volumetric water content dropped below 0.22cm3 cm–3 for both spatial distributions of soil moisture. However, [ABA]root increased in response to decreasing Ψroot in the dry compartment only for HPRD, resulting in much higher ABA accumulation than in VPRD. The position of the sampled roots (~4cm closer to the surface in the dry compartment of VPRD than in HPRD) might account for this difference, since older (upper) roots may accumulate less ABA in response to decreased Ψroot than younger (deeper) roots. This would explain differences in root ABA accumulation patterns under vertical and horizontal soil moisture gradients reported in the literature. In our experiment, these differences in root ABA accumulation did not influence [X-ABA], since the RWU fraction (and thus ABA export to shoots) from the dry compartment dramatically decreased simultaneously with any increase in [ABA]root. Thus, HPRD might better trigger a long-distance ABA signal than VPRD under conditions allowing simultaneous high [ABA]root and relatively high RWU fraction. PMID:25547916
iRefWeb: interactive analysis of consolidated protein interaction data and their supporting evidence
Turner, Brian; Razick, Sabry; Turinsky, Andrei L.; Vlasblom, James; Crowdy, Edgard K.; Cho, Emerson; Morrison, Kyle; Wodak, Shoshana J.
2010-01-01
We present iRefWeb, a web interface to protein interaction data consolidated from 10 public databases: BIND, BioGRID, CORUM, DIP, IntAct, HPRD, MINT, MPact, MPPI and OPHID. iRefWeb enables users to examine aggregated interactions for a protein of interest, and presents various statistical summaries of the data across databases, such as the number of organism-specific interactions, proteins and cited publications. Through links to source databases and supporting evidence, researchers may gauge the reliability of an interaction using simple criteria, such as the detection methods, the scale of the study (high- or low-throughput) or the number of cited publications. Furthermore, iRefWeb compares the information extracted from the same publication by different databases, and offers means to follow-up possible inconsistencies. We provide an overview of the consolidated protein–protein interaction landscape and show how it can be automatically cropped to aid the generation of meaningful organism-specific interactomes. iRefWeb can be accessed at: http://wodaklab.org/iRefWeb. Database URL: http://wodaklab.org/iRefWeb/ PMID:20940177
Do Medicaid Wage Pass-through Payments Increase Nursing Home Staffing?
Feng, Zhanlian; Lee, Yong Suk; Kuo, Sylvia; Intrator, Orna; Foster, Andrew; Mor, Vincent
2010-01-01
Objective To assess the impact of state Medicaid wage pass-through policy on direct-care staffing levels in U.S. nursing homes. Data Sources Online Survey Certification and Reporting (OSCAR) data, and state Medicaid nursing home reimbursement policies over the period 1996–2004. Study Design A fixed-effects panel model with two-step feasible-generalized least squares estimates is used to examine the effect of pass-through adoption on direct-care staff hours per resident day (HPRD) in nursing homes. Data Collection/Extraction Methods A panel data file tracking annual OSCAR surveys per facility over the study period is linked with annual information on state Medicaid wage pass-through and related policies. Principal Findings Among the states introducing wage pass-through over the study period, the policy is associated with between 3.0 and 4.0 percent net increases in certified nurse aide (CNA) HPRD in the years following adoption. No discernable pass-through effect is observed on either registered nurse or licensed practical nurse HPRD. Conclusions State Medicaid wage pass-through programs offer a potentially effective policy tool to boost direct-care CNA staffing in nursing homes, at least in the short term. PMID:20403054
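The within (fixed-effects) estimator at the core of the study design above can be sketched as follows. The facility data are fabricated for illustration, and the actual study adds covariates and a two-step feasible-GLS correction:

```python
import numpy as np

def fixed_effects_slope(y, x, facility_ids):
    """Within estimator: demean the outcome and the policy indicator by
    facility, then fit OLS on the demeaned data. Demeaning removes
    time-invariant facility effects."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    ids = np.asarray(facility_ids)
    yd, xd = y.copy(), x.copy()
    for fid in np.unique(ids):
        m = ids == fid
        yd[m] -= y[m].mean()
        xd[m] -= x[m].mean()
    return (xd @ yd) / (xd @ xd)  # OLS slope on demeaned data

# Two facilities, each observed before and after pass-through adoption;
# the true policy effect on staff hours per resident day is +0.5.
y = [2.0, 2.5, 3.0, 3.5]      # CNA HPRD (hypothetical)
x = [0, 1, 0, 1]              # pass-through in effect?
beta = fixed_effects_slope(y, x, ["A", "A", "B", "B"])
```

Note that facility B's higher baseline staffing never contaminates the estimate: demeaning absorbs it, so the slope reflects only within-facility changes around adoption.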
Atlas - a data warehouse for integrative bioinformatics.
Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire M S; Ling, John; Ouellette, B F Francis
2005-02-21
We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. 
First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: http://bioinformatics.ubc.ca/atlas/
PIPE: a protein–protein interaction passage extraction module for BioCreative challenge
Chu, Chun-Han; Su, Yu-Chen; Chen, Chien Chin; Hsu, Wen-Lian
2016-01-01
Identifying the interactions between proteins mentioned in the biomedical literature is one of the frequently discussed topics of text mining in the life science field. In this article, we propose PIPE, an interaction pattern generation module used in the Collaborative Biocurator Assistant Task at BioCreative V (http://www.biocreative.org/) to capture frequent protein-protein interaction (PPI) patterns within text. We also present an interaction pattern tree (IPT) kernel method that integrates the PPI patterns with a convolution tree kernel (CTK) to extract PPIs. Methods were evaluated on the LLL, IEPA, HPRD50, AIMed and BioInfer corpora using cross-validation, cross-learning and cross-corpus evaluation. Empirical evaluations demonstrate that our method is effective and outperforms several well-known PPI extraction methods. Database URL: PMID:27524807
Subramani, Suresh; Kalpana, Raja; Monickaraj, Pankaj Moses; Natarajan, Jeyakumar
2015-04-01
Knowledge of protein-protein interactions (PPI) and their related pathways is equally important for understanding the biological functions of the living cell. Such information on human proteins is highly desirable for understanding the mechanisms of several diseases such as cancer, diabetes, and Alzheimer's disease. Because much of that information is buried in the biomedical literature, an automated text mining system for visualizing human PPI and pathways is highly desirable. In this paper, we present HPIminer, a text mining system for visualizing human protein interactions and pathways from biomedical literature. HPIminer extracts human PPI information and PPI pairs from the biomedical literature and visualizes their associated interactions, networks and pathways using two curated databases, HPRD and KEGG. To our knowledge, HPIminer is the first system to build interaction networks from the literature as well as from curated databases. Further, interactions mined only from the literature and not reported earlier in databases are highlighted as new. A comparative study with other similar tools shows that the resultant network is more informative and provides additional information on interacting proteins and their associated networks. Copyright © 2015 Elsevier Inc. All rights reserved.
Ethics in Health, Physical Education, Recreation, and Dance. ERIC Digest.
ERIC Educational Resources Information Center
Fain, Gerald S.
This digest addresses the importance to professional practice of ethics and shared values, focusing on the fields of health, physical education, recreation, and dance (HPRD). Practitioners in these fields routinely deal with situations that call upon moral reasoning and the articulation of values such as instruction about personal health, sexual…
Nurse Staffing and Quality of Care of Nursing Home Residents in Korea.
Shin, Juh Hyun; Hyun, Ta Kyung
2015-11-01
To investigate the relationship between nurse staffing and quality of care in nursing homes in Korea. This study used a cross-sectional design to describe the relationship between nurse staffing and 15 quality-of-care outcomes. Independent variables were hours per resident day (HPRD), skill mix, and turnover of each nursing staff, developed with the definitions of the Centers for Medicare & Medicaid Services and the American Health Care Association. Dependent variables were the prevalence of residents who experienced more than one fall in the previous 3 months, aggressive behaviors, depression, cognitive decline, pressure sores, incontinence, prescribed antibiotics because of urinary tract infection, weight loss, dehydration, tube feeding, bed rest, increased activities of daily living, decreased range of motion, use of antidepressants, and use of restraints. Outcome variables were quality indicators from the U.S. Centers for Medicare & Medicaid Services and the 2013 nursing home evaluation manual by the Korean National Health Insurance Service. The effects of registered nurse (RN) HPRD were supported in fall prevention, decreased tube feeding, decreased numbers of residents with deteriorated range of motion, and decreased aggressive behavior. Higher turnover of RNs was related to more residents with dehydration, bed rest, and use of antipsychotic medication. Study results supported RNs' unique contribution to resident outcomes, in comparison with alternative nurse staffing, in fall prevention, decreased use of tube feeding, better range of motion for residents, and decreased aggressive behaviors in nursing homes in Korea. More research is required to confirm the effects of nurse staffing on residents' outcomes in Korea. We found the consistency of RN staffing effects on resident outcomes to be acceptable.
By assessing nurse staffing levels and compositions of nursing staffs, this study contributes to more effective long-term care insurance by reflecting on appropriate policies, and ultimately contributes to the stable settlement of the long-term care insurance system for elders. © 2015 Sigma Theta Tau International.
Mao, Song; Chai, Xiaoqiang; Hu, Yuling; Hou, Xugang; Tang, Yiheng; Bi, Cheng; Li, Xiao
2014-01-01
The mitochondrion plays a central role in diverse biological processes in most eukaryotes, and its dysfunctions are critically involved in a large number of diseases and the aging process. A systematic identification of mitochondrial proteomes and characterization of functional linkages among mitochondrial proteins are fundamental to understanding the mechanisms underlying biological functions and human diseases associated with mitochondria. Here we present MitProNet, a database which provides a comprehensive knowledgebase for the mitochondrial proteome, interactome and human diseases. First, an inventory of mammalian mitochondrial proteins was compiled by widely collecting proteomic datasets, and the proteins were classified by machine learning to achieve a high-confidence list of mitochondrial proteins. The current version of MitProNet covers 1124 high-confidence proteins, and the remainder were further classified as middle- or low-confidence. An organelle-specific network of functional linkages among mitochondrial proteins was then generated by integrating genomic features encoded by a wide range of datasets including genomic context, gene expression profiles, protein-protein interactions, functional similarity and metabolic pathways. The functional-linkage network should be a valuable resource for the study of biological functions of mitochondrial proteins and human mitochondrial diseases. Furthermore, we utilized the network to predict candidate genes for mitochondrial diseases using prioritization algorithms. All proteins, functional linkages and disease candidate genes in MitProNet were annotated according to the information collected from their original sources including GO, GEO, OMIM, KEGG, MIPS, HPRD and so on. MitProNet features a user-friendly graphic visualization interface to present functional analysis of linkage networks. 
As an up-to-date database and analysis platform, MitProNet should be particularly helpful in comprehensive studies of complicated biological mechanisms underlying mitochondrial functions and human mitochondrial diseases. MitProNet is freely accessible at http://bio.scu.edu.cn:8085/MitProNet. PMID:25347823
Database of Geoscientific References Through 2007 for Afghanistan, Version 2
Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L.
2007-01-01
This report describes an accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards, currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,462) and unpublished (n = 174) references compiled through September 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required.
Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L.
2008-01-01
This report includes a document and an accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,489) and unpublished (n = 176) references compiled through calendar year 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required.
Chapter 4 - The LANDFIRE Prototype Project reference database
John F. Caratti
2006-01-01
This chapter describes the data compilation process for the Landscape Fire and Resource Management Planning Tools Prototype Project (LANDFIRE Prototype Project) reference database (LFRDB) and explains the reference data applications for LANDFIRE Prototype maps and models. The reference database formed the foundation for all LANDFIRE tasks. All products generated by the...
Normative Databases for Imaging Instrumentation.
Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray
2015-08-01
To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments.
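At bottom, the normative comparison these devices perform reduces to percentile limits computed over a healthy reference sample. The nearest-rank percentile below is a simplified, hypothetical stand-in for the proprietary, device-specific methods the symposium discussed:

```python
def normative_lower_limit(healthy_values, pct=5.0):
    """Lower normative limit: the value below which only `pct` percent of
    the healthy reference sample falls (simple nearest-rank percentile).
    A patient measurement below this limit would be flagged as outside
    normal limits."""
    vals = sorted(healthy_values)
    k = max(0, int(round(pct / 100.0 * len(vals))) - 1)
    return vals[k]

# Toy reference sample of 100 healthy measurements: the values 1..100.
healthy = list(range(1, 101))
limit = normative_lower_limit(healthy)  # nearest-rank 5th percentile
```

The abstract's point follows directly from this sketch: the limit is only as good as the reference sample, so differences in database size, eligibility criteria, and ethnic make-up shift the flagging threshold between instruments.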
Wright, Judy M; Cottrell, David J; Mir, Ghazala
2014-07-01
To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression. We examined 23 health, social science, religious, and grey literature databases searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database within all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.
[Selected aspects of computer-assisted literature management].
Reiss, M; Reiss, G
1998-01-01
We report our own experiences with a bibliographic database manager. Bibliographic database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager lets the user enter summary information (a record) for articles, book sections, books, dissertations, conference proceedings, and so on. Other features may include the ability to import references from different sources, such as MEDLINE. The word-processing components generate reference lists and bibliographies in a variety of styles directly from a word-processor manuscript. The function and use of the software package EndNote 2 for Windows are described. Its advantages in fulfilling different requirements for citation style and the sort order of reference lists are emphasized.
National Institute of Standards and Technology Data Gateway
SRD 60 NIST ITS-90 Thermocouple Database (Web, free access) Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).
NASA Astrophysics Data System (ADS)
Dziedzic, Adam; Mulawka, Jan
2014-11-01
NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them appropriately. In this submission, descriptions of selected NoSQL databases are presented. Each database is analysed with a primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to how they handle data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited reference support. An intermediate model between graph theory and relational algebra that can address this problem should be created. Finally, a new approach to the problem of inconsistent references in Big Data storage systems is proposed.
Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow
2013-09-01
The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
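The CLSI-style transference validation described in this record can be sketched as a simple acceptance check: the transferred interval passes if no more than roughly 10% of the locally collected reference specimens fall outside it. The 10% cutoff below is a common rule-of-thumb reading of the guideline, stated here as an assumption:

```python
def validate_transferred_interval(values, lower, upper, max_outside_frac=0.10):
    """Check a transferred reference interval against local reference
    specimens: accept if at most `max_outside_frac` of them fall
    outside the transferred limits."""
    outside = sum(1 for v in values if v < lower or v > upper)
    return outside / len(values) <= max_outside_frac

# Toy analyte: 100 reference specimens checked against a transferred
# interval of (40, 60); 5 outliers pass, 20 would fail.
passing = [50] * 95 + [200] * 5
failing = [50] * 80 + [200] * 20
ok_pass = validate_transferred_interval(passing, 40, 60)
ok_fail = validate_transferred_interval(failing, 40, 60)
```

In the study this check is run per analyte and per age/sex partition, which is why some transferred intervals validated on one analyzer but not another.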
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices.
Reference Fluid Thermodynamic and Transport Properties Database (REFPROP)
National Institute of Standards and Technology Data Gateway
SRD 23 NIST Reference Fluid Thermodynamic and Transport Properties Database (REFPROP) (PC database for purchase) NIST 23 contains revised data in a Windows version of the database, including 105 pure fluids and allowing mixtures of up to 20 components. The fluids include the environmentally acceptable HFCs, traditional HCFCs and CFCs, and 'natural' refrigerants like ammonia.
FreeSolv: A database of experimental and calculated hydration free energies, with input files
Mobley, David L.; Guthrie, J. Peter
2014-01-01
This work provides a curated database of experimental and calculated hydration free energies for small neutral molecules in water, along with molecular structures, input files, references, and annotations. We call this the Free Solvation Database, or FreeSolv. Experimental values were taken from prior literature and will continue to be curated, with updated experimental references and data added as they become available. Calculated values are based on alchemical free energy calculations using molecular dynamics simulations. These used the GAFF small molecule force field in TIP3P water with AM1-BCC charges. Values were calculated with the GROMACS simulation package, with full details given in references cited within the database itself. This database builds in part on a previous, 504-molecule database containing similar information. However, additional curation of both experimental data and calculated values has been done here, and the total number of molecules is now up to 643. Additional information is now included in the database, such as SMILES strings, PubChem compound IDs, accurate reference DOIs, and others. One version of the database is provided in the Supporting Information of this article, but as ongoing updates are envisioned, the database is now versioned and hosted online. In addition to providing the database, this work describes its construction process. The database is available free-of-charge via http://www.escholarship.org/uc/item/6sd403pz. PMID:24928188
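A database of paired experimental and calculated values like FreeSolv is typically consumed by computing agreement statistics between the two columns. A minimal sketch follows; the SMILES strings and free energies in the records are invented placeholders, not actual FreeSolv entries.

```python
import math

# Hypothetical FreeSolv-style records:
# (SMILES, experimental dG_hyd, calculated dG_hyd) in kcal/mol.
records = [
    ("CO",       -5.1, -4.6),
    ("c1ccccc1", -0.9, -0.8),
    ("CCO",      -5.0, -4.3),
]

def rmse(pairs):
    """Root-mean-square error between experimental and calculated values."""
    sq = [(expt - calc) ** 2 for _, expt, calc in pairs]
    return math.sqrt(sum(sq) / len(sq))
```

For the placeholder records above the RMSE works out to 0.5 kcal/mol, the kind of summary figure used to benchmark a force field against experiment.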
A Sediment Testing Reference Area Database for the San Francisco Deep Ocean Disposal Site (SF-DODS)
EPA established and maintains a SF-DODS reference area database of previously-collected sediment test data. Several sets of sediment test data have been successfully collected from the SF-DODS reference area.
Pruitt, Kim D.; Tatusova, Tatiana; Maglott, Donna R.
2005-01-01
The National Center for Biotechnology Information (NCBI) Reference Sequence (RefSeq) database (http://www.ncbi.nlm.nih.gov/RefSeq/) provides a non-redundant collection of sequences representing genomic data, transcripts and proteins. Although the goal is to provide a comprehensive dataset representing the complete sequence information for any given species, the database pragmatically includes sequence data that are currently publicly available in the archival databases. The database incorporates data from over 2400 organisms and includes over one million proteins representing significant taxonomic diversity spanning prokaryotes, eukaryotes and viruses. Nucleotide and protein sequences are explicitly linked, and the sequences are linked to other resources including the NCBI Map Viewer and Gene. Sequences are annotated to include coding regions, conserved domains, variation, references, names, database cross-references, and other features using a combined approach of collaboration and other input from the scientific community, automated annotation, propagation from GenBank and curation by NCBI staff. PMID:15608248
A reference system for animal biometrics: application to the northern leopard frog
Petrovska-Delacretaz, D.; Edwards, A.; Chiasson, J.; Chollet, G.; Pilliod, D.S.
2014-01-01
Reference systems and public databases are available for human biometrics, but to our knowledge nothing is available for animal biometrics. This is surprising because animals are not required to give their agreement to be in a database. This paper proposes a reference system and database for the northern leopard frog (Lithobates pipiens). Both are available for reproducible experiments. Results of both open set and closed set experiments are given.
PHYTOTOX: DATABASE DEALING WITH THE EFFECT OF ORGANIC CHEMICALS ON TERRESTRIAL VASCULAR PLANTS
A new database, PHYTOTOX, dealing with the direct effects of exogenously supplied organic chemicals on terrestrial vascular plants is described. The database consists of two files, a Reference File and Effects File. The Reference File is a bibliographic file of published research...
Online Reference Service--How to Begin: A Selected Bibliography.
ERIC Educational Resources Information Center
Shroder, Emelie J., Ed.
1982-01-01
Materials in this bibliography were selected and recommended by members of the Use of Machine-Assisted Reference in Public Libraries Committee, Reference and Adult Services Division, American Library Association. Topics include: financial aspects, equipment and communications considerations, comparing databases and database systems, advertising…
Electronic Reference Library: Silverplatter's Database Networking Solution.
ERIC Educational Resources Information Center
Millea, Megan
Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…
A Circular Dichroism Reference Database for Membrane Proteins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace,B.; Wien, F.; Stone, T.
2006-01-01
Membrane proteins are a major product of most genomes and the target of a large number of current pharmaceuticals, yet little information exists on their structures because of the difficulty of crystallising them; hence for the most part they have been excluded from structural genomics programme targets. Furthermore, even methods such as circular dichroism (CD) spectroscopy which seek to define secondary structure have not been fully exploited because of technical limitations to their interpretation for membrane embedded proteins. Empirical analyses of circular dichroism (CD) spectra are valuable for providing information on secondary structures of proteins. However, the accuracy of the results depends on the appropriateness of the reference databases used in the analyses. Membrane proteins have different spectral characteristics than do soluble proteins as a result of the low dielectric constants of membrane bilayers relative to those of aqueous solutions (Chen & Wallace (1997) Biophys. Chem. 65:65-74). To date, no CD reference database exists exclusively for the analysis of membrane proteins, and hence empirical analyses based on current reference databases derived from soluble proteins are not adequate for accurate analyses of membrane protein secondary structures (Wallace et al (2003) Prot. Sci. 12:875-884). We have therefore created a new reference database of CD spectra of integral membrane proteins whose crystal structures have been determined. To date it contains more than 20 proteins, and spans the range of secondary structures from mostly helical to mostly sheet proteins. This reference database should enable more accurate secondary structure determinations of membrane embedded proteins and will become one of the reference database options in the CD calculation server DICHROWEB (Whitmore & Wallace (2004) NAR 32:W668-673).
Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar
2017-01-01
Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used for evaluating the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems. PMID:29099838
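Ensembling a feature-based kernel with a structural kernel, as described above, typically relies on the fact that a convex combination of valid kernels is itself a valid kernel. A minimal sketch follows; the toy "overlap" kernel below is only a stand-in for the actual DSTK, which is far more involved.

```python
def linear_kernel(x, y):
    """Plain dot product over dense feature vectors."""
    return sum(a * b for a, b in zip(x, y))

def overlap_kernel(x, y):
    """Toy stand-in for a tree/structural kernel: counts shared nonzero slots."""
    return sum(1 for a, b in zip(x, y) if a != 0 and b != 0)

def combined_kernel(x, y, alpha=0.5):
    """Convex combination of two kernels, the usual way a feature-based
    kernel and a tree kernel are ensembled for an SVM."""
    return alpha * linear_kernel(x, y) + (1 - alpha) * overlap_kernel(x, y)
```

An SVM trained with `combined_kernel` sees both the feature-space similarity and the structural similarity, weighted by `alpha`.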
Microcomputer-Based Access to Machine-Readable Numeric Databases.
ERIC Educational Resources Information Center
Wenzel, Patrick
1988-01-01
Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)
Automated processing of shoeprint images based on the Fourier transform for use in forensic science.
de Chazal, Philip; Flynn, John; Reilly, Richard B
2005-03-01
The development of a system for automatically sorting a database of shoeprint images based on the outsole pattern in response to a reference shoeprint image is presented. The database images are sorted so that those from the same pattern group as the reference shoeprint are likely to be at the start of the list. A database of 476 complete shoeprint images belonging to 140 pattern groups was established with each group containing two or more examples. A panel of human observers performed the grouping of the images into pattern categories. Tests of the system using the database showed that the first-ranked database image belongs to the same pattern category as the reference image 65 percent of the time and that a correct match appears within the first 5 percent of the sorted images 87 percent of the time. The system has translational and rotational invariance so that the spatial positioning of the reference shoeprint images does not have to correspond with the spatial positioning of the shoeprint images of the database. The performance of the system for matching partial-prints was also determined.
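The translational invariance mentioned above comes from matching on the Fourier magnitude spectrum: a shift of the input changes only the phases of the transform, not the magnitudes. A minimal 1-D sketch of that property follows (the actual system works on 2-D images with FFTs; the function names here are illustrative).

```python
import cmath

def dft_magnitudes(signal):
    """Magnitude spectrum of a 1-D discrete Fourier transform.

    A circular shift of the input changes only the phases, not the
    magnitudes, which is the translation invariance a Fourier-based
    matcher relies on.
    """
    n = len(signal)
    mags = []
    for k in range(n):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags

def circular_shift(signal, d):
    """Rotate the signal right by d samples."""
    return signal[-d:] + signal[:-d]
```

Comparing the magnitude spectra of a print and its shifted copy yields identical descriptors, so spatial positioning of the reference print need not match the database images.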
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, W.; von Laven, G.M.; Parker, T.
1993-09-01
The Bibliographic Retrieval System (BARS) is a database management system specially designed to retrieve bibliographic references. Two databases are available: (i) the Sandia Shock Compression (SSC) database, which contains over 5700 references to the literature related to stress waves in solids and their applications, and (ii) the Shock Physics Index (SPHINX), which includes over 8000 further references to stress waves in solids, material properties at intermediate and low rates, ballistic and hypervelocity impact, and explosive or shock fabrication methods. There is some overlap in the information in the two databases.
USDA-ARS?s Scientific Manuscript database
The sodium concentration (mg/100g) for 23 of 125 Sentinel Foods were identified in the 2009 CDC Packaged Food Database (PFD) and compared with data in the USDA’s 2013 Standard Reference 26 (SR 26) database. Sentinel Foods are foods and beverages identified by USDA to be monitored as primary indicat...
Document creation, linking, and maintenance system
Claghorn, Ronald [Pasco, WA
2011-02-15
A document creation and citation system designed to maintain a database of reference documents. The content of a selected document may be automatically scanned and indexed by the system. The selected documents may also be manually indexed by a user prior to the upload. The indexed documents may be uploaded and stored within a database for later use. The system allows a user to generate new documents by selecting content within the reference documents stored within the database and inserting the selected content into a new document. The system allows the user to customize and augment the content of the new document. The system also generates citations to the selected content retrieved from the reference documents. The citations may be inserted into the new document in the appropriate location and format, as directed by the user. The new document may be uploaded into the database and included with the other reference documents. The system also maintains the database of reference documents so that when changes are made to a reference document, the author of a document referencing the changed document will be alerted to make appropriate changes to his document. The system also allows visual comparison of documents so that the user may see differences in the text of the documents.
Selecting a database for literature searches in nursing: MEDLINE or CINAHL?
Brazier, H; Begley, C M
1996-10-01
This study compares the usefulness of the MEDLINE and CINAHL databases for students on post-registration nursing courses. We searched for nine topics, using title words only. Identical searches of the two databases retrieved 1162 references, of which 88% were in MEDLINE, 33% in CINAHL and 20% in both sources. The relevance of the references was assessed by student reviewers. The positive predictive value of CINAHL (70%) was higher than that of MEDLINE (54%), but MEDLINE produced more than twice as many relevant references as CINAHL. The sensitivity of MEDLINE was 85% (95% CI 82-88%), and that of CINAHL was 41% (95% CI 37-45%). To assess the ease of obtaining the references, we developed an index of accessibility, based on the holdings of a number of Irish and British libraries. Overall, 47% of relevant references were available in the students' own library, and 64% could be obtained within 48 hours. There was no difference between the two databases overall, but when two topics relating specifically to the organization of nursing were excluded, references found in MEDLINE were significantly more accessible. We recommend that MEDLINE should be regarded as the first choice of bibliographic database for any subject other than one related strictly to the organization of nursing.
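The retrieval measures reported above follow directly from the raw counts of retrieved and relevant references; a small sketch is below. The counts used in the usage check are hypothetical round numbers, not the study's actual data.

```python
def sensitivity(relevant_retrieved, relevant_total):
    """Share of all relevant references a database actually retrieved
    (recall, in information-retrieval terms)."""
    return relevant_retrieved / relevant_total

def positive_predictive_value(relevant_retrieved, retrieved_total):
    """Share of a database's retrieved references judged relevant
    (precision, in information-retrieval terms)."""
    return relevant_retrieved / retrieved_total
```

For example, a database that returned 100 references of which 70 were relevant, out of 100 relevant references in total, would have a PPV of 0.70 and a sensitivity of 0.70.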
Structured Forms Reference Set of Binary Images (SFRS)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access) The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
ERIC Educational Resources Information Center
Harzbecker, Joseph, Jr.
1993-01-01
Describes the National Institutes of Health's GenBank DNA sequence database and how it can be accessed through the Internet. A real reference question, which was answered successfully using the database, is reproduced to illustrate and elaborate on the potential of the Internet for information retrieval. (10 references) (KRN)
ERIC Educational Resources Information Center
Kurhan, Scott H.; Griffing, Elizabeth A.
2011-01-01
Reference services in public libraries are changing dramatically. The Internet, online databases, and shrinking budgets are all making it necessary for non-traditional reference staff to become familiar with online reference tools. Recognizing the need for cross-training, Chesapeake Public Library (CPL) developed a program called the Database…
Planning for CD-ROM in the Reference Department.
ERIC Educational Resources Information Center
Graves, Gail T.; And Others
1987-01-01
Outlines the evaluation criteria used by the reference department at the Williams Library at the University of Mississippi in selecting databases and hardware used in CD-ROM workstations. The factors discussed include database coverage, costs, and security. (CLB)
TRENDS: A flight test relational database user's guide and reference manual
NASA Technical Reports Server (NTRS)
Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.
1994-01-01
This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. This report has been written to assist novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for new and intermediate-skilled users. Examples of the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage of TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a referable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test Database Management System, Jan. 1990, written by the same authors.
Structured Forms Reference Set of Binary Images II (SFRS2)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access) The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
ReprDB and panDB: minimalist databases with maximal microbial representation.
Zhou, Wei; Gay, Nicole; Oh, Julia
2018-01-18
Profiling of shotgun metagenomic samples is hindered by a lack of unified microbial reference genome databases that (i) assemble genomic information from all open access microbial genomes, (ii) have relatively small sizes, and (iii) are compatible with various metagenomic read mapping tools. Moreover, computational tools to rapidly compile and update such databases to accommodate the rapid increase in new reference genomes do not exist. As a result, database-guided analyses often fail to profile a substantial fraction of metagenomic shotgun sequencing reads from complex microbiomes. We report pipelines that efficiently traverse all open access microbial genomes and assemble non-redundant genomic information. The pipelines result in two species-resolution microbial reference databases of relatively small sizes: reprDB, which assembles microbial representative or reference genomes, and panDB, for which we developed a novel iterative alignment algorithm to identify and assemble non-redundant genomic regions in multiple sequenced strains. With the databases, we managed to assign taxonomic labels and genome positions to the majority of metagenomic reads from human skin and gut microbiomes, demonstrating a significant improvement over a previous database-guided analysis on the same datasets. reprDB and panDB leverage the rapid increases in the number of open access microbial genomes to more fully profile metagenomic samples. Additionally, the databases exclude redundant sequence information to avoid inflated storage or memory space and indexing or analyzing time. Finally, the novel iterative alignment algorithm significantly increases efficiency in pan-genome identification and can be useful in comparative genomic analyses.
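In its simplest form, collapsing redundant genomic information reduces to merging overlapping intervals along a coordinate axis. The sketch below shows that core step; panDB's actual iterative alignment across multiple strains is considerably more involved, and the function name is illustrative.

```python
def merge_regions(regions):
    """Collapse overlapping (start, end) intervals into a non-redundant set,
    a simplified stand-in for removing redundant genomic regions shared
    across sequenced strains."""
    merged = []
    for start, end in sorted(regions):
        if merged and start <= merged[-1][1]:
            # overlaps the previous kept region: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Keeping only the merged regions avoids storing and indexing the same sequence twice, which is the space/time saving the abstract describes.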
The land management and operations database (LMOD)
USDA-ARS?s Scientific Manuscript database
This paper presents the design, implementation, deployment, and application of the Land Management and Operations Database (LMOD). LMOD is the single authoritative source for land management and operations reference data within the USDA enterprise data warehouse. LMOD supports modeling appl...
Fire-induced water-repellent soils, an annotated bibliography
Kalendovsky, M.A.; Cannon, S.H.
1997-01-01
The development and nature of water-repellent, or hydrophobic, soils are important issues in evaluating hillslope response to fire. The following annotated bibliography was compiled to consolidate existing published research on the topic. Emphasis was placed on the types, causes, effects and measurement techniques of water repellency, particularly with respect to wildfires and prescribed burns. Each annotation includes a general summary of the respective publication, as well as highlights of interest to this focus. Although some references on the development of water repellency without fires, the chemistry of hydrophobic substances, and remediation of water-repellent conditions are included, coverage of these topics is not intended to be comprehensive. To develop this database, the GeoRef, Agricola, and Water Resources Abstracts databases were searched for appropriate references, and the bibliographies of each reference were then reviewed for additional entries. Additional references will be added to this bibliography as they become available. The annotated bibliography can be accessed on the Web at http://geohazards.cr.usgs.gov/html_files/landslides/ofr97-720/biblio.html. A database consisting of the references and keywords is available through a link at the above address. This database was compiled using EndNote2 plus software by Niles and Associates, and is necessary to search the database.
Code of Federal Regulations, 2011 CFR
2011-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions... Product Safety Information Database. (2) Commission or CPSC means the Consumer Product Safety Commission... Information Database, also referred to as the Database, means the database on the safety of consumer products...
Error and Uncertainty in the Accuracy Assessment of Land Cover Maps
NASA Astrophysics Data System (ADS)
Sarmento, Pedro Alexandre Reis
Traditionally, the accuracy assessment of land cover maps is performed by comparing these maps with a reference database that is intended to represent the "real" land cover, with this comparison reported through thematic accuracy measures derived from confusion matrices. However, these reference databases are themselves a representation of reality and contain errors due to human uncertainty in assigning the land cover class that best characterizes a certain area, which biases the thematic accuracy measures reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that allows the integration of human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impacts that uncertainty may have on the thematic accuracy measures reported to the end users of land cover maps. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we studied the utility of fuzzy set theory, more precisely fuzzy arithmetic, for a better understanding of the human uncertainty associated with the elaboration of reference databases and its impacts on the thematic accuracy measures derived from confusion matrices. For this purpose, linguistic values transformed into fuzzy intervals that address the uncertainty in the elaboration of reference databases were used to compute fuzzy confusion matrices. The proposed methodology is illustrated using a case study in which the accuracy assessment of a land cover map for Continental Portugal derived from Medium Resolution Imaging Spectrometer (MERIS) imagery is made. The obtained results demonstrate that the inclusion of human uncertainty in reference databases provides much more information about the quality of land cover maps when compared with the traditional approach to accuracy assessment of land cover maps.
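The core idea can be illustrated with plain interval arithmetic over a confusion matrix: when each diagonal (correctly classified) count is only known to lie within an interval, overall accuracy becomes an interval rather than a single number. This is a minimal sketch; the fuzzy arithmetic in the dissertation is richer than the two-bound intervals used here.

```python
def interval_accuracy(diagonal, total):
    """Overall accuracy as an interval when each confusion-matrix diagonal
    count is an uncertainty interval (lo, hi) rather than a crisp number.

    diagonal: list of (lo, hi) correct-count intervals, one per class.
    total:    (lo, hi) interval for the total number of samples.
    """
    d_lo = sum(lo for lo, hi in diagonal)
    d_hi = sum(hi for lo, hi in diagonal)
    t_lo, t_hi = total
    # widest defensible bounds: fewest correct over most samples, and vice versa
    return d_lo / t_hi, d_hi / t_lo
```

Reporting the resulting accuracy interval, rather than a point estimate, conveys how much of the assessed quality hinges on the reference database's own uncertainty.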
Construction and comparative evaluation of different activity detection methods in brain FDG-PET.
Buchholz, Hans-Georg; Wenzel, Fabian; Gartenschläger, Martin; Thiele, Frank; Young, Stewart; Reuss, Stefan; Schreckenberger, Mathias
2015-08-18
We constructed and evaluated reference brain FDG-PET databases for use by three software programs (Computer-aided diagnosis for dementia (CAD4D), Statistical Parametric Mapping (SPM) and NEUROSTAT), which allow a user-independent detection of dementia-related hypometabolism in patients' brain FDG-PET. Thirty-seven healthy volunteers were scanned in order to construct brain FDG reference databases, which reflect the normal, age-dependent glucose consumption in the human brain, using each software package. Databases were compared to each other to assess the impact of the different stereotactic normalization algorithms used by each software package. In addition, performance of the new reference databases in the detection of altered glucose consumption in the brains of patients was evaluated by calculating statistical maps of regional hypometabolism in FDG-PET of 20 patients with confirmed Alzheimer's dementia (AD) and of 10 non-AD patients. Extent (hypometabolic volume, referred to as cluster size) and magnitude (peak z-score) of detected hypometabolism were statistically analyzed. Differences between the reference databases built by CAD4D, SPM or NEUROSTAT were observed. Due to the different normalization methods, altered spatial FDG patterns were found. When analyzing patient data with the reference databases created using CAD4D, SPM or NEUROSTAT, similar characteristic clusters of hypometabolism in the same brain regions were found in the AD group with each software package. However, larger z-scores were observed with CAD4D and NEUROSTAT than those reported by SPM. Better concordance with CAD4D and NEUROSTAT was achieved using the spatially normalized images of SPM and an independent z-score calculation. The three software packages identified the peak z-scores in the same brain region in 11 of 20 AD cases, and there was concordance between CAD4D and SPM in 16 AD subjects.
The clinical evaluation of brain FDG-PET of 20 AD patients with either CAD4D-, SPM- or NEUROSTAT-generated databases from an identical reference dataset showed similar patterns of hypometabolism in the brain regions known to be involved in AD. The extent of hypometabolism and peak z-score appeared to be influenced by the calculation method used in each software package rather than by different spatial normalization parameters.
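All three packages ultimately report voxel-wise z-scores of a patient scan against the normal reference database. The core computation can be sketched as below, using flattened voxel lists for brevity; the real pipelines additionally register, smooth, and intensity-normalize the scans before this step.

```python
def z_score_map(patient, ref_mean, ref_sd):
    """Voxel-wise z-scores of a patient scan against a normal database:
    z = (patient - mean) / sd, so strongly negative values flag
    regional hypometabolism."""
    return [(p - m) / s for p, m, s in zip(patient, ref_mean, ref_sd)]
```

A voxel two reference standard deviations below the healthy mean yields z = -2, the kind of value that, clustered over a region, marks dementia-related hypometabolism.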
Dimai, Hans P
2017-11-01
Dual-energy X-ray absorptiometry (DXA) is a two-dimensional imaging technology developed to assess bone mineral density (BMD) of the entire human skeleton and also specifically of skeletal sites known to be most vulnerable to fracture. In order to simplify interpretation of BMD measurement results and allow comparability among different DXA devices, the T-score concept was introduced. In this concept, an individual's BMD is compared with the mean value of a young healthy reference population, with the difference expressed as a standard deviation (SD). Since the early 1990s, the diagnostic categories "normal, osteopenia, and osteoporosis", as recommended by a WHO working group, have been based on this concept. Thus, DXA is still the globally accepted "gold-standard" method for the noninvasive diagnosis of osteoporosis. Another score obtained from DXA measurement, termed the Z-score, describes the number of SDs by which the BMD of an individual differs from the mean value expected for age and sex. Although not intended for the diagnosis of osteoporosis in adults, it nevertheless provides information about an individual's fracture risk compared to peers. DXA measurement can either be used as a stand-alone means of assessing an individual's fracture risk, or be incorporated into one of the available fracture risk assessment tools such as FRAX® or Garvan, thus improving the predictive power of such tools. The issue of which reference databases should be used by DXA-device manufacturers for T-score reference standards has recently been addressed by an expert group, who recommended use of the National Health and Nutrition Examination Survey III (NHANES III) database for the hip reference standard but manufacturers' own databases for the lumbar spine. Furthermore, for men it is recommended to use female reference databases for calculation of the T-score and male reference databases for calculation of the Z-score. Copyright © 2017 Elsevier Inc. All rights reserved.
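Both scores described above are standardizations of the same measured BMD; they differ only in the reference population used. A minimal sketch (the BMD values in the usage check are invented illustrations, not clinical data):

```python
def t_score(bmd, young_mean, young_sd):
    """SDs from the young healthy reference mean. WHO diagnostic
    categories: T >= -1 normal, -2.5 < T < -1 osteopenia,
    T <= -2.5 osteoporosis."""
    return (bmd - young_mean) / young_sd

def z_score(bmd, age_sex_mean, age_sex_sd):
    """SDs from the mean expected for the patient's own age and sex."""
    return (bmd - age_sex_mean) / age_sex_sd
```

For example, a measured BMD 0.30 g/cm² below a young reference mean with SD 0.12 gives T = -2.5, exactly at the WHO osteoporosis threshold.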
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine "dbEngine" which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel, etc.) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the Mitochondrion) on the WWW.
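The AND/OR term combination in capability (i) can be sketched in a few lines; dbEngine itself was a CGI program, and the records and function below are purely illustrative.

```python
def search(records, terms, mode="AND"):
    """Return records whose text contains all (AND) or any (OR) of the
    terms, mirroring how a simple search engine combines query terms."""
    terms = [t.lower() for t in terms]
    combine = all if mode == "AND" else any
    return [r for r in records if combine(t in r.lower() for t in terms)]
```

An AND query narrows the result set to records matching every term, while the same terms under OR return every record matching at least one.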
AN ASSESSMENT OF GROUND TRUTH VARIABILITY USING A "VIRTUAL FIELD REFERENCE DATABASE"
A "Virtual Field Reference Database (VFRDB)" was developed from field measurement data that included location and time, physical attributes, flora inventory, and digital imagery (camera) documentation for 1,011 sites in the Neuse River basin, North Carolina. The sampling f...
RefPrimeCouch—a reference gene primer CouchApp
Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus
2013-01-01
To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html PMID:24368831
Optics survivability support, volume 2
NASA Astrophysics Data System (ADS)
Wild, N.; Simpson, T.; Busdeker, A.; Doft, F.
1993-01-01
This volume of the Optics Survivability Support Final Report contains plots of all the data contained in the computerized Optical Glasses Database. All of these plots are accessible through the Database, but are included here as a convenient reference. The first three pages summarize the types of glass included with a description of the radiation source, test date, and the original data reference. This information is included in the database as a macro button labeled 'LLNL DATABASE'. Following this summary is an Abbe chart showing which glasses are included and where they lie as a function of ν_d and n_d. This chart is also callable through the database as a macro button labeled 'ABBEC'.
Wright, T.L.; Takahashi, T.J.
1998-01-01
The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available to upload from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.
Nuclear Science References Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B., E-mail: pritychenko@bnl.gov; Běták, E.; Singh, B.
2014-06-15
The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).
APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES
An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...
USDA National Nutrient Database for Standard Reference, release 28
USDA-ARS?s Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference, Release 28 contains data for nearly 8,800 food items for up to 150 food components. SR28 replaces the previous release, SR27, originally issued in August 2014. Data in SR28 supersede values in the printed handbooks and previous electronic...
Thermodynamics of Enzyme-Catalyzed Reactions Database
National Institute of Standards and Technology Data Gateway
SRD 74 Thermodynamics of Enzyme-Catalyzed Reactions Database (Web, free access) The Thermodynamics of Enzyme-Catalyzed Reactions Database contains thermodynamic data on enzyme-catalyzed reactions that have been recently published in the Journal of Physical and Chemical Reference Data (JPCRD). For each reaction the following information is provided: the reference for the data, the reaction studied, the name of the enzyme used and its Enzyme Commission number, the method of measurement, the data and an evaluation thereof.
A Relational Database System for Student Use.
ERIC Educational Resources Information Center
Fertuck, Len
1982-01-01
Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)
Code of Federal Regulations, 2012 CFR
2012-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
Code of Federal Regulations, 2011 CFR
2011-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
Code of Federal Regulations, 2010 CFR
2010-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
Code of Federal Regulations, 2014 CFR
2014-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
Code of Federal Regulations, 2013 CFR
2013-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
An Online Resource for Flight Test Safety Planning
NASA Technical Reports Server (NTRS)
Lewis, Greg
2007-01-01
A viewgraph presentation describing an online database for flight test safety techniques is shown. The topics include: 1) Goal; 2) Test Hazard Analyses; 3) Online Database Background; 4) Data Gathering; 5) NTPS Role; 6) Organizations; 7) Hazard Titles; 8) FAR Paragraphs; 9) Maneuver Name; 10) Identified Hazard; 11) Matured Hazard Titles; 12) Loss of Control Causes; 13) Mitigations; 14) Database Now Open to the Public; 15) FAR Reference Search; 16) Record Field Search; 17) Keyword Search; and 18) Results of FAR Reference Search.
EPA's Toxicity Reference Database (ToxRefDB) was developed by the National Center for Computational Toxicology, in partnership with EPA's Office of Pesticide Programs, to store data derived from in vivo animal toxicity studies [www.epa.gov/ncct/toxrefdb/]. The initial build of To...
USDA National Nutrient Database for Standard Reference, Release 25
USDA-ARS?s Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference, Release 25(SR25)contains data for over 8,100 food items for up to 146 food components. It replaces the previous release, SR24, issued in September 2011. Data in SR25 supersede values in the printed handbooks and previous electronic releas...
USDA National Nutrient Database for Standard Reference, Release 24
USDA-ARS?s Scientific Manuscript database
The USDA Nutrient Database for Standard Reference, Release 24 contains data for over 7,900 food items for up to 146 food components. It replaces the previous release, SR23, issued in September 2010. Data in SR24 supersede values in the printed Handbooks and previous electronic releases of the databa...
Design of a diagnostic encyclopaedia using AIDA.
van Ginneken, A M; Smeulders, A W; Jansen, W
1987-01-01
Diagnostic Encyclopaedia Workstation (DEW) is the name of a digital encyclopaedia constructed to contain reference knowledge with respect to the pathology of the ovary. Comparing DEW with the common sources of reference knowledge (i.e. books) leads to the following advantages of DEW: it contains more verbal knowledge, pictures and case histories, and it offers information adjusted to the needs of the user. Based on an analysis of the structure of this reference knowledge we have chosen AIDA to develop a relational database and we use a video-disc player to contain the pictorial part of the database. The system consists of a database input version and a read-only run version. The design of the database input version is discussed. Reference knowledge for ovary pathology requires 1-3 Mbytes of memory. At present 15% of this amount is available. The design of the run version is based on an analysis of which information must necessarily be specified to the system by the user to access a desired item of information. Finally, the use of AIDA in constructing DEW is evaluated.
GMOMETHODS: the European Union database of reference methods for GMO analysis.
Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre
2012-01-01
In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis, we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with a European Union legislative act. The web application provides search capabilities to retrieve primer and probe sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis.
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
Tsybovskii, I S; Veremeichik, V M; Kotova, S A; Kritskaya, S V; Evmenenko, S A; Udina, I G
2017-02-01
For the Republic of Belarus, the development of a forensic reference database based on 18 autosomal microsatellites (STR) is described, using a population dataset (N = 1040), a "familial" genotypic dataset (N = 2550) obtained from paternity testing casework, and a dataset of genotypes from a criminal registration database (N = 8756). The population samples studied consist of 80% ethnic Belarusians and 20% individuals of other nationality or of mixed origin (by questionnaire data). The sample includes genotypes of 12,346 inhabitants of the Republic of Belarus from 118 regional samples typed at 18 autosomal microsatellites: 16 tetranucleotide STR (D2S1338, TPOX, D3S1358, CSF1PO, D5S818, D8S1179, D7S820, THO1, vWA, D13S317, D16S539, D18S51, D19S433, D21S11, F13B, and FGA) and two pentanucleotide STR (Penta D and Penta E). The samples studied are in Hardy–Weinberg equilibrium according to the distribution of genotypes at the 18 STR. Significant differences were not detected between discrete populations or between samples from the various historical ethnographic regions of the Republic of Belarus (Western and Eastern Polesie, Podneprovye, Ponemanye, Poozerye, and Center), which indicates the absence of prominent genetic differentiation. Statistically significant differences between the studied genotypic datasets were likewise not detected, which made it possible to combine the datasets and treat the total sample as a unified forensic reference database for the 18 "criminalistic" STR loci. Differences between the reference database of the Republic of Belarus and Russians and Ukrainians in the distribution of the autosomal STR were likewise not detected, consistent with the close genetic relationship of the three Eastern Slavic nations mediated by common origin and intense mutual migrations. Significant differences at separate STR loci were observed between the reference database of the Republic of Belarus and populations of Southern and Western Slavs.
The necessity of using an original reference database to support forensic expertise practice in the Republic of Belarus was demonstrated.
Diet History Questionnaire: Database Revision History
The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.
A Partnership for Public Health: USDA Branded Food Products Database
USDA-ARS?s Scientific Manuscript database
The importance of comprehensive food composition databases is more critical than ever in helping to address global food security. The USDA National Nutrient Database for Standard Reference is the “gold standard” for food composition databases. The presentation will include new developments in stren...
Analysis of the NMI01 marker for a population database of cannabis seeds.
Shirley, Nicholas; Allgeier, Lindsay; Lanier, Tommy; Coyle, Heather Miller
2013-01-01
We have analyzed the distribution of genotypes at a single hexanucleotide short tandem repeat (STR) locus in a Cannabis sativa seed database along with seed-packaging information. This STR locus is defined by the polymerase chain reaction amplification primers CS1F and CS1R and is referred to as NMI01 (for National Marijuana Initiative) in our study. The population database consists of seed seizures of two categories: seed samples from labeled and unlabeled packages regarding seed bank source. Of a population database of 93 processed seeds including 12 labeled Cannabis varieties, the observed genotypes generated from single seeds exhibited between one and three peaks (potentially six alleles if in homozygous state). The total number of observed genotypes was 54, making this marker highly specific and highly individualizing, even among seeds of common lineage. Cluster analysis associated many, but not all, of the handwritten labeled seed varieties tested to date, as well as the National Park seizure, with our known reference database containing Mr. Nice Seedbank and Sensi Seeds commercially packaged reference samples. © 2012 American Academy of Forensic Sciences.
The EpiSLI Database: A Publicly Available Database on Speech and Language
ERIC Educational Resources Information Center
Tomblin, J. Bruce
2010-01-01
Purpose: This article describes a database that was created in the process of conducting a large-scale epidemiologic study of specific language impairment (SLI). As such, this database will be referred to as the EpiSLI database. Children with SLI have unexpected and unexplained difficulties learning and using spoken language. Although there is no…
USDA-ARS?s Scientific Manuscript database
USDA National Nutrient Database for Standard Reference Dataset for What We Eat In America, NHANES (Survey-SR) provides the nutrient data for assessing dietary intakes from the national survey What We Eat In America, National Health and Nutrition Examination Survey (WWEIA, NHANES). The current versi...
Gradishar, William; Johnson, KariAnne; Brown, Krystal; Mundt, Erin; Manley, Susan
2017-07-01
There is a growing move to consult public databases following receipt of a genetic test result from a clinical laboratory; however, the well-documented limitations of these databases call into question how often clinicians will encounter discordant variant classifications that may introduce uncertainty into patient management. Here, we evaluate discordance in BRCA1 and BRCA2 variant classifications between a single commercial testing laboratory and a public database commonly consulted in clinical practice. BRCA1 and BRCA2 variant classifications were obtained from ClinVar and compared with the classifications from a reference laboratory. Full concordance and discordance were determined for variants whose ClinVar entries were of the same pathogenicity (pathogenic, benign, or uncertain). Variants with conflicting ClinVar classifications were considered partially concordant if ≥1 of the listed classifications agreed with the reference laboratory classification. Four thousand two hundred and fifty unique BRCA1 and BRCA2 variants were available for analysis. Overall, 73.2% of classifications were fully concordant and 12.3% were partially concordant. The remaining 14.5% of variants had discordant classifications, most of which had a definitive classification (pathogenic or benign) from the reference laboratory compared with an uncertain classification in ClinVar (14.0%). Here, we show that discrepant classifications between a public database and single reference laboratory potentially account for 26.7% of variants in BRCA1 and BRCA2. The time and expertise required of clinicians to research these discordant classifications call into question the practicality of checking all test results against a database and suggest that discordant classifications should be interpreted with these limitations in mind. With the increasing use of clinical genetic testing for hereditary cancer risk, accurate variant classification is vital to ensuring appropriate medical management.
There is a growing move to consult public databases following receipt of a genetic test result from a clinical laboratory; however, we show that up to 26.7% of variants in BRCA1 and BRCA2 have discordant classifications between ClinVar and a reference laboratory. The findings presented in this paper serve as a note of caution regarding the utility of database consultation. © AlphaMed Press 2017.
Pardo-Hernandez, Hector; Urrútia, Gerard; Barajas-Nava, Leticia A; Buitrago-Garcia, Diana; Garzón, Julieth Vanessa; Martínez-Zapata, María José; Bonfill, Xavier
2017-06-13
Systematic reviews provide the best evidence on the effect of health care interventions. They rely on comprehensive access to the available scientific literature. Electronic search strategies alone may not suffice, requiring the implementation of a handsearching approach. We have developed a database to provide an Internet-based platform from which handsearching activities can be coordinated, including a procedure to streamline the submission of these references into CENTRAL, the Cochrane Collaboration Central Register of Controlled Trials. We developed a database and performed a descriptive analysis. Through brainstorming and discussion among stakeholders involved in handsearching projects, we designed a database that met the needs identified as necessary to ensure the viability of handsearching activities. Three handsearching teams pilot tested the proposed database. Once the final version of the database was approved, we proceeded to train the staff involved in handsearching. The proposed database is called BADERI (Database of Iberoamerican Clinical Trials and Journals, by its initials in Spanish). BADERI was officially launched in October 2015, and it can be accessed at www.baderi.com/login.php free of cost. BADERI has an administration subsection, from which the roles of users are managed; a references subsection, where information associated with identified controlled clinical trials (CCTs) can be entered; a reports subsection, from which reports can be generated to track and analyse the results of handsearching activities; and a built-in free text search engine. BADERI allows all references to be exported in ProCite files that can be directly uploaded into CENTRAL. To date, 6284 references to CCTs have been uploaded to BADERI and sent to CENTRAL. The identified CCTs were published in a total of 420 journals related to 46 medical specialties. The year of publication ranged between 1957 and 2016.
BADERI allows the efficient management of handsearching activities across different countries and institutions. References to all CCTs available in BADERI can be readily submitted to CENTRAL for their potential inclusion in systematic reviews.
Vitamin and Mineral Supplement Fact Sheets
Zhang, Peifen; Dreher, Kate; Karthikeyan, A.; Chi, Anjo; Pujar, Anuradha; Caspi, Ron; Karp, Peter; Kirkup, Vanessa; Latendresse, Mario; Lee, Cynthia; Mueller, Lukas A.; Muller, Robert; Rhee, Seung Yon
2010-01-01
Metabolic networks reconstructed from sequenced genomes or transcriptomes can help visualize and analyze large-scale experimental data, predict metabolic phenotypes, discover enzymes, engineer metabolic pathways, and study metabolic pathway evolution. We developed a general approach for reconstructing metabolic pathway complements of plant genomes. Two new reference databases were created and added to the core of the infrastructure: a comprehensive, all-plant reference pathway database, PlantCyc, and a reference enzyme sequence database, RESD, for annotating metabolic functions of protein sequences. PlantCyc (version 3.0) includes 714 metabolic pathways and 2,619 reactions from over 300 species. RESD (version 1.0) contains 14,187 literature-supported enzyme sequences from across all kingdoms. We used RESD, PlantCyc, and MetaCyc (an all-species reference metabolic pathway database), in conjunction with the pathway prediction software Pathway Tools, to reconstruct a metabolic pathway database, PoplarCyc, from the recently sequenced genome of Populus trichocarpa. PoplarCyc (version 1.0) contains 321 pathways with 1,807 assigned enzymes. Comparing PoplarCyc (version 1.0) with AraCyc (version 6.0, Arabidopsis [Arabidopsis thaliana]) showed comparable numbers of pathways distributed across all domains of metabolism in both databases, except for a higher number of AraCyc pathways in secondary metabolism and a 1.5-fold increase in carbohydrate metabolic enzymes in PoplarCyc. Here, we introduce these new resources and demonstrate the feasibility of using them to identify candidate enzymes for specific pathways and to analyze metabolite profiling data through concrete examples. These resources can be searched by text or BLAST, browsed, and downloaded from our project Web site (http://plantcyc.org). PMID:20522724
Murugaiyan, J; Ahrholdt, J; Kowbel, V; Roesler, U
2012-05-01
The possibility of using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for rapid identification of pathogenic and non-pathogenic species of the genus Prototheca has been recently demonstrated. A unique reference database of MALDI-TOF MS profiles for type and reference strains of the six generally accepted Prototheca species was established. The database quality was reinforced after the acquisition of 27 spectra for selected Prototheca strains, with three biological and technical replicates for each of 18 type and reference strains of Prototheca and four strains of Chlorella. This provides reproducible and unique spectra covering a wide m/z range (2000-20 000 Da) for each of the strains used in the present study. The reproducibility of the spectra was further confirmed by employing composite correlation index calculation and main spectra library (MSP) dendrogram creation, available with MALDI Biotyper software. The MSP dendrograms obtained were comparable with the 18S rDNA sequence-based dendrograms. These reference spectra were successfully added to the Bruker database, and the efficiency of identification was evaluated by cross-reference-based and unknown Prototheca identification. It is proposed that the addition of further strains would reinforce the reference spectra library for rapid identification of Prototheca strains to the genus and species/genotype level. © 2011 The Authors. Clinical Microbiology and Infection © 2011 European Society of Clinical Microbiology and Infectious Diseases.
Annual Review of Database Development: 1992.
ERIC Educational Resources Information Center
Basch, Reva
1992-01-01
Reviews recent trends in databases and online systems. Topics discussed include new access points for established databases; acquisitions, consolidations, and competition between vendors; European coverage; international services; online reference materials, including telephone directories; political and legal materials and public records;…
Abou El Hassan, Mohamed; Stoianov, Alexandra; Araújo, Petra A T; Sadeghieh, Tara; Chan, Man Khun; Chen, Yunqi; Randell, Edward; Nieuwesteeg, Michelle; Adeli, Khosrow
2015-11-01
The CALIPER program has established a comprehensive database of pediatric reference intervals using largely the Abbott ARCHITECT biochemical assays. To expand clinical application of CALIPER reference standards, the present study is aimed at transferring CALIPER reference intervals from the Abbott ARCHITECT to Beckman Coulter AU assays. Transference of CALIPER reference intervals was performed based on the CLSI guidelines C28-A3 and EP9-A2. The new reference intervals were directly verified using up to 100 reference samples from the healthy CALIPER cohort. We found a strong correlation between Abbott ARCHITECT and Beckman Coulter AU biochemical assays, allowing the transference of the vast majority (94%; 30 out of 32 assays) of CALIPER reference intervals previously established using Abbott assays. Transferred reference intervals were, in general, similar to previously published CALIPER reference intervals, with some exceptions. Most of the transferred reference intervals were sex-specific and were verified using healthy reference samples from the CALIPER biobank based on CLSI criteria. It is important to note that the comparisons performed between the Abbott and Beckman Coulter assays make no assumptions as to assay accuracy or which system is more correct/accurate. The majority of CALIPER reference intervals were transferrable to Beckman Coulter AU assays, allowing the establishment of a new database of pediatric reference intervals. This further expands the utility of the CALIPER database to clinical laboratories using the AU assays; however, each laboratory should validate these intervals for their analytical platform and local population as recommended by the CLSI. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Cameron, M; Perry, J; Middleton, J R; Chaffer, M; Lewis, J; Keefe, G P
2018-01-01
This study evaluated MALDI-TOF mass spectrometry and a custom reference spectra expanded database for the identification of bovine-associated coagulase-negative staphylococci (CNS). A total of 861 CNS isolates were used in the study, covering 21 different CNS species. The majority of the isolates were previously identified by rpoB gene sequencing (n = 804) and the remainder were identified by sequencing of hsp60 (n = 56) and tuf (n = 1). The genotypic identification was considered the gold standard identification. Using a direct transfer protocol and the existing commercial database, MALDI-TOF mass spectrometry showed a typeability of 96.5% (831/861) and an accuracy of 99.2% (824/831). Using a custom reference spectra expanded database, which included an additional 13 in-house created reference spectra, isolates were identified by MALDI-TOF mass spectrometry with 99.2% (854/861) typeability and 99.4% (849/854) accuracy. Overall, MALDI-TOF mass spectrometry using the direct transfer method was shown to be a highly reliable tool for the identification of bovine-associated CNS. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Detection and Rectification of Distorted Fingerprints.
Si, Xuanbin; Feng, Jianjiang; Zhou, Jie; Luo, Yuxuan
2015-03-01
Elastic distortion of fingerprints is one of the major causes of false non-matches. While this problem affects all fingerprint recognition applications, it is especially dangerous in negative recognition applications, such as watchlist and deduplication applications. In such applications, malicious users may purposely distort their fingerprints to evade identification. In this paper, we propose novel algorithms to detect and rectify skin distortion based on a single fingerprint image. Distortion detection is viewed as a two-class classification problem, for which the registered ridge orientation map and period map of a fingerprint are used as the feature vector and an SVM classifier is trained to perform the classification task. Distortion rectification (or equivalently distortion field estimation) is viewed as a regression problem, where the input is a distorted fingerprint and the output is the distortion field. To solve this problem, a database (called the reference database) of various distorted reference fingerprints and corresponding distortion fields is built in the offline stage; then, in the online stage, the nearest neighbor of the input fingerprint is found in the reference database and the corresponding distortion field is used to transform the input fingerprint into a normal one. Promising results have been obtained on three databases containing many distorted fingerprints, namely FVC2004 DB1, the Tsinghua Distorted Fingerprint database, and the NIST SD27 latent fingerprint database.
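The offline/online nearest-neighbor idea in this abstract can be sketched as follows. This is a toy illustration: real systems store registered orientation/period maps and dense 2-D displacement fields, whereas the feature vectors and "field" labels here are invented.

```python
# Illustrative sketch of the offline/online nearest-neighbor rectification:
# offline, a reference database pairs feature vectors with distortion fields;
# online, the nearest reference decides which field to apply to the input.
import math

# Offline stage: (feature_vector, distortion_field) pairs. The vectors stand
# in for registered ridge orientation and period maps.
reference_db = [
    ([0.1, 0.2, 0.3], "field_A"),
    ([0.8, 0.7, 0.9], "field_B"),
    ([0.4, 0.5, 0.4], "field_C"),
]

def nearest_distortion_field(features):
    """Online stage: return the distortion field of the nearest-neighbor
    reference fingerprint (Euclidean distance in feature space)."""
    def dist(ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref[0], features)))
    return min(reference_db, key=dist)[1]

print(nearest_distortion_field([0.15, 0.25, 0.35]))  # nearest to field_A
```

The selected field would then be inverted and applied to warp the distorted input back toward a normal fingerprint before matching.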
Negative Effects of Learning Spreadsheet Management on Learning Database Management
ERIC Educational Resources Information Center
Vágner, Anikó; Zsakó, László
2015-01-01
A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…
Techniques of Photometry and Astrometry with APASS, Gaia, and Pan-STARRS Results (Abstract)
NASA Astrophysics Data System (ADS)
Green, W.
2017-12-01
(Abstract only) The databases with the APASS DR9, Gaia DR1, and Pan-STARRS 3pi DR1 data releases are publicly available for use. There is a bit of data-mining involved to download and manage these reference stars. This paper discusses the use of these databases to acquire accurate photometric references as well as techniques for improving results. Images are prepared in the usual way: zero, dark, flat-fields, and WCS solutions with Astrometry.net. Images are then processed with SExtractor to produce an ASCII table of identifying photometric features. The database manages photometric catalogs and images converted to ASCII tables. Scripts convert the files into SQL and assimilate them into database tables. Using SQL techniques, each image star is merged with reference data to produce publishable results. The VYSOS has over 13,000 images of the ONC5 field to process, with roughly 100 total fields in the campaign. This paper provides an overview of this daunting task.
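The SQL merge step described above can be sketched with an in-memory SQLite database. The table and column names, coordinates, and the simple box-match tolerance are invented for illustration; a production pipeline would use a proper cone search.

```python
# Hypothetical sketch of the SQL step: image detections (e.g., a SExtractor
# table loaded into SQL) are joined with reference stars by position so each
# detection picks up a catalog magnitude.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE image_stars (ra REAL, dec REAL, inst_mag REAL);
CREATE TABLE ref_stars   (ra REAL, dec REAL, cat_mag REAL);
INSERT INTO image_stars VALUES (83.8221, -5.3911, -8.21), (83.9002, -5.4100, -7.95);
INSERT INTO ref_stars   VALUES (83.8220, -5.3912, 11.30), (83.9003, -5.4101, 12.10);
""")

# Match each detection to the reference star within a small positional
# tolerance (a crude stand-in for a cone search).
rows = con.execute("""
SELECT i.ra, i.dec, i.inst_mag, r.cat_mag
FROM image_stars i
JOIN ref_stars r
  ON ABS(i.ra - r.ra) < 0.001 AND ABS(i.dec - r.dec) < 0.001
""").fetchall()

# A per-image zero point is then the mean catalog-minus-instrumental offset.
zero_point = sum(cat - inst for _, _, inst, cat in rows) / len(rows)
print(len(rows), round(zero_point, 2))
```

Applying the zero point to every instrumental magnitude in the image yields the calibrated, publishable photometry the abstract refers to.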
Decelle, Johan; Romac, Sarah; Stern, Rowena F; Bendif, El Mahdi; Zingone, Adriana; Audic, Stéphane; Guiry, Michael D; Guillou, Laure; Tessier, Désiré; Le Gall, Florence; Gourvil, Priscillia; Dos Santos, Adriana L; Probert, Ian; Vaulot, Daniel; de Vargas, Colomban; Christen, Richard
2015-11-01
Photosynthetic eukaryotes have a critical role as the main producers in most ecosystems of the biosphere. The ongoing environmental metabarcoding revolution opens the perspective for holistic ecosystems biological studies of these organisms, in particular the unicellular microalgae that often lack distinctive morphological characters and have complex life cycles. To interpret environmental sequences, metabarcoding necessarily relies on taxonomically curated databases containing reference sequences of the targeted gene (or barcode) from identified organisms. To date, no such reference framework exists for photosynthetic eukaryotes. In this study, we built the PhytoREF database that contains 6490 plastidial 16S rDNA reference sequences that originate from a large diversity of eukaryotes representing all known major photosynthetic lineages. We compiled 3333 amplicon sequences available from public databases and 879 sequences extracted from plastidial genomes, and generated 411 novel sequences from cultured marine microalgal strains belonging to different eukaryotic lineages. A total of 1867 environmental Sanger 16S rDNA sequences were also included in the database. Stringent quality filtering and a phylogeny-based taxonomic classification were applied for each 16S rDNA sequence. The database mainly focuses on marine microalgae, but sequences from land plants (representing half of the PhytoREF sequences) and freshwater taxa were also included to broaden the applicability of PhytoREF to different aquatic and terrestrial habitats. PhytoREF, accessible via a web interface (http://phytoref.fr), is a new resource in molecular ecology to foster the discovery, assessment and monitoring of the diversity of photosynthetic eukaryotes using high-throughput sequencing. © 2015 John Wiley & Sons Ltd.
National Vulnerability Database (NVD)
National Institute of Standards and Technology Data Gateway
National Vulnerability Database (NVD) (Web, free access) NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Code of Federal Regulations, 2012 CFR
2012-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Evaluation of consumer drug information databases.
Choi, J A; Sullivan, J; Pankaskie, M; Brufsky, J
1999-01-01
To evaluate prescription drug information contained in six consumer drug information databases available on CD-ROM, and to make health care professionals aware of the information provided, so that they may appropriately recommend these databases for use by their patients. Observational study of six consumer drug information databases: The Corner Drug Store, Home Medical Advisor, Mayo Clinic Family Pharmacist, Medical Drug Reference, Mosby's Medical Encyclopedia, and PharmAssist. Not applicable. Not applicable. Information on 20 frequently prescribed drugs was evaluated in each database. The databases were ranked using a point-scale system based on primary and secondary assessment criteria. For the primary assessment, 20 categories of information based on those included in the 1998 edition of the USP DI Volume II, Advice for the Patient: Drug Information in Lay Language were evaluated for each of the 20 drugs, and each database could earn up to 400 points (for example, 1 point was awarded if the database mentioned a drug's mechanism of action). For the secondary assessment, the inclusion of 8 additional features that could enhance the utility of the databases was evaluated (for example, 1 point was awarded if the database contained a picture of the drug), and each database could earn up to 8 points. The results of the primary and secondary assessments, listed in order of highest to lowest number of points earned, are as follows: Primary assessment--Mayo Clinic Family Pharmacist (379), Medical Drug Reference (251), PharmAssist (176), Home Medical Advisor (113.5), The Corner Drug Store (98), and Mosby's Medical Encyclopedia (18.5); secondary assessment--The Mayo Clinic Family Pharmacist (8), The Corner Drug Store (5), Mosby's Medical Encyclopedia (5), Home Medical Advisor (4), Medical Drug Reference (4), and PharmAssist (3). 
The Mayo Clinic Family Pharmacist was the most accurate and complete source of prescription drug information based on the USP DI Volume II and would be an appropriate database for health care professionals to recommend to patients.
Plant Reactome: a resource for plant pathways and comparative analysis
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D.; Wu, Guanming; Fabregat, Antonio; Elser, Justin L.; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D.; Ware, Doreen; Jaiswal, Pankaj
2017-01-01
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. PMID:27799469
USDA-ARS?s Scientific Manuscript database
Many species of wild game and fish that are legal to hunt or catch do not have nutrition information in the USDA National Nutrient Database for Standard Reference (SR). Among those species that lack nutrition information are brook trout. The research team worked with the Nutrient Data Laboratory wit...
The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested public to search and download thousands of animal toxicity testing results for hundreds of chemicals that were previously found only in paper documents. Currently, there are 474 chemicals in ToxRefDB, primarily the data rich pesticide active ingredients, but the number will continue to expand.
Online Database Coverage of Pharmaceutical Journals.
ERIC Educational Resources Information Center
Snow, Bonnie
1984-01-01
Describes compilation of data concerning pharmaceutical journal coverage in online databases which aid information providers in collection development and database selection. Methodology, results (a core collection, overlap, timeliness, geographic scope), and implications are discussed. Eight references and a list of 337 journals indexed online in…
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
16 CFR § 1102.6 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.
ERIC Educational Resources Information Center
Pieska, K. A. O.
1986-01-01
Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Data-Based Decision Making in Education: Challenges and Opportunities
ERIC Educational Resources Information Center
Schildkamp, Kim, Ed.; Lai, Mei Kuin, Ed.; Earl, Lorna, Ed.
2013-01-01
In a context where schools are held more and more accountable for the education they provide, data-based decision making has become increasingly important. This book brings together scholars from several countries to examine data-based decision making. Data-based decision making in this book refers to making decisions based on a broad range of…
Gorman, Sean K; Slavik, Richard S; Lam, Stefanie
2012-01-01
Background: Clinicians commonly rely on tertiary drug information references to guide drug dosages for patients who are receiving continuous renal replacement therapy (CRRT). It is unknown whether the dosage recommendations in these frequently used references reflect the most current evidence. Objective: To determine the presence and accuracy of drug dosage recommendations for patients undergoing CRRT in 4 drug information references. Methods: Medications commonly prescribed during CRRT were identified from an institutional medication inventory database, and evidence-based dosage recommendations for this setting were developed from the primary and secondary literature. The American Hospital Formulary System—Drug Information (AHFS–DI), Micromedex 2.0 (specifically the DRUGDEX and Martindale databases), and the 5th edition of Drug Prescribing in Renal Failure (DPRF5) were assessed for the presence of drug dosage recommendations in the CRRT setting. The dosage recommendations in these tertiary references were compared with the recommendations derived from the primary and secondary literature to determine concordance. Results: Evidence-based drug dosage recommendations were developed for 33 medications administered in patients undergoing CRRT. The AHFS–DI provided no dosage recommendations specific to CRRT, whereas the DPRF5 provided recommendations for 27 (82%) of the medications and the Micromedex 2.0 application for 20 (61%) (13 [39%] in the DRUGDEX database and 16 [48%] in the Martindale database, with 9 medications covered by both). The dosage recommendations were in concordance with evidence-based recommendations for 12 (92%) of the 13 medications in the DRUGDEX database, 26 (96%) of the 27 in the DPRF5, and all 16 (100%) of those in the Martindale database. Conclusions: One prominent tertiary drug information resource provided no drug dosage recommendations for patients undergoing CRRT. 
However, 2 of the databases in an Internet-based medical information application and the latest edition of a renal specialty drug information resource provided recommendations for a majority of the medications investigated. Most dosage recommendations were similar to those derived from the primary and secondary literature. The most recent edition of the DPRF is the preferred source of information when prescribing dosage regimens for patients receiving CRRT. PMID:22783029
PMAG: Relational Database Definition
NASA Astrophysics Data System (ADS)
Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.
2002-12-01
The Scripps center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses around the general workflow that results in the determination of typical paleo-magnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition it belongs to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data becomes available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download their needed data. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses. 
They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the site location (latitude, longitude, elevation), geography (continent, country, region), geological setting (lithospheric plate or block, tectonic setting), geological age (age range, timescale name, stratigraphic position) and materials (rock type, classification, alteration state). Each data point and method description is also related to its peer-reviewed reference [citation ID] as archived in the EarthRef Reference Database (ERR). This guarantees direct traceability all the way to its original source, where the user can find the bibliography of each PMAG reference along with every abstract, data table, technical note and/or appendix that are available in digital form and that can be downloaded as PDF/JPEG images and Microsoft Excel/Word data files. This may help scientists and teachers in performing their research since they have easy access to all the scientific data. It also allows for checking potential errors during the digitization process. Please visit the PMAG website at http://earthref.org/PMAG/ for more information.
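The specimen-to-site traceability described above maps naturally onto relational tables. The schema below is our minimal illustration, not the actual PMAG/EarthRef schema: raw measurements live at the specimen level, and site-level values are derived by aggregation so they can be recomputed whenever new data arrive.

```python
# Minimal sketch of the site -> sample -> specimen hierarchy, with raw
# measurements stored only at the specimen level (names are illustrative).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site     (site_id INTEGER PRIMARY KEY, lat REAL, lon REAL);
CREATE TABLE sample   (sample_id INTEGER PRIMARY KEY,
                       site_id INTEGER REFERENCES site(site_id));
CREATE TABLE specimen (specimen_id INTEGER PRIMARY KEY,
                       sample_id INTEGER REFERENCES sample(sample_id),
                       inclination REAL);  -- raw measurement lives here
INSERT INTO site VALUES (1, 32.87, -117.25);
INSERT INTO sample VALUES (10, 1), (11, 1);
INSERT INTO specimen VALUES (100, 10, 55.0), (101, 10, 57.0), (102, 11, 56.0);
""")

# A site mean is a derived product, not a stored value: recomputing it when
# new specimen data arrive is a single aggregate over the relations.
(mean_inc,) = con.execute("""
SELECT AVG(sp.inclination)
FROM specimen sp JOIN sample sa ON sp.sample_id = sa.sample_id
WHERE sa.site_id = 1
""").fetchone()
print(round(mean_inc, 1))
```

Keeping derivations as queries rather than stored columns is what guarantees the original-versus-derived distinction and the traceability the abstract emphasizes.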
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
Data tables for the 1993 National Transit Database section 15 report year
DOT National Transportation Integrated Search
1994-12-01
The Data Tables For the 1993 National Transit Database Section 15 Report Year is one of three publications comprising the 1993 Annual Report. Also referred to as the National Transit Database Reporting System, it is administered by the Federal Transi...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS FISH AND... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
Mikaelyan, Aram; Köhler, Tim; Lampert, Niclas; Rohland, Jeffrey; Boga, Hamadi; Meuser, Katja; Brune, Andreas
2015-10-01
Recent developments in sequencing technology have given rise to a large number of studies that assess bacterial diversity and community structure in termite and cockroach guts based on large amplicon libraries of 16S rRNA genes. Although these studies have revealed important ecological and evolutionary patterns in the gut microbiota, classification of the short sequence reads is limited by the taxonomic depth and resolution of the reference databases used in the respective studies. Here, we present a curated reference database for accurate taxonomic analysis of the bacterial gut microbiota of dictyopteran insects. The Dictyopteran gut microbiota reference Database (DictDb) is based on the Silva database but was significantly expanded by the addition of clones from 11 mostly unexplored termite and cockroach groups, which increased the inventory of bacterial sequences from dictyopteran guts by 26%. The taxonomic depth and resolution of DictDb was significantly improved by a general revision of the taxonomic guide tree for all important lineages, including a detailed phylogenetic analysis of the Treponema and Alistipes complexes, the Fibrobacteres, and the TG3 phylum. The performance of this first documented version of DictDb (v. 3.0) using the revised taxonomic guide tree in the classification of short-read libraries obtained from termites and cockroaches was highly superior to that of the current Silva and RDP databases. DictDb uses an informative nomenclature that is consistent with the literature also for clades of uncultured bacteria and provides an invaluable tool for anyone exploring the gut community structure of termites and cockroaches. Copyright © 2015 Elsevier GmbH. All rights reserved.
Chernolit TM: Chernobyl Bibliographic Search System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, F., Jr.; Kennedy, R.A.; Mahaffey, J.A.
1992-03-02
The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.
Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service
NASA Astrophysics Data System (ADS)
Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.
2016-06-01
Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there are few, if any, information available about the quality of these data sources. In order to easily provide these data quality information we presented the architecture of a web service for the automation of quality control of spatial datasets running over a Web Processing Service (WPS). For quality procedures that require an external reference dataset, like positional accuracy or completeness, the architecture permits using a reference dataset. However, this reference dataset is not ageless, since it suffers the natural time degradation inherent to geospatial features. In order to mitigate this problem we propose the Time Degradation & Updating Module which intends to apply assessed data as a tool to maintain the reference database updated. The main idea is to utilize datasets sent to the quality evaluation service as a source of 'candidate data elements' for the updating of the reference database. After the evaluation, if some elements of a candidate dataset reach a determined quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe that the outcomes can be applied in the search of a full-automatic on-line quality evaluation platform.
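The core promotion rule of the proposed module can be sketched as below. All names and the scalar "quality" score are our simplification of the paper's idea; in practice the quality level would come from the WPS evaluation itself and would be per-procedure (positional accuracy, completeness, and so on).

```python
# Sketch of the Time Degradation & Updating Module's core rule: elements of
# an evaluated candidate dataset are promoted into the reference database
# only if their assessed quality reaches a threshold.

def update_reference(reference, candidates, quality, threshold=0.95):
    """Merge candidate elements whose evaluated quality passes the threshold;
    a passing candidate replaces (or adds) the element with the same key."""
    updated = dict(reference)
    for key, value in candidates.items():
        if quality.get(key, 0.0) >= threshold:
            updated[key] = value
    return updated

reference  = {"road_1": "geom_v1", "road_2": "geom_v1"}
candidates = {"road_1": "geom_v2", "road_3": "geom_new"}
quality    = {"road_1": 0.97, "road_3": 0.80}
print(update_reference(reference, candidates, quality))
```

Here only road_1 is promoted: road_3 falls below the threshold and the reference keeps its current state, which is exactly the conservative behavior the abstract argues for.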
The Joint Committee for Traceability in Laboratory Medicine (JCTLM) - its history and operation.
Jones, Graham R D; Jackson, Craig
2016-01-30
The Joint Committee for Traceability in Laboratory Medicine (JCTLM) was formed to bring together the sciences of metrology, laboratory medicine and laboratory quality management. The aim of this collaboration is to support worldwide comparability and equivalence of measurement results in clinical laboratories for the purpose of improving healthcare. The JCTLM has its origins in the activities of international metrology treaty organizations, professional societies and federations devoted to improving measurement quality in physical, chemical and medical sciences. The three founding organizations, the International Committee for Weights and Measures (CIPM), the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and the International Laboratory Accreditation Cooperation (ILAC) are the leaders of this activity. The main service of the JCTLM is a web-based database with a list of reference materials, reference methods and reference measurement services meeting appropriate international standards. This database allows manufacturers to select references for assay traceability and provides support for suppliers of these services. As of mid 2015 the database lists 295 reference materials for 162 analytes, 170 reference measurement procedures for 79 analytes and 130 reference measurement services for 39 analytes. There remains a need for the development and implementation of metrological traceability in many areas of laboratory medicine and the JCTLM will continue to promote these activities into the future. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
McMillin, Naomi; Allen, Jerry; Erickson, Gary; Campbell, Jim; Mann, Mike; Kubiatko, Paul; Yingling, David; Mason, Charlie
1999-01-01
The objective was to experimentally evaluate the longitudinal and lateral-directional stability and control characteristics of the Reference H configuration at supersonic and transonic speeds. A series of conventional and alternate control devices were also evaluated at supersonic and transonic speeds. A database on the conventional and alternate control devices was to be created for use in the HSR program.
USDA Branded Food Products Database, Release 2
USDA-ARS?s Scientific Manuscript database
The USDA Branded Food Products Database is the ongoing result of a Public-Private Partnership (PPP), whose goal is to enhance public health and the sharing of open data by complementing the USDA National Nutrient Database for Standard Reference (SR) with nutrient composition of branded foods and pri...
Intelligent communication assistant for databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobson, G.; Shaked, V.; Rowley, S.
1983-01-01
An intelligent communication assistant for databases, called FRED (front end for databases), is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. FRED is a second-generation natural language front end for databases and aims to solve two critical interface problems between end users and databases: connectivity and communication. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as directions for future work. 10 references.
NASA Technical Reports Server (NTRS)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos
1992-01-01
The goal of the Universal Index System (UIS) is to provide an easy-to-use and reliable interface to many different kinds of database systems. The impetus for this system was to simplify database index management for users, thus encouraging the use of indexes. As the idea grew into an actual system design, the concept of increasing database performance by facilitating the use of time-saving techniques at the user level became a theme for the project. This final report describes the design and implementation of UIS and its language interfaces. It also includes the user's guide and the reference manual.
Parkhill, Anne; Hill, Kelvin
2009-03-01
The Australian National Stroke Foundation appointed a search specialist to find the best available evidence for the second edition of its Clinical Guidelines for Acute Stroke Management. To identify the relative effectiveness of differing evidence sources for the guideline update. We searched and reviewed references from five valid evidence sources for clinical and economic questions: (i) electronic databases; (ii) reference lists of relevant systematic reviews, guidelines, and/or primary studies; (iii) table of contents of a number of key journals for the last 6 months; (iv) internet/grey literature; and (v) experts. Reference sources were recorded, quantified, and analysed. In the clinical portion of the guidelines document, there was a greater use of previous knowledge and sources other than electronic databases for evidence, while there was a greater use of electronic databases for the economic section. The results confirmed that searchers need to be aware of the context and range of sources for evidence searches. For best available evidence, searchers cannot rely solely on electronic databases and need to encompass many different media and sources.
Bloch-Mouillet, E
1999-01-01
This paper aims to provide technical and practical advice about finding references using Current Contents on disk (Macintosh or PC) or via the Internet (FTP). Seven editions are published each week. They are all organized in the same way and have the same search engine. The Life Sciences edition, extensively used in medical research, is presented here in detail, as an example. This methodological note explains, in French, how to use this reference database. It is designed to be a practical guide for browsing and searching the database, and particularly for creating search profiles adapted to the needs of researchers.
Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.
ERIC Educational Resources Information Center
Conlon, Sumali Pin-Ngern; And Others
1993-01-01
Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)
Evaluation of Database Coverage: A Comparison of Two Methodologies.
ERIC Educational Resources Information Center
Tenopir, Carol
1982-01-01
Describes experiment which compared two techniques used for evaluating and comparing database coverage of a subject area, e.g., "bibliography" and "subject profile." Differences in time, cost, and results achieved are compared by applying techniques to field of volcanology using two databases, Geological Reference File and GeoArchive. Twenty…
Comprehensive Thematic T-matrix Reference Database: a 2013-2014 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadezhda T.; Khlebtsov, Nikolai G.; Wriedt, Thomas; Videen, Gorden
2014-01-01
This paper is the sixth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists several earlier publications not incorporated in the original database and previous updates.
Development and applications of the EntomopathogenID MLSA database for use in agricultural systems
USDA-ARS?s Scientific Manuscript database
The current study reports the development and application of a publicly accessible, curated database of Hypocrealean entomopathogenic fungi sequence data. The goal was to provide a platform for users to easily access sequence data from reference strains. The database can be used to accurately identi...
ERIC Educational Resources Information Center
Cotton, P. L.
1987-01-01
Defines two types of online databases: source, referring to those intended to be complete in themselves, whether full-text or abstracts; and bibliographic, meaning those that are not complete. Predictions are made about the future growth rate of these two types of databases, as well as full-text versus abstract databases. (EM)
[A systematic evaluation of application of the web-based cancer database].
Huang, Tingting; Liu, Jialin; Li, Yong; Zhang, Rui
2013-10-01
To support the theory and practice of web-based cancer database development in China, we performed a systematic evaluation to assess the state of development of web-based cancer databases at home and abroad. We carried out computer-based retrieval of the Ovid-MEDLINE, SpringerLink, EBSCOhost, Wiley Online Library and CNKI databases for papers published between Jan. 1995 and Dec. 2011, and retrieved the references of these papers by hand. We selected qualified papers according to pre-established inclusion and exclusion criteria, and carried out information extraction and analysis. Searching the online databases yielded 1244 papers, and checking the reference lists yielded another 19 articles. Thirty-one articles met the inclusion and exclusion criteria; we extracted the evidence from them and assessed it. Analysis of this evidence showed that the U.S.A. ranked first, accounting for 26% of the databases. Thirty-nine percent of the web-based cancer databases are comprehensive cancer databases. Among single-cancer databases, breast cancer and prostatic cancer rank highest, each accounting for 10%. Thirty-two percent of the cancer databases are associated with cancer gene information. As for the technical applications, MySQL and PHP were the most widely applied, at nearly 23% each.
NASA Astrophysics Data System's New Data
NASA Astrophysics Data System (ADS)
Eichhorn, G.; Accomazzi, A.; Demleitner, M.; Grant, C. S.; Kurtz, M. J.; Murray, S. S.
2000-05-01
The NASA Astrophysics Data System has greatly increased its data holdings. The Physics database now contains almost 900,000 references and the Astronomy database almost 550,000 references. The Instrumentation database has almost 600,000 references. The scanned articles in the ADS Article Service are increasing in number continuously. Almost 1 million pages have been scanned so far. Recently the abstracts books from the Lunar and Planetary Science Conference have been scanned and put on-line. The Monthly Notices of the Royal Astronomical Society are currently being scanned back to Volume 1. This is the last major journal to be completely scanned and on-line. In cooperation with a conservation project of the Harvard libraries, microfilms of historical observatory literature are currently being scanned. This will provide access to an important part of the historical literature. The ADS can be accessed at: http://adswww.harvard.edu This project is funded by NASA under grant NCC5-189.
Plant Reactome: a resource for plant pathways and comparative analysis.
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D; Wu, Guanming; Fabregat, Antonio; Elser, Justin L; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D; Ware, Doreen; Jaiswal, Pankaj
2017-01-04
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J
2015-04-01
To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75% with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate the combination of which methods and databases are used. Copyright © 2015 Elsevier Inc. All rights reserved.
Linder, Suzanne K.; Kamath, Geetanjali R.; Pratt, Gregory F.; Saraykar, Smita S.; Volk, Robert J.
2015-01-01
Objective To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a healthcare decision-making instrument commonly used in clinical settings. Study Design & Setting We searched the literature using two methods: 1) keyword searching using variations of “control preferences scale” and 2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Results Keyword searches in bibliographic databases yielded high average precision (90%), but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45–54%), but precision ranged from 35–75% with Scopus being the most precise. Conclusion Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time and resources should dictate the combination of which methods and databases are used. PMID:25554521
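The precision and sensitivity figures reported in this abstract follow the standard definitions: precision is the fraction of retrieved records that are relevant, and sensitivity (recall) is the fraction of relevant records that are retrieved. A minimal sketch with invented retrieval results (the numbers below are illustrative, not the study's data):

```python
def precision_sensitivity(retrieved, relevant):
    """Precision = |retrieved & relevant| / |retrieved|;
    sensitivity (recall) = |retrieved & relevant| / |relevant|."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if len(retrieved) else 0.0
    sensitivity = hits / len(relevant) if len(relevant) else 0.0
    return precision, sensitivity

# Hypothetical example: 10 records retrieved, 9 of them relevant,
# out of 50 relevant records in total -> 90% precision, 18% sensitivity,
# mirroring how a precise-but-insensitive keyword search behaves.
p, s = precision_sensitivity(retrieved=list(range(10)),
                             relevant=list(range(1, 51)))
```

This makes the abstract's trade-off concrete: a database keyword search can score 90% precision while still missing most of the relevant literature.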
ERIC Educational Resources Information Center
Mason, Marilyn Gell
1998-01-01
Describes developments in Online Computer Library Center (OCLC) electronic reference services. Presents a background on networked cataloging and the initial implementation of reference services by OCLC. Discusses the introduction of OCLC FirstSearch service, which today offers access to over 65 databases, future developments in integrated…
Burnham, Judy F
2006-03-08
The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.
Burnham, Judy F
2006-01-01
The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs. PMID:16522216
Parson, W; Gusmão, L; Hares, D R; Irwin, J A; Mayr, W R; Morling, N; Pokorak, E; Prinz, M; Salas, A; Schneider, P M; Parsons, T J
2014-11-01
The DNA Commission of the International Society of Forensic Genetics (ISFG) regularly publishes guidelines and recommendations concerning the application of DNA polymorphisms to the question of human identification. Previous recommendations published in 2000 addressed the analysis and interpretation of mitochondrial DNA (mtDNA) in forensic casework. While the foundations set forth in the earlier recommendations still apply, new approaches to the quality control, alignment and nomenclature of mitochondrial sequences, as well as the establishment of mtDNA reference population databases, have been developed. Here, we describe these developments and discuss their application to both mtDNA casework and mtDNA reference population databasing applications. While the generation of mtDNA for forensic casework has always been guided by specific standards, it is now well-established that data of the same quality are required for the mtDNA reference population data used to assess the statistical weight of the evidence. As a result, we introduce guidelines regarding sequence generation, as well as quality control measures based on the known worldwide mtDNA phylogeny, that can be applied to ensure the highest quality population data possible. For both casework and reference population databasing applications, the alignment and nomenclature of haplotypes is revised here and the phylogenetic alignment proffered as acceptable standard. In addition, the interpretation of heteroplasmy in the forensic context is updated, and the utility of alignment-free database searches for unbiased probability estimates is highlighted. Finally, we discuss statistical issues and define minimal standards for mtDNA database searches. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
LactMed: Drugs and Lactation Database
Ranking the whole MEDLINE database according to a large training set using text indexing.
Suomela, Brian P; Andrade, Miguel A
2005-03-24
The MEDLINE database contains over 12 million references to scientific literature, with about 3/4 of recent articles including an abstract of the publication. Retrieval of entries using queries with keywords is useful for human users that need to obtain small selections. However, particular analyses of the literature or database developments may need the complete ranking of all the references in the MEDLINE database as to their relevance to a topic of interest. This report describes a method that does this ranking using the differences in word content between MEDLINE entries related to a topic and the whole of MEDLINE, in a computational time appropriate for an article search query engine. We tested the capabilities of our system to retrieve MEDLINE references which are relevant to the subject of stem cells. We took advantage of the existing annotation of references with terms from the MeSH hierarchical vocabulary (Medical Subject Headings, developed at the National Library of Medicine). A training set of 81,416 references was constructed by selecting entries annotated with the MeSH term stem cells or some child in its subtree. Frequencies of all nouns, verbs, and adjectives in the training set were computed and the ratios of word frequencies in the training set to those in the entire MEDLINE were used to score references. Self-consistency of the algorithm, benchmarked with a test set containing the training set and an equal number of references randomly selected from MEDLINE, was better using nouns (79%) than adjectives (73%) or verbs (70%). The evaluation of the system with 6,923 references not used for training, containing 204 articles relevant to stem cells according to a human expert, indicated a recall of 65% for a precision of 65%. This strategy appears to be useful for predicting the relevance of MEDLINE references to a given concept. The method is simple and can be used with any user-defined training set.
Choice of the part of speech of the words used for classification has important effects on performance. Lists of words, scripts, and additional information are available from the web address http://www.ogic.ca/projects/ks2004/.
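The scoring scheme described above (ranking each reference by the ratio of word frequencies in a topic-specific training set to frequencies in the whole corpus) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the whitespace tokenizer, the smoothing constant, and the mean-ratio document score are all assumptions.

```python
from collections import Counter

def build_scores(training_docs, corpus_docs, smoothing=1.0):
    """Score each training-set word by the ratio of its relative frequency
    in the training set to its relative frequency in the whole corpus."""
    train_counts = Counter(w for d in training_docs for w in d.lower().split())
    corpus_counts = Counter(w for d in corpus_docs for w in d.lower().split())
    train_total = sum(train_counts.values())
    corpus_total = sum(corpus_counts.values())
    return {
        w: (train_counts[w] / train_total) /
           ((corpus_counts[w] + smoothing) / corpus_total)
        for w in train_counts
    }

def rank(docs, scores):
    """Rank documents by the mean ratio score of their known words."""
    def doc_score(doc):
        vals = [scores[w] for w in doc.lower().split() if w in scores]
        return sum(vals) / len(vals) if vals else 0.0
    return sorted(docs, key=doc_score, reverse=True)
```

Under this scheme a document sharing vocabulary with the training set ranks above one that does not, which is the behavior the abstract's stem-cell benchmark measures.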
Comparison of tiered formularies and reference pricing policies: a systematic review
Morgan, Steve; Hanley, Gillian; Greyson, Devon
2009-01-01
Objectives To synthesize methodologically comparable evidence from the published literature regarding the outcomes of tiered formularies and therapeutic reference pricing of prescription drugs. Methods We searched the following electronic databases: ABI/Inform, CINAHL, Clinical Evidence, Digital Dissertations & Theses, Evidence-Based Medicine Reviews (which incorporates ACP Journal Club, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Cochrane Methodology Register, Database of Abstracts of Reviews of Effectiveness, Health Technology Assessments and NHS Economic Evaluation Database), EconLit, EMBASE, International Pharmaceutical Abstracts, MEDLINE, PAIS International and PAIS Archive, and the Web of Science. We also searched the reference lists of relevant articles and several grey literature sources. We sought English-language studies published from 1986 to 2007 that examined the effects of either therapeutic reference pricing or tiered formularies, reported on outcomes relevant to patient care and cost-effectiveness, and employed quantitative study designs that included concurrent or historical comparison groups. We abstracted and assessed potentially appropriate articles using a modified version of the data abstraction form developed by the Cochrane Effective Practice and Organisation of Care Group. Results From an initial list of 2964 citations, 12 citations (representing 11 studies) were deemed eligible for inclusion in our review: 3 studies (reported in 4 articles) of reference pricing and 8 studies of tiered formularies. The introduction of reference pricing was associated with reduced plan spending, switching to preferred medicines, reduced overall drug utilization and short-term increases in the use of physician services. Reference pricing was not associated with adverse health impacts. 
The introduction of tiered formularies was associated with reduced plan expenditures, greater patient costs and increased rates of non-compliance with prescribed drug therapy. From the data available, we were unable to examine the hypothesis that tiered formulary policies result in greater use of physician services and potentially worse health outcomes. Conclusion The available evidence does not clearly differentiate between reference pricing and tiered formularies in terms of policy outcomes. Reference pricing appears to have a slight evidentiary advantage, given that patients’ health outcomes under tiered formularies have not been well studied and that tiered formularies are associated with increased rates of medicine discontinuation. PMID:21603047
The Consolidated Human Activity Database — Master Version (CHAD-Master) Technical Memorandum
This technical memorandum contains information about the Consolidated Human Activity Database -- Master version, including CHAD contents, inventory of variables: Questionnaire files and Event files, CHAD codes, and references.
Reference Material Kydex®-100 Test Data Message for Flammability Testing
NASA Technical Reports Server (NTRS)
Engel, Carl D.; Richardson, Erin; Davis, Eddie
2003-01-01
The Marshall Space Flight Center (MSFC) Materials and Processes Technical Information System (MAPTIS) database contains, as an engineering resource, a large amount of material test data carefully obtained and recorded over a number of years. Flammability test data obtained using Test 1 of NASA-STD-6001 is a significant component of this database. NASA-STD-6001 recommends that Kydex 100 be used as a reference material for testing certification and for comparison between test facilities in the round-robin certification testing that occurs every 2 years. As a result of these regular activities, a large volume of test data is recorded within the MAPTIS database. The activity described in this technical report was undertaken to mine the database, recover flammability (Test 1) Kydex 100 data, and review the lessons learned from analysis of these data.
USGS launches online database: Lichens in National Parks
Bennett, Jim
2005-01-01
If you are interested in lichens and National Parks, now you can query a lichen database that combines these two elements. Using pull-down menus you can: search by park, specifying either species list or the references used for that area; search by species (a report will show the parks in which species are found); and search by reference codes, which are available from the first query. The reference code search allows you to obtain the complete citation for each lichen species listed in a National Park.The result pages from these queries can be printed directly from the web browser, or can be copied and pasted into a word processor.
The Israel DNA database--the establishment of a rapid, semi-automated analysis system.
Zamir, Ashira; Dell'Ariccia-Carmon, Aviva; Zaken, Neomi; Oz, Carla
2012-03-01
The Israel Police DNA database, also known as IPDIS (Israel Police DNA Index System), has been operating since February 2007. During that time more than 135,000 reference samples have been uploaded and more than 2000 hits reported. We have developed an effective semi-automated system that includes two automated punchers, three liquid handler robots and four genetic analyzers. An in-house LIMS program enables full tracking of every sample through the entire process of registration, pre-PCR handling, analysis of profiles, uploading to the database, hit reports and ultimately storage. The LIMS is also responsible for the future tracking of samples and their profiles to be expunged from the database according to the Israeli DNA legislation. The database is administered by an in-house developed software program, where reference and evidentiary profiles are uploaded, stored, searched and matched. The DNA database has proven to be an effective investigative tool which has gained the confidence of the Israeli public and on which the Israel National Police force has grown to rely. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
PoMaMo--a comprehensive database for potato genome data.
Meyer, Svenja; Nagel, Axel; Gebhardt, Christiane
2005-01-01
A database for potato genome data (PoMaMo, Potato Maps and More) was established. The database contains molecular maps of all twelve potato chromosomes with about 1000 mapped elements, sequence data, putative gene functions, results from BLAST analysis, SNP and InDel information from different diploid and tetraploid potato genotypes, publication references, links to other public databases like GenBank (http://www.ncbi.nlm.nih.gov/) or SGN (Solanaceae Genomics Network, http://www.sgn.cornell.edu/), etc. Flexible search and data visualization interfaces enable easy access to the data via internet (https://gabi.rzpd.de/PoMaMo.html). The Java servlet tool YAMB (Yet Another Map Browser) was designed to interactively display chromosomal maps. Maps can be zoomed in and out, and detailed information about mapped elements can be obtained by clicking on an element of interest. The GreenCards interface allows a text-based data search by marker-, sequence- or genotype name, by sequence accession number, gene function, BLAST Hit or publication reference. The PoMaMo database is a comprehensive database for different potato genome data, and to date the only database containing SNP and InDel data from diploid and tetraploid potato genotypes.
PoMaMo—a comprehensive database for potato genome data
Meyer, Svenja; Nagel, Axel; Gebhardt, Christiane
2005-01-01
A database for potato genome data (PoMaMo, Potato Maps and More) was established. The database contains molecular maps of all twelve potato chromosomes with about 1000 mapped elements, sequence data, putative gene functions, results from BLAST analysis, SNP and InDel information from different diploid and tetraploid potato genotypes, publication references, links to other public databases like GenBank (http://www.ncbi.nlm.nih.gov/) or SGN (Solanaceae Genomics Network, http://www.sgn.cornell.edu/), etc. Flexible search and data visualization interfaces enable easy access to the data via internet (https://gabi.rzpd.de/PoMaMo.html). The Java servlet tool YAMB (Yet Another Map Browser) was designed to interactively display chromosomal maps. Maps can be zoomed in and out, and detailed information about mapped elements can be obtained by clicking on an element of interest. The GreenCards interface allows a text-based data search by marker-, sequence- or genotype name, by sequence accession number, gene function, BLAST Hit or publication reference. The PoMaMo database is a comprehensive database for different potato genome data, and to date the only database containing SNP and InDel data from diploid and tetraploid potato genotypes. PMID:15608284
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected through B-ISDNs, and describes an example of networking museums on the basis of the proposed database system. The proposed system introduces a new concept, the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a retrieval request to the retrieval manager located nearest to the terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment. The generated retrieval parameters are then executed to select the most suitable data transfer path on the network, so that the chosen combination of parameters suits the distributed multimedia database system.
NASA Astrophysics Data System (ADS)
Wtv Gmbh
This new CD-ROM is a reference database covering almost twenty years of non-military scientific/technical meetings and publications sponsored by the NATO Science Committee. It contains full references (with keywords and/or abstracts) to more than 30,000 contributions from scientists all over the world, published in more than 1,000 volumes. With the easy-to-follow menu options of the retrieval software, access to the data is simple and fast. Updates are planned on a yearly basis.
Sources and performance criteria of uncertainty of reference measurement procedures.
Mosca, Andrea; Paleari, Renata
2018-05-29
This article focuses on the reference measurement procedures (RMPs) available today for the determination of various analytes in laboratory medicine, and on possible tools to evaluate their performance in the laboratories currently using them. A brief review of the RMPs was performed by investigating the Joint Committee for Traceability in Laboratory Medicine (JCTLM) database. To evaluate their performance, we examined the organization of three international ring trials, i.e. those regularly performed by the IFCC External Quality Assessment scheme for Reference Laboratories in Laboratory Medicine (RELA), by the Centers for Disease Control and Prevention (CDC) cholesterol network and by the IFCC Network for HbA1c. Several RMPs are available through the JCTLM database, but the best way to collect information about the RMPs and their uncertainties is to look at the reference measurement service (RMS) providers. This part of the database, and the background on how to be listed in it, is very helpful for assessing expanded measurement uncertainty (MU) and the performance of RMPs in general. Worldwide, 17 RMS providers are listed in the database, and for most measurands more than one provider is able to run the relevant RMPs, with similar expanded uncertainties. As an example, for α-amylase, four providers offer their services with MU between 1.6% and 3.3%. In other cases (such as total cholesterol), the MU may span a broader range, i.e. from 0.02% to 3.6%. With regard to performance evaluation, the approach is often heterogeneous, and it is difficult to compare the performance of laboratories running the same RMP for the same measurand if they are involved in more than one EQAS. The reference measurement services were created to help laboratory professionals and manufacturers implement correct metrological traceability, and the JCTLM database is the only correct way to retrieve all the necessary information to this end.
Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Databases for the Global Dynamics of Multiparameter Nonlinear Systems
2014-03-05
[Partially recoverable report excerpt] AFRL-OSR-VA-TR-2014-0078: Databases for the Global Dynamics of Multiparameter Nonlinear Systems. Konstantin Mischaikow, Rutgers, The State University of New Jersey, ASB III, Rutgers Plaza, New Brunswick, NJ 08807. ... multiparameter nonlinear dynamical systems. We refer to the output as a Database for Global Dynamics since it allows the user to query for information about the existence and ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yubin; Shankar, Mallikarjun; Park, Byung H.
Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real-world graph queries such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility for users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
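The 3EG transformation itself is not specified in the abstract; as a rough conceptual sketch of the general idea (rows of normalized tables become nodes, foreign-key values become edges), under assumed toy tables and an explicit foreign-key map, not the paper's actual algorithm:

```python
# Conceptual sketch of turning a normalized relational schema into a graph:
# each row becomes a node, each foreign-key value becomes an edge.
# Table names, columns, and the foreign-key declarations are illustrative,
# not taken from the paper's 3EG transformation.

def relational_to_graph(tables, foreign_keys):
    """tables: {name: [row dicts with 'id']};
    foreign_keys: {(table, column): target_table}."""
    nodes = {}   # (table, id) -> row attributes
    edges = []   # (source node, target node, label)
    for tname, rows in tables.items():
        for row in rows:
            nodes[(tname, row["id"])] = row
    for tname, rows in tables.items():
        for row in rows:
            for col, val in row.items():
                target = foreign_keys.get((tname, col))
                if target is not None and val is not None:
                    edges.append(((tname, row["id"]), (target, val), col))
    return nodes, edges

patients = [{"id": 1, "name": "A", "provider_id": 10}]
providers = [{"id": 10, "name": "Dr. X"}]
nodes, edges = relational_to_graph(
    {"patient": patients, "provider": providers},
    {("patient", "provider_id"): "provider"},
)
# One edge now links patient 1 to provider 10, so queries such as
# "shared providers" become neighbor lookups instead of SQL joins.
```

In this view a graph edge plays the role of a pre-computed join, which is consistent with the abstract's claim that the graph acts as multiple de-normalized tables.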
Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning
2007-10-18
Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. 
The PICR interface, documentation and code examples are available at http://www.ebi.ac.uk/Tools/picr.
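The core mapping idea described, cross-referencing identifiers through 100% sequence identity against a warehouse, can be illustrated with a toy in-memory index. The databases, accessions and sequences below are invented, and this is a conceptual sketch only, not PICR's actual implementation:

```python
# Toy sketch of mapping by 100% sequence identity: index every source-database
# record by its exact sequence, then return all cross-references sharing the
# query's sequence. All accessions and sequences are made up for illustration.
from collections import defaultdict

def build_index(records):
    """records: iterable of (source_db, accession, sequence) tuples."""
    index = defaultdict(set)
    for db, acc, seq in records:
        index[seq].add((db, acc))
    return index

def cross_references(index, sequence, source_db=None):
    hits = index.get(sequence, set())
    if source_db is not None:              # optional source-database filter
        hits = {(db, acc) for db, acc in hits if db == source_db}
    return sorted(hits)

idx = build_index([
    ("UniProtKB", "P12345",  "MKTAYIAKQR"),
    ("RefSeq",    "NP_0001", "MKTAYIAKQR"),
    ("PDB",       "1ABC",    "GGGGS"),
])
print(cross_references(idx, "MKTAYIAKQR"))
# [('RefSeq', 'NP_0001'), ('UniProtKB', 'P12345')]
```

Keying on the exact sequence is what makes the mapping robust to unstable accessions: two databases that disagree on identifiers still agree on the sequence itself.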
The research of network database security technology based on web service
NASA Astrophysics Data System (ADS)
Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin
2013-03-01
Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies the security technology of the network database, analyzes the sub-key encryption algorithm in detail, and applies this algorithm successfully to a campus one-card system. The realization process of the encryption algorithm is discussed; the method can serve as a reference in many fields, particularly in management information system security and e-commerce.
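The abstract does not detail the paper's sub-key encryption algorithm. As a generic sketch of one common reading of the idea, deriving an independent sub-key per database field from a single master key, so that disclosing one sub-key does not expose the others, here is an HMAC-based derivation; the scheme and all names are assumptions, not the paper's method:

```python
# Generic per-field sub-key derivation (an illustrative assumption, not the
# paper's concrete algorithm): each field gets its own key derived from a
# master key with HMAC-SHA256, keyed on the field name.
import hashlib
import hmac

def derive_subkey(master_key: bytes, field_name: str) -> bytes:
    return hmac.new(master_key, field_name.encode(), hashlib.sha256).digest()

master = b"campus-one-card-master-key"   # hypothetical master key
k_balance = derive_subkey(master, "card_balance")
k_identity = derive_subkey(master, "student_id")

assert k_balance != k_identity                              # independent sub-keys
assert k_balance == derive_subkey(master, "card_balance")   # deterministic
```

Each sub-key would then be used with a proper cipher to encrypt its field; the derivation step is what keeps field keys manageable from one master secret.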
Repetitive Bibliographical Information in Relational Databases.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1988-01-01
Proposes a solution to the problem of loading repetitive bibliographic information in a microcomputer-based relational database management system. The alternative design described is based on a representational redundancy design and normalization theory. (12 references) (Author/CLB)
The Relationship between Journal Productivity and Obsolescence.
ERIC Educational Resources Information Center
Wallace, Danny P.
1986-01-01
Examines relationship between journal productivity (number of references to particular journal) and journal obsolescence (median age of references to particular journal) for database of references dealing with desalination. Citation age by Bradford zones, continuous measurement of productivity and citation age, and underlying structure of observed…
A comparative view of the new journal: Assessment.
Blashfield, R K; Archer, G
2001-09-01
The reference sections from all articles in the 1997 volumes of Assessment, Journal of Personality Assessment, and Psychological Assessment were entered into a database and analyzed. An article published in Assessment averaged almost 31 references; an article in Journal of Personality Assessment contained an average of 33 references; Psychological Assessment averaged 38 references per article. The median age of the references in the three journals was 8 years, with an interquartile range of 4 to 14 years. The Journal of Personality Assessment had the largest number of citations in this database of 5,316 references. Each of the three journals received a relatively large share of its citations from articles published in the same journal (self-citations). Randomly selected articles from the 1997 volume of Assessment received fewer citations in the Social Science Citation Index than a similar set of articles from the other two journals. However, the data on Assessment, when compared with data available on other new scientific publications, suggest that Assessment is doing as well as other fledgling journals.
van Baal, Sjozef; Kaimakis, Polynikis; Phommarinh, Manyphong; Koumbi, Daphne; Cuppens, Harry; Riccardino, Francesca; Macek, Milan; Scriver, Charles R; Patrinos, George P
2007-01-01
Frequency of INherited Disorders database (FINDbase) (http://www.findbase.org) is a relational database, derived from the ETHNOS software, recording frequencies of causative mutations leading to inherited disorders worldwide. Database records include the population and ethnic group, the disorder name and the related gene, accompanied by links to any corresponding locus-specific mutation database, to the respective Online Mendelian Inheritance in Man entries and the mutation together with its frequency in that population. The initial information is derived from the published literature, locus-specific databases and genetic disease consortia. FINDbase offers a user-friendly query interface, providing instant access to the list and frequencies of the different mutations. Query outputs can be in either tabular or graphical format, accompanied by reference(s) to the data source. Registered users from three different groups, namely administrator, national coordinator and curator, are responsible for database curation and/or data entry/correction online via a password-protected interface. Database access is free of charge and there are no registration requirements for data querying. FINDbase provides a simple, web-based system for population-based mutation data collection and retrieval and can serve not only as a valuable online tool for molecular genetic testing of inherited disorders but also as a non-profit model for sustainable database funding, in the form of a 'database-journal'.
A nursing qualitative systematic review required MEDLINE and CINAHL for study identification.
Subirana, Mireia; Solá, Ivan; Garcia, Josep M; Gich, Ignasi; Urrútia, Gerard
2005-01-01
To analyze the number and relevance of references retrieved from CINAHL, MEDLINE, and EMBASE in performing a nursing systematic review, a search strategy for the review topic was designed according to thesaurus terms. The study analyzes (1) references with an abstract, (2) overlap between databases, (3) reference relevance, (4) relevance agreement between experts, and (5) reference accessibility. The bibliographic search retrieved 232 references: 16% (37) in CINAHL, 68% (157) in MEDLINE, and 16% (38) in EMBASE. Of these, 72% (164) were references retrieved with an abstract: 14% (23) in CINAHL, 70% (115) in MEDLINE, and 16% (26) in EMBASE. Overlap was observed in 2% (5) of the references. Relevance assessment reduced the number of references to 43 (19%): 12 (34.3%) in CINAHL, 31 (19.7%) in MEDLINE, and none in EMBASE (Z=-1.97; P=.048). Agreement between experts achieved a maximum Cohen's kappa of 0.76 (P < .005). References identified in CINAHL were the most difficult to obtain (chi(2)=3.9; df=1; P=.048). To perform a quality bibliographic search for a systematic review on nursing topics, CINAHL and MEDLINE are essential databases for consultation to maximize the accuracy of the search.
Rice proteome analysis: a step toward functional analysis of the rice genome.
Komatsu, Setsuko; Tanaka, Naoki
2005-03-01
The technique of proteome analysis using 2-DE has the power to monitor global changes that occur in the protein complement of tissues and subcellular compartments. In this review, we describe construction of the rice proteome database, the cataloging of rice proteins, and the functional characterization of some of the proteins identified. Initially, proteins extracted from various tissues and organelles were separated by 2-DE and an image analyzer was used to construct a display or reference map of the proteins. The rice proteome database currently contains 23 reference maps based on 2-DE of proteins from different rice tissues and subcellular compartments. These reference maps comprise 13 129 rice proteins, and the amino acid sequences of 5092 of these proteins are entered in the database. Major proteins involved in growth or stress responses have been identified by using a proteomics approach and some of these proteins have unique functions. Furthermore, initial work has also begun on analyzing the phosphoproteome and protein-protein interactions in rice. The information obtained from the rice proteome database will aid in the molecular cloning of rice genes and in predicting the function of unknown proteins.
Identification of "Known Unknowns" Utilizing Accurate Mass Data and ChemSpider
NASA Astrophysics Data System (ADS)
Little, James L.; Williams, Antony J.; Pshenichnov, Alexey; Tkachenko, Valery
2012-01-01
In many cases, an unknown to an investigator is actually known in the chemical literature, a reference database, or an internet resource. We refer to these types of compounds as "known unknowns." ChemSpider is a very valuable internet database of known compounds, useful for identifying these types of compounds in commercial, environmental, forensic, and natural product samples. The database contains over 26 million entries from hundreds of data sources and is provided as a free resource to the community. Accurate-mass mass spectrometry data are used to query the database by either elemental composition or monoisotopic mass. Searching by elemental composition is the preferred approach. However, it is often difficult to determine a unique elemental composition for compounds with molecular weights greater than 600 Da. In these cases, searching by the monoisotopic mass is advantageous. In either case, the search results are refined by sorting the number of references associated with each compound in descending order. This raises the most useful candidates to the top of the list for further evaluation. These approaches were shown to be successful in identifying "known unknowns" encountered in our laboratory and for compounds of interest to others.
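The triage workflow described above, match candidates within a mass tolerance of the measured monoisotopic mass, then sort by literature reference count in descending order, can be sketched as follows; the candidate compounds and reference counts are invented:

```python
# Sketch of the "known unknowns" triage: keep candidates whose monoisotopic
# mass lies within a ppm tolerance of the measured mass, then rank by
# reference count (most-referenced first). All data below are invented.
def rank_candidates(candidates, measured_mass, tol_ppm=5.0):
    tol = measured_mass * tol_ppm / 1e6
    hits = [c for c in candidates if abs(c["mass"] - measured_mass) <= tol]
    return sorted(hits, key=lambda c: c["references"], reverse=True)

candidates = [
    {"name": "caffeine",  "mass": 194.08038, "references": 5000},
    {"name": "obscure-A", "mass": 194.08040, "references": 3},
    {"name": "unrelated", "mass": 210.05000, "references": 900},
]
top = rank_candidates(candidates, measured_mass=194.0804)
print([c["name"] for c in top])   # ['caffeine', 'obscure-A']
```

The reference-count sort is the key heuristic: a well-studied compound is far more likely to be the "known unknown" than a sparsely cited isobaric match.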
Metzendorf, Maria-Inti; Schulz, Manuela; Braun, Volker
2014-10-01
To make well-informed decisions or carry out sound research, clinicians and researchers alike require specific information-seeking skills matching their respective information needs. Biomedical information is traditionally available via different literature databases. This article gives an introduction to two diverging sources, PubMed (23 million references) and The Cochrane Library (800,000 references), both of which offer sophisticated instruments for searching an increasing number of medical publications of varied quality and ambition. Whereas PubMed, as an unfiltered source of primary literature, comprises all the different publication types occurring in academic journals, The Cochrane Library is a pre-filtered source offering access to either synthesized publication types or critically appraised and carefully selected references. A search has to be approached deliberately and requires good knowledge of the scope and features of the databases, as well as the ability to build a search strategy in a structured way. We present a specific and a sensitive search approach, making use of both databases within two application case scenarios in order to identify the evidence on granulocyte transfusions for infections in adult patients with neutropenia.
Proteome reference map and regulation network of neonatal rat cardiomyocyte
Li, Zi-jian; Liu, Ning; Han, Qi-de; Zhang, You-yi
2011-01-01
Aim: To study and establish a proteome reference map and regulation network of neonatal rat cardiomyocyte. Methods: Cultured cardiomyocytes of neonatal rats were used. All proteins expressed in the cardiomyocytes were separated and identified by two-dimensional polyacrylamide gel electrophoresis (2-DE) and matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS). Biological networks and pathways of the neonatal rat cardiomyocytes were analyzed using the Ingenuity Pathway Analysis (IPA) program (www.ingenuity.com). A 2-DE database was made accessible on-line by the Make2ddb package on a web server. Results: More than 1000 proteins were separated on 2D gels, and 148 proteins were identified. The identified proteins were used for the construction of an extensible markup language-based database. Biological networks and pathways were constructed to analyze the functions associated with cardiomyocyte proteins in the database. The 2-DE database of rat cardiomyocyte proteins can be accessed at http://2d.bjmu.edu.cn. Conclusion: A proteome reference map and regulation network of the neonatal rat cardiomyocytes have been established, which may serve as an international platform for storage, analysis and visualization of cardiomyocyte proteomic data. PMID:21841810
ERIC Educational Resources Information Center
Miley, David W.
Many reference librarians still rely on manual searches to access vertical files, ready reference files, and other information stored in card files, drawers, and notebooks scattered around the reference department. Automated access to these materials via microcomputers using database management software may speed up the process. This study focuses…
The Spanish National Reference Database for Ionizing Radiations (BANDRRI)
Los Arcos JM; Bailador; Gonzalez; Gonzalez; Gorostiza; Ortiz; Sanchez; Shaw; Williart
2000-03-01
The Spanish National Reference Database for Ionizing Radiations (BANDRRI) is being implemented by a research team within the framework of a joint project between CIEMAT (Unidad de Metrologia de Radiaciones Ionizantes and Direccion de Informatica) and the Universidad Nacional de Educacion a Distancia (UNED, Departamento de Mecanica and Departamento de Fisica de Materiales). This paper presents the main objectives of BANDRRI, its dynamic and relational database structure, its interactive Web accessibility, and its main radionuclide-related contents at present.
Is Library Database Searching a Language Learning Activity?
ERIC Educational Resources Information Center
Bordonaro, Karen
2010-01-01
This study explores how non-native speakers of English think of words to enter into library databases when they begin the process of searching for information in English. At issue is whether or not language learning takes place when these students use library databases. Language learning in this study refers to the use of strategies employed by…
New data sources and derived products for the SRER digital spatial database
Craig Wissler; Deborah Angell
2003-01-01
The Santa Rita Experimental Range (SRER) digital database was developed to automate and preserve ecological data and increase their accessibility. The digital data holdings include a spatial database that is used to integrate ecological data in a known reference system and to support spatial analyses. Recently, the Advanced Resource Technology (ART) facility has added...
The Protein Information Resource: an integrated public resource of functional annotation of proteins
Wu, Cathy H.; Huang, Hongzhan; Arminski, Leslie; Castro-Alvear, Jorge; Chen, Yongxing; Hu, Zhang-Zhi; Ledley, Robert S.; Lewis, Kali C.; Mewes, Hans-Werner; Orcutt, Bruce C.; Suzek, Baris E.; Tsugita, Akira; Vinayaka, C. R.; Yeh, Lai-Su L.; Zhang, Jian; Barker, Winona C.
2002-01-01
The Protein Information Resource (PIR) serves as an integrated public resource of functional annotation of protein data to support genomic/proteomic research and scientific discovery. The PIR, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the PIR-International Protein Sequence Database (PSD), the major annotated protein sequence database in the public domain, containing about 250 000 proteins. To improve protein annotation and the coverage of experimentally validated data, a bibliography submission system is developed for scientists to submit, categorize and retrieve literature information. Comprehensive protein information is available from iProClass, which includes family classification at the superfamily, domain and motif levels, structural and functional features of proteins, as well as cross-references to over 40 biological databases. To provide timely and comprehensive protein data with source attribution, we have introduced a non-redundant reference protein database, PIR-NREF. The database consists of about 800 000 proteins collected from PIR-PSD, SWISS-PROT, TrEMBL, GenPept, RefSeq and PDB, with composite protein names and literature data. To promote database interoperability, we provide XML data distribution and open database schema, and adopt common ontologies. The PIR web site (http://pir.georgetown.edu/) features data mining and sequence analysis tools for information retrieval and functional identification of proteins based on both sequence and annotation information. The PIR databases and other files are also available by FTP (ftp://nbrfa.georgetown.edu/pir_databases). PMID:11752247
University Library Online Reference Service Program Plan, 1986/87.
ERIC Educational Resources Information Center
Koga, James S.
This program plan for online reference service--the individualized assistance provided to a library patron using an online system--at California State Polytechnic University, Pomona, covers the areas of funding, eligibility for online services, search request eligibility, database eligibility, management of online services, reference faculty…
CD-ROM + Fax = Shared Reference Resources.
ERIC Educational Resources Information Center
Fitzwater, Diana; Fradkin, Bernard
1988-01-01
Describes the Reference by GammaFax Project, which joined nine area libraries to provide cooperative reference access to optical disk databases. The configuration of disk players, microcomputers, and facsimile equipment used by the libraries is explained, and the improvements in cost effectiveness, provision of service, and librarian expertise are…
National Institute of Standards and Technology Data Gateway
NIST Scoring Package (PC database for purchase) The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.
This page is the starting point for EZ Query. It describes how to select key data elements from EPA's Facility Information Database and Geospatial Reference Database to build a tabular report or Comma Separated Value (CSV) files for downloading.
ERIC Educational Resources Information Center
Ertel, Monica M.
1984-01-01
This discussion of current microcomputer technologies available to libraries focuses on software applications in four major classifications: communications (online database searching); word processing; administration; and database management systems. Specific examples of library applications are given and six references are cited. (EJS)
BioPepDB: an integrated data platform for food-derived bioactive peptides.
Li, Qilin; Zhang, Chao; Chen, Hongjun; Xue, Jitong; Guo, Xiaolei; Liang, Ming; Chen, Ming
2018-03-12
Food-derived bioactive peptides play critical roles in regulating most biological processes and have considerable biological, medical and industrial importance. However, a large body of active-peptide data, including sequences, functions, sources, commercial product information and references, is poorly integrated. BioPepDB is a searchable database of food-derived bioactive peptides and their related articles, including more than four thousand bioactive peptide entries. Moreover, BioPepDB provides modules of prediction and hydrolysis-simulation for discovering novel peptides. It can serve as a reference database to investigate the function of different bioactive peptides. BioPepDB is available at http://bis.zju.edu.cn/biopepdbr/ . The web page utilises Apache, PHP5 and MySQL to provide the user interface for accessing the database and predicting novel peptides. The database itself is operated on a specialised server.
Fazio, Simone; Garraín, Daniel; Mathieux, Fabrice; De la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda
2015-01-01
Under the framework of the European Platform on Life Cycle Assessment, the European Reference Life-Cycle Database (ELCD, developed by the Joint Research Centre of the European Commission) provides core Life Cycle Inventory (LCI) data from front-running EU-level business associations and other sources. The ELCD contains energy-related data on power and fuels. This study describes the methods to be used for the quality analysis of energy data for European markets (available in third-party LC databases and from authoritative sources) that are, or could be, used in the context of the ELCD. The methodology was developed and tested on the energy datasets most relevant for the EU context, derived from GaBi (the reference database used to derive datasets for the ELCD), Ecoinvent, E3 and Gemis. The criteria for the database selection were based on the availability of EU-related data, the inclusion of comprehensive datasets on energy products and services, and the general approval of the LCA community. The proposed approach was based on the quality indicators developed within the International Reference Life Cycle Data System (ILCD) Handbook, further refined to facilitate their use in the analysis of energy systems. The overall Data Quality Rating (DQR) of an energy dataset can be calculated by summing up the quality rating (ranging from 1 to 5, where 1 represents very good and 5 very poor quality) of each of the quality criteria indicators, divided by the total number of indicators considered. The quality of each dataset can be estimated for each indicator and then compared across the different databases/sources. The results can highlight the weaknesses of each dataset and guide further improvements to enhance the data quality with regard to the established criteria. This paper describes the application of the methodology to two exemplary datasets, in order to show the potential of the methodological approach.
The analysis helps LCA practitioners to evaluate the usefulness of the ELCD datasets for their purposes, and dataset developers and reviewers to derive information that will help improve the overall DQR of databases.
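The DQR arithmetic stated above (sum of per-indicator ratings divided by the number of indicators) is simple to make concrete; the indicator names and ratings below are illustrative, not values from the study:

```python
# Overall Data Quality Rating as described: sum the per-indicator ratings
# (1 = very good ... 5 = very poor) and divide by the number of indicators.
# Indicator names and values are invented for illustration.
def data_quality_rating(ratings):
    return sum(ratings.values()) / len(ratings)

ratings = {
    "technological representativeness": 2,
    "geographical representativeness": 1,
    "time representativeness": 3,
    "completeness": 2,
    "precision": 2,
    "methodological appropriateness": 1,
}
print(round(data_quality_rating(ratings), 2))  # 1.83
```

Because the scale is inverted (lower is better), a dataset with DQR near 1 is high quality, and comparing DQRs across databases for the same measurand flags the weaker source directly.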
2004-09-01
[Partially recoverable report excerpt] 2.3 Databases: 2.3.1 Translanguage English Database; 2.3.2 Australian National Database of Spoken Language; 2.3.3 Strange Corpus; 2.3.4 ... some relevance to speech technology research. 2.3.1 Translanguage English Database: In a daring plan, Joseph Mariani, then at LIMSI-CNRS, proposed to ... native speakers. The database is known as the 'Translanguage English Database' but is often referred to as the 'terrible English database.' About 28 ...
Mapping the literature of clinical laboratory science.
Delwiche, Frances A
2003-07-01
This paper describes a citation analysis of the literature of clinical laboratory science (medical technology), conducted as part of a project of the Nursing and Allied Health Resources Section of the Medical Library Association. Three source journals widely read by those in the field were identified, from which cited references were collected for a three-year period. Analysis of the references showed that journals were the predominant format of literature cited and the majority of the references were from the last eleven years. Applying Bradford's Law of Scattering to the list of journals cited, three zones were created, each producing approximately one third of the cited references. Thirteen journals were in the first zone, eighty-one in the second, and 849 in the third. A similar list of journals cited was created for four specialty areas in the field: chemistry, hematology, immunohematology, and microbiology. In comparing the indexing coverage of the Zone 1 and 2 journals by four major databases, MEDLINE provided the most comprehensive coverage, while the Cumulative Index to Nursing and Allied Health Literature was the only database that provided complete coverage of the three source journals. However, to obtain complete coverage of the field, it is essential to search multiple databases.
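The Bradford zoning applied above can be sketched as: sort journals by citations received and cut the ranked list into three zones, each accumulating roughly one third of all cited references. The journal names and counts below are invented:

```python
# Bradford zoning sketch: rank journals by citations received (descending)
# and split into n_zones groups, each contributing roughly an equal share
# of the total cited references. Journal names and counts are invented.
def bradford_zones(journal_counts, n_zones=3):
    total = sum(journal_counts.values())
    target = total / n_zones
    zones, current, acc = [], [], 0
    for journal, count in sorted(journal_counts.items(),
                                 key=lambda kv: kv[1], reverse=True):
        current.append(journal)
        acc += count
        # Close a zone once the cumulative count crosses the next threshold.
        if acc >= target * (len(zones) + 1) and len(zones) < n_zones - 1:
            zones.append(current)
            current = []
    zones.append(current)
    return zones

counts = {"J1": 300, "J2": 150, "J3": 90, "J4": 60, "J5": 50,
          "J6": 40, "J7": 30, "J8": 30, "J9": 25, "J10": 25}
zones = bradford_zones(counts)
print([len(z) for z in zones])  # [1, 2, 7]
```

The characteristic Bradford pattern is exactly this widening of zones: a handful of core journals in zone 1, a moderate middle zone, and a long scattered tail, mirroring the 13 / 81 / 849 split reported in the paper.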
Characteristics and external validity of the German Health Risk Institute (HRI) Database.
Andersohn, Frank; Walker, Jochen
2016-01-01
The aim of this study was to describe characteristics and external validity of the German Health Risk Institute (HRI) Database. The HRI Database is an anonymized healthcare database with longitudinal data from approximately six million Germans. In addition to demographic information (gender, age, region of residence), data on persistence of insurants over time, hospitalization rates, mortality rates and drug prescription rates were extracted from the HRI database for 2013. Corresponding national reference data were obtained from official sources. The proportion of men and women was similar in the HRI Database and Germany, but the database population was slightly younger (mean 40.4 vs 43.7 years). The proportion of insurants living in the eastern part of Germany was lower in the HRI Database (10.1% vs 19.7%). There was good accordance with German reference data with respect to hospitalization rates, overall mortality rate and prescription rates for the 20 most often reimbursed drug classes, with the overall burden of morbidity being slightly lower in the HRI database. Of insurants insured on 1 January 2009 (N = 6.2 million), a total of 70.6% survived and remained continuously insured with the same statutory health insurance until 31 December 2013. This proportion increased to 77.5% if only insurants ≥40 years were considered. There was good overall accordance between the HRI database and the German population in terms of measures of morbidity, mortality and drug usage. Persistence of insurants within the database over time was high, indicating suitability of the data source for longitudinal epidemiological analyses. Copyright © 2015 John Wiley & Sons, Ltd.
NED and SIMBAD Conventions for Bibliographic Reference Coding
NASA Technical Reports Server (NTRS)
Schmitz, M.; Helou, G.; Dubois, P.; LaGue, C.; Madore, B.; Corwin, H. G., Jr.; Lesteven, S.
1995-01-01
The primary purpose of the 'reference code' is to provide a unique and traceable representation of a bibliographic reference within the structure of each database. The code is used frequently in the interfaces as a succinct abbreviation of a full bibliographic reference. Since its inception, it has become a standard code not only for NED and SIMBAD, but also for other bibliographic services.
Code of Federal Regulations, 2010 CFR
2010-01-01
...(s) located in Department's public reference room. 221.550 Section 221.550 Aeronautics and Space... public reference room. Copies of information contained in a filer's on-line tariff database may be... Reference Room by the filer. The filer may assess a fee for copying, provided it is reasonable and that no...
[Experience with the reference manager EndNote-EndLink].
Reiss, M; Reiss, G
1998-09-01
A good reference management program should make it easy to record the elements of a reference: author's name, year of publication, title of article, etc. It should offer tools that let you find and retrieve references quickly, and it should be able to produce the bibliography in the format required for a particular publication. There are many such computer programs, but very few stand out as truly useful, time saving, and work enhancing. One of them is EndNote-EndLink. We report our experience with this database manager. The functions and use of the software package EndNote 2.3 for Windows are described. You can create your own database or download batches of references from one of the popular searching services (e.g. MEDLINE). When you want to cite a reference, you simply paste it wherever you want your in-text citation to appear. To prepare the bibliography, EndNote scans your article, replaces the placeholders with citations and prints the list of references at the end of the manuscript, according to the style you have chosen. Altogether, EndNote provides an excellent combination of features and ease of use.
Developing a list of reference chemicals for testing alternatives to whole fish toxicity tests.
Schirmer, Kristin; Tanneberger, Katrin; Kramer, Nynke I; Völker, Doris; Scholz, Stefan; Hafner, Christoph; Lee, Lucy E J; Bols, Niels C; Hermens, Joop L M
2008-11-11
This paper details the derivation of a list of 60 reference chemicals for the development of alternatives to animal testing in ecotoxicology with a particular focus on fish. The chemicals were selected as a prerequisite to gather mechanistic information on the performance of alternative testing systems, namely vertebrate cell lines and fish embryos, in comparison to the fish acute lethality test. To avoid the need for additional experiments with fish, the U.S. EPA fathead minnow database was consulted as reference for whole organism responses. This database was compared to the Halle Registry of Cytotoxicity and a collation of data by the German EPA (UBA) on acute toxicity data derived from zebrafish embryos. Chemicals that were present in the fathead minnow database and in at least one of the other two databases were subject to selection. Criteria included the coverage of a wide range of toxicity and physico-chemical parameters as well as the determination of outliers of the in vivo/in vitro correlations. While the reference list of chemicals now guides our research for improving cell line and fish embryo assays to make them widely applicable, the list could be of benefit to search for alternatives in ecotoxicology in general. One example would be the use of this list to validate structure-activity prediction models, which in turn would benefit from a continuous extension of this list with regard to physico-chemical and toxicological data.
The LANL hemorrhagic fever virus database, a new platform for analyzing biothreat viruses.
Kuiken, Carla; Thurmond, Jim; Dimitrijevic, Mira; Yoon, Hyejin
2012-01-01
Hemorrhagic fever viruses (HFVs) are a diverse set of over 80 viral species, found in 10 different genera comprising five different families: arena-, bunya-, flavi-, filo- and togaviridae. All these viruses are highly variable and evolve rapidly, making them elusive targets for the immune system and for vaccine and drug design. About 55,000 HFV sequences exist in the public domain today. A central website that provides annotated sequences and analysis tools will be helpful to HFV researchers worldwide. The HFV sequence database collects and stores sequence data and provides a user-friendly search interface and a large number of sequence analysis tools, following the model of the highly regarded and widely used Los Alamos HIV database [Kuiken, C., B. Korber, and R.W. Shafer, HIV sequence databases. AIDS Rev, 2003. 5: p. 52-61]. The database uses an algorithm that aligns each sequence to a species-wide reference sequence. The NCBI RefSeq database [Sayers et al. (2011) Database resources of the National Center for Biotechnology Information. Nucleic Acids Res., 39, D38-D51.] is used for this; if a reference sequence is not available, a Blast search finds the best candidate. Using this method, sequences in each genus can be retrieved pre-aligned. The HFV website can be accessed via http://hfv.lanl.gov.
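The reference-selection rule described above (use the species-wide NCBI RefSeq entry when one exists, otherwise fall back to the best BLAST candidate) can be sketched as follows. The `refseq_index` mapping and the BLAST lookup are hypothetical stand-ins for illustration, not the database's actual code.

```python
def pick_reference(species, refseq_index, blast_best_hit):
    """Sketch of the fallback rule: prefer the RefSeq entry for the
    species; otherwise use the best BLAST candidate."""
    ref = refseq_index.get(species)
    if ref is not None:
        return ref, "refseq"
    return blast_best_hit(species), "blast"

# Toy stand-ins for the real RefSeq index and BLAST search.
refseq_index = {"Lassa virus": "NC_004296"}
hit = pick_reference("Lassa virus", refseq_index, lambda s: "best-local-hit")
print(hit)  # ('NC_004296', 'refseq')
```

A species missing from the index would instead return the BLAST candidate, tagged "blast", so every sequence still gets anchored to some alignment reference.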
The Physiology Constant Database of Teen-Agers in Beijing
Wei-Qi, Wei; Guang-Jin, Zhu; Cheng-Li, Xu; Shao-Mei, Han; Bao-Shen, Qi; Li, Chen; Shu-Yu, Zu; Xiao-Mei, Zhou; Wen-Feng, Hu; Zheng-Guo, Zhang
2004-01-01
Physiology constants of adolescents are important for understanding growing living systems and are a useful reference in clinical and epidemiological research. Until recently, physiology constants were not available in China, and therefore most physiologists, physicians, and nutritionists had to use data from abroad for reference. However, differences between Eastern and Western populations cast doubt on the usefulness of overseas data. We have therefore created a database system to provide a repository for the storage of physiology constants of teen-agers in Beijing. The several thousand pieces of data are divided into hematological biochemistry, lung function, and cardiac function, with all data manually checked before being transferred into the database. The database was built through the development of a web interface, scripts, and a relational database. The physiology data were integrated into the relational database system to provide flexible search facilities using combinations of various terms and parameters. A web browser interface was designed to facilitate searching, and the database is available on the web. Statistical tables, scatter diagrams, and histograms of the data are available to both anonymous visitors and registered users according to their queries, while only registered users can access details, including data download and advanced search. PMID:15258669
MPD3: a useful medicinal plants database for drug designing.
Mumtaz, Arooj; Ashfaq, Usman Ali; Ul Qamar, Muhammad Tahir; Anwar, Farooq; Gulzar, Faisal; Ali, Muhammad Amjad; Saari, Nazamid; Pervez, Muhammad Tariq
2017-06-01
Medicinal plants are the main natural pools for the discovery and development of new drugs. In the modern era of computer-aided drug designing (CADD), there is a need for prompt efforts to design and construct a useful database management system that allows proper data storage, retrieval and management with a user-friendly interface. An inclusive database holding information about the classification, activity and ready-to-dock library of medicinal plants' phytochemicals is therefore required to assist researchers in the field of CADD. The present work was designed to merge the activities of phytochemicals from medicinal plants, their targets and literature references into a single comprehensive database named the Medicinal Plants Database for Drug Designing (MPD3). The newly designed online and downloadable MPD3 contains information about more than 5000 phytochemicals from around 1000 medicinal plants with 80 different activities, more than 900 literature references and more than 200 targets. The database is deemed to be very useful for researchers engaged in medicinal plants research, CADD and drug discovery/development, offering ease of operation and increased efficiency. MPD3 is a comprehensive database which provides most of the information related to medicinal plants on a single platform. MPD3 is freely available at: http://bioinform.info.
NONATObase: a database for Polychaeta (Annelida) from the Southwestern Atlantic Ocean.
Pagliosa, Paulo R; Doria, João G; Misturini, Dairana; Otegui, Mariana B P; Oortman, Mariana S; Weis, Wilson A; Faroni-Perez, Larisse; Alves, Alexandre P; Camargo, Maurício G; Amaral, A Cecília Z; Marques, Antonio C; Lana, Paulo C
2014-01-01
Networks can greatly advance data sharing attitudes by providing organized and useful data sets on marine biodiversity in a friendly and shared scientific environment. NONATObase, the interactive database on polychaetes presented herein, will provide new macroecological and taxonomic insights of the Southwestern Atlantic region. The database was developed by the NONATO network, a team of South American researchers, who integrated available information on polychaetes from between 5°N and 80°S in the Atlantic Ocean and near the Antarctic. The guiding principle of the database is to keep free and open access to data based on partnerships. Its architecture consists of a relational database integrated in the MySQL and PHP framework. Its web application allows access to the data from three different directions: species (qualitative data), abundance (quantitative data) and data set (reference data). The database has built-in functionality, such as the filter of data on user-defined taxonomic levels, characteristics of site, sample, sampler, and mesh size used. Considering that there are still many taxonomic issues related to poorly known regional fauna, a scientific committee was created to work out consistent solutions to current misidentifications and equivocal taxonomy status of some species. Expertise from this committee will be incorporated by NONATObase continually. The use of quantitative data was possible by standardization of a sample unit. All data, maps of distribution and references from a data set or a specified query can be visualized and exported to a commonly used data format in statistical analysis or reference manager software. The NONATO network has initialized with NONATObase, a valuable resource for marine ecologists and taxonomists. The database is expected to grow in functionality as it comes in useful, particularly regarding the challenges of dealing with molecular genetic data and tools to assess the effects of global environment change. 
Database URL: http://nonatobase.ufsc.br/ PMID:24573879
The Novice User and CD-ROM Database Services. ERIC Digest.
ERIC Educational Resources Information Center
Schamber, Linda
This digest answers the following questions that beginning or novice users may have about CD-ROM (a compact disk with read-only memory) database services: (1) What is CD-ROM? (2) What databases are available? (3) Is CD-ROM difficult to use? (4) How much does CD-ROM cost? and (5) What is the future of CD-ROM? (15 references) (MES)
Mapping the literature of transcultural nursing*
Murphy, Sharon C.
2006-01-01
Overview: No bibliometric studies of the literature of the field of transcultural nursing have been published. This paper describes a citation analysis as part of the project undertaken by the Nursing and Allied Health Resources Section of the Medical Library Association to map the literature of nursing. Objective: The purpose of this study was to identify the core literature and determine which databases provided the most complete access to the transcultural nursing literature. Methods: Cited references from essential source journals were analyzed for a three-year period. Eight major databases were compared for indexing coverage of the identified core list of journals. Results: This study identifies 138 core journals. Transcultural nursing relies on journal literature from associated health sciences fields in addition to nursing. Books provide an important format. Nearly all cited references were from the previous 18 years. In comparing indexing coverage among the eight major databases, three rose to the top. Conclusions: No single database can claim comprehensive indexing coverage for this broad field. It is essential to search multiple databases. Based on this study, PubMed/MEDLINE, Social Sciences Citation Index, and CINAHL provide the best coverage. Collections supporting transcultural nursing require robust access to literature beyond nursing publications. PMID:16710461
GMDD: a database of GMO detection methods.
Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans J P; Guo, Rong; Liang, Wanqi; Zhang, Dabing
2008-06-04
Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, harmonization and standardization of GMO analysis methods at the global level are still needed. The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which facilitates the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to this database, and newly submitted information is released soon after being checked. GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, Upendra S.
Nuclear reactor codes require validation with appropriate data representing the plant for specific scenarios. The thermal-hydraulic data is scattered in different locations and in different formats. Some of the data is in danger of being lost. A relational database is being developed to organize the international thermal hydraulic test data for various reactor concepts and different scenarios. At the reactor system level, the data is organized to include separate effect tests and integral effect tests for specific scenarios and corresponding phenomena. The database relies on the phenomena identification sections of expert-developed PIRTs. The database will provide a summary of appropriate data, a review of facility information, test descriptions, instrumentation, references for the experimental data and some examples of application of the data for validation. The current database platform includes scenarios for PWR, BWR, VVER, and specific benchmarks for CFD modelling data, and is to be expanded to include references for molten salt reactors. There are placeholders for high temperature gas cooled reactors, CANDU and liquid metal reactors. This relational database is called The International Experimental Thermal Hydraulic Systems (TIETHYS) database and currently resides at the Nuclear Energy Agency (NEA) of the OECD, freely open to public access. Going forward, the database will be extended to include additional links and data as they become available. https://www.oecd-nea.org/tiethysweb/
Concentrations of indoor pollutants (CIP) database user's manual (Version 4. 0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apte, M.G.; Brown, S.R.; Corradi, C.A.
1990-10-01
This is the latest release of the database and the user manual. The user manual is a tutorial and reference for utilizing the CIP Database system. An installation guide is included to cover various hardware configurations. Numerous examples and explanations of the dialogue between the user and the database program are provided. It is hoped that this resource will, along with on-line help and the menu-driven software, make for a quick and easy learning curve. For the purposes of this manual, it is assumed that the user is acquainted with the goals of the CIP Database, which are: (1) to collect existing measurements of concentrations of indoor air pollutants in a user-oriented database and (2) to provide a repository of references citing measured field results openly accessible to a wide audience of researchers, policy makers, and others interested in the issues of indoor air quality. The database software, as distinct from the data, is contained in two files, CIP.EXE and PFIL.COM. CIP.EXE is made up of a number of programs written in dBase III command code and compiled using Clipper into a single, executable file. PFIL.COM is a program written in Turbo Pascal that handles the output of summary text files and is called from CIP.EXE. Version 4.0 of the CIP Database is current through March 1990.
The Role of Microcomputers in Libraries.
ERIC Educational Resources Information Center
Lundeen, Gerald
1980-01-01
Describes the functions and characteristics of the microcomputer and discusses library applications including cataloging, circulation, acquisitions, serials control, reference and database systems, administration, current and future trends, and computers as media. Twenty references are listed. (CHC)
Searching CINAHL did not add value to clinical questions posed in NICE guidelines.
Beckles, Zosia; Glover, Sarah; Ashe, Joanna; Stockton, Sarah; Boynton, Janette; Lai, Rosalind; Alderson, Philip
2013-09-01
This study aims to quantify the unique useful yield from the Cumulative Index to Nursing and Allied Health Literature (CINAHL) database to National Institute for Health and Clinical Excellence (NICE) clinical guidelines. A secondary objective is to investigate the relationship between this yield and different clinical question types. It is hypothesized that the unique useful yield from CINAHL is low, and this database can therefore be relegated to selective rather than routine searching. A retrospective sample of 15 NICE guidelines published between 2005 and 2009 was taken. Information on clinical review question type, number of references, and reference source was extracted. Only 0.33% (95% confidence interval: 0.01-0.64%) of references per guideline were unique to CINAHL. Nursing- or allied health (AH)-related questions were nearly three times as likely to have references unique to CINAHL as non-nursing- or AH-related questions (14.89% vs. 5.11%), and this relationship was found to be significant (P<0.05). No significant relationship was found between question type and unique CINAHL yield for drug-related questions. The very low proportion of references unique to CINAHL strongly suggests that this database can be safely relegated to selective rather than routine searching. Nursing- and AH-related questions would benefit from selective searching of CINAHL. Copyright © 2013 Elsevier Inc. All rights reserved.
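The reported yield of 0.33% (95% CI: 0.01-0.64%) is a simple proportion with a confidence interval; a normal-approximation (Wald) interval reproduces figures of that order. The counts below (4 CINAHL-unique references out of 1200 screened) are hypothetical, chosen only to illustrate the calculation, and the study's exact interval likely came from a different estimator.

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) confidence interval for a proportion,
    clipped to the valid [0, 1] range."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(p - z * se, 0.0), min(p + z * se, 1.0)

# Hypothetical counts: 4 CINAHL-unique references out of 1200 screened.
p, lo, hi = wald_ci(4, 1200)
print(f"{p:.2%} (95% CI: {lo:.2%}-{hi:.2%})")
```

Near-zero proportions like this sit at the edge of the Wald interval's validity, which is why the lower bound has to be clipped at zero; exact (Clopper-Pearson) or Wilson intervals are the usual alternatives there.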
NASA Astrophysics Data System (ADS)
Wziontek, Hartmut; Falk, Reinhard; Bonvalot, Sylvain; Rülke, Axel
2017-04-01
After about 10 years of successful joint operation by BGI and BKG, the International Database for Absolute Gravity Measurements "AGrav" (see references hereafter) underwent a major revision. The outdated web interface was replaced by a responsive, high-level web application framework based on Python and built on top of Pyramid. Functionality was added, such as interactive time series plots and a report generator, and the interactive map-based station overview was completely updated, now comprising clustering and the classification of stations. Furthermore, the database backend was migrated to PostgreSQL for better support of the application framework and long-term availability. As comparisons of absolute gravimeters (AGs) become essential to realize a precise and uniform gravity standard, the database was extended to document the results at international and regional level, including those performed at monitoring stations equipped with superconducting gravimeters (SGs). This makes it possible to link different AGs and to trace their equivalence back to the key comparisons under the auspices of the International Committee for Weights and Measures (CIPM) as the best metrological realization of the absolute gravity standard. In this way the new AGrav database accommodates the demands of the new Global Absolute Gravity Reference System as recommended by IAG Resolution No. 2 adopted in Prague in 2015. The new database will be presented with a focus on the new user interface and new functionality, calling on all institutions involved in absolute gravimetry to participate and contribute their information to build up the most complete possible picture of high-precision absolute gravimetry and improve its visibility. A Digital Object Identifier (DOI) will be provided by BGI to contributors to improve traceability and facilitate the referencing of their gravity surveys.
Links and references: BGI mirror site: http://bgi.obs-mip.fr/data-products/Gravity-Databases/Absolute-Gravity-data/ BKG mirror site: http://agrav.bkg.bund.de/agrav-meta/ Wilmes, H., H. Wziontek, R. Falk, S. Bonvalot (2009). AGrav - the New Absolute Gravity Database and a Proposed Cooperation with the GGP Project. J. of Geodynamics, 48, pp. 305-309. doi:10.1016/j.jog.2009.09.035. Wziontek, H., H. Wilmes, S. Bonvalot (2011). AGrav: An international database for absolute gravity measurements. In Geodesy for Planet Earth (S. Kenyon et al., eds). IAG Symposia, 136, 1035-1040, Springer, Berlin, 2011. doi:10.1007/978-3-642-20338-1_130.
Online Patent Searching: The Realities.
ERIC Educational Resources Information Center
Kaback, Stuart M.
1983-01-01
Considers patent subject searching capabilities of major online databases, noting patent claims, "deep-indexed" files, test searches, retrieval of related references, multi-database searching, improvements needed in indexing of chemical structures, full text searching, improvements needed in handling numerical data, and augmenting a…
Data tables for the 1994 National Transit Database report year
DOT National Transportation Integrated Search
1995-12-01
The Data Tables for the 1994 National Transit Database Report Year is one of three publications also referred to as the National Transit Database Reporting System. The report provides detailed summaries of financial and operating data submitted to FTA...
PlantTFDB: a comprehensive plant transcription factor database
Guo, An-Yuan; Chen, Xin; Gao, Ge; Zhang, He; Zhu, Qi-Hui; Liu, Xiao-Chuan; Zhong, Ying-Fu; Gu, Xiaocheng; He, Kun; Luo, Jingchu
2008-01-01
Transcription factors (TFs) play key roles in controlling gene expression. Systematic identification and annotation of TFs, followed by construction of TF databases may serve as useful resources for studying the function and evolution of transcription factors. We developed a comprehensive plant transcription factor database PlantTFDB (http://planttfdb.cbi.pku.edu.cn), which contains 26 402 TFs predicted from 22 species, including five model organisms with available whole genome sequence and 17 plants with available EST sequences. To provide comprehensive information for those putative TFs, we made extensive annotation at both family and gene levels. A brief introduction and key references were presented for each family. Functional domain information and cross-references to various well-known public databases were available for each identified TF. In addition, we predicted putative orthologs of those TFs among the 22 species. PlantTFDB has a simple interface to allow users to search the database by IDs or free texts, to make sequence similarity search against TFs of all or individual species, and to download TF sequences for local analysis. PMID:17933783
A literature search tool for intelligent extraction of disease-associated genes.
Jung, Jae-Yoon; DeLuca, Todd F; Nelson, Tristan H; Wall, Dennis P
2014-01-01
To extract disorder-associated genes from the scientific literature in PubMed with greater sensitivity for literature-based support than existing methods. We developed a PubMed query to retrieve disorder-related, original research articles. Then we applied a rule-based text-mining algorithm with keyword matching to extract target disorders, genes with significant results, and the type of study described by the article. We compared our resulting candidate disorder genes and supporting references with existing databases. We demonstrated that our candidate gene set covers nearly all genes in manually curated databases, and that the references supporting the disorder-gene link are more extensive and accurate than other general purpose gene-to-disorder association databases. We implemented a novel publication search tool to find target articles, specifically focused on links between disorders and genotypes. Through comparison against gold-standard manually updated gene-disorder databases and comparison with automated databases of similar functionality we show that our tool can search through the entirety of PubMed to extract the main gene findings for human diseases rapidly and accurately.
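A minimal sketch of the rule-based keyword matching described above: flag an abstract as a candidate disorder-gene link when it contains a target disorder term, a gene-like token, and a significance cue. The term lists, the gene-symbol regex, and the example sentence are all invented for illustration; the paper's actual rules and lexicons are richer than this.

```python
import re

# Hypothetical lexicons, not the paper's actual rule set.
DISORDER_TERMS = {"autism", "alzheimer disease", "schizophrenia"}
GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{2,6}\b")  # crude gene-symbol heuristic
SIGNIFICANCE_CUES = ("significant", "associated with", "p <", "p<")

def extract_candidates(abstract):
    """Return (disorder, gene) pairs when an abstract mentions a target
    disorder, a gene-like token, and a significance cue."""
    text = abstract.lower()
    disorders = [d for d in DISORDER_TERMS if d in text]
    genes = sorted(set(GENE_PATTERN.findall(abstract)))
    if disorders and genes and any(cue in text for cue in SIGNIFICANCE_CUES):
        return [(d, g) for d in disorders for g in genes]
    return []

hits = extract_candidates(
    "Variants in SHANK3 were significantly associated with autism in this cohort."
)
print(hits)  # [('autism', 'SHANK3')]
```

Even this crude version shows why such pipelines trade precision for recall: the all-caps heuristic matches any acronym, so real systems add gene dictionaries and study-type rules on top.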
VAPEPS user's reference manual, version 5.0
NASA Technical Reports Server (NTRS)
Park, D. M.
1988-01-01
This is the reference manual for the VibroAcoustic Payload Environment Prediction System (VAPEPS). The system consists of a computer program and a vibroacoustic database. The purpose of the system is to collect measurements of vibroacoustic data taken from flight events and ground tests, and to retrieve this data and provide a means of using the data to predict future payload environments. This manual describes the operating language of the program. Topics covered include database commands, Statistical Energy Analysis (SEA) prediction commands, stress prediction command, and general computational commands.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kogalovskii, M.R.
This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDBs) are databases that contain data used for statistical analysis. Topics under consideration include: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present database management systems (DBMSs) satisfy SDB requirements. Some current research directions in SDB systems are considered.
WWW database of optical constants for astronomy
NASA Astrophysics Data System (ADS)
Henning, Th.; Il'In, V. B.; Krivova, N. A.; Michel, B.; Voshchinnikov, N. V.
1999-04-01
The database we announce contains references to the papers, data files and links to the Internet resources related to measurements and calculations of the optical constants of materials of astronomical interest: different silicates, ices, oxides, sulfides, carbides, carbonaceous species from amorphous carbon to graphite and diamonds, etc. We describe the general structure and content of the database, which is now freely accessible via the Internet at http://www.astro.spbu.ru/JPDOC/entry.html or http://www.astro.uni-jena.de/Users/database/entry.html
Schomburg, Ida; Chang, Antje; Placzek, Sandra; Söhngen, Carola; Rother, Michael; Lang, Maren; Munaretto, Cornelia; Ulas, Susanne; Stelzer, Michael; Grote, Andreas; Scheer, Maurice; Schomburg, Dietmar
2013-01-01
The BRENDA (BRaunschweig ENzyme DAtabase) enzyme portal (http://www.brenda-enzymes.org) is the main information system of functional biochemical and molecular enzyme data and provides access to seven interconnected databases. BRENDA contains 2.7 million manually annotated data on enzyme occurrence, function, kinetics and molecular properties. Each entry is connected to a reference and the source organism. Enzyme ligands are stored with their structures and can be accessed via their names, synonyms or via a structure search. FRENDA (Full Reference ENzyme DAta) and AMENDA (Automatic Mining of ENzyme DAta) are based on text mining methods and represent a complete survey of PubMed abstracts with information on enzymes in different organisms, tissues or organelles. The supplemental database DRENDA provides more than 910 000 new EC number-disease relations in more than 510 000 references from automatic search and a classification of enzyme-disease-related information. KENDA (Kinetic ENzyme DAta), a new amendment extracts and displays kinetic values from PubMed abstracts. The integration of the EnzymeDetector offers an automatic comparison, evaluation and prediction of enzyme function annotations for prokaryotic genomes. The biochemical reaction database BKM-react contains non-redundant enzyme-catalysed and spontaneous reactions and was developed to facilitate and accelerate the construction of biochemical models.
Metzendorf, Maria-Inti; Schulz, Manuela; Braun, Volker
2014-01-01
To be able to take well-informed decisions or carry out sound research, clinicians and researchers alike require specific information seeking skills matching their respective information needs. Biomedical information is traditionally available via different literature databases. This article gives an introduction to two diverging sources, PubMed (23 million references) and The Cochrane Library (800,000 references), both of which offer sophisticated instruments for searching an increasing amount of medical publications of varied quality and ambition. Whereas PubMed as an unfiltered source of primary literature comprises all different kinds of publication types occurring in academic journals, The Cochrane Library is a pre-filtered source which offers access to either synthesized publication types or critically appraised and carefully selected references. A search approach has to be carried out deliberately and requires a good knowledge of the scope and features of the databases as well as the ability to build a search strategy in a structured way. We present a specific and a sensitive search approach, making use of both databases within two application case scenarios in order to identify the evidence on granulocyte transfusions for infections in adult patients with neutropenia. PMID:25538539
Sub-Audible Speech Recognition Based upon Electromyographic Signals
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C. (Inventor); Agabon, Shane T. (Inventor); Lee, Diana D. (Inventor)
2012-01-01
Method and system for processing and identifying a sub-audible signal formed by a source of sub-audible sounds. Sequences of samples of sub-audible sound patterns ("SASPs") for known words/phrases in a selected database are received for overlapping time intervals, and Signal Processing Transforms ("SPTs") are formed for each sample, as part of a matrix of entry values. The matrix is decomposed into contiguous, non-overlapping two-dimensional cells of entries, and neural net analysis is applied to estimate reference sets of weight coefficients that provide sums with optimal matches to reference sets of values. The reference sets of weight coefficients are used to determine a correspondence between a new (unknown) word/phrase and a word/phrase in the database.
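The pipeline described above can be sketched in broad strokes: overlapping windows over the signal, a spectral transform per window, the resulting matrix tiled into contiguous non-overlapping cells, and a nearest-reference match standing in for the trained neural-net weighting. The window, hop, and cell sizes below are illustrative choices, not values from the patent:

```python
import numpy as np

def sasp_features(signal, win=64, hop=32, cell=(4, 4)):
    """Feature vector for a sub-audible sample: FFT magnitudes of
    overlapping windows, tiled into non-overlapping 2-D cells whose
    sums serve as the matrix entry values."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.asarray(frames), axis=1))  # time x frequency
    ch, cw = cell
    h = (spec.shape[0] // ch) * ch
    w = (spec.shape[1] // cw) * cw
    spec = spec[:h, :w]
    # Sum each contiguous, non-overlapping cell into one feature value.
    cells = spec.reshape(h // ch, ch, w // cw, cw).sum(axis=(1, 3))
    return cells.ravel()

def classify(signal, references):
    """Pick the known word/phrase whose reference features are closest;
    Euclidean distance here stands in for the trained weight-coefficient match."""
    feats = sasp_features(signal)
    return min(references, key=lambda word: np.linalg.norm(feats - references[word]))

# Two synthetic "words" with distinct spectral content:
t = np.arange(512)
refs = {"yes": sasp_features(np.sin(0.3 * t)), "no": sasp_features(np.sin(0.7 * t))}
```

With these references, `classify(np.sin(0.3 * t), refs)` matches the "yes" pattern; the real system replaces the distance comparison with neural-net-estimated weight coefficients.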
Libraries of Peptide Fragmentation Mass Spectra Database
National Institute of Standards and Technology Data Gateway
SRD 1C NIST Libraries of Peptide Fragmentation Mass Spectra Database (Web, free access) The purpose of the library is to provide peptide reference data for laboratories employing mass spectrometry-based proteomics methods for protein analysis. Mass spectral libraries identify these compounds in a more sensitive and robust manner than alternative methods. These databases are freely available for testing and development of new applications.
MARC and Relational Databases.
ERIC Educational Resources Information Center
Llorens, Jose; Trenor, Asuncion
1993-01-01
Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)
Code of Federal Regulations, 2013 CFR
2013-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.2 Purpose. This... establishment and maintenance of a Publicly Available Consumer Product Safety Information Database (also referred to as the “Database”) on the safety of consumer products and other products or substances...
Code of Federal Regulations, 2011 CFR
2011-01-01
... CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions § 1102.2... establishment and maintenance of a Publicly Available Consumer Product Safety Information Database (also referred to as the “Database”) on the safety of consumer products and other products or substances...
Expanding Internationally: OCLC Gears Up.
ERIC Educational Resources Information Center
Chepesiuk, Ron
1997-01-01
Describes the Online Computer Library Center (OCLC) efforts in China, Germany, Canada, Scotland, Jamaica and Brazil. Discusses FirstSearch, an end-user reference service, and WorldCat, a bibliographic database. Highlights international projects developing increased OCLC online availability, database loading software, CD-ROM cataloging,…
MaizeGDB: The Maize Genetics and Genomics Database.
USDA-ARS?s Scientific Manuscript database
MaizeGDB is the community database for biological information about the crop plant Zea mays. Genomic, genetic, sequence, gene product, functional characterization, literature reference, and person/organization contact information are among the datatypes stored at MaizeGDB. At the project’s website...
23 CFR 973.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS MANAGEMENT... system; (2) A process to operate and maintain the management systems and their associated databases; (3... systems shall use databases with a common or coordinated reference system that can be used to geolocate...
23 CFR 973.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS MANAGEMENT... system; (2) A process to operate and maintain the management systems and their associated databases; (3... systems shall use databases with a common or coordinated reference system that can be used to geolocate...
RREL TREATABILITY DATABASE - VERSION 5.0
There is no abstract available for this product. If further information is requested, please refer to the bibliographic citation and contact the person listed under Contact field. This database can be obtained by contacting Tom Holdsworth, U.S. EPA, 26 West Martin Luther King D...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kautsky, Mark; Findlay, Richard C.; Hodges, Rex A.
2013-07-01
Managing technical references for projects that have long histories is hampered by the large collection of documents, each of which might contain discrete pieces of information relevant to the site conceptual model. A database application has been designed to improve the efficiency of retrieving technical information for a project. Although many databases are currently used for accessing analytical and geo-referenced data, applications designed specifically to manage technical reference material for projects are scarce. Retrieving site data from the array of available references becomes an increasingly inefficient use of labor. The electronic-Knowledge Information Tool (e-KIT) is designed as a project-level resource to access and communicate technical information. The e-KIT is a living tool that grows as new information becomes available, and its value to the project increases as the volume of site information increases. Having all references assembled in one location with complete reference citations and links to elements of the site conceptual model offers a way to enhance communication with outside groups. The published and unpublished references are incorporated into the e-KIT, while the compendium of references serves as a complete bibliography for the project. (authors)
Networking consumer health information: bringing the patient into the medical information loop.
Martin, E R; Lanier, D
1996-04-01
The Library of the Health Sciences at the University of Illinois at Chicago obtained a grant from the Illinois State Library to implement a statewide demonstration project that would provide consumer health information (CHI) using InfoTrac's Health Reference Center CD-ROM database. The goals of the project were to cooperate with targeted public libraries and clinics in providing CHI at the earliest point of need; to provide access to the database via a dial-up network server and a toll-free telephone number; and to work with targeted sites on database training, core CHI reference sources, and referral procedures. This paper provides background information about the project; describes the major systems and technical issues encountered; and discusses the outcomes, impact, and envisioned enhancements.
Intrusive Rock Database for the Digital Geologic Map of Utah
Nutt, C.J.; Ludington, Steve
2003-01-01
Digital geologic maps offer the promise of rapid and powerful answers to geologic questions using Geographic Information System software (GIS). Using modern GIS and database methods, a specialized derivative map can be easily prepared. An important limitation can be shortcomings in the information provided in the database associated with the digital map, a database which is often based on the legend of the original map. The purpose of this report is to show how the compilation of additional information can, when prepared as a database that can be used with the digital map, be used to create some types of derivative maps that are not possible with the original digital map and database. This Open-file Report consists of computer files with information about intrusive rocks in Utah that can be linked to the Digital Geologic Map of Utah (Hintze et al., 2000), an explanation of how to link the databases and map, and a list of references for the databases. The digital map, which represents the 1:500,000-scale Geologic Map of Utah (Hintze, 1980), can be obtained from the Utah Geological Survey (Map 179DM). Each polygon in the map has a unique identification number. We selected the polygons identified on the geologic map as intrusive rock, and constructed a database (UT_PLUT.xls) that classifies the polygons into plutonic map units (see tables). These plutonic map units are the key information that is used to relate the compiled information to the polygons on the map. The map includes a few polygons that were coded as intrusive on the state map but are largely volcanic rock; in these cases we note the volcanic rock names (rhyolite and latite) as used in the original sources. Some polygons identified on the digital state map as intrusive rock were misidentified; these polygons are noted in a separate table of the database, along with some information about their true character.
Fields may be empty because of lack of information from references used or difficulty in finding information. The information in the database is from a variety of sources, including geologic maps at scales ranging from 1:500,000 to 1:24,000, and thesis monographs. The references are shown twice: alphabetically and by region. The digital geologic map of Utah (Hintze and others, 2000) classifies intrusive rocks into only 3 categories, distinguished by age. They are: Ti, Tertiary intrusive rock; Ji, Upper to Middle Jurassic granite to quartz monzonite; and pCi, Early Proterozoic to Late Archean intrusive rock. Use of the tables provided in this report will permit selection and classification of those rocks by lithology and age. This database is a pilot study by the Survey and Analysis Project of the U.S. Geological Survey to characterize igneous rocks and link them to a digital map. The database, and others like it, will evolve as the project continues and other states are completed. We release this version now as an example, as a reference, and for those interested in Utah plutonic rocks.
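The linking described above, a unique polygon ID on the map related to compiled attributes through the plutonic map unit, can be sketched as a simple join. Table, column, and unit names below are illustrative, not the report's actual schema:

```python
import sqlite3

# In-memory sketch: the digital map's polygon attribute table and the
# report's compiled pluton table share the plutonic map-unit key.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE polygons (poly_id INTEGER, state_unit TEXT, map_unit TEXT);
CREATE TABLE plutons  (map_unit TEXT, lithology TEXT, age_ma REAL);
""")
con.executemany("INSERT INTO polygons VALUES (?, ?, ?)", [
    (101, "Ti", "Unit A"), (102, "Ji", "Unit B"), (103, "Ti", "Unit A"),
])
con.executemany("INSERT INTO plutons VALUES (?, ?, ?)", [
    ("Unit A", "monzonite", 38.0), ("Unit B", "granite", 169.0),
])

# A derivative "granites only" selection that the state map's three
# age-based categories alone could not produce:
granites = con.execute("""
    SELECT p.poly_id FROM polygons p
    JOIN plutons u ON p.map_unit = u.map_unit
    WHERE u.lithology = 'granite'
""").fetchall()
```

Joining on the map-unit key is what lets a lithology- or age-based derivative map be drawn from polygons the original legend lumped into three categories.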
NASA Astrophysics Data System (ADS)
Aschonitis, Vassilis G.; Papamichail, Dimitris; Demertzi, Kleoniki; Colombani, Nicolo; Mastrocicco, Micol; Ghirardini, Andrea; Castaldelli, Giuseppe; Fano, Elisa-Anna
2017-08-01
The objective of the study is to provide global grids (0.5°) of revised annual coefficients for the Priestley-Taylor (P-T) and Hargreaves-Samani (H-S) evapotranspiration methods after calibration based on the ASCE (American Society of Civil Engineers)-standardized Penman-Monteith method (the ASCE method includes two reference crops: short-clipped grass and tall alfalfa). The analysis also includes the development of a global grid of revised annual coefficients for solar radiation (Rs) estimations using the respective Rs formula of H-S. The analysis was based on global gridded climatic data of the period 1950-2000. The method for deriving annual coefficients of the P-T and H-S methods was based on partial weighted averages (PWAs) of their mean monthly values. This method estimates the annual values considering the amplitude of the parameter under investigation (ETo and Rs) giving more weight to the monthly coefficients of the months with higher ETo values (or Rs values for the case of the H-S radiation formula). The method also eliminates the effect of unreasonably high or low monthly coefficients that may occur during periods where ETo and Rs fall below a specific threshold. The new coefficients were validated based on data from 140 stations located in various climatic zones of the USA and Australia with expanded observations up to 2016. The validation procedure for ETo estimations of the short reference crop showed that the P-T and H-S methods with the new revised coefficients outperformed the standard methods reducing the estimated root mean square error (RMSE) in ETo values by 40 and 25 %, respectively. The estimations of Rs using the H-S formula with revised coefficients reduced the RMSE by 28 % in comparison to the standard H-S formula. 
Finally, a raster database was built consisting of (a) global maps for the mean monthly ETo values estimated by ASCE-standardized method for both reference crops, (b) global maps for the revised annual coefficients of the P-T and H-S evapotranspiration methods for both reference crops and a global map for the revised annual coefficient of the H-S radiation formula and (c) global maps that indicate the optimum locations for using the standard P-T and H-S methods and their possible annual errors based on reference values. The database can support estimations of ETo and solar radiation for locations where climatic data are limited and it can support studies which require such estimations on larger scales (e.g. country, continent, world). The datasets produced in this study are archived in the PANGAEA database (https://doi.org/10.1594/PANGAEA.868808) and in the ESRN database (http://www.esrn-database.org or http://esrn-database.weebly.com).
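The partial-weighted-average idea can be sketched directly: weight each monthly coefficient by that month's ETo, and drop months whose ETo falls below a threshold. The function name, sample values, and threshold below are illustrative:

```python
def partial_weighted_average(monthly_coef, monthly_eto, threshold=0.0):
    """Annual coefficient as an ETo-weighted mean of monthly coefficients.
    Months whose ETo is below `threshold` are excluded, which removes the
    unreasonably high or low coefficients that low-ETo months can produce."""
    pairs = [(c, w) for c, w in zip(monthly_coef, monthly_eto) if w >= threshold]
    total = sum(w for _, w in pairs)
    return sum(c * w for c, w in pairs) / total

# Illustrative monthly coefficients and ETo (mm/month) for a mid-latitude site:
coef = [1.9, 1.8, 1.4, 1.3, 1.25, 1.2, 1.2, 1.25, 1.3, 1.4, 1.7, 1.9]
eto = [20, 25, 60, 90, 130, 160, 170, 150, 100, 60, 30, 20]
annual = partial_weighted_average(coef, eto, threshold=30)
```

Because the high-ETo summer months dominate the weights, the annual value sits well below a plain 12-month mean, which is exactly the behaviour the PWA scheme is after.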
Entomopathogen ID: a curated sequence resource for entomopathogenic fungi
USDA-ARS?s Scientific Manuscript database
We report the development of a publicly accessible, curated database of Hypocrealean entomopathogenic fungi sequence data. The goal is to provide a platform for users to easily access sequence data from reference strains. The database can be used to accurately identify unknown entomopathogenic fungi...
Reach for Reference. Science Online
ERIC Educational Resources Information Center
Safford, Barbara Ripp
2004-01-01
This brief article describes the database, Science Online, from Facts on File. Science is defined broadly in this database to include archeology, computer technology, medicine, inventions, and mathematics, as well as biology, chemistry, earth sciences, and astronomy. Content also is divided into format categories for browsing purposes:…
USDA-ARS?s Scientific Manuscript database
The Maize Database (MaizeDB) to the Maize Genetics and Genomics Database (MaizeGDB) turns 20 this year, and such a significant milestone must be celebrated! With the release of the B73 reference sequence and more sequenced genomes on the way, the maize community needs to address various opportunitie...
Code of Federal Regulations, 2014 CFR
2014-01-01
... CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.2 Purpose. This part sets... maintenance of a Publicly Available Consumer Product Safety Information Database (also referred to as the “Database”) on the safety of consumer products and other products or substances regulated by the Commission. ...
Code of Federal Regulations, 2012 CFR
2012-01-01
... CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.2 Purpose. This part sets... maintenance of a Publicly Available Consumer Product Safety Information Database (also referred to as the “Database”) on the safety of consumer products and other products or substances regulated by the Commission. ...
National Nutrient Database for Standard Reference - Find Nutrient Value of Common Foods by Nutrient
National Institute of Standards and Technology Data Gateway
SRD 115 Hydrocarbon Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.
National Institute of Standards and Technology Data Gateway
SRD 114 Diatomic Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 121 diatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty, and reference are given for each transition reported.
National Institute of Standards and Technology Data Gateway
SRD 117 Triatomic Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 55 triatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.
47 CFR 64.623 - Administrator requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... parties with a vested interest in the outcome of TRS-related numbering administration and activities. (4) None of the administrator of the TRS User Registration Database, the administrator of the VRS Access...
47 CFR 64.623 - Administrator requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... parties with a vested interest in the outcome of TRS-related numbering administration and activities. (4) None of the administrator of the TRS User Registration Database, the administrator of the VRS Access...
The EBI SRS server-new features.
Zdobnov, Evgeny M; Lopez, Rodrigo; Apweiler, Rolf; Etzold, Thure
2002-08-01
Here we report on recent developments at the EBI SRS server (http://srs.ebi.ac.uk). SRS has become an integration system for both data retrieval and sequence analysis applications. The EBI SRS server is a primary gateway to major databases in the field of molecular biology produced and supported at EBI, as well as the European public access point to the MEDLINE database provided by the US National Library of Medicine (NLM). It is a reference server for the latest developments in data and application integration. The new additions include: the concept of virtual databases, integration of XML databases like the Integrated Resource of Protein Domains and Functional Sites (InterPro), Gene Ontology (GO), MEDLINE, Metabolic pathways, etc., user-friendly data representation in 'Nice views', and SRSQuickSearch bookmarklets. SRS6 is a licensed product of LION Bioscience AG freely available for academics. The EBI SRS server (http://srs.ebi.ac.uk) is a free central resource for molecular biology data as well as a reference server for the latest developments in data integration.
Injury profiles related to mortality in patients with a low Injury Severity Score: a case-mix issue?
Joosse, Pieter; Schep, Niels W L; Goslings, J Carel
2012-07-01
Outcome prediction models are widely used to evaluate trauma care. External benchmarking provides individual institutions with a tool to compare survival with a reference dataset. However, these models do have limitations. In this study, the hypothesis was tested whether specific injuries are associated with increased mortality and whether differences in case-mix of these injuries influence outcome comparison. A retrospective study was conducted in a Dutch trauma region. Injury profiles, based on the injuries most frequently sustained by patients who died unexpectedly, were determined. The association between these injury profiles and mortality was studied in patients with a low Injury Severity Score by logistic regression. The standardized survival of our population (Ws statistic) was compared with North-American and British reference databases, with and without patients suffering from previously defined injury profiles. In total, 14,811 patients were included. Hip fractures, minor pelvic fractures, femur fractures, and minor thoracic injuries were significantly associated with mortality corrected for age, sex, and physiologic derangement in patients with a low injury severity. Odds ratios ranged from 2.42 to 2.92. The Ws statistic for comparison with North-American databases significantly improved after exclusion of patients with these injuries. The Ws statistic for comparison with a British reference database remained unchanged. Hip fractures, minor pelvic fractures, femur fractures, and minor thoracic wall injuries are associated with increased mortality. Comparative outcome analysis of a population with a reference database that differs in case-mix with respect to these injuries should be interpreted cautiously. Prognostic study, level II.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS... public reference room. Copies of information contained in a filer's on-line tariff database may be...
Code of Federal Regulations, 2014 CFR
2014-01-01
... OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS... public reference room. Copies of information contained in a filer's on-line tariff database may be...
Code of Federal Regulations, 2012 CFR
2012-01-01
... OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS... public reference room. Copies of information contained in a filer's on-line tariff database may be...
Code of Federal Regulations, 2013 CFR
2013-01-01
... OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS... public reference room. Copies of information contained in a filer's on-line tariff database may be...
The Computerized Reference Department: Buying the Future.
ERIC Educational Resources Information Center
Kriz, Harry M.; Kok, Victoria T.
1985-01-01
Basis for systematic computerization of academic research library's reference, collection development, and collection management functions emphasizes productivity enhancement for librarians and support staff. Use of microcomputer and university's mainframe computer to develop applications of database management systems, electronic spreadsheets,…
The LANL hemorrhagic fever virus database, a new platform for analyzing biothreat viruses
Kuiken, Carla; Thurmond, Jim; Dimitrijevic, Mira; Yoon, Hyejin
2012-01-01
Hemorrhagic fever viruses (HFVs) are a diverse set of over 80 viral species, found in 10 different genera comprising five different families: arena-, bunya-, flavi-, filo- and togaviridae. All these viruses are highly variable and evolve rapidly, making them elusive targets for the immune system and for vaccine and drug design. About 55 000 HFV sequences exist in the public domain today. A central website that provides annotated sequences and analysis tools will be helpful to HFV researchers worldwide. The HFV sequence database collects and stores sequence data and provides a user-friendly search interface and a large number of sequence analysis tools, following the model of the highly regarded and widely used Los Alamos HIV database [Kuiken, C., B. Korber, and R.W. Shafer, HIV sequence databases. AIDS Rev, 2003. 5: p. 52–61]. The database uses an algorithm that aligns each sequence to a species-wide reference sequence. The NCBI RefSeq database [Sayers et al. (2011) Database resources of the National Center for Biotechnology Information. Nucleic Acids Res., 39, D38–D51.] is used for this; if a reference sequence is not available, a Blast search finds the best candidate. Using this method, sequences in each genus can be retrieved pre-aligned. The HFV website can be accessed via http://hfv.lanl.gov. PMID:22064861
The Universal Protein Resource (UniProt): an expanding universe of protein information.
Wu, Cathy H; Apweiler, Rolf; Bairoch, Amos; Natale, Darren A; Barker, Winona C; Boeckmann, Brigitte; Ferro, Serenella; Gasteiger, Elisabeth; Huang, Hongzhan; Lopez, Rodrigo; Magrane, Michele; Martin, Maria J; Mazumder, Raja; O'Donovan, Claire; Redaschi, Nicole; Suzek, Baris
2006-01-01
The Universal Protein Resource (UniProt) provides a central resource on protein sequences and functional annotation with three database components, each addressing a key need in protein bioinformatics. The UniProt Knowledgebase (UniProtKB), comprising the manually annotated UniProtKB/Swiss-Prot section and the automatically annotated UniProtKB/TrEMBL section, is the preeminent storehouse of protein annotation. The extensive cross-references, functional and feature annotations and literature-based evidence attribution enable scientists to analyse proteins and query across databases. The UniProt Reference Clusters (UniRef) speed similarity searches via sequence space compression by merging sequences that are 100% (UniRef100), 90% (UniRef90) or 50% (UniRef50) identical. Finally, the UniProt Archive (UniParc) stores all publicly available protein sequences, containing the history of sequence data with links to the source databases. UniProt databases continue to grow in size and in availability of information. Recent and upcoming changes to database contents, formats, controlled vocabularies and services are described. New download availability includes all major releases of UniProtKB, sequence collections by taxonomic division and complete proteomes. A bibliography mapping service has been added, and an ID mapping service will be available soon. UniProt databases can be accessed online at http://www.uniprot.org or downloaded at ftp://ftp.uniprot.org/pub/databases/.
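The UniRef-style compression can be sketched as a greedy clustering pass at a chosen identity threshold. The identity measure below is a crude positional match and the clustering is a simplification; UniRef's actual pipeline uses alignment-based identity and a CD-HIT-style algorithm:

```python
def identity(a, b):
    """Fraction of matching positions over the longer sequence
    (a crude stand-in for alignment-based sequence identity)."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def cluster(seqs, threshold):
    """Greedy clustering: longer sequences become representatives first;
    each remaining sequence joins the first representative it matches
    at or above the identity threshold, compressing the search space."""
    reps = []  # list of (representative, members)
    for s in sorted(seqs, key=len, reverse=True):
        for rep, members in reps:
            if identity(rep, s) >= threshold:
                members.append(s)
                break
        else:
            reps.append((s, [s]))
    return reps

seqs = ["MKVLAA", "MKVLAA", "MKVLTA", "MNPQRS"]
```

At threshold 1.0 (UniRef100-like) only exact duplicates merge; at 0.8 the near-identical variant also folds into the first cluster, so a similarity search needs to scan fewer representatives.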
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straume, T.; Ricker, Y.; Thut, M.
1988-08-29
This database was constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and keyworded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on publication number, authors, key words, title, year, and journal name. Photocopies of all publications contained in the database are maintained in a file that is numerically arranged by citation number. This report of the database is provided as a useful reference and overview. It should be emphasized that the database will grow as new citations are added to it. With that in mind, we arranged this report in order of ascending citation number so that follow-up reports will simply extend this document. The database cites 1212 publications. Publications are from 119 different scientific journals; 27 of these journals are cited at least 5 times. It also contains references to 42 books and published symposia, and 129 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed among the scientific literature, although a few journals clearly dominate. The four journals publishing the largest number of relevant papers are Health Physics, Mutation Research, Radiation Research, and International Journal of Radiation Biology. Publications in Health Physics make up almost 10% of the current database.
Effect of alcohol references in music on alcohol consumption in public drinking places.
Engels, Rutger C M E; Slettenhaar, Gert; ter Bogt, Tom; Scholte, Ron H J
2011-01-01
People are exposed to many references to alcohol, which might influence their consumption of alcohol directly. In a field experiment, we tested whether textual references to alcohol in music played in bars lead to higher revenues of alcoholic beverages. We created two databases: one contained songs referring to alcohol, the parallel database contained songs with matching artists, tempo, and energetic content, but no references to alcohol. Customers of three bars were exposed to either music textually referring to alcohol or to the control condition, resulting in 23 evenings in both conditions. Bartenders were instructed to play songs with references to alcohol (or not) during a period of 2 hours each of the evenings of interest. They were not blind to the experimental condition. The results showed that customers who were exposed to music with textual references to alcohol spent significantly more on alcoholic drinks compared to customers in the control condition. This pilot study provides preliminary evidence that alcohol-related lyrics directly affect alcohol consumption in public drinking places. Since our study is one of the first testing direct effects of music lyrics on consumption, our small-scale, preliminary study needs replication before firm conclusions can be drawn. Copyright © American Academy of Addiction Psychiatry.
Expert Systems for Libraries at SCIL [Small Computers in Libraries]'88.
ERIC Educational Resources Information Center
Kochtanek, Thomas R.; And Others
1988-01-01
Six brief papers on expert systems for libraries cover (1) a knowledge-based approach to database design; (2) getting started in expert systems; (3) using public domain software to develop a business reference system; (4) a music cataloging inquiry system; (5) linguistic analysis of reference transactions; and (6) a model of a reference librarian.…
Resource Delivery and Teaching in Live Chat Reference: Comparing Two Libraries
ERIC Educational Resources Information Center
Dempsey, Paula R.
2017-01-01
This study investigates how reference staff at two libraries balance teaching with resource delivery in live chat reference. Analysis of 410 transcripts from one week shows that one library tends to deliver more resources from a wider range of database suggestions, to take more time in chat interactions, and to incorporate more teaching behavior…
Riparian reference areas in Idaho: A catalog of plant associations and conservation sites
Mabel Jankovsky-Jones; Steven K. Rust; Robert K. Moseley
1999-01-01
Idaho land managers and regulators need knowledge on riparian reference sites. Reference sites are ecological controls that can be used to set meaningful management and regulatory goals. Since 1984, the Idaho Conservation Data Center, Boise, ID, has compiled information in a series of interrelated databases on the distribution and condition of riparian, wetland, and...
The STEP (Safety and Toxicity of Excipients for Paediatrics) database: part 2 - the pilot version.
Salunke, Smita; Brandys, Barbara; Giacoia, George; Tuleu, Catherine
2013-11-30
The screening and careful selection of excipients is a critical step in paediatric formulation development, as certain excipients acceptable in adult formulations may not be appropriate for paediatric use. While there is extensive toxicity data that could help in better understanding and highlighting the gaps in toxicity studies, the data are often scattered around the information sources and saddled with incompatible data types and formats. This paper is the second in a series that presents the update on the Safety and Toxicity of Excipients for Paediatrics ("STEP") database being developed by Eu-US PFIs, and describes the architecture, data fields and functions of the database. The STEP database is a user-designed resource that compiles the safety and toxicity data of excipients scattered over various sources and presents it in one freely accessible source. Currently, the pilot database contains data from over 2000 references covering 10 excipients, presenting preclinical, clinical, and regulatory information and toxicological reviews, with references and source links. The STEP database allows searching "FOR" excipients and "BY" excipients. This dual nature of the STEP database, in which toxicity and safety information can be searched in both directions, makes it unique among existing sources. If the pilot is successful, the aim is to increase the number of excipients in the existing database so that a database large enough to be of practical research use will be available. It is anticipated that this source will prove to be a useful platform for data management and data exchange of excipient safety information. Copyright © 2013 Elsevier B.V. All rights reserved.
Predicting the performance of fingerprint similarity searching.
Vogt, Martin; Bajorath, Jürgen
2011-01-01
Fingerprints are bit string representations of molecular structure that typically encode structural fragments, topological features, or pharmacophore patterns. Various fingerprint designs are utilized in virtual screening and their search performance essentially depends on three parameters: the nature of the fingerprint, the active compounds serving as reference molecules, and the composition of the screening database. It is of considerable interest and practical relevance to predict the performance of fingerprint similarity searching. A quantitative assessment of the potential that a fingerprint search might successfully retrieve active compounds, if available in the screening database, would substantially help to select the type of fingerprint most suitable for a given search problem. The method presented herein utilizes concepts from information theory to relate the fingerprint feature distributions of reference compounds to screening libraries. If these feature distributions do not sufficiently differ, active database compounds that are similar to reference molecules cannot be retrieved because they disappear in the "background." By quantifying the difference in feature distribution using the Kullback-Leibler divergence and relating the divergence to compound recovery rates obtained for different benchmark classes, fingerprint search performance can be quantitatively predicted.
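The abstract's core idea, comparing fingerprint feature distributions of reference compounds against a screening library, can be sketched in a few lines. The fingerprints and compound sets below are hypothetical, and treating each fingerprint bit as an independent Bernoulli variable is a simplifying assumption of this sketch, not necessarily the authors' exact formulation.

```python
import math

def bit_frequencies(fps):
    """Per-bit set frequency across a collection of equal-length bit vectors."""
    n = len(fps)
    length = len(fps[0])
    return [sum(fp[i] for fp in fps) / n for i in range(length)]

def kl_divergence(p, q, eps=1e-6):
    """Summed Kullback-Leibler divergence, treating each fingerprint bit as
    an independent Bernoulli variable (a simplifying assumption)."""
    total = 0.0
    for pi, qi in zip(p, q):
        pi = min(max(pi, eps), 1 - eps)  # clamp to avoid log(0)
        qi = min(max(qi, eps), 1 - eps)
        total += pi * math.log(pi / qi) + (1 - pi) * math.log((1 - pi) / (1 - qi))
    return total

# Hypothetical 8-bit fingerprints: reference actives vs. screening database
actives  = [[1, 1, 0, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0, 1, 0], [1, 0, 0, 1, 0, 0, 1, 1]]
database = [[0, 1, 1, 0, 0, 1, 0, 0], [0, 0, 1, 0, 1, 1, 0, 0],
            [1, 0, 1, 0, 0, 1, 0, 1], [0, 1, 1, 0, 0, 1, 0, 0]]

d = kl_divergence(bit_frequencies(actives), bit_frequencies(database))
print(round(d, 3))  # larger divergence suggests actives stand out from the "background"
```

Per the abstract, a larger divergence between the two feature distributions would predict better compound recovery, since actives are less likely to disappear into the background of the screening library.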
Methods to Secure Databases Against Vulnerabilities
2015-12-01
for several languages such as C, C++, PHP, Java and Python [16]. MySQL will work well with very large databases. The documentation references...using Eclipse and connected to each database management system using Python and Java drivers provided by MySQL , MongoDB, and Datastax (for Cassandra...tiers in Python and Java . Problem MySQL MongoDB Cassandra 1. Injection a. Tautologies Vulnerable Vulnerable Not Vulnerable b. Illegal query
Acoustic Propagation Modeling in Shallow Water
1996-10-01
Oceanography La Jolla, California 92093-0701 (Received April 15, 1996) This paper provides references for the Navy’s existing databases . Various...a compilation of many aspects of high-frequency (OAML) contains a description of Navy models and acoustics. databases . The Navy’s use of shallow...become significant because the propagation path may involve many tens of bounces. A description of a reflectivity database is (b) Geometry for the
Numerical and Physical Aspects of Aerodynamic Flows
1992-01-15
accretion was also measured. detailed description of the IRT can be found in This test program also provided a new database for reference 4. code...Deflection lift flows and to develop a validation database 8 Slat Deflection with practical geometries/conditions for emerging computational methods. This...be substantially improved by their developers in the absence of a quality database at realistic conditions for a practical airfoil. The work reported
sc-PDB-Frag: a database of protein-ligand interaction patterns for Bioisosteric replacements.
Desaphy, Jérémy; Rognan, Didier
2014-07-28
Bioisosteric replacement plays an important role in medicinal chemistry by keeping the biological activity of a molecule while changing either its core scaffold or substituents, thereby facilitating lead optimization and patenting. Bioisosteres are classically chosen in order to keep the main pharmacophoric moieties of the substructure to replace. However, notably when changing a scaffold, no attention is usually paid as to whether all atoms of the reference scaffold are equally important for binding to the desired target. We herewith propose a novel database for bioisosteric replacement (sc-PDB-Frag), capitalizing on our recently published structure-based approach to scaffold hopping, focusing on interaction pattern graphs. Protein-bound ligands are first fragmented and the interactions of the corresponding fragments with their protein environment computed on the fly. Using an in-house developed graph alignment tool, interaction pattern graphs can be compared, aligned, and sorted by decreasing similarity to any reference. In the herein presented sc-PDB-Frag database (http://bioinfo-pharma.u-strasbg.fr/scPDBFrag), fragments, interaction patterns, alignments, and pairwise similarity scores have been extracted from the sc-PDB database of 8077 druggable protein-ligand complexes and stored in a relational database. We herewith present the database, its Web implementation, and procedures for identifying true bioisosteric replacements based on conserved interaction patterns.
GMDD: a database of GMO detection methods
Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing
2008-01-01
Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information for the harmonization and standardization of GMO analysis methods at the global level is needed. Results The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by different strategies (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to the database, and newly submitted information is released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier. PMID:18522755
An editor for pathway drawing and data visualization in the Biopathways Workbench.
Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar
2009-10-02
Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de-novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, and of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database (Pathway) as well as from the LIPID MAPS web server http://www.lipidmaps.org. Data arises from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and is arranged by experiment. Facility is provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time course data. Node and interaction layout as well as data display may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is provided in the form of read/construct/write access to models in SBML (Systems Biology Markup Language) contained in the local file system. 
Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.
Vocabulary Control and the Humanities: A Case Study of the "MLA International Bibliography."
ERIC Educational Resources Information Center
Stebelman, Scott
1994-01-01
Discussion of research in the humanities focuses on the "MLA International Bibliography," the primary database for literary research. Highlights include comparisons to research in the sciences; humanities vocabulary; database search techniques; contextual indexing; examples of searches; thesauri; and software. (43 references) (LRW)
A New Methodology for Systematic Exploitation of Technology Databases.
ERIC Educational Resources Information Center
Bedecarrax, Chantal; Huot, Charles
1994-01-01
Presents the theoretical aspects of a data analysis methodology that can help transform sequential raw data from a database into useful information, using the statistical analysis of patents as an example. Topics discussed include relational analysis and a technology watch approach. (Contains 17 references.) (LRW)
The Ins and Outs of USDA Nutrient Composition
USDA-ARS?s Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference (SR) is the major source of food composition data in the United States, providing the foundation for most food composition databases in the public and private sectors. Sources of data used in SR include analytical studies, food manufacturer...
Online Sources of Competitive Intelligence.
ERIC Educational Resources Information Center
Wagers, Robert
1986-01-01
Presents an approach to using online sources of information for competitor intelligence (i.e., monitoring industry and tracking activities of competitors); identifies principal sources; and suggests some ways of making use of online databases. Types and sources of information and sources and database charts are appended. Eight references are…
ERIC Educational Resources Information Center
Antonucci, Yvonne Lederer; Wozny, Lucy Anne
1996-01-01
Identifies and describes sublevels of novices using a database management package, clustering those whose interaction is effective, partially effective, and totally ineffective. Among assistance documentation, functional tree diagrams (FTDs) were more beneficial to partially effective users than traditional reference material. The results have…
The multiple personalities of Watson and Crick strands.
Cartwright, Reed A; Graur, Dan
2011-02-08
In genetics it is customary to refer to double-stranded DNA as containing a "Watson strand" and a "Crick strand." However, there seems to be no consensus in the literature on the exact meaning of these two terms, and the many usages contradict one another as well as the original definition. Here, we review the history of the terminology and suggest retaining a single sense that is currently the most useful and consistent. The Saccharomyces Genome Database defines the Watson strand as the strand which has its 5'-end at the short-arm telomere and the Crick strand as its complement. The Watson strand is always used as the reference strand in their database. Using this as the basis of our standard, we recommend that Watson and Crick strand terminology only be used in the context of genomics. When possible, the centromere or other genomic feature should be used as a reference point, dividing the chromosome into two arms of unequal lengths. Under our proposal, the Watson strand is standardized as the strand whose 5'-end is on the short arm of the chromosome, and the Crick strand as the one whose 5'-end is on the long arm. Furthermore, the Watson strand should be retained as the reference (plus) strand in a genomic database. This usage not only makes the determination of Watson and Crick unambiguous, but also allows unambiguous selection of reference strands for genomics. This article was reviewed by John M. Logsdon, Igor B. Rogozin (nominated by Andrey Rzhetsky), and William Martin.
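The proposed convention is mechanical enough to express as code. A minimal sketch, assuming plus-strand coordinates and a known centromere position; the function names are illustrative, not from any genomics library, and the equal-arms case is left out (the proposal divides the chromosome into arms of unequal lengths):

```python
def watson_is_plus(centromere_pos, chrom_length):
    """Under the proposed convention, the Watson strand is the one whose
    5'-end lies on the short arm. The plus strand's 5'-end is at position 0,
    so the plus strand is Watson exactly when the left arm is the short arm."""
    left_arm = centromere_pos
    right_arm = chrom_length - centromere_pos
    return left_arm < right_arm

def label_strands(centromere_pos, chrom_length):
    """Return (watson, crick) as '+'/'-' labels for a chromosome described
    in plus-strand coordinates."""
    return ('+', '-') if watson_is_plus(centromere_pos, chrom_length) else ('-', '+')

# A chromosome of length 100 with the centromere at position 30:
# the left arm (30) is shorter, so the plus strand is the Watson strand.
print(label_strands(30, 100))  # ('+', '-')
print(label_strands(80, 100))  # ('-', '+')
```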
Protocol for the E-Area Low Level Waste Facility Disposal Limits Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swingle, R
2006-01-31
A database has been developed to contain the disposal limits for the E-Area Low Level Waste Facility (ELLWF). This database originates in the form of an EXCEL® workbook. The pertinent sheets are translated to PDF format using Adobe ACROBAT®. The PDF version of the database is accessible from the Solid Waste Division web page on SHRINE. In addition to containing the various disposal unit limits, the database also contains hyperlinks to the original references for all limits. It is anticipated that the database will be revised each time there is an addition, deletion, or revision of any of the ELLWF radionuclide disposal limits.
Joseph, Gabby B.; McCulloch, Charles E.; Nevitt, Michael C.; Heilmeier, Ursula; Nardo, Lorenzo; Lynch, John A.; Liu, Felix; Baum, Thomas; Link, Thomas M.
2015-01-01
Objective The purpose of this study was 1) to establish a gender- and BMI-specific reference database of cartilage T2 values, and 2) to assess the associations between cartilage T2 values and gender, age, and BMI in knees without radiographic osteoarthritis or MRI-based (WORMS 0/1) evidence of cartilage degeneration. Design 481 subjects between the ages of 45-65 years with Kellgren-Lawrence Scores 0/1 in the study knee were selected from the Osteoarthritis Initiative database. Baseline morphologic cartilage 3T MRI readings (WORMS scoring) and T2 measurements (resolution = 0.313 mm x 0.446 mm) were performed in the medial femur, lateral femur, medial tibia, lateral tibia, and patella compartments. In order to create a reference database, a logarithmic transformation was applied to the data to obtain the 5th-95th percentile values for T2. Results Significant differences in mean cartilage T2 values were observed between joint compartments. Although females had slightly higher T2 values than males in a majority of compartments, the differences were only significant in the medial femur (p<0.0001). A weak positive association was seen between age and T2 in all compartments, and was most pronounced in the patella (3.27% increase in median T2 per 10 years, p=0.009). Significant associations between BMI and T2 were observed, and were most pronounced in the lateral tibia (5.33% increase in median T2 per 5 kg/m2 increase in BMI, p<0.0001), and medial tibia (4.81% increase in median T2 per 5 kg/m2 increase in BMI, p<0.0001). Conclusions This study established the first reference database of T2 values in a large sample of morphologically normal cartilage plates in knees without radiographic knee osteoarthritis. While cartilage T2 values were weakly associated with age and gender, they had the highest correlations with BMI. PMID:25680652
Rice proteome database: a step toward functional analysis of the rice genome.
Komatsu, Setsuko
2005-09-01
The technique of proteome analysis using two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) has the power to monitor global changes that occur in the protein complement of tissues and subcellular compartments. In this study, the proteins of rice were cataloged, a rice proteome database was constructed, and a functional characterization of some of the identified proteins was undertaken. Proteins extracted from various tissues and subcellular compartments in rice were separated by 2D-PAGE and an image analyzer was used to construct a display of the proteins. The Rice Proteome Database contains 23 reference maps based on 2D-PAGE of proteins from various rice tissues and subcellular compartments. These reference maps comprise 13129 identified proteins, and the amino acid sequences of 5092 proteins are entered in the database. Major proteins involved in growth or stress responses were identified using the proteome approach. Some of these proteins, including a beta-tubulin, calreticulin, and ribulose-1,5-bisphosphate carboxylase/oxygenase activase in rice, have unexpected functions. The information obtained from the Rice Proteome Database will aid in cloning the genes for and predicting the function of unknown proteins.
Prolog as a Teaching Tool for Relational Database Interrogation.
ERIC Educational Resources Information Center
Collier, P. A.; Samson, W. B.
1982-01-01
The use of the Prolog programming language is promoted for anyone teaching a course on relational databases. A short introduction to Prolog is followed by a series of example queries. Several references are noted for anyone wishing to gain a deeper understanding. (MP)
Thematic Accuracy Assessment of the 2011 National Land Cover Database (NLCD)
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment o...
Administrative Information Systems: The 1980 Profile. CAUSE Monograph Series.
ERIC Educational Resources Information Center
Thomas, Charles R.
The first summaries of the CAUSE National Database, which was established in 1980, are presented. The database is updated annually to provide members with baseline reference information on the status of administrative information systems in colleges and universities. Information is based on responses from 350 CAUSE member campuses, which are…
Update of NDL’s list of key foods based on the 2007-2008 WWEIA-NHANES
USDA-ARS?s Scientific Manuscript database
The Nutrient Data Laboratory is responsible for developing authoritative nutrient databases that contain a wide range of food composition values of the nation's food supply. This requires updating and revising the USDA Nutrient Database for Standard Reference (SR) and developing various special int...
Browsing a Database of Multimedia Learning Material.
ERIC Educational Resources Information Center
Persico, Donatella; And Others
1992-01-01
Describes a project that addressed the problem of courseware reusability by developing a database structure suitable for organizing multimedia learning material in a given content domain. A prototype system that allows browsing a DBLM (Data Base of Learning Material) on earth science is described, and future plans are discussed. (five references)…
CARD 2017: expansion and model-centric curation of the Comprehensive Antibiotic Resistance Database
USDA-ARS?s Scientific Manuscript database
The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins, and mutations involved in AMR. CARD is ontologi...
SPIRES Tailored to a Special Library: A Mainframe Answer for a Small Online Catalog.
ERIC Educational Resources Information Center
Newton, Mary
1989-01-01
Describes the design and functions of a technical library database maintained on a mainframe computer and supported by the SPIRES database management system. The topics covered include record structures, vocabulary control, input procedures, searching features, time considerations, and cost effectiveness. (three references) (CLB)
Utilizing the Web in the Classroom: Linking Student Scientists with Professional Data.
ERIC Educational Resources Information Center
Seitz, Kristine; Leake, Devin
1999-01-01
Describes how information gathered from a computer database can be used as a springboard to scientific discovery. Specifies directions for studying the homeobox gene PAX-6 using GenBank, a database maintained by the National Center for Biotechnology Information (NCBI). Contains 16 references. (WRM)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-08
... support of the scholarship application by academic professors/advisors. NOAA OEd student scholar alumni... are required to update the student tracker database with the required student information. In addition... System database form, 17 hours; undergraduate application form, 8 hours; reference forms, 1 hour; alumni...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-09
... form in support of the scholarship application by academic professors/advisors. NOAA OEd student... grantees are required to update the student tracker database with the required student information. In... Tracking System database form, 17 hours; undergraduate application form, 8 hours; reference forms, 1 hour...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-22
... completion of a NOAA student scholar reference form in support of the scholarship application by academic... internal tracking purposes. NOAA OEd grantees are required to update the student tracker database with the... tracker database form, 16 hours; graduate application form, 8 hours; undergraduate application form, 8...
EPAs ToxCast Program: From Research to Application
A New Paradigm for Toxicity Testing in the 21st Century. In FY 2009, EPA published the toxicity reference database ToxRefDB, which contains results of over 30 years and $2B worth of animal studies for over 400 chemicals. This database is available on EPA’s website, and increases...
ERIC Educational Resources Information Center
Jensen, Chad D.; Cushing, Christopher C.; Aylward, Brandon S.; Craig, James T.; Sorell, Danielle M.; Steele, Ric G.
2011-01-01
Objective: This study was designed to quantitatively evaluate the effectiveness of motivational interviewing (MI) interventions for adolescent substance use behavior change. Method: Literature searches of electronic databases were undertaken in addition to manual reference searches of identified review articles. Databases searched include…
Diabetic retinopathy screening using deep neural network.
Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A
2017-09-07
There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% on both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
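The reported metrics, sensitivity and specificity at an operating threshold plus area under the ROC curve, can be computed directly from raw classifier scores. The scores and labels below are hypothetical; this is an illustration of the metrics themselves, not of the study's screening pipeline:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity and specificity when 'score >= threshold' means referable.
    labels: 1 = referable diabetic retinopathy, 0 = non-referable."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a random positive scores above a random negative,
    counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical network outputs (probability of referable disease) and truth
scores = [0.95, 0.80, 0.70, 0.40, 0.30, 0.20, 0.10, 0.60]
labels = [1,    1,    1,    1,    0,    0,    0,    0]

sens, spec = sensitivity_specificity(scores, labels, 0.5)
print(sens, spec)           # 0.75 0.75
print(auc(scores, labels))  # 0.9375
```

Sweeping the threshold trades sensitivity against specificity; the ROC curve in the abstract summarizes that trade-off, and the AUC condenses it to a single number.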
Forster, Samuel C; Browne, Hilary P; Kumar, Nitin; Hunt, Martin; Denise, Hubert; Mitchell, Alex; Finn, Robert D; Lawley, Trevor D
2016-01-04
The Human Pan-Microbe Communities (HPMC) database (http://www.hpmcd.org/) provides a manually curated, searchable, metagenomic resource to facilitate investigation of human gastrointestinal microbiota. Over the past decade, the application of metagenome sequencing to elucidate the microbial composition and functional capacity present in the human microbiome has revolutionized many concepts in our basic biology. When sufficient high quality reference genomes are available, whole genome metagenomic sequencing can provide direct biological insights and high-resolution classification. The HPMC database provides species level, standardized phylogenetic classification of over 1800 human gastrointestinal metagenomic samples. This is achieved by combining a manually curated list of bacterial genomes from human faecal samples with over 21000 additional reference genomes representing bacteria, viruses, archaea and fungi with manually curated species classification and enhanced sample metadata annotation. A user-friendly, web-based interface provides the ability to search for (i) microbial groups associated with health or disease state, (ii) health or disease states and community structure associated with a microbial group, (iii) the enrichment of a microbial gene or sequence and (iv) enrichment of a functional annotation. The HPMC database enables detailed analysis of human microbial communities and supports research from basic microbiology and immunology to therapeutic development in human health and disease. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
2016-05-04
IMESA) Access to Criminal Justice Information (CJI) and Terrorist Screening Databases (TSDB) References: See Enclosure 1 1. PURPOSE. In...CJI database mirror image files. (3) Memorandums of understanding with the FBI CJIS as the data broker for DoD organizations that need access ...not for access determinations. (3) Legal restrictions established by the Sex Offender Registration and Notification Act (SORNA) jurisdictions on
SSME environment database development
NASA Technical Reports Server (NTRS)
Reardon, John
1987-01-01
The internal environment of the Space Shuttle Main Engine (SSME) is being determined from hot firings of the prototype engines and from model tests using either air or water as the test fluid. The objectives are to develop a database system to facilitate management and analysis of test measurements and results, to enter available data into the database, and to analyze available data to establish conventions and procedures to provide consistency in data normalization and configuration geometry references.
Object recognition for autonomous robot utilizing distributed knowledge database
NASA Astrophysics Data System (ADS)
Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji
2003-10-01
In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch, and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles objects to understand them, including their physical characteristics.
Inclusive Schools. Topical Bibliography on Inclusive Schools.
ERIC Educational Resources Information Center
Sorenson, Barbara, Comp.; Drill, Janet, Comp.
This abstract bibliography of approximately 200 references looks at various aspects of inclusive schools. References are a result of computer searches of three databases: the Educational Resources Information Center (ERIC), Exceptional Child Education Resources, and the Western Regional Resources Center. Preliminary information includes directions…
Selected Reference Books of 1993-1994.
ERIC Educational Resources Information Center
McIlvaine, Eileen
1994-01-01
Offers brief, critical reviews of recent scholarly and general works of interest to reference workers in university libraries. Titles covered include dictionaries, databases, religion, literature, music, dance, art and architecture, business, political science, social issues, and history. Brief descriptions of new editions and supplements for…
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1992-01-01
One of the biggest problems facing NASA today is providing scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
Database systems for knowledge-based discovery.
Jagarlapudi, Sarma A R P; Kishan, K V Radha
2009-01-01
Several database systems have been developed to provide valuable information, from the bench chemist to the biologist and from the medical practitioner to the pharmaceutical scientist, in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database, where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching, and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity, so that structured databases containing vast data can be used in several areas of research. These databases are classified as reference-centric or compound-centric, depending on how the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by their digital content, and information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for a better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
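The ranking step, fusing per-attribute degrees of match (possibly with missing attributes) into one relevance score per reference document, can be illustrated with a deliberately simplified confidence-weighted average standing in for the paper's Bayesian-network and Dezert-Smarandache fusion. All document names, degrees, and weights below are hypothetical:

```python
def fuse(degrees, weights):
    """Weighted-average fusion of per-attribute degrees of match in [0, 1].
    A simplified stand-in for the paper's fusion methods: 'weights' encodes
    confidence in each source of information; None marks a missing attribute."""
    num = sum(w * d for w, d in zip(weights, degrees) if d is not None)
    den = sum(w for w, d in zip(weights, degrees) if d is not None)
    return num / den if den else 0.0

def rank(query_matches, weights):
    """Rank reference documents by decreasing fused relevance to the query."""
    fused = {doc: fuse(degrees, weights) for doc, degrees in query_matches.items()}
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical degrees of match for three reference documents over three
# attributes (two image features and one metadata field); None = missing.
matches = {
    "doc_A": [0.9, 0.8, 0.7],
    "doc_B": [0.6, None, 0.9],   # incomplete document
    "doc_C": [0.4, 0.5, 0.3],
}
weights = [0.5, 0.3, 0.2]  # more confidence in the first image feature

print(rank(matches, weights))  # ['doc_A', 'doc_B', 'doc_C']
```

In the paper itself, missing information is recovered by a Bayesian network and the per-source confidence is handled inside the fusion process; the sketch only conveys the overall shape of degree-of-match fusion and ranking.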
The Geochemical Databases GEOROC and GeoReM - What's New?
NASA Astrophysics Data System (ADS)
Sarbas, B.; Jochum, K. P.; Nohl, U.; Weis, U.
2017-12-01
The geochemical databases GEOROC (http://georoc.mpch-mainz.gwdg.de) and GeoReM (http://georem.mpch-mainz.gwdg.de) are maintained by the Max Planck Institute for Chemistry in Mainz, Germany. Both online databases have become crucial tools for geoscientists from different research areas. They are regularly upgraded with new tools and new data from recent publications drawn from a wide range of international journals. GEOROC is a collection of published analyses of volcanic rocks and mantle xenoliths; recently, data for plutonic rocks have been added. The analyses include major and trace element concentrations, radiogenic and non-radiogenic isotope ratios, as well as analytical ages for whole rocks, glasses, minerals and inclusions. Samples come from eleven geological settings and span the whole geological age scale from Archean to Recent. Metadata include, among others, geographic location, rock class and rock type, geological age, degree of alteration, analytical method, laboratory, and reference. The GEOROC web page allows selection of samples by geological setting, geography, chemical criteria, rock or sample name, and bibliographic criteria. In addition, it provides a large number of precompiled files for individual locations, minerals and rock classes. GeoReM is a database collecting information about reference materials of geological and environmental interest, such as rock powders, synthetic and natural glasses, as well as mineral, isotopic, biological, river water and seawater reference materials. It contains published data and compilation values (major and trace element concentrations and mass fractions, radiogenic and stable isotope ratios). Metadata comprise, among others, uncertainty, analytical method and laboratory. Reference materials are important for calibration, method validation, quality control and establishing metrological traceability.
GeoReM offers six different search strategies: samples or materials (published values), samples (GeoReM preferred values), chemical criteria, chemical criteria based on bibliography, bibliography, as well as methods and institutions.
An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.
K, Manasa; Channappayya, Sumohana S
2016-06-01
We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, together with the correlation between λ_min of the reference and distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortion estimates are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL-PoliMI SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with the much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
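The local flow statistics named in this abstract can be sketched in a few lines. The patch representation, the covariance-based reading of λ_min, and the function names below are assumptions for illustration, not the authors' exact construction:

```python
import numpy as np

def patch_flow_stats(flow_patch):
    """Local statistics for one patch of optical-flow vectors.

    flow_patch: array of shape (N, 2) holding (u, v) flow vectors.
    Returns the mean magnitude, standard deviation of magnitude,
    coefficient of variation (CV), and the minimum eigenvalue of the
    2x2 flow covariance matrix (one plausible reading of lambda_min).
    """
    mag = np.linalg.norm(flow_patch, axis=1)
    mean, std = mag.mean(), mag.std()
    cv = std / (mean + 1e-12)            # guard against zero-motion patches
    lam_min = float(np.linalg.eigvalsh(np.cov(flow_patch.T)).min())
    return mean, std, cv, lam_min

def temporal_distortion(cv_ref, cv_dist):
    """Change in the CV of the distorted flow relative to the reference flow."""
    return abs(cv_dist - cv_ref)
```

A pristine, uniformly translating patch gives CV ≈ 0, so any distortion that scatters the flow magnitudes shows up directly as a CV change.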
Example of monitoring measurements in a virtual eye clinic using 'big data'.
Jones, Lee; Bryan, Susan R; Miranda, Marco A; Crabb, David P; Kotecha, Aachal
2017-10-26
To assess the equivalence of measurement outcomes between patients attending a standard glaucoma care service, where patients see an ophthalmologist in a face-to-face setting, and a glaucoma monitoring service (GMS). The average mean deviation (MD) measurements on the visual field (VF) test for 250 patients attending a GMS were compared with a 'big data' repository of patients attending a standard glaucoma care service (reference database). In addition, the speed of VF progression was compared between GMS patients and reference database patients. The reference database patients were used to create expected outcomes against which GMS patients could be compared. For GMS patients falling outside of the expected limits, further analysis was carried out on the clinical management decisions for these patients. The average MD of patients in the GMS ranged from +1.6 dB to -18.9 dB between two consecutive appointments at the clinic. In the first analysis, 12 (4.8%; 95% CI 2.5% to 8.2%) GMS patients scored outside the 90% expected values based on the reference database. In the second analysis, 1.9% (95% CI 0.4% to 5.4%) of GMS patients had VF changes outside of the expected 90% limits. Using 'big data' collected in the standard glaucoma care service, we found that patients attending a GMS have equivalent outcomes on the VF test. Our findings provide support for the implementation of virtual healthcare delivery in the hospital eye service. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Jekova, Irena; Krasteva, Vessela; Schmid, Ramun
2018-01-27
Human identification (ID) is a biometric task that compares a single input sample to many stored templates to identify an individual in a reference database. This paper presents the perspectives of personalized heartbeat patterns for reliable ECG-based identification. The investigation uses a database with 460 pairs of 12-lead resting electrocardiograms (ECG) of 10-s duration, recorded at time instants T1 and T2 > T1 + 1 year. Intra-subject long-term ECG stability and inter-subject variability of personalized PQRST (500 ms) and QRS (100 ms) patterns are quantified via cross-correlation, amplitude ratio and pattern matching between T1 and T2, using 7 features × 12 leads. Single- and multi-lead ID models are trained on the first 230 ECG pairs. Their validation on 10, 20, ... 230 reference subjects (RS) from the remaining 230 ECG pairs shows: (i) the two best single-lead ID models use lead II for a small population RS = (10-140), with identification accuracy AccID = (89.4-67.2)%, and aVF for a large population RS = (140-230), with AccID = (67.2-63.9)%; (ii) better performance of the 6-lead limb vs. the 6-lead chest ID model: (91.4-76.1)% vs. (90.9-70)% for RS = (10-230); (iii) best performance of the 12-lead ID model: (98.4-87.4)% for RS = (10-230). The tolerable reference database size, keeping AccID > 80%, is RS = 30 in the single-lead ID scenario (lead II); RS = 50 (6 chest leads); RS = 100 (6 limb leads); and RS > 230, the maximal population in this study (12-lead ECG).
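The comparison of a T1 and a T2 heartbeat pattern, and the template-matching ID step, can be sketched as below. These are illustrative versions of three of the quantities named in the abstract (cross-correlation, amplitude ratio, pattern matching); the paper's 7-feature, 12-lead construction and its classifier are not reproduced, and all function names are assumptions:

```python
import numpy as np

def pattern_features(pqrst_t1, pqrst_t2):
    """Compare one subject's heartbeat pattern recorded at T1 and T2.

    Returns a normalized cross-correlation, an amplitude ratio, and a
    simple mean-squared pattern-matching residual after mean removal.
    """
    a = pqrst_t1 - pqrst_t1.mean()
    b = pqrst_t2 - pqrst_t2.mean()
    xcorr = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    amp_ratio = float(np.ptp(pqrst_t1) / np.ptp(pqrst_t2))
    residual = float(np.mean((a - b) ** 2))
    return xcorr, amp_ratio, residual

def identify(sample, templates):
    """1-of-N identification: return the reference subject whose stored
    template correlates best with the input heartbeat pattern."""
    def xc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(templates, key=lambda sid: xc(sample, templates[sid]))
```

As the abstract's accuracy figures suggest, this kind of nearest-template matching degrades as the reference population grows, since more templates compete for the best correlation.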
Genome-wide network-based pathway analysis of CSF t-tau/Aβ1-42 ratio in the ADNI cohort.
Cong, Wang; Meng, Xianglian; Li, Jin; Zhang, Qiushi; Chen, Feng; Liu, Wenjie; Wang, Ying; Cheng, Sipu; Yao, Xiaohui; Yan, Jingwen; Kim, Sungeun; Saykin, Andrew J; Liang, Hong; Shen, Li
2017-05-30
The cerebrospinal fluid (CSF) levels of total tau (t-tau) and Aβ1-42 are potential early diagnostic markers for probable Alzheimer's disease (AD). The influence of genetic variation on these CSF biomarkers has been investigated in candidate and genome-wide association studies (GWAS). However, the investigation of statistically modest associations from GWAS in the context of biological networks is still an under-explored topic in AD studies. The main objective of this study is to gain further biological insights via the integration of statistical gene associations in AD with physical protein interaction networks. The CSF and genotyping data of 843 study subjects (199 CN, 85 SMC, 239 EMCI, 207 LMCI, 113 AD) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) were analyzed. PLINK was used to perform GWAS on the t-tau/Aβ1-42 ratio using quality-controlled genotype data, including 563,980 single nucleotide polymorphisms (SNPs), with age, sex and diagnosis as covariates. Gene-level p-values were obtained by VEGAS2. Genes with p-value ≤ 0.05 were mapped onto a protein-protein interaction (PPI) network (9,617 nodes, 39,240 edges, from the HPRD database). We integrated a consensus model strategy into the iPINBPA network analysis framework, and named it CM-iPINBPA. Four consensus modules (CMs) were discovered by CM-iPINBPA and were functionally annotated using the pathway analysis tool Enrichr. The intersection of the four CMs forms a common subnetwork of 29 genes, including those related to tau phosphorylation (GSK3B, SUMO1, AKAP5, CALM1 and DLG4), amyloid beta production (CASP8, PIK3R1, PPA1, PARP1, CSNK2A1, NGFR, and RHOA), and AD (BCL3, CFLAR, SMAD1, and HIF1A). This study coupled a consensus module (CM) strategy with the iPINBPA network analysis framework and applied it to the GWAS of the CSF t-tau/Aβ1-42 ratio in an AD study.
The genome-wide network analysis yielded 4 enriched CMs that share not only genes related to tau phosphorylation or amyloid beta production but also multiple genes enriching several KEGG pathways such as Alzheimer's disease, colorectal cancer, gliomas, renal cell carcinoma, Huntington's disease, and others. This study demonstrated that integration of gene-level associations with CMs could yield statistically significant findings to offer valuable biological insights (e.g., functional interaction among the protein products of these genes) and suggest high confidence candidates for subsequent analyses.
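The first step of this kind of network analysis, turning gene-level GWAS p-values into node weights and scoring candidate subnetworks, can be sketched as follows. The z-score transform and the sum/sqrt(size) module score are common choices in this family of methods; the exact iPINBPA weighting may differ, and the function names and the p ≤ 0.05 cutoff (taken from the abstract) are used here purely for illustration:

```python
import math
from statistics import NormalDist

def node_weights(gene_pvalues, alpha=0.05):
    """Map gene-level GWAS p-values to PPI-network node weights.

    Genes with p <= alpha are kept as nodes (as in the study); each
    weight is the one-sided z-score -Phi^{-1}(p), so smaller p-values
    give larger weights.
    """
    inv = NormalDist().inv_cdf
    return {g: -inv(p) for g, p in gene_pvalues.items() if p <= alpha}

def module_score(module_genes, weights):
    """Aggregate z-score of a candidate subnetwork: sum of member
    weights normalized by sqrt of module size."""
    zs = [weights[g] for g in module_genes if g in weights]
    return sum(zs) / math.sqrt(len(zs)) if zs else 0.0
```

Under this scoring, a module is rewarded for concentrating many moderately associated genes, which is exactly how statistically modest GWAS signals become detectable at the network level.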
Normand, A C; Packeu, A; Cassagne, C; Hendrickx, M; Ranque, S; Piarroux, R
2018-05-01
Conventional dermatophyte identification is based on morphological features. However, recent studies have proposed using the nucleotide sequence of the rRNA internal transcribed spacer (ITS) region as an identification barcode for all fungi, including dermatophytes. Several nucleotide databases are available to compare sequences and thus identify isolates; however, these databases often contain mislabeled sequences that impair sequence-based identification. We evaluated five of these databases on a clinical isolate panel. We selected 292 clinical dermatophyte strains that were prospectively subjected to ITS2 nucleotide sequence analysis. Sequences were analyzed against the databases, and the results were compared to clusters obtained via DNA alignment of sequence segments. The DNA tree served as the identification standard throughout the study. According to the ITS2 sequence identification, the majority of strains (255/292) belonged to the genus Trichophyton, mainly the T. rubrum complex (n = 184), T. interdigitale (n = 40), T. tonsurans (n = 26), and T. benhamiae (n = 5). Other genera included Microsporum (e.g., M. canis [n = 21] and M. audouinii [n = 10]), Nannizzia (N. gypsea [n = 3]), and Epidermophyton (n = 3). Species-level identification of T. rubrum complex isolates was an issue. Overall, ITS DNA sequencing is a reliable tool to identify dermatophyte species, provided that a comprehensive and correctly labeled database is consulted. Since many inaccurate identification results exist in the DNA databases used for this study, reference databases must be verified frequently and amended in line with current revisions of fungal taxonomy. Before a new species is described or a new DNA reference is added to the available databases, its position in the phylogenetic tree must be verified. Copyright © 2018 American Society for Microbiology.
Riviere, Guillaume; Klopp, Christophe; Ibouniyamine, Nabihoudine; Huvet, Arnaud; Boudry, Pierre; Favrel, Pascal
2015-12-02
The Pacific oyster, Crassostrea gigas, is one of the most important aquaculture shellfish resources worldwide. Important efforts have been undertaken towards a better knowledge of its genome and transcriptome, which now make C. gigas a model organism among lophotrochozoans, the under-described sister clade of ecdysozoans within the protostomes. These massive sequencing efforts offer the opportunity to assemble gene expression data and make such a resource accessible and exploitable for the scientific community. Therefore, we undertook this assembly into an up-to-date, publicly available transcriptome database: the GigaTON (Gigas TranscriptOme pipeliNe) database. We assembled 2,204 million sequences obtained from 114 publicly available RNA-seq libraries covering all embryo-larval development stages, adult organs, and different environmental stressors, including heavy metals, temperature, salinity and exposure to air, mostly generated as part of the Crassostrea gigas genome project. These data were analyzed in silico, resulting in 56,621 newly assembled contigs that were deposited into a publicly available database, the GigaTON database. This database also provides powerful and user-friendly query tools to browse and retrieve information about annotation, expression level, UTRs, splicing and polymorphism, and gene ontology associated with the contigs within each library and across all libraries. The GigaTON database provides a convenient, potent and versatile interface to browse, retrieve and compare massive transcriptomic information across an extensive range of conditions, tissues and developmental stages in Crassostrea gigas. To our knowledge, the GigaTON database constitutes the most extensive transcriptomic database to date for marine invertebrates, and thereby a new reference transcriptome for the oyster and a highly valuable resource for physiologists and evolutionary biologists.
Wimmer, Helge; Gundacker, Nina C; Griss, Johannes; Haudek, Verena J; Stättner, Stefan; Mohr, Thomas; Zwickl, Hannes; Paulitschke, Verena; Baron, David M; Trittner, Wolfgang; Kubicek, Markus; Bayer, Editha; Slany, Astrid; Gerner, Christopher
2009-06-01
Interpretation of proteome data with a focus on biomarker discovery largely relies on comparative proteome analyses. Here, we introduce a database-assisted interpretation strategy based on proteome profiles of primary cells. Both 2-D PAGE and shotgun proteomics are applied, and we obtain high data concordance with these two different techniques. When applying mass analysis of tryptic spot digests from 2-D gels of cytoplasmic fractions, we typically identify several hundred proteins. Using the same protein fractions, we usually identify more than a thousand proteins by shotgun proteomics. The data consistency obtained when comparing these independent data sets exceeds 99% of the proteins identified in the 2-D gels. Many characteristic differences in protein expression between different cells can thus be independently confirmed. Our self-designed SQL database (CPL/MUW, the database of the Clinical Proteomics Laboratories at the Medical University of Vienna, accessible via www.meduniwien.ac.at/proteomics/database) facilitates (i) quality management of MS-based protein identification data, (ii) the detection of cell type-specific proteins and (iii) the detection of molecular signatures of specific functional cell states. Here, we demonstrate how the interpretation of proteome profiles obtained from human liver tissue and hepatocellular carcinoma tissue is assisted by the CPL/MUW database. We therefore suggest that the use of reference experiments supported by a tailored database may substantially facilitate the data interpretation of proteome profiling experiments.
Goodacre, Norman; Aljanahi, Aisha; Nandakumar, Subhiksha; Mikailov, Mike; Khan, Arifa S
2018-01-01
Detection of distantly related viruses by high-throughput sequencing (HTS) is bioinformatically challenging because of the lack of a public database containing all viral sequences, without abundant nonviral sequences, which can extend runtime and obscure viral hits. Our reference viral database (RVDB) includes all viral, virus-related, and virus-like nucleotide sequences (excluding bacterial viruses), regardless of length, and with overall reduced cellular sequences. Semantic selection criteria (SEM-I) were used to select viral sequences from GenBank, resulting in a first-generation viral database (VDB). This database was manually and computationally reviewed, resulting in refined semantic selection criteria (SEM-R), which were applied to a new download of updated GenBank sequences to create a second-generation VDB. Viral entries in the latter were clustered at 98% by CD-HIT-EST to reduce redundancy while retaining high viral sequence diversity. The viral identity of the clustered representative sequences (creps) was confirmed by BLAST searches in NCBI databases and HMMER searches in PFAM and DFAM databases. The resulting RVDB contained a broad representation of viral families, sequence diversity, and a reduced cellular content; it includes full-length and partial sequences and endogenous nonretroviral elements, endogenous retroviruses, and retrotransposons. Testing of RVDBv10.2 with an in-house HTS transcriptomic data set indicated a significantly faster run for virus detection than interrogating the entirety of the NCBI nonredundant nucleotide database, which contains all viral sequences but also nonviral sequences. RVDB is publicly available for facilitating HTS analysis, particularly for novel virus detection. It is meant to be updated on a regular basis to include new viral sequences added to GenBank.
IMPORTANCE To facilitate bioinformatics analysis of high-throughput sequencing (HTS) data for the detection of both known and novel viruses, we have developed a new reference viral database (RVDB) that provides a broad representation of different virus species from eukaryotes by including all viral, virus-like, and virus-related sequences (excluding bacteriophages), regardless of their size. In particular, RVDB contains endogenous nonretroviral elements, endogenous retroviruses, and retrotransposons. Sequences were clustered to reduce redundancy while retaining high viral sequence diversity. A particularly useful feature of RVDB is the reduction of cellular sequences, which can enhance the run efficiency of large transcriptomic and genomic data analysis and increase the specificity of virus detection.
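The semantic selection step described above can be illustrated with a toy keyword filter over GenBank definition lines. The inclusion and exclusion patterns below are assumptions chosen for illustration; the actual SEM-I/SEM-R criteria used to build RVDB are far more extensive:

```python
import re

# Illustrative keyword sets only; the real RVDB selection criteria
# (SEM-I / SEM-R) are much more comprehensive.
INCLUDE = re.compile(
    r"\b(virus|viral|provirus|retrotransposon|endogenous retrovirus)\b", re.I)
EXCLUDE = re.compile(r"\b(phage|bacteriophage)\b", re.I)

def select_viral(defline):
    """Keep a GenBank record if its definition line looks viral and is
    not a bacterial virus (bacteriophage), per the database's scope."""
    return bool(INCLUDE.search(defline)) and not EXCLUDE.search(defline)
```

Records passing such a filter would then go through the clustering (CD-HIT-EST) and confirmation (BLAST/HMMER) stages described in the abstract.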
The Pfam protein families database: towards a more sustainable future.
Finn, Robert D; Coggill, Penelope; Eberhardt, Ruth Y; Eddy, Sean R; Mistry, Jaina; Mitchell, Alex L; Potter, Simon C; Punta, Marco; Qureshi, Matloob; Sangrador-Vegas, Amaia; Salazar, Gustavo A; Tate, John; Bateman, Alex
2016-01-04
In the last two years the Pfam database (http://pfam.xfam.org) has undergone a substantial reorganisation to reduce the effort involved in making a release, thereby permitting more frequent releases. Arguably the most significant of these changes is that Pfam is now primarily based on the UniProtKB reference proteomes, with the counts of matched sequences and species reported on the website restricted to this smaller set. Building families on reference proteomes sequences brings greater stability, which decreases the amount of manual curation required to maintain them. It also reduces the number of sequences displayed on the website, whilst still providing access to many important model organisms. Matches to the full UniProtKB database are, however, still available and Pfam annotations for individual UniProtKB sequences can still be retrieved. Some Pfam entries (1.6%) which have no matches to reference proteomes remain; we are working with UniProt to see if sequences from them can be incorporated into reference proteomes. Pfam-B, the automatically-generated supplement to Pfam, has been removed. The current release (Pfam 29.0) includes 16 295 entries and 559 clans. The facility to view the relationship between families within a clan has been improved by the introduction of a new tool. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Technical Reports Server (NTRS)
Decker, Ryan K.; Burns, Lee; Merry, Carl; Harrington, Brian
2008-01-01
Atmospheric parameters are essential in assessing the flight performance of aerospace vehicles. The effects of the Earth's atmosphere on aerospace vehicles influence various aspects of the vehicle during ascent, ranging from its flight trajectory to the structural dynamics and aerodynamic heating on the vehicle. Atmospheric databases characterizing the wind and thermodynamic environments, known as Range Reference Atmospheres (RRA), have been developed at space launch ranges by a governmental interagency working group for use by aerospace vehicle programs. The National Aeronautics and Space Administration's (NASA) Space Shuttle Program (SSP), which launches from Kennedy Space Center, utilizes atmospheric statistics derived from the Cape Canaveral Air Force Station Range Reference Atmosphere (CCAFS RRA) database to evaluate environmental constraints on various aspects of the vehicle during ascent.
Communication Lower Bounds and Optimal Algorithms for Programs that Reference Arrays - Part 1
2013-05-14
The programs considered include tensor contractions, the direct N-body algorithm, database join, and computing matrix powers A^k. Section 8 summarizes the results and outlines the contents of Part 2 of this paper, which will discuss how to compute lower bounds. The analysis begins with a geometric model.
Pathology Imagebase-a reference image database for standardization of pathology.
Egevad, Lars; Cheville, John; Evans, Andrew J; Hörnblad, Jonas; Kench, James G; Kristiansen, Glen; Leite, Katia R M; Magi-Galluzzi, Cristina; Pan, Chin-Chen; Samaratunga, Hemamali; Srigley, John R; True, Lawrence; Zhou, Ming; Clements, Mark; Delahunt, Brett
2017-11-01
Despite efforts to standardize histopathology practice through the development of guidelines, the interpretation of morphology is still hampered by subjectivity. We here describe Pathology Imagebase, a novel mechanism for establishing an international standard for the interpretation of pathology specimens. The International Society of Urological Pathology (ISUP) established a reference image database through the input of experts in the field. Three panels were formed, one each for prostate, urinary bladder and renal pathology, each consisting of 24 international experts. Each of the panel members uploaded microphotographs of cases into a non-public database. The remaining 23 experts were asked to vote from a multiple-choice menu. Prior to and while voting, panel members were unable to access the voting results of the other experts. When a consensus level of at least two-thirds, or 16 votes, was reached, cases were automatically transferred to the main database. Consensus was reached in a total of 287 cases across five projects on the grading of prostate, bladder and renal cancer and the classification of renal tumours and flat lesions of the bladder. The full database is available to all ISUP members at www.isupweb.org. Non-members may access a selected number of cases. It is anticipated that the database will assist pathologists in calibrating their grading and will also promote consistency in the diagnosis of difficult cases. © 2017 John Wiley & Sons Ltd.
Toward a standard reference database for computer-aided mammography
NASA Astrophysics Data System (ADS)
Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.
2008-03-01
Because of the lack of mammography databases with a large number of codified images and identified characteristics such as pathology, type of breast tissue, and abnormality, the development of robust systems for computer-aided diagnosis has been hampered. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: the Mammographic Image Analysis Society Digital Mammogram Database (MIAS), the Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL) database, and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR Breast Imaging Reporting and Data System (BI-RADS). The import was done automatically using scripts for image download, file format conversion, file naming, and web page and information file browsing. Disregarding resolution, this resulted in a total of 10,509 reference images, of which 6,767 are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-13
... Practice, and Local Effort (BPPPLE) Form.'' Need and Use of Information Collection: The IHS goal is to.../Disease Prevention, Nursing, and Dental) have developed a centralized program database of best practices, promising practices, and local efforts and resources. This database was previously referred to as OSCAR, but the...
Protein, fat, moisture, and cooking yields from a nationwide study of retail beef cuts.
USDA-ARS?s Scientific Manuscript database
Nutrient data from the U.S. Department of Agriculture (USDA) are an important resource for U.S. and international databases. To ensure the data for retail beef cuts in USDA’s National Nutrient Database for Standard Reference (SR) are current, a comprehensive, nationwide, multiyear study was conducte...
High-throughput non-targeted analyses (NTA) rely on chemical reference databases for tentative identification of observed chemical features. Many of these databases and online resources incorporate chemical structure data not in a form that is readily observed by mass spectromet...
Hypercat: A Database for Extragalactic Astronomy
NASA Astrophysics Data System (ADS)
Prugniel, Ph.; Maubon, G.
The Hypercat database is developed at the Observatoire de Lyon and is distributed on the Web (www-obs.univ-lyon1.fr/hypercat) through different mirrors in Europe. The goal of Hypercat is to gather the data necessary for studying the evolution of galaxies (dynamics and stellar content) and, particularly, to provide a z = 0 reference for these studies.
Creating Smarter Classrooms: Data-Based Decision Making for Effective Classroom Management
ERIC Educational Resources Information Center
Gage, Nicholas A.; McDaniel, Sara
2012-01-01
The term "data-based decision making" (DBDM) has become pervasive in education and typically refers to the use of data to make decisions in schools, from assessment of an individual student's academic progress to whole-school reform efforts. Research suggests that special education teachers who use progress monitoring data (a DBDM…
Proposal for Implementing Multi-User Database (MUD) Technology in an Academic Library.
ERIC Educational Resources Information Center
Filby, A. M. Iliana
1996-01-01
Explores the use of MOO (multi-user object oriented) virtual environments in academic libraries to enhance reference services. Highlights include the development of multi-user database (MUD) technology from gaming to non-recreational settings; programming issues; collaborative MOOs; MOOs as distinguished from other types of virtual reality; audio…
Online Public Access Catalogs. ERIC Fact Sheet.
ERIC Educational Resources Information Center
Cochrane, Pauline A.
A listing is presented of 17 documents in the ERIC database concerning the Online Catalog (sometimes referred to as OPAC or Online Public Access Catalog), a computer-based and supported library catalog designed for patron use. The database usually represents recent acquisitions and often contains information about books on order and items in…
Splendore, Alessandra; Fanganiello, Roberto D; Masotti, Cibele; Morganti, Lucas S C; Passos-Bueno, M Rita
2005-05-01
Recently, a novel exon was described in TCOF1 that, although alternatively spliced, is included in the major protein isoform. In addition, most published mutations in this gene do not conform to current mutation nomenclature guidelines. Given these observations, we developed an online database of TCOF1 mutations in which all the reported mutations are renamed according to standard recommendations and in reference to the genomic and novel cDNA reference sequences (www.genoma.ib.usp.br/TCOF1_database). We also report in this work: 1) results of the first screening for large deletions in TCOF1 by Southern blot in patients without mutation detected by direct sequencing; 2) the identification of the first pathogenic mutation in the newly described exon 6A; and 3) statistical analysis of pathogenic mutations and polymorphism distribution throughout the gene.
The multiple personalities of Watson and Crick strands
2011-01-01
Background In genetics it is customary to refer to double-stranded DNA as containing a "Watson strand" and a "Crick strand." However, there seems to be no consensus in the literature on the exact meaning of these two terms, and the many usages contradict one another as well as the original definition. Here, we review the history of the terminology and suggest retaining a single sense that is currently the most useful and consistent. Proposal The Saccharomyces Genome Database defines the Watson strand as the strand which has its 5'-end at the short-arm telomere and the Crick strand as its complement. The Watson strand is always used as the reference strand in their database. Using this as the basis of our standard, we recommend that Watson and Crick strand terminology only be used in the context of genomics. When possible, the centromere or other genomic feature should be used as a reference point, dividing the chromosome into two arms of unequal lengths. Under our proposal, the Watson strand is standardized as the strand whose 5'-end is on the short arm of the chromosome, and the Crick strand as the one whose 5'-end is on the long arm. Furthermore, the Watson strand should be retained as the reference (plus) strand in a genomic database. This usage not only makes the determination of Watson and Crick unambiguous, but also allows unambiguous selection of reference stands for genomics. Reviewers This article was reviewed by John M. Logsdon, Igor B. Rogozin (nominated by Andrey Rzhetsky), and William Martin. PMID:21303550
Compilation of geothermal information: exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-01-01
The Database for Geothermal Energy Exploration and Evaluation is a printout of selected references to publications covering the development of geothermal resources, from the identification of an area to the production of electric power. This annotated bibliography contains four sections: references, author index, author affiliation index, and descriptor index.
ERIC Educational Resources Information Center
Pizlo, Zygmunt
2008-01-01
This paper presents a bibliography of more than 200 references related to human problem solving, arranged by subject matter. The references were taken from PsycInfo database. Journal papers, book chapters, books and dissertations are included. The topics include human development, education, neuroscience, research in applied settings, as well as…
ERIC Educational Resources Information Center
Pizlo, Zygmunt
2007-01-01
This paper presents a bibliography of a little more than 100 references related to human problem solving, arranged by subject matter. The references were taken from PsycInfo and Compendex databases. Only journal papers, books and dissertations are included. The topics include human development, education, neuroscience, research in applied…
ERIC Educational Resources Information Center
Funke, Joachim
2013-01-01
This paper presents a bibliography of 263 references related to human problem solving, arranged by subject matter. The references were taken from the PsycInfo and Academic Premier databases. Journal papers, book chapters, and dissertations are included. The topics include human development, education, neuroscience, and research in applied settings. It…
Reference Manual for Machine-Readable Bibliographic Descriptions. Second Revised Edition.
ERIC Educational Resources Information Center
Dierickx, H., Ed.; Hopkinson, A., Ed.
A product of the UNISIST International Centre for Bibliographic Descriptions (UNIBIB), this reference manual presents a standardized communication format for the exchange of machine-readable bibliographic information between bibliographic databases or other types of bibliographic information services, including libraries. The manual is produced in…
Reference System of DNA and Protein Sequences on CD-ROM
NASA Astrophysics Data System (ADS)
Nasu, Hisanori; Ito, Toshiaki
DNASIS-DBREF31 is a database of DNA and protein sequences on optical compact disc (CD-ROM), developed and commercialized by Hitachi Software Engineering Co., Ltd. Both nucleic acid base sequences and protein amino acid sequences can be retrieved from a single CD-ROM. Existing databases are offered as online services, on floppy disks, or on magnetic tape, all of which have drawbacks in usability or storage capacity. DNASIS-DBREF31 instead adopts CD-ROM as the database medium, providing mass storage and personal use of the database.
Aerodynamic Characteristics, Database Development and Flight Simulation of the X-34 Vehicle
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Brauckmann, Gregory J.; Ruth, Michael J.; Fuhrmann, Henri D.
2000-01-01
An overview of the aerodynamic characteristics, development of the preflight aerodynamic database and flight simulation of the NASA/Orbital X-34 vehicle is presented in this paper. To develop the aerodynamic database, wind tunnel tests from subsonic to hypersonic Mach numbers, including ground effect tests at low subsonic speeds, were conducted in various facilities at the NASA Langley Research Center. Where wind tunnel test data were not available, engineering-level analysis was used to fill the gaps in the database. Using these aerodynamic data, simulations were performed for typical design reference missions of the X-34 vehicle.
Comprehensive Thematic T-Matrix Reference Database: A 2015-2017 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas
2017-01-01
The T-matrix method pioneered by Peter C. Waterman is one of the most versatile and efficient numerically exact computer solvers of the time-harmonic macroscopic Maxwell equations. It is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, periodic structures (including metamaterials), and particles in the vicinity of plane or rough interfaces separating media with different refractive indices. This paper is the eighth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated in 2004 and lists relevant publications that have appeared since 2015. It also references a small number of earlier publications overlooked previously.
Mapping the core journals of the physical therapy literature*
Fell, Dennis W; Buchanan, Melanie J; Horchen, Heidi A; Scherr, Joel A
2011-01-01
Objectives: The purpose of this study was to identify (1) core journals in the literature of physical therapy, (2) currency of references cited in that literature, and (3) online databases providing the highest coverage rate of core journals. Method: Data for each cited reference in each article of four source journals for three years were recorded, including type of literature, year of publication, and journal title. The journal titles were ranked in descending order according to the frequency of citations and divided into three zones using Bradford's Law of Scattering. Four databases were analyzed for coverage rates of articles published in the Zone 1 and Zone 2 journals in 2007. Results: Journal articles were the most frequently cited type of literature, with sixteen journals supplying one-third of the cited journal references. Physical Therapy was the most commonly cited title. There were more cited articles published from 2000 to 2007 than in any previous full decade. Of the databases analyzed, CINAHL provided the highest coverage rate for Zone 1 2007 publications. Conclusions: Results were similar to a previous study, except for changes in the order of Zone 1 journals. Results can help physical therapists and librarians determine important journals in this discipline. PMID:21753912
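The three-zone partition by Bradford's Law of Scattering used in the study above can be sketched as follows. This is a minimal illustration, not the authors' code; the journal titles and citation counts are invented, and ranked journals are greedily grouped so each zone holds roughly an equal share of total citations:

```python
def bradford_zones(citation_counts, n_zones=3):
    """Split journals (ranked by citation frequency, descending) into
    zones that each hold roughly an equal share of total citations."""
    ranked = sorted(citation_counts.items(), key=lambda kv: -kv[1])
    total = sum(citation_counts.values())
    target = total / n_zones
    zones, current, acc = [], [], 0
    for title, count in ranked:
        current.append(title)
        acc += count
        # close a zone once it reaches its share (last zone takes the rest)
        if acc >= target and len(zones) < n_zones - 1:
            zones.append(current)
            current, acc = [], 0
    zones.append(current)
    return zones

# Hypothetical citation tallies from a set of source articles.
counts = {"Physical Therapy": 300, "Spine": 120, "Stroke": 60,
          "Pain": 40, "Gait & Posture": 30, "Chest": 25,
          "JAMA": 15, "Lancet": 10}
zones = bradford_zones(counts)
```

With these made-up counts, Zone 1 contains a single heavily cited title while the later zones need progressively more journals to accumulate the same number of citations, which is the Bradford scatter pattern the study exploits.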
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1993-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases that are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.
Gioutlakis, Aris; Klapa, Maria I.
2017-01-01
It has been acknowledged that source databases recording experimentally supported human protein-protein interactions (PPIs) exhibit limited overlap. Thus, the reconstruction of a comprehensive PPI network requires appropriate integration of multiple heterogeneous primary datasets, presenting the PPIs at various genetic reference levels. Existing PPI meta-databases perform integration via normalization; namely, PPIs are merged after being converted to a certain target level. Hence, the node set of the integrated network depends each time on the number and type of the combined datasets. Moreover, the irreversible a priori normalization process hinders the identification of normalization artifacts in the integrated network, which originate from the nonlinearity characterizing the genetic information flow. PICKLE (Protein InteraCtion KnowLedgebasE) 2.0 implements a new architecture for this recently introduced human PPI meta-database. Its main novel feature over the existing meta-databases is its approach to primary PPI dataset integration via genetic information ontology. Building upon the PICKLE principles of using the reviewed human complete proteome (RHCP) of UniProtKB/Swiss-Prot as the reference protein interactor set, and filtering out protein interactions with low probability of being direct based on the available evidence, PICKLE 2.0 first assembles the RHCP genetic information ontology network by connecting the corresponding genes, nucleotide sequences (mRNAs) and proteins (UniProt entries) and then integrates PPI datasets by superimposing them on the ontology network without any a priori transformations. Importantly, this process allows the resulting heterogeneous integrated network to be reversibly normalized to any level of genetic reference without loss of the original information, the latter being used for identification of normalization biases, and enables the appraisal of potential false positive interactions through PPI source database cross-checking.
The PICKLE web-based interface (www.pickle.gr) allows for the simultaneous query of multiple entities and provides integrated human PPI networks at either the protein (UniProt) or the gene level, at three PPI filtering modes. PMID:29023571
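The reversible normalization idea described above can be illustrated with a minimal sketch. The UniProt accessions and gene symbols below are invented for illustration, and the real PICKLE ontology network also carries nucleotide-sequence (mRNA) nodes; the point is only that keeping the supporting protein pairs for each gene pair makes the gene-level projection reversible:

```python
# Hypothetical protein-to-gene ontology mapping (UniProt accession -> gene).
# "P04637-2" stands in for an isoform that maps to the same gene as "P04637".
protein_to_gene = {"P04637": "TP53", "P04637-2": "TP53",
                   "P38398": "BRCA1", "Q09472": "EP300"}

# Protein-level PPIs as collected from primary datasets (illustrative).
protein_ppis = {("P04637", "P38398"), ("P04637-2", "Q09472"),
                ("P04637", "Q09472")}

def normalize_to_gene_level(ppis, mapping):
    """Project protein-level interactions onto gene pairs, while recording
    which protein pairs support each gene pair (so the step is reversible)."""
    gene_ppis = {}
    for a, b in ppis:
        pair = tuple(sorted((mapping[a], mapping[b])))
        gene_ppis.setdefault(pair, set()).add((a, b))
    return gene_ppis

gene_net = normalize_to_gene_level(protein_ppis, protein_to_gene)
```

Here the two distinct protein-level interactions with EP300 collapse into one gene-level edge, but the original evidence survives in the stored supporting pairs, which is what allows normalization biases to be inspected afterwards.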
Winsor, Geoffrey L; Van Rossum, Thea; Lo, Raymond; Khaira, Bhavjinder; Whiteside, Matthew D; Hancock, Robert E W; Brinkman, Fiona S L
2009-01-01
Pseudomonas aeruginosa is a well-studied opportunistic pathogen that is particularly known for its intrinsic antimicrobial resistance, diverse metabolic capacity, and its ability to cause life threatening infections in cystic fibrosis patients. The Pseudomonas Genome Database (http://www.pseudomonas.com) was originally developed as a resource for peer-reviewed, continually updated annotation for the Pseudomonas aeruginosa PAO1 reference strain genome. In order to facilitate cross-strain and cross-species genome comparisons with other Pseudomonas species of importance, we have now expanded the database capabilities to include all Pseudomonas species, and have developed or incorporated methods to facilitate high quality comparative genomics. The database contains robust assessment of orthologs, a novel ortholog clustering method, and incorporates five views of the data at the sequence and annotation levels (Gbrowse, Mauve and custom views) to facilitate genome comparisons. A choice of simple and more flexible user-friendly Boolean search features allows researchers to search and compare annotations or sequences within or between genomes. Other features include more accurate protein subcellular localization predictions and a user-friendly, Boolean searchable log file of updates for the reference strain PAO1. This database aims to continue to provide a high quality, annotated genome resource for the research community and is available under an open source license.
Barroso, João; Pfannenbecker, Uwe; Adriaens, Els; Alépée, Nathalie; Cluzel, Magalie; De Smedt, Ann; Hibatallah, Jalila; Klaric, Martina; Mewes, Karsten R; Millet, Marion; Templier, Marie; McNamee, Pauline
2017-02-01
A thorough understanding of which of the effects assessed in the in vivo Draize eye test are responsible for driving UN GHS/EU CLP classification is critical for an adequate selection of chemicals to be used in the development and/or evaluation of alternative methods/strategies and for properly assessing their predictive capacity and limitations. For this reason, Cosmetics Europe has compiled a database of Draize data (Draize eye test Reference Database, DRD) from external lists that were created to support past validation activities. This database contains 681 independent in vivo studies on 634 individual chemicals representing a wide range of chemical classes. A description of all the ocular effects observed in vivo, i.e. degree of severity and persistence of corneal opacity (CO), iritis, and/or conjunctiva effects, was added for each individual study in the database, and the studies were categorised according to their UN GHS/EU CLP classification and the main effect driving the classification. An evaluation of the various in vivo drivers of classification compiled in the database was performed to establish which of these are most important from a regulatory point of view. These analyses established that the most important drivers for Cat 1 Classification are (1) CO mean ≥ 3 (days 1-3) (severity) and (2) CO persistence on day 21 in the absence of severity, and those for Cat 2 classification are (3) CO mean ≥ 1 and (4) conjunctival redness mean ≥ 2. Moreover, it is shown that all classifiable effects (including persistence and CO = 4) should be present in ≥60 % of the animals to drive a classification. As a consequence, our analyses suggest the need for a critical revision of the UN GHS/EU CLP decision criteria for the Cat 1 classification of chemicals. 
Finally, a number of key criteria are identified that should be taken into consideration when selecting reference chemicals for the development, evaluation and/or validation of alternative methods and/or strategies for serious eye damage/eye irritation testing. Most important, the DRD is an invaluable tool for any future activity involving the selection of reference chemicals.
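The drivers of classification identified in this analysis lend themselves to a compact rule set. The sketch below is a toy encoding of the thresholds quoted in the abstract (CO mean, day-21 persistence, conjunctival redness, and the ≥60 % animal-incidence condition); the function name and argument layout are ours, and the actual UN GHS/EU CLP criteria involve additional endpoints:

```python
def ghs_category(co_mean_d1_3, co_persistent_d21, conj_redness_mean,
                 frac_animals_with_effect):
    """Toy rule set reflecting the drivers identified above:
    Cat 1 if corneal opacity (CO) mean >= 3 over days 1-3, or CO persists
    to day 21; Cat 2 if CO mean >= 1 or conjunctival redness mean >= 2.
    Classifiable effects must be present in >= 60% of animals."""
    if frac_animals_with_effect < 0.6:
        return "No Cat"
    if co_mean_d1_3 >= 3 or co_persistent_d21:
        return "Cat 1"
    if co_mean_d1_3 >= 1 or conj_redness_mean >= 2:
        return "Cat 2"
    return "No Cat"
```

For example, a study with severe early opacity in 75 % of animals would fall under Cat 1, whereas the same severity seen in only half the animals would not drive a classification under the ≥60 % condition.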
OCLC Looks to an Online Future: An Interview with K. Wayne Smith.
ERIC Educational Resources Information Center
Arnold, Stephen
1993-01-01
Provides an interview with K. Wayne Smith, chief executive officer of OCLC, that focuses on OCLC's online reference services. Topics include the ratio between technical and online reference services, how OCLC fits into the online industry, telecommunications, electronic publishing, pricing, database tape leases, and CD-ROM. (EAM)
Serials Information on CD-ROM: A Reference Perspective.
ERIC Educational Resources Information Center
Karch, Linda S.
1990-01-01
Describes Ulrich's PLUS (a CD-ROM version of Ulrich's serials directories) and EBSCO's CD-ROM version of "The Serials Directory," and compares the two in terms of their use as reference tools. Areas discussed include database content, user aids, system features, search features, and a comparison of search results. Equipment requirements…
In vivo studies provide reference data to evaluate alternative methods for predicting toxicity. However, the reproducibility and variance of effects observed across multiple in vivo studies is not well understood. The US EPA’s Toxicity Reference Database (ToxRefDB) stores d...
Locating and parsing bibliographic references in HTML medical articles
Zou, Jie; Le, Daniel; Thoma, George R.
2010-01-01
The set of references that typically appear toward the end of journal articles is sometimes, though not always, a field in bibliographic (citation) databases. But even if references do not constitute such a field, they can be useful as a preprocessing step in the automated extraction of other bibliographic data from articles, as well as in computer-assisted indexing of articles. Automation in data extraction and indexing to minimize human labor is key to the affordable creation and maintenance of large bibliographic databases. Extracting the components of references, such as author names, article title, journal name, publication date and other entities, is therefore a valuable and sometimes necessary task. This paper describes a two-step process using statistical machine learning algorithms, to first locate the references in HTML medical articles and then to parse them. Reference locating identifies the reference section in an article and then decomposes it into individual references. We formulate this step as a two-class classification problem based on text and geometric features. An evaluation conducted on 500 articles drawn from 100 medical journals achieves near-perfect precision and recall rates for locating references. Reference parsing identifies the components of each reference. For this second step, we implement and compare two algorithms. One relies on sequence statistics and trains a Conditional Random Field. The other focuses on local feature statistics and trains a Support Vector Machine to classify each individual word, followed by a search algorithm that systematically corrects low confidence labels if the label sequence violates a set of predefined rules. The overall performance of these two reference-parsing algorithms is about the same: above 99% accuracy at the word level, and over 97% accuracy at the chunk level. PMID:20640222
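The second parsing stage described above, in which low-confidence word labels that violate predefined rules are systematically corrected, can be illustrated with a minimal sketch. The labels, rule, confidence values, and fallback choice below are all invented; the paper's actual system uses a trained Support Vector Machine followed by a rule-constrained search:

```python
# Each rule maps a label to a predicate the word must satisfy.
RULES = {
    # a token labeled 'year' should be all digits
    "year": lambda w: w.isdigit(),
}

def correct_labels(words, labels, confidences, threshold=0.8):
    """Revisit low-confidence labels; if a word violates its label's rule,
    fall back to a default label ('title' here, purely for illustration)."""
    corrected = []
    for word, label, conf in zip(words, labels, confidences):
        check = RULES.get(label)
        if conf < threshold and check is not None and not check(word):
            corrected.append("title")
        else:
            corrected.append(label)
    return corrected

words  = ["Zou", "J", "Locating", "2010"]
labels = ["author", "author", "year", "year"]
confs  = [0.95, 0.90, 0.55, 0.99]
result = correct_labels(words, labels, confs)
```

In this toy run, the low-confidence "year" label on a non-numeric word is overridden, while the high-confidence labels are left untouched, mirroring the correct-only-when-uncertain design choice of the paper.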
National Institute of Standards and Technology Data Gateway
SRD 106 IUPAC-NIST Solubility Database (Web, free access) These solubilities are compiled from 18 volumes of the International Union of Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.
Comprehensive T-matrix Reference Database: A 2009-2011 Update
NASA Technical Reports Server (NTRS)
Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.
2012-01-01
The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.
Creating a VAPEPS database: A VAPEPS tutorial
NASA Technical Reports Server (NTRS)
Graves, George
1989-01-01
A procedural method is outlined for creating a Vibroacoustic Payload Environment Prediction System (VAPEPS) Database. The method of presentation employs flowcharts of sequential VAPEPS Commands used to create a VAPEPS Database. The commands are accompanied by explanatory text to the right of the command in order to minimize the need for repetitive reference to the VAPEPS user's manual. The method is demonstrated by examples of varying complexity. It is assumed that the reader has acquired a basic knowledge of the VAPEPS software program.
O'Leary, Nuala A; Wright, Mathew W; Brister, J Rodney; Ciufo, Stacy; Haddad, Diana; McVeigh, Rich; Rajput, Bhanu; Robbertse, Barbara; Smith-White, Brian; Ako-Adjei, Danso; Astashyn, Alexander; Badretdin, Azat; Bao, Yiming; Blinkova, Olga; Brover, Vyacheslav; Chetvernin, Vyacheslav; Choi, Jinna; Cox, Eric; Ermolaeva, Olga; Farrell, Catherine M; Goldfarb, Tamara; Gupta, Tripti; Haft, Daniel; Hatcher, Eneida; Hlavina, Wratko; Joardar, Vinita S; Kodali, Vamsi K; Li, Wenjun; Maglott, Donna; Masterson, Patrick; McGarvey, Kelly M; Murphy, Michael R; O'Neill, Kathleen; Pujar, Shashikant; Rangwala, Sanjida H; Rausch, Daniel; Riddick, Lillian D; Schoch, Conrad; Shkeda, Andrei; Storz, Susan S; Sun, Hanzhen; Thibaud-Nissen, Francoise; Tolstoy, Igor; Tully, Raymond E; Vatsan, Anjana R; Wallin, Craig; Webb, David; Wu, Wendy; Landrum, Melissa J; Kimchi, Avi; Tatusova, Tatiana; DiCuccio, Michael; Kitts, Paul; Murphy, Terence D; Pruitt, Kim D
2016-01-04
The RefSeq project at the National Center for Biotechnology Information (NCBI) maintains and curates a publicly available database of annotated genomic, transcript, and protein sequence records (http://www.ncbi.nlm.nih.gov/refseq/). The RefSeq project leverages the data submitted to the International Nucleotide Sequence Database Collaboration (INSDC) against a combination of computation, manual curation, and collaboration to produce a standard set of stable, non-redundant reference sequences. The RefSeq project augments these reference sequences with current knowledge including publications, functional features and informative nomenclature. The database currently represents sequences from more than 55,000 organisms (>4800 viruses, >40,000 prokaryotes and >10,000 eukaryotes; RefSeq release 71), ranging from a single record to complete genomes. This paper summarizes the current status of the viral, prokaryotic, and eukaryotic branches of the RefSeq project, reports on improvements to data access and details efforts to further expand the taxonomic representation of the collection. We also highlight diverse functional curation initiatives that support multiple uses of RefSeq data including taxonomic validation, genome annotation, comparative genomics, and clinical testing. We summarize our approach to utilizing available RNA-Seq and other data types in our manual curation process for vertebrate, plant, and other species, and describe a new direction for prokaryotic genomes and protein name management. Published by Oxford University Press on behalf of Nucleic Acids Research 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro
2016-06-01
Healthcare databases are useful sources to investigate the epidemiology of chronic obstructive pulmonary disease (COPD), to assess longitudinal outcomes in patients with COPD, and to develop disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system; the Read Codes system or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. Ethics approval is not required. Results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating healthcare databases. CRD42015029204.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
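The test measures this protocol looks for (sensitivity, positive predictive value, and related quantities) are computed from a 2×2 table comparing database codes against the reference-standard case definition. A minimal sketch, with invented counts:

```python
def validation_measures(tp, fp, fn, tn):
    """Standard 2x2 accuracy measures used when validating database codes
    against a reference-standard case definition."""
    return {
        "sensitivity": tp / (tp + fn),   # cases correctly flagged
        "specificity": tn / (tn + fp),   # non-cases correctly cleared
        "ppv":         tp / (tp + fp),   # flagged records that are true cases
        "npv":         tn / (tn + fn),   # unflagged records that are non-cases
    }

# Hypothetical COPD validation: 90 true cases flagged, 10 missed,
# 15 false alarms, 885 true negatives.
m = validation_measures(tp=90, fp=15, fn=10, tn=885)
```

Note that PPV, unlike sensitivity, depends on disease prevalence in the database population, which is one reason validation studies report several measures rather than a single one.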
The Reliability of Methodological Ratings for speechBITE Using the PEDro-P Scale
ERIC Educational Resources Information Center
Murray, Elizabeth; Power, Emma; Togher, Leanne; McCabe, Patricia; Munro, Natalie; Smith, Katherine
2013-01-01
Background: speechBITE (http://www.speechbite.com) is an online database established in order to help speech and language therapists gain faster access to relevant research that can be used in clinical decision-making. In addition to containing more than 3000 journal references, the database also provides methodological ratings on the PEDro-P (an…
ERIC Educational Resources Information Center
Fagan, Judy Condit
2001-01-01
Discusses the need for libraries to routinely redesign their Web sites, and presents a case study that describes how a Perl-driven database at Southern Illinois University's library improved Web site organization and patron access, simplified revisions, and allowed staff unfamiliar with HTML to update content. (Contains 56 references.) (Author/LRW)
ERIC Educational Resources Information Center
Evans, John; Park, Betsy
This planning proposal recommends that Memphis State University Libraries make information on CD-ROM (compact disc--read only memory) available in the Reference Department by establishing an Information Retrieval Center (IRC). Following a brief introduction and statement of purpose, the library's databases, users, staffing, facilities, and…
Data mining the EXFOR database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, David A.; Hirdt, John; Herman, Michal
2013-12-13
The EXFOR database contains the largest collection of experimental nuclear reaction data available as well as this data's bibliographic information and experimental details. We created an undirected graph from the EXFOR datasets with graph nodes representing single observables and graph links representing the connections of various types between these observables. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. Analysing this abstract graph, we are able to address very specific questions such as 1) what observables are being used as reference measurements by the experimental community? 2) are these observables given the attention needed by various standards organisations? 3) are there classes of observables that are not connected to these reference measurements? In addressing these questions, we propose several (mostly cross section) observables that should be evaluated and made into reaction reference standards.
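The graph analysis described above reduces to standard operations on an undirected graph: node degree answers which observables serve as references, and reachability answers which observables are disconnected from them. A small sketch with invented edges (the observable names are realistic but the connections are made up for illustration):

```python
from collections import defaultdict

# Toy undirected graph: nodes are observables, links record that two
# observables appear together (e.g. one is measured relative to the other).
edges = [("U235(n,f)", "H1(n,el)"), ("Au197(n,g)", "H1(n,el)"),
         ("U238(n,f)", "U235(n,f)"), ("Fe56(n,inl)", "Fe56(n,el)")]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

# Q1: which observables are most used as references? -> highest degree.
degree = {node: len(nbrs) for node, nbrs in graph.items()}

# Q3: which observables are disconnected from a given reference standard?
def reachable(graph, start):
    """Depth-first traversal returning all nodes connected to `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

connected_to_std = reachable(graph, "H1(n,el)")
isolated = set(graph) - connected_to_std
```

In this toy graph the iron observables form a component unreachable from the hydrogen elastic-scattering standard, which is exactly the kind of disconnected class the EXFOR analysis was designed to surface.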
Agustini, Bruna Carla; Silva, Luciano Paulino; Bloch, Carlos; Bonfim, Tania M B; da Silva, Gildo Almeida
2014-06-01
Yeast identification using traditional methods, which rely on morphological, physiological, and biochemical characteristics, is a demanding task: it requires experienced microbiologists and rigorous control of culture conditions, since variations can lead to different outcomes. For both clinical and industrial applications, fast and accurate identification of microorganisms is a growing demand. Molecular biology approaches have therefore been used extensively and, more recently, protein profiling using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has proved to be an even more efficient tool for taxonomic purposes. Nonetheless, the mass spectrometry data available for differentiating yeast species of industrial interest are limited, and commercially available reference databases comprise almost exclusively clinical microorganisms. In this context, studies focusing on environmental isolates are required to extend the existing databases. The development of a supplementary database and the assessment of a commercial database for taxonomic identification of environmental yeasts are the aims of this study. We used MALDI-TOF MS to create protein profiles for 845 yeast strains isolated from grape must; 67.7% of the strains were successfully identified against the previously available manufacturer database. The remaining 32.3% of strains were not identified due to the absence of a reference spectrum. After establishing the correct taxon for these strains using molecular biology approaches, the spectra of the missing species were added to a supplementary database. This new library correctly identified the previously unassigned species at the first attempt by MALDI-TOF MS, proving it to be a powerful tool for the identification of environmental yeasts.
Vidal-Acuña, M Reyes; Ruiz-Pérez de Pipaón, Maite; Torres-Sánchez, María José; Aznar, Javier
2017-12-08
An expanded library of matrix assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) has been constructed using the spectra generated from 42 clinical isolates and 11 reference strains, including 23 different species from 8 sections (16 cryptic plus 7 noncryptic species). Out of a total of 379 strains of Aspergillus isolated from clinical samples, 179 strains were selected to be identified by sequencing of beta-tubulin or calmodulin genes. Protein spectra of 53 strains, cultured in liquid medium, were used to construct an in-house reference database in the MALDI-TOF MS. One hundred ninety strains (179 clinical isolates previously identified by sequencing and the 11 reference strains), cultured on solid medium, were blindly analyzed by the MALDI-TOF MS technology to validate the generated in-house reference database. A 100% correlation was obtained with both identification methods, gene sequencing and MALDI-TOF MS, and no discordant identification was obtained. The HUVR database provided species-level identification (score of ≥2.0) in 165 isolates (86.84%), and for the remaining 25 (13.16%) a genus-level identification (score between 1.7 and 2.0) was obtained. The routine MALDI-TOF MS analysis with the new database was then challenged with 200 Aspergillus clinical isolates grown on solid medium in a prospective evaluation. A species identification was obtained in 191 strains (95.5%), and only nine strains (4.5%) could not be identified at the species level. Among the 200 strains, A. tubingensis was the only cryptic species identified. We demonstrated the feasibility and usefulness of the new HUVR database in MALDI-TOF MS by the use of a standardized procedure for the identification of Aspergillus clinical isolates, including cryptic species, grown either on solid or liquid media. © The Author 2017. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved.
TMDB: a literature-curated database for small molecular compounds found from tea.
Yue, Yi; Chu, Gang-Xiu; Liu, Xue-Shi; Tang, Xing; Wang, Wei; Liu, Guang-Jin; Yang, Tao; Ling, Tie-Jun; Wang, Xiao-Gang; Zhang, Zheng-Zhu; Xia, Tao; Wan, Xiao-Chun; Bao, Guan-Hu
2014-09-16
Tea is one of the most widely consumed beverages worldwide. The health effects of tea are attributed to a wealth of different chemical components. Thousands of studies on the chemical constituents of tea have been reported. However, data from these individual reports have not been collected into a single database. The lack of a curated database of related information limits research in this field, so a cohesive database system is needed for data deposit and further application. The Tea Metabolome database (TMDB), a manually curated and web-accessible database, was developed to provide detailed, searchable descriptions of small molecular compounds found in Camellia spp., especially in the plant Camellia sinensis, and compounds in its manufactured products (different kinds of tea infusion). TMDB is currently the most complete and comprehensive curated collection of tea compound data in the world. It contains records for more than 1393 constituents found in tea, with information gathered from 364 published books, journal articles, and electronic databases. It also contains experimental 1H NMR and 13C NMR data collected from purified reference compounds or from other database resources such as HMDB. The TMDB interface allows users to retrieve tea compound entries by keyword search using compound name, formula, occurrence, and CAS registry number. Each entry in the TMDB contains an average of 24 separate data fields, including the original plant species, compound structure, formula, molecular weight, name, CAS registry number, compound type, compound uses including health benefits, literature references, NMR and MS data, and the corresponding IDs from databases such as HMDB and PubMed. Users can also contribute novel regulatory entries by using a web-based submission page. The TMDB database is freely accessible at http://pcsb.ahau.edu.cn:8080/TCDB/index.jsp.
The TMDB is designed to address the broad needs of tea biochemists, natural products chemists, nutritionists, and members of the tea-related research community. The TMDB database provides a solid platform for the collection, standardization, and searching of compound information found in tea. As such, this database will serve as a comprehensive repository for tea biochemistry and tea health research.
Dicken, Connie L.; Dunlap, Pamela; Parks, Heather L.; Hammarstrom, Jane M.; Zientek, Michael L.; Zientek, Michael L.; Hammarstrom, Jane M.; Johnson, Kathleen M.
2016-07-13
As part of the first-ever U.S. Geological Survey global assessment of undiscovered copper resources, data common to several regional spatial databases published by the U.S. Geological Survey, including one report from Finland and one from Greenland, were standardized, updated, and compiled into a global copper resource database. This integrated collection of spatial databases provides location, geologic and mineral resource data, and source references for deposits, significant prospects, and areas permissive for undiscovered deposits of both porphyry copper and sediment-hosted copper. The copper resource database allows for efficient modeling on a global scale in a geographic information system (GIS) and is provided in an Esri ArcGIS file geodatabase format.
The PMDB Protein Model Database
Castrignanò, Tiziana; De Meo, Paolo D'Onorio; Cozzetto, Domenico; Talamo, Ivano Giuseppe; Tramontano, Anna
2006-01-01
The Protein Model Database (PMDB) is a public resource aimed at storing manually built 3D models of proteins. The database is designed to provide access to models published in the scientific literature, together with validating experimental data. It is a relational database and currently contains >74 000 models for ∼240 proteins. The system is accessible online and allows predictors to submit models along with related supporting evidence, and users to download them through a simple and intuitive interface. Users can navigate the database and retrieve models referring to the same target protein or to different regions of the same protein. Each model is assigned a unique identifier that allows interested users to directly access the data. PMID:16381873
De Carolis, E; Posteraro, B; Lass-Flörl, C; Vella, A; Florio, A R; Torelli, R; Girmenia, C; Colozza, C; Tortorano, A M; Sanguinetti, M; Fadda, G
2012-05-01
Accurate species discrimination of filamentous fungi is essential, because some species have specific antifungal susceptibility patterns, and misidentification may result in inappropriate therapy. We evaluated matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for species identification through direct surface analysis of the fungal culture. By use of culture collection strains representing 55 species of Aspergillus, Fusarium and Mucorales, a reference database was established for MALDI-TOF MS-based species identification according to the manufacturer's recommendations for microflex measurements and MALDI BioTyper 2.0 software. The profiles of young and mature colonies were analysed for each of the reference strains, and species-specific spectral fingerprints were obtained. To evaluate the database, 103 blind-coded fungal isolates collected in the routine clinical microbiology laboratory were tested. As a reference method for species designation, multilocus sequencing was used. Eighty-five isolates were unequivocally identified to the species level (≥99% sequence similarity); 18 isolates producing ambiguous results at this threshold were initially rated as identified to the genus level only. Further molecular analysis definitively assigned these isolates to the species Aspergillus oryzae (17 isolates) and Aspergillus flavus (one isolate), concordant with the MALDI-TOF MS results. Excluding nine isolates that belong to the fungal species not included in our reference database, 91 (96.8%) of 94 isolates were identified by MALDI-TOF MS to the species level, in agreement with the results of the reference method; three isolates were identified to the genus level. In conclusion, MALDI-TOF MS is suitable for the routine identification of filamentous fungi in a medical microbiology laboratory. © 2011 The Authors. Clinical Microbiology and Infection © 2011 European Society of Clinical Microbiology and Infectious Diseases.
University Library Virtual Reference Services: Best Practices and Continuous Improvement
ERIC Educational Resources Information Center
Shaw, Kate; Spink, Amanda
2009-01-01
Whether to include chat services within Virtual Reference (VR) is an important question for university libraries. Increasingly, email supported by a Frequently Asked Questions (FAQ) database is suggested in the scholarly literature as the preferred, cost-effective means of providing university VR services. This paper examines these issues and…
ERIC Educational Resources Information Center
Profeta, Patricia C.
2007-01-01
The provision of equitable library services to distance learning students emerged as a critical area during the 1990s. Library services available to distance learning students included digital reference and instructional services, remote access to online research tools, database and research tutorials, interlibrary loan, and document delivery.…
Reference Manual for Machine-Readable Descriptions of Research Projects and Institutions.
ERIC Educational Resources Information Center
Dierickx, Harold; Hopkinson, Alan
This reference manual presents a standardized communication format for the exchange between databases or other information services of machine-readable information on research in progress. The manual is produced in loose-leaf format to facilitate updating. Its first section defines in broad outline the format and content of applicable records. A…
acdc – Automated Contamination Detection and Confidence estimation for single-cell genome data
Lux, Markus; Kruger, Jan; Rinke, Christian; ...
2016-12-20
A major obstacle in single-cell sequencing is sample contamination with foreign DNA. To guarantee clean genome assemblies and to prevent the introduction of contamination into public databases, considerable quality control efforts are put into post-sequencing analysis. Contamination screening generally relies on reference-based methods such as database alignment or marker gene search, which limits the set of detectable contaminants to organisms with closely related reference species. As genomic coverage in the tree of life is highly fragmented, there is an urgent need for a reference-free methodology for contaminant identification in sequence data. We present acdc, a tool specifically developed to aid the quality control process of genomic sequence data. By combining supervised and unsupervised methods, it reliably detects both known and de novo contaminants. First, 16S rRNA gene prediction and the inclusion of ultrafast exact alignment techniques allow sequence classification using existing knowledge from databases. Second, reference-free inspection is enabled by the use of state-of-the-art machine learning techniques that include fast, non-linear dimensionality reduction of oligonucleotide signatures and subsequent clustering algorithms that automatically estimate the number of clusters. The latter also enables the removal of any contaminant, yielding a clean sample. Furthermore, given the data complexity and the ill-posedness of clustering, acdc employs bootstrapping techniques to provide statistically profound confidence values. Tested on a large number of samples from diverse sequencing projects, our software is able to quickly and accurately identify contamination. Results are displayed in an interactive user interface. Acdc can be run from the web as well as a dedicated command line application, which allows easy integration into large sequencing project analysis workflows. Acdc can reliably detect contamination in single-cell genome data.
In addition to database-driven detection, it complements existing tools by its unsupervised techniques, which allow for the detection of de novo contaminants. Our contribution has the potential to drastically reduce the amount of resources put into these processes, particularly in the context of limited availability of reference species. As single-cell genome data continues to grow rapidly, acdc adds to the toolkit of crucial quality assurance tools.
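The oligonucleotide signatures that acdc reduces and clusters are, in essence, normalised k-mer frequency vectors computed per contig. A minimal pure-Python sketch on a toy sequence (the real tool adds dimensionality reduction, clustering, and bootstrapped confidence values on top of such vectors):

```python
from itertools import product

def kmer_signature(sequence, k=4):
    """Normalised k-mer frequency vector (an oligonucleotide signature).
    Returns a 4**k-dimensional vector over the canonical ACGT alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(sequence) - k + 1):
        kmer = sequence[i:i + k]
        if kmer in counts:          # skip windows containing ambiguous bases
            counts[kmer] += 1
    total = sum(counts.values()) or 1
    return [counts[m] / total for m in kmers]

sig = kmer_signature("ACGTACGTACGTACGT" * 10)
print(len(sig), round(sum(sig), 6))
```

Two contigs from the same organism tend to have nearby signature vectors, which is what lets a reference-free clustering step separate host sequence from contaminant without any database lookup.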
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. To find an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
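The appeal of a column-family store like Cassandra for write-heavy genomic workloads is its wide-row layout: rows sharing a partition key are stored together, sorted by clustering columns, so a range of positions is one sequential read. A hypothetical sketch of that data model (the schema and names below are invented for illustration, not taken from the paper; the dict-plus-bisect structure only mimics the layout, it is not the Cassandra engine):

```python
from collections import defaultdict
import bisect

# Hypothetical CQL schema this sketch imitates:
#   CREATE TABLE reads_by_chromosome (
#       chromosome text, position bigint, read_id text, bases text,
#       PRIMARY KEY ((chromosome), position, read_id));

# chromosome (partition key) -> sorted list of (position, read_id, bases)
table = defaultdict(list)

def insert(chromosome, position, read_id, bases):
    """Insert a read, keeping each partition sorted by clustering columns."""
    bisect.insort(table[chromosome], (position, read_id, bases))

def slice_reads(chromosome, start, end):
    """Range scan within one partition, like a CQL query on position."""
    rows = table[chromosome]
    lo = bisect.bisect_left(rows, (start,))
    hi = bisect.bisect_right(rows, (end, chr(0x10FFFF), ""))
    return rows[lo:hi]

insert("chr1", 100, "r1", "ACGT")
insert("chr1", 500, "r2", "TTGA")
insert("chr2", 100, "r3", "GGCC")
print(slice_reads("chr1", 0, 300))
```

A relational system answers the same query, but the wide-row model co-locates all reads for a chromosome on one replica set, which is what favours the bulk-write and sequential-retrieval pattern the paper benchmarks.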
Mapping Research in the Field of Special Education on the Island of Ireland since 2000
ERIC Educational Resources Information Center
Travers, Joseph; Savage, Rosie; Butler, Cathal; O'Donnell, Margaret
2018-01-01
This paper describes the process of building a database mapping research and policy in the field of special education on the island of Ireland from 2000 to 2013. The field of study includes special educational needs, disability and inclusion. The database contains 3188 references organised thematically and forms a source for researchers to access…
USDA-ARS?s Scientific Manuscript database
Beef nutrition is very important to the worldwide beef industry and its consumers. The objective of this study was to analyze nutrient composition of eight beef rib and plate cuts to update the nutrient data in the USDA National Nutrient Database for Standard Reference (SR). Seventy-two carcasses ...
NASA Astrophysics Data System (ADS)
Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng
2013-03-01
Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be a time-critical, life-or-death situation. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
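The reference-frame selection rule can be sketched as follows. Here `detections` stands in for hypothetical per-frame detector output, and the fixed maximum GOP fallback is an assumption added for completeness, not a detail from the paper:

```python
def choose_reference_frames(detections, gop_max=30):
    """Pick I-frame indices during compression: force an I-frame whenever a
    vehicle is detected at the trigger position, and otherwise cap the GOP
    (group of pictures) at gop_max frames so the stream stays seekable."""
    i_frames = []
    last_i = -gop_max            # guarantees frame 0 becomes an I-frame
    for idx, vehicle_here in enumerate(detections):
        if vehicle_here or idx - last_i >= gop_max:
            i_frames.append(idx)
            last_i = idx
    return i_frames

# A later vehicle search decodes and inspects only these I-frames,
# never the intervening predicted frames.
detections = [False] * 100
detections[12] = detections[57] = True
print(choose_reference_frames(detections))
```

With detections at frames 12 and 57, the I-frames land on those events plus the periodic fallback frames, so a search touches a handful of frames instead of all 100.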
Rapid identification of oral Actinomyces species cultivated from subgingival biofilm by MALDI-TOF-MS
Stingu, Catalina S.; Borgmann, Toralf; Rodloff, Arne C.; Vielkind, Paul; Jentsch, Holger; Schellenberger, Wolfgang; Eschrich, Klaus
2015-01-01
Background Actinomyces are a common part of the resident flora of the human intestinal tract, genitourinary system and skin. Isolation and identification of Actinomyces by conventional methods is often difficult and time consuming. In recent years, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) has become a rapid and simple method to identify bacteria. Objective The present study evaluated a new in-house algorithm using MALDI-TOF-MS for rapid identification of different species of oral Actinomyces cultivated from subgingival biofilm. Design Eleven reference strains and 674 clinical strains were used in this study. All the strains were preliminarily identified using biochemical methods and then subjected to MALDI-TOF-MS analysis using both similarity-based analysis and classification methods (support vector machines [SVMs]). The genotype of the reference strains and of 232 clinical strains was identified by sequence analysis of the 16S ribosomal RNA (rRNA). Results The sequence analysis of the 16S rRNA gene of all reference strains confirmed their previous identification. The MALDI-TOF-MS spectra obtained from the reference strains and the other clinical strains unambiguously identified as Actinomyces by 16S rRNA sequencing were used to create the mass spectra reference database. Visual inspection of the mass spectra of different species already reveals both similarities and differences. However, the differences between them are not large enough to allow a reliable differentiation by similarity analysis. Therefore, classification methods were applied as an alternative approach for differentiation and identification of Actinomyces at the species level. A cross-validation of the reference database representing 14 Actinomyces species yielded correct results for all species that were represented by more than two strains in the database.
Conclusions Our results suggest that a combination of MALDI-TOF-MS with powerful classification algorithms, such as SVMs, provide a useful tool for the differentiation and identification of oral Actinomyces. PMID:25597306
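The similarity-based analysis the study starts from can be illustrated as a cosine-similarity lookup of a query spectrum against mean reference spectra. The binned intensity vectors and species entries below are invented toy data (real spectra span thousands of m/z bins), and, as the study notes, this simple matcher is exactly what proved insufficient for closely related species, motivating the SVM step:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(spectrum, reference_db):
    """Return the species whose reference spectrum is most similar."""
    return max(reference_db, key=lambda sp: cosine(spectrum, reference_db[sp]))

# Hypothetical binned peak-intensity vectors for two oral Actinomyces species.
reference_db = {
    "A. naeslundii": [5.0, 0.1, 3.0, 0.0],
    "A. oris":       [0.2, 4.0, 0.1, 2.5],
}
print(identify([4.8, 0.3, 2.9, 0.1], reference_db))
```

A trained classifier such as an SVM improves on this by learning which bins discriminate between species, rather than weighting every peak equally as cosine similarity does.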
SPSmart: adapting population based SNP genotype databases for fast and comprehensive web access.
Amigo, Jorge; Salas, Antonio; Phillips, Christopher; Carracedo, Angel
2008-10-10
In the last five years large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10^9 genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format.
In addition, full numerical description of the data is output in statistical results panels that include common population genetics metrics such as heterozygosity, Fst and In.
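Metrics like expected heterozygosity and Fst follow directly from the allele frequencies such a data mart stores. A minimal sketch under the simplifying assumption of equally weighted populations (one of several Fst estimators; the frequencies below are illustrative, not SPSmart data):

```python
def expected_heterozygosity(freqs):
    """H = 1 - sum(p_i^2) for the allele frequencies p_i of one population."""
    return 1.0 - sum(p * p for p in freqs)

def fst(pop_freqs):
    """Wright's Fst = (Ht - mean Hs) / Ht, with populations weighted equally.
    `pop_freqs` is a list of allele-frequency lists, one per population."""
    n_alleles = len(pop_freqs[0])
    mean_freqs = [sum(pop[i] for pop in pop_freqs) / len(pop_freqs)
                  for i in range(n_alleles)]
    h_t = expected_heterozygosity(mean_freqs)   # total (pooled) heterozygosity
    h_s = sum(expected_heterozygosity(pop)      # mean within-population value
              for pop in pop_freqs) / len(pop_freqs)
    return (h_t - h_s) / h_t if h_t else 0.0

# Two populations typed for a biallelic SNP (frequencies are illustrative).
pops = [[0.9, 0.1], [0.5, 0.5]]
print(round(fst(pops), 4))
```

Combining populations into user-defined groups, as SPSmart allows, amounts to recomputing `mean_freqs` over the chosen group before evaluating these statistics.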
Identifying known unknowns using the US EPA's CompTox ...
Chemical features observed using high-resolution mass spectrometry can be tentatively identified using online chemical reference databases by searching molecular formulae and monoisotopic masses and then rank-ordering of the hits using appropriate relevance criteria. The most likely candidate “known unknowns,” which are those chemicals unknown to an investigator but contained within a reference database or literature source, rise to the top of a chemical list when rank-ordered by the number of associated data sources. The U.S. EPA’s CompTox Chemistry Dashboard is a curated and freely available resource for chemistry and computational toxicology research, containing more than 720,000 chemicals of relevance to environmental health science. In this research, the performance of the Dashboard for identifying “known unknowns” was evaluated against that of the online ChemSpider database, one of the primary resources used by mass spectrometrists, using multiple previously studied datasets reported in the peer-reviewed literature totaling 162 chemicals. These chemicals were examined using both applications via molecular formula and monoisotopic mass searches followed by rank-ordering of candidate compounds by associated references or data sources. A greater percentage of chemicals ranked in the top position when using the Dashboard, indicating an advantage of this application over ChemSpider for identifying known unknowns using data source ranking. Addition
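The "known unknowns" workflow (filter candidates by monoisotopic mass within a tolerance, then rank-order by number of associated data sources) can be sketched as follows; the records are hypothetical stand-ins, not actual Dashboard or ChemSpider entries:

```python
def candidates_by_mass(db, query_mass, ppm=5.0):
    """Filter a reference DB by monoisotopic mass within a ppm tolerance,
    then rank candidates by associated data-source count, descending."""
    tol = query_mass * ppm / 1e6
    hits = [c for c in db if abs(c["monoisotopic_mass"] - query_mass) <= tol]
    return sorted(hits, key=lambda c: c["data_sources"], reverse=True)

# Hypothetical records: isomers share a mass, so the data-source count is
# what pushes the most-studied (and most likely) candidate to the top.
db = [
    {"name": "caffeine",    "monoisotopic_mass": 194.0804, "data_sources": 141},
    {"name": "isocaffeine", "monoisotopic_mass": 194.0804, "data_sources": 3},
    {"name": "unrelated",   "monoisotopic_mass": 250.1200, "data_sources": 99},
]
ranked = candidates_by_mass(db, 194.0804)
print([c["name"] for c in ranked])
```

This is why breadth of curation matters for the evaluation described above: a database that associates more data sources with the true compound ranks it first more often.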
Ontological interpretation of biomedical database content.
Santana da Silva, Filipe; Jansen, Ludger; Freitas, Fred; Schulz, Stefan
2017-06-26
Biological databases store data about laboratory experiments, together with semantic annotations, in order to support data aggregation and retrieval. The exact meaning of such annotations in the context of a database record is often ambiguous. We address this problem by grounding implicit and explicit database content in a formal-ontological framework. By using a typical extract from the databases UniProt and Ensembl, annotated with content from GO, PR, ChEBI and NCBI Taxonomy, we created four ontological models (in OWL), which generate explicit, distinct interpretations under the BioTopLite2 (BTL2) upper-level ontology. The first three models interpret database entries as individuals (IND), defined classes (SUBC), and classes with dispositions (DISP), respectively; the fourth model (HYBR) is a combination of SUBC and DISP. For the evaluation of these four models, we consider (i) database content retrieval, using ontologies as query vocabulary; (ii) information completeness; and, (iii) DL complexity and decidability. The models were tested under these criteria against four competency questions (CQs). IND does not raise any ontological claim, besides asserting the existence of sample individuals and relations among them. Modelling patterns have to be created for each type of annotation referent. SUBC is interpreted regarding maximally fine-grained defined subclasses under the classes referred to by the data. DISP attempts to extract truly ontological statements from the database records, claiming the existence of dispositions. HYBR is a hybrid of SUBC and DISP and is more parsimonious regarding expressiveness and query answering complexity. For each of the four models, the four CQs were submitted as DL queries. This shows the ability to retrieve individuals with IND, and classes in SUBC and HYBR. DISP does not retrieve anything because the axioms with disposition are embedded in General Class Inclusion (GCI) statements. 
Ambiguity of biological database content is addressed by a method that identifies implicit knowledge behind semantic annotations in biological databases and grounds it in an expressive upper-level ontology. The result is a seamless representation of database structure, content and annotations as OWL models.
Comprehensive T-Matrix Reference Database: A 2007-2009 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2010-01-01
The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.
RNAcentral: an international database of ncRNA sequences
Williams, Kelly Porter
2014-10-28
The field of non-coding RNA biology has been hampered by the lack of availability of a comprehensive, up-to-date collection of accessioned RNA sequences. Here we present the first release of RNAcentral, a database that collates and integrates information from an international consortium of established RNA sequence databases. The initial release contains over 8.1 million sequences, including representatives of all major functional classes. A web portal (http://rnacentral.org) provides free access to data, search functionality, cross-references, source code and an integrated genome browser for selected species.
Aquatic information and retrieval (AQUIRE) database system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, R.; Niemi, G.; Pilli, A.
The AQUIRE database system is one of the foremost international resources for finding aquatic toxicity information. Information in the system is organized around the concept of an 'aquatic toxicity test.' A toxicity test record contains information about the chemical, species, endpoint, endpoint concentrations, and test conditions under which the toxicity test was conducted. For the past 10 years aquatic literature has been reviewed and entered into the system. Currently, the AQUIRE database system contains data on more than 2,400 species, 160 endpoints, 5,000 chemicals, 6,000 references, and 104,000 toxicity tests.
NASA Astrophysics Data System (ADS)
Masuyama, Keiichi
CD-ROM has rapidly evolved as a new information medium with large capacity. In the U.S., it is predicted to become a two-hundred-billion-yen market within three years, and CD-ROM is thus a strategic target of the database industry. Here in Japan, the movement toward its commercialization has been active since this year. Will the CD-ROM business ever conquer the information market as an on-disk database or electronic publication? Referring to some application cases in the U.S., the author reviews the marketability and the future trend of this new optical disk medium.
The new on-line Czech Food Composition Database.
Machackova, Marie; Holasova, Marie; Maskova, Eva
2013-10-01
The new on-line Czech Food Composition Database (FCDB) was launched at http://www.czfcdb.cz in December 2010 as the main freely available channel for dissemination of Czech food composition data. The application is based on a compiled FCDB documented according to the EuroFIR standardised procedure for full value documentation and indexing of foods by the LanguaL™ Thesaurus. A content management system was implemented for administration of the website and for data export (comma-separated values or EuroFIR XML transport package formats) by a compiler. A reference is provided for each published value, with links to freely accessible on-line data sources (e.g. full texts, the EuroFIR Document Repository, on-line national FCDBs). LanguaL™ codes are displayed within each food record as searchable keywords of the database. A photo (or a photo gallery) serves as a visual descriptor of each food item. The application is searchable by food, component, food group, and alphabet, and offers a multi-field advanced search. Copyright © 2013 Elsevier Ltd. All rights reserved.
MetaboLights: An Open-Access Database Repository for Metabolomics Data.
Kale, Namrata S; Haug, Kenneth; Conesa, Pablo; Jayseelan, Kalaivani; Moreno, Pablo; Rocca-Serra, Philippe; Nainala, Venkata Chandrasekhar; Spicer, Rachel A; Williams, Mark; Li, Xuefei; Salek, Reza M; Griffin, Julian L; Steinbeck, Christoph
2016-03-24
MetaboLights is the first general purpose, open-access database repository for cross-platform and cross-species metabolomics research at the European Bioinformatics Institute (EMBL-EBI). Based upon the open-source ISA framework, MetaboLights provides Metabolomics Standard Initiative (MSI) compliant metadata and raw experimental data associated with metabolomics experiments. Users can upload their study datasets into the MetaboLights Repository. These studies are then automatically assigned a stable and unique identifier (e.g., MTBLS1) that can be used for publication reference. The MetaboLights Reference Layer associates metabolites with metabolomics studies in the archive and is extensively annotated with data fields such as structural and chemical information, NMR and MS spectra, target species, metabolic pathways, and reactions. The database is manually curated with no specific release schedules. MetaboLights is also recommended by journals for metabolomics data deposition. This unit provides a guide to using MetaboLights, downloading experimental data, and depositing metabolomics datasets using user-friendly submission tools. Copyright © 2016 John Wiley & Sons, Inc.
STRBase: a short tandem repeat DNA database for the human identity testing community
Ruitberg, Christian M.; Reeder, Dennis J.; Butler, John M.
2001-01-01
The National Institute of Standards and Technology (NIST) has compiled and maintained a Short Tandem Repeat DNA Internet Database (http://www.cstl.nist.gov/biotech/strbase/) since 1997, commonly referred to as STRBase. This database is an information resource for the forensic DNA typing community, with details on commonly used short tandem repeat (STR) DNA markers. STRBase consolidates and organizes the abundant literature on this subject to facilitate ongoing efforts in DNA typing. Observed alleles and annotated sequence for each STR locus are described, along with a review of STR analysis technologies. Additionally, commercially available STR multiplex kits are described, published polymerase chain reaction (PCR) primer sequences are reported, and validation studies conducted by a number of forensic laboratories are listed. To supplement the technical information, addresses for scientists and hyperlinks to organizations working in this area are available, along with a comprehensive reference list of over 1300 publications on STRs used for DNA typing purposes. PMID:11125125
Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders.
Hamosh, Ada; Scott, Alan F; Amberger, Joanna S; Bocchini, Carol A; McKusick, Victor A
2005-01-01
Online Mendelian Inheritance in Man (OMIM) is a comprehensive, authoritative and timely knowledgebase of human genes and genetic disorders compiled to support human genetics research and education and the practice of clinical genetics. Started by Dr Victor A. McKusick as the definitive reference Mendelian Inheritance in Man, OMIM (http://www.ncbi.nlm.nih.gov/omim/) is now distributed electronically by the National Center for Biotechnology Information, where it is integrated with the Entrez suite of databases. Derived from the biomedical literature, OMIM is written and edited at Johns Hopkins University with input from scientists and physicians around the world. Each OMIM entry has a full-text summary of a genetically determined phenotype and/or gene and has numerous links to other genetic databases such as DNA and protein sequence, PubMed references, general and locus-specific mutation databases, HUGO nomenclature, MapViewer, GeneTests, patient support groups and many others. OMIM is an easy and straightforward portal to the burgeoning information in human genetics.
Non-animal methods to predict skin sensitization (I): the Cosmetics Europe database.
Hoffmann, Sebastian; Kleinstreuer, Nicole; Alépée, Nathalie; Allen, David; Api, Anne Marie; Ashikaga, Takao; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Goebel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Lalko, Jon F; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Parakhia, Rahul; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk
2018-05-01
Cosmetics Europe, the European Trade Association for the cosmetics and personal care industry, is conducting a multi-phase program to develop regulatory accepted, animal-free testing strategies enabling the cosmetics industry to conduct safety assessments. Based on a systematic evaluation of test methods for skin sensitization, five non-animal test methods (DPRA (Direct Peptide Reactivity Assay), KeratinoSens™, h-CLAT (human cell line activation test), U-SENS™, SENS-IS) were selected for inclusion in a comprehensive database of 128 substances. Existing data were compiled and completed with newly generated data, the latter amounting to one-third of all data. The database was complemented with human and local lymph node assay (LLNA) reference data, physicochemical properties and use categories, and thoroughly curated. Although substance selection focused on the availability of human data, it nevertheless resulted in a high diversity of chemistries in terms of physico-chemical property ranges and use categories. Predictivities of skin sensitization potential and potency, where applicable, were calculated for the LLNA as compared to human data and for the individual test methods compared to both human and LLNA reference data. In addition, various aspects of applicability of the test methods were analyzed. Due to its high level of curation, comprehensiveness, and completeness, we propose our database as a point of reference for the evaluation and development of testing strategies, as done for example in the associated work of Kleinstreuer et al. We encourage the community to use it to meet the challenge of conducting skin sensitization safety assessment without generating new animal data.
Barron, Andrew D.; Ramsey, David W.; Smith, James G.
2014-01-01
This digital database contains information used to produce the geologic map published as Sheet 1 in U.S. Geological Survey Miscellaneous Investigations Series Map I-2005. (Sheet 2 of Map I-2005 shows sources of geologic data used in the compilation and is available separately). Sheet 1 of Map I-2005 shows the distribution and relations of volcanic and related rock units in the Cascade Range of Washington at a scale of 1:500,000. This digital release is produced from stable materials originally compiled at 1:250,000 scale that were used to publish Sheet 1. The database therefore contains more detailed geologic information than is portrayed on Sheet 1. This is most noticeable in the database as expanded polygons of surficial units and the presence of additional strands of concealed faults. No stable compilation materials exist for Sheet 1 at 1:500,000 scale. The main component of this digital release is a spatial database prepared using geographic information systems (GIS) applications. This release also contains links to files to view or print the map sheet, main report text, and accompanying mapping reference sheet from Map I-2005. For more information on volcanoes in the Cascade Range in Washington, Oregon, or California, please refer to the U.S. Geological Survey Volcano Hazards Program website.
The reference ballistic imaging database revisited.
De Ceuster, Jan; Dujardin, Sylvain
2015-03-01
A reference ballistic image database (RBID) contains images of cartridge cases fired in firearms that are in circulation: a ballistic fingerprint database. The performance of an RBID was investigated a decade ago by De Kinder et al. using IBIS(®) Heritage™ technology. The results of that study were published in this journal, issue 214. Since then, technologies have evolved quite significantly and novel apparatus have become available on the market. The current research article investigates the efficiency of another automated ballistic imaging system, Evofinder(®), using the same database as used by De Kinder et al. The results demonstrate a significant increase in correlation efficiency: 38% of all matches were in first position of the Evofinder correlation list, compared to IBIS(®) Heritage™ where only 19% were in first position. Average correlation times are comparable to those of the IBIS(®) Heritage™ system. While Evofinder(®) demonstrates specific improvement in mutually correlating different ammunition brands, the markings still depend strongly on the ammunition used, and this ammunition dependence continues to influence the correlation result. As a consequence, a substantial proportion of potential hits (36%) were still far down in the correlation lists (positions 31 and lower). The large database was used to examine the probability of finding a match as a function of correlation list verification. As an example, the RBID study on Evofinder(®) demonstrates that to find at least 90% of all potential matches, at least 43% of the items in the database need to be compared on screen, and this separately for breech face markings and firing pin impressions. These results, although a clear improvement on the original RBID study, indicate that the implementation of such a database should still not be considered at present. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
System and method for authentication
Duerksen, Gary L.; Miller, Seth A.
2015-12-29
Described are methods and systems for determining authenticity. For example, the method may include providing an object of authentication, capturing characteristic data from the object of authentication, deriving authentication data from the characteristic data of the object of authentication, and comparing the authentication data with an electronic database comprising reference authentication data to provide an authenticity score for the object of authentication. The reference authentication data may correspond to one or more reference objects of authentication other than the object of authentication.
Functionally Graded Materials Database
NASA Astrophysics Data System (ADS)
Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki
2008-02-01
The Functionally Graded Materials Database (hereinafter referred to as the FGMs Database) was opened to the public via the Internet in October 2002 and has since been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database included 1,703 research information entries, together with data on 2,429 researchers and 509 institutions. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively; an English version of the latter is under preparation. This paper describes the FGMs Database, covering the research information data, the sitemap and how to use it, and discusses user access results and users' interests based on access analysis.
The Magnetics Information Consortium (MagIC)
NASA Astrophysics Data System (ADS)
Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.
2003-12-01
The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. 
The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. Finally, the contents of these template files will be automatically parsed into the online relational database.
An Approach for Selecting a Theoretical Framework for the Evaluation of Training Programs
ERIC Educational Resources Information Center
Tasca, Jorge Eduardo; Ensslin, Leonardo; Ensslin, Sandra Rolim; Alves, Maria Bernardete Martins
2010-01-01
Purpose: This research paper proposes a method for selecting references related to a research topic, and seeks to exemplify it for the case of a study evaluating training programs. The method is designed to identify references with high academic relevance in databases accessed via the internet, using a bibliometric analysis to sift the selected…
NASA Technical Reports Server (NTRS)
1997-01-01
The bibliography contains citations concerning analytical techniques using constitutive equations, applied to materials under stress. The properties explored with these techniques include viscoelasticity, thermoelasticity, and plasticity. While many of the references are general as to material type, most refer to specific metals or composites, or to specific shapes, such as flat plate or spherical vessels.
Charoute, Hicham; Nahili, Halima; Abidi, Omar; Gabi, Khalid; Rouba, Hassan; Fakiri, Malika; Barakat, Abdelhamid
2014-03-01
National and ethnic mutation databases provide comprehensive information about genetic variations reported in a population or an ethnic group. In this paper, we present the Moroccan Genetic Disease Database (MGDD), a catalogue of genetic data related to diseases identified in the Moroccan population. We used the PubMed, Web of Science and Google Scholar databases to identify available articles published until April 2013. The database is designed and implemented on a three-tier model using the MySQL relational database and the PHP programming language. To date, the database contains 425 mutations and 208 polymorphisms found in 301 genes and 259 diseases. Most Mendelian diseases in the Moroccan population follow an autosomal recessive mode of inheritance (74.17%) and affect endocrine, nutritional and metabolic physiology. The MGDD database provides reference information for researchers, clinicians and health professionals through a user-friendly Web interface. Its content should be useful for improving research in human molecular genetics, disease diagnosis and the design of association studies. MGDD can be publicly accessed at http://mgdd.pasteur.ma.
Construction of 3-D Earth Models for Station Specific Path Corrections by Dynamic Ray Tracing
2001-10-01
the numerical eikonal solution method of Vidale (1988) being used by the MIT-led consortium. The model construction described in this report relies ... assembled. REFERENCES: Barazangi, M., Fielding, E., Isacks, B. & Seber, D. (1996), Geophysical and Geological Databases and CTBT ...; Fielding, E., Isacks, B.L., and Barazangi, M. (1992), A Network Accessible Geological and Geophysical Database for ...
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them is the management of the massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the commonly adopted relational database model is therefore a compelling task. Other data models may be more effective when dealing with very large amounts of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
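To illustrate why a wide-column store such as Cassandra suits write-heavy genomic workloads, the sketch below mimics its partition-key/clustering-column layout with plain Python dictionaries. The schema (chromosome as partition key, position as clustering key) and all field names are hypothetical examples for illustration, not the schema evaluated in the paper.

```python
from collections import defaultdict

class WideColumnTable:
    """Toy model of a Cassandra-style wide-column table: rows are grouped
    by partition key and ordered by clustering key, so writes are simple
    appends and range reads stay within a single partition."""

    def __init__(self):
        # partition key -> {clustering key: column values}
        self._partitions = defaultdict(dict)

    def insert(self, partition_key, clustering_key, **columns):
        self._partitions[partition_key][clustering_key] = columns

    def range_query(self, partition_key, lo, hi):
        # Analogous to: SELECT * WHERE chrom = ? AND pos >= ? AND pos < ?
        part = self._partitions[partition_key]
        return [part[k] | {"pos": k} for k in sorted(part) if lo <= k < hi]

# Hypothetical table of sequencing reads keyed by (chromosome, position).
reads = WideColumnTable()
reads.insert("chr1", 12_040, read_id="r1", base="A", qual=37)
reads.insert("chr1", 12_055, read_id="r2", base="G", qual=40)
reads.insert("chr2", 500, read_id="r3", base="T", qual=28)

hits = reads.range_query("chr1", 12_000, 12_050)
print(hits)  # only the chr1 read at position 12,040
```

A real deployment would declare this layout in CQL and let Cassandra distribute partitions across nodes; the point of the sketch is only the data model that makes sequential genomic writes and per-chromosome range reads cheap.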
R-Syst::diatom: an open-access and curated barcode database for diatoms and freshwater monitoring.
Rimet, Frédéric; Chaumeil, Philippe; Keck, François; Kermarrec, Lenaïg; Vasselon, Valentin; Kahlert, Maria; Franc, Alain; Bouchez, Agnès
2016-01-01
Diatoms are micro-algal indicators of freshwater pollution. Current standardized methodologies are based on microscopic determinations, which are time-consuming and prone to identification uncertainties. The use of DNA barcoding has been proposed as a way to avoid these flaws. Combining barcoding with next-generation sequencing enables collection of a large quantity of barcodes from natural samples. These barcodes are identified as particular diatom taxa by algorithmically comparing the sequences to a reference barcoding library. Proof of concept was recently demonstrated for synthetic and natural communities and underlined the importance of the quality of this reference library. We present an open-access and curated reference barcoding database for diatoms, called R-Syst::diatom, developed in the framework of R-Syst, the network of systematics supported by INRA (French National Institute for Agricultural Research); see http://www.rsyst.inra.fr/en. R-Syst::diatom links DNA barcodes to their taxonomic identifications and is dedicated to identifying barcodes from natural samples. The data come from two sources: a culture collection of freshwater algae maintained at INRA, in which new strains are regularly deposited and barcoded, and the NCBI (National Center for Biotechnology Information) nucleotide database. Two kinds of barcodes were chosen to support the database because of their efficiency: 18S (18S ribosomal RNA) and rbcL (ribulose-1,5-bisphosphate carboxylase/oxygenase). Data are curated using innovative (Declic) and classical bioinformatic tools (BLAST, classical phylogenies) and up-to-date taxonomy (catalogues and peer-reviewed papers). R-Syst::diatom is updated every 6 months. The database is available through the R-Syst microalgae website (http://www.rsyst.inra.fr/) and a platform dedicated to next-generation sequencing data analysis, virtual_BiodiversityL@b (https://galaxy-pgtp.pierroton.inra.fr/). 
We present here the content of the library in terms of the number of barcodes and diatom taxa. In addition to this information, morphological features (e.g. biovolumes, chloroplasts…), life-forms (mobility, colony type) and ecological features (taxa preferenda to pollution) are indicated in R-Syst::diatom. Database URL: http://www.rsyst.inra.fr/. © The Author(s) 2016. Published by Oxford University Press.
Reference manual for data base on Nevada water-rights permits
Cartier, K.D.; Bauer, E.M.; Farnham, J.L.
1995-01-01
The U.S. Geological Survey and Nevada Division of Water Resources have cooperatively developed and implemented a database system for managing water-rights permit information for the State of Nevada. The Water-Rights Permit database is part of an integrated system of computer databases using the Ingres Relational Database Management System, which allows efficient storage of and access to water information from the State Engineer's office. The database contains a main table, three ancillary tables, and five lookup tables, as well as a menu-driven system for entering, updating, and reporting on the data. This reference guide outlines the general functions of the system and provides a brief description of the data tables and data-entry screens.
Improved bacteriophage genome data is necessary for integrating viral and bacterial ecology.
Bibby, Kyle
2014-02-01
The recent rise in "omics"-enabled approaches has led to improved understanding in many areas of microbial ecology. However, despite the importance of viruses in a broad microbial ecology context, viral ecology remains largely unintegrated into high-throughput microbial ecology studies. A fundamental hindrance to this integration is the lack of suitable reference bacteriophage genomes in reference databases: currently, only 0.001% of bacteriophage diversity is represented in genome sequence databases. This commentary serves to highlight this issue and to promote bacteriophage genome sequencing as a valuable scientific undertaking, both to better understand bacteriophage diversity and to move toward a more holistic view of microbial ecology.
Marklin, Richard W; Saginus, Kyle A; Seeley, Patricia; Freier, Stephen H
2010-12-01
The primary purpose of this study was to determine whether conventional anthropometric databases of the U.S. general population are applicable to the population of U.S. electric utility field-workers. On the basis of anecdotal observations, field-workers for electric power utilities were thought to be generally taller and larger than the general population. However, there were no anthropometric data available on this population, and it was not known whether the conventional anthropometric databases could be used to design for this population. For this study, 3 standing and 11 sitting anthropometric measurements were taken from 187 male field-workers from three electric power utilities located in the upper Midwest of the United States and Southern California. The mean and percentile anthropometric data from field-workers were compared with seven well-known conventional anthropometric databases for North American males (United States, Canada, and Mexico). In general, the male field-workers were taller and heavier than the people in the reference databases for U.S. males. The field-workers were up to 2.3 cm taller and 10 kg to 18 kg heavier than the averages of the reference databases. This study was justified, as it showed that the conventional anthropometric databases of the general population underestimated the size of electric utility field-workers, particularly with respect to weight. When designing vehicles and tools for electric utility field-workers, designers and ergonomists should consider the population being designed for and the data from this study to maximize safety, minimize risk of injuries, and optimize performance.
Weirick, Tyler; John, David; Uchida, Shizuka
2017-03-01
Maintaining the consistency of genomic annotations is an increasingly complex task because of the iterative and dynamic nature of assembly and annotation, growing numbers of biological databases and insufficient integration of annotations across databases. As information exchange among databases is poor, a 'novel' sequence from one reference annotation could be annotated in another. Furthermore, relationships to nearby or overlapping annotated transcripts are even more complicated when using different genome assemblies. To better understand these problems, we surveyed current and previous versions of genomic assemblies and annotations across a number of public databases containing long noncoding RNA. We identified numerous discrepancies of transcripts regarding their genomic locations, transcript lengths and identifiers. Further investigation showed that the positional differences between reference annotations of essentially the same transcript could lead to differences in its measured expression at the RNA level. To aid in resolving these problems, we present the algorithm 'Universal Genomic Accession Hash (UGAHash)' and created an open source web tool to encourage the usage of the UGAHash algorithm. The UGAHash web tool (http://ugahash.uni-frankfurt.de) can be accessed freely without registration. The web tool allows researchers to generate Universal Genomic Accessions for genomic features or to explore annotations deposited in the public databases of the past and present versions. We anticipate that the UGAHash web tool will be a valuable tool to check for the existence of transcripts before judging the newly discovered transcripts as novel. © The Author 2016. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
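The core idea described above, deriving a stable identifier from a feature's genomic coordinates rather than from any one database's accession, can be sketched as follows. This is a hedged toy illustration: the actual UGAHash algorithm is not reproduced here, and the `toy_genomic_accession` function, its prefix, and the example coordinates are all hypothetical.

```python
import hashlib

def toy_genomic_accession(assembly, chrom, start, end, strand):
    """Toy coordinate-derived accession: any two annotations with the same
    assembly and coordinates map to the same identifier, regardless of
    which database they came from. (Not the real UGAHash algorithm.)"""
    key = f"{assembly}|{chrom}|{start}|{end}|{strand}"
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return "UGA-" + digest[:12].upper()

# The same transcript recorded in two databases yields one accession...
a = toy_genomic_accession("GRCh38", "chr7", 5_527_151, 5_530_601, "-")
b = toy_genomic_accession("GRCh38", "chr7", 5_527_151, 5_530_601, "-")
# ...while a one-base positional discrepancy yields a different one,
# exposing exactly the kind of annotation mismatch the paper describes.
c = toy_genomic_accession("GRCh38", "chr7", 5_527_152, 5_530_601, "-")
print(a == b, a == c)  # True False
```

Such a scheme makes identifier collisions informative: two databases agreeing on an accession necessarily agree on the coordinates, so a "novel" transcript can be checked by hashing before it is declared new.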
Mouse IDGenes: a reference database for genetic interactions in the developing mouse brain
Matthes, Michaela; Preusse, Martin; Zhang, Jingzhong; Schechter, Julia; Mayer, Daniela; Lentes, Bernd; Theis, Fabian; Prakash, Nilima; Wurst, Wolfgang; Trümbach, Dietrich
2014-01-01
The study of developmental processes in the mouse and other vertebrates includes the understanding of patterning along the anterior–posterior, dorsal–ventral and medial–lateral axes. Specifically, neural development is also of great clinical relevance because several human neuropsychiatric disorders such as schizophrenia, autism disorders or drug addiction, and also brain malformations, are thought to have neurodevelopmental origins, i.e. pathogenesis initiates during childhood and adolescence. Impacts during early neurodevelopment might also predispose to late-onset neurodegenerative disorders, such as Parkinson's disease. The neural tube develops from its precursor tissue, the neural plate, in a patterning process that is determined by compartmentalization into morphogenetic units, the action of local signaling centers and a well-defined and locally restricted expression of genes and their interactions. While public databases provide gene expression data with spatio-temporal resolution, they usually neglect the genetic interactions that govern neural development. Here, we introduce Mouse IDGenes, a reference database for genetic interactions in the developing mouse brain. The database is highly curated and offers detailed information about gene expression and the genetic interactions at the developing mid-/hindbrain boundary. To showcase the predictive power of interaction data, we infer new Wnt/β-catenin target genes by machine learning and validate one of them experimentally. The database is updated regularly. Moreover, it can easily be extended by the research community. Mouse IDGenes will contribute as an important resource to research on mouse brain development, not exclusively by offering data retrieval, but also by allowing data input. Database URL: http://mouseidgenes.helmholtz-muenchen.de. PMID:25145340
Mathis, Alexander; Depaquit, Jérôme; Dvořák, Vit; Tuten, Holly; Bañuls, Anne-Laure; Halada, Petr; Zapata, Sonia; Lehrter, Véronique; Hlavačková, Kristýna; Prudhomme, Jorian; Volf, Petr; Sereno, Denis; Kaufmann, Christian; Pflüger, Valentin; Schaffner, Francis
2015-05-10
Rapid, accurate and high-throughput identification of vector arthropods is of paramount importance in surveillance programmes that are becoming more common due to the changing geographic occurrence and extent of many arthropod-borne diseases. Protein profiling by MALDI-TOF mass spectrometry fulfils these requirements for identification, and reference databases have recently been established for several vector taxa, mostly with specimens from laboratory colonies. We established and validated a reference database containing 20 phlebotomine sand fly (Diptera: Psychodidae, Phlebotominae) species by using specimens from colonies or field collections that had been stored for various periods of time. Identical biomarker mass patterns ('superspectra') were obtained with colony- or field-derived specimens of the same species. In the validation study, high quality spectra (i.e. more than 30 evaluable masses) were obtained with all fresh insects from colonies, and with 55/59 insects deep-frozen (liquid nitrogen/-80 °C) for up to 25 years. In contrast, only 36/52 specimens stored in ethanol could be identified. This resulted in an overall sensitivity of 87 % (140/161); specificity was 100 %. Duration of storage impaired data counts in the high mass range, and thus cluster analyses of closely related specimens might reflect their storage conditions rather than phenotypic distinctness. A major drawback of MALDI-TOF MS is the restricted availability of in-house databases and the fact that mass spectrometers from two companies (Bruker, Shimadzu) are widely used. We analysed fingerprints of phlebotomine sand flies obtained by an automatic routine procedure on a Bruker instrument using our database and the software established on a Shimadzu system. The sensitivity with 312 specimens from 8 sand fly species from laboratory colonies, when evaluating only high quality spectra, was 98.3 %; the specificity was 100 %. 
The corresponding diagnostic values with 55 field-collected specimens from 4 species were 94.7 % and 97.4 %, respectively. A centralized high-quality database (created by expert taxonomists and experienced users of mass spectrometers) that is easily amenable to customer-oriented identification services is a highly desirable resource. As shown in the present work, spectra obtained from different specimens with different instruments can be analysed using a centralized database, which should be available in the near future via an online platform in a cost-efficient manner.
Coupling GIS and multivariate approaches to reference site selection for wadeable stream monitoring.
Collier, Kevin J; Haigh, Andy; Kelly, Johlene
2007-04-01
A Geographic Information System (GIS) was used to identify potential reference sites for wadeable stream monitoring, and multivariate analyses were applied to test whether invertebrate communities reflected a priori spatial and stream type classifications. We identified potential reference sites in segments with unmodified vegetation cover adjacent to the stream and in >85% of the upstream catchment. We then used various landcover, amenity and environmental impact databases to eliminate sites that had potential anthropogenic influences upstream and that fell into a range of access classes. Each site identified by this process was coded by four dominant stream classes and seven zones, and 119 candidate sites were randomly selected for follow-up assessment. This process yielded 16 sites conforming to reference site criteria using a conditional-probabilistic design, and these were augmented by an additional 14 existing or special interest reference sites. Non-metric multidimensional scaling (NMS) analysis of percent abundance invertebrate data indicated significant differences in community composition among some of the zones and stream classes identified a priori, providing qualified support for this framework in reference site selection. NMS analysis of range-standardised condition and diversity metrics derived from the invertebrate data indicated a core set of 26 closely related sites, and four outliers that were considered atypical of reference site conditions and subsequently dropped from the network. Use of GIS linked to stream typology, available spatial databases and aerial photography greatly enhanced the objectivity and efficiency of reference site selection. The multi-metric ordination approach reduced variability among stream types and bias associated with non-random site selection, and provided an effective way to identify representative reference sites.
NASA Astrophysics Data System (ADS)
Truckenbrodt, Sina C.; Schmullius, Christiane C.
2018-03-01
Ground reference data are a prerequisite for the calibration, update, and validation of retrieval models facilitating the monitoring of land parameters based on Earth Observation data. Here, we describe the acquisition of a comprehensive ground reference database which was created to test and validate the recently developed Earth Observation Land Data Assimilation System (EO-LDAS) and products derived from remote sensing observations in the visible and infrared range. In situ data were collected for seven crop types (winter barley, winter wheat, spring wheat, durum, winter rape, potato, and sugar beet) cultivated on the agricultural Gebesee test site, central Germany, in 2013 and 2014. The database contains information on hyperspectral surface reflectance factors, the evolution of biophysical and biochemical plant parameters, phenology, surface conditions, atmospheric states, and a set of ground control points. Ground reference data were gathered at an approximately weekly resolution and on different spatial scales to investigate variations within and between acreages. In situ data collected less than 1 day apart from satellite acquisitions (RapidEye, SPOT 5, Landsat-7 and -8) with a cloud coverage ≤ 25 % are available for 10 and 15 days in 2013 and 2014, respectively. The measurements show that the investigated growing seasons were characterized by distinct meteorological conditions causing interannual variations in the parameter evolution. Here, the experimental design of the field campaigns, and methods employed in the determination of all parameters, are described in detail. Insights into the database are provided and potential fields of application are discussed. The data will contribute to a further development of crop monitoring methods based on remote sensing techniques. The database is freely available at PANGAEA (https://doi.org/10.1594/PANGAEA.874251).
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2011-02-15
Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings. Conclusions: The LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice.
MODEL-BASED HYDROACOUSTIC BLOCKAGE ASSESSMENT AND DEVELOPMENT OF AN EXPLOSIVE SOURCE DATABASE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzel, E; Ramirez, A; Harben, P
2005-07-11
We are continuing the development of the Hydroacoustic Blockage Assessment Tool (HABAT), which is designed for use by analysts to predict which hydroacoustic monitoring stations can be used in discrimination analysis for any particular event. The research involves two approaches: (1) model-based assessment of blockage, and (2) ground-truth data-based assessment of blockage. The tool presents the analyst with a map of the world, and plots raypath blockages from stations to sources. The analyst inputs source locations and blockage criteria, and the tool returns a list of blockage status from all source locations to all hydroacoustic stations. We are currently using the tool in an assessment of blockage criteria for simple direct-path arrivals. Hydroacoustic data, predominantly from earthquake sources, are read in and assessed for blockage at all available stations. Several measures are taken. First, can the event be observed at a station above background noise? Second, can we establish the backazimuth from the station to the source? Third, how large is the decibel drop at one station relative to other stations? These observational results are then compared with model estimates to identify the best set of blockage criteria and used to create a set of blockage maps for each station. The model-based estimates are currently limited by the coarse bathymetry of existing databases and by the limitations inherent in the raytrace method. In collaboration with BBN Inc., the Hydroacoustic Coverage Assessment Model (HydroCAM), which generates the blockage files that serve as input to HABAT, is being extended to include high-resolution bathymetry databases in key areas that increase model-based blockage assessment reliability. An important aspect of this capability is to eventually include reflected T-phases where they reliably occur and to identify the associated reflectors.
To assess how well any given hydroacoustic discriminant works in separating earthquake and in-water explosion populations, it is necessary to have both a database of reference earthquake events and one of reference in-water explosive events. Although reference earthquake events are readily available, explosive reference events are not. Consequently, building an in-water explosion reference database requires the compilation of events from many sources spanning a long period of time. We have developed a database of small implosive and explosive reference events from the 2003 Indian Ocean Cruise data. These events were recorded at some or all of the IMS Indian Ocean hydroacoustic stations: Diego Garcia, Cape Leeuwin, and Crozet Island. We have also reviewed many historical large in-water explosions and identified five that have adequate source information and can be positively associated to the hydrophone recordings. The five events are: Cannikin, Longshot, CHASE-3, CHASE-5, and IITRI-1. Of these, the first two are nuclear tests on land but near water. The latter three are in-water conventional explosive events with yields from ten to hundreds of tons TNT equivalent. The objective of this research is to enhance discrimination capabilities for events located in the world's oceans. Two research and development efforts are needed to achieve this: (1) improvement in discrimination algorithms and their joint statistical application to events, and (2) development of an automated and accurate blockage prediction capability that will identify all stations and phases (direct and reflected) from a given event that will have adequate signal to be used in a discrimination analysis. The strategy for improving blockage prediction in the world's oceans is to improve model-based prediction of blockage and to develop a ground-truth database of reference events to assess blockage. Currently, research is focused on the development of a blockage assessment software tool.
The tool is envisioned to develop into a sophisticated and unifying package that optimally and automatically assesses both model- and data-based blockage predictions in all ocean basins, for all NDC stations, accounting for reflected phases (Pulli et al., 2000). Currently, we have focused our efforts on the Diego Garcia, Cape Leeuwin and Crozet Island hydroacoustic stations in the Indian Ocean.
Black, J A; Waggamon, K A
1992-01-01
An isoelectric focusing method using thin-layer agarose gel has been developed for wheat gliadin. Using flat-bed units with a third electrode, up to 72 samples per gel may be analyzed. Advantages over traditional acid polyacrylamide gel electrophoresis methodology include faster run times, nontoxic media, and greater sample capacity. The method is suitable for fingerprinting or purity testing of wheat varieties. Using digital images captured by a flat-bed scanner, a 4-band reference system using isoelectric points was devised. Software enables separated bands to be assigned pI values based upon reference tracks. Precision of assigned isoelectric points is shown to be on the order of 0.02 pH units. Captured images may be stored in a computer database and compared with unknown patterns to enable identification. Match parameters, such as the pI interval required for a match and the number of best matches returned, may be adjusted.
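The pattern-matching step described in this abstract (an adjustable pI interval and a configurable number of best matches) amounts to a tolerance search over stored band patterns. The sketch below is an illustrative reconstruction under that reading, not the authors' software; the function name, signature, and default tolerance are assumptions.

```python
from typing import Dict, List, Tuple

def match_pattern(
    unknown: List[float],
    database: Dict[str, List[float]],
    pi_tolerance: float = 0.05,
    n_best: int = 3,
) -> List[Tuple[str, int]]:
    """Rank stored gliadin band patterns by how many bands of the
    unknown sample fall within pi_tolerance of a stored band's pI."""
    scores = []
    for variety, bands in database.items():
        matched = sum(
            any(abs(u - b) <= pi_tolerance for b in bands) for u in unknown
        )
        scores.append((variety, matched))
    # Highest number of matched bands first; keep the n best candidates.
    scores.sort(key=lambda s: s[1], reverse=True)
    return scores[:n_best]

# Hypothetical pI values for two stored varieties and an unknown sample.
db = {"A": [5.10, 5.62, 6.03], "B": [5.30, 5.95]}
ranked = match_pattern([5.11, 6.02], db, pi_tolerance=0.02, n_best=2)
```

Tightening `pi_tolerance` toward the reported 0.02 pH-unit precision makes the search correspondingly stricter.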
Reduced reference image quality assessment via sub-image similarity based redundancy measurement
NASA Astrophysics Data System (ADS)
Mou, Xuanqin; Xue, Wufeng; Zhang, Lei
2012-03-01
The reduced reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its loyalty to human perception and flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented, whose novelty lies in two aspects. Firstly, it measures the image redundancy by calculating the so-called Sub-image Similarity (SIS), and the image quality is measured by comparing the SIS between the reference image and the test image. Secondly, the SIS is computed by the ratios of NSE (Non-shift Edge) between pairs of sub-images. Experiments on two IQA databases (i.e. LIVE and CSIQ databases) show that by using only 6 features, the proposed metric can work very well with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.
CORUM: the comprehensive resource of mammalian protein complexes
Ruepp, Andreas; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Stransky, Michael; Waegele, Brigitte; Schmidt, Thorsten; Doudieu, Octave Noubibou; Stümpflen, Volker; Mewes, H. Werner
2008-01-01
Protein complexes are key molecular entities that integrate multiple gene products to perform cellular functions. The CORUM (http://mips.gsf.de/genre/proj/corum/index.html) database is a collection of experimentally verified mammalian protein complexes. Information is manually derived by expert annotators through critical reading of the scientific literature. Information about protein complexes includes protein complex names, subunits, literature references as well as the function of the complexes. For functional annotation, we use the FunCat catalogue, which makes it possible to organize the protein complex space into biologically meaningful subsets. The database contains more than 1750 protein complexes that are built from 2400 different genes, thus representing 12% of the protein-coding genes in human. A web-based system is available to query, view and download the data. CORUM provides a comprehensive dataset of protein complexes for discoveries in systems biology, analyses of protein networks and protein complex-associated diseases. Comparable to the MIPS reference dataset of protein complexes from yeast, CORUM intends to serve as a reference for mammalian protein complexes. PMID:17965090
Shao, Wei; Shan, Jigui; Kearney, Mary F; Wu, Xiaolin; Maldarelli, Frank; Mellors, John W; Luke, Brian; Coffin, John M; Hughes, Stephen H
2016-07-04
The NCI Retrovirus Integration Database is a MySQL-based relational database created for storing and retrieving comprehensive information about retroviral integration sites, primarily, but not exclusively, HIV-1. The database is accessible to the public for submission or extraction of data originating from experiments aimed at collecting information related to retroviral integration sites, including: the site of integration into the host genome, the virus family and subtype, the origin of the sample, gene exons/introns associated with integration, and proviral orientation. Information about the references from which the data were collected is also stored in the database. Tools are built into the website that can be used to map the integration sites to the UCSC genome browser, to plot the integration site patterns on a chromosome, and to display provirus LTRs in their inserted genome sequence. The website is robust, user friendly, and allows users to query the database and analyze the data dynamically. https://rid.ncifcrf.gov ; or http://home.ncifcrf.gov/hivdrp/resources.htm .
NASA Astrophysics Data System (ADS)
Geyer, Adelina; Marti, Joan
2015-04-01
Collapse calderas are one of the most important volcanic structures, not only because of their hazard implications, but also because of their high geothermal energy potential and their association with mineral deposits of economic interest. In 2008 we presented a new general worldwide Collapse Caldera DataBase (CCDB), in order to provide a useful and accessible tool for studying and understanding caldera collapse processes. The principal aim of the CCDB is to update the current field-based knowledge on calderas, merging together the existing databases and complementing them with new examples found in the bibliography, and leaving it open for the incorporation of new data from future studies. Currently, the database includes over 450 documented calderas around the world, aiming to be representative enough to promote further studies and analyses. We have performed a comprehensive compilation of published field studies of collapse calderas including more than 500 references, and their information has been summarized in a database linked to a Geographical Information System (GIS) application. Thus, it is possible to visualize the selected calderas on a world map and to filter them according to different features recorded in the database (e.g. age, structure). The information recorded in the CCDB can be grouped in seven main information classes: caldera features, properties of the caldera-forming deposits, magmatic system, geodynamic setting, pre-caldera volcanism, caldera-forming eruption sequence and post-caldera activity. Additionally, we have added two extra classes. The first records the references consulted for each caldera. The second allows users to introduce comments on the caldera sample, such as possible controversies concerning the caldera origin. During the last seven years, the database has been available on-line at http://www.gvb-csic.es/CCDB.htm upon prior registration.
This year, the CCDB webpage will be updated and improved so the database content can be queried on-line. This research was partially funded by the research fellowship RYC-2012-11024.
ERIC Educational Resources Information Center
Balcazar, Fabricio E.; Oberoi, Ashmeet K.; Suarez-Balcazar, Yolanda; Alvarado, Francisco
2012-01-01
A review of vocational rehabilitation (VR) data from a Midwestern state was conducted to identify predictors of rehabilitation outcomes for African American consumers. The database included 37,404 African Americans who were referred or self-referred over a period of five years. Logistic regression analysis indicated that except for age and…
ERIC Educational Resources Information Center
Blank, Lindsay; Baxter, Susan K.; Payne, Nick; Guillaume, Louise R.; Squires, Hazel
2012-01-01
A systematic review and narrative synthesis to determine the effectiveness of contraception service interventions for young people delivered in health care premises was undertaken. We searched 12 key health and medical databases, reference lists of included papers and systematic reviews and cited reference searches on included articles. All…
National Software Reference Library (NSRL)
National Institute of Standards and Technology Data Gateway
National Software Reference Library (NSRL) (PC database for purchase) A collaboration of the National Institute of Standards and Technology (NIST), the National Institute of Justice (NIJ), the Federal Bureau of Investigation (FBI), the Defense Computer Forensics Laboratory (DCFL), the U.S. Customs Service, software vendors, and state and local law enforcement organizations, the NSRL is a tool to assist in fighting crime involving computers.
[Current status of DNA databases in the forensic field: new progress, new legal needs].
Baeta, Miriam; Martínez-Jarreta, Begoña
2009-01-01
One of the most contentious issues regarding the use of deoxyribonucleic acid (DNA) in the legal sphere is the creation of DNA databases. Until relatively recently, Spain did not have a law to support the establishment of a national DNA profile bank for forensic purposes and to preserve the fundamental rights of the subjects whose data are archived therein. The law regulating police databases of identifiers obtained from DNA, approved in 2007, fills this void in Spanish legislation and responds to the constant need to adapt the laws to continuous scientific and technological progress.
Qualitative Comparison of IGRA and ESRL Radiosonde Archived Databases
NASA Technical Reports Server (NTRS)
Walker, John R.
2014-01-01
Multiple databases of atmospheric profile information are freely available to individuals and groups such as the Natural Environments group. Two of the primary database archives provided by NOAA that are most frequently used are those from the Earth Science Research Laboratory (ESRL) and the Integrated Global Radiosonde Archive (IGRA). Inquiries have been made as to why one database is used as opposed to the other, yet to the best of our knowledge, no formal comparison has been performed. The goal of this study is to provide a qualitative comparison of the ESRL and IGRA radiosonde databases. For part of this analysis, 14 upper air observation sites were selected. These sites all have the common attribute of having been used, or being planned for use, in the development of Range Reference Atmospheres (RRAs) in support of NASA's and DOD's current and future goals.
The Battle Command Sustainment Support System: Initial Analysis Report
2016-09-01
diagnostic monitoring, asynchronous commits, and others. The other components of the NEDP include a main forwarding gateway/web server and one or more...NATIONAL ENTERPRISE DATA PORTAL ANALYSIS The NEDP comprises an Oracle Database 10g, referred to as the National Data Server, and several other...data forwarding gateways (DFG). Together with the Oracle Database 10g, these components provide a heterogeneous data source that aligns various data
A Methodology for Benchmarking Relational Database Machines,
1984-01-01
user benchmarks is to compare the multiple users to the best-case performance The data for each query classification coll and the performance...called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure...formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey
Improvements in the Protein Identifier Cross-Reference service.
Wein, Samuel P; Côté, Richard G; Dumousseau, Marine; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan A
2012-07-01
The Protein Identifier Cross-Reference (PICR) service is a tool that allows users to map protein identifiers, protein sequences and gene identifiers across over 100 different source databases. PICR takes input through an interactive website as well as Representational State Transfer (REST) and Simple Object Access Protocol (SOAP) services. It returns the results as HTML pages, XLS and CSV files. It has been in production since 2007 and has been recently enhanced to add new functionality and increase the number of databases it covers. Protein subsequences can be searched with the Basic Local Alignment Search Tool (BLAST) against the UniProt Knowledgebase (UniProtKB) to provide an entry point to the standard PICR mapping algorithm. In addition, gene identifiers from UniProtKB and Ensembl can now be submitted as input or mapped to as output from PICR. We have also implemented a 'best-guess' mapping algorithm for UniProt. In this article, we describe the usefulness of PICR, how these changes have been implemented, and the corresponding additions to the web services. Finally, we explain that the number of source databases covered by PICR has increased from the initial 73 to the current 102. New resources include several new species-specific Ensembl databases as well as those from Ensembl Genomes. PICR can be accessed at http://www.ebi.ac.uk/Tools/picr/.
A manual for a laboratory information management system (LIMS) for light stable isotopes
Coplen, Tyler B.
1997-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes ³H, ³He, and ¹⁴C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
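Step (iv) above, normalizing measured delta values onto an international scale using isotopic reference materials, is conventionally a two-point linear correction anchored by two reference materials whose scale-defining values are known. The sketch below illustrates that general technique; it is not code from the LIMS program itself, and the measured values in the example are invented (only the VSMOW and SLAP scale-defining δ¹⁸O values of 0.0 and -55.5 per mil are real).

```python
def normalize_delta(measured, ref1_measured, ref1_accepted,
                    ref2_measured, ref2_accepted):
    """Two-point linear normalization of a measured delta value onto an
    international scale, anchored by two isotopic reference materials
    whose accepted (scale-defining) values are known."""
    slope = (ref2_accepted - ref1_accepted) / (ref2_measured - ref1_measured)
    return ref1_accepted + slope * (measured - ref1_measured)

# Hypothetical delta-18-O run: VSMOW and SLAP define the scale at
# 0.0 and -55.5 per mil; the "measured" raw values here are invented.
sample = normalize_delta(-10.15, 0.15, 0.0, -54.10, -55.5)
```

By construction, the correction maps each reference material's measured value exactly onto its accepted value, which is also a convenient sanity check for an implementation.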
Coverage of Google Scholar, Scopus, and Web of Science: a case study of the h-index in nursing.
De Groote, Sandra L; Raszewski, Rebecca
2012-01-01
This study compares the articles cited in CINAHL, Scopus, Web of Science (WOS), and Google Scholar and the h-index ratings provided by Scopus, WOS, and Google Scholar. The publications of 30 College of Nursing faculty at a large urban university were examined. Searches by author name were executed in Scopus, WOS, and POP (Publish or Perish, which searches Google Scholar), and the h-index for each author from each database was recorded. In addition, the citing articles of their published articles were imported into a bibliographic management program. This data was used to determine an aggregated h-index for each author. Scopus, WOS, and Google Scholar provided different h-index ratings for authors and each database found unique and duplicate citing references. More than one tool should be used to calculate the h-index for nursing faculty because one tool alone cannot be relied on to provide a thorough assessment of a researcher's impact. If researchers are interested in a comprehensive h-index, they should aggregate the citing references located by WOS and Scopus. Because h-index rankings differ among databases, comparisons between researchers should be done only within a specified database. Copyright © 2012 Elsevier Inc. All rights reserved.
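The aggregated h-index described above can be computed by pooling the deduplicated citing references that each database finds for every paper, then applying the standard h-index definition: the largest h such that h papers each have at least h citations. A minimal sketch (function names are mine, not the study's):

```python
from typing import Iterable, List

def h_index(citation_counts: Iterable[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def aggregated_h_index(citing_refs_by_paper: List[Iterable[str]]) -> int:
    """Union the citing references found for each paper across databases
    (deduplicating shared citers), then compute the h-index."""
    counts = [len(set(refs)) for refs in citing_refs_by_paper]
    return h_index(counts)

# Toy example: per-paper citing-reference IDs pooled from several tools;
# "w2" appearing twice for the first paper is counted once.
agg = aggregated_h_index([["w1", "w2", "s1", "w2"], ["w1"], []])
```

This mirrors the article's point that databases find both unique and duplicate citing references, so the union (not the sum) of citers per paper is what the aggregated h-index should be built on.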
Network meta-analyses could be improved by searching more sources and by involving a librarian.
Li, Lun; Tian, Jinhui; Tian, Hongliang; Moher, David; Liang, Fuxiang; Jiang, Tongxiao; Yao, Liang; Yang, Kehu
2014-09-01
Network meta-analyses (NMAs) aim to rank the benefits (or harms) of interventions, based on all available randomized controlled trials. Thus, the identification of relevant data is critical. We assessed the conduct of the literature searches in NMAs. Published NMAs were retrieved by searching electronic bibliographic databases and other sources. Two independent reviewers selected studies and five trained reviewers abstracted data regarding literature searches, in duplicate. Search method details were examined using descriptive statistics. Two hundred forty-nine NMAs were included. Eight used previous systematic reviews to identify primary studies without further searching, and five did not report any literature searches. In the 236 studies that used electronic databases to identify primary studies, the median number of databases was 3 (interquartile range: 3-5). MEDLINE, EMBASE, and Cochrane Central Register of Controlled Trials were the most commonly used databases. The most common supplemental search methods included reference lists of included studies (48%), reference lists of previous systematic reviews (40%), and clinical trial registries (32%). None of these supplemental methods was conducted in more than 50% of the NMAs. Literature searches in NMAs could be improved by searching more sources, and by involving a librarian or information specialist. Copyright © 2014 Elsevier Inc. All rights reserved.
Hydroponics Database and Handbook for the Advanced Life Support Test Bed
NASA Technical Reports Server (NTRS)
Nash, Allen J.
1999-01-01
During the summer of 1998, I provided student assistance to Dr. Daniel J. Barta, chief plant growth expert at Johnson Space Center - NASA. We established the preliminary stages of a hydroponic crop growth database for the Advanced Life Support Systems Integration Test Bed, otherwise referred to as BIO-Plex (Biological Planetary Life Support Systems Test Complex). The database summarizes information from published technical papers by plant growth experts, and it includes bibliographical, environmental and harvest information based on plant growth under varying environmental conditions. I collected 84 lettuce entries, 14 soybean, 49 sweet potato, 16 wheat, 237 white potato, and 26 mixed-crop entries. The list will grow with the publication of new research. This database will be integrated with a search and systems analysis computer program that will cross-reference multiple parameters to determine optimum edible yield under varying conditions. Also, we have made a preliminary effort to put together a crop handbook for BIO-Plex plant growth management. It will be a collection of information obtained from experts who provided recommendations on a particular crop's growing conditions. It includes bibliographic, environmental, nutrient solution, potential yield, harvest nutritional, and propagation procedure information. This handbook will stand as the baseline growth conditions for the first set of experiments in the BIO-Plex facility.
Mapping selected general literature of international nursing.
Shams, Marie-Lise Antoun; Dixon, Lana S
2007-01-01
This study, part of a wider project to map the literature of nursing, identifies core journals cited in non-US nursing journals and determines the extent of their coverage by indexing services. Four general English-language journals were analyzed for format types and publication dates. Core titles were identified and nine bibliographic databases were scanned for indexing coverage. Findings show that 57.5% (13,391/23,271) of the cited references from the 4 core journals were to journal articles, 27.8% (6,471/23,271) to books, 9.5% (2,208/23,271) to government documents, 4.9% (1,131/23,271) to miscellaneous sources, and less than 1% (70/23,271) to Internet resources. Eleven journals produced one-third of the citations; the next third included 146 journals, followed by a dispersion of 1,622 titles. PubMed received the best database coverage scores, followed by CINAHL and Science Citation Index. None of the databases provided complete coverage of all 11 core titles. The four source journals contain a diverse group of cited references. The currency of citations to government documents makes these journals a good source for regulatory and legislative awareness. Nurses consult nursing and biomedical journals and must search both nursing and biomedical databases to cover the literature.
Impact of training sets on classification of high-throughput bacterial 16s rRNA gene surveys
Werner, Jeffrey J; Koren, Omry; Hugenholtz, Philip; DeSantis, Todd Z; Walters, William A; Caporaso, J Gregory; Angenent, Largus T; Knight, Rob; Ley, Ruth E
2012-01-01
Taxonomic classification of the thousands to millions of 16S rRNA gene sequences generated in microbiome studies is often achieved using a naïve Bayesian classifier (for example, the Ribosomal Database Project II (RDP) classifier), due to favorable trade-offs among automation, speed and accuracy. The resulting classification depends on the reference sequences and taxonomic hierarchy used to train the model; although the influence of primer sets and classification algorithms has been explored in detail, the influence of the training set has not been characterized. We compared classification results obtained using three different publicly available databases as training sets, applied to five different bacterial 16S rRNA gene pyrosequencing data sets generated from human body, mouse gut, python gut, soil and anaerobic digester samples. We observed numerous advantages to using the largest, most diverse training set available, which we constructed from the Greengenes (GG) bacterial/archaeal 16S rRNA gene sequence database and the latest GG taxonomy. Phylogenetic clusters of previously unclassified experimental sequences were identified with notable improvements (for example, a 50% reduction in reads unclassified at the phylum level in mouse gut, soil and anaerobic digester samples), especially for phylotypes belonging to specific phyla (Tenericutes, Chloroflexi, Synergistetes and Candidate phyla TM6, TM7). Trimming the reference sequences to the primer region resulted in systematic improvements in classification depth, with the greatest gains at higher confidence thresholds. Phylotypes unclassified at the genus level represented a greater proportion of the total community variation than classified operational taxonomic units in mouse gut and anaerobic digester samples, underscoring the need for greater diversity in existing reference databases. PMID:21716311
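The training-set dependence described in this abstract can be illustrated with a toy multinomial naive Bayes classifier over k-mer counts, in the spirit of the RDP classifier. This is a minimal sketch: the sequences, taxon names and 4-mer word size are invented for illustration, whereas real classifiers use 8-mers, bootstrap confidence estimation and large curated reference sets.

```python
from collections import Counter
import math

def kmers(seq, k=4):
    """All overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

class NaiveBayesTaxonomy:
    """Toy multinomial naive Bayes over k-mer counts (RDP-style sketch)."""
    def __init__(self, k=4):
        self.k = k
        self.counts = {}  # taxon -> Counter of k-mer occurrences

    def fit(self, training_set):
        for taxon, seq in training_set:
            self.counts.setdefault(taxon, Counter()).update(kmers(seq, self.k))
        self.vocab = {w for c in self.counts.values() for w in c}

    def classify(self, read):
        V = len(self.vocab)
        read_kmers = kmers(read, self.k)
        def log_prob(taxon):
            c = self.counts[taxon]
            total = sum(c.values())
            # Laplace-smoothed log-likelihood of the read's k-mers
            return sum(math.log((c[w] + 1) / (total + V)) for w in read_kmers)
        return max(self.counts, key=log_prob)

# Two training sets: the smaller one lacks the read's true group entirely,
# so the same read receives a different label -- the effect the study measures.
read = "ACGTACGTACGT"
small = NaiveBayesTaxonomy()
small.fit([("Firmicutes", "GGTTGGTTGGTTGGTT"),
           ("Chloroflexi", "CCAACCAACCAACCAA")])
large = NaiveBayesTaxonomy()
large.fit([("Firmicutes", "GGTTGGTTGGTTGGTT"),
           ("Chloroflexi", "CCAACCAACCAACCAA"),
           ("Tenericutes", "ACGTACGTACGTACGT")])
```

With the larger, more diverse training set the read is assigned to the correct group; with the smaller set it is forced into whichever available taxon is least unlike it.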
Karabulut, Nevzat
2017-03-01
The aim of this study is to investigate the frequency of incorrect citations and its effects on the impact factor of a specific biomedical journal: the American Journal of Roentgenology. The Cited Reference Search function of Thomson Reuters' Web of Science database (formerly the Institute for Scientific Information's Web of Knowledge database) was used to identify erroneous citations. This was done by entering the journal name into the Cited Work field and entering "2011-2012" into the Cited Year(s) field. The errors in any part of the inaccurately cited references (e.g., author names, title, year, volume, issue, and page numbers) were recorded, and the types of errors (i.e., absent, deficient, or mistyped) were analyzed. Erroneous citations were corrected using the Suggest a Correction function of the Web of Science database. The effect of inaccurate citations on the impact factor of the AJR was calculated. Overall, 183 of 1055 citable articles published in 2011-2012 were inaccurately cited 423 times (mean [± SD], 2.31 ± 4.67 times; range, 1-44 times). Of these 183 articles, 110 (60.1%) were web-only articles and 44 (24.0%) were print articles. The most commonly identified errors were page number errors (44.8%) and misspelling of an author's name (20.2%). Incorrect citations adversely affected the impact factor of the AJR by 0.065 in 2012 and by 0.123 in 2013. Inaccurate citations are not infrequent in biomedical journals, yet they can be detected and corrected using the Web of Science database. Although the accuracy of references is primarily the responsibility of authors, the journal editorial office should also define a periodic inaccurate citation check task and correct erroneous citations to reclaim unnecessarily lost credit.
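The arithmetic behind the reported loss of credit is the standard two-year impact factor: citations received in year Y to items published in years Y-1 and Y-2, divided by the citable items from those two years. The sketch below uses hypothetical counts (not the AJR's actual figures) to show how citations lost to erroneous references depress the ratio.

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: citations in year Y to items from
    years Y-1 and Y-2, divided by citable items from those years."""
    return citations / citable_items

# Hypothetical journal: 1000 citable items, 3500 correctly indexed
# citations, and 65 citations lost to inaccurately cited references.
credited = impact_factor(3500, 1000)        # what the journal is credited
with_lost = impact_factor(3500 + 65, 1000)  # what it would have received
deficit = with_lost - credited              # credit lost to citation errors
```

Correcting the erroneous citations in the Web of Science reclaims the `deficit` term.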
LeishCyc: a guide to building a metabolic pathway database and visualization of metabolomic data.
Saunders, Eleanor C; MacRae, James I; Naderer, Thomas; Ng, Milica; McConville, Malcolm J; Likić, Vladimir A
2012-01-01
The complexity of the metabolic networks in even the simplest organisms has raised new challenges in organizing metabolic information. To address this, specialized computer frameworks have been developed to capture, manage, and visualize metabolic knowledge. The leading databases of metabolic information are those organized under the umbrella of the BioCyc project, which consists of the reference database MetaCyc, and a number of pathway/genome databases (PGDBs) each focussed on a specific organism. A number of PGDBs have been developed for bacterial, fungal, and protozoan pathogens, greatly facilitating dissection of the metabolic potential of these organisms and the identification of new drug targets. Leishmania are protozoan parasites belonging to the family Trypanosomatidae that cause a broad spectrum of diseases in humans. In this work we use the LeishCyc database, the BioCyc database for Leishmania major, to describe how to build a BioCyc database from genomic sequences and associated annotations. By using metabolomic data generated in our group, we show how such databases can be utilized to elucidate specific changes in parasite metabolism.
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of the Fundamental Geographic Information Database (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. Based on this quality model framework, the paper designs the quality elements, evaluation items and properties of the FGIDB step by step. Connected organically, these quality elements and evaluation items constitute the quality model of the FGIDB. This model is the foundation for stipulating quality requirements and evaluating the quality of the FGIDB, and is of great significance for quality assurance in the design and development stage, for formulating requirements in the testing and evaluation stage, and for constructing a standard system for FGIDB quality evaluation technology.
Improving accuracy and power with transfer learning using a meta-analytic database.
Schwartz, Yannick; Varoquaux, Gaël; Pallier, Christophe; Pinel, Philippe; Poline, Jean-Baptiste; Thirion, Bertrand
2012-01-01
Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs) that are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
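The voxel-selection step this abstract describes can be sketched in miniature. The paper uses a sparse discriminant model; the stand-in below approximates it by univariate correlation screening on a reference task, then returns the selected voxel indices as a data-driven ROI for the new task. All images, labels and dimensions are invented toy values.

```python
def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def select_voxels(images, labels, k=2):
    """Rank voxels by |correlation| with the reference-task labels and
    keep the top k -- a principled, data-driven ROI definition."""
    n_vox = len(images[0])
    scores = [abs(correlation([img[v] for img in images], labels))
              for v in range(n_vox)]
    return sorted(range(n_vox), key=lambda v: scores[v], reverse=True)[:k]

# Toy reference task: voxels 0 and 2 carry the signal, voxel 1 is noise.
ref_images = [[1.0, 0.3, 2.0], [0.9, 0.1, 1.8], [0.1, 0.2, 0.2], [0.0, 0.4, 0.1]]
ref_labels = [1, 1, 0, 0]
roi = select_voxels(ref_images, ref_labels, k=2)
```

A classifier for the new, smaller cohort would then be fit only on the voxels in `roi`, which is where the gain in statistical power comes from.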
Schoch, Conrad L; Robbertse, Barbara; Robert, Vincent; Vu, Duong; Cardinali, Gianluigi; Irinyi, Laszlo; Meyer, Wieland; Nilsson, R Henrik; Hughes, Karen; Miller, Andrew N; Kirk, Paul M; Abarenkov, Kessy; Aime, M Catherine; Ariyawansa, Hiran A; Bidartondo, Martin; Boekhout, Teun; Buyck, Bart; Cai, Qing; Chen, Jie; Crespo, Ana; Crous, Pedro W; Damm, Ulrike; De Beer, Z Wilhelm; Dentinger, Bryn T M; Divakar, Pradeep K; Dueñas, Margarita; Feau, Nicolas; Fliegerova, Katerina; García, Miguel A; Ge, Zai-Wei; Griffith, Gareth W; Groenewald, Johannes Z; Groenewald, Marizeth; Grube, Martin; Gryzenhout, Marieka; Gueidan, Cécile; Guo, Liangdong; Hambleton, Sarah; Hamelin, Richard; Hansen, Karen; Hofstetter, Valérie; Hong, Seung-Beom; Houbraken, Jos; Hyde, Kevin D; Inderbitzin, Patrik; Johnston, Peter R; Karunarathna, Samantha C; Kõljalg, Urmas; Kovács, Gábor M; Kraichak, Ekaphan; Krizsan, Krisztina; Kurtzman, Cletus P; Larsson, Karl-Henrik; Leavitt, Steven; Letcher, Peter M; Liimatainen, Kare; Liu, Jian-Kui; Lodge, D Jean; Luangsa-ard, Janet Jennifer; Lumbsch, H Thorsten; Maharachchikumbura, Sajeewa S N; Manamgoda, Dimuthu; Martín, María P; Minnis, Andrew M; Moncalvo, Jean-Marc; Mulè, Giuseppina; Nakasone, Karen K; Niskanen, Tuula; Olariaga, Ibai; Papp, Tamás; Petkovits, Tamás; Pino-Bodas, Raquel; Powell, Martha J; Raja, Huzefa A; Redecker, Dirk; Sarmiento-Ramirez, J M; Seifert, Keith A; Shrestha, Bhushan; Stenroos, Soili; Stielow, Benjamin; Suh, Sung-Oui; Tanaka, Kazuaki; Tedersoo, Leho; Telleria, M Teresa; Udayanga, Dhanushka; Untereiner, Wendy A; Diéguez Uribeondo, Javier; Subbarao, Krishna V; Vágvölgyi, Csaba; Visagie, Cobus; Voigt, Kerstin; Walker, Donald M; Weir, Bevan S; Weiß, Michael; Wijayawardene, Nalin N; Wingfield, Michael J; Xu, J P; Yang, Zhu L; Zhang, Ning; Zhuang, Wen-Ying; Federhen, Scott
2014-01-01
DNA phylogenetic comparisons have shown that morphology-based species recognition often underestimates fungal diversity. Therefore, the need for accurate DNA sequence data, tied to both correct taxonomic names and clearly annotated specimen data, has never been greater. Furthermore, the growing number of molecular ecology and microbiome projects using high-throughput sequencing require fast and effective methods for en masse species assignments. In this article, we focus on selecting and re-annotating a set of marker reference sequences that represent each currently accepted order of Fungi. The particular focus is on sequences from the internal transcribed spacer region in the nuclear ribosomal cistron, derived from type specimens and/or ex-type cultures. Re-annotated and verified sequences were deposited in a curated public database at the National Center for Biotechnology Information (NCBI), namely the RefSeq Targeted Loci (RTL) database, and will be visible during routine sequence similarity searches with NR_prefixed accession numbers. A set of standards and protocols is proposed to improve the data quality of new sequences, and we suggest how type and other reference sequences can be used to improve identification of Fungi. Database URL: http://www.ncbi.nlm.nih.gov/bioproject/PRJNA177353. Published by Oxford University Press 2013. This work is written by US Government employees and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Wolery, Thomas J.; Jové Colón, Carlos F.
2017-09-01
Chemical thermodynamic data remain a keystone for geochemical modeling and reactive transport simulation as applied to an increasing number of applications in the earth sciences, as well as applications in other areas including metallurgy, material science, and industrial process design. The last century has seen the development of a large body of thermodynamic data and a number of major compilations. The past several decades have seen the development of thermodynamic databases in digital form designed to support computer calculations. However, problems with thermodynamic data appear to be persistent. One problem pertains to the use of inconsistent primary key reference data. Such data pertain to elemental reference forms and key, stoichiometrically simple chemical species including metal oxides, CO2, water, and aqueous species such as Na+ and Cl-. A consistent set of primary key data (standard Gibbs energies, standard enthalpies, and standard entropies for key chemical species) for 298.15 K and 1 bar pressure is essential. Thermochemical convention is to define the standard Gibbs energy and the standard enthalpy of an individual chemical species in terms of formation from reference forms of the constituent chemical elements. We propose a formal concept of "links" to the elemental reference forms. This concept involves a documented understanding of all reactions and calculations leading to values for a formation property (standard Gibbs energy or enthalpy). A valid link consists of two parts: (a) the path of reactions and corrections and (b) the associated data, which are key data. Such a link differs from a bare "key" or "reference" datum in that it requires additional information. Some or all of its associated data may also be key data. In evaluating a reported thermodynamic datum, one should identify the links to the chemical elements, a process which can be time-consuming and which may lead to a dead end (an incomplete link). 
The use of two or more inconsistent links to the same elemental reference form in a thermodynamic database will result in an inconsistency in the database. Thus, in constructing a database, it is important to establish a set of reliable links (generally resulting in a set of primary reference data) and then correct all data adopted subsequently for consistency with that set. Recommended values of key data have not been constant through history. We review some of this history through the lens of major compilations and other influential reports, and note a number of problem areas. Finally, we illustrate the concepts developed in this paper by applying them to some key species of geochemical interest, including liquid water; quartz and aqueous silica; and gibbsite, corundum, and the aqueous aluminum ion.
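The "link" bookkeeping proposed above lends itself to a mechanical consistency check: if two species in a database reach the same elemental reference form through links carrying different key data, the database is internally inconsistent. A minimal sketch follows; the species names are real minerals but every numeric value is an invented placeholder, not actual thermodynamic data.

```python
def inconsistent_elements(species_links, tol=1e-6):
    """species_links maps each species to the key datum (e.g. a standard
    Gibbs energy contribution) used in its link to each elemental
    reference form. Returns elements reached through conflicting key data."""
    seen = {}   # element -> first key value encountered
    bad = set()
    for species, links in species_links.items():
        for element, value in links.items():
            if element in seen and abs(seen[element] - value) > tol:
                bad.add(element)
            seen.setdefault(element, value)
    return sorted(bad)

# Placeholder numbers only: the two Al entries deliberately disagree,
# modelling two inconsistent links to the same elemental reference form.
db = {
    "gibbsite": {"Al": -538.4, "O": 0.0},
    "corundum": {"Al": -540.9, "O": 0.0},
    "quartz":   {"Si": -910.7, "O": 0.0},
}
```

Running the check flags aluminum, signalling that data adopted later must be corrected for consistency with one chosen link.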
78 FR 70020 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-22
...; citizenship; physical characteristics; employment and military service history; credit references and credit... digital images, and in electronic databases. Background investigation forms are maintained in the...
filltex: Automatic queries to ADS and INSPIRE databases to fill LaTex bibliography
NASA Astrophysics Data System (ADS)
Gerosa, Davide; Vallisneri, Michele
2017-05-01
filltex is a simple tool to fill LaTex reference lists with records from the ADS and INSPIRE databases. ADS and INSPIRE are the most common databases used among the theoretical physics and astronomy scientific communities, respectively. filltex automatically looks for all citation labels present in a tex document and, by means of web-scraping, downloads all the required citation records from either of the two databases. filltex significantly speeds up the LaTex scientific writing workflow, as all required actions (compile the tex file, fill the bibliography, compile the bibliography, compile the tex file again) are automated in a single command. We also provide an integration of filltex for the macOS LaTex editor TexShop.
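The first step filltex automates, collecting the citation labels LaTeX records during compilation, can be sketched in a few lines. The `.aux` contents below are a made-up example; filltex itself then queries ADS or INSPIRE for each key via web-scraping, which is omitted here.

```python
import re

def citation_keys(aux_text):
    r"""Collect citation keys from LaTeX .aux content, preserving first-seen
    order. Handles \citation{key} lines with comma-separated keys."""
    keys = []
    for m in re.finditer(r'\\citation\{([^}]*)\}', aux_text):
        for key in m.group(1).split(','):
            key = key.strip()
            if key and key not in keys:
                keys.append(key)
    return keys

# Hypothetical .aux fragment produced by a first LaTeX compilation pass.
aux = "\n".join([r"\citation{Gerosa2017}",
                 r"\citation{Einstein1915,Hulse1975}",
                 r"\citation{Gerosa2017}"])
keys = citation_keys(aux)
```

Each key in `keys` would then be resolved against the ADS or INSPIRE record and appended to the `.bib` file.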
Efficient bibliographic searches on allergy using ISI databases.
Sáez Gómez, J M; Annan, J W; Negro Alvarez, J M; Guillen-Grima, F; Bozzola, C M; Ivancevich, J C; Aguinaga Ontoso, E
2008-01-01
The aim of this article is to provide an introduction to using databases from the Thomson ISI Web of Knowledge, with special reference to the Citation Indexes as a tool for analyzing publications, and also to explain the meaning of the well-known Impact Factor. We present the partially modified new consultation interface, which enhances the search routines of these databases, and introduce methods for bibliographic searching, including the correct application of analysis tools, paying particular attention to the Journal Citation Reports and the Impact Factor. We finish with comments on the consequences of using the Impact Factor as a quality indicator for the assessment of journals and publications, and on the measures needed to achieve indexing in the Thomson ISI databases.
PGSB/MIPS PlantsDB Database Framework for the Integration and Analysis of Plant Genome Data.
Spannagl, Manuel; Nussbaumer, Thomas; Bader, Kai; Gundlach, Heidrun; Mayer, Klaus F X
2017-01-01
Plant Genome and Systems Biology (PGSB), formerly Munich Institute for Protein Sequences (MIPS) PlantsDB, is a database framework for the integration and analysis of plant genome data, developed and maintained for more than a decade now. Major components of that framework are genome databases and analysis resources focusing on individual (reference) genomes providing flexible and intuitive access to data. Another main focus is the integration of genomes from both model and crop plants to form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer to explore conserved gene order (synteny). Data exchange and integrated search functionality with/over many plant genome databases is provided within the transPLANT project.
Higgins, Victoria; Chan, Man Khun; Nieuwesteeg, Michelle; Hoffman, Barry R; Bromberg, Irvin L; Gornall, Doug; Randell, Edward; Adeli, Khosrow
2016-01-01
The Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) has recently established pediatric age- and sex-specific reference intervals for over 85 biochemical markers on the Abbott Architect system. Previously, CALIPER reference intervals for several biochemical markers were successfully transferred from Abbott assays to Roche, Beckman, Ortho, and Siemens assays. This study further broadens the CALIPER database by performing transference and verification for 52 biochemical assays on the Roche cobas 6000 and the Roche Modular P. Using CLSI C28-A3 and EP9-A2 guidelines, transference of the CALIPER reference intervals was attempted for 16 assays on the Roche cobas 6000 and 36 on the Modular P. Calculated reference intervals were further verified using 100 healthy CALIPER samples. Most assays showed strong correlation between assay systems and were transferable from Abbott to the Roche cobas 6000 (81%) and the Modular P (86%). Bicarbonate and magnesium were not transferable on either system and calcium and prealbumin were not transferable to the Modular P. Of the transferable analytes, 62% and 61% were verified on the cobas 6000 and the Modular P, respectively. This study extends the utility of the CALIPER database to two additional analytical systems, which facilitates the broad application of CALIPER reference intervals at pediatric centers utilizing Roche biochemical assays. Transference studies across different analytical platforms can later be collectively analyzed in an attempt to develop common reference intervals across all clinical chemistry instruments to harmonize laboratory test interpretation in diagnosis and monitoring of pediatric disease. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
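Transference of a reference interval between analytical systems rests on a regression fitted to paired patient results from the two methods; CLSI EP9 prescribes the comparison protocol. The sketch below uses ordinary least squares for simplicity (Deming or Passing-Bablok regression is more usual in method comparison), and the paired values are invented.

```python
def ols_fit(x, y):
    """Least-squares line y = a + b*x from paired method-comparison data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def transfer_interval(lower, upper, a, b):
    """Map a reference interval from the old method onto the new one."""
    return a + b * lower, a + b * upper

# Invented paired results: the new analyzer reads 5% higher plus an
# offset of 1 unit relative to the old one.
old = [10.0, 20.0, 30.0, 40.0]
new = [11.5, 22.0, 32.5, 43.0]
a, b = ols_fit(old, new)
lo, hi = transfer_interval(12.0, 36.0, a, b)
```

The transferred interval `(lo, hi)` would then be verified against results from healthy reference samples, as CALIPER does with 100 healthy specimens.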
Moreno, Lilliana I; Brown, Alice L; Callaghan, Thomas F
2017-07-01
Rapid DNA platforms are fully integrated systems capable of producing and analyzing short tandem repeat (STR) profiles from reference-sample buccal swabs in less than two hours. The technology requires minimal user interaction and experience, making it possible for high-quality profiles to be generated outside an accredited laboratory. The automated production of point-of-collection reference STR profiles could eliminate the time delay for shipment and analysis of arrestee samples at centralized laboratories. Furthermore, point-of-collection analysis would allow searching against profiles from unsolved crimes during the normal booking process once the infrastructure to immediately search the Combined DNA Index System (CODIS) database from the booking station is established. The DNAscan/ANDE™ Rapid DNA Analysis™ System developed by Network Biosystems was evaluated for robustness and reliability in the production of high-quality reference STR profiles for database enrollment and searching applications. A total of 193 reference samples were assessed for concordance at the 13 CODIS core loci. Studies to evaluate contamination, reproducibility, precision, stutter, peak height ratio, noise and sensitivity were also performed. The system proved to be robust, consistent and dependable. Results indicated an overall success rate of 75% for the 13 CODIS core loci and, more importantly, no incorrect calls were identified. The DNAscan/ANDE™ could be used confidently, without human interaction, in both laboratory and non-laboratory settings to generate reference profiles. Published by Elsevier B.V.
Rep. Conyers, John, Jr. [D-MI-13]
2014-04-10
House - 06/09/2014 Referred to the Subcommittee on Crime, Terrorism, Homeland Security, and Investigations.
Age-specific MRI brain and head templates for healthy adults from 20 through 89 years of age
Fillmore, Paul T.; Phillips-Meek, Michelle C.; Richards, John E.
2015-01-01
This study created and tested a database of adult, age-specific MRI brain and head templates. The participants included healthy adults from 20 through 89 years of age. The templates were done in five-year, ten-year, and multi-year intervals from 20 through 89 years, and consist of average T1W images for the head and brain, and segmenting priors for gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). It was found that age-appropriate templates provided less biased tissue classification estimates than age-inappropriate reference data and reference data based on young adult templates. This database is available for use by other investigators and clinicians for their MRI studies, as well as other types of neuroimaging and electrophysiological research. PMID:25904864
Development of forensic-quality full mtGenome haplotypes: success rates with low template specimens.
Just, Rebecca S; Scheible, Melissa K; Fast, Spence A; Sturk-Andreaggi, Kimberly; Higginbotham, Jennifer L; Lyons, Elizabeth A; Bush, Jocelyn M; Peck, Michelle A; Ring, Joseph D; Diegoli, Toni M; Röck, Alexander W; Huber, Gabriela E; Nagl, Simone; Strobl, Christina; Zimmermann, Bettina; Parson, Walther; Irwin, Jodi A
2014-05-01
Forensic mitochondrial DNA (mtDNA) testing requires appropriate, high quality reference population data for estimating the rarity of questioned haplotypes and, in turn, the strength of the mtDNA evidence. Available reference databases (SWGDAM, EMPOP) currently include information from the mtDNA control region; however, novel methods that quickly and easily recover mtDNA coding region data are becoming increasingly available. Though these assays promise to both facilitate the acquisition of mitochondrial genome (mtGenome) data and maximize the general utility of mtDNA testing in forensics, the appropriate reference data and database tools required for their routine application in forensic casework are lacking. To address this deficiency, we have undertaken an effort to: (1) increase the large-scale availability of high-quality entire mtGenome reference population data, and (2) improve the information technology infrastructure required to access/search mtGenome data and employ them in forensic casework. Here, we describe the application of a data generation and analysis workflow to the development of more than 400 complete, forensic-quality mtGenomes from low DNA quantity blood serum specimens as part of a U.S. National Institute of Justice funded reference population databasing initiative. We discuss the minor modifications made to a published mtGenome Sanger sequencing protocol to maintain a high rate of throughput while minimizing manual reprocessing with these low template samples. The successful use of this semi-automated strategy on forensic-like samples provides practical insight into the feasibility of producing complete mtGenome data in a routine casework environment, and demonstrates that large (>2kb) mtDNA fragments can regularly be recovered from high quality but very low DNA quantity specimens. 
Further, the detailed empirical data we provide on the amplification success rates across a range of DNA input quantities will be useful moving forward as PCR-based strategies for mtDNA enrichment are considered for targeted next-generation sequencing workflows. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
[Construction of chemical information database based on optical structure recognition technique].
Lv, C Y; Li, M N; Zhang, L R; Liu, Z M
2018-04-18
To create a protocol for constructing a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as fundamental datasets. Chemical structures were transformed from published documents and images into machine-readable data by using name-conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools function of the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographical references, including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information from figures, tables, and textual paragraphs that could be used to populate a structured chemical database. After this step, detailed manual revision and annotation were conducted to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate physical and chemical properties and predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated: the molecular structure data file, molecular information, bibliographical references, predicted attributes and known bioactivities. With the canonical simplified molecular-input line-entry specification (SMILES) as the primary key, the metadata files were associated through common key nodes, including molecule number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for a chemical information database was created successfully.
A total of 174 research articles and 25 reviews published in Marine Drugs from January 2015 to June 2016 were collected as the essential data source, and an elementary marine natural product database named PKU-MNPD, containing 3,262 molecules and 19,821 records, was built in accordance with this protocol. This data-aggregation protocol greatly improves the accuracy, comprehensiveness and efficiency of building a chemical information database from original documents. The structured chemical information database can facilitate access to medical intelligence and accelerate the transformation of scientific research achievements.
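Associating the metadata files through a shared canonical SMILES key, as the protocol describes, is essentially a relational join. A minimal sketch with an in-memory SQLite database follows; the table layout, molecule, property value and PDF identifier are all invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One table per metadata file, all keyed on the canonical SMILES string.
con.execute("CREATE TABLE structures (smiles TEXT PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE properties (smiles TEXT, logp REAL)")
con.execute("CREATE TABLE bibliography (smiles TEXT, pdf_id TEXT)")
con.execute("INSERT INTO structures VALUES ('c1ccccc1', 'benzene')")
con.execute("INSERT INTO properties VALUES ('c1ccccc1', 2.1)")
con.execute("INSERT INTO bibliography VALUES ('c1ccccc1', 'PDF-0001')")

# The integrated record is reassembled by joining on the primary key.
row = con.execute(
    "SELECT s.name, p.logp, b.pdf_id "
    "FROM structures s "
    "JOIN properties p ON p.smiles = s.smiles "
    "JOIN bibliography b ON b.smiles = s.smiles"
).fetchone()
```

Keying every file on the same canonical identifier is what lets manually curated, predicted and literature-derived fields coexist without duplication.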
NASA Astrophysics Data System (ADS)
Davis, Justin; Howard, Hillari; Hoover, Richard B.; Sabanayagam, Chandran R.
2010-09-01
Extremophiles are microorganisms that have adapted to severe conditions that were once considered devoid of life. The extreme settings in which these organisms flourish on Earth resemble many extraterrestrial environments. Identification and classification of extremophiles in situ (without the requirement for excessive handling and processing) can provide a basis for designing remotely operated instruments for extraterrestrial life exploration. An important consideration when designing such experiments is to prevent contamination of the environments. We are developing a reference spectral database of autofluorescence from microbial extremophiles using long-UV excitation (408 nm). Aromatic compounds are essential components of living systems, and biological molecules such as aromatic amino acids, nucleotides, porphyrins and vitamins can also exhibit fluorescence under long-UV excitation conditions. Autofluorescence spectra were obtained from a light microscope that additionally allowed observations of microbial geometry and motility. It was observed that all extremophiles studied displayed an autofluorescence peak at around 470 nm, followed by a long decay that was species specific. The autofluorescence database can potentially be used as a reference to identify and classify past or present microbial life in our solar system.
Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice
2015-01-01
The aim of this study is to identify areas of potential improvement in the European Reference Life Cycle Database (ELCD) fuel datasets. The revision is based on the data quality indicators described by the ILCD Handbook, applied on a sectorial basis. These indicators evaluate the technological, geographical and time-related representativeness of the dataset and its appropriateness in terms of completeness, precision and methodology. Results show that the ELCD fuel datasets are of very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of the Life-Cycle Inventories have been derived. Moreover, these results confirm the quality of the fuel-related datasets for any LCA practitioner, and provide insights into the limitations and assumptions underlying the dataset modelling. Given this information, the LCA practitioner will be able to decide whether the use of the ELCD fuel datasets is appropriate to the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers seeking to improve the overall DQR of databases.
PubMed searches: overview and strategies for clinicians.
Lindsey, Wesley T; Olin, Bernie R
2013-04-01
PubMed is a biomedical and life sciences database maintained by a division of the National Library of Medicine known as the National Center for Biotechnology Information (NCBI). It is a large resource, with more than 5,600 journals indexed and more than 22 million total citations. Searches conducted in PubMed provide references that are more specific to the intended topic than those from other popular search engines. Effective PubMed searches allow the clinician to remain current on the latest clinical trials, systematic reviews, and practice guidelines. PubMed continues to evolve by allowing users to create a customized experience through the My NCBI portal, offering new arrangements and options in search filters, and supporting scholarly projects through exportation of citations to reference-managing software. Prepackaged search options available in the Clinical Queries feature also allow users to search the clinical literature efficiently. PubMed also provides information about the source journals themselves through the Journals in NCBI Databases link. This article provides an overview of the PubMed database's structure and features, as well as strategies for conducting an effective search.
NASA Technical Reports Server (NTRS)
Sabanayagam, Chandran; Howard, Hillari; Hoover, Richard B.
2010-01-01
Extremophiles are microorganisms that have adapted to severe conditions that were once considered devoid of life. The extreme settings in which these organisms flourish on Earth resemble many extraterrestrial environments. Identification and classification of extremophiles in situ (without the requirement for excessive handling and processing) can provide a basis for designing remotely operated instruments for extraterrestrial life exploration. An important consideration when designing such experiments is to prevent contamination of the environments. We are developing a reference spectral database of autofluorescence from microbial extremophiles using long-UV excitation (405 nm). Aromatic compounds are essential components of living systems, and biological molecules such as aromatic amino acids, nucleotides, porphyrins and vitamins can also exhibit fluorescence under long-UV excitation conditions. Autofluorescence spectra were obtained from a confocal microscope that additionally allowed observations of microbial geometry and motility. It was observed that all extremophiles studied displayed an autofluorescence peak at around 470 nm, followed by a long decay that was species specific. The autofluorescence database can potentially be used as a reference to identify and classify past or present microbial life in our solar system.
Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders.
Hamosh, Ada; Scott, Alan F; Amberger, Joanna; Bocchini, Carol; Valle, David; McKusick, Victor A
2002-01-01
Online Mendelian Inheritance in Man (OMIM) is a comprehensive, authoritative and timely knowledgebase of human genes and genetic disorders compiled to support research and education in human genomics and the practice of clinical genetics. Started by Dr Victor A. McKusick as the definitive reference Mendelian Inheritance in Man, OMIM (www.ncbi.nlm.nih.gov/omim) is now distributed electronically by the National Center for Biotechnology Information (NCBI), where it is integrated with the Entrez suite of databases. Derived from the biomedical literature, OMIM is written and edited at Johns Hopkins University with input from scientists and physicians around the world. Each OMIM entry has a full-text summary of a genetically determined phenotype and/or gene and has numerous links to other genetic databases such as DNA and protein sequence, PubMed references, general and locus-specific mutation databases, approved gene nomenclature, and the highly detailed mapviewer, as well as patient support groups and many others. OMIM is an easy and straightforward portal to the burgeoning information in human genetics.
Saraf, Abhijeet
2012-01-01
Purpose: Not a single drug in Ayurveda has been termed non-medicinal; every Dravya in this world has medicinal value. Jangam Dravya is medicine of animal origin. In the Samhitas, Jangam Dravyas are described first, so as per Krama Varnan Vichar they are significant in this category. Ayurvedic literature contains more material on Audbhid and Parthiva Dravyas: more than 25 Nighantus and about 145 Rasa Granthas are available, but no single Grantha describes Jangam Dravyas comprehensively. Jangam Dravyas are described in Ayurvedic literature from different viewpoints and in different branches. Gross descriptions are available in the Samhitas, but they are not in a standard format and are not compiled according to their Guna Karma, Upayogitwa, Vyadhiharatwa, Kalpa, etc. Although very effective, their use in Chikitsa is minimal because ready references are not available. Owing to the sheer need to compile these references, this topic was selected for study; the basic aim is to prepare a complete database of Jangam Dravya. Method: Selection of the topic (a fundamental and literary study), selection of material, selection of database software and font, collection of data and preparation of a master chart, preparation of the database, and interpretation and summarization of the data. Result: This paper focuses on the availability of literature on Jangam Dravya, compiled with the help of a modern tool (Microsoft Excel), and shows how a categorical interpretation of Jangam Dravya can be prepared and used with the help of the database. Conclusion: Jangam Dravyas are described in Ayurvedic literature from different viewpoints and in different branches; the importance of these Dravyas is the key point of this study.
Ma, Jinhui; Siminoski, Kerry; Alos, Nathalie; Halton, Jacqueline; Ho, Josephine; Lentle, Brian; Matzinger, MaryAnn; Shenouda, Nazih; Atkinson, Stephanie; Barr, Ronald; Cabral, David A; Couch, Robert; Cummings, Elizabeth A; Fernandez, Conrad V; Grant, Ronald M; Rodd, Celia; Sbrocchi, Anne Marie; Scharke, Maya; Rauch, Frank; Ward, Leanne M
2015-03-01
Our objectives were to assess the magnitude of the disparity in lumbar spine bone mineral density (LSBMD) Z-scores generated by different reference databases and to evaluate whether the relationship between LSBMD Z-scores and vertebral fractures (VF) varies with the choice of database. Children with leukemia underwent LSBMD measurement by cross-calibrated dual-energy x-ray absorptiometry, with Z-scores generated according to the Hologic and Lunar databases. VF were assessed by the Genant method on spine radiographs. Logistic regression was used to assess the association between fractures and LSBMD Z-scores. Net reclassification improvement and area under the receiver operating characteristic curve were calculated to assess the predictive accuracy of LSBMD Z-scores for VF. For the 186 children from 0 to 18 years of age, 6 different age ranges were studied. The Z-scores generated for the 0 to 18 group were highly correlated (r ≥ 0.90), but the proportion of children with LSBMD Z-scores ≤ -2.0 among those with VF varied substantially (from 38% to 66%). Odds ratios (OR) for the association between LSBMD Z-score and VF were similar regardless of database (from OR = 1.92, 95% confidence interval 1.44 to 2.56, to OR = 2.70, 95% confidence interval 1.70 to 4.28). Area under the receiver operating characteristic curve and net reclassification improvement ranged from 0.71 to 0.75 and from -0.15 to 0.07, respectively. Although the use of an LSBMD Z-score threshold as part of the definition of osteoporosis in a child with VF does not appear valid, the study of relationships between BMD and VF is valid regardless of the BMD database used.
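The abstract above reports discrimination statistics (area under the receiver operating characteristic curve) for LSBMD Z-scores as a predictor of vertebral fractures. As a hedged illustration of what that statistic measures, the sketch below computes AUC as the Mann-Whitney concordance probability; the Z-score values are invented for illustration, not taken from the study.

```python
# Sketch: AUC computed as the Mann-Whitney rank statistic -- the
# probability that a randomly chosen child with a vertebral fracture
# has a lower LSBMD Z-score than one without. Data are hypothetical.

def auc_lower_score_predicts_event(scores_event, scores_no_event):
    """AUC for a marker where LOWER values predict the event
    (lower Z-score -> higher fracture risk). Ties count half."""
    pairs = concordant = 0
    for ze in scores_event:
        for zn in scores_no_event:
            pairs += 1
            if ze < zn:
                concordant += 1
            elif ze == zn:
                concordant += 0.5
    return concordant / pairs

# Hypothetical Z-scores:
with_vf = [-2.8, -2.1, -1.5, -0.9]
without_vf = [-1.2, -0.4, 0.3, 1.1, -2.0]
auc = auc_lower_score_predicts_event(with_vf, without_vf)
print(round(auc, 2))  # -> 0.85 for these toy values
```

An AUC near 0.5 would mean the Z-score carries no fracture information; the study's reported range (0.71 to 0.75) indicates moderate discrimination regardless of reference database.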
Vesco, Umberto; Knap, Nataša; Labruna, Marcelo B; Avšič-Županc, Tatjana; Estrada-Peña, Agustín; Guglielmone, Alberto A; Bechara, Gervasio H; Gueye, Arona; Lakos, Andras; Grindatto, Anna; Conte, Valeria; De Meneghi, Daniele
2011-05-01
Tick-borne zoonoses (TBZ) are emerging diseases worldwide. A large amount of information (e.g. case reports, results of epidemiological surveillance, etc.) is dispersed across various reference sources (ISI and non-ISI journals, conference proceedings, technical reports, etc.). An integrated database, derived from the ICTTD-3 project (http://www.icttd.nl), was developed in order to gather TBZ records in the (sub-)tropics, collected both by the authors and by collaborators worldwide. A dedicated website (http://www.tickbornezoonoses.org) was created to promote collaboration and circulate information. The data collected are made freely available to researchers for analysis by spatial methods, integrating mapped ecological factors for predicting TBZ risk. The authors present the assembly process of the TBZ database: the compilation of an updated list of TBZ relevant to the (sub-)tropics, the database design and its structure, the method of bibliographic search, and the assessment of the spatial precision of geo-referenced records. At the time of writing, 725 records extracted from 337 publications relating to 59 countries in the (sub-)tropics had been entered in the database. TBZ distribution maps were also produced, and imported cases have been accounted for. The most important datasets with geo-referenced records were those on Spotted Fever Group rickettsiosis in Latin America and Crimean-Congo Haemorrhagic Fever in Africa. The authors stress the need for international collaboration in data collection to update and improve the database; supervision of the data entered remains necessary. Means to foster collaboration are discussed. The paper is also intended to describe the challenges encountered in assembling spatial data from various sources and to help develop similar data collections.
Moore, Jeffrey C; Spink, John; Lipp, Markus
2012-04-01
Food ingredient fraud and economically motivated adulteration are emerging risks, but a comprehensive compilation of information about known problematic ingredients and detection methods does not currently exist. The objectives of this research were to collect such information from publicly available articles in scholarly journals and the general media, organize it into a database, and review and analyze the data to identify trends. The result is a database, to be published in the US Pharmacopeial Convention's Food Chemicals Codex, 8th edition, comprising 1305 records, including 1000 records with analytical methods, collected from 677 references. Olive oil, milk, honey, and saffron were the most common targets for adulteration reported in scholarly journals, and potentially harmful issues identified include spices diluted with lead chromate and lead tetraoxide, substitution of Chinese star anise with toxic Japanese star anise, and melamine adulteration of high-protein-content foods. High-performance liquid chromatography and infrared spectroscopy were the most common analytical detection procedures, and chemometric data analysis was used in a large number of reports. Future expansion of this database will include additional publicly available articles published before 1980 and in other languages, as well as data outside the public domain. The authors recommend in-depth analyses of individual incidents. This report describes the development and application of a database of food ingredient fraud issues drawn from publicly available references. The database provides baseline information and data useful to governments, agencies, and individual companies assessing the risks of specific products produced in specific regions, as well as of products distributed and sold in other regions. In addition, the report describes current analytical technologies for detecting food fraud and identifies trends and developments.
© 2012 US Pharmacopeia. Journal of Food Science © 2012 Institute of Food Technologists®
JRC GMO-Amplicons: a collection of nucleic acid sequences related to genetically modified organisms
Petrillo, Mauro; Angers-Loustau, Alexandre; Henriksson, Peter; Bonfini, Laura; Patak, Alex; Kreysa, Joachim
2015-01-01
The DNA target sequence is the key element in designing detection methods for genetically modified organisms (GMOs). Unfortunately this information is frequently lacking, especially for unauthorized GMOs. In addition, patent sequences are generally poorly annotated, buried in complex and extensive documentation and hard to link to the corresponding GM event. Here, we present the JRC GMO-Amplicons, a database of amplicons collected by screening public nucleotide sequence databanks by in silico determination of PCR amplification with reference methods for GMO analysis. The European Union Reference Laboratory for Genetically Modified Food and Feed (EU-RL GMFF) provides these methods in the GMOMETHODS database to support enforcement of EU legislation and GM food/feed control. The JRC GMO-Amplicons database is composed of more than 240 000 amplicons, which can be easily accessed and screened through a web interface. To our knowledge, this is the first attempt at pooling and collecting publicly available sequences related to GMOs in food and feed. The JRC GMO-Amplicons supports control laboratories in the design and assessment of GMO methods, providing, inter alia, in silico prediction of primer specificity and GM target coverage. The new tool can assist laboratories in the analysis of complex issues, such as the detection and identification of unauthorized GMOs. Notably, the JRC GMO-Amplicons database allows the retrieval and characterization of GMO-related sequences included in patent documentation. Finally, it can help annotate poorly described GM sequences and identify new relevant GMO-related sequences in public databases. The JRC GMO-Amplicons is freely accessible through a web-based portal hosted on the EU-RL GMFF website. Database URL: http://gmo-crl.jrc.ec.europa.eu/jrcgmoamplicons/ PMID:26424080
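The core operation behind the amplicon screening described above, in silico determination of PCR amplification, can be sketched in a few lines: a forward primer and the reverse complement of a reverse primer delimit the predicted amplicon. This is a simplified illustration with exact matching only (real tools tolerate mismatches and degenerate bases), and the primers and template below are hypothetical, not GMOMETHODS assays.

```python
# Sketch of in-silico PCR: locate the forward primer, then the
# reverse complement of the reverse primer downstream of it, and
# return the bounded region as the predicted amplicon.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def insilico_pcr(template, fwd, rev):
    """Return the predicted amplicon (fwd .. revcomp(rev)) or None."""
    start = template.find(fwd)
    if start == -1:
        return None
    site = revcomp(rev)
    end = template.find(site, start + len(fwd))
    if end == -1:
        return None
    return template[start:end + len(site)]

# Hypothetical template and primers:
template = "TTTACGGATCCA" + "GATTACA" + "AAGCTTGG" + "CCC"
fwd = "ACGGATCC"
rev = revcomp("AAGCTTGG")  # reverse primer anneals to the bottom strand
print(insilico_pcr(template, fwd, rev))
```

Screening a databank then reduces to running this check for every sequence against every reference primer pair and keeping the non-None hits.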
Information Retrieval in Telemedicine: a Comparative Study on Bibliographic Databases
Ahmadi, Maryam; Sarabi, Roghayeh Ershad; Orak, Roohangiz Jamshidi; Bahaadinbeigy, Kambiz
2015-01-01
Background and Aims: The first step in any systematic review is selection of the most valid database, the one that can provide the highest number of relevant references. This study was carried out to determine the most suitable database for information retrieval in the telemedicine field. Methods: The CINAHL, PubMed, Web of Science and Scopus databases were searched for telemedicine matched with education, cost-benefit and patient satisfaction. After analysis of the results obtained, the accuracy coefficient, sensitivity, uniqueness and overlap of the databases were calculated. Results: The databases studied differed in the number of retrieved articles. PubMed was identified as the most suitable database for retrieving information on the selected topics, with accuracy and sensitivity ratios of 50.7% and 61.4%, respectively. The percentage of uniquely retrieved articles ranged from 38% for PubMed to 3.0% for CINAHL. The highest overlap rate (18.6%) was found between PubMed and Web of Science. Less than 1% of articles were indexed in all of the searched databases. Conclusion: PubMed is suggested as the most suitable database for starting a search in telemedicine; after PubMed, Scopus and Web of Science can retrieve about 90% of the relevant articles. PMID:26236086
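The per-database retrieval metrics named above (sensitivity, uniqueness, overlap) can be illustrated with simple set arithmetic. The definitions below are assumptions for illustration, since the abstract does not spell out its formulas, and the article IDs are hypothetical.

```python
# Sketch of bibliographic-database comparison metrics over sets of
# (hypothetical) article IDs. Assumed definitions:
#   sensitivity  = records a database retrieves / all records
#                  retrieved by any database
#   uniqueness   = share of a database's records found nowhere else
#   overlap(A,B) = records common to A and B / union of all records

results = {                       # hypothetical retrieval results
    "PubMed":       {1, 2, 3, 4, 5, 6},
    "Scopus":       {2, 3, 4, 7},
    "WebOfScience": {3, 4, 8},
    "CINAHL":       {5},
}
all_records = set().union(*results.values())

def sensitivity(db):
    return len(results[db]) / len(all_records)

def uniqueness(db):
    others = set().union(*(v for k, v in results.items() if k != db))
    return len(results[db] - others) / len(results[db])

def overlap(a, b):
    return len(results[a] & results[b]) / len(all_records)

print(round(sensitivity("PubMed"), 2))              # 6 of 8 records
print(round(uniqueness("PubMed"), 2))               # {1, 6} unique
print(round(overlap("PubMed", "WebOfScience"), 2))  # {3, 4} shared
```

With real search exports, the sets would be DOIs or accession numbers after deduplication.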
Establishment of an international database for genetic variants in esophageal cancer.
Vihinen, Mauno
2016-10-01
The establishment of a database has been suggested in order to collect, organize, and distribute genetic information about esophageal cancer. The World Organization for Specialized Studies on Diseases of the Esophagus and the Human Variome Project will be in charge of a central database of information about esophageal cancer-related variations from publications, databases, and laboratories; in addition to genetic details, clinical parameters will also be included. The aim will be to get all the central players in research, clinical, and commercial laboratories to contribute. The database will follow established recommendations and guidelines. The database will require a team of dedicated curators with different backgrounds. Numerous layers of systematics will be applied to facilitate computational analyses. The data items will be extensively integrated with other information sources. The database will be distributed as open access to ensure exchange of the data with other databases. Variations will be reported in relation to reference sequences on three levels (DNA, RNA, and protein), whenever applicable. In the first phase, the database will concentrate on genetic variations, including both somatic variations and germline variations for susceptibility genes. Additional types of information can be integrated at a later stage. © 2016 New York Academy of Sciences.
The thyrotropin receptor mutation database: update 2003.
Führer, Dagmar; Lachmund, Peter; Nebel, Istvan-Tibor; Paschke, Ralf
2003-12-01
In 1999 we created a TSHR mutation database compiling TSHR mutations with their basic characteristics and associated clinical conditions (www.uni-leipzig.de/innere/tshr). Since then, more than 2887 users from 36 countries have logged into the TSHR mutation database and contributed several valuable suggestions for further improvement of the database. We now present an updated and extended version of the TSHR database, to which several novel features have been introduced: (1) detailed functional characteristics of all 65 mutations (43 activating and 22 inactivating mutations) reported to date; (2) 40 pedigrees with detailed information on molecular aspects, clinical courses and treatment options in patients with gain-of-function and loss-of-function germline TSHR mutations; (3) a first compilation of site-directed mutagenesis studies; (4) references with Medline links; (5) a user-friendly search tool for specific database searches and user-specific database output; and (6) an administrator tool for the submission of novel TSHR mutations. The TSHR mutation database is installed as one of the locus-specific HUGO mutation databases. It is listed under index TSHR 603372 (http://ariel.ucs.unimelb.edu.au/~cotton/glsdbq.htm) and can be accessed via www.uni-leipzig.de/innere/tshr.
[Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].
Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu
2015-09-01
By collecting and analyzing laryngeal cancer-related genes and miRNAs, we built a comprehensive laryngeal cancer-related gene database that, unlike current biological information databases with complex and clumsy structures, focuses on the theme of genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the web server, MySQL as the database language and PHP as the web coding language, a comprehensive database for laryngeal cancer-related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer-related genes, 243 proteins and 26 miRNAs, together with particulars such as mutations, methylations, differential expression, and the empirical references of molecules relevant to laryngeal cancer. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed, and it is maintained and updated regularly. The database for laryngeal cancer-related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
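The table layout described above (gene, protein, miRNA and clinical-information tables) can be sketched as a relational schema. The sketch below uses SQLite instead of the MySQL/PHP stack the authors used, and all column names are hypothetical; it only illustrates how the four tables might reference one another.

```python
import sqlite3

# Hypothetical relational sketch of the four tables described in the
# abstract, with foreign keys tying proteins, miRNAs and clinical
# records back to genes. Built in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gene (
    gene_id     INTEGER PRIMARY KEY,
    symbol      TEXT NOT NULL,
    mutation    TEXT,
    methylation TEXT,
    reference   TEXT
);
CREATE TABLE protein (
    protein_id INTEGER PRIMARY KEY,
    gene_id    INTEGER REFERENCES gene(gene_id),
    name       TEXT NOT NULL
);
CREATE TABLE mirna (
    mirna_id       INTEGER PRIMARY KEY,
    name           TEXT NOT NULL,
    target_gene_id INTEGER REFERENCES gene(gene_id)
);
CREATE TABLE clinical_info (
    patient_id INTEGER PRIMARY KEY,
    stage      TEXT,
    gene_id    INTEGER REFERENCES gene(gene_id)
);
""")
conn.execute("INSERT INTO gene (symbol) VALUES ('TP53')")
conn.execute(
    "INSERT INTO mirna (name, target_gene_id) VALUES ('miR-21', 1)")
row = conn.execute(
    "SELECT g.symbol, m.name FROM gene g "
    "JOIN mirna m ON m.target_gene_id = g.gene_id").fetchone()
print(row)
```

The join above is the kind of gene-to-miRNA lookup a browsing interface over such a database would issue.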
Thematic accuracy of the National Land Cover Database (NLCD) 2001 land cover for Alaska
Selkowitz, D.J.; Stehman, S.V.
2011-01-01
The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product available covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed-wing aircraft, as the high resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and the primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches. © 2011.
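The two agreement definitions used in the accuracy assessment above (map class matching the primary reference label only, versus matching either the primary or the alternate label) can be illustrated directly. The class labels below are hypothetical, chosen only to show why the second definition always yields equal or higher accuracy.

```python
# Sketch: overall thematic accuracy under two agreement definitions.
# Each sample pairs a map class with a primary and an optional
# alternate reference label (hypothetical values).

samples = [  # (map_class, primary_ref, alternate_ref)
    ("forest",  "forest",  None),
    ("shrub",   "herb",    "shrub"),
    ("water",   "water",   None),
    ("herb",    "shrub",   "forest"),
    ("wetland", "wetland", "water"),
]

primary_only = sum(m == p for m, p, _ in samples) / len(samples)
primary_or_alt = sum(m in (p, a) for m, p, a in samples) / len(samples)
print(primary_only, primary_or_alt)  # the second can only be >= the first
```

The same asymmetry appears in the reported figures (59.4% vs 76.2% at Level II), because the primary-or-alternate rule forgives ambiguous reference pixels.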
Morales, Marco U; Saker, Saker; Wilde, Craig; Pellizzari, Carlo; Pallikaris, Aristophanes; Notaroberto, Neil; Rubinstein, Martin; Rui, Chiara; Limoli, Paolo; Smolek, Michael K; Amoaku, Winfried M
2016-11-01
The purpose of this study was to establish a normal reference database for fixation stability measured with the bivariate contour ellipse area (BCEA) in the Macular Integrity Assessment (MAIA) microperimeter. Subjects were 358 healthy volunteers who had the MAIA examination. Fixation stability was assessed using two BCEA fixation indices (63% and 95% proportional values) and the percentage of fixation points within 1° and 2° from the fovea (P1 and P2). Statistical analysis was performed with linear regression and Pearson's product moment correlation coefficient. Average areas of 0.80 deg² (min = 0.03, max = 3.90, SD = 0.68) for the index BCEA@63% and 2.40 deg² (min = 0.20, max = 11.70, SD = 2.04) for the index BCEA@95% were found. The average values of P1 and P2 were 95% (min = 76, max = 100, SD = 5.31) and 99% (min = 91, max = 100, SD = 1.42), respectively. The Pearson's product moment test showed an almost perfect correlation index, r = 0.999, between BCEA@63% and BCEA@95%. Index P1 showed a very strong correlation with BCEA@63%, r = -0.924, as well as with BCEA@95%, r = -0.925. Index P2 demonstrated a slightly lower correlation with both BCEA@63% and BCEA@95%, r = -0.874 and -0.875, respectively. The single parameter of the BCEA@95% may be taken as accurately reporting fixation stability and serves as a reference database of normal subjects with a cutoff area of 2.40 ± 2.04 deg² in the MAIA microperimeter. Fixation stability can be measured with different indices. This study originates reference fixation values for the MAIA using a single fixation index.
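A common formulation of the BCEA in the microperimetry literature (assumed here; the abstract does not give the formula) is BCEA(P) = 2kπ·σH·σV·√(1 − ρ²), where σH and σV are the standard deviations of horizontal and vertical fixation positions, ρ their correlation, and k = −ln(1 − P), so k = 1 for the ~63% contour and k ≈ 3.0 for 95%. The sketch below applies it to hypothetical fixation points; it is an illustration of the index, not the MAIA's internal computation.

```python
import math

# Sketch: bivariate contour ellipse area (BCEA) of fixation points,
# using the assumed formula BCEA(P) = 2*k*pi*sH*sV*sqrt(1-rho^2)
# with k = -ln(1 - P).

def bcea(points, proportion):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    rho = (sum((x - mx) * (y - my) for x, y in points)
           / ((n - 1) * sx * sy))
    k = -math.log(1 - proportion)
    return 2 * k * math.pi * sx * sy * math.sqrt(1 - rho ** 2)

# Hypothetical fixation positions in degrees:
pts = [(0.1, 0.0), (-0.2, 0.1), (0.0, -0.1), (0.3, 0.2), (-0.1, -0.2)]
b63 = bcea(pts, 0.63)
b95 = bcea(pts, 0.95)
print(round(b63, 3), round(b95, 3))
```

Under this formulation the 95% area is a fixed multiple (≈3.01×) of the 63% area for the same point cloud, which is consistent with the near-perfect r = 0.999 correlation the study reports between the two indices.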
NASA Technical Reports Server (NTRS)
1991-01-01
This catalog lists 783 citations of all NASA Special Publications, NASA Reference Publications, NASA Conference Publications, and NASA Technical Papers that were entered into the NASA Scientific and Technical Information Database during the years 1987 through 1990. The entries are grouped by subject category. Indexes of subject terms, personal authors, and NASA report numbers are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alekhin, S.I.; Ezhela, V.V.; Filimonov, B.B.
We present an indexed guide to the literature of experimental particle physics for the years 1988-1992. About 4,000 papers are indexed by Beam/Target/Momentum, Reaction Momentum (including the final state), Final State Particle, and Accelerator/Detector/Experiment. All indices are cross-referenced to the paper's title and reference in the ID/Reference/Title Index. The information in this guide is also publicly available from a regularly updated computer database.
NASA Technical Reports Server (NTRS)
1990-01-01
This catalog lists 190 citations of all NASA Special Publications, NASA Reference Publications, NASA Conference Publications, and NASA Technical Papers that were entered into the NASA scientific and technical information database during accession year 1989. The entries are grouped by subject category. Indexes of subject terms, personal authors, and NASA report numbers are provided.
NASA Technical Reports Server (NTRS)
1993-01-01
This catalog lists 458 citations of all NASA Special Publications, NASA Reference Publications, NASA Conference Publications, and NASA Technical Papers that were entered into the NASA Scientific and Technical Information database during accession years 1991 through 1992. The entries are grouped by subject category. Indexes of subject terms, personal authors, and NASA report numbers are provided.
NASA Technical Reports Server (NTRS)
1988-01-01
This catalog lists 239 citations of all NASA Special Publications, NASA Reference Publications, NASA Conference Publications, and NASA Technical Papers that were entered in the NASA scientific and technical information database during accession year 1987. The entries are grouped by subject category. Indexes of subject terms, personal authors, and NASA report numbers are provided.
Tanabe, Akifumi S; Toju, Hirokazu
2013-01-01
Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. 
Therefore, we need to accelerate the registration of reference barcode sequences to apply high-throughput DNA barcoding to genus or species level identification in biodiversity research.
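The "1-nearest-neighbor" (1-NN) assignment evaluated above can be sketched compactly: a query sequence inherits the taxon of its most similar reference sequence. Real pipelines use BLAST similarity scores; a simple per-position identity over toy, equal-length sequences stands in for it here, and the sequences and taxon names are hypothetical.

```python
# Sketch of 1-NN taxonomic assignment: the query is labeled with the
# taxon of the most similar reference sequence (BLAST-top-hit logic,
# approximated here by per-position identity).

reference_db = {   # hypothetical reference sequences -> taxa
    "ACGTACGTGG": "Genus_A species_1",
    "ACGTTCGAGG": "Genus_A species_2",
    "TTGACCGTAA": "Genus_B species_3",
}

def identity(a, b):
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def assign_1nn(query):
    best = max(reference_db, key=lambda ref: identity(query, ref))
    return reference_db[best], identity(query, best)

taxon, score = assign_1nn("ACGTACGTGA")
print(taxon, round(score, 1))
```

The failure mode the benchmark exposes falls out of this logic: when the query's true species is absent from `reference_db`, `max` still returns some nearest reference, so 1-NN confidently misidentifies rather than abstaining, which is what methods like QCauto are designed to avoid.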
Ferner, Robin E; Aronson, Jeffrey K
2016-12-14
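The variant-generation procedure the study describes (letter substitutions, omissions, additions, transpositions, duplications, and deduplications) can be sketched as a single-edit generator. This is an illustrative reconstruction, not the authors' actual script; `pubmed_query` merely formats the textword search string quoted in the abstract:

```python
import string

def spelling_variants(name):
    """Single-edit misspellings of a drug name: substitutions, omissions,
    additions, transpositions, duplications, and deduplications."""
    name = name.lower()
    splits = [(name[:i], name[i:]) for i in range(len(name) + 1)]
    subs    = {a + c + b[1:] for a, b in splits if b for c in string.ascii_lowercase}
    omits   = {a + b[1:] for a, b in splits if b}
    adds    = {a + c + b for a, b in splits for c in string.ascii_lowercase}
    transps = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    dups    = {a + b[0] + b for a, b in splits if b}
    dedups  = {a + b[1:] for a, b in splits if len(b) > 1 and b[0] == b[1]}
    return (subs | omits | adds | transps | dups | dedups) - {name}

def pubmed_query(standard, variant):
    """Textword query for a variant, excluding hits already found
    under the standard name (the study's search pattern)."""
    return f"{variant}[tw] NOT {standard}[tw]"
```

In practice each candidate variant would then be searched in PubMed; only variants returning hits not found under the standard name count as hidden reference variants. For example, the common i-to-y substitution yields `"gentamycin"` from `"gentamicin"`.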
Astrobib: A Literature Referencing System Compatible with the AAS/WGAS Latex Macros
NASA Astrophysics Data System (ADS)
Ferguson, H. C.
1993-12-01
Perhaps the most tedious part of preparing an article is dealing with the references: keeping track of which have been cited and formatting the reference section at the end of the paper in accordance with a particular journal's requirements. This package aims to simplify this task, while remaining compatible with the AAS/WGAS latex macros (as well as the latex styles distributed by A&A and MNRAS). For lack of a better name, we call this package Astrobib. The astrobib package can be used on two levels. The first uses the standard ``bibtex'' software to collect all the references cited in the text and format the reference list at the end of the paper according to the style requirements of the journal. All we have done here is to modify the public-domain ``chicago.bst'' bibtex styles to produce citations in the formats required by ApJ, AJ, A&A, MNRAS, and PASP. All implement, to first order, the formats for references specified in the 1992 or 1993 ``Instructions to Authors'' of the different journals. If the paper is rejected by MNRAS, changing three lines will allow it to be printed in ApJ format. The second level overcomes two drawbacks of bibtex: the tedious use of braces and commas in the bibliography database, and the requirement that the author remember citation keys, typically constructed from the authors' initials and the date. With Astrobib the bibliography is kept in a much simpler database (based on the Unix `refer' style), and a couple of Unix-specific programs parse the database into bibtex format and preprocess the text to convert ``loose'' citations into bibtex citation keys. Loose citations allow the author to cite just a few authors (in any order) and perhaps the year or a word of the title of the conference proceedings. Documentation and instructions for electronic access to the package will be available at the meeting. 
Support for this work was provided by the SERC and by NASA through grant HF1043 awarded by the STScI which is operated by AURA, Inc., for NASA under contract NAS5-26555.
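The idea behind "loose" citations, matching a few author surnames in any order (plus perhaps a year) against entries in the bibliography database, can be sketched as follows. This is a hypothetical illustration of the matching principle, not Astrobib's actual parser, and the database layout shown is invented for the example:

```python
def match_loose_citation(loose, database):
    """Resolve a 'loose' citation -- a few author surnames in any order,
    optionally with a year -- to a unique database entry's citation key.
    Raises if the citation is ambiguous or unmatched."""
    words = set(loose.lower().split())
    hits = [entry for entry in database
            if words <= set(entry["authors"].lower().split()) | {entry["year"]}]
    if len(hits) != 1:
        raise LookupError(f"{loose!r} matched {len(hits)} entries")
    return hits[0]["key"]

# Toy bibliography database (hypothetical structure):
db = [{"key": "ferguson1993", "authors": "Ferguson", "year": "1993"},
      {"key": "smith1990", "authors": "Smith Jones", "year": "1990"}]
```

With this sketch, `match_loose_citation("jones smith 1990", db)` resolves to `"smith1990"` regardless of author order, which is the convenience the loose-citation mechanism provides.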
Tanabe, Akifumi S.; Toju, Hirokazu
2013-01-01
Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used “1-nearest-neighbor” (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. 
Therefore, we need to accelerate the registration of reference barcode sequences to apply high-throughput DNA barcoding to genus or species level identification in biodiversity research. PMID:24204702
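The 1-NN assignment principle benchmarked above can be sketched in a few lines. The toy positional-identity score below stands in for the BLAST alignment used in practice; it is a sketch of the method's logic, not the authors' implementation:

```python
def identity(a, b):
    """Toy similarity: fraction of matching positions between two
    sequences (a stand-in for a real BLAST alignment score)."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def assign_1nn(query, reference_db):
    """1-nearest-neighbor barcoding: adopt the taxon of the most
    similar reference sequence (the BLAST-top-hit reference)."""
    return max(reference_db, key=lambda rec: identity(query, rec[1]))[0]

# Toy reference database of (taxon, barcode sequence) pairs:
refs = [("Escherichia coli", "ACGTACGTAA"),
        ("Bacillus subtilis", "TTGTACCGTA")]
```

Here `assign_1nn("ACGTACGTTA", refs)` returns `"Escherichia coli"`. Note the failure mode the benchmark exposes: if the query's true species is absent from `refs`, 1-NN still returns the nearest taxon, producing the high misidentification rates reported for incomplete reference databases.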
GetData: A filesystem-based, column-oriented database format for time-ordered binary data
NASA Astrophysics Data System (ADS)
Wiebe, Donald V.; Netterfield, Calvin B.; Kisner, Theodore S.
2015-12-01
The GetData Project is the reference implementation of the Dirfile Standards, a filesystem-based, column-oriented database format for time-ordered binary data. Dirfiles provide a fast, simple format for storing and reading data, suitable for both quicklook and analysis pipelines. GetData provides a C API and bindings exist for various other languages. GetData is distributed under the terms of the GNU Lesser General Public License.
Wilson, Frederic H.; Hults, Chad P.; Mull, Charles G.; Karl, Susan M.
2015-12-31
This Alaska compilation is unique in that it is integrated with a rich database of information provided in the spatial datasets and standalone attribute databases. Within the spatial files every line and polygon is attributed to its original source; the references to these sources are contained in related tables, as well as in stand-alone tables. Additional attributes include typical lithology, geologic setting, and age range for the map units. Also included are tables of radiometric ages.
Turk, Gregory C; Sharpless, Katherine E; Cleveland, Danielle; Jongsma, Candice; Mackey, Elizabeth A; Marlow, Anthony F; Oflaz, Rabia; Paul, Rick L; Sieber, John R; Thompson, Robert Q; Wood, Laura J; Yu, Lee L; Zeisler, Rolf; Wise, Stephen A; Yen, James H; Christopher, Steven J; Day, Russell D; Long, Stephen E; Greene, Ella; Harnly, James; Ho, I-Pin; Betz, Joseph M
2013-01-01
Standard Reference Material 3280 Multivitamin/ Multielement Tablets was issued by the National Institute of Standards and Technology in 2009, and has certified and reference mass fraction values for 13 vitamins, 26 elements, and two carotenoids. Elements were measured using two or more analytical methods at NIST with additional data contributed by collaborating laboratories. This reference material is expected to serve a dual purpose: to provide quality assurance in support of a database of dietary supplement products and to provide a means for analysts, dietary supplement manufacturers, and researchers to assess the appropriateness and validity of their analytical methods and the accuracy of their results.
The Biological Macromolecule Crystallization Database and NASA Protein Crystal Growth Archive
Gilliland, Gary L.; Tung, Michael; Ladner, Jane
1996-01-01
The NIST/NASA/CARB Biological Macromolecule Crystallization Database (BMCD), NIST Standard Reference Database 21, contains crystal data and crystallization conditions for biological macromolecules. The database entries include data abstracted from published crystallographic reports. Each entry consists of information describing the biological macromolecule crystallized and crystal data and the crystallization conditions for each crystal form. The BMCD serves as the NASA Protein Crystal Growth Archive in that it contains protocols and results of crystallization experiments undertaken in microgravity (space). These database entries report the results, whether successful or not, from NASA-sponsored protein crystal growth experiments in microgravity and from microgravity crystallization studies sponsored by other international organizations. The BMCD was designed as a tool to assist x-ray crystallographers in the development of protocols to crystallize biological macromolecules, those that have previously been crystallized, and those that have not been crystallized. PMID:11542472
GIS and RDBMS Used with Offline FAA Airspace Databases
NASA Technical Reports Server (NTRS)
Clark, J.; Simmons, J.; Scofield, E.; Talbott, B.
1994-01-01
A geographic information system (GIS) and relational database management system (RDBMS) were used in a Macintosh environment to access, manipulate, and display off-line FAA databases of airport and navigational aid locations, airways, and airspace boundaries. This proof-of-concept effort used data available from the Adaptation Controlled Environment System (ACES) and Digital Aeronautical Chart Supplement (DACS) databases to allow FAA cartographers and others to create computer-assisted charts and overlays as reference material for air traffic controllers. These products were created on an engineering model of the future GRASP (GRaphics Adaptation Support Position) workstation that will be used to make graphics and text products for the Advanced Automation System (AAS), which will upgrade and replace the current air traffic control system. Techniques developed during the prototyping effort have shown the viability of using databases to create graphical products without the need for an intervening data entry step.
Lupiañez-Barbero, Ascension; González Blanco, Cintia; de Leiva Hidalgo, Alberto
2018-05-23
Food composition tables and databases (FCTs or FCDBs) provide the information needed to estimate intake of nutrients and other food components. In Spain, the lack of a reference database has resulted in the use of different FCTs/FCDBs in nutritional surveys and research studies, as well as in dietetic software for diet analysis. As a result, biased, non-comparable results are obtained, and healthcare professionals are rarely aware of these limitations. AECOSAN and the BEDCA association developed an FCDB following European standards, the Spanish Food Composition Database Network (RedBEDCA). The current database has a limited number of foods and food components and barely contains processed foods, which limits its use in epidemiological studies and in the daily practice of healthcare professionals. Copyright © 2018 SEEN y SED. Publicado por Elsevier España, S.L.U. All rights reserved.
Virus Database and Online Inquiry System Based on Natural Vectors.
Dong, Rui; Zheng, Hui; Tian, Kun; Yau, Shek-Chung; Mao, Weiguang; Yu, Wenping; Yin, Changchuan; Yu, Chenglong; He, Rong Lucy; Yang, Jie; Yau, Stephen St
2017-01-01
We construct a virus database called VirusDB (http://yaulab.math.tsinghua.edu.cn/VirusDB/) and an online inquiry system to serve people who are interested in viral classification and prediction. The database stores all viral genomes, their corresponding natural vectors, and the classification information of the single/multiple-segmented viral reference sequences downloaded from the National Center for Biotechnology Information. The online inquiry system serves the purpose of computing natural vectors and their distances based on submitted genomes, providing an online interface for accessing and using the database for viral classification and prediction, and running back-end processes for automatic and manual updating of database content to synchronize with GenBank. Submitted genome data in FASTA format are processed, and the prediction results, with the five closest neighbors and their classifications, are returned by email. Given the one-to-one correspondence between sequence and natural vector, its time efficiency, and its high accuracy, the natural vector method is a significant advance over alignment methods, which makes VirusDB a useful database for further research.
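A natural vector encodes, for each nucleotide, its count, mean position, and a normalized second central moment. The sketch below uses one common 12-dimensional formulation from the natural-vector literature; the exact formulation used by VirusDB may differ in detail:

```python
def natural_vector(seq):
    """12-dimensional natural vector of a DNA sequence: for each
    nucleotide k in A, C, G, T, the count n_k, the mean position mu_k,
    and the normalized second central moment D2_k (one common
    formulation; details may differ from VirusDB's)."""
    n = len(seq)
    vec = []
    for k in "ACGT":
        pos = [i + 1 for i, c in enumerate(seq) if c == k]
        n_k = len(pos)
        mu = sum(pos) / n_k if n_k else 0.0
        d2 = sum((p - mu) ** 2 for p in pos) / (n_k * n) if n_k else 0.0
        vec += [n_k, mu, d2]
    return vec
```

Classification then reduces to finding the reference genomes whose natural vectors are nearest (e.g., by Euclidean distance) to the query's vector, which is what makes the approach alignment-free and fast.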
The MAR databases: development and implementation of databases specific for marine metagenomics
Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen
2018-01-01
We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, serving as a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of their level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. PMID:29106641
Evaluating the quality of Marfan genotype-phenotype correlations in existing FBN1 databases.
Groth, Kristian A; Von Kodolitsch, Yskert; Kutsche, Kerstin; Gaustadnes, Mette; Thorsen, Kasper; Andersen, Niels H; Gravholt, Claus H
2017-07-01
Genetic FBN1 testing is pivotal for confirming the clinical diagnosis of Marfan syndrome. In an effort to evaluate variant causality, FBN1 databases are often used. We evaluated the current databases regarding FBN1 variants and validated associated phenotype records with a new Marfan syndrome geno-phenotyping tool called the Marfan score. We evaluated four databases (UMD-FBN1, ClinVar, the Human Gene Mutation Database (HGMD), and Uniprot) containing 2,250 FBN1 variants supported by 4,904 records presented in 307 references. The Marfan score calculated for phenotype data from the records quantified variant associations with Marfan syndrome phenotype. We calculated a Marfan score for 1,283 variants, of which we confirmed the database diagnosis of Marfan syndrome in 77.1%. This represented only 35.8% of the total registered variants; 18.5-33.3% (UMD-FBN1 versus HGMD) of variants associated with Marfan syndrome in the databases could not be confirmed by the recorded phenotype. FBN1 databases can be imprecise and incomplete. Data should be used with caution when evaluating FBN1 variants. At present, the UMD-FBN1 database seems to be the biggest and best curated; therefore, it is the most comprehensive database. However, the need for better genotype-phenotype curated databases is evident, and we hereby present such a database. Genet Med advance online publication 01 December 2016.
Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M
2018-01-01
Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; nevertheless, many databases lack reliable diagnosis code information. Our objective was to develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, the algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using diagnosis codes as the reference standard. The chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm that identifies PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
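The four validation measures above are simple functions of the 2x2 table of algorithm result versus reference standard. The counts below are illustrative only (chosen to reproduce the reported pattern of high PPV/NPV with low sensitivity), not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Accuracy of a case-finding algorithm against a reference
    standard, from true/false positives and negatives."""
    return {
        "PPV": tp / (tp + fp),          # algorithm-positives that are true cases
        "NPV": tn / (tn + fn),          # algorithm-negatives that are true non-cases
        "sensitivity": tp / (tp + fn),  # true cases the algorithm finds
        "specificity": tn / (tn + fp),  # true non-cases the algorithm excludes
    }

# Illustrative counts (not study data):
m = diagnostic_metrics(tp=85, fp=15, fn=200, tn=99700)
```

With these counts, PPV is 85% while sensitivity is below 30%: the algorithm rarely mislabels a non-case, but misses many true cases, which is acceptable for cohort identification where false positives are the main concern.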
Karger, Axel; Stock, Rüdiger; Ziller, Mario; Elschner, Mandy C; Bettin, Barbara; Melzer, Falk; Maier, Thomas; Kostrzewa, Markus; Scholz, Holger C; Neubauer, Heinrich; Tomaso, Herbert
2012-10-10
Burkholderia (B.) pseudomallei and B. mallei are genetically closely related species. B. pseudomallei causes melioidosis in humans and animals, whereas B. mallei is the causative agent of glanders in equines and rarely also in humans. Both agents have been classified by the CDC as priority category B biological agents. Rapid identification is crucial, because both agents are intrinsically resistant to many antibiotics. Matrix-assisted laser desorption/ionisation mass spectrometry (MALDI-TOF MS) has the potential of rapid and reliable identification of pathogens, but is limited by the availability of a database containing validated reference spectra. The aim of this study was to evaluate the use of MALDI-TOF MS for the rapid and reliable identification and differentiation of B. pseudomallei and B. mallei and to build up a reliable reference database for both organisms. A collection of ten B. pseudomallei and seventeen B. mallei strains was used to generate a library of reference spectra. Samples of both species could be identified by MALDI-TOF MS, if a dedicated subset of the reference spectra library was used. In comparison with samples representing B. mallei, higher genetic diversity among B. pseudomallei was reflected in the higher average Euclidean distances between the mass spectra and a broader range of identification score values obtained with commercial software for the identification of microorganisms. The type strain of B. pseudomallei (ATCC 23343) was isolated decades ago and is outstanding in the spectrum-based dendrograms, probably due to massive methylations as indicated by two intensive series of mass increments of 14 Da specifically and reproducibly found in the spectra of this strain. Handling of pathogens under BSL 3 conditions is dangerous and cumbersome but can be minimized by inactivation of bacteria with ethanol, subsequent protein extraction under BSL 1 conditions, and MALDI-TOF MS analysis being faster than nucleic acid amplification methods. 
Our spectra demonstrated a higher homogeneity in B. mallei than in B. pseudomallei isolates. As expected for closely related species, the identification process with MALDI Biotyper software (Bruker Daltonik GmbH, Bremen, Germany) requires the careful selection of spectra from reference strains. When a dedicated reference set is used and spectra of high quality are acquired, it is possible to distinguish both species unambiguously. The need for a careful curation of reference spectra databases is stressed.
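The pairwise spectrum comparison underlying the dendrograms can be sketched as a Euclidean distance between intensity vectors binned onto a common m/z grid. This illustrates the distance principle only, not the Biotyper's actual scoring:

```python
import math

def spectrum_distance(a, b):
    """Euclidean distance between two equal-length binned
    intensity vectors (spectra on a common m/z grid)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean_pairwise_distance(spectra):
    """Average distance within a set of spectra; larger values indicate
    greater spectral diversity, as reported for B. pseudomallei."""
    pairs = [(i, j) for i in range(len(spectra))
             for j in range(i + 1, len(spectra))]
    return sum(spectrum_distance(spectra[i], spectra[j])
               for i, j in pairs) / len(pairs)
```

A set of strains with a higher `mean_pairwise_distance` between their spectra corresponds to the broader within-species diversity observed for B. pseudomallei relative to B. mallei.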
Hegedűs, Tamás; Chaubey, Pururawa Mayank; Várady, György; Szabó, Edit; Sarankó, Hajnalka; Hofstetter, Lia; Roschitzki, Bernd; Sarkadi, Balázs
2015-01-01
Based on recent results, the determination of the easily accessible red blood cell (RBC) membrane proteins may provide new diagnostic possibilities for assessing mutations, polymorphisms or regulatory alterations in diseases. However, the analysis of the current mass spectrometry-based proteomics datasets and other major databases reveals inconsistencies: the results show wide scatter and only limited overlap among the identified RBC membrane proteins. Here, we applied membrane-specific proteomics studies in human RBC, compared these results with the data in the literature, and generated a comprehensive and expandable database using all available data sources. The integrated web database now refers to proteomic, genetic and medical databases as well, and contains an unexpectedly large number of validated membrane proteins previously thought to be specific for other tissues and/or related to major human diseases. Since the determination of protein expression in RBC provides a method to indicate pathological alterations, our database should facilitate the development of RBC membrane biomarker platforms and provide a unique resource to aid related further research and diagnostics. Database URL: http://rbcc.hegelab.org PMID:26078478
Disbiome database: linking the microbiome to disease.
Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart
2018-06-04
Recent research has provided fascinating indications and evidence that host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the published studies. Its strength lies in its cross-references to other databases, which enable both specific and broad search strategies, and in its human annotation, which ensures a simple and structured presentation of the available data.
An Index to PGE-Ni-Cr Deposits and Occurrences in Selected Mineral-Occurrence Databases
Causey, J. Douglas; Galloway, John P.; Zientek, Michael L.
2009-01-01
Databases of mineral deposits and occurrences are essential to conducting assessments of undiscovered mineral resources. In the U.S. Geological Survey's (USGS) global assessment of undiscovered resources of copper, potash, and the platinum-group elements (PGE), only a few mineral deposit types will be evaluated. For example, only porphyry-copper and sediment-hosted copper deposits will be considered for the copper assessment. To support the global assessment, the USGS prepared comprehensive compilations of the occurrences of these two deposit types in order to develop grade and tonnage models and delineate permissive areas for undiscovered deposits of those types. This publication identifies previously published databases and database records that describe PGE, nickel, and chromium deposits and occurrences. Nickel and chromium were included in this overview because of the close association of PGE with nickel and chromium mineralization. Users of this database will need to refer to the original databases for detailed information about the deposits and occurrences. This information will be used to develop a current and comprehensive global database of PGE deposits and occurrences.
The U.S. Dairy Forage Research Center (USDFRC) Condensed Tannin NMR Database.
Zeller, Wayne E; Schatz, Paul F
2017-06-28
This Perspective describes a solution-state NMR database for flavan-3-ol monomers and condensed tannin dimers through tetramers, compiled from the literature through 2015 and searchable by structure, molecular formula, degree of polymerization, and 1H and 13C chemical shifts of the condensed tannins. Citations for all literature references are provided and should serve as a valuable resource for scientists working in the field of condensed tannin research. The database will be periodically updated as additional information becomes available, typically on a yearly basis, and is available free of charge from the U.S. Dairy Forage Research Center (USDFRC) website.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calm, J.M.
1998-03-15
The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to thermophysical property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to help manufacturers and those using alternative refrigerants make comparisons and determine differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in the research and design of air conditioning and refrigeration equipment. It also references documents addressing the compatibility of refrigerants and lubricants with other materials.
Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
2017-09-13
Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
[Relevance of the hemovigilance regional database for the shared medical file identity server].
Doly, A; Fressy, P; Garraud, O
2008-11-01
The French Health Products Safety Agency coordinates the national initiative for computerization of blood product traceability within regional blood banks and public and private hospitals. The Auvergne-Loire Regional French Blood Service, based in Saint-Etienne, together with a number of public hospitals, set up a transfusion data network named EDITAL. After four years of progressive implementation and experimentation, software enabling standardized data exchange has built up a regional nominative database, endorsed by the Traceability Computerization National Committee in 2004. This database now provides secure web access to a regional transfusion history, enabling biologists and all hospital and family practitioners to manage patient follow-up. By running independently of its partners' software, the EDITAL database serves as a reference for the regional identity server.
... the amount of vitamin K they contain (USDA- ARS, 2015). Table 2. Sources of vitamin K. Food ... U.S. Department of Agriculture, Agricultural Research Service USDA-ARS. (2015). National Nutrient Database for Standard Reference, Release ...
Higgins, Victoria; Truong, Dorothy; Woroch, Amy; Chan, Man Khun; Tahmasebi, Houman; Adeli, Khosrow
2018-03-01
Evidence-based reference intervals (RIs) are essential to accurately interpret pediatric laboratory test results. To fill gaps in pediatric RIs, the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) project developed an age- and sex-specific pediatric RI database based on healthy pediatric subjects. Originally established for Abbott ARCHITECT assays, CALIPER RIs were transferred to assays on Beckman, Roche, Siemens, and Ortho analytical platforms. This study provides transferred reference intervals for 29 biochemical assays on the Ortho VITROS 5600 Chemistry System (Ortho). Based on Clinical and Laboratory Standards Institute (CLSI) guidelines, a method comparison analysis was performed by measuring approximately 200 patient serum samples using Abbott and Ortho assays. The equation of the line of best fit was calculated and the appropriateness of the linear model was assessed. This equation was used to transfer RIs from Abbott to Ortho assays. Transferred RIs were verified using 84 healthy pediatric serum samples from the CALIPER cohort. RIs for most chemistry analytes transferred successfully from Abbott to Ortho assays. Calcium and CO2 did not meet the statistical criteria for transference (r2 < 0.70). Of the 32 transferred reference intervals, 29 were successfully verified, with approximately 90% of results from reference samples falling within the transferred confidence limits. Transferred RIs for total bilirubin, magnesium, and LDH did not meet verification criteria and are not reported. This study broadens the utility of the CALIPER pediatric RI database to laboratories using Ortho VITROS 5600 biochemical assays. Clinical laboratories should verify CALIPER reference intervals for their specific analytical platform and local population, as recommended by CLSI. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
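The transference procedure described above — regress paired patient results measured on both platforms, check the linear fit, map the interval limits through the fitted line, then verify that roughly 90% of healthy reference samples fall inside — can be sketched as follows. This is an illustrative sketch using an ordinary least-squares fit; the function names and thresholds are assumptions, not the CALIPER implementation.

```python
import numpy as np

def transfer_reference_interval(x_old, y_new, ri_low, ri_high, r2_min=0.70):
    """Transfer a reference interval between assays via a line of best fit.

    x_old, y_new: paired patient results on the established and new assay.
    Raises if the linear model is too weak (cf. calcium and CO2 above).
    """
    slope, intercept = np.polyfit(x_old, y_new, 1)   # line of best fit
    r2 = np.corrcoef(x_old, y_new)[0, 1] ** 2
    if r2 < r2_min:
        raise ValueError(f"r^2 = {r2:.2f} < {r2_min}: transference not supported")
    return slope * ri_low + intercept, slope * ri_high + intercept

def verify_interval(healthy_results, low, high, min_fraction=0.90):
    """Verification step: enough healthy reference samples inside the limits."""
    healthy_results = np.asarray(healthy_results, dtype=float)
    inside = np.mean((healthy_results >= low) & (healthy_results <= high))
    return inside >= min_fraction
```

Analytes whose paired results fail the fit check are simply not transferred, mirroring how calcium and CO2 were excluded in the study above.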
New Features in the ADS Abstract Service
NASA Astrophysics Data System (ADS)
Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Kurtz, M. J.; ReyBacaicoa, V.; Murray, S. S.
2001-11-01
The ADS Abstract Service contains over 2.3 million references in four databases: Astronomy/Astrophysics/Planetary Sciences, Instrumentation, Physics/Geophysics, and Preprints. We provide abstracts and articles free to the astronomical community for all major and many smaller astronomy journals, PhD theses, conference proceedings, and technical reports. These four databases can be queried either separately or jointly. The ADS has also scanned 1.3 million pages in 180,000 articles for the ADS Article Service. This literature archive contains all major astronomy journals and many smaller journals, as well as conference proceedings, including the abstract books from all the LPSCs back to volume 2. A new feature gives our users the ability to see a list of articles that were also read by the readers of a given article. This is a powerful tool for finding out which current articles are relevant in a particular field of study. We have recently expanded the citation and reference query capabilities, allowing our users to select papers for which they want to see references or citations and then retrieve them. Another new capability is the ability to sort a list of articles by citation count. As usual, users should be reminded that the citations in ADS are incomplete because we do not obtain reference lists from all publishers. In addition, we cannot match all references (e.g. in press, private communications, author errors, some conference papers, etc.). Anyone using the citations for analysis of publishing records should keep this in mind. More work on expanding the citation and reference features is planned over the next year. ADS Home Page: http://ads.harvard.edu/
Yin, Li; Yao, Jiqiang; Gardner, Brent P; Chang, Kaifen; Yu, Fahong; Goodenow, Maureen M
2012-01-01
Next-generation sequencing (NGS) applied to human papillomaviruses (HPV) can provide sensitive methods to investigate the molecular epidemiology of multiple-type HPV infection. Currently, a genotyping system with a comprehensive collection of updated HPV reference sequences and the capacity to handle NGS data sets is lacking. HPV-QUEST was developed as an automated and rapid HPV genotyping system. The web-based HPV-QUEST subtyping algorithm was developed using HTML, PHP, the Perl scripting language, and MySQL as the database backend. HPV-QUEST includes a database of annotated HPV reference sequences with updated nomenclature, covering 5 genera, 14 species, and 150 mucosal and cutaneous types, to genotype BLAST-matched query sequences. HPV-QUEST processes up to 10 megabases of sequence within 1 to 2 minutes. Results are reported in HTML, text, and Excel formats; display E-value, BLAST score, and local and coverage identities; provide genus, species, type, infection site, and risk for the best-matched reference HPV sequence; and are ready for additional analyses.
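The reporting logic described above — ranking BLAST hits and calling a genotype only when the match is confident — might look like the following sketch. The hit fields and the E-value cutoff are assumptions for illustration, not HPV-QUEST's actual schema.

```python
def best_genotype(blast_hits, max_evalue=1e-10):
    """Call an HPV type for one query from its BLAST hits.

    Each hit is a dict with 'type', 'evalue', 'bitscore' and 'identity'
    (field names assumed for this sketch). Returns None when no hit
    passes the confidence cutoff.
    """
    confident = [h for h in blast_hits if h["evalue"] <= max_evalue]
    if not confident:
        return None
    # Rank by bit score, breaking ties on percent identity.
    best = max(confident, key=lambda h: (h["bitscore"], h["identity"]))
    return best["type"]
```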
Reference genotype and exome data from an Australian Aboriginal population for health-based research
Tang, Dave; Anderson, Denise; Francis, Richard W.; Syn, Genevieve; Jamieson, Sarra E.; Lassmann, Timo; Blackwell, Jenefer M.
2016-01-01
Genetic analyses, including genome-wide association studies and whole exome sequencing (WES), provide powerful tools for the analysis of complex and rare genetic diseases. To date there are no reference data for Aboriginal Australians to underpin the translation of health-based genomic research. Here we provide a catalogue of variants called after sequencing the exomes of 72 Aboriginal individuals to a depth of 20X coverage in ∼80% of the sequenced nucleotides. We determined 320,976 single nucleotide variants (SNVs) and 47,313 insertions/deletions using the Genome Analysis Toolkit. We had previously genotyped a subset of the Aboriginal individuals (70/72) using the Illumina Omni2.5 BeadChip platform and found ~99% concordance at overlapping sites, which suggests high quality genotyping. Finally, we compared our SNVs to six publicly available variant databases, such as dbSNP and the Exome Sequencing Project, and 70,115 of our SNVs did not overlap any of the single nucleotide polymorphic sites in all the databases. Our data set provides a useful reference point for genomic studies on Aboriginal Australians. PMID:27070114
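The concordance check against the earlier BeadChip genotypes can be illustrated with a minimal sketch; the site/genotype encoding below is an assumption for illustration, not the study's pipeline.

```python
def genotype_concordance(calls_a, calls_b):
    """Fraction of overlapping sites at which two platforms agree.

    calls_a / calls_b map a site key (e.g. 'chr1:12345') to a genotype
    string (e.g. 'A/G'). Returns None when the platforms share no sites.
    """
    shared = calls_a.keys() & calls_b.keys()
    if not shared:
        return None
    agree = sum(calls_a[s] == calls_b[s] for s in shared)
    return agree / len(shared)
```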
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Estimating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of the wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides an estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and an image quality score. Images from the Laboratory for Image and Video Engineering (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
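The initial estimate described above — the median of the finest-scale wavelet detail coefficients — is the classic robust (MAD-based) noise estimator. Below is a minimal numpy sketch with the Haar diagonal subband computed directly; the paper's curve-fitting refinement is omitted, and the helper name is an assumption.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Initial Gaussian noise estimate from the finest diagonal (HH)
    Haar wavelet coefficients, scaled by the Gaussian MAD constant.

    For pure Gaussian noise the HH coefficients are themselves ~N(0, sigma^2),
    so median(|HH|)/0.6745 approximates sigma while ignoring image structure.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # even-sized crop
    a = img[0:h:2, 0:w:2]   # top-left pixel of each 2x2 block
    b = img[0:h:2, 1:w:2]   # top-right
    c = img[1:h:2, 0:w:2]   # bottom-left
    d = img[1:h:2, 1:w:2]   # bottom-right
    hh = (a - b - c + d) / 2.0            # diagonal detail coefficients
    return np.median(np.abs(hh)) / 0.6745  # robust sigma estimate
```

On a synthetic image of pure Gaussian noise with sigma = 10, this estimator lands close to 10; on natural images the diagonal subband suppresses most (but not all) image content, which is why the paper refines the raw estimate with curve fitting.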
APPRIS 2017: principal isoforms for multiple gene sets
Rodriguez-Rivas, Juan; Di Domenico, Tomás; Vázquez, Jesús; Valencia, Alfonso
2018-01-01
The APPRIS database (http://appris-tools.org) uses protein structural and functional features and information from cross-species conservation to annotate splice isoforms in protein-coding genes. APPRIS selects a single protein isoform, the 'principal' isoform, as the reference for each gene based on these annotations. A single main splice isoform reflects the biological reality for most protein-coding genes, and APPRIS principal isoforms are the best predictors of these main protein isoforms. Here, we present updates to the database, including the addition of three new species (chimpanzee, Drosophila melanogaster and Caenorhabditis elegans), the expansion of APPRIS to cover the RefSeq gene set and the UniProtKB proteome for six species, and refinements in the core methods that make up the annotation pipeline. In addition, APPRIS now provides a measure of reliability for individual principal isoforms and updates with each release of the GENCODE/Ensembl and RefSeq reference sets. The individual GENCODE/Ensembl, RefSeq and UniProtKB reference gene sets for six organisms have been merged to produce common sets of splice variants. PMID:29069475
Velez-Montoya, Raul; Oliver, Scott C N; Olson, Jeffrey L; Fine, Stuart L; Quiroz-Mercado, Hugo; Mandava, Naresh
2014-03-01
To address the most dynamic and current issues concerning human genetics, risk factors, pharmacoeconomics, and prevention in age-related macular degeneration. An online review of the PubMed and Ovid databases was performed, searching for the keywords age-related macular degeneration, AMD, pharmacoeconomics, risk factors, VEGF, prevention, genetics, and their compound phrases. The search was limited to articles published from 1985 to date. All returned articles were carefully screened, and their references were manually reviewed for additional relevant data. The webpage www.clinicaltrials.gov was also accessed in search of relevant research trials. A total of 366 articles were reviewed, including 64 additional articles extracted from the references and 25 webpages and online databases from different institutions. In the end, 244 references were included in this review. Age-related macular degeneration is a complex multifactorial disease with an uneven manifestation around the world but one common denominator: it is increasing and spreading. The economic burden that this disease poses in developed nations will increase in the coming years. Effective preventive therapies need to be developed in the near future.
Development of a biomarkers database for the National Children's Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobdell, Danelle T.; Mendola, Pauline
The National Children's Study (NCS) is a federally-sponsored, longitudinal study of environmental influences on the health and development of children across the United States (www.nationalchildrensstudy.gov). Current plans are to study approximately 100,000 children and their families beginning before birth up to age 21 years. To explore potential biomarkers that could be important measurements in the NCS, we compiled the relevant scientific literature to identify both routine or standardized biological markers as well as new and emerging biological markers. Although the search criteria encouraged examination of factors that influence the breadth of child health and development, attention was primarily focused on exposure, susceptibility, and outcome biomarkers associated with four important child health outcomes: autism and neurobehavioral disorders, injury, cancer, and asthma. The Biomarkers Database was designed to allow users to: (1) search the biomarker records compiled by type of marker (susceptibility, exposure or effect), sampling media (e.g., blood, urine, etc.), and specific marker name; (2) search the citations file; and (3) read the abstract evaluations relative to our search criteria. A searchable, user-friendly database of over 2000 articles was created and is publicly available at: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=85844. PubMed was the primary source of references, with some additional searches of Toxline, NTIS, and other reference databases. Our initial focus was on review articles, beginning as early as 1996, supplemented with searches of the recent primary research literature from 2001 to 2003. We anticipate this database will have applicability for the NCS as well as other studies of children's environmental health.
Hunter, Susan B.; Vauterin, Paul; Lambert-Fair, Mary Ann; Van Duyne, M. Susan; Kubota, Kristy; Graves, Lewis; Wrigley, Donna; Barrett, Timothy; Ribot, Efrain
2005-01-01
The PulseNet National Database, established by the Centers for Disease Control and Prevention in 1996, consists of pulsed-field gel electrophoresis (PFGE) patterns obtained from isolates of food-borne pathogens (currently Escherichia coli O157:H7, Salmonella, Shigella, and Listeria) and textual information about the isolates. Electronic images and accompanying text are submitted from over 60 U.S. public health and food regulatory agency laboratories. The PFGE patterns are generated according to highly standardized PFGE protocols. Normalization and accurate comparison of gel images require the use of a well-characterized size standard in at least three lanes of each gel. Originally, a well-characterized strain of each organism was chosen as the reference standard for that particular database. The increasing number of databases, difficulty in identifying an organism-specific standard for each database, the increased range of band sizes generated by the use of additional restriction endonucleases, and the maintenance of many different organism-specific strains encouraged us to search for a more versatile and universal DNA size marker. A Salmonella serotype Braenderup strain (H9812) was chosen as the universal size standard. This strain was subjected to rigorous testing in our laboratories to ensure that it met the desired criteria, including coverage of a wide range of DNA fragment sizes, even distribution of bands, and stability of the PFGE pattern. The strategy used to convert and compare data generated by the new and old reference standards is described. PMID:15750058
Mouse IDGenes: a reference database for genetic interactions in the developing mouse brain.
Matthes, Michaela; Preusse, Martin; Zhang, Jingzhong; Schechter, Julia; Mayer, Daniela; Lentes, Bernd; Theis, Fabian; Prakash, Nilima; Wurst, Wolfgang; Trümbach, Dietrich
2014-01-01
The study of developmental processes in the mouse and other vertebrates includes the understanding of patterning along the anterior-posterior, dorsal-ventral and medial-lateral axes. Specifically, neural development is also of great clinical relevance because several human neuropsychiatric disorders, such as schizophrenia, autism disorders or drug addiction, and also brain malformations are thought to have neurodevelopmental origins, i.e. pathogenesis initiates during childhood and adolescence. Impacts during early neurodevelopment might also predispose to late-onset neurodegenerative disorders, such as Parkinson's disease. The neural tube develops from its precursor tissue, the neural plate, in a patterning process that is determined by compartmentalization into morphogenetic units, the action of local signaling centers, and a well-defined and locally restricted expression of genes and their interactions. While public databases provide gene expression data with spatio-temporal resolution, they usually neglect the genetic interactions that govern neural development. Here, we introduce Mouse IDGenes, a reference database for genetic interactions in the developing mouse brain. The database is highly curated and offers detailed information about gene expression and the genetic interactions at the developing mid-/hindbrain boundary. To showcase the predictive power of interaction data, we infer new Wnt/β-catenin target genes by machine learning and validate one of them experimentally. The database is updated regularly. Moreover, it can easily be extended by the research community. Mouse IDGenes will contribute as an important resource to research on mouse brain development, not exclusively by offering data retrieval, but also by allowing data input. http://mouseidgenes.helmholtz-muenchen.de. © The Author(s) 2014. Published by Oxford University Press.
Database citation in full text biomedical articles.
Kafkas, Şenay; Kim, Jee-Hyub; McEntyre, Johanna R
2013-01-01
Molecular biology and literature databases represent essential infrastructure for life science research. Effective integration of these data resources requires that there are structured cross-references at the level of individual articles and biological records. Here, we describe the current patterns of how database entries are cited in research articles, based on analysis of the full text Open Access articles available from Europe PMC. Focusing on citation of entries in the European Nucleotide Archive (ENA), UniProt and Protein Data Bank, Europe (PDBe), we demonstrate that text mining doubles the number of structured annotations of database record citations supplied in journal articles by publishers. Many thousands of new literature-database relationships are found by text mining, since these relationships are also not present in the set of articles cited by database records. We recommend that structured annotation of database records in articles is extended to other databases, such as ArrayExpress and Pfam, entries from which are also cited widely in the literature. The very high precision and high-throughput of this text-mining pipeline makes this activity possible both accurately and at low cost, which will allow the development of new integrated data services.
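A toy version of the accession-spotting step in such a text-mining pipeline might look like the sketch below. The patterns are deliberately simplified assumptions, far looser than the real accession grammars used by Europe PMC.

```python
import re

# Illustrative patterns only -- real accession formats are more varied
# and real pipelines use surrounding context, not bare regexes.
ACCESSION_PATTERNS = {
    "PDBe": re.compile(r"\b[1-9][a-z0-9]{3}\b", re.IGNORECASE),
    "UniProt": re.compile(r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b"),
    "ENA": re.compile(r"\b[A-Z]{1,2}\d{5,6}\b"),
}

def find_database_citations(text):
    """Scan article text for candidate database accessions, returning a
    dict of database name -> sorted unique matches."""
    hits = {}
    for db, pattern in ACCESSION_PATTERNS.items():
        found = sorted(set(pattern.findall(text)))
        if found:
            hits[db] = found
    return hits
```

Note the deliberate ambiguity: a UniProt-style identifier can also match the toy ENA pattern, which is one reason production pipelines combine pattern matching with nearby database mentions and validation against the live resource to achieve the high precision reported above.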
Team X Spacecraft Instrument Database Consolidation
NASA Technical Reports Server (NTRS)
Wallenstein, Kelly A.
2005-01-01
In the past decade, many changes have been made to Team X's process of designing each spacecraft, with the purpose of making the overall procedure more efficient over time. One such improvement is the use of information databases from previous missions, designs, and research. By referring to these databases, members of the design team can locate relevant instrument data and significantly reduce the total time they spend on each design. The files in these databases were stored in several different formats with various levels of accuracy. During the past 2 months, efforts have been made in an attempt to combine and organize these files. The main focus was in the Instruments department, where spacecraft subsystems are designed based on mission measurement requirements. A common database was developed for all instrument parameters using Microsoft Excel to minimize the time and confusion experienced when searching through files stored in several different formats and locations. By making this collection of information more organized, the files within them have become more easily searchable. Additionally, the new Excel database offers the option of importing its contents into a more efficient database management system in the future. This potential for expansion enables the database to grow and acquire more search features as needed.
Nursing Reference Center: a point-of-care resource.
Vardell, Emily; Paulaitis, Gediminas Geddy
2012-01-01
Nursing Reference Center is a point-of-care resource designed for the practicing nurse, as well as nursing administrators, nursing faculty, and librarians. Users can search across multiple resources, including topical Quick Lessons, evidence-based care sheets, patient education materials, practice guidelines, and more. Additional features include continuing education modules, e-books, and a new iPhone application. A sample search and comparison with similar databases were conducted.
Using pseudoalignment and base quality to accurately quantify microbial community composition
Novembre, John
2018-01-01
Pooled DNA from multiple unknown organisms arises in a variety of contexts, for example microbial samples from ecological or human health research. Determining the composition of pooled samples can be difficult, especially at the scale of modern sequencing data and reference databases. Here we propose a novel method for taxonomic profiling in pooled DNA that combines the speed and low-memory requirements of k-mer based pseudoalignment with a likelihood framework that uses base quality information to better resolve multiply mapped reads. We apply the method to the problem of classifying 16S rRNA reads using a reference database of known organisms, a common challenge in microbiome research. Using simulations, we show the method is accurate across a variety of read lengths, with different length reference sequences, at different sample depths, and when samples contain reads originating from organisms absent from the reference. We also assess performance in real 16S data, where we reanalyze previous genetic association data to show our method discovers a larger number of quantitative trait associations than other widely used methods. We implement our method in the software Karp, for k-mer based analysis of read pools, to provide a novel combination of speed and accuracy that is uniquely suited for enhancing discoveries in microbial studies. PMID:29659582
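The core idea above — weighting each matched or mismatched base by its Phred-scaled error probability, so that low-quality mismatches are forgiven when resolving multiply mapped reads — can be sketched as follows. This is a simplified illustration under an ungapped-alignment assumption, not Karp's actual implementation.

```python
import math

def read_log_likelihood(read, quals, ref):
    """Log-likelihood that `read` originated from `ref`, given Phred
    base qualities `quals` (assumes the read is already aligned, ungapped)."""
    log_l = 0.0
    for base, q, ref_base in zip(read, quals, ref):
        err = 10.0 ** (-q / 10.0)                 # Phred score -> error probability
        p = (1.0 - err) if base == ref_base else err / 3.0
        log_l += math.log(p)
    return log_l

def assign_read(read, quals, references):
    """Assign the read to the candidate reference with the highest likelihood."""
    return max(references,
               key=lambda name: read_log_likelihood(read, quals, references[name]))
```

A mismatch at a high-quality base (say Q40) costs far more likelihood than the same mismatch at a low-quality base (Q10), which is how base quality disambiguates references that differ at only a few positions.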