Normative Databases for Imaging Instrumentation.
Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray
2015-08-01
To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time, is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments. PMID:25265003
National Institute of Standards and Technology Data Gateway
SRD 60 NIST ITS-90 Thermocouple Database (Web, free access) Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).
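The ITS-90 reference functions in databases like SRD 60 express emf as a polynomial in temperature. A rough sketch of how such a reference function is evaluated follows; the coefficients below are invented placeholders for illustration, not the published ITS-90 values for any thermocouple type:

```python
def emf_from_temperature(t_celsius, coeffs):
    """Evaluate a polynomial reference function emf = sum(c_i * t**i).

    ITS-90 thermocouple reference functions have this polynomial form
    (type K additionally carries an exponential term above 0 degrees C).
    """
    emf = 0.0
    for i, c in enumerate(coeffs):
        emf += c * t_celsius ** i
    return emf

# Illustrative (non-ITS-90) coefficients for demonstration only.
demo_coeffs = [0.0, 4.0e-2, 2.5e-5]
print(emf_from_temperature(100.0, demo_coeffs))  # emf in mV at 100 degrees C
```

In practice one would look up the published coefficient table for the thermocouple type and temperature sub-range rather than supply coefficients by hand.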
USDA-ARS's Scientific Manuscript database
The sodium concentration (mg/100g) for 23 of 125 Sentinel Foods were identified in the 2009 CDC Packaged Food Database (PFD) and compared with data in the USDA’s 2013 Standard Reference 26 (SR 26) database. Sentinel Foods are foods and beverages identified by USDA to be monitored as primary indicat...
USDA National Nutrient Database for Standard Reference, release 28
USDA-ARS's Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference, Release 28 contains data for nearly 8,800 food items for up to 150 food components. SR28 replaces the previous release, SR27, originally issued in August 2014. Data in SR28 supersede values in the printed handbooks and previous electronic...
USDA National Nutrient Database for Standard Reference, Release 25
USDA-ARS's Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference, Release 25(SR25)contains data for over 8,100 food items for up to 146 food components. It replaces the previous release, SR24, issued in September 2011. Data in SR25 supersede values in the printed handbooks and previous electronic releas...
USDA National Nutrient Database for Standard Reference, Release 24
USDA-ARS's Scientific Manuscript database
The USDA Nutrient Database for Standard Reference, Release 24 contains data for over 7,900 food items for up to 146 food components. It replaces the previous release, SR23, issued in September 2010. Data in SR24 supersede values in the printed Handbooks and previous electronic releases of the databa...
USDA-ARS's Scientific Manuscript database
USDA National Nutrient Database for Standard Reference Dataset for What We Eat In America, NHANES (Survey-SR) provides the nutrient data for assessing dietary intakes from the national survey What We Eat In America, National Health and Nutrition Examination Survey (WWEIA, NHANES). The current versi...
Dimai, Hans P
2017-11-01
Dual-energy X-ray absorptiometry (DXA) is a two-dimensional imaging technology developed to assess bone mineral density (BMD) of the entire human skeleton, and specifically of the skeletal sites known to be most vulnerable to fracture. To simplify the interpretation of BMD measurement results and allow comparability among different DXA devices, the T-score concept was introduced: an individual's BMD is compared with the mean value of a young healthy reference population, with the difference expressed in standard deviations (SDs). Since the early 1990s, the diagnostic categories "normal", "osteopenia", and "osteoporosis", as recommended by a WHO working group, have been based on this concept, and DXA remains the globally accepted gold-standard method for the noninvasive diagnosis of osteoporosis. Another score obtained from DXA measurement, the Z-score, describes the number of SDs by which an individual's BMD differs from the mean value expected for age and sex. Although not intended for the diagnosis of osteoporosis in adults, it nevertheless provides information about an individual's fracture risk relative to peers. DXA measurement can be used either as a stand-alone means of assessing an individual's fracture risk or as an input to one of the available fracture risk assessment tools such as FRAX® or Garvan, improving the predictive power of such tools. The question of which reference databases DXA-device manufacturers should use for T-score reference standards was recently addressed by an expert group, which recommended use of the National Health and Nutrition Examination Survey III (NHANES III) database for the hip reference standard but manufacturers' own databases for the lumbar spine. Furthermore, in men it is recommended to use female reference databases for calculation of the T-score and male reference databases for calculation of the Z-score. Copyright © 2017 Elsevier Inc. All rights reserved.
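The T-score and Z-score described above are both simple standardizations of a measured BMD against a reference database. A minimal sketch follows; the reference means and SDs used in the example are hypothetical, not values from any actual DXA database:

```python
def t_score(bmd, young_ref_mean, young_ref_sd):
    """T-score: SDs from the mean of a young healthy reference population."""
    return (bmd - young_ref_mean) / young_ref_sd

def z_score(bmd, age_sex_ref_mean, age_sex_ref_sd):
    """Z-score: SDs from the mean expected for the individual's age and sex."""
    return (bmd - age_sex_ref_mean) / age_sex_ref_sd

def who_category(t):
    """WHO diagnostic categories based on the T-score."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

# Hypothetical example: BMD of 0.85 g/cm2 against a hypothetical
# young-reference mean of 1.00 g/cm2 (SD 0.10 g/cm2).
t = t_score(0.85, 1.00, 0.10)
print(round(t, 2), who_category(t))  # -1.5 osteopenia
```

This also makes concrete why the choice of reference database matters: the same measured BMD can land in a different diagnostic category if the reference mean or SD changes.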
USDA-ARS's Scientific Manuscript database
Many species of wild game and fish that are legal to hunt or catch do not have nutrition information in the USDA National Nutrient Database for Standard Reference (SR). Among those species that lack nutrition information are brook trout. The research team worked with the Nutrient Data Laboratory wit...
A Partnership for Public Health: USDA Branded Food Products Database
USDA-ARS's Scientific Manuscript database
The importance of comprehensive food composition databases is more critical than ever in helping to address global food security. The USDA National Nutrient Database for Standard Reference is the “gold standard” for food composition databases. The presentation will include new developments in stren...
National Vulnerability Database (NVD)
National Institute of Standards and Technology Data Gateway
National Vulnerability Database (NVD) (Web, free access) NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard.
GMOMETHODS: the European Union database of reference methods for GMO analysis.
Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre
2012-01-01
In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis, we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with a European Union legislative act. The web application provides search capabilities to retrieve primer and probe sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis.
Hunter, Susan B.; Vauterin, Paul; Lambert-Fair, Mary Ann; Van Duyne, M. Susan; Kubota, Kristy; Graves, Lewis; Wrigley, Donna; Barrett, Timothy; Ribot, Efrain
2005-01-01
The PulseNet National Database, established by the Centers for Disease Control and Prevention in 1996, consists of pulsed-field gel electrophoresis (PFGE) patterns obtained from isolates of food-borne pathogens (currently Escherichia coli O157:H7, Salmonella, Shigella, and Listeria) and textual information about the isolates. Electronic images and accompanying text are submitted from over 60 U.S. public health and food regulatory agency laboratories. The PFGE patterns are generated according to highly standardized PFGE protocols. Normalization and accurate comparison of gel images require the use of a well-characterized size standard in at least three lanes of each gel. Originally, a well-characterized strain of each organism was chosen as the reference standard for that particular database. The increasing number of databases, difficulty in identifying an organism-specific standard for each database, the increased range of band sizes generated by the use of additional restriction endonucleases, and the maintenance of many different organism-specific strains encouraged us to search for a more versatile and universal DNA size marker. A Salmonella serotype Braenderup strain (H9812) was chosen as the universal size standard. This strain was subjected to rigorous testing in our laboratories to ensure that it met the desired criteria, including coverage of a wide range of DNA fragment sizes, even distribution of bands, and stability of the PFGE pattern. The strategy used to convert and compare data generated by the new and old reference standards is described. PMID:15750058
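Normalization against the size-standard lanes amounts to mapping each band's migration position onto the known fragment sizes of the reference strain. A minimal sketch of that idea, interpolating linearly in log(size) between standard bands; the positions and sizes below are made up for illustration, not data from the H9812 standard:

```python
import math

def calibrate(standard_positions, standard_sizes_kb):
    """Return a function mapping a migration position (pixels) to an
    estimated fragment size (kb), interpolating linearly in log(size)
    between the bands of the size-standard lane."""
    pairs = sorted(zip(standard_positions, standard_sizes_kb))
    xs = [p for p, _ in pairs]
    ys = [math.log(s) for _, s in pairs]

    def size_at(pos):
        # Clamp positions outside the calibrated range.
        if pos <= xs[0]:
            return math.exp(ys[0])
        if pos >= xs[-1]:
            return math.exp(ys[-1])
        for i in range(1, len(xs)):
            if pos <= xs[i]:
                frac = (pos - xs[i - 1]) / (xs[i] - xs[i - 1])
                return math.exp(ys[i - 1] + frac * (ys[i] - ys[i - 1]))

    return size_at

# Made-up standard lane: larger fragments migrate less (smaller position).
positions = [100, 200, 300, 400]          # pixels
sizes_kb = [600.0, 300.0, 150.0, 75.0]    # kb
size_at = calibrate(positions, sizes_kb)
print(round(size_at(250), 1))  # a size between 300 and 150 kb
```

Running the same calibration per gel, from the standard lanes of that gel, is what makes patterns from different laboratories comparable.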
USDA Branded Food Products Database, Release 2
USDA-ARS's Scientific Manuscript database
The USDA Branded Food Products Database is the ongoing result of a Public-Private Partnership (PPP), whose goal is to enhance public health and the sharing of open data by complementing the USDA National Nutrient Database for Standard Reference (SR) with nutrient composition of branded foods and pri...
Parson, W; Gusmão, L; Hares, D R; Irwin, J A; Mayr, W R; Morling, N; Pokorak, E; Prinz, M; Salas, A; Schneider, P M; Parsons, T J
2014-11-01
The DNA Commission of the International Society of Forensic Genetics (ISFG) regularly publishes guidelines and recommendations concerning the application of DNA polymorphisms to the question of human identification. Previous recommendations published in 2000 addressed the analysis and interpretation of mitochondrial DNA (mtDNA) in forensic casework. While the foundations set forth in the earlier recommendations still apply, new approaches to the quality control, alignment and nomenclature of mitochondrial sequences, as well as the establishment of mtDNA reference population databases, have been developed. Here, we describe these developments and discuss their application to both mtDNA casework and mtDNA reference population databasing applications. While the generation of mtDNA for forensic casework has always been guided by specific standards, it is now well-established that data of the same quality are required for the mtDNA reference population data used to assess the statistical weight of the evidence. As a result, we introduce guidelines regarding sequence generation, as well as quality control measures based on the known worldwide mtDNA phylogeny, that can be applied to ensure the highest quality population data possible. For both casework and reference population databasing applications, the alignment and nomenclature of haplotypes is revised here and the phylogenetic alignment proffered as acceptable standard. In addition, the interpretation of heteroplasmy in the forensic context is updated, and the utility of alignment-free database searches for unbiased probability estimates is highlighted. Finally, we discuss statistical issues and define minimal standards for mtDNA database searches. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Aronson, Jeffrey K
2016-01-01
Objective To examine how misspellings of drug names could impede searches for published literature. Design Database review. Data source PubMed. Review methods The study included 30 drug names that are commonly misspelt on prescription charts in hospitals in Birmingham, UK (test set), and 30 control names randomly chosen from a hospital formulary (control set). The following definitions were used: standard names—the international non-proprietary names, variant names—deviations in spelling from standard names that are not themselves standard names in English language nomenclature, and hidden reference variants—variant spellings that identified publications in textword (tw) searches of PubMed or other databases, and which were not identified by textword searches for the standard names. Variant names were generated from standard names by applying letter substitutions, omissions, additions, transpositions, duplications, deduplications, and combinations of these. Searches were carried out in PubMed (30 June 2016) for “standard name[tw]” and “variant name[tw] NOT standard name[tw].” Results The 30 standard names of drugs in the test set gave 325 979 hits in total, and 160 hidden reference variants gave 3872 hits (1.17%). The standard names of the control set gave 470 064 hits, and 79 hidden reference variants gave 766 hits (0.16%). Letter substitutions (particularly i to y and vice versa) and omissions together accounted for 2924 (74%) of the variants. Amitriptyline (8530 hits) yielded 18 hidden reference variants (179 (2.1%) hits). Names ending in “in,” “ine,” or “micin” were commonly misspelt. Failing to search for hidden reference variants of “gentamicin,” “amitriptyline,” “mirtazapine,” and “trazodone” would miss at least 19 systematic reviews. A hidden reference variant related to Christmas, “No-el”, was rare; variants of “X-miss” were rarer. Conclusion When performing searches, researchers should include misspellings of drug names among their search terms. 
PMID:27974346
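The variant-generation procedure described above (letter substitutions, omissions, additions, transpositions, and duplications) is mechanical and easy to sketch. The following is an illustration of single-edit variant generation, not the authors' actual code:

```python
def spelling_variants(name, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Generate single-edit spelling variants of a drug name:
    substitutions, omissions, additions, transpositions, duplications."""
    name = name.lower()
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])            # omission
        variants.add(name[:i] + name[i] + name[i:])      # duplication
        for c in alphabet:                               # substitution
            variants.add(name[:i] + c + name[i + 1:])
    for i in range(len(name) - 1):                       # transposition
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    for i in range(len(name) + 1):                       # addition
        for c in alphabet:
            variants.add(name[:i] + c + name[i:])
    variants.discard(name)  # the standard spelling is not a variant
    return variants

v = spelling_variants("amitriptyline")
print("amytriptyline" in v, "amitryptyline" in v)  # i->y substitutions
```

Each variant would then be checked as a `variant[tw] NOT standard[tw]` PubMed query to find hidden reference variants, along the lines the abstract describes.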
[Establishment of database with standard 3D tooth crowns based on 3DS MAX].
Cheng, Xiaosheng; An, Tao; Liao, Wenhe; Dai, Ning; Yu, Qing; Lu, Peijun
2009-08-01
A database of standard 3D tooth crowns lays the groundwork for dental CAD/CAM systems. In this paper, we design standard tooth crowns in 3DS MAX 9.0 and successfully create a database of these models. First, key lines are collected from standard tooth pictures. We then use 3DS MAX 9.0 to design the digital tooth model based on these lines; during the design process, it is important to refer to the standard plaster tooth model. Testing shows that the standard tooth models designed with this method are accurate and adaptable; furthermore, operations such as deformation and translation are easy to perform on the models. This method provides a new approach to building a database of standard 3D tooth crowns and a basis for dental CAD/CAM systems.
National Institute of Standards and Technology Data Gateway
NIST Scoring Package (PC database for purchase) The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.
NASA Astrophysics Data System (ADS)
Aschonitis, Vassilis G.; Papamichail, Dimitris; Demertzi, Kleoniki; Colombani, Nicolo; Mastrocicco, Micol; Ghirardini, Andrea; Castaldelli, Giuseppe; Fano, Elisa-Anna
2017-08-01
The objective of the study is to provide global grids (0.5°) of revised annual coefficients for the Priestley-Taylor (P-T) and Hargreaves-Samani (H-S) evapotranspiration methods after calibration based on the ASCE (American Society of Civil Engineers)-standardized Penman-Monteith method (the ASCE method includes two reference crops: short-clipped grass and tall alfalfa). The analysis also includes the development of a global grid of revised annual coefficients for solar radiation (Rs) estimations using the respective Rs formula of H-S. The analysis was based on global gridded climatic data of the period 1950-2000. The method for deriving annual coefficients of the P-T and H-S methods was based on partial weighted averages (PWAs) of their mean monthly values. This method estimates the annual values considering the amplitude of the parameter under investigation (ETo and Rs) giving more weight to the monthly coefficients of the months with higher ETo values (or Rs values for the case of the H-S radiation formula). The method also eliminates the effect of unreasonably high or low monthly coefficients that may occur during periods where ETo and Rs fall below a specific threshold. The new coefficients were validated based on data from 140 stations located in various climatic zones of the USA and Australia with expanded observations up to 2016. The validation procedure for ETo estimations of the short reference crop showed that the P-T and H-S methods with the new revised coefficients outperformed the standard methods reducing the estimated root mean square error (RMSE) in ETo values by 40 and 25 %, respectively. The estimations of Rs using the H-S formula with revised coefficients reduced the RMSE by 28 % in comparison to the standard H-S formula. 
Finally, a raster database was built consisting of (a) global maps for the mean monthly ETo values estimated by ASCE-standardized method for both reference crops, (b) global maps for the revised annual coefficients of the P-T and H-S evapotranspiration methods for both reference crops and a global map for the revised annual coefficient of the H-S radiation formula and (c) global maps that indicate the optimum locations for using the standard P-T and H-S methods and their possible annual errors based on reference values. The database can support estimations of ETo and solar radiation for locations where climatic data are limited and it can support studies which require such estimations on larger scales (e.g. country, continent, world). The datasets produced in this study are archived in the PANGAEA database (https://doi.org/10.1594/PANGAEA.868808) and in the ESRN database (http://www.esrn-database.org or http://esrn-database.weebly.com).
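The partial weighted average (PWA) used above to derive annual coefficients gives more weight to the monthly coefficients of months with higher ETo and discards months below a threshold. A minimal sketch of one plausible reading of that scheme; the exact weighting and threshold handling are simplified assumptions, not the authors' precise formulation, and the monthly values are invented:

```python
def partial_weighted_average(monthly_coeffs, monthly_eto, threshold=0.0):
    """Annual coefficient as an ETo-weighted mean of monthly coefficients,
    ignoring months whose ETo falls below a threshold (where monthly
    coefficients can become unreasonably high or low)."""
    pairs = [(c, e) for c, e in zip(monthly_coeffs, monthly_eto) if e > threshold]
    total = sum(e for _, e in pairs)
    return sum(c * e for c, e in pairs) / total

# Invented monthly Priestley-Taylor-style coefficients and ETo (mm/day).
coeffs = [1.9, 1.6, 1.4, 1.3, 1.25, 1.2, 1.2, 1.25, 1.3, 1.4, 1.7, 2.1]
eto =    [0.5, 0.8, 1.5, 2.5, 4.0, 5.5, 6.0, 5.5, 3.5, 2.0, 1.0, 0.4]
print(round(partial_weighted_average(coeffs, eto, threshold=0.6), 3))
```

Note how the high winter coefficients (1.9 and 2.1) barely move the annual value, because the months that drive annual ETo dominate the weights.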
RefPrimeCouch—a reference gene primer CouchApp
Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus
2013-01-01
To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html PMID:24368831
Turk, Gregory C; Sharpless, Katherine E; Cleveland, Danielle; Jongsma, Candice; Mackey, Elizabeth A; Marlow, Anthony F; Oflaz, Rabia; Paul, Rick L; Sieber, John R; Thompson, Robert Q; Wood, Laura J; Yu, Lee L; Zeisler, Rolf; Wise, Stephen A; Yen, James H; Christopher, Steven J; Day, Russell D; Long, Stephen E; Greene, Ella; Harnly, James; Ho, I-Pin; Betz, Joseph M
2013-01-01
Standard Reference Material 3280 Multivitamin/Multielement Tablets was issued by the National Institute of Standards and Technology in 2009, and has certified and reference mass fraction values for 13 vitamins, 26 elements, and two carotenoids. Elements were measured using two or more analytical methods at NIST with additional data contributed by collaborating laboratories. This reference material is expected to serve a dual purpose: to provide quality assurance in support of a database of dietary supplement products and to provide a means for analysts, dietary supplement manufacturers, and researchers to assess the appropriateness and validity of their analytical methods and the accuracy of their results.
MARC and Relational Databases.
ERIC Educational Resources Information Center
Llorens, Jose; Trenor, Asuncion
1993-01-01
Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)
Pathology Imagebase-a reference image database for standardization of pathology.
Egevad, Lars; Cheville, John; Evans, Andrew J; Hörnblad, Jonas; Kench, James G; Kristiansen, Glen; Leite, Katia R M; Magi-Galluzzi, Cristina; Pan, Chin-Chen; Samaratunga, Hemamali; Srigley, John R; True, Lawrence; Zhou, Ming; Clements, Mark; Delahunt, Brett
2017-11-01
Despite efforts to standardize histopathology practice through the development of guidelines, the interpretation of morphology is still hampered by subjectivity. We here describe Pathology Imagebase, a novel mechanism for establishing an international standard for the interpretation of pathology specimens. The International Society of Urological Pathology (ISUP) established a reference image database through the input of experts in the field. Three panels were formed, one each for prostate, urinary bladder and renal pathology, consisting of 24 international experts. Each of the panel members uploaded microphotographs of cases into a non-public database. The remaining 23 experts were asked to vote from a multiple-choice menu. Prior to and while voting, panel members were unable to access the results of voting by the other experts. When a consensus level of at least two-thirds or 16 votes was reached, cases were automatically transferred to the main database. Consensus was reached in a total of 287 cases across five projects on the grading of prostate, bladder and renal cancer and the classification of renal tumours and flat lesions of the bladder. The full database is available to all ISUP members at www.isupweb.org. Non-members may access a selected number of cases. It is anticipated that the database will assist pathologists in calibrating their grading, and will also promote consistency in the diagnosis of difficult cases. © 2017 John Wiley & Sons Ltd.
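The consensus rule described above (at least two-thirds of the 24-member panel, i.e. 16 of the 23 voting experts) is a simple threshold check. A minimal sketch; the vote tallies are invented:

```python
from collections import Counter

def consensus(votes, panel_size=24):
    """Return the winning option if it reaches two-thirds of the full
    panel (16 of 24, i.e. 16 of the 23 voting members), else None."""
    threshold = -(-2 * panel_size // 3)  # ceil(2/3 * panel_size) = 16
    option, count = Counter(votes).most_common(1)[0]
    return option if count >= threshold else None

# Invented tallies for a grading case voted on by 23 experts.
votes = ["Gleason 3+4"] * 17 + ["Gleason 4+3"] * 6
print(consensus(votes))  # reaches the 16-vote threshold
```

Cases that never reach the threshold simply remain outside the main database, which is what keeps the published reference set restricted to cases with genuine expert agreement.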
National Nutrient Database for Standard Reference - Find Nutrient Value of Common Foods by Nutrient
NED and SIMBAD Conventions for Bibliographic Reference Coding
NASA Technical Reports Server (NTRS)
Schmitz, M.; Helou, G.; Dubois, P.; LaGue, C.; Madore, B.; Corwin, H. G., Jr.; Lesteven, S.
1995-01-01
The primary purpose of the 'reference code' is to provide a unique and traceable representation of a bibliographic reference within the structure of each database. The code is used frequently in the interfaces as a succinct abbreviation of a full bibliographic reference. Since its inception, it has become a standard code not only for NED and SIMBAD, but also for other bibliographic services.
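The reference code in question (now widely known as the ADS "bibcode") is a fixed-width 19-character string: a 4-digit year, a 5-character journal abbreviation, 4 characters of volume, a 1-character qualifier, 4 characters of page, and the first author's initial, with dots as padding. A minimal formatter under that layout (a sketch of the convention, not NED's or SIMBAD's actual implementation):

```python
def make_refcode(year, journal, volume, page, author_initial, qualifier="."):
    """Build a 19-character reference code: YYYYJJJJJVVVVMPPPPA.
    Journal is left-justified; volume and page are right-justified;
    unused positions are padded with dots."""
    code = (
        f"{year:04d}"
        + journal.ljust(5, ".")
        + str(volume).rjust(4, ".")
        + qualifier
        + str(page).rjust(4, ".")
        + author_initial.upper()
    )
    assert len(code) == 19, "reference codes are fixed-width"
    return code

# ApJ volume 500, page 525, first author initial S:
print(make_refcode(1998, "ApJ", 500, 525, "S"))  # 1998ApJ...500..525S
```

The fixed width is the point: the code is unique, sortable, and parseable by position, which is what lets multiple databases exchange bibliographic references reliably.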
The quest for the perfect gravity anomaly: Part 1 - New calculation standards
Li, X.; Hildenbrand, T.G.; Hinze, W. J.; Keller, Gordon R.; Ravat, D.; Webring, M.
2006-01-01
The North American gravity database, together with the national databases of Canada, Mexico, and the United States, is being revised to improve coverage, versatility, and accuracy. An important part of this effort is the revision of procedures and standards for calculating gravity anomalies, taking into account our enhanced computational power, modern satellite-based positioning technology, improved terrain databases, and increased interest in more accurately defining different anomaly components. The most striking revision is the use of a single internationally accepted reference ellipsoid for the horizontal and vertical datums of gravity stations as well as for the computation of theoretical gravity. The new standards hardly impact the interpretation of local anomalies, but do improve regional anomalies. Most importantly, such new standards can be applied consistently to gravity database compilations for nations, continents, and even the entire world. © 2005 Society of Exploration Geophysicists.
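Adopting a single reference ellipsoid standardizes the theoretical (normal) gravity term subtracted from each observation. A sketch using Somigliana's closed-form formula with GRS80 constants; the constants are quoted from memory and should be verified against the published standard before any real use:

```python
import math

# GRS80 ellipsoid constants (quoted from memory; verify before use).
GAMMA_E = 9.7803267715      # normal gravity at the equator, m/s^2
K = 0.001931851353          # Somigliana's constant
E2 = 0.00669438002290       # first eccentricity squared

def normal_gravity(latitude_deg):
    """Somigliana's formula for theoretical gravity on the ellipsoid:
    gamma = gamma_e * (1 + k * sin^2(phi)) / sqrt(1 - e^2 * sin^2(phi))."""
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    return GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)

print(normal_gravity(0.0), normal_gravity(90.0))  # equator vs pole, m/s^2
```

Because the formula is closed-form in latitude, every agency that adopts the same ellipsoid computes byte-for-byte comparable theoretical gravity, which is exactly the consistency the revised standards aim for.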
The Ins and Outs of USDA Nutrient Composition
USDA-ARS's Scientific Manuscript database
The USDA National Nutrient Database for Standard Reference (SR) is the major source of food composition data in the United States, providing the foundation for most food composition databases in the public and private sectors. Sources of data used in SR include analytical studies, food manufacturer...
Standardization of Keyword Search Mode
ERIC Educational Resources Information Center
Su, Di
2010-01-01
In spite of its popularity, keyword search mode has not been standardized. Though information professionals are quick to adapt to various presentations of keyword search mode, novice end-users may find keyword search confusing. This article compares keyword search mode in some major reference databases and calls for standardization. (Contains 3…
Example of monitoring measurements in a virtual eye clinic using 'big data'.
Jones, Lee; Bryan, Susan R; Miranda, Marco A; Crabb, David P; Kotecha, Aachal
2017-10-26
To assess the equivalence of measurement outcomes between patients attending a standard glaucoma care service, where patients see an ophthalmologist in a face-to-face setting, and a glaucoma monitoring service (GMS). The average mean deviation (MD) measurement on the visual field (VF) test for 250 patients attending a GMS were compared with a 'big data' repository of patients attending a standard glaucoma care service (reference database). In addition, the speed of VF progression between GMS patients and reference database patients was compared. Reference database patients were used to create expected outcomes that GMS patients could be compared with. For GMS patients falling outside of the expected limits, further analysis was carried out on the clinical management decisions for these patients. The average MD of patients in the GMS ranged from +1.6 dB to -18.9 dB between two consecutive appointments at the clinic. In the first analysis, 12 (4.8%; 95% CI 2.5% to 8.2%) GMS patients scored outside the 90% expected values based on the reference database. In the second analysis, 1.9% (95% CI 0.4% to 5.4%) GMS patients had VF changes outside of the expected 90% limits. Using 'big data' collected in the standard glaucoma care service, we found that patients attending a GMS have equivalent outcomes on the VF test. Our findings provide support for the implementation of virtual healthcare delivery in the hospital eye service. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
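Flagging patients whose measurements fall outside the central 90% of a reference distribution reduces to a percentile check against the 'big data' repository. A minimal sketch of that idea with an invented reference sample; this illustrates the general technique, not the paper's actual data or statistical method:

```python
def central_limits(reference_values, coverage=0.90):
    """5th and 95th percentile limits (for 90% coverage) of a reference
    sample, using linear interpolation between order statistics."""
    xs = sorted(reference_values)
    n = len(xs)

    def quantile(q):
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    tail = (1.0 - coverage) / 2.0
    return quantile(tail), quantile(1.0 - tail)

def outside_expected(value, reference_values, coverage=0.90):
    """True if a patient's value falls outside the expected limits."""
    low, high = central_limits(reference_values, coverage)
    return value < low or value > high

# Invented reference sample of visual-field MD changes (dB) between visits.
reference = [-1.0, -0.8, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2]
print(outside_expected(-0.45, reference), outside_expected(-2.0, reference))
```

Patients flagged this way are the ones who would warrant the kind of clinical-management review the study applied to its out-of-limits cases.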
Thematic Accuracy Assessment of the 2011 National Land Cover Database (NLCD)
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment o...
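The agreement statistics referred to above are conventionally computed from a confusion matrix of map labels against reference labels. A minimal sketch, using an invented 3x3 matrix rather than NLCD data:

```python
# Overall accuracy and Cohen's kappa from a confusion matrix of
# map labels (rows) versus reference labels (columns). Hypothetical counts.
matrix = [
    [50, 3, 2],   # e.g. class 1 mapped as class 1 / 2 / 3
    [4, 40, 1],
    [1, 2, 30],
]

n = sum(sum(row) for row in matrix)
overall = sum(matrix[i][i] for i in range(len(matrix))) / n

# Expected agreement by chance, from row and column marginals.
row_tot = [sum(row) for row in matrix]
col_tot = [sum(matrix[i][j] for i in range(len(matrix)))
           for j in range(len(matrix))]
p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)
kappa = (overall - p_e) / (1 - p_e)

print(round(overall, 3), round(kappa, 3))
```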
Update of NDL’s list of key foods based on the 2007-2008 WWEIA-NHANES
USDA-ARS?s Scientific Manuscript database
The Nutrient Data Laboratory is responsible for developing authoritative nutrient databases that contain a wide range of food composition values of the nation's food supply. This requires updating and revising the USDA Nutrient Database for Standard Reference (SR) and developing various special int...
Standards for Clinical Grade Genomic Databases.
Yohe, Sophia L; Carter, Alexis B; Pfeifer, John D; Crawford, James M; Cushman-Vokoun, Allison; Caughron, Samuel; Leonard, Debra G B
2015-11-01
Next-generation sequencing performed in a clinical environment must meet clinical standards, which requires reproducibility of all aspects of the testing. Clinical-grade genomic databases (CGGDs) are required to classify a variant and to assist in the professional interpretation of clinical next-generation sequencing. Applying quality laboratory standards to the reference databases used for sequence-variant interpretation presents a new challenge for validation and curation. To define CGGD and the categories of information contained in CGGDs and to frame recommendations for the structure and use of these databases in clinical patient care. Members of the College of American Pathologists Personalized Health Care Committee reviewed the literature and existing state of genomic databases and developed a framework for guiding CGGD development in the future. Clinical-grade genomic databases may provide different types of information. This work group defined 3 layers of information in CGGDs: clinical genomic variant repositories, genomic medical data repositories, and genomic medicine evidence databases. The layers are differentiated by the types of genomic and medical information contained and the utility in assisting with clinical interpretation of genomic variants. Clinical-grade genomic databases must meet specific standards regarding submission, curation, and retrieval of data, as well as the maintenance of privacy and security. These organizing principles for CGGDs should serve as a foundation for future development of specific standards that support the use of such databases for patient care.
GlycoRDF: an ontology to standardize glycomics data in RDF
Ranzinger, Rene; Aoki-Kinoshita, Kiyoko F.; Campbell, Matthew P.; Kawano, Shin; Lütteke, Thomas; Okuda, Shujiro; Shinmachi, Daisuke; Shikanai, Toshihide; Sawaki, Hiromichi; Toukach, Philip; Matsubara, Masaaki; Yamada, Issaku; Narimatsu, Hisashi
2015-01-01
Motivation: Over the last decades several glycomics-based bioinformatics resources and databases have been created and released to the public. Unfortunately, there is no common standard in the representation of the stored information or a common machine-readable interface allowing bioinformatics groups to easily extract and cross-reference the stored information. Results: An international group of bioinformatics experts in the field of glycomics have worked together to create a standard Resource Description Framework (RDF) representation for glycomics data, focused on glycan sequences and related biological source, publications and experimental data. This RDF standard is defined by the GlycoRDF ontology and will be used by database providers to generate common machine-readable exports of the data stored in their databases. Availability and implementation: The ontology, supporting documentation and source code used by database providers to generate standardized RDF are available online (http://www.glycoinfo.org/GlycoRDF/). Contact: rene@ccrc.uga.edu or kkiyoko@soka.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25388145
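The core idea, exporting database records as machine-readable RDF triples, can be sketched with a tiny stdlib-only N-Triples writer. The namespace, property names, and glycan record below are hypothetical placeholders, not actual GlycoRDF ontology terms:

```python
# Illustrative sketch of an RDF export: one triple per fact about a record,
# serialized in N-Triples syntax. All URIs and predicates are invented.
GLYCO = "http://example.org/glycordf#"   # placeholder namespace

def ntriple(subj, pred, obj, literal=False):
    """Serialize one triple in N-Triples syntax."""
    o = f'"{obj}"' if literal else f"<{obj}>"
    return f"<{subj}> <{pred}> {o} ."

record = {
    "id": "http://example.org/glycan/G001",
    "sequence": "Gal(b1-4)GlcNAc",                  # hypothetical glycan sequence
    "source": "http://example.org/taxon/9606",
}

triples = [
    ntriple(record["id"], GLYCO + "hasSequence", record["sequence"], literal=True),
    ntriple(record["id"], GLYCO + "hasBiologicalSource", record["source"]),
]
print("\n".join(triples))
```

A shared ontology means every provider emits the same predicates for the same kinds of facts, which is what makes cross-database queries possible.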
Jung, Bo Kyeung; Kim, Jeeyong; Cho, Chi Hyun; Kim, Ju Yeon; Nam, Myung Hyun; Shin, Bong Kyung; Rho, Eun Youn; Kim, Sollip; Sung, Heungsup; Kim, Shinyoung; Ki, Chang Seok; Park, Min Jung; Lee, Kap No; Yoon, Soo Young
2017-04-01
The National Health Information Standards Committee was established in 2004 in Korea. The practical subcommittee for laboratory test terminology was placed in charge of standardizing laboratory medicine terminology in Korean. We aimed to establish a standardized Korean laboratory terminology database, Korea-Logical Observation Identifier Names and Codes (K-LOINC), based on former products sponsored by this committee. The primary product was revised based on the opinions of specialists. Next, we mapped the electronic data interchange (EDI) codes that were revised in 2014 to the corresponding K-LOINC. We established a database of synonyms, including the laboratory codes of three reference laboratories and four tertiary hospitals in Korea. Furthermore, we supplemented the clinical microbiology section of K-LOINC using an alternative mapping strategy. We also examined other systems that utilize laboratory codes in order to assess the compatibility of K-LOINC with statistical standards for a number of tests. A total of 48,990 laboratory codes were adopted (21,539 new and 16,330 revised). All of the LOINC synonyms were translated into Korean, and 39,347 Korean synonyms were added. Moreover, 21,773 synonyms were added from reference laboratories and tertiary hospitals. Alternative strategies were established for mapping within the microbiology domain. When we applied these to a smaller hospital, the mapping rate was successfully increased. Finally, we confirmed K-LOINC compatibility with other statistical standards, including a newly proposed EDI code system. This project successfully established an up-to-date standardized Korean laboratory terminology database, as well as an updated EDI mapping to facilitate the introduction of standard terminology into institutions. © 2017 The Korean Academy of Medical Sciences.
NASA Astrophysics Data System (ADS)
Wolery, Thomas J.; Jové Colón, Carlos F.
2017-09-01
Chemical thermodynamic data remain a keystone for geochemical modeling and reactive transport simulation as applied to an increasing number of applications in the earth sciences, as well as applications in other areas including metallurgy, material science, and industrial process design. The last century has seen the development of a large body of thermodynamic data and a number of major compilations. The past several decades have seen the development of thermodynamic databases in digital form designed to support computer calculations. However, problems with thermodynamic data appear to be persistent. One problem pertains to the use of inconsistent primary key reference data. Such data pertain to elemental reference forms and key, stoichiometrically simple chemical species including metal oxides, CO2, water, and aqueous species such as Na+ and Cl-. A consistent set of primary key data (standard Gibbs energies, standard enthalpies, and standard entropies for key chemical species) for 298.15 K and 1 bar pressure is essential. Thermochemical convention is to define the standard Gibbs energy and the standard enthalpy of an individual chemical species in terms of formation from reference forms of the constituent chemical elements. We propose a formal concept of "links" to the elemental reference forms. This concept involves a documented understanding of all reactions and calculations leading to values for a formation property (standard Gibbs energy or enthalpy). A valid link consists of two parts: (a) the path of reactions and corrections and (b) the associated data, which are key data. Such a link differs from a bare "key" or "reference" datum in that it requires additional information. Some or all of its associated data may also be key data. In evaluating a reported thermodynamic datum, one should identify the links to the chemical elements, a process which can be time-consuming and which may lead to a dead end (an incomplete link). 
The use of two or more inconsistent links to the same elemental reference form in a thermodynamic database will result in an inconsistency in the database. Thus, in constructing a database, it is important to establish a set of reliable links (generally resulting in a set of primary reference data) and then correct all data adopted subsequently for consistency with that set. Recommended values of key data have not been constant through history. We review some of this history through the lens of major compilations and other influential reports, and note a number of problem areas. Finally, we illustrate the concepts developed in this paper by applying them to some key species of geochemical interest, including liquid water; quartz and aqueous silica; and gibbsite, corundum, and the aqueous aluminum ion.
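The consistency requirement described here, that every formation property trace back to one agreed primary key datum per elemental reference form, lends itself to a simple mechanical check. The species names and values below are illustrative, not evaluated thermodynamic data:

```python
# Sketch of a "link" consistency check: entries whose embedded key data
# disagree with the adopted primary key set signal an internal inconsistency.
primary_keys = {"O2(g)": 0.0, "H2(g)": 0.0, "quartz_dGf": -856.3}  # kJ/mol, illustrative

def check_links(entries):
    """Flag entries whose embedded key data disagree with the primary set."""
    problems = []
    for name, links in entries.items():
        for ref, value in links.items():
            if ref in primary_keys and abs(value - primary_keys[ref]) > 1e-6:
                problems.append((name, ref, value, primary_keys[ref]))
    return problems

entries = {
    "aqueous silica": {"quartz_dGf": -856.3},    # consistent link
    "some silicate": {"quartz_dGf": -856.64},    # stale key datum -> inconsistent
}
print(check_links(entries))
```

In practice the hard part is the documentation the authors emphasize: recovering which key data a published value actually embeds, so that the correction to the adopted set can be made at all.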
Data mining the EXFOR database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, David A.; Hirdt, John; Herman, Michal
2013-12-13
The EXFOR database contains the largest collection of experimental nuclear reaction data available as well as this data's bibliographic information and experimental details. We created an undirected graph from the EXFOR datasets with graph nodes representing single observables and graph links representing the connections of various types between these observables. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. Analysing this abstract graph, we are able to address very specific questions such as 1) what observables are being used as reference measurements by the experimental community? 2) are these observables given the attention needed by various standards organisations? 3) are there classes of observables that are not connected to these reference measurements? In addressing these questions, we propose several (mostly cross section) observables that should be evaluated and made into reaction reference standards.
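Question 3 above, finding observables with no path to any reference standard, is a connected-components problem on the undirected graph. A stdlib-only sketch with invented observable names:

```python
# Observables as graph nodes, measurement connections as undirected edges;
# observables not reachable from any reference standard are candidates for
# new evaluations. Edge data is invented for illustration.
from collections import deque

edges = [("n,el Fe-56", "U-235(n,f)"), ("n,g Au-197", "U-235(n,f)"),
         ("n,2n Pb-208", "n,2n Bi-209")]            # hypothetical observable pairs
standards = {"U-235(n,f)"}                           # reference standard node(s)

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def reachable(sources):
    """BFS from all source nodes at once; returns the set of reachable nodes."""
    seen, queue = set(sources), deque(sources)
    while queue:
        for nxt in graph.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

disconnected = set(graph) - reachable(standards)
print(sorted(disconnected))  # observables with no path to a reference standard
```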
Reference Manual for Machine-Readable Bibliographic Descriptions. Second Revised Edition.
ERIC Educational Resources Information Center
Dierickx, H., Ed.; Hopkinson, A., Ed.
A product of the UNISIST International Centre for Bibliographic Descriptions (UNIBIB), this reference manual presents a standardized communication format for the exchange of machine-readable bibliographic information between bibliographic databases or other types of bibliographic information services, including libraries. The manual is produced in…
Standardization of Terminology in Laboratory Medicine II
Lee, Kap No; Yoon, Jong-Hyun; Min, Won Ki; Lim, Hwan Sub; Song, Junghan; Chae, Seok Lae; Jang, Seongsoo; Ki, Chang-Seok; Bae, Sook Young; Kim, Jang Su; Kwon, Jung-Ah; Lee, Chang Kyu
2008-01-01
Standardization of medical terminology is essential in data transmission between health care institutes and in maximizing the benefits of information technology. The purpose of this study was to standardize medical terms for laboratory observations. During the second year of the study, a standard database of concept names for laboratory terms that covered those used in tertiary health care institutes and reference laboratories was developed. The laboratory terms in the Logical Observation Identifier Names and Codes (LOINC) database were adopted and matched with the electronic data interchange (EDI) codes in Korea. A public hearing and a workshop for clinical pathologists were held to collect the opinions of experts. The Korean standard laboratory terminology database, containing six axial concept names (component, property, time aspect, system/specimen, scale type, and method type), was established for 29,340 test observations. Short names and mapping tables for EDI codes and UMLS were added. Synonym tables were prepared to help match concept names to common terms used in the fields. We herein describe the Korean standard laboratory terminology database for test names, result description terms, and result units encompassing most of the laboratory tests in Korea. PMID:18756062
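The six-axis concept name described above follows the LOINC convention of a colon-delimited fully specified name. A small sketch; the example values spell out the well-known LOINC pattern for serum/plasma glucose, but the class itself is illustrative, not K-LOINC code:

```python
# Six-axis LOINC-style concept name: component, property, time aspect,
# system (specimen), scale, and optional method, joined by colons.
from dataclasses import dataclass

@dataclass(frozen=True)
class LabConcept:
    component: str    # what is measured, e.g. "Glucose"
    prop: str         # kind of quantity, e.g. "MCnc" (mass concentration)
    time: str         # e.g. "Pt" (point in time)
    system: str       # specimen, e.g. "Ser/Plas"
    scale: str        # e.g. "Qn" (quantitative)
    method: str = ""  # optional sixth axis

    def fully_specified_name(self):
        return ":".join([self.component, self.prop, self.time,
                         self.system, self.scale, self.method])

c = LabConcept("Glucose", "MCnc", "Pt", "Ser/Plas", "Qn")
print(c.fully_specified_name())  # Glucose:MCnc:Pt:Ser/Plas:Qn:
```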
Protein, fat, moisture, and cooking yields from a nationwide study of retail beef cuts.
USDA-ARS?s Scientific Manuscript database
Nutrient data from the U.S. Department of Agriculture (USDA) are an important resource for U.S. and international databases. To ensure the data for retail beef cuts in USDA’s National Nutrient Database for Standard Reference (SR) are current, a comprehensive, nationwide, multiyear study was conducte...
ERIC Educational Resources Information Center
Missouri Univ., Columbia. Instructional Materials Lab.
These two documents deal with the relationship between Missouri's Show-Me Standards (the standards defining what all Missouri students should know upon graduation from high school) with the vocational competencies taught in secondary-level agricultural education courses. The first document, which is a database documenting the common ground that…
ERIC Educational Resources Information Center
Tieman, Rebecca; Burns, Stacey
This publication consists of the main and mini reports for Missouri's Show-Me Standards and vocational education competencies for business education. This database documents the common ground between academic skills and vocational competencies. Both components of the Show-Me Standards--knowledge (content) and performance (process)--have been…
ERIC Educational Resources Information Center
Missouri Univ., Columbia. Instructional Materials Lab.
This publication consists of the main and mini reports for Missouri's Show-Me Standards and vocational education competencies for industrial education. This database documents the common ground between academic skills and vocational competencies. Both components of the Show-Me Standards--knowledge (content) and performance (process)--have been…
ERIC Educational Resources Information Center
Tieman, Rebecca; Burns, Stacey
This publication consists of the main and mini reports for Missouri's Show-Me Standards and vocational education competencies for health occupations. This database documents the common ground between academic skills and vocational competencies. Both components of the Show-Me Standards--knowledge (content) and performance (process)--have been…
Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro
2016-06-01
Healthcare databases are useful sources to investigate the epidemiology of chronic obstructive pulmonary disease (COPD), to assess longitudinal outcomes in patients with COPD, and to develop disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system, the Read codes system or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive value); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. Ethics approval is not required. Results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating healthcare databases. CRD42015029204.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
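The test measures this protocol looks for, sensitivity and positive predictive value against a reference standard, reduce to simple counts over patient records. A sketch with invented records:

```python
# Comparing database COPD codes against a reference standard diagnosis and
# computing sensitivity and positive predictive value. Records are invented.
records = [
    # (has_copd_code_in_database, copd_per_reference_standard)
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False),
]

tp = sum(1 for code, ref in records if code and ref)
fp = sum(1 for code, ref in records if code and not ref)
fn = sum(1 for code, ref in records if not code and ref)

sensitivity = tp / (tp + fn)   # how many true cases the codes capture
ppv = tp / (tp + fp)           # how many coded cases are truly COPD
print(round(sensitivity, 2), round(ppv, 2))
```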
Semi-Automated Annotation of Biobank Data Using Standard Medical Terminologies in a Graph Database.
Hofer, Philipp; Neururer, Sabrina; Goebel, Georg
2016-01-01
Data describing biobank resources frequently contains unstructured free-text information or insufficient coding standards. (Bio-) medical ontologies like Orphanet Rare Diseases Ontology (ORDO) or the Human Disease Ontology (DOID) provide a high number of concepts, synonyms and entity relationship properties. Such standard terminologies increase quality and granularity of input data by adding comprehensive semantic background knowledge from validated entity relationships. Moreover, cross-references between terminology concepts facilitate data integration across databases using different coding standards. In order to encourage the use of standard terminologies, our aim is to identify and link relevant concepts with free-text diagnosis inputs within a biobank registry. Relevant concepts are selected automatically by lexical matching and SPARQL queries against a RDF triplestore. To ensure correctness of annotations, proposed concepts have to be confirmed by medical data administration experts before they are entered into the registry database. Relevant (bio-) medical terminologies describing diseases and phenotypes were identified and stored in a graph database which was tied to a local biobank registry. Concept recommendations during data input trigger a structured description of medical data and facilitate data linkage between heterogeneous systems.
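The lexical-matching step described above, proposing standard-terminology concepts for a free-text diagnosis before a human expert confirms them, can be sketched with the standard library alone. The concept labels and identifiers below are illustrative placeholders, not actual ORDO/DOID content, and the real system queries an RDF triplestore via SPARQL rather than an in-memory dict:

```python
# Stdlib-only sketch: rank terminology concepts by lexical similarity to a
# free-text diagnosis; a human confirms before anything enters the registry.
import difflib

concept_labels = {
    "marfan syndrome": "ORDO:558",        # label -> hypothetical concept id
    "diabetes mellitus": "DOID:9351",
    "cystic fibrosis": "ORDO:586",
}

def propose_concepts(free_text, cutoff=0.6):
    """Return candidate (label, id) pairs ranked by lexical similarity."""
    hits = difflib.get_close_matches(free_text.lower().strip(),
                                     concept_labels, n=3, cutoff=cutoff)
    return [(label, concept_labels[label]) for label in hits]

print(propose_concepts("Marfans syndrome"))   # likely matches "marfan syndrome"
```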
Reference Manual for Machine-Readable Descriptions of Research Projects and Institutions.
ERIC Educational Resources Information Center
Dierickx, Harold; Hopkinson, Alan
This reference manual presents a standardized communication format for the exchange between databases or other information services of machine-readable information on research in progress. The manual is produced in loose-leaf format to facilitate updating. Its first section defines in broad outline the format and content of applicable records. A…
The multiple personalities of Watson and Crick strands.
Cartwright, Reed A; Graur, Dan
2011-02-08
In genetics it is customary to refer to double-stranded DNA as containing a "Watson strand" and a "Crick strand." However, there seems to be no consensus in the literature on the exact meaning of these two terms, and the many usages contradict one another as well as the original definition. Here, we review the history of the terminology and suggest retaining a single sense that is currently the most useful and consistent. The Saccharomyces Genome Database defines the Watson strand as the strand which has its 5'-end at the short-arm telomere and the Crick strand as its complement. The Watson strand is always used as the reference strand in their database. Using this as the basis of our standard, we recommend that Watson and Crick strand terminology only be used in the context of genomics. When possible, the centromere or other genomic feature should be used as a reference point, dividing the chromosome into two arms of unequal lengths. Under our proposal, the Watson strand is standardized as the strand whose 5'-end is on the short arm of the chromosome, and the Crick strand as the one whose 5'-end is on the long arm. Furthermore, the Watson strand should be retained as the reference (plus) strand in a genomic database. This usage not only makes the determination of Watson and Crick unambiguous, but also allows unambiguous selection of reference strands for genomics. This article was reviewed by John M. Logsdon, Igor B. Rogozin (nominated by Andrey Rzhetsky), and William Martin.
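The proposed convention is mechanical enough to encode directly: the Watson strand is whichever strand has its 5'-end on the short arm. A minimal sketch, assuming the plus (reference) strand runs 5'-to-3' from coordinate 0; coordinates are illustrative:

```python
# Determine the Watson strand under the proposed convention: the strand whose
# 5'-end lies on the chromosome's short arm (arms split by the centromere).
def watson_strand(centromere_pos, chrom_length):
    """Assume the plus strand runs 5'->3' from coordinate 0 to chrom_length.
    Returns which strand ("plus" or "minus") is Watson."""
    short_arm_at_start = centromere_pos < chrom_length - centromere_pos
    # Plus strand's 5'-end is at coord 0; minus strand's is at chrom_length.
    return "plus" if short_arm_at_start else "minus"

print(watson_strand(centromere_pos=3_000_000, chrom_length=10_000_000))
print(watson_strand(centromere_pos=8_000_000, chrom_length=10_000_000))
```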
ERIC Educational Resources Information Center
Tieman, Rebecca; Burns, Stacey
These two documents deal with the relationship between Missouri's Show-Me Standards (the standards defining what all Missouri students should know upon graduation from high school) with the vocational competencies taught in secondary-level family and consumer science (FACS) education courses. The first document, which is a database documenting the…
Calorie count - Alcoholic beverages
United States Department of Agriculture website. National Nutrient Database for Standard Reference. Accessed April 12, 2018.
Toward a standard reference database for computer-aided mammography
NASA Astrophysics Data System (ADS)
Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.
2008-03-01
Because of the lack of mammography databases with a large amount of codified images and identified characteristics like pathology, type of breast tissue, and abnormality, there is a problem for the development of robust systems for computer-aided diagnosis. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: The Mammographic Image Analysis Society Digital Mammogram Database (MIAS), The Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL), and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was done automatically using scripts for image download, file format conversion, and browsing of file names, web pages, and information files. Disregarding the resolution, this resulted in a total of 10,509 reference images, of which 6,767 are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes, and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
Cameron, M; Perry, J; Middleton, J R; Chaffer, M; Lewis, J; Keefe, G P
2018-01-01
This study evaluated MALDI-TOF mass spectrometry and a custom reference spectra expanded database for the identification of bovine-associated coagulase-negative staphylococci (CNS). A total of 861 CNS isolates were used in the study, covering 21 different CNS species. The majority of the isolates were previously identified by rpoB gene sequencing (n = 804) and the remainder were identified by sequencing of hsp60 (n = 56) and tuf (n = 1). The genotypic identification was considered the gold standard identification. Using a direct transfer protocol and the existing commercial database, MALDI-TOF mass spectrometry showed a typeability of 96.5% (831/861) and an accuracy of 99.2% (824/831). Using a custom reference spectra expanded database, which included an additional 13 in-house created reference spectra, isolates were identified by MALDI-TOF mass spectrometry with 99.2% (854/861) typeability and 99.4% (849/854) accuracy. Overall, MALDI-TOF mass spectrometry using the direct transfer method was shown to be a highly reliable tool for the identification of bovine-associated CNS. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
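The summary measures reported in this abstract follow directly from the raw counts it gives: typeability is the proportion of isolates assigned any identification, and accuracy is the proportion of those identifications agreeing with the genotypic gold standard. Reproducing the arithmetic:

```python
# Typeability and accuracy from the counts reported in the abstract.
def pct(numerator, denominator):
    return round(100 * numerator / denominator, 1)

total = 861
# Existing commercial database:
print(pct(831, total), pct(824, 831))   # typeability 96.5, accuracy 99.2
# Custom reference-spectra-expanded database:
print(pct(854, total), pct(849, 854))   # typeability 99.2, accuracy 99.4
```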
USDA-ARS?s Scientific Manuscript database
Beef nutrition is very important to the worldwide beef industry and its consumers. The objective of this study was to analyze nutrient composition of eight beef rib and plate cuts to update the nutrient data in the USDA National Nutrient Database for Standard Reference (SR). Seventy-two carcasses ...
Lupiañez-Barbero, Ascension; González Blanco, Cintia; de Leiva Hidalgo, Alberto
2018-05-23
Food composition tables and databases (FCTs or FCDBs) provide the information needed to estimate intake of nutrients and other food components. In Spain, the lack of a reference database has resulted in the use of different FCTs/FCDBs in nutritional surveys and research studies, as well as in the development of dietetic tools for diet analysis. As a result, biased, non-comparable results are obtained, and healthcare professionals are rarely aware of these limitations. AECOSAN and the BEDCA association developed an FCDB following European standards, the Spanish Food Composition Database Network (RedBEDCA). The current database has a limited number of foods and food components and barely contains processed foods, which limits its use in epidemiological studies and in the daily practice of healthcare professionals. Copyright © 2018 SEEN y SED. Publicado por Elsevier España, S.L.U. All rights reserved.
Plant Reactome: a resource for plant pathways and comparative analysis
Naithani, Sushma; Preece, Justin; D'Eustachio, Peter; Gupta, Parul; Amarasinghe, Vindhya; Dharmawardhana, Palitha D.; Wu, Guanming; Fabregat, Antonio; Elser, Justin L.; Weiser, Joel; Keays, Maria; Fuentes, Alfonso Munoz-Pomer; Petryszak, Robert; Stein, Lincoln D.; Ware, Doreen; Jaiswal, Pankaj
2017-01-01
Plant Reactome (http://plantreactome.gramene.org/) is a free, open-source, curated plant pathway database portal, provided as part of the Gramene project. The database provides intuitive bioinformatics tools for the visualization, analysis and interpretation of pathway knowledge to support genome annotation, genome analysis, modeling, systems biology, basic research and education. Plant Reactome employs the structural framework of a plant cell to show metabolic, transport, genetic, developmental and signaling pathways. We manually curate molecular details of pathways in these domains for reference species Oryza sativa (rice) supported by published literature and annotation of well-characterized genes. Two hundred twenty-two rice pathways, 1025 reactions associated with 1173 proteins, 907 small molecules and 256 literature references have been curated to date. These reference annotations were used to project pathways for 62 model, crop and evolutionarily significant plant species based on gene homology. Database users can search and browse various components of the database, visualize curated baseline expression of pathway-associated genes provided by the Expression Atlas and upload and analyze their Omics datasets. The database also offers data access via Application Programming Interfaces (APIs) and in various standardized pathway formats, such as SBML and BioPAX. PMID:27799469
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
To make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective, and accurate, this paper establishes a quality model of FGIDB formed by the standardization of database construction and quality control, the conformity of data-set quality, and the functionality of the database management system; it also designs the overall principles, contents, and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. Based on this quality-model framework, the paper progressively designs the quality elements, evaluation items, and properties of the Fundamental Geographic Information Database. Organically connected, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and evaluating the quality of the Fundamental Geographic Information Database, and it is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for quality-evaluation technology of the Fundamental Geographic Information Database.
The multiple personalities of Watson and Crick strands
2011-01-01
Background In genetics it is customary to refer to double-stranded DNA as containing a "Watson strand" and a "Crick strand." However, there seems to be no consensus in the literature on the exact meaning of these two terms, and the many usages contradict one another as well as the original definition. Here, we review the history of the terminology and suggest retaining a single sense that is currently the most useful and consistent. Proposal The Saccharomyces Genome Database defines the Watson strand as the strand which has its 5'-end at the short-arm telomere and the Crick strand as its complement. The Watson strand is always used as the reference strand in their database. Using this as the basis of our standard, we recommend that Watson and Crick strand terminology only be used in the context of genomics. When possible, the centromere or other genomic feature should be used as a reference point, dividing the chromosome into two arms of unequal lengths. Under our proposal, the Watson strand is standardized as the strand whose 5'-end is on the short arm of the chromosome, and the Crick strand as the one whose 5'-end is on the long arm. Furthermore, the Watson strand should be retained as the reference (plus) strand in a genomic database. This usage not only makes the determination of Watson and Crick unambiguous, but also allows unambiguous selection of reference strands for genomics. Reviewers This article was reviewed by John M. Logsdon, Igor B. Rogozin (nominated by Andrey Rzhetsky), and William Martin. PMID:21303550
Calorie count - sodas and energy drinks
Sample and data processing considerations for the NIST quantitative infrared database
NASA Astrophysics Data System (ADS)
Chu, Pamela M.; Guenther, Franklin R.; Rhoderick, George C.; Lafferty, Walter J.; Phillips, William
1999-02-01
Fourier-transform infrared (FT-IR) spectrometry has become a useful real-time in situ analytical technique for quantitative gas phase measurements. In fact, the U.S. Environmental Protection Agency (EPA) has recently approved open-path FT-IR monitoring for the determination of hazardous air pollutants (HAP) identified in EPA's Clean Air Act of 1990. To support infrared based sensing technologies, the National Institute of Standards and Technology (NIST) is currently developing a standard quantitative spectral database of the HAPs based on gravimetrically prepared standard samples. The procedures developed to ensure the quantitative accuracy of the reference data are discussed, including sample preparation, residual sample contaminants, data processing considerations, and estimates of error.
National Software Reference Library (NSRL)
National Institute of Standards and Technology Data Gateway
National Software Reference Library (NSRL) (PC database for purchase) A collaboration of the National Institute of Standards and Technology (NIST), the National Institute of Justice (NIJ), the Federal Bureau of Investigation (FBI), the Defense Computer Forensics Laboratory (DCFL), the U.S. Customs Service, software vendors, and state and local law enforcement organizations, the NSRL is a tool to assist in fighting crime involving computers.
Datacube Services in Action, Using Open Source and Open Standards
NASA Astrophysics Data System (ADS)
Baumann, P.; Misev, D.
2016-12-01
Array Databases comprise novel, promising technology for massive spatio-temporal datacubes, extending the SQL paradigm of "any query, anytime" to n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. The rasdaman ("raster data manager") system, which has pioneered Array Databases, is available in open source on www.rasdaman.org. Its declarative query language extends SQL with array operators which are optimized and parallelized on the server side. The rasdaman engine, which is part of OSGeo Live, is mature and in operational use on databases individually holding dozens of Terabytes. Further, the rasdaman concepts have strongly impacted international Big Data standards in the field, including the forthcoming MDA ("Multi-Dimensional Array") extension to ISO SQL, the OGC Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) standards, and the forthcoming INSPIRE WCS/WCPS; in both OGC and INSPIRE, rasdaman serves as the WCS Core Reference Implementation. In our talk we present concepts, architecture, operational services, and standardization impact of open-source rasdaman, as well as the experience gained.
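A WCPS request of the kind standardized by OGC can be illustrated as a simple URL-construction sketch; the endpoint URL and the coverage name "AvgLandTemp" below are assumptions for illustration, not an actual rasdaman deployment:

```python
# Sketch: build a GET URL for an OGC WCPS ProcessCoverages request.
# The WCPS query averages all cells of a (hypothetical) coverage and
# asks for CSV output; endpoint and coverage name are placeholders.
from urllib.parse import urlencode

def wcps_request_url(endpoint, query):
    """Compose a WCS 2.0.1 ProcessCoverages GET URL for a WCPS query."""
    params = {
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": query,
    }
    return endpoint + "?" + urlencode(params)

query = 'for c in (AvgLandTemp) return encode(avg(c), "csv")'
url = wcps_request_url("https://example.org/rasdaman/ows", query)
```

The server side evaluates the declarative query, which is what allows the optimization, parallelization, and distribution described in the abstract.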
Disbiome database: linking the microbiome to disease.
Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart
2018-06-04
Recent research has provided fascinating indications and evidence that the host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies within the combination of the presence of references to other databases, which enables both specific and diverse search strategies within the Disbiome database, and the human annotation which ensures a simple and structured presentation of the available data.
Schacherer, Lindsey J; Xie, Weiping; Owens, Michaela A; Alarcon, Clara; Hu, Tiger X
2016-09-01
Liquid chromatography coupled with tandem mass spectrometry is increasingly used for protein detection in transgenic crops research. Currently this is achieved with protein reference standards, which may take significant time and effort to obtain, so there is a need for rapid protein detection without protein reference standards. A sensitive and specific method was developed to detect target proteins in transgenic maize leaf crude extract at concentrations as low as ∼30 ng mg(-1) dry leaf without the need for reference standards or any sample enrichment. A hybrid Q-TRAP mass spectrometer was used to monitor all potential tryptic peptides of the target proteins in both transgenic and non-transgenic samples. The multiple reaction monitoring-initiated detection and sequencing (MIDAS) approach was used for initial peptide/protein identification via Mascot database search. Further confirmation was achieved by direct comparison between transgenic and non-transgenic samples. Definitive confirmation was provided by running the same experiments on synthetic peptides or protein standards, if available. A targeted proteomic mass spectrometry method using the MIDAS approach is an ideal methodology for detection of new proteins in early stages of transgenic crop research and development, when neither protein reference standards nor antibodies are available. © 2016 Society of Chemical Industry.
Classification of Chemicals Based On Structured Toxicity Information
Thirty years and millions of dollars worth of pesticide registration toxicity studies, historically stored as hardcopy and scanned documents, have been digitized into highly standardized and structured toxicity data within the Toxicity Reference Database (ToxRefDB). Toxicity-bas...
The Joint Committee for Traceability in Laboratory Medicine (JCTLM) - its history and operation.
Jones, Graham R D; Jackson, Craig
2016-01-30
The Joint Committee for Traceability in Laboratory Medicine (JCTLM) was formed to bring together the sciences of metrology, laboratory medicine and laboratory quality management. The aim of this collaboration is to support worldwide comparability and equivalence of measurement results in clinical laboratories for the purpose of improving healthcare. The JCTLM has its origins in the activities of international metrology treaty organizations, professional societies and federations devoted to improving measurement quality in physical, chemical and medical sciences. The three founding organizations, the International Committee for Weights and Measures (CIPM), the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) and the International Laboratory Accreditation Cooperation (ILAC) are the leaders of this activity. The main service of the JCTLM is a web-based database with a list of reference materials, reference methods and reference measurement services meeting appropriate international standards. This database allows manufacturers to select references for assay traceability and provides support for suppliers of these services. As of mid 2015 the database lists 295 reference materials for 162 analytes, 170 reference measurement procedures for 79 analytes and 130 reference measurement services for 39 analytes. There remains a need for the development and implementation of metrological traceability in many areas of laboratory medicine and the JCTLM will continue to promote these activities into the future. Copyright © 2015 Elsevier B.V. All rights reserved.
The wavelet/scalar quantization compression standard for digital fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
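The uniform scalar quantization step at the heart of the standard can be illustrated with a toy example; the bin width here is arbitrary, whereas the real WSQ standard derives per-subband bin widths adaptively from the wavelet decomposition:

```python
# Toy illustration of uniform scalar quantization: coefficients are mapped
# to integer bin indices (the lossy step that enables high compression),
# then reconstructed to bin centers on decompression.

def quantize(coeffs, step):
    """Map each coefficient to its nearest bin index for bin width `step`."""
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    """Reconstruct approximate coefficients from bin indices."""
    return [i * step for i in indices]

idx = quantize([0.9, -2.3, 4.1], 0.5)    # integer indices, cheap to entropy-code
recon = dequantize(idx, 0.5)             # reconstruction error bounded by step/2
```

Coarser steps give higher compression at the cost of larger reconstruction error, which is how ratios around 20:1 are reached while retaining archival quality.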
NASA Astrophysics Data System (ADS)
Wziontek, Hartmut; Falk, Reinhard; Bonvalot, Sylvain; Rülke, Axel
2017-04-01
After about 10 years of successful joint operation by BGI and BKG, the International Database for Absolute Gravity Measurements "AGrav" (see references hereafter) underwent a major revision. The outdated web interface was replaced by a responsive, high-level web application framework based on Python and built on top of Pyramid. Functionality was added, such as interactive time-series plots and a report generator, and the interactive map-based station overview was updated completely, now comprising clustering and the classification of stations. Furthermore, the database backend was migrated to PostgreSQL for better support of the application framework and long-term availability. As comparisons of absolute gravimeters (AG) become essential to realize a precise and uniform gravity standard, the database was extended to document the results at international and regional level, including those performed at monitoring stations equipped with superconducting gravimeters (SGs). This will make it possible to link different AGs and to trace their equivalence back to the key comparisons under the auspices of the International Committee for Weights and Measures (CIPM) as the best metrological realization of the absolute gravity standard. In this way the new AGrav database accommodates the demands of the new Global Absolute Gravity Reference System as recommended by IAG Resolution No. 2 adopted in Prague in 2015. The new database will be presented with a focus on the new user interface and new functionality, and all institutions involved in absolute gravimetry are called upon to participate and contribute their information, to build up as complete a picture as possible of high-precision absolute gravimetry and improve its visibility. A Digital Object Identifier (DOI) will be provided by BGI to contributors to give better traceability and facilitate the referencing of their gravity surveys.
Links and references: BGI mirror site : http://bgi.obs-mip.fr/data-products/Gravity-Databases/Absolute-Gravity-data/ BKG mirror site: http://agrav.bkg.bund.de/agrav-meta/ Wilmes, H., H. Wziontek, R. Falk, S. Bonvalot (2009). AGrav - the New Absolute Gravity Database and a Proposed Cooperation with the GGP Project. J. of Geodynamics, 48, pp. 305-309. doi:10.1016/j.jog.2009.09.035. Wziontek, H., H. Wilmes, S. Bonvalot (2011). AGrav: An international database for absolute gravity measurements. In Geodesy for Planet Earth (S. Kenyon at al. eds). IAG Symposia, 136, 1035-1040, Springer, Berlin. 2011. doi:10.1007/978-3-642-20338-1_130.
Abou El Hassan, Mohamed; Stoianov, Alexandra; Araújo, Petra A T; Sadeghieh, Tara; Chan, Man Khun; Chen, Yunqi; Randell, Edward; Nieuwesteeg, Michelle; Adeli, Khosrow
2015-11-01
The CALIPER program has established a comprehensive database of pediatric reference intervals using largely the Abbott ARCHITECT biochemical assays. To expand clinical application of CALIPER reference standards, the present study is aimed at transferring CALIPER reference intervals from the Abbott ARCHITECT to Beckman Coulter AU assays. Transference of CALIPER reference intervals was performed based on the CLSI guidelines C28-A3 and EP9-A2. The new reference intervals were directly verified using up to 100 reference samples from the healthy CALIPER cohort. We found a strong correlation between Abbott ARCHITECT and Beckman Coulter AU biochemical assays, allowing the transference of the vast majority (94%; 30 out of 32 assays) of CALIPER reference intervals previously established using Abbott assays. Transferred reference intervals were, in general, similar to previously published CALIPER reference intervals, with some exceptions. Most of the transferred reference intervals were sex-specific and were verified using healthy reference samples from the CALIPER biobank based on CLSI criteria. It is important to note that the comparisons performed between the Abbott and Beckman Coulter assays make no assumptions as to assay accuracy or which system is more correct/accurate. The majority of CALIPER reference intervals were transferrable to Beckman Coulter AU assays, allowing the establishment of a new database of pediatric reference intervals. This further expands the utility of the CALIPER database to clinical laboratories using the AU assays; however, each laboratory should validate these intervals for their analytical platform and local population as recommended by the CLSI. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
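The transference idea can be sketched as a regression-based rescaling of interval limits between platforms; the paired values below are invented, and a real CLSI EP9-style transference would typically use Deming or Passing-Bablok regression on patient comparison data rather than the ordinary least squares shown here:

```python
# Minimal sketch (assumed data): fit a line relating method A to method B
# from paired patient results, then map A's reference interval limits onto
# B's scale. Illustrative only; not the CALIPER procedure itself.

def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Paired results measured on both platforms (invented values)
method_a = [2.0, 4.0, 6.0, 8.0, 10.0]
method_b = [2.2, 4.3, 6.4, 8.5, 10.6]

slope, intercept = ols_fit(method_a, method_b)

# Transfer an interval established on method A onto method B's scale
low_a, high_a = 3.0, 9.0
low_b, high_b = slope * low_a + intercept, slope * high_a + intercept
```

As the abstract stresses, the transferred limits must then be verified with samples from healthy reference individuals before clinical use.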
Hermjakob, Henning; Montecchi-Palazzi, Luisa; Bader, Gary; Wojcik, Jérôme; Salwinski, Lukasz; Ceol, Arnaud; Moore, Susan; Orchard, Sandra; Sarkans, Ugis; von Mering, Christian; Roechert, Bernd; Poux, Sylvain; Jung, Eva; Mersch, Henning; Kersey, Paul; Lappe, Michael; Li, Yixue; Zeng, Rong; Rana, Debashis; Nikolski, Macha; Husi, Holger; Brun, Christine; Shanker, K; Grant, Seth G N; Sander, Chris; Bork, Peer; Zhu, Weimin; Pandey, Akhilesh; Brazma, Alvis; Jacq, Bernard; Vidal, Marc; Sherman, David; Legrain, Pierre; Cesareni, Gianni; Xenarios, Ioannis; Eisenberg, David; Steipe, Boris; Hogue, Chris; Apweiler, Rolf
2004-02-01
A major goal of proteomics is the complete description of the protein interaction network underlying cell physiology. A large number of small scale and, more recently, large-scale experiments have contributed to expanding our understanding of the nature of the interaction network. However, the necessary data integration across experiments is currently hampered by the fragmentation of publicly available protein interaction data, which exists in different formats in databases, on authors' websites or sometimes only in print publications. Here, we propose a community standard data model for the representation and exchange of protein interaction data. This data model has been jointly developed by members of the Proteomics Standards Initiative (PSI), a work group of the Human Proteome Organization (HUPO), and is supported by major protein interaction data providers, in particular the Biomolecular Interaction Network Database (BIND), Cellzome (Heidelberg, Germany), the Database of Interacting Proteins (DIP), Dana Farber Cancer Institute (Boston, MA, USA), the Human Protein Reference Database (HPRD), Hybrigenics (Paris, France), the European Bioinformatics Institute's (EMBL-EBI, Hinxton, UK) IntAct, the Molecular Interactions (MINT, Rome, Italy) database, the Protein-Protein Interaction Database (PPID, Edinburgh, UK) and the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING, EMBL, Heidelberg, Germany).
Lessons Learned and Technical Standards: A Logical Marriage
NASA Technical Reports Server (NTRS)
Gill, Paul; Vaughan, William W.; Garcia, Danny; Gill, Maninderpal S. (Technical Monitor)
2001-01-01
A comprehensive database of lessons learned that corresponds with relevant technical standards would be a boon to technical personnel and standards developers. The authors discuss the emergence of one such database within NASA, and show how and why the incorporation of lessons learned into technical standards databases can be an indispensable tool for government and industry. Passed down from parent to child, teacher to pupil, and from senior to junior employees, lessons learned have been the basis for our accomplishments throughout the ages. Government and industry, too, have long recognized the need to systematically document and utilize the knowledge gained from past experiences in order to avoid the repetition of failures and mishaps. The use of lessons learned is a principal component of any organizational culture committed to continuous improvement. They have formed the foundation for discoveries, inventions, improvements, textbooks, and technical standards. Technical standards are a very logical way to communicate these lessons. Using the time-honored tradition of passing on lessons learned while utilizing the newest in information technology, the National Aeronautics and Space Administration (NASA) has launched an intensive effort to link lessons learned with specific technical standards through various Internet databases. This article will discuss the importance of lessons learned to engineers, the difficulty in finding relevant lessons learned while engaged in an engineering project, and the new NASA project that can help alleviate this difficulty. The article will conclude with recommendations for more expanded cross-sectoral uses of lessons learned with reference to technical standards.
GetData: A filesystem-based, column-oriented database format for time-ordered binary data
NASA Astrophysics Data System (ADS)
Wiebe, Donald V.; Netterfield, Calvin B.; Kisner, Theodore S.
2015-12-01
The GetData Project is the reference implementation of the Dirfile Standards, a filesystem-based, column-oriented database format for time-ordered binary data. Dirfiles provide a fast, simple format for storing and reading data, suitable for both quicklook and analysis pipelines. GetData provides a C API and bindings exist for various other languages. GetData is distributed under the terms of the GNU Lesser General Public License.
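The column-oriented storage model behind Dirfiles can be illustrated with a minimal sketch; this shows only the general idea of one raw binary file per field inside a directory, described by a "format" file, and is not the actual GetData C API or the full Dirfile format syntax:

```python
# Conceptual sketch of a Dirfile-style layout: each field is a flat binary
# file of fixed-size samples, so reading one column never touches the
# others. Field naming and the format-file line are simplified placeholders.
import os
import struct
import tempfile

def write_field(dirname, name, values):
    """Append-free write of float64 samples for one field."""
    with open(os.path.join(dirname, name), "wb") as f:
        f.write(struct.pack(f"<{len(values)}d", *values))

def read_field(dirname, name):
    """Read back all float64 samples of one field."""
    with open(os.path.join(dirname, name), "rb") as f:
        raw = f.read()
    return list(struct.unpack(f"<{len(raw) // 8}d", raw))

d = tempfile.mkdtemp()
# A minimal format file naming the field and its sample type
with open(os.path.join(d, "format"), "w") as f:
    f.write("temperature RAW FLOAT64 1\n")

write_field(d, "temperature", [1.5, 2.5, 3.5])
samples = read_field(d, "temperature")
```

Because each field is an independent flat file, both quicklook tools and analysis pipelines can stream a single column quickly, which is the design goal the abstract describes.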
Solving the Problem: Genome Annotation Standards before the Data Deluge.
Klimke, William; O'Donovan, Claire; White, Owen; Brister, J Rodney; Clark, Karen; Fedorov, Boris; Mizrachi, Ilene; Pruitt, Kim D; Tatusova, Tatiana
2011-10-15
The promise of genome sequencing was that the vast undiscovered country would be mapped out by comparison of the multitude of sequences available and would aid researchers in deciphering the role of each gene in every organism. Researchers recognize that there is a need for high quality data. However, different annotation procedures, numerous databases, and a diminishing percentage of experimentally determined gene functions have resulted in a spectrum of annotation quality. NCBI in collaboration with sequencing centers, archival databases, and researchers, has developed the first international annotation standards, a fundamental step in ensuring that high quality complete prokaryotic genomes are available as gold standard references. Highlights include the development of annotation assessment tools, community acceptance of protein naming standards, comparison of annotation resources to provide consistent annotation, and improved tracking of the evidence used to generate a particular annotation. The development of a set of minimal standards, including the requirement for annotated complete prokaryotic genomes to contain a full set of ribosomal RNAs, transfer RNAs, and proteins encoding core conserved functions, is an historic milestone. The use of these standards in existing genomes and future submissions will increase the quality of databases, enabling researchers to make accurate biological discoveries.
Standard methods for sampling North American freshwater fishes
Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.
2009-01-01
This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing cold and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. Provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent transfer of invasive species while sampling.
Validation of asthma recording in electronic health records: a systematic review
Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J
2017-01-01
Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. 
Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting that the choice of case definition may be important for obtaining asthma definitions with optimal validity. PMID:29238227
A literature search tool for intelligent extraction of disease-associated genes.
Jung, Jae-Yoon; DeLuca, Todd F; Nelson, Tristan H; Wall, Dennis P
2014-01-01
To extract disorder-associated genes from the scientific literature in PubMed with greater sensitivity for literature-based support than existing methods. We developed a PubMed query to retrieve disorder-related, original research articles. Then we applied a rule-based text-mining algorithm with keyword matching to extract target disorders, genes with significant results, and the type of study described by the article. We compared our resulting candidate disorder genes and supporting references with existing databases. We demonstrated that our candidate gene set covers nearly all genes in manually curated databases, and that the references supporting the disorder-gene link are more extensive and accurate than other general purpose gene-to-disorder association databases. We implemented a novel publication search tool to find target articles, specifically focused on links between disorders and genotypes. Through comparison against gold-standard manually updated gene-disorder databases and comparison with automated databases of similar functionality we show that our tool can search through the entirety of PubMed to extract the main gene findings for human diseases rapidly and accurately.
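The rule-based keyword matching described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation; the gene and disorder lexicons and the "significance" cues are invented for the example.

```python
import re

# Hypothetical sketch of rule-based gene-disorder extraction: flag a link
# when a known gene symbol and a disorder term co-occur in a sentence that
# also contains a "significant result" cue. All lexicons are invented.
GENE_SYMBOLS = {"SHANK3", "MECP2", "TCOF1"}
DISORDER_TERMS = {"autism", "rett syndrome"}
SIGNIFICANCE_CUES = {"associated", "significant", "linked"}

def extract_links(abstract: str):
    """Return (gene, disorder) pairs supported by a cue-bearing sentence."""
    links = []
    for sentence in re.split(r"(?<=[.!?])\s+", abstract):
        tokens = set(re.findall(r"[A-Za-z0-9]+", sentence))
        lowered = sentence.lower()
        genes = {g for g in GENE_SYMBOLS if g in tokens}
        disorders = {d for d in DISORDER_TERMS if d in lowered}
        if genes and disorders and any(c in lowered for c in SIGNIFICANCE_CUES):
            links.extend((g, d) for g in genes for d in disorders)
    return links
```

A real system would also classify the study type and resolve gene synonyms; this sketch omits both.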
[Standardization of terminology in laboratory medicine I].
Yoon, Soo Young; Yoon, Jong Hyun; Min, Won Ki; Lim, Hwan Sub; Song, Junghan; Chae, Seok Lae; Lee, Chang Kyu; Kwon, Jung Ah; Lee, Kap No
2007-04-01
Standardization of medical terminology is essential for data transmission between health-care institutions or clinical laboratories and for maximizing the benefits of information technology. The purpose of our study was to standardize the medical terms used in the clinical laboratory, such as test names, units, and terms used in result descriptions. During the first year of the study, we developed a standard database of concept names for laboratory terms, which covered the terms used in government health care centers, their branch offices, and primary health care units. Laboratory terms were collected from the electronic data interchange (EDI) codes of the National Health Insurance Corporation (NHIC), the Logical Observation Identifier Names and Codes (LOINC) database, community health centers and their branch offices, and the clinical laboratories of representative university medical centers. For standard expression, we referred to the English-Korean/Korean-English medical dictionary of the Korean Medical Association and the rules for foreign language translation. Programs for mapping between the LOINC database and EDI codes and for translating English to Korean were developed. A Korean standard laboratory terminology database containing six axial concept names (component, property, time aspect, system (specimen), scale type, and method type) was established for 7,508 test observations. Short names and a mapping table for EDI codes and the Unified Medical Language System (UMLS) were added. Synonym tables for concept names, words used in the database, and the six axial terms were prepared to make it easier to find the standard terminology from common terms used in the field of laboratory medicine. Here we report, for the first time, a Korean standard laboratory terminology database of test names, result description terms, and result units covering most laboratory tests in primary healthcare centers.
GMDD: a database of GMO detection methods.
Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans J P; Guo, Rong; Liang, Wanqi; Zhang, Dabing
2008-06-04
Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is needed. The GMO Detection method Database (GMDD) collects almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to the database, and newly submitted information is released soon after being checked. GMDD contains comprehensive information on GMO detection methods and will make GMO analysis much easier.
Database for Safety-Oriented Tracking of Chemicals
NASA Technical Reports Server (NTRS)
Stump, Jacob; Carr, Sandra; Plumlee, Debrah; Slater, Andy; Samson, Thomas M.; Holowaty, Toby L.; Skeete, Darren; Haenz, Mary Alice; Hershman, Scot; Raviprakash, Pushpa
2010-01-01
SafetyChem is a computer program that maintains a relational database for tracking chemicals and associated hazards at Johnson Space Center (JSC) through a Web-based graphical user interface. The SafetyChem database is accessible to authorized users via a JSC intranet. All new chemicals pass through a safety office, where information on hazards, required personal protective equipment (PPE), fire-protection warnings, and target organ effects (TOEs) is extracted from material safety data sheets (MSDSs) and recorded in the database. The database facilitates real-time management of inventory with attention to such issues as stability, shelf life, reduction of waste through transfer of unused chemicals to laboratories that need them, quantification of chemical wastes, and identification of chemicals for which disposal is required. Upon searching the database for a chemical, the user receives information on the physical properties of the chemical, hazard warnings, required PPE, a link to the MSDS, and references to the applicable International Organization for Standardization (ISO) 9000 standard work instructions and the applicable job hazard analysis. Also, to reduce the labor hours needed to comply with reporting requirements of the Occupational Safety and Health Administration, the data can be directly exported into the JSC hazardous-materials database.
Dicken, Connie L.; Dunlap, Pamela; Parks, Heather L.; Hammarstrom, Jane M.; Zientek, Michael L.; Zientek, Michael L.; Hammarstrom, Jane M.; Johnson, Kathleen M.
2016-07-13
As part of the first-ever U.S. Geological Survey global assessment of undiscovered copper resources, data common to several regional spatial databases published by the U.S. Geological Survey, including one report from Finland and one from Greenland, were standardized, updated, and compiled into a global copper resource database. This integrated collection of spatial databases provides location, geologic and mineral resource data, and source references for deposits, significant prospects, and areas permissive for undiscovered deposits of both porphyry copper and sediment-hosted copper. The copper resource database allows for efficient modeling on a global scale in a geographic information system (GIS) and is provided in an Esri ArcGIS file geodatabase format.
SPSmart: adapting population based SNP genotype databases for fast and comprehensive web access.
Amigo, Jorge; Salas, Antonio; Phillips, Christopher; Carracedo, Angel
2008-10-10
In the last five years large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10^9 genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format. 
In addition, full numerical description of the data is output in statistical results panels that include common population genetics metrics such as heterozygosity, Fst and In.
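As a rough illustration of the pre-computed indices mentioned above, expected heterozygosity and a simple two-population Fst for a biallelic SNP follow directly from allele frequencies. This is a textbook formulation, not SPSmart's actual code:

```python
# Hedged sketch of standard population-genetics summary statistics:
# expected heterozygosity per population, and Wright's Fst computed as
# (Ht - Hs) / Ht for two equally weighted populations at a biallelic SNP.
def heterozygosity(p: float) -> float:
    """Expected heterozygosity for a biallelic SNP with allele frequency p."""
    return 1.0 - (p * p + (1.0 - p) * (1.0 - p))

def fst(p1: float, p2: float) -> float:
    """Wright's Fst from two population allele frequencies (equal weights)."""
    hs = (heterozygosity(p1) + heterozygosity(p2)) / 2.0  # mean within-pop
    ht = heterozygosity((p1 + p2) / 2.0)                  # total (pooled)
    return 0.0 if ht == 0.0 else (ht - hs) / ht
```

For example, two populations with allele frequencies 0.1 and 0.9 give Fst = (0.5 − 0.18) / 0.5 = 0.64, while identical frequencies give 0.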
Splendore, Alessandra; Fanganiello, Roberto D; Masotti, Cibele; Morganti, Lucas S C; Passos-Bueno, M Rita
2005-05-01
Recently, a novel exon was described in TCOF1 that, although alternatively spliced, is included in the major protein isoform. In addition, most published mutations in this gene do not conform to current mutation nomenclature guidelines. Given these observations, we developed an online database of TCOF1 mutations in which all the reported mutations are renamed according to standard recommendations and in reference to the genomic and novel cDNA reference sequences (www.genoma.ib.usp.br/TCOF1_database). We also report in this work: 1) results of the first screening for large deletions in TCOF1 by Southern blot in patients without mutation detected by direct sequencing; 2) the identification of the first pathogenic mutation in the newly described exon 6A; and 3) statistical analysis of pathogenic mutations and polymorphism distribution throughout the gene.
Maalouf, Joyce; Cogswell, Mary E.; Yuan, Keming; Martin, Carrie; Gillespie, Cathleen; Ahuja, Jaspreet KC; Pehrsson, Pamela; Merritt, Robert
2015-01-01
The sodium concentrations (mg/100 g) for 23 of 125 Sentinel Foods (e.g. white bread) were identified in the 2009 CDC Packaged Food Database (PFD) and compared with data in the USDA’s 2013 National Nutrient Database for Standard Reference (SR 26). Sentinel Foods are foods identified by USDA to be monitored as primary indicators to assess the changes in the sodium content of commercially processed foods from stores and restaurants. Overall, 937 products were evaluated in the CDC PFD, and between 3 (one brand of ready-to-eat cereal) and 126 products (white bread) were evaluated per selected food. The mean sodium concentrations of 17 of the 23 (74%) selected foods in the CDC PFD were 90%–110% of the mean sodium concentrations in SR 26, and differences in sodium concentration were statistically significant for 6 Sentinel Foods. The sodium concentration of most of the Sentinel Foods, as selected in the PFD, appeared to represent the sodium concentrations of the corresponding food category. The results of our study help improve the understanding of how nutrition information compares between national analytic values and the label, and whether the selected Sentinel Foods represent their corresponding food category as indicators for assessment of change of the sodium content in the food supply. PMID:26484010
NASA Technical Reports Server (NTRS)
Baumback, J. I.; Davies, A. N.; Vonirmer, A.; Lampen, P. H.
1995-01-01
To assist peak assignment in ion mobility spectrometry (IMS), it is important to have quality reference data. The reference collection should be stored in a database system capable of being searched by spectral or substance information. We propose to build such a database customized for ion mobility spectra. At the outset it is important to quickly reach a critical mass of data in the collection, so we wish to obtain as many spectra, together with their IMS parameters, as possible. Spectra suppliers will be rewarded for their participation with access to the database. To make data exchange between users and the system administration possible, it is important to define a file format made specifically for the requirements of ion mobility spectra. The format should be computer readable and flexible enough for extensive comments to be included. In this document we propose such a data exchange format and invite comments on it. For international data exchange it is important to have a standard format, and we propose to base its definition on the JCAMP-DX protocol, which was developed for the exchange of infrared spectra. This standard, created by the Joint Committee on Atomic and Molecular Physical Data, is of flexible design. The aim of this paper is to adapt JCAMP-DX to the special requirements of ion mobility spectra.
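A record in such a format might look like the following sketch. It follows the `##LABEL=` line convention of JCAMP-DX, but the IMS-specific data-type and unit labels shown here are illustrative assumptions, not the standard the paper proposes.

```python
# Simplified, hypothetical writer for a JCAMP-DX-style ion mobility
# spectrum record. Only the ##LABEL= line convention is taken from
# JCAMP-DX; the specific labels and values are illustrative.
def write_record(title: str, drift_times_ms, intensities) -> str:
    lines = [
        f"##TITLE={title}",
        "##JCAMP-DX=4.24",
        "##DATA TYPE=ION MOBILITY SPECTRUM",   # assumed data-type label
        "##XUNITS=MILLISECONDS",               # drift time
        "##YUNITS=ARBITRARY UNITS",            # detector intensity
        f"##NPOINTS={len(drift_times_ms)}",
        "##XYDATA=(XY..XY)",
    ]
    lines += [f"{x}, {y}" for x, y in zip(drift_times_ms, intensities)]
    lines.append("##END=")
    return "\n".join(lines)
```

The labelled-line design keeps the file both human readable and trivially parseable, which is what makes extensive free-text comments cheap to support.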
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-19
... Medicines Agency (EMA) European Community Herbal Monographs, and World Health Organization (WHO) Monographs... that several authoritative labeling standards monographs for herbal products specify traditional use... the major scientific reference databases, such as the National Library of Medicine's literature...
USDA Nutrient Data Set for Retail Veal Cuts
USDA-ARS?s Scientific Manuscript database
The U.S. Department of Agriculture (USDA) Nutrient Data Laboratory (NDL), in collaboration with Colorado State University, conducted a research study designed to update and expand the data on veal cuts in the USDA National Nutrient Database for Standard Reference (SR). This research has been necess...
Formulation and Recipe Calculations in the USDA National Nutrient Databank System
USDA-ARS?s Scientific Manuscript database
The objectives of the presentation are to: 1) familiarize representatives of the Office of Pesticide Programs of the Environmental Protection Agency with the Nutrient Data Laboratory's USDA National Nutrient Database for Standard Reference and its relationship to the Food Surveys Research Group's Fo...
Greenlee, Dave
2007-01-01
A week after Hurricane Katrina made landfall in Louisiana, a collaboration among multiple organizations began building a database called the Geographic Information System for the Gulf, shortened to "GIS for the Gulf," to support the geospatial data needs of people in the hurricane-affected area. Data were gathered from diverse sources and entered into a consistent and standardized data model in a manner that is Web accessible.
Influence of Paternal Age on Assisted Reproduction Outcome
2017-04-27
We will retrospectively assess the databases of our clinic, Instituto Valenciano de Infertilidad in Valencia (Spain), searching for assisted reproduction procedures (IUI, standard IVF/ICSI cycles, and ovum donation IVF/ICSI cycles) among patients who were referred to our unit to cryopreserve sperm during the period from January 2000 to December 2006.
The USDA Table of Cooking Yields for Meat and Poultry
USDA-ARS?s Scientific Manuscript database
The Nutrient Data Laboratory (NDL) at the USDA conducts food composition research to develop accurate, unbiased, and representative food and nutrient composition data which are released as the USDA National Nutrient Database for Standard Reference (SR). SR is used as the foundation of most other foo...
Digital hand atlas and computer-aided bone age assessment via the Web
NASA Astrophysics Data System (ADS)
Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente
1999-07-01
A frequently used method of bone age assessment is atlas matching, in which a radiological hand image is compared against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflect skeletal maturity, race and sex differences, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database, and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and presents them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for bone age assessment. Quantitative features of the examined image, which reflect skeletal maturity, will be extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report will be sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the current, out-of-date one and allow bone age assessment to be computerized and done conveniently via the Web. In this paper, we present the system design and Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.
GMDD: a database of GMO detection methods
Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing
2008-01-01
Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is needed. Results The GMO Detection method Database (GMDD) collects almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to the database, and newly submitted information is released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods and will make GMO analysis much easier. PMID:18522755
Tran, Le-Thuy T.; Brewster, Philip J.; Chidambaram, Valliammai; Hurdle, John F.
2017-01-01
This study presents a method laying the groundwork for systematically monitoring food quality and the healthfulness of consumers’ point-of-sale grocery purchases. The method automates the process of identifying United States Department of Agriculture (USDA) Food Patterns Equivalent Database (FPED) components of grocery food items. The input to the process is the compact abbreviated descriptions of food items that are similar to those appearing on the point-of-sale sales receipts of most food retailers. The FPED components of grocery food items are identified using Natural Language Processing techniques combined with a collection of food concept maps and relationships that are manually built using the USDA Food and Nutrient Database for Dietary Studies, the USDA National Nutrient Database for Standard Reference, the What We Eat In America food categories, and the hierarchical organization of food items used by many grocery stores. We have established the construct validity of the method using data from the National Health and Nutrition Examination Survey, but further evaluation of validity and reliability will require a large-scale reference standard with known grocery food quality measures. Here we evaluate the method’s utility in identifying the FPED components of grocery food items available in a large sample of retail grocery sales data (~190 million transaction records). PMID:28475153
Tran, Le-Thuy T; Brewster, Philip J; Chidambaram, Valliammai; Hurdle, John F
2017-05-05
This study presents a method laying the groundwork for systematically monitoring food quality and the healthfulness of consumers' point-of-sale grocery purchases. The method automates the process of identifying United States Department of Agriculture (USDA) Food Patterns Equivalent Database (FPED) components of grocery food items. The input to the process is the compact abbreviated descriptions of food items that are similar to those appearing on the point-of-sale sales receipts of most food retailers. The FPED components of grocery food items are identified using Natural Language Processing techniques combined with a collection of food concept maps and relationships that are manually built using the USDA Food and Nutrient Database for Dietary Studies, the USDA National Nutrient Database for Standard Reference, the What We Eat In America food categories, and the hierarchical organization of food items used by many grocery stores. We have established the construct validity of the method using data from the National Health and Nutrition Examination Survey, but further evaluation of validity and reliability will require a large-scale reference standard with known grocery food quality measures. Here we evaluate the method's utility in identifying the FPED components of grocery food items available in a large sample of retail grocery sales data (~190 million transaction records).
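The mapping step described in these two records might be sketched as below. The abbreviation map and FPED component entries are invented placeholders; the real method relies on manually built concept maps over the USDA databases named above.

```python
# Hypothetical sketch of mapping a compact point-of-sale receipt
# description to FPED components: expand abbreviations with a manually
# built map, then look up the normalized name in a food-concept
# dictionary. All entries below are invented for illustration.
ABBREV = {"chkn": "chicken", "brst": "breast", "wht": "white"}
FPED_COMPONENTS = {
    "chicken breast": {"protein_foods_oz_eq": 1.0},
    "white bread": {"refined_grains_oz_eq": 1.0},
}

def fped_lookup(receipt_text: str) -> dict:
    """Return FPED components for a receipt line, or {} if unrecognized."""
    words = [ABBREV.get(w, w) for w in receipt_text.lower().split()]
    return FPED_COMPONENTS.get(" ".join(words), {})
```

In the published method this exact-match lookup is replaced by Natural Language Processing over hierarchically organized food concepts, which is what lets it scale to ~190 million transaction records.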
Kaminsky, Leonard A; Imboden, Mary T; Arena, Ross; Myers, Jonathan
2017-02-01
The importance of cardiorespiratory fitness (CRF) is well established. This report provides newly developed standards for CRF reference values derived from cardiopulmonary exercise testing (CPX) using cycle ergometry in the United States. Ten laboratories in the United States experienced in CPX administration with established quality control procedures contributed to the "Fitness Registry and the Importance of Exercise: A National Database" (FRIEND) Registry from April 2014 through May 2016. Data from 4494 maximal (respiratory exchange ratio ≥1.1) cycle ergometer tests from men and women (aged 20-79 years) from 27 states, without cardiovascular disease, were used to develop these reference values. Percentiles of maximum oxygen consumption (VO2max) for men and women were determined for each decade from age 20 years through age 79 years. Comparisons of VO2max were made to reference data established with CPX data from treadmill tests in the FRIEND Registry and previously published reports. As expected, there were significant differences between sex and age groups for VO2max (P<.01). For cycle tests within the FRIEND Registry, the 50th percentile VO2max of men and women aged 20 to 29 years declined from 41.9 and 31.0 mL O2/kg/min, respectively, to 19.5 and 14.8 mL O2/kg/min for ages 70 to 79 years. The rate of decline in this cohort was approximately 10% per decade. The FRIEND Registry reference data will be useful in providing more accurate interpretations for the US population of CPX-measured VO2max from exercise tests using cycle ergometry compared with previous approaches based on estimations of standard differences from treadmill testing reference values. Copyright © 2016 Mayo Foundation for Medical Education and Research. All rights reserved.
Reference-free automatic quality assessment of tracheoesophageal speech.
Huang, Andy; Falk, Tiago H; Chan, Wai-Yip; Parsa, Vijay; Doyle, Philip
2009-01-01
Evaluation of the quality of tracheoesophageal (TE) speech using machines instead of human experts can enhance the voice rehabilitation process for patients who have undergone total laryngectomy and voice restoration. Towards the goal of devising a reference-free TE speech quality estimation algorithm, we investigate the efficacy of speech signal features that are used in standard telephone-speech quality assessment algorithms, in conjunction with a recently introduced speech modulation spectrum measure. Tests performed on two TE speech databases demonstrate that the modulation spectral measure and a subset of features in the standard ITU-T P.563 algorithm estimate TE speech quality with better correlation (up to 0.9) than previously proposed features.
Core data elements tracking elder sexual abuse.
Hanrahan, Nancy P; Burgess, Ann W; Gerolamo, Angela M
2005-05-01
Sexual abuse in the older adult population is an understudied vector of violent crimes with significant physical and psychological consequences for victims and families. Research requires a theoretical framework that delineates core elements using a standardized instrument. To develop a conceptual framework and identify core data elements specific to the older adult population, clinical, administrative, and criminal experts were consulted using a nominal group method to revise an existing sexual assault instrument. The revised instrument could be used to establish a national database of elder sexual abuse. The database could become a standard reference to guide the detection, assessment, and prosecution of elder sexual abuse crimes as well as build a base from which policy makers could plan and evaluate interventions that targeted risk factors.
Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M
2018-01-01
Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; nevertheless, many databases lack reliable diagnosis code information. To develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using diagnosis code as the reference standard. Chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm to identify PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
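The validation statistics reported in studies like this one follow directly from a 2×2 comparison of the algorithm against the reference standard. The counts in the usage example below are invented for illustration, chosen only to mimic the reported pattern of high PPV/NPV and specificity with low sensitivity.

```python
# Minimal sketch of standard 2x2 validation statistics for a case-finding
# algorithm versus a reference standard (here, diagnosis codes):
#   tp = algorithm-positive and reference-positive, fp = algorithm-positive
#   but reference-negative, fn/tn = the algorithm-negative counterparts.
def validation_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "ppv": tp / (tp + fp),            # positive predictive value
        "npv": tn / (tn + fn),            # negative predictive value
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

For example, `validation_stats(tp=85, fp=15, fn=200, tn=99700)` yields a PPV of 0.85 with sensitivity under 0.31, the same trade-off the algorithms above exhibit: strict criteria (two or more MCs/PVs) raise PPV at the cost of missing cases.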
Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning
2007-10-18
Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. 
The PICR interface, documentation and code examples are available at http://www.ebi.ac.uk/Tools/picr.
Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning
2007-01-01
Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. 
The PICR interface, documentation and code examples are available at . PMID:17945017
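The 100% sequence-identity mapping that PICR performs can be illustrated with a small, self-contained sketch: identical sequences collapse to one warehouse key, so any identifier sharing that sequence maps to all the others. The source databases, identifiers, and sequences below are invented toy data, not real UniParc content, and this is not PICR's actual implementation.

```python
# Toy sketch of sequence-identity cross-referencing in the style of PICR.
# All identifiers and sequences here are hypothetical.
from collections import defaultdict

def build_warehouse(records):
    """records: iterable of (source_db, identifier, sequence)."""
    by_sequence = defaultdict(list)
    for source_db, identifier, sequence in records:
        by_sequence[sequence].append((source_db, identifier))
    return by_sequence

def cross_reference(warehouse, sequence, source_filter=None):
    """Return all identifiers sharing 100% sequence identity."""
    hits = warehouse.get(sequence, [])
    if source_filter is not None:
        hits = [h for h in hits if h[0] in source_filter]
    return hits

records = [
    ("UniProtKB", "P12345",      "MKTAYIAKQR"),
    ("RefSeq",    "NP_000001",   "MKTAYIAKQR"),
    ("Ensembl",   "ENSP0000001", "MSLLTEVETY"),
]
wh = build_warehouse(records)
print(cross_reference(wh, "MKTAYIAKQR"))  # both UniProtKB and RefSeq hits
```

Limiting the mapping by source database, as the service allows, is just a filter over the hits, as the `source_filter` parameter shows.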
NASA Astrophysics Data System (ADS)
Bilitza, Dieter
2017-04-01
The International Reference Ionosphere (IRI), a joint project of the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI), is a data-based reference model for the ionosphere, and since 2014 it has also been recognized as the ISO (International Organization for Standardization) standard for the ionosphere. The model is a synthesis of most of the available and reliable observations of ionospheric parameters, combining ground and space measurements. This presentation reviews the steady progress toward an increasingly accurate representation of the ionospheric plasma parameters accomplished during the last decade of IRI model improvements. Understandably, a data-based model is only as good as the data foundation on which it is built. We will discuss areas where more data are needed to obtain a more solid and continuous data foundation in space and time. We will also take a look at still-existing discrepancies between simultaneous measurements of the same parameter with different measurement techniques and discuss the approach taken in the IRI model to deal with these conflicts. In conclusion, we will provide an outlook on development activities that may result in significant future improvements in the accuracy of the representation of the ionosphere in the IRI model.
Database of Geoscientific References Through 2007 for Afghanistan, Version 2
Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L. Sco
2007-01-01
This report describes an accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards, currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,462) and unpublished (n = 174) references compiled through September 2007. The references comprise two separate tables in the Access database. The database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required to use it.
MetaboLights: An Open-Access Database Repository for Metabolomics Data.
Kale, Namrata S; Haug, Kenneth; Conesa, Pablo; Jayseelan, Kalaivani; Moreno, Pablo; Rocca-Serra, Philippe; Nainala, Venkata Chandrasekhar; Spicer, Rachel A; Williams, Mark; Li, Xuefei; Salek, Reza M; Griffin, Julian L; Steinbeck, Christoph
2016-03-24
MetaboLights is the first general purpose, open-access database repository for cross-platform and cross-species metabolomics research at the European Bioinformatics Institute (EMBL-EBI). Based upon the open-source ISA framework, MetaboLights provides Metabolomics Standard Initiative (MSI) compliant metadata and raw experimental data associated with metabolomics experiments. Users can upload their study datasets into the MetaboLights Repository. These studies are then automatically assigned a stable and unique identifier (e.g., MTBLS1) that can be used for publication reference. The MetaboLights Reference Layer associates metabolites with metabolomics studies in the archive and is extensively annotated with data fields such as structural and chemical information, NMR and MS spectra, target species, metabolic pathways, and reactions. The database is manually curated with no specific release schedules. MetaboLights is also recommended by journals for metabolomics data deposition. This unit provides a guide to using MetaboLights, downloading experimental data, and depositing metabolomics datasets using user-friendly submission tools. Copyright © 2016 John Wiley & Sons, Inc.
STRBase: a short tandem repeat DNA database for the human identity testing community
Ruitberg, Christian M.; Reeder, Dennis J.; Butler, John M.
2001-01-01
The National Institute of Standards and Technology (NIST) has compiled and maintained a Short Tandem Repeat DNA Internet Database (http://www.cstl.nist.gov/biotech/strbase/), commonly referred to as STRBase, since 1997. This database is an information resource for the forensic DNA typing community, with details on commonly used short tandem repeat (STR) DNA markers. STRBase consolidates and organizes the abundant literature on this subject to facilitate ongoing efforts in DNA typing. Observed alleles and annotated sequence for each STR locus are described, along with a review of STR analysis technologies. Additionally, commercially available STR multiplex kits are described, published polymerase chain reaction (PCR) primer sequences are reported, and validation studies conducted by a number of forensic laboratories are listed. To supplement the technical information, addresses for scientists and hyperlinks to organizations working in this area are available, along with a comprehensive reference list of over 1300 publications on STRs used for DNA typing purposes. PMID:11125125
Cholesterol and vitamin D content of eggs in the U.S. retail market
USDA-ARS's Scientific Manuscript database
Nationwide sampling in the U.S. of whole large eggs, to update values in the USDA National Nutrient Database for Standard Reference (SR) (http://www.ars.usda.gov/nutrientdata), was conducted in 2000-2001 and again in 2010. Retail cartons of large eggs were obtained from 12 supermarket locations usi...
Reach for Reference. No Opposition Here! Opposing Viewpoints Resource Center Is a Very Good Database
ERIC Educational Resources Information Center
Safford, Barbara Ripp
2004-01-01
"Opposing Viewpoints" and "Opposing Viewpoints Juniors" have long been standard titles in upper elementary, middle level, and high school collections. "Opposing Viewpoints Juniors" should be required as information literacy/critical thinking curriculum tools as early as fifth grade as they use current controversies to teach students how to…
USDA-ARS's Scientific Manuscript database
Vitamin D, ergosterol, ergosterol metabolites, and phytosterols were analyzed in ten mushroom types sampled nationwide in the U.S. to update the USDA Nutrient Database for Standard Reference. Sterols were analyzed by GC-FID with mass spectrometric confirmation of components. Vitamin D was assayed ...
30 CFR 1227.200 - What are a State's general responsibilities if it accepts a delegation?
Code of Federal Regulations, 2011 CFR
2011-07-01
... controls and accountability; (4) Maintain a system of accounts that includes a comprehensive audit trail so... production information for royalty management purposes; (c) Assist ONRR in meeting the requirements of the... maintaining adequate reference, royalty, and production databases as provided in the Standards issued under...
Similarity-based modeling in large-scale prediction of drug-drug interactions.
Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P
2014-09-01
Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients' quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5-7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented.
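The protocol's core idea, scoring a candidate pair by how closely each drug resembles one member of an established DDI, can be sketched as follows. The fingerprints, drug names, and the particular min/max scoring rule are illustrative assumptions for this sketch, not the authors' exact method.

```python
# Hedged sketch of similarity-based DDI candidate scoring: a candidate
# pair scores highly when each member resembles one drug of a known
# interacting pair. Fingerprints and drug names are made up.
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two feature sets."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def ddi_score(candidate, known_ddis, fingerprints):
    """Best evidence that `candidate` mirrors an established DDI."""
    a, b = candidate
    best = 0.0
    for x, y in known_ddis:
        for u, v in ((x, y), (y, x)):          # try both orientations
            s = min(tanimoto(fingerprints[a], fingerprints[u]),
                    tanimoto(fingerprints[b], fingerprints[v]))
            best = max(best, s)
    return best

fingerprints = {
    "drugA": {1, 2, 3, 4},
    "drugB": {5, 6, 7},
    "drugC": {1, 2, 3, 9},   # structurally close to drugA
    "drugD": {5, 6, 8},      # structurally close to drugB
}
known_ddis = [("drugA", "drugB")]
print(ddi_score(("drugC", "drugD"), known_ddis, fingerprints))
```

In the full protocol, 2D/3D structural, interaction-profile, target, and side-effect similarities would each contribute a score of this kind, and candidates remain traceable to the established DDI that generated them.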
IDD Info: a software to manage surveillance data of Iodine Deficiency Disorders.
Liu, Peng; Teng, Bai-Jun; Zhang, Shu-Bin; Su, Xiao-Hui; Yu, Jun; Liu, Shou-Jun
2011-08-01
IDD Info, a new software package for managing survey data on Iodine Deficiency Disorders (IDD), is presented in this paper. IDD Info aims to create IDD project databases, process and analyze national or regional surveillance data, and produce final reports. It provides a series of functions for choosing a database from existing ones, revising it, selecting indicators from a pool to establish a database, and adding indicators to the pool. It also provides simple tools to scan one database and compare two databases, to set IDD standard parameters, to analyze data by single or multiple indicators, and finally to produce a typeset report with customized content. IDD Info was developed using Chinese national IDD surveillance data from 2005. Its validity was evaluated by comparison with the survey report issued by the China CDC. IDD Info is a professional analysis tool that speeds up IDD data analysis by about 14.28% relative to standard reference routines, enhancing analysis performance and user compliance. IDD Info is a practical and accurate means of managing the multifarious IDD surveillance data and can be widely used by non-statisticians in national and regional IDD surveillance. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Forster, Samuel C; Browne, Hilary P; Kumar, Nitin; Hunt, Martin; Denise, Hubert; Mitchell, Alex; Finn, Robert D; Lawley, Trevor D
2016-01-04
The Human Pan-Microbe Communities (HPMC) database (http://www.hpmcd.org/) provides a manually curated, searchable, metagenomic resource to facilitate investigation of human gastrointestinal microbiota. Over the past decade, the application of metagenome sequencing to elucidate the microbial composition and functional capacity present in the human microbiome has revolutionized many concepts in our basic biology. When sufficient high quality reference genomes are available, whole genome metagenomic sequencing can provide direct biological insights and high-resolution classification. The HPMC database provides species level, standardized phylogenetic classification of over 1800 human gastrointestinal metagenomic samples. This is achieved by combining a manually curated list of bacterial genomes from human faecal samples with over 21000 additional reference genomes representing bacteria, viruses, archaea and fungi with manually curated species classification and enhanced sample metadata annotation. A user-friendly, web-based interface provides the ability to search for (i) microbial groups associated with health or disease state, (ii) health or disease states and community structure associated with a microbial group, (iii) the enrichment of a microbial gene or sequence and (iv) enrichment of a functional annotation. The HPMC database enables detailed analysis of human microbial communities and supports research from basic microbiology and immunology to therapeutic development in human health and disease. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
The development of variable MLM editor and TSQL translator based on Arden Syntax in Taiwan.
Liang, Yan Ching; Chang, Polun
2003-01-01
The Arden Syntax standard has been used in the medical informatics community in several countries during the past decade, but it has never been used in nursing in Taiwan. We developed a system that acquires medical expert knowledge in Chinese and translates the data and logic slots into TSQL. The system implements a TSQL translator that interprets the database queries referred to in the knowledge modules. Medical decision-support systems are data-driven, and TSQL triggers acting as the inference engine can be used to facilitate linking to a database.
Injury profiles related to mortality in patients with a low Injury Severity Score: a case-mix issue?
Joosse, Pieter; Schep, Niels W L; Goslings, J Carel
2012-07-01
Outcome prediction models are widely used to evaluate trauma care. External benchmarking provides individual institutions with a tool to compare survival with a reference dataset. However, these models have limitations. In this study, we tested the hypothesis that specific injuries are associated with increased mortality and that differences in the case-mix of these injuries influence outcome comparisons. A retrospective study was conducted in a Dutch trauma region. Injury profiles, based on the injuries most frequently sustained by patients whose death was unexpected, were determined. The association between these injury profiles and mortality was studied by logistic regression in patients with a low Injury Severity Score. The standardized survival of our population (Ws statistic) was compared with North American and British reference databases, with and without patients suffering from the previously defined injury profiles. In total, 14,811 patients were included. Hip fractures, minor pelvic fractures, femur fractures, and minor thoracic injuries were significantly associated with mortality, corrected for age, sex, and physiologic derangement, in patients with low injury severity. Odds ratios ranged from 2.42 to 2.92. The Ws statistic for the comparison with North American databases improved significantly after exclusion of patients with these injuries. The Ws statistic for the comparison with the British reference database remained unchanged. Hip fractures, minor pelvic fractures, femur fractures, and minor thoracic wall injuries are associated with increased mortality. Comparative outcome analysis of a population against a reference database that differs in case-mix with respect to these injuries should be interpreted cautiously. Prognostic study, level II.
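The benchmarking arithmetic behind such comparisons can be illustrated with the simpler, unstandardized W statistic: excess survivors per 100 patients relative to what the prediction model expects (the Ws variant additionally standardizes for case-mix against the reference database). The patients and predicted probabilities below are invented for illustration.

```python
# Sketch of the W statistic used in trauma outcome benchmarking:
# observed minus model-expected survivors, per 100 patients.
# Data below is synthetic, not from the study.
def w_statistic(survived, predicted_survival):
    """Excess survivors per 100 patients versus model expectation."""
    n = len(survived)
    observed = sum(survived)            # 1 = survived, 0 = died
    expected = sum(predicted_survival)  # model-predicted probabilities
    return 100.0 * (observed - expected) / n

survived  = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 of 10 survived
predicted = [0.9, 0.8, 0.95, 0.4, 0.85, 0.7, 0.3, 0.9, 0.75, 0.6]
print(round(w_statistic(survived, predicted), 2))
```

A positive value means more patients survived than the reference model predicted; the study's point is that this comparison is only fair when the case-mix of high-risk injuries matches the reference population.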
[Relevance of the hemovigilance regional database for the shared medical file identity server].
Doly, A; Fressy, P; Garraud, O
2008-11-01
The French Health Products Safety Agency coordinates the national initiative to computerize blood product traceability within regional blood banks and public and private hospitals. The Auvergne-Loire Regional French Blood Service, based in Saint-Etienne, together with a number of public hospitals, set up a transfusion data network named EDITAL. After four years of progressive implementation and experimentation, software enabling standardized data exchange has built up a regional nominative database, endorsed by the Traceability Computerization National Committee in 2004. This database now provides secured web access to a regional transfusion history, enabling biologists and all hospital and family practitioners to take charge of patient follow-up. By running independently of its partners' software, the EDITAL database provides a reference for the regional identity server.
NONATObase: a database for Polychaeta (Annelida) from the Southwestern Atlantic Ocean.
Pagliosa, Paulo R; Doria, João G; Misturini, Dairana; Otegui, Mariana B P; Oortman, Mariana S; Weis, Wilson A; Faroni-Perez, Larisse; Alves, Alexandre P; Camargo, Maurício G; Amaral, A Cecília Z; Marques, Antonio C; Lana, Paulo C
2014-01-01
Networks can greatly advance data sharing by providing organized and useful data sets on marine biodiversity in a friendly and shared scientific environment. NONATObase, the interactive database on polychaetes presented herein, will provide new macroecological and taxonomic insights into the Southwestern Atlantic region. The database was developed by the NONATO network, a team of South American researchers, who integrated available information on polychaetes from between 5°N and 80°S in the Atlantic Ocean and near the Antarctic. The guiding principle of the database is to keep free and open access to data based on partnerships. Its architecture consists of a relational database integrated in the MySQL and PHP framework. Its web application allows access to the data from three different directions: species (qualitative data), abundance (quantitative data), and data set (reference data). The database has built-in functionality, such as filtering data on user-defined taxonomic levels and on characteristics of the site, sample, sampler, and mesh size used. Considering that there are still many taxonomic issues related to the poorly known regional fauna, a scientific committee was created to work out consistent solutions to current misidentifications and the equivocal taxonomic status of some species. Expertise from this committee will be incorporated into NONATObase continually. The use of quantitative data was made possible by standardization of a sample unit. All data, distribution maps, and references from a data set or a specified query can be visualized and exported to a data format commonly used in statistical analysis or reference manager software. The NONATO network has been initiated with NONATObase, a valuable resource for marine ecologists and taxonomists. The database is expected to grow in functionality as it proves useful, particularly regarding the challenges of dealing with molecular genetic data and tools to assess the effects of global environmental change.
Database URL: http://nonatobase.ufsc.br/ PMID:24573879
Nieves-Moreno, María; Martínez-de-la-Casa, José M; Bambo, María P; Morales-Fernández, Laura; Van Keer, Karel; Vandewalle, Evelien; Stalmans, Ingeborg; García-Feijoó, Julián
2018-02-01
This study examines the capacity of inner macular layer thickness, measured by spectral-domain optical coherence tomography (SD-OCT), to detect glaucoma using a new normative database as the reference standard. Participants (N = 148) were recruited from Leuven (Belgium) and Zaragoza (Spain): 74 patients with early/moderate glaucoma and 74 age-matched healthy controls. One eye was randomly selected for a macular scan using the Spectralis SD-OCT. The variables measured with the instrument's segmentation software were: macular nerve fiber layer (mRNFL), ganglion cell layer (GCL), and inner plexiform layer (IPL) volume and thickness, along with circumpapillary RNFL thickness (cpRNFL). The new normative database of macular variables was used to define the cutoff of normality as the fifth percentile by age group. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) of each macular measurement and of cpRNFL were used to distinguish between patients and controls. Overall sensitivity and specificity to detect early/moderate glaucoma were 42.2% and 88.9% for mRNFL, 42.4% and 95.6% for GCL, 42.2% and 94.5% for IPL, and 53% and 94.6% for cpRNFL, respectively. The best macular variable for discriminating between the two groups was outer temporal GCL thickness, with an AUROC of 0.903. This variable performed similarly to mean cpRNFL thickness (AUROC = 0.845; P = 0.29). Using our normative database as reference, the diagnostic power of inner macular layer thickness proved comparable to that of peripapillary RNFL thickness. With the Spectralis SD-OCT, cpRNFL thickness and the individual macular inner layer thicknesses show comparable diagnostic capacity for glaucoma, and mRNFL, GCL, and IPL thicknesses may be useful as alternative diagnostic tests when the cpRNFL measurement shows artifacts.
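The fifth-percentile cutoff used to flag a measurement as outside normal limits against an age-matched normative sample can be sketched like this; the normative values and the linear-interpolation rule are illustrative assumptions, not the instrument's actual algorithm.

```python
# Sketch of flagging a measurement as "outside normal limits" using the
# 5th percentile of a normative sample as the cutoff. Values are synthetic.
def fifth_percentile(values):
    """Linearly interpolated 5th percentile of a normative sample."""
    xs = sorted(values)
    pos = 0.05 * (len(xs) - 1)
    lo, frac = int(pos), pos - int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + frac * (xs[hi] - xs[lo])

def is_abnormal(measurement, normative_sample):
    """Flag values below the normative 5th percentile."""
    return measurement < fifth_percentile(normative_sample)

# Hypothetical age-matched normative GCL thicknesses (micrometers)
normative_gcl = [38, 40, 41, 42, 43, 44, 45, 46, 47, 48,
                 49, 50, 51, 52, 53, 54, 55, 56, 57, 58]
print(is_abnormal(39, normative_gcl), is_abnormal(50, normative_gcl))
```

Sweeping such a cutoff across possible thresholds, rather than fixing it at the 5th percentile, is what produces the AUROC values the study reports.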
The Biological Macromolecule Crystallization Database and NASA Protein Crystal Growth Archive
Gilliland, Gary L.; Tung, Michael; Ladner, Jane
1996-01-01
The NIST/NASA/CARB Biological Macromolecule Crystallization Database (BMCD), NIST Standard Reference Database 21, contains crystal data and crystallization conditions for biological macromolecules. The database entries include data abstracted from published crystallographic reports. Each entry consists of information describing the biological macromolecule crystallized, crystal data, and the crystallization conditions for each crystal form. The BMCD serves as the NASA Protein Crystal Growth Archive in that it contains protocols and results of crystallization experiments undertaken in microgravity (space). These database entries report the results, whether successful or not, from NASA-sponsored protein crystal growth experiments in microgravity and from microgravity crystallization studies sponsored by other international organizations. The BMCD was designed as a tool to assist x-ray crystallographers in developing protocols to crystallize biological macromolecules, both those that have previously been crystallized and those that have not. PMID:11542472
NASA Technical Reports Server (NTRS)
Johnson, Paul W.
2008-01-01
ePORT (electronic Project Online Risk Tool) provides a systematic approach to managing program/project risk with an electronic database program. This presentation briefly covers standard risk management procedures, then thoroughly covers NASA's risk management tool, ePORT. ePORT is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program's/project's size and budget. It covers the full risk management paradigm, and by providing standardized evaluation criteria for common management reporting, ePORT improves Product Line, Center, and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.
HMDB 4.0: the human metabolome database for 2018
Feunang, Yannick Djoumbou; Marcu, Ana; Guo, An Chi; Liang, Kevin; Vázquez-Fresno, Rosa; Sajed, Tanvir; Johnson, Daniel; Li, Carin; Karu, Naama; Sayeeda, Zinat; Lo, Elvis; Assempour, Nazanin; Berjanskii, Mark; Singhal, Sandeep; Arndt, David; Liang, Yonjie; Badran, Hasan; Grant, Jason; Serra-Cayuela, Arnau; Liu, Yifeng; Mandal, Rupa; Neveu, Vanessa; Pon, Allison; Knox, Craig; Wilson, Michael; Manach, Claudine; Scalbert, Augustin
2018-01-01
Abstract The Human Metabolome Database or HMDB (www.hmdb.ca) is a web-enabled metabolomic database containing comprehensive information about human metabolites along with their biological roles, physiological concentrations, disease associations, chemical reactions, metabolic pathways, and reference spectra. First described in 2007, the HMDB is now considered the standard metabolomic resource for human metabolic studies. Over the past decade the HMDB has continued to grow and evolve in response to emerging needs for metabolomics researchers and continuing changes in web standards. This year's update, HMDB 4.0, represents the most significant upgrade to the database in its history. For instance, the number of fully annotated metabolites has increased by nearly threefold, the number of experimental spectra has grown by almost fourfold and the number of illustrated metabolic pathways has grown by a factor of almost 60. Significant improvements have also been made to the HMDB’s chemical taxonomy, chemical ontology, spectral viewing, and spectral/text searching tools. A great deal of brand new data has also been added to HMDB 4.0. This includes large quantities of predicted MS/MS and GC–MS reference spectral data as well as predicted (physiologically feasible) metabolite structures to facilitate novel metabolite identification. Additional information on metabolite-SNP interactions and the influence of drugs on metabolite levels (pharmacometabolomics) has also been added. Many other important improvements in the content, the interface, and the performance of the HMDB website have been made and these should greatly enhance its ease of use and its potential applications in nutrition, biochemistry, clinical chemistry, clinical genetics, medicine, and metabolomics science. PMID:29140435
A Database of Woody Vegetation Responses to Elevated Atmospheric CO2 (NDP-072)
Curtis, Peter S [The Ohio State Univ., Columbus, OH (United States); Cushman, Robert M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brenkert, Antoinette L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
1999-01-01
To perform a statistically rigorous meta-analysis of research results on the response by woody vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled. Eighty-four independent CO2-enrichment studies, covering 65 species and 35 response parameters, met the necessary criteria for inclusion in the database: reporting mean response, sample size, and variance of the response (either as standard deviation or standard error). Data were retrieved from the published literature and unpublished reports. This numeric data package contains a 29-field data set of CO2-exposure experiment responses by woody plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data set, and this documentation file (which includes SAS and Fortran codes to read the ASCII data file; SAS is a registered trademark of the SAS Institute, Inc., Cary, North Carolina 27511).
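The study-level fields this database requires for inclusion (mean response, sample size, and variance for each group) are exactly the inputs a meta-analytic effect size needs. A common choice for CO2-enrichment responses, shown here as a hedged sketch with invented numbers, is the log response ratio and its variance:

```python
# Sketch of the log response ratio (lnRR) effect size and its variance,
# computed from the mean, SD, and n that each database entry must report.
# The numbers below are illustrative, not taken from the data package.
import math

def log_response_ratio(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Return (lnRR, variance) for one CO2-enrichment study:
    _e = elevated-CO2 group, _c = ambient control group."""
    lnrr = math.log(mean_e / mean_c)
    var = (sd_e ** 2) / (n_e * mean_e ** 2) + (sd_c ** 2) / (n_c * mean_c ** 2)
    return lnrr, var

lnrr, var = log_response_ratio(mean_e=12.0, sd_e=2.0, n_e=10,
                               mean_c=10.0, sd_c=1.5, n_c=10)
print(round(lnrr, 4), round(var, 5))
```

A meta-analysis then weights each study's lnRR by the inverse of its variance, which is why entries lacking a variance estimate could not be included in the database.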
CHEMICAL STRUCTURE INDEXING OF TOXICITY DATA ON ...
Standardized chemical structure annotation of public toxicity databases and information resources is playing an increasingly important role in the 'flattening' and integration of diverse sets of biological activity data on the Internet. This review discusses public initiatives that are accelerating the pace of this transformation, with particular reference to toxicology-related chemical information. Chemical content annotators, structure locator services, large structure/data aggregator web sites, structure browsers, International Union of Pure and Applied Chemistry (IUPAC) International Chemical Identifier (InChI) codes, toxicity data models and public chemical/biological activity profiling initiatives are all playing a role in overcoming barriers to the integration of toxicity data, and are bringing researchers closer to the reality of a mineable chemical Semantic Web. An example of this integration of data is provided by the collaboration among researchers involved with the Distributed Structure-Searchable Toxicity (DSSTox) project, the Carcinogenic Potency Project, projects at the National Cancer Institute and the PubChem database.
Full value documentation in the Czech Food Composition Database.
Machackova, M; Holasova, M; Maskova, E
2010-11-01
The aim of this project was to launch a new Food Composition Database (FCDB) Programme in the Czech Republic; to implement a methodology for food description and value documentation according to the standards designed by the European Food Information Resource (EuroFIR) Network of Excellence; and to start the compilation of a pilot FCDB. Foods for the initial data set were selected from the list of foods included in the Czech Food Consumption Basket. Selection of 24 priority components was based on the range of components used in former Czech tables. The priority list was extended with components for which original Czech analytical data or calculated data were available. Values that were input into the compiled database were documented according to the EuroFIR standards within the entities FOOD, COMPONENT, VALUE and REFERENCE using Excel sheets. Foods were described using the LanguaL Thesaurus. A template for documentation of data according to the EuroFIR standards was designed. The initial data set comprised documented data for 162 foods. Values were based on original Czech analytical data (available for traditional and fast foods, milk and milk products, wheat flour types), data derived from literature (for example, fruits, vegetables, nuts, legumes, eggs) and calculated data. The Czech FCDB programme has been successfully relaunched. Inclusion of the Czech data set into the EuroFIR eSearch facility confirmed compliance of the database format with the EuroFIR standards. Excel spreadsheets are applicable for full value documentation in the FCDB.
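The four documentation entities named above (FOOD, COMPONENT, VALUE, and REFERENCE) can be sketched as simple record types; the fields shown are a plausible subset chosen for illustration, not the full EuroFIR standard, and the sample values are invented.

```python
# Minimal sketch of the four EuroFIR-style documentation entities.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Food:
    food_id: str
    name: str
    langual_codes: tuple   # LanguaL Thesaurus descriptors

@dataclass
class Component:
    component_id: str
    name: str
    unit: str

@dataclass
class Reference:
    reference_id: str
    citation: str

@dataclass
class Value:
    food_id: str
    component_id: str
    amount: float          # per 100 g edible portion (assumed basis)
    reference_id: str      # documents where the value came from

flour = Food("CZ001", "wheat flour, plain", ("A0178", "B1312"))
protein = Component("PROT", "protein, total", "g")
ref = Reference("R1", "Czech analytical data, 2009")
value = Value(flour.food_id, protein.component_id, 10.3, ref.reference_id)
print(value)
```

The point of the VALUE-to-REFERENCE link is exactly the "full value documentation" the project describes: every number in the compiled database remains traceable to an analytical, literature, or calculated source.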
A Standard Nomenclature for Referencing and Authentication of Pluripotent Stem Cells.
Kurtz, Andreas; Seltmann, Stefanie; Bairoch, Amos; Bittner, Marie-Sophie; Bruce, Kevin; Capes-Davis, Amanda; Clarke, Laura; Crook, Jeremy M; Daheron, Laurence; Dewender, Johannes; Faulconbridge, Adam; Fujibuchi, Wataru; Gutteridge, Alexander; Hei, Derek J; Kim, Yong-Ou; Kim, Jung-Hyun; Kokocinski, Anja Kolb-; Lekschas, Fritz; Lomax, Geoffrey P; Loring, Jeanne F; Ludwig, Tenneille; Mah, Nancy; Matsui, Tohru; Müller, Robert; Parkinson, Helen; Sheldon, Michael; Smith, Kelly; Stachelscheid, Harald; Stacey, Glyn; Streeter, Ian; Veiga, Anna; Xu, Ren-He
2018-01-09
Unambiguous cell line authentication is essential to avoid loss of association between data and cells. The risk for loss of references increases with the rapidity that new human pluripotent stem cell (hPSC) lines are generated, exchanged, and implemented. Ideally, a single name should be used as a generally applied reference for each cell line to access and unify cell-related information across publications, cell banks, cell registries, and databases and to ensure scientific reproducibility. We discuss the needs and requirements for such a unique identifier and implement a standard nomenclature for hPSCs, which can be automatically generated and registered by the human pluripotent stem cell registry (hPSCreg). To avoid ambiguities in PSC-line referencing, we strongly urge publishers to demand registration and use of the standard name when publishing research based on hPSC lines. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
US and foreign alloy cross-reference database
NASA Technical Reports Server (NTRS)
Springer, John M.; Morgan, Steven H.
1991-01-01
Marshall Space Flight Center and other NASA installations have a continuing requirement for materials data from other countries involved with the development of joint international Spacelab experiments and other hardware. This need includes collecting data for common alloys to ascertain composition, physical properties, specifications, and designations. These data are scattered throughout a large number of specification statements, standards, handbooks, and other technical literature, making a manual search both tedious and often limited in extent. In recognition of this problem, a computerized database of information on alloys was developed along with the software necessary to provide the desired functions to access this data. The intention was to produce an initial database covering aluminum alloys, along with the program to provide a user interface to the data, and then later to extend and refine the database to include other nonferrous and ferrous alloys.
Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen
2014-06-23
We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and negative predictive value (NPV) of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
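The best-performing rule quoted above is simple enough to sketch directly. The Python fragment below is illustrative only: the record layout (a hospitalization-code count plus a list of dated physician claims with a specialist flag) is our assumption, not EMRALD's actual schema.

```python
from datetime import date

def is_ra_case(hospital_ra_codes, physician_ra_claims):
    """Sketch of the rule "[>=1 hospitalization RA diagnosis code] or
    [>=3 physician RA diagnosis codes with >=1 by a specialist over 2 years]".
    physician_ra_claims: list of (claim_date, is_specialist) tuples
    (hypothetical layout for illustration)."""
    if hospital_ra_codes >= 1:
        return True
    claims = sorted(physician_ra_claims)
    for i in range(len(claims)):
        # claims falling within ~2 years (730 days) of claim i
        window = [c for c in claims[i:] if (c[0] - claims[i][0]).days <= 730]
        if len(window) >= 3 and any(spec for _, spec in window):
            return True
    return False
```

A sliding two-year window anchored at each claim is one plausible reading of "over 2 years"; the abstract does not specify how the window is anchored.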
O'Leary, Nuala A; Wright, Mathew W; Brister, J Rodney; Ciufo, Stacy; Haddad, Diana; McVeigh, Rich; Rajput, Bhanu; Robbertse, Barbara; Smith-White, Brian; Ako-Adjei, Danso; Astashyn, Alexander; Badretdin, Azat; Bao, Yiming; Blinkova, Olga; Brover, Vyacheslav; Chetvernin, Vyacheslav; Choi, Jinna; Cox, Eric; Ermolaeva, Olga; Farrell, Catherine M; Goldfarb, Tamara; Gupta, Tripti; Haft, Daniel; Hatcher, Eneida; Hlavina, Wratko; Joardar, Vinita S; Kodali, Vamsi K; Li, Wenjun; Maglott, Donna; Masterson, Patrick; McGarvey, Kelly M; Murphy, Michael R; O'Neill, Kathleen; Pujar, Shashikant; Rangwala, Sanjida H; Rausch, Daniel; Riddick, Lillian D; Schoch, Conrad; Shkeda, Andrei; Storz, Susan S; Sun, Hanzhen; Thibaud-Nissen, Francoise; Tolstoy, Igor; Tully, Raymond E; Vatsan, Anjana R; Wallin, Craig; Webb, David; Wu, Wendy; Landrum, Melissa J; Kimchi, Avi; Tatusova, Tatiana; DiCuccio, Michael; Kitts, Paul; Murphy, Terence D; Pruitt, Kim D
2016-01-04
The RefSeq project at the National Center for Biotechnology Information (NCBI) maintains and curates a publicly available database of annotated genomic, transcript, and protein sequence records (http://www.ncbi.nlm.nih.gov/refseq/). The RefSeq project leverages the data submitted to the International Nucleotide Sequence Database Collaboration (INSDC) against a combination of computation, manual curation, and collaboration to produce a standard set of stable, non-redundant reference sequences. The RefSeq project augments these reference sequences with current knowledge including publications, functional features and informative nomenclature. The database currently represents sequences from more than 55,000 organisms (>4800 viruses, >40,000 prokaryotes and >10,000 eukaryotes; RefSeq release 71), ranging from a single record to complete genomes. This paper summarizes the current status of the viral, prokaryotic, and eukaryotic branches of the RefSeq project, reports on improvements to data access and details efforts to further expand the taxonomic representation of the collection. We also highlight diverse functional curation initiatives that support multiple uses of RefSeq data including taxonomic validation, genome annotation, comparative genomics, and clinical testing. We summarize our approach to utilizing available RNA-Seq and other data types in our manual curation process for vertebrate, plant, and other species, and describe a new direction for prokaryotic genomes and protein name management. Published by Oxford University Press on behalf of Nucleic Acids Research 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
The Development of Variable MLM Editor and TSQL Translator Based on Arden Syntax in Taiwan
Liang, Yan-Ching; Chang, Polun
2003-01-01
The Arden Syntax standard has been utilized in the medical informatics community in several countries during the past decade, but it has not previously been used in nursing in Taiwan. We developed a system that acquires medical expert knowledge in Chinese and translates the data and logic slots into TSQL. The system implements a TSQL translator that interprets the database queries referred to in the knowledge modules. Because medical decision-support systems are data driven, TSQL triggers can serve as an inference engine that facilitates linking to a database. PMID:14728414
Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L. Sco
2008-01-01
This report includes a document and accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,489) and unpublished (n = 176) references compiled through calendar year 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required to use it.
Slimani, N; Deharveng, G; Unwin, I; Southgate, D A T; Vignat, J; Skeie, G; Salvini, S; Parpinel, M; Møller, A; Ireland, J; Becker, W; Farran, A; Westenbrink, S; Vasilopoulou, E; Unwin, J; Borgejordet, A; Rohrmann, S; Church, S; Gnagnarella, P; Casagrande, C; van Bakel, M; Niravong, M; Boutron-Ruault, M C; Stripp, C; Tjønneland, A; Trichopoulou, A; Georga, K; Nilsson, S; Mattisson, I; Ray, J; Boeing, H; Ocké, M; Peeters, P H M; Jakszyn, P; Amiano, P; Engeset, D; Lund, E; de Magistris, M Santucci; Sacerdote, C; Welch, A; Bingham, S; Subar, A F; Riboli, E
2007-09-01
This paper describes the ad hoc methodological concepts and procedures developed to improve the comparability of Nutrient databases (NDBs) across the 10 European countries participating in the European Prospective Investigation into Cancer and Nutrition (EPIC). This was required because there is currently no European reference NDB available. A large network involving national compilers, nutritionists and experts on food chemistry and computer science was set up for the 'EPIC Nutrient DataBase' (ENDB) project. A total of 550-1500 foods derived from about 37,000 standardized EPIC 24-h dietary recalls (24-HDRS) were matched as closely as possible to foods available in the 10 national NDBs. The resulting national data sets (NDS) were then successively documented, standardized and evaluated according to common guidelines and using a DataBase Management System specifically designed for this project. The nutrient values of foods unavailable or not readily available in NDSs were approximated by recipe calculation, weighted averaging or adjustment for weight changes and vitamin/mineral losses, using common algorithms. The final ENDB contains about 550-1500 foods depending on the country and 26 common components. Each component value was documented and standardized for unit, mode of expression, definition and chemical method of analysis, as far as possible. Furthermore, the overall completeness of NDSs was improved (>or=99%), particularly for beta-carotene and vitamin E. The ENDB constitutes a first real attempt to improve the comparability of NDBs across European countries. This methodological work will provide a useful tool for nutritional research as well as end-user recommendations to improve NDBs in the future.
McQuilton, Peter; Gonzalez-Beltran, Alejandra; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta
2016-01-01
BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only of the resource itself, but also of its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the biological sciences; an educational resource for librarians and information advisors; a publicising platform for standard and database developers/curators; and a research tool for bench and computer scientists to plan their work. BioSharing is working with an increasing number of journals and other registries, for example linking standards and databases to training material and tools. Driven by an international Advisory Board, the BioSharing user base has grown by over 40% (by unique IP address) in the last year, thanks to successful engagement with researchers, publishers, librarians, developers and other stakeholders via several routes, including a joint RDA/Force11 working group and a collaboration with the International Society for Biocuration. In this article, we describe BioSharing, with a particular focus on community-led curation. Database URL: https://www.biosharing.org. © The Author(s) 2016. Published by Oxford University Press.
An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.
K, Manasa; Channappayya, Sumohana S
2016-06-01
We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λmin) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between λmin of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL-PoliMi SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with the much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
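The local flow statistics named in this abstract (mean, standard deviation, CV, λmin) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch layout and the use of the patch's flow covariance matrix for λmin are our assumptions.

```python
import numpy as np

def flow_patch_stats(flow_patch):
    """Local flow statistics for one patch (sketch).
    flow_patch: array of shape (N, 2) holding (u, v) flow vectors."""
    mag = np.linalg.norm(flow_patch, axis=1)   # flow magnitudes
    mean, std = mag.mean(), mag.std()
    cv = std / (mean + 1e-12)                  # coefficient of variation
    cov = np.cov(flow_patch.T)                 # 2x2 covariance of (u, v)
    lam_min = np.linalg.eigvalsh(cov).min()    # minimum eigenvalue
    return mean, std, cv, lam_min

def temporal_distortion(cv_ref, cv_dist):
    # change in CV of the distorted flow relative to the reference flow
    return abs(cv_dist - cv_ref)
```

For an undistorted, uniform flow patch the CV and λmin are both near zero, which matches the premise that deviations from pristine flow statistics track distortion.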
Normand, A C; Packeu, A; Cassagne, C; Hendrickx, M; Ranque, S; Piarroux, R
2018-05-01
Conventional dermatophyte identification is based on morphological features. However, recent studies have proposed to use the nucleotide sequences of the rRNA internal transcribed spacer (ITS) region as an identification barcode of all fungi, including dermatophytes. Several nucleotide databases are available to compare sequences and thus identify isolates; however, these databases often contain mislabeled sequences that impair sequence-based identification. We evaluated five of these databases on a clinical isolate panel. We selected 292 clinical dermatophyte strains that were prospectively subjected to an ITS2 nucleotide sequence analysis. Sequences were analyzed against the databases, and the results were compared to clusters obtained via DNA alignment of sequence segments. The DNA tree served as the identification standard throughout the study. According to the ITS2 sequence identification, the majority of strains (255/292) belonged to the genus Trichophyton, mainly T. rubrum complex (n = 184), T. interdigitale (n = 40), T. tonsurans (n = 26), and T. benhamiae (n = 5). Other genera included Microsporum (M. canis [n = 21], M. audouinii [n = 10]), Nannizzia (N. gypsea [n = 3]), and Epidermophyton (n = 3). Species-level identification of T. rubrum complex isolates was an issue. Overall, ITS DNA sequencing is a reliable tool to identify dermatophyte species given that a comprehensive and correctly labeled database is consulted. Since many inaccurate identification results exist in the DNA databases used for this study, reference databases must be verified frequently and amended in line with the current revisions of fungal taxonomy. Before describing a new species or adding a new DNA reference to the available databases, its position in the phylogenetic tree must be verified. Copyright © 2018 American Society for Microbiology.
Bedside Back to Bench: Building Bridges between Basic and Clinical Genomic Research.
Manolio, Teri A; Fowler, Douglas M; Starita, Lea M; Haendel, Melissa A; MacArthur, Daniel G; Biesecker, Leslie G; Worthey, Elizabeth; Chisholm, Rex L; Green, Eric D; Jacob, Howard J; McLeod, Howard L; Roden, Dan; Rodriguez, Laura Lyman; Williams, Marc S; Cooper, Gregory M; Cox, Nancy J; Herman, Gail E; Kingsmore, Stephen; Lo, Cecilia; Lutz, Cathleen; MacRae, Calum A; Nussbaum, Robert L; Ordovas, Jose M; Ramos, Erin M; Robinson, Peter N; Rubinstein, Wendy S; Seidman, Christine; Stranger, Barbara E; Wang, Haoyi; Westerfield, Monte; Bult, Carol
2017-03-23
Genome sequencing has revolutionized the diagnosis of genetic diseases. Close collaborations between basic scientists and clinical genomicists are now needed to link genetic variants with disease causation. To facilitate such collaborations, we recommend prioritizing clinically relevant genes for functional studies, developing reference variant-phenotype databases, adopting phenotype description standards, and promoting data sharing. Published by Elsevier Inc.
Bedside Back to Bench: Building Bridges between Basic and Clinical Genomic Research
Manolio, Teri A.; Fowler, Douglas M.; Starita, Lea M.; Haendel, Melissa A.; MacArthur, Daniel G.; Biesecker, Leslie G.; Worthey, Elizabeth; Chisholm, Rex L.; Green, Eric D.; Jacob, Howard J.; McLeod, Howard L.; Roden, Dan; Rodriguez, Laura Lyman; Williams, Marc S.; Cooper, Gregory M.; Cox, Nancy J.; Herman, Gail E.; Kingsmore, Stephen; Lo, Cecilia; Lutz, Cathleen; MacRae, Calum A.; Nussbaum, Robert L.; Ordovas, Jose M.; Ramos, Erin M.; Robinson, Peter N.; Rubinstein, Wendy S.; Seidman, Christine; Stranger, Barbara E.; Wang, Haoyi; Westerfield, Monte; Bult, Carol
2017-01-01
Summary Genome sequencing has revolutionized the diagnosis of genetic diseases. Close collaborations between basic scientists and clinical genomicists are now needed to link genetic variants with disease causation. To facilitate such collaborations we recommend prioritizing clinically relevant genes for functional studies, developing reference variant-phenotype databases, adopting phenotype description standards, and promoting data sharing. PMID:28340351
USDA-ARS?s Scientific Manuscript database
In response to recent interest in vitamin D composition of foods, USDA-NDL is updating and expanding data in the National Nutrient Database for Standard Reference. In 2007, the USDA sampled vitamin D3 fortified yogurt and milk from 12 and 24 supermarkets, respectively, selected from a nationwide sta...
USDA-ARS?s Scientific Manuscript database
Latinos are the largest minority group in the U.S. The Nutrient Data Laboratory (NDL) is sampling and analyzing foods commonly consumed by Latin Americans in order to improve the quality and quantity of data on ethnic foods in the USDA National Nutrient Database for Standard Reference. Guanabana, gu...
Chapter 4 - The LANDFIRE Prototype Project reference database
John F. Caratti
2006-01-01
This chapter describes the data compilation process for the Landscape Fire and Resource Management Planning Tools Prototype Project (LANDFIRE Prototype Project) reference database (LFRDB) and explains the reference data applications for LANDFIRE Prototype maps and models. The reference database formed the foundation for all LANDFIRE tasks. All products generated by the...
New Data Bases and Standards for Gravity Anomalies
NASA Astrophysics Data System (ADS)
Keller, G. R.; Hildenbrand, T. G.; Webring, M. W.; Hinze, W. J.; Ravat, D.; Li, X.
2008-12-01
Ever since the use of high-precision gravimeters emerged in the 1950s, gravity surveys have been an important tool for geologic studies. Recent developments that make geologically useful measurements from airborne and satellite platforms, the ready availability of the Global Positioning System that provides precise vertical and horizontal control, improved global databases, and the increased availability of processing and modeling software have accelerated the use of the gravity method. As a result, efforts are being made to improve the gravity databases publicly available to the geoscience community by expanding their holdings and increasing the accuracy and precision of the data in them. Specifically, the North American Gravity Database as well as the individual databases of Canada, Mexico, and the United States are being revised using new formats and standards to improve their coverage, standardization, and accuracy. An important part of this effort is revision of procedures and standards for calculating gravity anomalies, taking into account the enhanced computational power available, modern satellite-based positioning technology, improved terrain databases, and increased interest in more accurately defining the different components of gravity anomalies. The most striking revision is the use of a single internationally accepted reference ellipsoid for the horizontal and vertical datums of gravity stations as well as for the computation of the calculated value of theoretical gravity. The new standards hardly impact the interpretation of local anomalies, but do improve regional anomalies in that long-wavelength artifacts are removed. Most importantly, such new standards can be consistently applied to gravity database compilations of nations, continents, and even the entire world. Although many types of gravity anomalies have been described, they fall into three main classes.
The primary class incorporates planetary effects, which are analytically prescribed, to derive the predicted or modeled gravity; anomalies of this class are therefore termed planetary. The most primitive version of a gravity anomaly is simply the difference between the value of gravity predicted by the effect of the reference ellipsoid and the observed gravity. When the height of the gravity station increases, the ellipsoidal gravity anomaly decreases because of the increased distance of measurement from the anomaly-producing masses. The two primary anomalies in geophysics, which are appropriately classified as planetary anomalies, are the Free-air and Bouguer gravity anomalies. They employ models that account for planetary effects on gravity, including the topography of the earth. A second class, geological anomalies, additionally includes in the predicted gravity the modeled effect of known or assumed masses, using geological data such as densities and crustal thickness. The third class, filtered anomalies, removes the gravity effects of largely unknown sources that are determined, empirically or analytically, from the nature of the gravity anomalies by filtering.
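The two planetary anomalies named above follow well-known first-order formulas: the free-air anomaly adds roughly 0.3086 mGal per metre of station height, and the simple Bouguer anomaly further subtracts the infinite-slab effect 2πGρh. A minimal sketch follows; the theoretical ellipsoid gravity g0 is taken as an input rather than computed from an ellipsoid formula.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
MGAL_PER_MS2 = 1e5      # 1 m/s^2 = 100,000 mGal

def free_air_anomaly(g_obs, g0, h):
    """Free-air anomaly in mGal: observed minus theoretical gravity,
    corrected by the standard free-air gradient (0.3086 mGal/m).
    g_obs, g0 in mGal; h = station height in metres."""
    return g_obs - g0 + 0.3086 * h

def simple_bouguer_anomaly(g_obs, g0, h, rho=2670.0):
    """Simple Bouguer anomaly in mGal: free-air anomaly minus the
    infinite-slab effect 2*pi*G*rho*h (rho in kg/m^3; the common
    reduction density 2670 kg/m^3 is used as a default)."""
    slab = 2 * math.pi * G * rho * h * MGAL_PER_MS2
    return free_air_anomaly(g_obs, g0, h) - slab
```

At the standard reduction density the slab term works out to about 0.1119 mGal per metre, the familiar Bouguer correction rate.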
LOINC, a universal standard for identifying laboratory observations: a 5-year update.
McDonald, Clement J; Huff, Stanley M; Suico, Jeffrey G; Hill, Gilbert; Leavelle, Dennis; Aller, Raymond; Forrey, Arden; Mercer, Kathy; DeMoor, Georges; Hook, John; Williams, Warren; Case, James; Maloney, Pat
2003-04-01
The Logical Observation Identifier Names and Codes (LOINC) database provides a universal code system for reporting laboratory and other clinical observations. Its purpose is to identify observations in electronic messages such as Health Level Seven (HL7) observation messages, so that when hospitals, health maintenance organizations, pharmaceutical manufacturers, researchers, and public health departments receive such messages from multiple sources, they can automatically file the results in the right slots of their medical records, research, and/or public health systems. For each observation, the database includes a code (25 000 of which identify laboratory test observations), a long formal name, a "short" 30-character name, and synonyms. The database comes with a mapping program called Regenstrief LOINC Mapping Assistant (RELMA(TM)) to assist the mapping of local test codes to LOINC codes and to facilitate browsing of the LOINC results. Both LOINC and RELMA are available at no cost from http://www.regenstrief.org/loinc/. The LOINC medical database carries records for >30 000 different observations. LOINC codes are being used by large reference laboratories and federal agencies, e.g., the CDC and the Department of Veterans Affairs, and are part of the Health Insurance Portability and Accountability Act (HIPAA) attachment proposal. Internationally, they have been adopted in Switzerland, Hong Kong, Australia, and Canada, and by the German national standards organization, the Deutsches Institut für Normung. Laboratories should include LOINC codes in their outbound HL7 messages so that clinical and research clients can easily integrate these results into their clinical and research repositories. Laboratories should also encourage instrument vendors to deliver LOINC codes in their instrument outputs and demand LOINC codes in HL7 messages they get from reference laboratories to avoid the need to lump so many referral tests under the "send out lab" code.
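As an illustration of carrying a LOINC code in an outbound HL7 v2 message, the snippet below builds a bare OBX observation segment with the code in the OBX-3 identifier field (coding system "LN" for LOINC). The helper and its field choices are a simplified sketch, not a complete HL7 implementation; 2345-7 is the LOINC code for serum/plasma glucose.

```python
def obx_segment(set_id, loinc_code, name, value, units):
    """Build a minimal HL7 v2 OBX segment (sketch for illustration).
    Fields: OBX-1 set ID, OBX-2 value type (NM = numeric),
    OBX-3 observation identifier code^text^coding-system,
    OBX-4 sub-ID (left empty here), OBX-5 value, OBX-6 units."""
    return "|".join([
        "OBX", str(set_id), "NM",
        f"{loinc_code}^{name}^LN",   # LOINC code travels in OBX-3
        "", str(value), units,
    ])

seg = obx_segment(1, "2345-7", "Glucose", 95, "mg/dL")
```

A receiving system can then route the result on the universal LOINC identifier rather than on each sender's local test code.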
HMDB 4.0: the human metabolome database for 2018.
Wishart, David S; Feunang, Yannick Djoumbou; Marcu, Ana; Guo, An Chi; Liang, Kevin; Vázquez-Fresno, Rosa; Sajed, Tanvir; Johnson, Daniel; Li, Carin; Karu, Naama; Sayeeda, Zinat; Lo, Elvis; Assempour, Nazanin; Berjanskii, Mark; Singhal, Sandeep; Arndt, David; Liang, Yonjie; Badran, Hasan; Grant, Jason; Serra-Cayuela, Arnau; Liu, Yifeng; Mandal, Rupa; Neveu, Vanessa; Pon, Allison; Knox, Craig; Wilson, Michael; Manach, Claudine; Scalbert, Augustin
2018-01-04
The Human Metabolome Database or HMDB (www.hmdb.ca) is a web-enabled metabolomic database containing comprehensive information about human metabolites along with their biological roles, physiological concentrations, disease associations, chemical reactions, metabolic pathways, and reference spectra. First described in 2007, the HMDB is now considered the standard metabolomic resource for human metabolic studies. Over the past decade the HMDB has continued to grow and evolve in response to emerging needs for metabolomics researchers and continuing changes in web standards. This year's update, HMDB 4.0, represents the most significant upgrade to the database in its history. For instance, the number of fully annotated metabolites has increased by nearly threefold, the number of experimental spectra has grown by almost fourfold and the number of illustrated metabolic pathways has grown by a factor of almost 60. Significant improvements have also been made to the HMDB's chemical taxonomy, chemical ontology, spectral viewing, and spectral/text searching tools. A great deal of brand new data has also been added to HMDB 4.0. This includes large quantities of predicted MS/MS and GC-MS reference spectral data as well as predicted (physiologically feasible) metabolite structures to facilitate novel metabolite identification. Additional information on metabolite-SNP interactions and the influence of drugs on metabolite levels (pharmacometabolomics) has also been added. Many other important improvements in the content, the interface, and the performance of the HMDB website have been made and these should greatly enhance its ease of use and its potential applications in nutrition, biochemistry, clinical chemistry, clinical genetics, medicine, and metabolomics science. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Wright, Judy M; Cottrell, David J; Mir, Ghazala
2014-07-01
To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression. We examined 23 health, social science, religion, and grey literature databases that had been searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We identified pragmatic workload factors that influence database selection. PsycINFO was the best performing database within all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams made significant contributions of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, non-Western databases, personal libraries, and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.
An editor for pathway drawing and data visualization in the Biopathways Workbench.
Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar
2009-10-02
Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de-novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, and of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database (Pathway) as well as from the LIPID MAPS web server http://www.lipidmaps.org. Data arises from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and is arranged by experiment. Facility is provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time course data. Node and interaction layout as well as data display may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is provided in the form of read/construct/write access to models in SBML (Systems Biology Markup Language) contained in the local file system. 
Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.
Furu, Kari; Karlstad, Øystein; Skurtveit, Svetlana; Håberg, Siri E; Nafstad, Per; London, Stephanie J; Nystad, Wenche
2011-01-01
Objectives To examine the validity of: 1) maternal questionnaire report of children's use of anti-asthmatics using a prescription database as the reference standard, 2) dispensed anti-asthmatics as a measure of asthma using maternal report of children's asthma as the reference standard. Study Design and Setting 3394 children in the Norwegian Mother and Child Cohort Study (MoBa) aged seven were linked to the Norwegian Prescription Database (NorPD). Maternal report of both children's use of anti-asthmatics during the preceding year and of the presence of asthma was compared with data on dispensed anti-asthmatics. Results 2056 mothers responded and reported use of anti-asthmatics the previous year in 125 of 147 children who had been dispensed anti-asthmatics (sensitivity 85.0%). Of 1909 children with no dispensed anti-asthmatics, 1848 had no maternal report of anti-asthmatic use (specificity 96.8%). Mothers reported current asthma in 133 (6.5% of 2056) children, including 122 (5.9%) reported as verified by a doctor. Of these 122, 98 had been dispensed anti-asthmatics during the preceding year (sensitivity 80.3%). Only 1.2% of the children without reported asthma were dispensed anti-asthmatics. Conclusion Mother-reported use of anti-asthmatics during the previous year among 7 year old children is highly valid. Dispensed anti-asthmatics would be a useful proxy for the presence of current asthma when disease data are not available. PMID:21232920
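The validity figures in this record reduce to the standard sensitivity/specificity arithmetic, which the short sketch below reproduces from the reported counts (125 of 147 dispensed children with reported use; 1848 of 1909 non-dispensed children without reported use).

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard validation metrics against a reference standard:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec

# Questionnaire report vs. prescription-database reference standard:
# TP = 125 reported of 147 dispensed (FN = 22);
# TN = 1848 without report of 1909 non-dispensed (FP = 61).
sens, spec = sensitivity_specificity(tp=125, fn=22, tn=1848, fp=61)
```

These evaluate to about 85.0% sensitivity and 96.8% specificity, matching the figures reported in the abstract.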
The Magnetics Information Consortium (MagIC)
NASA Astrophysics Data System (ADS)
Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.
2003-12-01
The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. 
The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. Finally, the contents of these template files will be automatically parsed into the online relational database.
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Evaluating the exact quantity of noise present in an image and quality of an image in the absence of a reference image is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Engineering (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian-noise-corrupted images.
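The initial median-of-wavelet-coefficients estimate can be sketched with a single-level Haar transform and the standard robust MAD rule, median(|HH|)/0.6745. This is a generic illustration of the idea, not the authors' exact pipeline, and it omits the curve-fitting refinement step:

```python
import numpy as np

def estimate_noise_sigma(img):
    """Estimate Gaussian noise std from the median of the single-level
    Haar diagonal (HH) subband coefficients (robust MAD estimator)."""
    img = np.asarray(img, dtype=float)
    # trim to even dimensions so the image splits into 2x2 Haar blocks
    h, w = img.shape[0] & ~1, img.shape[1] & ~1
    a = img[0:h:2, 0:w:2]
    b = img[0:h:2, 1:w:2]
    c = img[1:h:2, 0:w:2]
    d = img[1:h:2, 1:w:2]
    hh = (a - b - c + d) / 2.0   # orthonormal Haar HH subband
    return np.median(np.abs(hh)) / 0.6745

# sanity check on a synthetic image with known noise level sigma = 10
rng = np.random.default_rng(0)
noisy = 128 + rng.normal(0.0, 10.0, size=(512, 512))
print(round(estimate_noise_sigma(noisy), 1))
```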
Improvements in the Protein Identifier Cross-Reference service.
Wein, Samuel P; Côté, Richard G; Dumousseau, Marine; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan A
2012-07-01
The Protein Identifier Cross-Reference (PICR) service is a tool that allows users to map protein identifiers, protein sequences and gene identifiers across over 100 different source databases. PICR takes input through an interactive website as well as Representational State Transfer (REST) and Simple Object Access Protocol (SOAP) services. It returns the results as HTML pages, XLS and CSV files. It has been in production since 2007 and has been recently enhanced to add new functionality and increase the number of databases it covers. Protein subsequences can be searched with the Basic Local Alignment Search Tool (BLAST) against the UniProt Knowledgebase (UniProtKB) to provide an entry point to the standard PICR mapping algorithm. In addition, gene identifiers from UniProtKB and Ensembl can now be submitted as input or mapped to as output from PICR. We have also implemented a 'best-guess' mapping algorithm for UniProt. In this article, we describe the usefulness of PICR, how these changes have been implemented, and the corresponding additions to the web services. Finally, we explain that the number of source databases covered by PICR has increased from the initial 73 to the current 102. New resources include several new species-specific Ensembl databases as well as the Ensembl Genome ones. PICR can be accessed at http://www.ebi.ac.uk/Tools/picr/.
A manual for a laboratory information management system (LIMS) for light stable isotopes
Coplen, Tyler B.
1997-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes ³H, ³He, and ¹⁴C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
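Step (iv), normalization of stable isotopic data to an international scale using isotopic reference materials, is in essence a two-point linear calibration between two anchors of known value (e.g. VSMOW and SLAP for δ²H). A minimal sketch; the measured anchor values below are hypothetical:

```python
def normalize_delta(measured, ref1_meas, ref1_true, ref2_meas, ref2_true):
    """Two-point linear normalization of a measured delta value to an
    international scale, using two isotopic reference materials whose
    true (assigned) values bracket the scale."""
    slope = (ref2_true - ref1_true) / (ref2_meas - ref1_meas)
    return ref1_true + (measured - ref1_meas) * slope

# VSMOW (0.0 per mil) and SLAP (-428.0 per mil for delta-2H) as anchors,
# with hypothetical raw measured values for the two reference waters:
print(normalize_delta(-200.0, 1.5, 0.0, -420.0, -428.0))
```

By construction the two reference materials map exactly onto their assigned values, and every sample measured in the same run is rescaled accordingly.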
A manual for a Laboratory Information Management System (LIMS) for light stable isotopes
Coplen, Tyler B.
1998-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program is presented herein. Major benefits of this system include (i) an increase in laboratory efficiency, (ii) reduction in the use of paper, (iii) reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) decreased errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for stable isotope laboratories. Since the original publication of the manual for LIMS for Light Stable Isotopes, the isotopes ³H, ³He, and ¹⁴C, and the chlorofluorocarbons (CFCs), CFC-11, CFC-12, and CFC-113, have been added to this program.
Validation of Living Donor Nephrectomy Codes
Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.
2018-01-01
Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
Grid-enabled measures: using Science 2.0 to standardize measures and share data.
Moser, Richard P; Hesse, Bradford W; Shaikh, Abdul R; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry Y; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha
2011-05-01
Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment, a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward in a more efficient manner than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized as an instantiation of Science 2.0 principles by the National Cancer Institute (NCI) with two overarching goals: (1) promote the use of standardized measures, which are tied to theoretically based constructs; and (2) facilitate the ability to share harmonized data resulting from the use of standardized measures. The first is accomplished by creating an online venue where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting on, and viewing meta-data about the measures and associated constructs. The second is accomplished by connecting the constructs and measures to an ontological framework with data standards and common data elements such as the NCI Enterprise Vocabulary System (EVS) and the cancer Data Standards Repository (caDSR). This paper will describe the Web 2.0 principles on which the GEM database is based, describe its functionality, and discuss some of the important issues involved with creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. Published by Elsevier Inc.
Kanis, J A; Adachi, J D; Cooper, C; Clark, P; Cummings, S R; Diaz-Curiel, M; Harvey, N; Hiligsmann, M; Papaioannou, A; Pierroz, D D; Silverman, S L; Szulc, P
2013-11-01
The Committee of Scientific Advisors of International Osteoporosis Foundation (IOF) recommends that papers describing the descriptive epidemiology of osteoporosis using bone mineral density (BMD) at the femoral neck include T-scores derived from an international reference standard. The prevalence of osteoporosis as defined by the T-score is inconsistently reported in the literature which makes comparisons between studies problematic. The Epidemiology and Quality of Life Working Group of IOF convened to make its recommendations and endorsement sought thereafter from the Committee of Scientific Advisors of IOF. The Committee of Scientific Advisors of IOF recommends that papers describing the descriptive epidemiology of osteoporosis using BMD at the femoral neck include T-scores derived from the National Health and Nutrition Examination Survey III reference database for femoral neck measurements in Caucasian women aged 20-29 years. It is expected that the use of the reference standard will help resolve difficulties in the comparison of results between studies and the comparative assessment of new technologies.
Morris, Keith B; Law, Eric F; Jefferys, Roger L; Dearth, Elizabeth C; Fabyanic, Emily B
2017-11-01
Through analysis and comparison of firing pin, breech face, and ejector impressions, where appropriate, firearm examiners may connect a cartridge case to a suspect firearm with a certain likelihood in a criminal investigation. When a firearm is not present, an examiner may use the Integrated Ballistics Identification System (IBIS®), an automated search and retrieval system coupled with the National Integrated Ballistics Information Network (NIBIN), a database of images showing the markings on fired cartridge cases and bullets from crime scenes along with test-fired firearms. For the purpose of measurement quality control of these IBIS® systems, the National Institute of Standards and Technology (NIST) initiated the Standard Reference Material (SRM) 2460/2461 standard bullets and cartridge cases project. The aim of this study was to evaluate the overall performance of the IBIS® system by using NIST standard cartridge cases. By evaluating the resulting correlation scores, error rates, and percent recovery, both the variability between and within examiners when using IBIS®, in addition to any inter- and intra-variability between SRM cartridge cases, was observed. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Mallasch, Paul G.
1993-01-01
This volume contains the complete software system documentation for the Federal Communications Commission (FCC) Transponder Loading Data Conversion Software (FIX-FCC). This software was written to facilitate the formatting and conversion of FCC Transponder Occupancy (Loading) Data before it is loaded into the NASA Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS). The information that FCC supplies NASA is in report form and must be converted into a form readable by the database management software used in the GSOSTATS application. Both the User's Guide and Software Maintenance Manual are contained in this document. This volume of documentation passed an independent quality assurance review and certification by the Product Assurance and Security Office of the Planning Research Corporation (PRC). The manuals were reviewed for format, content, and readability. The Software Management and Assurance Program (SMAP) life cycle and documentation standards were used in the development of this document. Accordingly, these standards were used in the review. Refer to the System/Software Test/Product Assurance Report for the Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS) for additional information.
Designing Reliable Cohorts of Cardiac Patients across MIMIC and eICU
Chronaki, Catherine; Shahin, Abdullah; Mark, Roger
2016-01-01
The design of the patient cohort is an essential and fundamental part of any clinical patient study. Knowledge of the Electronic Health Records, underlying Database Management System, and the relevant clinical workflows are central to an effective cohort design. However, with technical, semantic, and organizational interoperability limitations, the database queries associated with a patient cohort may need to be reconfigured in every participating site. i2b2 and SHRINE advance the notion of patient cohorts as first class objects to be shared, aggregated, and recruited for research purposes across clinical sites. This paper reports on initial efforts to assess the integration of Medical Information Mart for Intensive Care (MIMIC) and Philips eICU, two large-scale anonymized intensive care unit (ICU) databases, using standard terminologies, i.e. LOINC, ICD9-CM and SNOMED-CT. Focus of this work is lab and microbiology observations and key demographics for patients with a primary cardiovascular ICD9-CM diagnosis. Results and discussion reflecting on reference core terminology standards, offer insights on efforts to combine detailed intensive care data from multiple ICUs worldwide. PMID:27774488
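The cardiovascular cohort described here rests on filtering patients by a primary ICD-9-CM diagnosis. A minimal sketch of such a filter, using the ICD-9-CM circulatory-system range 390-459; this is illustrative only and is not tied to the actual MIMIC or eICU schemas:

```python
def is_cardiovascular(icd9_code):
    """True if an ICD-9-CM code falls in 390-459, the chapter covering
    diseases of the circulatory system. V and E codes (and malformed
    entries) are excluded."""
    major = str(icd9_code).split('.')[0]
    if not major.isdigit():
        return False  # V codes, E codes, or malformed entries
    return 390 <= int(major) <= 459

# e.g. 410.71 (acute myocardial infarction) is in the cohort;
# 250.00 (diabetes) and V45.81 (post-CABG status) are not
```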
[Selected aspects of computer-assisted literature management].
Reiss, M; Reiss, G
1998-01-01
We report our own experience with a bibliographic database manager. Bibliographic database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager lets the user enter summary information (a record) for articles, book sections, books, dissertations, conference proceedings, and so on. Additional features may include the ability to import references from different sources, such as MEDLINE. The word-processing components generate reference lists and bibliographies in a variety of styles directly from a word-processor manuscript. The function and use of the software package EndNote 2 for Windows are described. Its advantages in meeting different requirements for citation style and the sort order of reference lists are emphasized.
Schoch, Conrad L; Robbertse, Barbara; Robert, Vincent; Vu, Duong; Cardinali, Gianluigi; Irinyi, Laszlo; Meyer, Wieland; Nilsson, R Henrik; Hughes, Karen; Miller, Andrew N; Kirk, Paul M; Abarenkov, Kessy; Aime, M Catherine; Ariyawansa, Hiran A; Bidartondo, Martin; Boekhout, Teun; Buyck, Bart; Cai, Qing; Chen, Jie; Crespo, Ana; Crous, Pedro W; Damm, Ulrike; De Beer, Z Wilhelm; Dentinger, Bryn T M; Divakar, Pradeep K; Dueñas, Margarita; Feau, Nicolas; Fliegerova, Katerina; García, Miguel A; Ge, Zai-Wei; Griffith, Gareth W; Groenewald, Johannes Z; Groenewald, Marizeth; Grube, Martin; Gryzenhout, Marieka; Gueidan, Cécile; Guo, Liangdong; Hambleton, Sarah; Hamelin, Richard; Hansen, Karen; Hofstetter, Valérie; Hong, Seung-Beom; Houbraken, Jos; Hyde, Kevin D; Inderbitzin, Patrik; Johnston, Peter R; Karunarathna, Samantha C; Kõljalg, Urmas; Kovács, Gábor M; Kraichak, Ekaphan; Krizsan, Krisztina; Kurtzman, Cletus P; Larsson, Karl-Henrik; Leavitt, Steven; Letcher, Peter M; Liimatainen, Kare; Liu, Jian-Kui; Lodge, D Jean; Luangsa-ard, Janet Jennifer; Lumbsch, H Thorsten; Maharachchikumbura, Sajeewa S N; Manamgoda, Dimuthu; Martín, María P; Minnis, Andrew M; Moncalvo, Jean-Marc; Mulè, Giuseppina; Nakasone, Karen K; Niskanen, Tuula; Olariaga, Ibai; Papp, Tamás; Petkovits, Tamás; Pino-Bodas, Raquel; Powell, Martha J; Raja, Huzefa A; Redecker, Dirk; Sarmiento-Ramirez, J M; Seifert, Keith A; Shrestha, Bhushan; Stenroos, Soili; Stielow, Benjamin; Suh, Sung-Oui; Tanaka, Kazuaki; Tedersoo, Leho; Telleria, M Teresa; Udayanga, Dhanushka; Untereiner, Wendy A; Diéguez Uribeondo, Javier; Subbarao, Krishna V; Vágvölgyi, Csaba; Visagie, Cobus; Voigt, Kerstin; Walker, Donald M; Weir, Bevan S; Weiß, Michael; Wijayawardene, Nalin N; Wingfield, Michael J; Xu, J P; Yang, Zhu L; Zhang, Ning; Zhuang, Wen-Ying; Federhen, Scott
2014-01-01
DNA phylogenetic comparisons have shown that morphology-based species recognition often underestimates fungal diversity. Therefore, the need for accurate DNA sequence data, tied to both correct taxonomic names and clearly annotated specimen data, has never been greater. Furthermore, the growing number of molecular ecology and microbiome projects using high-throughput sequencing require fast and effective methods for en masse species assignments. In this article, we focus on selecting and re-annotating a set of marker reference sequences that represent each currently accepted order of Fungi. The particular focus is on sequences from the internal transcribed spacer region in the nuclear ribosomal cistron, derived from type specimens and/or ex-type cultures. Re-annotated and verified sequences were deposited in a curated public database at the National Center for Biotechnology Information (NCBI), namely the RefSeq Targeted Loci (RTL) database, and will be visible during routine sequence similarity searches with NR_prefixed accession numbers. A set of standards and protocols is proposed to improve the data quality of new sequences, and we suggest how type and other reference sequences can be used to improve identification of Fungi. Database URL: http://www.ncbi.nlm.nih.gov/bioproject/PRJNA177353. Published by Oxford University Press 2013. This work is written by US Government employees and is in the public domain in the US.
Storm, Lance; Tressoldi, Patrizio E; Di Risio, Lorenzo
2010-07-01
We report the results of meta-analyses on 3 types of free-response study: (a) ganzfeld (a technique that enhances a communication anomaly referred to as "psi"); (b) nonganzfeld noise reduction using alleged psi-enhancing techniques such as dream psi, meditation, relaxation, or hypnosis; and (c) standard free response (nonganzfeld, no noise reduction). For the period 1997-2008, a homogeneous data set of 29 ganzfeld studies yielded a mean effect size of 0.142 (Stouffer Z = 5.48, p = 2.13 × 10⁻⁸). A homogeneous nonganzfeld noise reduction data set of 16 studies yielded a mean effect size of 0.110 (Stouffer Z = 3.35, p = 2.08 × 10⁻⁴), and a homogeneous data set of 14 standard free-response studies produced a weak negative mean effect size of -0.029 (Stouffer Z = -2.29, p = .989). The mean effect size value of the ganzfeld database was significantly higher than the mean effect size of the standard free-response database but was not higher than the effect size of the nonganzfeld noise reduction database [corrected]. We also found that selected participants (believers in the paranormal, meditators, etc.) had a performance advantage over unselected participants, but only if they were in the ganzfeld condition.
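The Stouffer Z statistics quoted above combine per-study z-scores into one overall test. A minimal sketch of the method; the z-scores in the example are illustrative, not the studies' actual data:

```python
import math

def stouffer_z(z_scores):
    """Stouffer's method: combined Z = sum(z_i) / sqrt(k), with a
    one-tailed p-value from the standard normal survival function."""
    k = len(z_scores)
    z = sum(z_scores) / math.sqrt(k)
    p = 0.5 * math.erfc(z / math.sqrt(2.0))  # P(Z_std > z)
    return z, p

# four hypothetical per-study z-scores
z, p = stouffer_z([1.2, 0.8, 1.5, 0.5])
print(z, p)  # combined Z = 2.0, one-tailed p ~ 0.023
```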
Lhermitte, L; Mejstrikova, E; van der Sluijs-Gelling, A J; Grigore, G E; Sedek, L; Bras, A E; Gaipa, G; Sobral da Costa, E; Novakova, M; Sonneveld, E; Buracchi, C; de Sá Bacelar, T; te Marvelde, J G; Trinquand, A; Asnafi, V; Szczepanski, T; Matarraz, S; Lopez, A; Vidriales, B; Bulsa, J; Hrusak, O; Kalina, T; Lecrevisse, Q; Martin Ayuso, M; Brüggemann, M; Verde, J; Fernandez, P; Burgos, L; Paiva, B; Pedreira, C E; van Dongen, J J M; Orfao, A; van der Velden, V H J
2018-01-01
Precise classification of acute leukemia (AL) is crucial for adequate treatment. EuroFlow has previously designed an AL orientation tube (ALOT) to guide towards the relevant classification panel (T-cell acute lymphoblastic leukemia (T-ALL), B-cell precursor (BCP)-ALL and/or acute myeloid leukemia (AML)) and final diagnosis. We have now built a reference database with 656 typical AL samples (145 T-ALL, 377 BCP-ALL, 134 AML), processed and analyzed via standardized protocols. Using principal component analysis (PCA)-based plots and automated classification algorithms for direct comparison of single cells from individual patients against the database, another 783 cases were subsequently evaluated. Depending on the database-guided results, patients were categorized as: (i) typical T, B or myeloid without, or (ii) with, a transitional component to another lineage; (iii) atypical; or (iv) mixed-lineage. Using this automated algorithm, in 781/783 cases (99.7%) the right panel was selected, and data comparable to the final WHO diagnosis were already provided in >93% of cases (85% T-ALL, 97% BCP-ALL, 95% AML and 87% mixed-phenotype AL patients), even without data on the full-characterization panels. Our results show that database-guided analysis facilitates standardized interpretation of ALOT results and allows accurate selection of the relevant classification panels, hence providing a solid basis for designing future WHO AL classifications. PMID:29089646
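A generic sketch of database-guided, PCA-based comparison of a new case against reference populations. The data are synthetic and the nearest-centroid rule is a simplification; this is not EuroFlow's actual algorithm, marker set, or class definitions:

```python
import numpy as np

def fit_pca(X, n_components=2):
    # PCA via SVD on mean-centred reference data
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_components]

def classify(sample, mu, components, class_means):
    # project a new case into the reference PC space and assign the
    # nearest class centroid (Euclidean distance)
    z = (np.asarray(sample) - mu) @ components.T
    return min(class_means, key=lambda k: np.linalg.norm(z - class_means[k]))

# two synthetic "reference" populations standing in for e.g. BCP-ALL vs AML,
# in an 8-marker expression space
rng = np.random.default_rng(1)
ref_a = rng.normal(0.0, 1.0, size=(100, 8))
ref_b = rng.normal(6.0, 1.0, size=(100, 8))
mu, comps = fit_pca(np.vstack([ref_a, ref_b]))
means = {"A": (ref_a.mean(axis=0) - mu) @ comps.T,
         "B": (ref_b.mean(axis=0) - mu) @ comps.T}
print(classify(np.full(8, 6.0), mu, comps, means))  # lands near B
```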
The FBI compression standard for digitized fingerprint images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
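The core of wavelet/scalar quantization can be illustrated with a plain uniform quantizer applied to subband coefficients. The actual WSQ codec uses dead-zone quantizers with per-subband step sizes followed by Huffman entropy coding, none of which is reproduced in this sketch:

```python
import numpy as np

def quantize(coeffs, step):
    # uniform scalar quantization: round each coefficient to a bin index
    return np.round(coeffs / step).astype(int)

def dequantize(indices, step):
    # reconstruct at bin centres; reconstruction error is bounded by step/2
    return indices * step

# stand-in for one DWT subband of a fingerprint image
rng = np.random.default_rng(0)
subband = rng.normal(0.0, 4.0, size=1000)
step = 0.5
err = np.abs(subband - dequantize(quantize(subband, step), step))
print(err.max() <= step / 2 + 1e-12)  # True: error never exceeds step/2
```

Larger step sizes give coarser bins, fewer distinct indices for the entropy coder, and hence higher compression at the cost of reconstruction error.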
SGDB: a database of synthetic genes re-designed for optimizing protein over-expression.
Wu, Gang; Zheng, Yuanpu; Qureshi, Imran; Zin, Htar Thant; Beck, Tyler; Bulka, Blazej; Freeland, Stephen J
2007-01-01
Here we present the Synthetic Gene Database (SGDB): a relational database that houses sequences and associated experimental information on synthetic (artificially engineered) genes from all peer-reviewed studies published to date. At present, the database comprises information from more than 200 published experiments. This resource not only provides reference material to guide experimentalists in designing new genes that improve protein expression, but also offers a dataset for analysis by bioinformaticians who seek to test ideas regarding the underlying factors that influence gene expression. The SGDB was built under the MySQL database management system. We also offer an XML schema for standardized data description of synthetic genes. Users can access the database at http://www.evolvingcode.net/codon/sgdb/index.php, or batch download all information through XML files. Moreover, users may visually compare the coding sequences of a synthetic gene and its natural counterpart with an integrated web tool at http://www.evolvingcode.net/codon/sgdb/aligner.php, and discuss questions, findings and related information on an associated e-forum at http://www.evolvingcode.net/forum/viewforum.php?f=27.
Thomas, Jeanice B; Sharpless, Katherine E; Yen, James H; Rimmer, Catherine A
2011-01-01
The concentrations of selected fat-soluble vitamins and carotenoids in Standard Reference Material (SRM) 3280 Multivitamin/Multielement Tablets have been determined by two independent LC methods, with measurements performed by the National Institute of Standards and Technology (NIST). This SRM has been prepared as part of a collaborative effort between NIST and the National Institutes of Health Office of Dietary Supplements. The SRM is also intended to support the Dietary Supplement Ingredient Database that is being established by the U.S. Department of Agriculture. The methods used at NIST to determine the concentration levels of vitamins A and E, and beta-carotene in the SRM used RPLC with absorbance detection. The relative precision of these methods ranged from 2 to 8% for the analytes measured. SRM 3280 is primarily intended for use in validating analytical methods for the determination of selected vitamins, carotenoids, and elements in multivitamin/multielement tablets and similar matrixes.
Building-up a database of spectro-photometric standards from the UV to the NIR
NASA Astrophysics Data System (ADS)
Vernet, J.; Kerber, F.; Mainieri, V.; Rauch, T.; Saitta, F.; D'Odorico, S.; Bohlin, R.; Ivanov, V.; Lidman, C.; Mason, E.; Smette, A.; Walsh, J.; Fosbury, R.; Goldoni, P.; Groot, P.; Hammer, F.; Kaper, L.; Horrobin, M.; Kjaergaard-Rasmussen, P.; Royer, F.
2010-11-01
We present results of a project aimed at establishing a set of 12 spectro-photometric standards over a wide wavelength range from 320 to 2500 nm. Currently no such set of standard stars covering the near-IR is available. Our strategy is to extend the useful range of existing well-established optical flux standards (Oke 1990, Hamuy et al. 1992, 1994) into the near-IR by means of integral field spectroscopy with SINFONI at the VLT combined with state-of-the-art white dwarf stellar atmospheric models (TMAP, Holberg et al. 2008). As a solid reference, we use two primary HST standard white dwarfs GD71 and GD153 and one HST secondary standard BD+17 4708. The data were collected through an ESO “Observatory Programme” over ~40 nights between February 2007 and September 2008.
Is the coverage of Google Scholar enough to be used alone for systematic reviews?
2013-01-01
Background In searches for clinical trials and systematic reviews, it is said that Google Scholar (GS) should never be used in isolation, but in addition to PubMed, Cochrane, and other trusted sources of information. We therefore performed a study to assess the coverage of GS specifically for the studies included in systematic reviews and evaluate if GS was sensitive enough to be used alone for systematic reviews. Methods All the original studies included in 29 systematic reviews published in the Cochrane Database Syst Rev or in the JAMA in 2009 were gathered in a gold standard database. GS was searched for all these studies one by one to assess the percentage of studies which could have been identified by searching only GS. Results All the 738 original studies included in the gold standard database were retrieved in GS (100%). Conclusion The coverage of GS for the studies included in the systematic reviews is 100%. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed. With some improvement in the research options, to increase its precision, GS could become the leading bibliographic database in medicine and could be used alone for systematic reviews. PMID:23302542
Workshop on standards in biomass for energy and chemicals: proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milne, T.A.
1984-11-01
In the course of reviewing standards literature, visiting prominent laboratories and research groups, attending biomass meetings and corresponding widely, a whole set of standards needs was identified, the most prominent of which were: biomass standard reference materials, research materials and sample banks; special collections of microorganisms, clonal material, algae, etc.; standard methods of characterization of substrates and biomass fuels; standard tests and methods for the conversion and end-use of biomass; standard protocols for the description, harvesting, preparation, storage, and measurement of productivity of biomass materials in the energy context; glossaries of terms; and development of special tests for assay of enzymatic activity and related processes. There was also a recognition of the need for government, professional and industry support of consensus standards development and the dissemination of information on standards. Some 45 biomass researchers and managers met with key NBS staff to identify and prioritize standards needs. This was done through three working panels: the Panel on Standard Reference Materials (SRM's), Research Materials (RM's), and Sample Banks; the Panel on Production and Characterization; and the Panel on Tests and Methods for Conversion and End Use. This report gives a summary of the action items in standards development recommended unanimously by the workshop attendees. The proceedings of the workshop, and an appendix, contain an extensive written record of the findings of the workshop panelists and others regarding presently existing standards and standards issues and needs. Separate abstracts have been prepared for selected papers for inclusion in the Energy Database.
NASA Astrophysics Data System (ADS)
Dziedzic, Adam; Mulawka, Jan
2014-11-01
NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them appropriately. In this paper, descriptions of selected NoSQL databases are presented. Each database is analysed with a primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to how they handle data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited reference mechanisms. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, a proposal of a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
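The referencing gap the paper describes can be sketched in a few lines: a relational foreign key is enforced by the database, while a document store holds only an ID that the application must resolve, and that ID can dangle. All names and records below are invented stand-ins, not the paper's examples.

```python
# "Relational" side: rows plus a foreign-key check emulated on insert.
users = {1: {"name": "Ada"}}
orders = []

def insert_order(order, user_fk):
    if user_fk not in users:                   # FK constraint emulation
        raise ValueError("foreign key violation")
    orders.append({"user_id": user_fk, **order})

# "Document" side: the reference is just data; resolution is manual.
user_docs = {"u1": {"name": "Ada"}}
order_doc = {"item": "book", "user_ref": "u2"}  # dangling reference!

def resolve(doc, collection):
    return collection.get(doc["user_ref"])      # may silently return None

insert_order({"item": "book"}, 1)               # accepted: user 1 exists
print(resolve(order_doc, user_docs))            # prints None, no error raised
```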
Geiss, Karla; Meyer, Martin
2013-09-01
Standardized mortality ratios and standardized incidence ratios are widely used in cohort studies to compare mortality or incidence in a study population to that in the general population on an age- and time-specific basis, but their computation is not included in standard statistical software packages. Here we present a user-friendly Microsoft Windows program for computing standardized mortality ratios and standardized incidence ratios based on calculation of exact person-years at risk stratified by sex, age and calendar time. The program offers flexible import of different file formats for input data and easy handling of general population reference rate tables, such as mortality or incidence tables exported from cancer registry databases. The application of the program is illustrated with two examples using empirical data from the Bavarian Cancer Registry. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
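The indirect-standardization arithmetic behind an SMR can be sketched as follows: expected events come from multiplying stratum-specific person-years in the cohort by general-population reference rates, and SMR = observed / expected. The strata, person-years and rates below are invented for illustration, not Bavarian registry data.

```python
def smr(observed, person_years, reference_rates):
    """person_years and reference_rates are dicts keyed by (sex, age band)."""
    expected = sum(person_years[s] * reference_rates[s] for s in person_years)
    return observed / expected

person_years = {("M", "50-59"): 1200.0, ("M", "60-69"): 800.0}
reference_rates = {("M", "50-59"): 0.005, ("M", "60-69"): 0.015}  # deaths/PY

# Expected = 1200*0.005 + 800*0.015 = 18; SMR = 27/18
print(round(smr(27, person_years, reference_rates), 2))  # prints 1.5
```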
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t-tests were performed to determine whether any of the differences (descriptively greater than 7 percentage points or 7 yr) were also statistically significant. The strength of agreement between the provincial distributions was quantified by calculating the percent agreement for each (provincial pairwise comparison). Any percent agreement less than 70% was judged unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient; it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
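The two schemes being compared are easy to state in code: SRS draws n members at random, while SS takes every k-th member after a random start, with interval k = N // n. The roster below is a toy stand-in for a membership database.

```python
import random

def simple_random_sample(frame, n, seed=0):
    rng = random.Random(seed)
    return rng.sample(frame, n)

def systematic_sample(frame, n, seed=0):
    k = len(frame) // n                      # sampling interval
    start = random.Random(seed).randrange(k) # random start within first interval
    return frame[start::k][:n]

frame = list(range(1000))                    # stand-in for an alphabetized roster
print(len(simple_random_sample(frame, 50)), len(systematic_sample(frame, 50)))
```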
Vidal-Acuña, M Reyes; Ruiz-Pérez de Pipaón, Maite; Torres-Sánchez, María José; Aznar, Javier
2017-12-08
An expanded library of matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) spectra has been constructed from 42 clinical isolates and 11 reference strains, including 23 different species from 8 sections (16 cryptic plus 7 noncryptic species). Out of a total of 379 strains of Aspergillus isolated from clinical samples, 179 strains were selected to be identified by sequencing of beta-tubulin or calmodulin genes. Protein spectra of 53 strains, cultured in liquid medium, were used to construct an in-house reference database in the MALDI-TOF MS. One hundred ninety strains (179 clinical isolates previously identified by sequencing and the 11 reference strains), cultured on solid medium, were blindly analyzed by MALDI-TOF MS to validate the generated in-house reference database. The two identification methods, gene sequencing and MALDI-TOF MS, agreed completely (100%), with no discordant identifications. The HUVR database provided species-level identification (score ≥2.0) in 165 isolates (86.84%), and for the remaining 25 (13.16%) a genus-level identification (score between 1.7 and 2.0) was obtained. The routine MALDI-TOF MS analysis with the new database was then challenged with 200 Aspergillus clinical isolates grown on solid medium in a prospective evaluation. A species identification was obtained in 191 strains (95.5%), and only nine strains (4.5%) could not be identified at the species level. Among the 200 strains, A. tubingensis was the only cryptic species identified. We demonstrated the feasibility and usefulness of the new HUVR database in MALDI-TOF MS by the use of a standardized procedure for the identification of Aspergillus clinical isolates, including cryptic species, grown either on solid or liquid media. © The Author 2017. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved.
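The score cut-offs stated in the abstract (≥2.0 for a species-level call, 1.7 to 2.0 for a genus-level call) map directly to a small classifier; the sketch below encodes only those published thresholds.

```python
# Score interpretation as described in the abstract: >=2.0 species level,
# 1.7-2.0 genus level, below 1.7 no reliable identification.
def interpret_score(score):
    if score >= 2.0:
        return "species"
    if score >= 1.7:
        return "genus"
    return "unreliable"

print(interpret_score(2.31), interpret_score(1.85), interpret_score(1.2))
# prints: species genus unreliable
```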
Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.
Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing
2018-04-06
Metaproteomics provides a direct measure of the functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about two times as many peptides were detected as with the metagenomic database only. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as using a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.
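The merged-database construction at the core of the MT strategy can be sketched as pooling taxonomy-guided reference proteins with metagenome-assembly proteins and dropping exact duplicate sequences. The records below are invented stand-ins, not sequences from the study.

```python
def merge_databases(*collections):
    """Pool (header, sequence) records, keeping each exact sequence once."""
    seen, merged = set(), []
    for collection in collections:
        for header, seq in collection:
            if seq not in seen:              # dedupe on exact sequence
                seen.add(seq)
                merged.append((header, seq))
    return merged

reference_db = [(">sp|P1|tax_guided", "MKV"), (">sp|P2|tax_guided", "MLA")]
metagenome_db = [(">contig1_orf3", "MKV"), (">contig2_orf1", "MQQ")]
merged = merge_databases(reference_db, metagenome_db)
print(len(merged))                           # prints 3: duplicate "MKV" kept once
```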
Ahuja, Jaspreet K C; Moshfegh, Alanna J; Holden, Joanne M; Harris, Ellen
2013-02-01
The USDA food and nutrient databases provide the basic infrastructure for food and nutrition research, nutrition monitoring, policy, and dietary practice. They have had a long history that goes back to 1892 and are unique, as they are the only databases available in the public domain that perform these functions. There are 4 major food and nutrient databases released by the Beltsville Human Nutrition Research Center (BHNRC), part of the USDA's Agricultural Research Service. These include the USDA National Nutrient Database for Standard Reference, the Dietary Supplement Ingredient Database, the Food and Nutrient Database for Dietary Studies, and the USDA Food Patterns Equivalents Database. The users of the databases are diverse and include federal agencies, the food industry, health professionals, restaurants, software application developers, academia and research organizations, international organizations, and foreign governments, among others. Many of these users have partnered with BHNRC to leverage funds and/or scientific expertise to work toward common goals. The use of the databases has increased tremendously in the past few years, especially the breadth of uses. These new uses of the data are bound to increase with the increased availability of technology and public health emphasis on diet-related measures such as sodium and energy reduction. Hence, continued improvement of the databases is important, so that they can better address these challenges and provide reliable and accurate data.
Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow
2013-09-01
The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
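One common CLSI-style acceptance check for a transferred interval, a simplification of the statistical transference criteria the study actually applied, is that a high fraction (often 90%) of results from healthy reference specimens must fall inside the interval. The values and limits below are invented.

```python
def validate_interval(results, lower, upper, min_fraction=0.9):
    """Accept the interval if enough reference results fall within it."""
    inside = sum(lower <= x <= upper for x in results)
    return inside / len(results) >= min_fraction

# Hypothetical analyte results from healthy reference specimens:
results = [3.1, 3.4, 2.9, 3.8, 4.0, 3.3, 3.6, 2.8, 3.2, 3.9]
print(validate_interval(results, lower=2.5, upper=4.2))  # prints True: 10/10 inside
```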
TMDB: a literature-curated database for small molecular compounds found from tea.
Yue, Yi; Chu, Gang-Xiu; Liu, Xue-Shi; Tang, Xing; Wang, Wei; Liu, Guang-Jin; Yang, Tao; Ling, Tie-Jun; Wang, Xiao-Gang; Zhang, Zheng-Zhu; Xia, Tao; Wan, Xiao-Chun; Bao, Guan-Hu
2014-09-16
Tea is one of the most consumed beverages worldwide. The health effects of tea are attributed to a wealth of different chemical components. Thousands of studies on the chemical constituents of tea have been reported. However, data from these individual reports have not been collected into a single database. The lack of a curated database of related information limits research in this field, and thus a cohesive database system is needed for data deposit and further application. The Tea Metabolome database (TMDB), a manually curated and web-accessible database, was developed to provide detailed, searchable descriptions of small molecular compounds found in Camellia spp., especially in the plant Camellia sinensis, and compounds in its manufactured products (different kinds of tea infusion). TMDB is currently the most complete and comprehensive curated collection of tea compound data in the world. It contains records for more than 1393 constituents found in tea, with information gathered from 364 published books, journal articles, and electronic databases. It also contains experimental 1H NMR and 13C NMR data collected from purified reference compounds or from other database resources such as HMDB. The TMDB interface allows users to retrieve tea compound entries by keyword search using compound name, formula, occurrence, and CAS registry number. Each entry in the TMDB contains an average of 24 separate data fields including its original plant species, compound structure, formula, molecular weight, name, CAS registry number, compound types, compound uses including health benefits, reference literature, NMR and MS data, and the corresponding IDs from databases such as HMDB and PubMed. Users can also contribute novel entries by using a web-based submission page. The TMDB database is freely accessible at http://pcsb.ahau.edu.cn:8080/TCDB/index.jsp.
The TMDB is designed to address the broad needs of tea biochemists, natural products chemists, nutritionists, and members of tea related research community. The TMDB database provides a solid platform for collection, standardization, and searching of compounds information found in tea. As such this database will be a comprehensive repository for tea biochemistry and tea health research community.
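The keyword retrieval TMDB offers (by name, formula, occurrence, or CAS number) amounts to matching a query against each entry's fields. The two records below are small illustrative stand-ins, not TMDB entries.

```python
# Illustrative compound records with the searchable fields named above.
compounds = [
    {"name": "epigallocatechin gallate", "formula": "C22H18O11",
     "cas": "989-51-5", "occurrence": "Camellia sinensis leaf"},
    {"name": "caffeine", "formula": "C8H10N4O2",
     "cas": "58-08-2", "occurrence": "Camellia sinensis leaf"},
]

def search(entries, keyword):
    """Case-insensitive substring match across all fields of each entry."""
    kw = keyword.lower()
    return [e for e in entries
            if any(kw in str(v).lower() for v in e.values())]

print([e["name"] for e in search(compounds, "58-08-2")])  # prints ['caffeine']
```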
Relativistic MR–MP Energy Levels for L-shell Ions of Silicon
Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter
2018-01-15
Level energies are reported for Si v, Si vi, Si vii, Si viii, Si ix, Si x, Si xi, and Si xii. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si v to 0.04 eV in Si xii. For K-vacancy states, the available values recommended in the NIST database are limited to Si xii and Si xiii. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.
Relativistic MR–MP Energy Levels for L-shell Ions of Silicon
NASA Astrophysics Data System (ADS)
Santana, Juan A.; Lopez-Dauphin, Nahyr A.; Beiersdorfer, Peter
2018-01-01
Level energies are reported for Si V, Si VI, Si VII, Si VIII, Si IX, Si X, Si XI, and Si XII. The energies have been calculated with the relativistic Multi-Reference Møller–Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20 eV in Si V to 0.04 eV in Si XII. For K-vacancy states, the available values recommended in the NIST database are limited to Si XII and Si XIII. The average energy deviation is below 0.3 eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements.
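The accuracy figures quoted above are average deviations between calculated level energies and the NIST recommended values. That comparison is a mean absolute deviation; the energy values below are invented for illustration, not the paper's data.

```python
def mean_abs_deviation(calculated, reference):
    """Average absolute difference between paired level energies (eV)."""
    return sum(abs(c - r) for c, r in zip(calculated, reference)) / len(calculated)

calc_ev = [10.12, 25.40, 33.05]   # hypothetical calculated level energies (eV)
nist_ev = [10.00, 25.50, 33.00]   # hypothetical NIST recommended values (eV)
print(round(mean_abs_deviation(calc_ev, nist_ev), 3))  # prints 0.09
```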
VizieR Online Data Catalog: Relativistic MR-MP energy levels for Si (Santana+, 2018)
NASA Astrophysics Data System (ADS)
Santana, J. A.; Lopez-Dauphin, N. A.; Beiersdorfer, P.
2018-03-01
Level energies are reported for Si V, Si VI, Si VII, Si VIII, Si IX, Si X, Si XI, and Si XII. The energies have been calculated with the relativistic Multi- Reference Moller-Plesset Perturbation Theory method and include valence and K-vacancy states with nl up to 5f. The accuracy of the calculated level energies is established by comparison with the recommended data listed in the National Institute of Standards and Technology (NIST) online database. The average deviation of valence level energies ranges from 0.20eV in SiV to 0.04eV in SiXII. For K-vacancy states, the available values recommended in the NIST database are limited to Si XII and Si XIII. The average energy deviation is below 0.3eV for K-vacancy states. The extensive and accurate data set presented here greatly augments the amount of available reference level energies. We expect our data to ease the line identification of L-shell ions of Si in celestial sources and laboratory-generated plasmas, and to serve as energy references in the absence of more accurate laboratory measurements. (1 data file).
NASA Astrophysics Data System (ADS)
Power, O.; Solve, S.; Chayramy, R.; Stock, M.
2010-01-01
As a part of the ongoing BIPM key comparisons BIPM.EM-K11.a and b, a comparison of the 1.018 V and 10 V voltage reference standards of the BIPM and of the National Standards Authority of Ireland-National Metrology Laboratory (NSAI-NML), Dublin, Ireland, was carried out from March to April 2010. Two BIPM Zener diode-based travelling standards were transported by freight to NSAI-NML. At NSAI-NML, the reference standard for DC voltage is maintained at the 10 V level by means of a group of characterized Zener diode-based electronic voltage standards. The output EMF of each travelling standard, at the 10 V output terminals, was measured by direct comparison with the group standard. Measurements of the output EMF of the travelling standards at the 1.018 V output terminals were made using a potentiometer, standardized against the local 10 V reference standard. At the BIPM, the travelling standards were calibrated at both voltages before and after the measurements at NSAI-NML, using the BIPM Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages on internal temperature and ambient pressure. The comparison results show that the voltage standards maintained by NSAI-NML and the BIPM were equivalent, within their stated expanded uncertainties, on the mean date of the comparison. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
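The report corrects each Zener reading for its dependence on internal temperature and ambient pressure; a first-order (linear-coefficient) correction of that kind is sketched below. The coefficient values and the reading are invented, not taken from the comparison.

```python
def corrected_emf(v_measured, temp, pressure,
                  c_temp, c_press, temp_ref=23.0, press_ref=1013.25):
    """Remove linear temperature (V/degC) and pressure (V/hPa) dependence."""
    return v_measured - c_temp * (temp - temp_ref) - c_press * (pressure - press_ref)

# Hypothetical 10 V Zener reading with invented sensitivity coefficients:
v = corrected_emf(10.000_012, temp=23.4, pressure=1008.25,
                  c_temp=1e-7, c_press=2e-8)
print(f"{v:.9f} V")
```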
Correction to storm, Tressoldi, and di Risio (2010).
2015-03-01
Reports an error in "Meta-analysis of free-response studies, 1992-2008: Assessing the noise reduction model in parapsychology" by Lance Storm, Patrizio E. Tressoldi and Lorenzo Di Risio (Psychological Bulletin, 2010[Jul], Vol 136[4], 471-485). In the article, the sentence giving the formula in the second paragraph on p. 479 was stated incorrectly. The corrected sentence is included. (The following abstract of the original article appeared in record 2010-12718-001.) [Correction Notice: An erratum for this article was reported in Vol 136(5) of Psychological Bulletin (see record 2010-17510-009). In the article, the second to last sentence of the abstract (p. 471) was stated incorrectly. The sentence should read as follows: "The mean effect size value of the ganzfeld database was significantly higher than the mean effect size of the standard free-response database but was not higher than the effect size of the nonganzfeld noise reduction database."] We report the results of meta-analyses on 3 types of free-response study: (a) ganzfeld (a technique that enhances a communication anomaly referred to as "psi"); (b) nonganzfeld noise reduction using alleged psi-enhancing techniques such as dream psi, meditation, relaxation, or hypnosis; and (c) standard free response (nonganzfeld, no noise reduction). For the period 1997-2008, a homogeneous data set of 29 ganzfeld studies yielded a mean effect size of 0.142 (Stouffer Z = 5.48, p = 2.13 × 10-8). A homogeneous nonganzfeld noise reduction data set of 16 studies yielded a mean effect size of 0.110 (Stouffer Z = 3.35, p = 2.08 × 10-4), and a homogeneous data set of 14 standard free-response studies produced a weak negative mean effect size of -0.029 (Stouffer Z = -2.29, p = .989). The mean effect size value of the ganzfeld database was significantly higher than the mean effect size of the nonganzfeld noise reduction and the standard free-response databases. 
We also found that selected participants (believers in the paranormal, meditators, etc.) had a performance advantage over unselected participants, but only if they were in the ganzfeld condition. PsycINFO Database Record (c) 2015 APA, all rights reserved.
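The Stouffer Z values quoted above combine per-study z scores by summing them and dividing by the square root of the number of studies. A sketch of that combination follows, with invented z scores rather than those of the meta-analysis.

```python
from math import sqrt

def stouffer_z(z_scores):
    """Stouffer's method: Z = sum(z_i) / sqrt(k) for k studies."""
    return sum(z_scores) / sqrt(len(z_scores))

print(round(stouffer_z([1.2, 0.8, 2.0, 1.5]), 3))  # prints 2.75
```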
An Information System for European culture collections: the way forward.
Casaregola, Serge; Vasilenko, Alexander; Romano, Paolo; Robert, Vincent; Ozerskaya, Svetlana; Kopf, Anna; Glöckner, Frank O; Smith, David
2016-01-01
Culture collections contain indispensable information about the microorganisms preserved in their repositories, such as taxonomical descriptions, origins, physiological and biochemical characteristics, bibliographic references, etc. However, information currently accessible in databases rarely adheres to common standard protocols. The resultant heterogeneity between culture collections, in terms of both content and format, notably hampers microorganism-based research and development (R&D). The optimized exploitation of these resources thus requires standardized, and simplified, access to the associated information. To this end, and in the interest of supporting R&D in the fields of agriculture, health and biotechnology, a pan-European distributed research infrastructure, MIRRI, including over 40 public culture collections and research institutes from 19 European countries, was established. A prime objective of MIRRI is to unite and provide universal access to the fragmented, and untapped, resources, information and expertise available in European public collections of microorganisms; a key component of which is to develop a dynamic Information System. For the first time, both culture collection curators as well as their users have been consulted and their feedback, concerning the needs and requirements for collection databases and data accessibility, utilised. Users primarily noted that databases were not interoperable, thus rendering a global search of multiple databases impossible. Unreliable or out-of-date and, in particular, non-homogenous, taxonomic information was also considered to be a major obstacle to searching microbial data efficiently. Moreover, complex searches are rarely possible in online databases thus limiting the extent of search queries. 
Curators also consider that overall harmonization (including Standard Operating Procedures, data structure, and software tools) is necessary to facilitate their work and to make high-quality data easily accessible to their users. Clearly, the needs of culture collection curators coincide with those of users on the crucial point of database interoperability. In this regard, and in order to design an appropriate Information System, important aspects on which the culture collection community should focus include: the interoperability of data sets with the ontologies to be used; setting best practice in data management; and the definition of an appropriate data standard.
Land, Sally; Cunningham, Philip; Zhou, Jialun; Frost, Kevin; Katzenstein, David; Kantor, Rami; Chen, Yi-Ming Arthur; Oka, Shinichi; DeLong, Allison; Sayer, David; Smith, Jeffery; Dax, Elizabeth M.; Law, Matthew
2010-01-01
The TREAT Asia (Therapeutics, Research, Education, and AIDS Training in Asia) Network is building capacity for Human Immunodeficiency Virus Type-1 (HIV-1) drug resistance testing in the region. The objective of the TREAT Asia Quality Assessment Scheme – designated TAQAS – is to standardize HIV-1 genotypic resistance testing (HIV genotyping) among laboratories to permit rigorous comparison of results from different clinics and testing centres. TAQAS has evaluated three panels of HIV-1-positive plasma from clinical material or low-passage, culture supernatant for up to 10 Asian laboratories. Laboratory participants used their standard protocols to perform HIV genotyping. Assessment was in comparison to a target genotype derived from all participants and the reference laboratory’s result. Agreement between most participants at the edited nucleotide sequence level was high (>98%). Most participants performed to the reference laboratory standard in detection of drug resistance mutations (DRMs). However, there was variation in the detection of nucleotide mixtures (0–83%) and a significant correlation with the detection of DRMs (p < 0.01). Interpretation of antiretroviral resistance showed ~70% agreement among participants when different interpretation systems were used but >90% agreement with a common interpretation system, within the Stanford University Drug Resistance Database. Using the principles of external quality assessment and a reference laboratory, TAQAS has demonstrated high quality HIV genotyping results from Asian laboratories. PMID:19490972
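Agreement at the edited nucleotide sequence level, the metric on which participants scored >98%, is a percent-identity comparison over aligned sequences. The sketch below shows that calculation on invented toy sequences.

```python
def percent_agreement(seq_a, seq_b):
    """Percent of positions identical between two aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

print(percent_agreement("ACGTACGTAC", "ACGTACGTTC"))  # prints 90.0: one mismatch
```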
Expert searching in public health
Alpi, Kristine M.
2005-01-01
Objective: The article explores the characteristics of public health information needs and the resources available to address those needs that distinguish it as an area of searching requiring particular expertise. Methods: Public health searching activities from reference questions and literature search requests at a large, urban health department library were reviewed to identify the challenges in finding relevant public health information. Results: The terminology of the information request frequently differed from the vocabularies available in the databases. Searches required the use of multiple databases and/or Web resources with diverse interfaces. Issues of the scope and features of the databases relevant to the search questions were considered. Conclusion: Expert searching in public health differs from other types of expert searching in the subject breadth and technical demands of the databases to be searched, the fluidity and lack of standardization of the vocabulary, and the relative scarcity of high-quality investigations at the appropriate level of geographic specificity. Health sciences librarians require a broad exposure to databases, gray literature, and public health terminology to perform as expert searchers in public health. PMID:15685281
Chaitanya, Lakshmi; van Oven, Mannis; Brauer, Silke; Zimmermann, Bettina; Huber, Gabriela; Xavier, Catarina; Parson, Walther; de Knijff, Peter; Kayser, Manfred
2016-03-01
The use of mitochondrial DNA (mtDNA) for maternal lineage identification often marks the last resort when investigating forensic and missing-person cases involving highly degraded biological materials. As with all comparative DNA testing, a match between evidence and reference sample requires a statistical interpretation, for which high-quality mtDNA population frequency data are crucial. Here, we determined, under high quality standards, the complete mtDNA control-region sequences of 680 individuals from across the Netherlands sampled at 54 sites, covering the entire country with 10 geographic sub-regions. The complete mtDNA control region (nucleotide positions 16,024-16,569 and 1-576) was amplified with two PCR primers and sequenced with ten different sequencing primers using the EMPOP protocol. Haplotype diversity of the entire sample set was very high at 99.63% and, accordingly, the random-match probability was 0.37%. No population substructure within the Netherlands was detected with our dataset. Phylogenetic analyses were performed to determine mtDNA haplogroups. Inclusion of these high-quality data in the EMPOP database (accession number: EMP00666) will improve its overall data content and geographic coverage in the interest of all EMPOP users worldwide. Moreover, this dataset will serve as (the start of) a national reference database for mtDNA applications in forensic and missing person casework in the Netherlands. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
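The summary statistics quoted above follow from standard formulas: random match probability is the sum of squared haplotype frequencies, and haplotype diversity applies the n/(n-1) sample correction. A minimal sketch on toy haplotypes (not the Dutch dataset):

```python
from collections import Counter

def haplotype_stats(haplotypes):
    """Random match probability (sum of squared frequencies) and
    haplotype diversity with the standard n/(n-1) correction."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    rmp = sum((c / n) ** 2 for c in counts.values())
    diversity = (n / (n - 1)) * (1 - rmp)
    return rmp, diversity

# Toy example: 6 sequences, 5 distinct haplotypes
rmp, div = haplotype_stats(["A", "B", "C", "D", "E", "A"])
```

With 680 near-unique control-region sequences, the same formulas yield the reported 99.63% diversity and 0.37% random match probability.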
Defense Standardization Program Journal. November 2002/February 2003
2003-02-01
The National Council of Teachers of Mathematics (NCTM), in recognition of the metrication efforts of mathematics teachers, started National Metric Week in 1976. On its website, the nonprofit USMA has many types of metric supplies and training aids available for sale. Also available is a CD that contains a Metric Bibliography database of more than 14,000 references to articles about the …
USDA-ARS's Scientific Manuscript database
The vitamin D3 content and variability of retail milk in the United States, having a declared fortification level of 400 IU (10 µg) per quart (25% DV per 8 fl oz serving), was determined. In 2007, vitamin D3 fortified milk (skim, 1% fat, 2% fat, whole, and 1% fat chocolate milk) was collected from ...
Pedersen, Sidsel Arnspang; Schmidt, Sigrun Alba Johannesdottir; Klausen, Siri; Pottegård, Anton; Friis, Søren; Hölmich, Lisbet Rosenkrantz; Gaist, David
2018-05-01
The nationwide Danish Cancer Registry and the Danish Melanoma Database both record data on melanoma for purposes of monitoring, quality assurance, and research. However, the data quality of the Cancer Registry and the Melanoma Database has not been formally evaluated. We estimated the positive predictive value (PPV) of melanoma diagnosis for random samples of 200 patients from the Cancer Registry (n = 200) and the Melanoma Database (n = 200) during 2004-2014, using the Danish Pathology Registry as "gold standard" reference. We further validated tumor characteristics in the Cancer Registry and the Melanoma Database. Additionally, we estimated the PPV of in situ melanoma diagnoses in the Melanoma Database, and the sensitivity of melanoma diagnoses in 2004-2014. The PPVs of melanoma in the Cancer Registry and the Melanoma Database were 97% (95% CI = 94, 99) and 100%. The sensitivity was 90% in the Cancer Registry and 77% in the Melanoma Database. The PPV of in situ melanomas in the Melanoma Database was 97% and the sensitivity was 56%. In the Melanoma Database, we observed PPVs of ulceration of 75% and Breslow thickness of 96%. The PPV of histologic subtypes varied between 87% and 100% in the Cancer Registry and 93% and 100% in the Melanoma Database. The PPVs for anatomical localization were 83%-95% in the Cancer Registry and 93%-100% in the Melanoma Database. The data quality in both the Cancer Registry and the Melanoma Database is high, supporting their use in epidemiologic studies.
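The PPV and sensitivity figures in this validation reduce to simple ratios over a 2x2 table against the pathology gold standard. A sketch with made-up counts (the abstract reports percentages, not the underlying cells):

```python
def ppv(tp, fp):
    """Positive predictive value: confirmed cases / all registry-coded cases."""
    return tp / (tp + fp)

def sens(tp, fn):
    """Sensitivity: registry-captured cases / all true (pathology) cases."""
    return tp / (tp + fn)

# Toy counts consistent with a 97% PPV over a 200-patient sample
# (illustrative only; the abstract does not give raw counts)
registry_ppv = ppv(tp=194, fp=6)
```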
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report discusses the sample tracking database in use at the Idaho National Engineering Laboratory (INEL) by the Radiation Measurements Laboratory (RML) and Analytical Radiochemistry. The database was designed in-house to meet the specific needs of the RML and Analytical Radiochemistry. The report consists of two parts, a user's guide and a reference guide. The user's guide presents some of the fundamentals needed by anyone who will be using the database via its user interface. The reference guide describes the design of both the database and the user interface. Briefly mentioned in the reference guide are the code-generating tools, CREATE-SCHEMA and BUILD-SCREEN, written to automatically generate code for the database and its user interface. The appendices contain the input files used by these tools to create code for the sample tracking database. The output files generated by these tools are also included in the appendices.
PedAM: a database for Pediatric Disease Annotation and Medicine.
Jia, Jinmeng; An, Zhongxin; Ming, Yue; Guo, Yongli; Li, Wei; Li, Xin; Liang, Yunxiang; Guo, Dongming; Tai, Jun; Chen, Geng; Jin, Yaqiong; Liu, Zhimei; Ni, Xin; Shi, Tieliu
2018-01-04
There is a significant number of children around the world suffering from the consequences of misdiagnosis and ineffective treatment of various diseases. To facilitate precision medicine in pediatrics, a database named the Pediatric Disease Annotations & Medicines (PedAM) has been built to standardize and classify pediatric diseases. The PedAM integrates both biomedical resources and clinical data from Electronic Medical Records to support the development of computational tools, thereby enabling robust data analysis and integration. It also uses disease-manifestation (D-M) pairs integrated from existing biomedical ontologies as prior knowledge to automatically recognize text-mined, D-M-specific syntactic patterns from 774,514 full-text articles and 8,848,796 abstracts in MEDLINE. Additionally, disease connections based on phenotypes or genes can be visualized on the web page of PedAM. Currently, the PedAM contains 8528 standardized pediatric disease terms (4542 unique disease concepts and 3986 synonyms) with eight annotation fields for each disease, including definition, synonyms, gene, symptom, cross-reference (Xref), human phenotypes, and the corresponding phenotypes in the mouse. The database PedAM is freely accessible at http://www.unimd.org/pedam/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Revisiting the Canadian English vowel space
NASA Astrophysics Data System (ADS)
Hagiwara, Robert
2005-04-01
In order to fill a need for experimental-acoustic baseline measurements of Canadian English vowels, a database is currently being constructed in Winnipeg, Manitoba. The database derives from multiple repetitions of fifteen English vowels (eleven standard monophthongs, syllabic /r/ and three standard diphthongs) in /hVd/ and /hVt/ contexts, as spoken by multiple speakers. Frequencies of the first four formants are taken from three timepoints in every vowel token (25, 50, and 75% of vowel duration). Preliminary results (from five men and five women) confirm some features characteristic of Canadian English, but call others into question. For instance the merger of low back vowels appears to be complete for these speakers, but the result is a lower-mid and probably rounded vowel rather than the low back unround vowel often described. With these data Canadian Raising can be quantified as an average 200 Hz or 1.5 Bark downward shift in the frequency of F1 before voiceless /t/. Analysis of the database will lead to a more accurate picture of the Canadian English vowel system, as well as provide a practical and up-to-date point of reference for further phonetic and sociophonetic comparisons.
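The "200 Hz or 1.5 Bark" equivalence can be checked with a published Hz-to-Bark approximation; the Zwicker & Terhardt (1980) formula below is one common choice, and the F1 values are illustrative (the study's own conversion method is not stated):

```python
import math

def hz_to_bark(f):
    """Zwicker & Terhardt (1980) approximation of the Bark scale."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

# e.g. an F1 drop from ~700 Hz to ~500 Hz before voiceless /t/
delta = hz_to_bark(700.0) - hz_to_bark(500.0)
```

In this F1 region a 200 Hz shift indeed comes out near 1.5 Bark, matching the quantification in the abstract.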
Performance evaluation of no-reference image quality metrics for face biometric images
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick
2018-03-01
The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposed several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them failed to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.
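Evaluating an IQM "according to the system performance" is commonly done by rank-correlating quality scores with recognition outcomes. A dependency-free Spearman correlation sketch on hypothetical scores (the paper's exact evaluation protocol is not reproduced here):

```python
def ranks(xs):
    # Average ranks (1-based); ties receive the mean rank.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-image quality scores vs. matcher comparison scores
rho = spearman([0.2, 0.5, 0.9, 0.7], [0.30, 0.55, 0.95, 0.80])
```

A rho near 1 would indicate the metric's quality scores track recognition performance monotonically.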
Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko
2014-07-01
TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using a SPARQL query language. Because TogoTable uses RDF, it can integrate annotations from not only the reference database to which the IDs originally belong, but also externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
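An ID-keyed SPARQL lookup of the kind TogoTable performs can be sketched as a VALUES query. The prefix and triple-pattern placeholder below are illustrative assumptions, not TogoTable's actual queries:

```python
def build_sparql(ids):
    """Build a SPARQL VALUES query for a list of UniProt accessions.
    Prefix and patterns are illustrative; the tool's real queries differ."""
    values = " ".join(f'("{i}")' for i in ids)
    return (
        "PREFIX up: <http://purl.uniprot.org/core/>\n"
        "SELECT ?acc ?annotation WHERE {\n"
        f"  VALUES (?acc) {{ {values} }}\n"
        "  # ... triple patterns joining ?acc to its annotations ...\n"
        "}"
    )

q = build_sparql(["P04637", "P53039"])
```

The query string would then be POSTed to a SPARQL endpoint; because the IDs travel in a VALUES block, one round trip annotates a whole table column.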
Development of a biomarkers database for the National Children's Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobdell, Danelle T.; Mendola, Pauline
The National Children's Study (NCS) is a federally sponsored, longitudinal study of environmental influences on the health and development of children across the United States (www.nationalchildrensstudy.gov). Current plans are to study approximately 100,000 children and their families beginning before birth up to age 21 years. To explore potential biomarkers that could be important measurements in the NCS, we compiled the relevant scientific literature to identify both routine or standardized biological markers and new and emerging biological markers. Although the search criteria encouraged examination of factors that influence the breadth of child health and development, attention was primarily focused on exposure, susceptibility, and outcome biomarkers associated with four important child health outcomes: autism and neurobehavioral disorders, injury, cancer, and asthma. The Biomarkers Database was designed to allow users to: (1) search the biomarker records compiled by type of marker (susceptibility, exposure, or effect), sampling media (e.g., blood, urine, etc.), and specific marker name; (2) search the citations file; and (3) read the abstract evaluations relative to our search criteria. A searchable, user-friendly database of over 2000 articles was created and is publicly available at: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=85844. PubMed was the primary source of references, with some additional searches of Toxline, NTIS, and other reference databases. Our initial focus was on review articles, beginning as early as 1996, supplemented with searches of the recent primary research literature from 2001 to 2003. We anticipate this database will have applicability for the NCS as well as other studies of children's environmental health.
R-Syst::diatom: an open-access and curated barcode database for diatoms and freshwater monitoring.
Rimet, Frédéric; Chaumeil, Philippe; Keck, François; Kermarrec, Lenaïg; Vasselon, Valentin; Kahlert, Maria; Franc, Alain; Bouchez, Agnès
2016-01-01
Diatoms are micro-algal indicators of freshwater pollution. Current standardized methodologies are based on microscopic determinations, which are time-consuming and prone to identification uncertainties. The use of DNA-barcoding has been proposed as a way to avoid these flaws. Combining barcoding with next-generation sequencing enables collection of a large quantity of barcodes from natural samples. These barcodes are identified as certain diatom taxa by comparing the sequences to a reference barcoding library using algorithms. Proof of concept was recently demonstrated for synthetic and natural communities and underlined the importance of the quality of this reference library. We present an open-access and curated reference barcoding database for diatoms, called R-Syst::diatom, developed in the framework of R-Syst, the network of systematics supported by INRA (French National Institute for Agricultural Research); see http://www.rsyst.inra.fr/en. R-Syst::diatom links DNA-barcodes to their taxonomical identifications and is dedicated to identifying barcodes from natural samples. The data come from two sources: a culture collection of freshwater algae maintained at INRA, in which new strains are regularly deposited and barcoded, and the NCBI (National Center for Biotechnology Information) nucleotide database. Two kinds of barcodes were chosen to support the database because of their efficiency: 18S (18S ribosomal RNA) and rbcL (Ribulose-1,5-bisphosphate carboxylase/oxygenase). Data are curated using innovative (Declic) and classical bioinformatic tools (Blast, classical phylogenies) and up-to-date taxonomy (catalogues and peer-reviewed papers). Every 6 months R-Syst::diatom is updated. The database is available through the R-Syst microalgae website (http://www.rsyst.inra.fr/) and a platform dedicated to next-generation sequencing data analysis, virtual_BiodiversityL@b (https://galaxy-pgtp.pierroton.inra.fr/).
We present here the content of the library regarding the number of barcodes and diatom taxa. In addition to this information, morphological features (e.g. biovolumes, chloroplasts…), life-forms (mobility, colony-type) and ecological features (taxa preferenda to pollution) are indicated in R-Syst::diatom. Database URL: http://www.rsyst.inra.fr/. © The Author(s) 2016. Published by Oxford University Press.
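Assigning a barcode to a taxon by comparison against the reference library can be illustrated with a naive percent-identity nearest match (the pipeline itself uses Blast and Declic, not this toy rule; sequences below are invented):

```python
def identity(a, b):
    """Fraction of matching positions over the longer sequence
    (assumes pre-aligned sequences; no gaps handled)."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def classify(query, library):
    """library: dict mapping taxon name -> reference barcode sequence.
    Returns the taxon whose reference is most similar to the query."""
    return max(library, key=lambda taxon: identity(query, library[taxon]))

ref = {"Navicula": "ACGTACGT", "Nitzschia": "ACGGTCGT"}
best = classify("ACGTACGA", ref)
```

Real identification additionally needs alignment, similarity thresholds, and tie handling, which is why curated tools are used in practice.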
Reference Fluid Thermodynamic and Transport Properties Database (REFPROP)
National Institute of Standards and Technology Data Gateway
SRD 23 NIST Reference Fluid Thermodynamic and Transport Properties Database (REFPROP) (PC database for purchase) NIST 23 contains revised data in a Windows version of the database, including 105 pure fluids and allowing mixtures of up to 20 components. The fluids include the environmentally acceptable HFCs, traditional HCFCs and CFCs, and 'natural' refrigerants like ammonia
Validation of a case definition to define chronic dialysis using outpatient administrative data.
Clement, Fiona M; James, Matthew T; Chin, Rick; Klarenbach, Scott W; Manns, Braden J; Quinn, Robert R; Ravani, Pietro; Tonelli, Marcello; Hemmelgarn, Brenda R
2011-03-01
Administrative health care databases offer an efficient and accessible, though as-yet unvalidated, approach to studying outcomes of patients with chronic kidney disease and end-stage renal disease (ESRD). The objective of this study is to determine the validity of outpatient physician billing derived algorithms for defining chronic dialysis compared to a reference standard ESRD registry. A cohort of incident dialysis patients (Jan. 1-Dec. 31, 2008) and prevalent chronic dialysis patients (Jan. 1, 2008) was selected from a geographically inclusive ESRD registry and administrative database. Four administrative data definitions were considered: at least 1 outpatient claim, at least 2 outpatient claims, at least 2 outpatient claims at least 90 days apart, and continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. Measures of agreement of the four administrative data definitions were compared to the reference standard (ESRD registry). Basic patient characteristics were compared across all five patient groups. 1,118,097 individuals formed the overall population and 2,227 chronic dialysis patients were included in the ESRD registry. The three definitions requiring at least 2 outpatient claims resulted in kappa statistics between 0.60-0.80, indicating "substantial" agreement. "At least 1 outpatient claim" resulted in "excellent" agreement, with a kappa statistic of 0.81. Of the four definitions, the simplest (at least 1 outpatient claim) performed comparably to the others. A limitation of this work is that the billing codes used were developed in Canada; however, other countries use similar billing practices, so the codes could readily be mapped to other systems. Our reference standard ESRD registry may not capture all dialysis patients, resulting in some misclassification; the registry is linked to ongoing care, so this is likely to be minimal. The definition utilized will vary with the research objective.
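The kappa statistics reported above measure chance-corrected agreement between each claims-based definition and the registry. A sketch with toy counts (the study's 2x2 cells are not given in the abstract):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a 2x2 agreement table between an
    administrative-data definition and the registry reference."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                        # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n) # chance agreement on "dialysis"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)  # chance agreement on "not"
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# Toy counts (illustrative, not the study's data)
k = cohens_kappa(tp=90, fp=10, fn=10, tn=890)
```

Because chance agreement is very high when the condition is rare, kappa rewards definitions that recover the registry's positives specifically, not just overall accuracy.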
High-Accuracy HLA Type Inference from Whole-Genome Sequencing Data Using Population Reference Graphs
Dilthey, Alexander T.; Gourraud, Pierre-Antoine; McVean, Gil
2016-01-01
Genetic variation at the Human Leucocyte Antigen (HLA) genes is associated with many autoimmune and infectious disease phenotypes, is an important element of the immunological distinction between self and non-self, and shapes immune epitope repertoires. Determining the allelic state of the HLA genes (HLA typing) as a by-product of standard whole-genome sequencing data would therefore be highly desirable and enable the immunogenetic characterization of samples in currently ongoing population sequencing projects. Extensive hyperpolymorphism and sequence similarity between the HLA genes, however, pose problems for accurate read mapping and make HLA type inference from whole-genome sequencing data a challenging problem. We describe how to address these challenges in a Population Reference Graph (PRG) framework. First, we construct a PRG for 46 (mostly HLA) genes and pseudogenes, their genomic context and their characterized sequence variants, integrating a database of over 10,000 known allele sequences. Second, we present a sequence-to-PRG paired-end read mapping algorithm that enables accurate read mapping for the HLA genes. Third, we infer the most likely pair of underlying alleles at G group resolution from the IMGT/HLA database at each locus, employing a simple likelihood framework. We show that HLA*PRG, our algorithm, outperforms existing methods by a wide margin. We evaluate HLA*PRG on six classical class I and class II HLA genes (HLA-A, -B, -C, -DQA1, -DQB1, -DRB1) and on a set of 14 samples (3 samples with 2 x 100bp, 11 samples with 2 x 250bp Illumina HiSeq data). Of 158 alleles tested, we correctly infer 157 alleles (99.4%). We also identify and re-type two erroneous alleles in the original validation data. 
We conclude that HLA*PRG for the first time achieves accuracies comparable to gold-standard reference methods from standard whole-genome sequencing data, though high computational demands (currently ~30–250 CPU hours per sample) remain a significant challenge to practical application. PMID:27792722
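The "simple likelihood framework" for choosing the best allele pair can be sketched as scoring each unordered pair by summed per-read log-likelihoods, with a read's likelihood under a pair taken as the average over the two alleles. The data and the exact model below are illustrative assumptions, not HLA*PRG's implementation:

```python
import itertools
import math

def best_allele_pair(read_logliks):
    """read_logliks: dict allele -> list of per-read log-likelihoods
    (same read order for every allele). Returns the unordered pair
    maximizing sum over reads of log(0.5*L(r|a1) + 0.5*L(r|a2))."""
    alleles = list(read_logliks)
    n_reads = len(next(iter(read_logliks.values())))

    def pair_score(a1, a2):
        s = 0.0
        for r in range(n_reads):
            l1 = math.exp(read_logliks[a1][r])
            l2 = math.exp(read_logliks[a2][r])
            s += math.log(0.5 * l1 + 0.5 * l2)
        return s

    return max(itertools.combinations_with_replacement(alleles, 2),
               key=lambda p: pair_score(*p))

# Toy data: reads 0-1 fit A*01, reads 2-3 fit A*02, so the
# heterozygous pair should win over either homozygote
logliks = {"A*01": [-1, -1, -9, -9], "A*02": [-9, -9, -1, -1]}
pair = best_allele_pair(logliks)
```

A production implementation would work in log space throughout and handle thousands of candidate alleles, which is part of why the reported CPU cost is high.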
Higgins, Victoria; Truong, Dorothy; Woroch, Amy; Chan, Man Khun; Tahmasebi, Houman; Adeli, Khosrow
2018-03-01
Evidence-based reference intervals (RIs) are essential to accurately interpret pediatric laboratory test results. To fill gaps in pediatric RIs, the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER) project developed an age- and sex-specific pediatric RI database based on healthy pediatric subjects. Originally established for Abbott ARCHITECT assays, CALIPER RIs were transferred to assays on Beckman, Roche, Siemens, and Ortho analytical platforms. This study provides transferred reference intervals for 29 biochemical assays for the Ortho VITROS 5600 Chemistry System (Ortho). Based on Clinical Laboratory Standards Institute (CLSI) guidelines, a method comparison analysis was performed by measuring approximately 200 patient serum samples using Abbott and Ortho assays. The equation of the line of best fit was calculated and the appropriateness of the linear model was assessed. This equation was used to transfer RIs from Abbott to Ortho assays. Transferred RIs were verified using 84 healthy pediatric serum samples from the CALIPER cohort. RIs for most chemistry analytes successfully transferred from Abbott to Ortho assays. Calcium and CO2 did not meet statistical criteria for transference (r2 < 0.70). Of the 32 transferred reference intervals, 29 were successfully verified, with approximately 90% of results from reference samples falling within transferred confidence limits. Transferred RIs for total bilirubin, magnesium, and LDH did not meet verification criteria and are not reported. This study broadens the utility of the CALIPER pediatric RI database to laboratories using Ortho VITROS 5600 biochemical assays. Clinical laboratories should verify CALIPER reference intervals for their specific analytical platform and local population, as recommended by CLSI. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
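RI transference of this kind maps the donor platform's limits through the fitted comparison line. A sketch using ordinary least squares on invented paired results (CLSI workflows often prefer Deming or Passing-Bablok regression; OLS is an assumption that keeps the sketch short):

```python
def fit_line(x, y):
    """Ordinary least squares y = a*x + b over paired patient results."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
         sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def transfer_interval(lower, upper, a, b):
    """Map the donor platform's reference limits onto the receiving assay."""
    return a * lower + b, a * upper + b

# Toy paired results: receiving assay reads 5% higher plus a 2-unit offset
abbott = [10.0, 20.0, 30.0, 40.0]
ortho = [12.5, 23.0, 33.5, 44.0]
a, b = fit_line(abbott, ortho)
lo, hi = transfer_interval(15.0, 35.0, a, b)
```

Verification then checks what fraction of results from healthy reference samples falls inside (lo, hi), with roughly 90% inside taken as acceptable here.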
mirPub: a database for searching microRNA publications.
Vergoulis, Thanasis; Kanellos, Ilias; Kostoulas, Nikos; Georgakilas, Georgios; Sellis, Timos; Hatzigeorgiou, Artemis; Dalamagas, Theodore
2015-05-01
Identifying, amongst millions of publications available in MEDLINE, those that are relevant to specific microRNAs (miRNAs) of interest based on keyword search faces major obstacles. References to miRNA names in the literature often deviate from standard nomenclature for various reasons, since even the official nomenclature evolves. For instance, a single miRNA name may identify two completely different molecules or two different names may refer to the same molecule. mirPub is a database with a powerful and intuitive interface, which facilitates searching for miRNA literature, addressing the aforementioned issues. To provide effective search services, mirPub applies text mining techniques on MEDLINE, integrates data from several curated databases and exploits data from its user community following a crowdsourcing approach. Other key features include an interactive visualization service that illustrates intuitively the evolution of miRNA data, tag clouds summarizing the relevance of publications to particular diseases, cell types or tissues and access to TarBase 6.0 data to oversee genes related to miRNA publications. mirPub is freely available at http://www.microrna.gr/mirpub/. vergoulis@imis.athena-innovation.gr or dalamag@imis.athena-innovation.gr Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Viallon, Joëlle; Idrees, Faraz; Moussay, Philippe; Wielgosz, Robert; Lin, Tsai-Yin; Norris, James E.; Hodges, Joseph T.
2017-01-01
As part of the on-going key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the ITRI Center for Measurement Standards (CMS-ITRI) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM), via a transfer standard maintained by the National Institute of Standards and Technology (NIST). The instruments have been compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Blind image quality assessment without training on human opinion scores
NASA Astrophysics Data System (ADS)
Mittal, Anish; Soundararajan, Rajiv; Muralidhar, Gautam S.; Bovik, Alan C.; Ghosh, Joydeep
2013-03-01
We propose a family of image quality assessment (IQA) models based on natural scene statistics (NSS) that can predict the subjective quality of a distorted image without reference to a corresponding distortionless image, and without any training on human opinion scores of distorted images. These 'completely blind' models compete well with standard non-blind image quality indices in terms of subjective predictive performance when tested on the large, publicly available LIVE Image Quality Database.
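Models in this family start from mean-subtracted contrast-normalized (MSCN) coefficients of the image, whose statistics deviate from the natural-scene model under distortion. A sketch with a 3x3 box window (the published models use a Gaussian-weighted window; the box filter keeps this dependency-light):

```python
import numpy as np

def mscn(image, eps=1.0):
    """Mean-subtracted contrast-normalized coefficients with a 3x3
    box window: (I - local_mean) / (local_std + eps)."""
    img = image.astype(float)
    pad = np.pad(img, 1, mode="edge")
    # Stack the 9 shifted views of each 3x3 neighbourhood
    win = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(3) for j in range(3)])
    mu = win.mean(axis=0)
    sigma = win.std(axis=0)
    return (img - mu) / (sigma + eps)

coeffs = mscn(np.arange(16.0).reshape(4, 4))
```

A quality score is then obtained by fitting a distribution (e.g. generalized Gaussian) to these coefficients and measuring distance from parameters learned on pristine images.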
2011-09-30
Referred to as geneGIS, the program will provide the ability to display, browse, select, filter and summarize spatial or temporal … DNA profiles. An … of the SPLASH photo-identification records and available DNA profiles is underway through integration and crosschecking by Cascadia and MMI. … Darwin Core standards where possible, and can accommodate the current databases developed for telemetry data at MMI and SPLASH collection records at …
FreeSolv: A database of experimental and calculated hydration free energies, with input files
Mobley, David L.; Guthrie, J. Peter
2014-01-01
This work provides a curated database of experimental and calculated hydration free energies for small neutral molecules in water, along with molecular structures, input files, references, and annotations. We call this the Free Solvation Database, or FreeSolv. Experimental values were taken from prior literature and will continue to be curated, with updated experimental references and data added as they become available. Calculated values are based on alchemical free energy calculations using molecular dynamics simulations. These used the GAFF small molecule force field in TIP3P water with AM1-BCC charges. Values were calculated with the GROMACS simulation package, with full details given in references cited within the database itself. This database builds in part on a previous, 504-molecule database containing similar information. However, additional curation of both experimental data and calculated values has been done here, and the total number of molecules is now up to 643. Additional information is now included in the database, such as SMILES strings, PubChem compound IDs, accurate reference DOIs, and others. One version of the database is provided in the Supporting Information of this article, but as ongoing updates are envisioned, the database is now versioned and hosted online. In addition to providing the database, this work describes its construction process. The database is available free-of-charge via http://www.escholarship.org/uc/item/6sd403pz. PMID:24928188
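A typical use of such a database is benchmarking calculated against experimental values. A sketch over hypothetical entries shaped like FreeSolv records (SMILES, experimental and calculated hydration free energies in kcal/mol; the numbers are invented):

```python
import math

# Hypothetical (SMILES, experimental dG_hyd, calculated dG_hyd) entries
entries = [
    ("CCO",      -5.0, -4.6),
    ("c1ccccc1", -0.9, -1.3),
    ("CC(=O)C",  -3.8, -3.5),
]

# Signed errors: calculated minus experimental
errors = [calc - expt for _, expt, calc in entries]
mse = sum(errors) / len(errors)                       # mean signed error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
```

The mean signed error exposes systematic force-field bias, while the RMSE summarizes overall accuracy across the set.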
Tikkanen, Tuomas; Leroy, Bernard; Fournier, Jean Louis; Risques, Rosa Ana; Malcikova, Jitka; Soussi, Thierry
2018-07-01
Accurate annotation of genomic variants in human diseases is essential for personalized medicine. Assessment of somatic and germline TP53 alterations has now reached the clinic and is required in several circumstances, such as identifying the most effective cancer therapy for patients with chronic lymphocytic leukemia (CLL). Here, we present Seshat, a Web service for annotating TP53 information derived from sequencing data. A flexible framework allows the use of standard file formats such as Mutation Annotation Format (MAF) or Variant Call Format (VCF), as well as common TXT files. Seshat performs accurate variant annotation using the Human Genome Variation Society (HGVS) nomenclature and the stable TP53 genomic reference provided by the Locus Reference Genomic (LRG). In addition, using the 2017 release of the UMD_TP53 database, Seshat provides multiple statistics for each TP53 variant, including database frequency, functional activity, and pathogenicity. The information is delivered in standardized output tables that minimize errors and facilitate comparison of mutational data across studies. Seshat is a valuable tool for interpreting the ever-growing volume of TP53 sequencing data generated by multiple sequencing platforms, and it is freely available via the TP53 Website, http://p53.fr, or directly at http://vps338341.ovh.net/. © 2018 Wiley Periodicals, Inc.
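Since Seshat consumes standard variant formats, a minimal sketch of what reading one VCF data line involves may help. The field layout follows the VCF column convention (CHROM, POS, ID, REF, ALT, ...); the sample line and parser are invented for illustration and are not part of Seshat itself.

```python
# Minimal sketch of reading the fixed fields of a VCF-style data line,
# the kind of input a service like Seshat accepts. The sample record
# is invented for illustration (chromosome 17 hosts TP53).
def parse_vcf_line(line):
    """Split one tab-delimited VCF data line into its fixed fields."""
    fields = line.rstrip("\n").split("\t")
    chrom, pos, vid, ref, alt = fields[:5]
    return {"chrom": chrom, "pos": int(pos), "id": vid, "ref": ref, "alt": alt}

sample = "17\t7577121\t.\tG\tA\t.\tPASS\t."
variant = parse_vcf_line(sample)
print(variant)  # {'chrom': '17', 'pos': 7577121, ...}
```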
Bedside diagnosis of dysphagia: a systematic review.
O'Horo, John C; Rogus-Pulia, Nicole; Garcia-Arguello, Lisbeth; Robbins, JoAnne; Safdar, Nasia
2015-04-01
Dysphagia is associated with aspiration, pneumonia, and malnutrition, but remains challenging to identify at the bedside. A variety of exam protocols and maneuvers are commonly used, but the efficacy of these maneuvers is highly variable. We conducted a comprehensive search of 7 databases, including MEDLINE, Embase, and Scopus, from each database's earliest inception through June 9, 2014. Studies reporting diagnostic performance of a bedside examination maneuver compared to a reference gold standard (videofluoroscopic swallow study or flexible endoscopic evaluation of swallowing with sensory testing) were included for analysis. From each study, data were abstracted based on the type of diagnostic method and reference standard study population and inclusion/exclusion characteristics, design, and prediction of aspiration. The search strategy identified 38 articles meeting inclusion criteria. Overall, most bedside examinations lacked sufficient sensitivity to be used for screening purposes across all patient populations examined. Individual studies found dysphonia assessments, abnormal pharyngeal sensation assessments, dual axis accelerometry, and 1 description of water swallow testing to be sensitive tools, but none were reported as consistently sensitive. A preponderance of identified studies was in poststroke adults, limiting the generalizability of results. No bedside screening protocol has been shown to provide adequate predictive value for presence of aspiration. Several individual exam maneuvers demonstrated reasonable sensitivity, but reproducibility and consistency of these protocols was not established. More research is needed to design an optimal protocol for dysphagia detection. © 2015 Society of Hospital Medicine.
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Pimsut, S.; Rujirat, N.
2016-01-01
A comparison of the Josephson array voltage standards of the Bureau International des Poids et Mesures (BIPM) and the National Institute of Metrology (Thailand), NIMT, was carried out in November 2015 at the level of 10 V. For this exercise, options A and B of the BIPM.EM-K10.b comparison protocol were applied. Option B required the BIPM to provide a reference voltage for measurement by the NIMT using its Josephson standard with its own measuring device. Option A required the NIMT to provide a reference voltage with its Josephson voltage standard for measurement by the BIPM using an analogue nanovoltmeter and associated measurement loop. In all cases the BIPM array was kept floating from ground. The final results were in good agreement, within the combined relative standard uncertainty of 2.6 parts in 10^10 for the nominal voltage of 10 V. This text appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
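The quantity being compared rests on the Josephson relation V = n·f/K_J. A hedged sketch using the conventional constant K_J-90 follows; the step count and drive frequency are illustrative, not the values used in this comparison.

```python
# Sketch of the Josephson voltage relation V = n * f / K_J underlying
# array voltage standards. K_J-90 is the conventional Josephson constant
# adopted in 1990; n and f below are illustrative.
KJ_90 = 483_597.9e9  # Hz/V

def josephson_voltage(n_steps, frequency_hz):
    """Voltage developed by n quantized steps at microwave frequency f."""
    return n_steps * frequency_hz / KJ_90

# Roughly how many quantized steps a 75 GHz-driven array needs for 10 V:
n_for_10v = 10 * KJ_90 / 75e9
print(f"~{n_for_10v:.0f} steps for 10 V at 75 GHz")
```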
A Sediment Testing Reference Area Database for the San Francisco Deep Ocean Disposal Site (SF-DODS)
EPA established and maintains a SF-DODS reference area database of previously-collected sediment test data. Several sets of sediment test data have been successfully collected from the SF-DODS reference area.
Pruitt, Kim D.; Tatusova, Tatiana; Maglott, Donna R.
2005-01-01
The National Center for Biotechnology Information (NCBI) Reference Sequence (RefSeq) database (http://www.ncbi.nlm.nih.gov/RefSeq/) provides a non-redundant collection of sequences representing genomic data, transcripts and proteins. Although the goal is to provide a comprehensive dataset representing the complete sequence information for any given species, the database pragmatically includes sequence data that are currently publicly available in the archival databases. The database incorporates data from over 2400 organisms and includes over one million proteins representing significant taxonomic diversity spanning prokaryotes, eukaryotes and viruses. Nucleotide and protein sequences are explicitly linked, and the sequences are linked to other resources including the NCBI Map Viewer and Gene. Sequences are annotated to include coding regions, conserved domains, variation, references, names, database cross-references, and other features using a combined approach of collaboration and other input from the scientific community, automated annotation, propagation from GenBank and curation by NCBI staff. PMID:15608248
A scalable database model for multiparametric time series: a volcano observatory case study
NASA Astrophysics Data System (ADS)
Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea
2014-05-01
The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized before it can be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, the series has a fixed sampling period, or equivalently a sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization makes it possible to perform operations such as querying and visualization across many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in particular operations for the reorganization and archiving of data from different sources, such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible over the Internet (web pages, XML). In particular, the loader layer checks the working status of each running acquisition process through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although the system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the amount of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the ability to query different time series over a specified time range, or to follow real-time signal acquisition, subject to the users' data access policy.
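A minimal sketch of the "common time scale" idea described above: two series sampled at different instants are resampled onto a shared grid by linear interpolation, so they can be queried and plotted together. Signal names and values are invented.

```python
# Sketch: align two time series onto a common time grid by linear
# interpolation. Data points are invented for illustration.
def interpolate(series, t):
    """Linearly interpolate sorted (time, value) pairs at time t."""
    for (t0, v0), (t1, v1) in zip(series, series[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside series range")

seismic = [(0.0, 1.0), (2.0, 3.0), (4.0, 5.0)]      # sampled every 2 s
infrasound = [(0.0, 10.0), (3.0, 13.0)]             # sampled every 3 s

grid = [0.0, 1.0, 2.0, 3.0]                         # common time scale
aligned = [(t, interpolate(seismic, t), interpolate(infrasound, t))
           for t in grid]
print(aligned)
```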
A multidisciplinary database for geophysical time series management
NASA Astrophysics Data System (ADS)
Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.
2013-12-01
A taxonomy has been developed for outcomes in medical research to help improve knowledge discovery.
Dodd, Susanna; Clarke, Mike; Becker, Lorne; Mavergames, Chris; Fish, Rebecca; Williamson, Paula R
2018-04-01
There is increasing recognition that insufficient attention has been paid to the choice of outcomes measured in clinical trials. The lack of a standardized outcome classification system results in inconsistencies due to ambiguity and variation in how outcomes are described across different studies. Being able to classify by outcome would increase efficiency in searching sources such as clinical trial registries, patient registries, the Cochrane Database of Systematic Reviews, and the Core Outcome Measures in Effectiveness Trials (COMET) database of core outcome sets (COS), thus aiding knowledge discovery. A literature review was carried out to determine existing outcome classification systems, none of which were sufficiently comprehensive or granular for classification of all potential outcomes from clinical trials. A new taxonomy for outcome classification was developed, and as proof of principle, outcomes extracted from all published COS in the COMET database, selected Cochrane reviews, and clinical trial registry entries were classified using this new system. Application of this new taxonomy to COS in the COMET database revealed that 274/299 (92%) COS include at least one physiological outcome, whereas only 177 (59%) include at least one measure of impact (global quality of life or some measure of functioning) and only 105 (35%) made reference to adverse events. This outcome taxonomy will be used to annotate outcomes included in COS within the COMET database and is currently being piloted for use in Cochrane Reviews within the Cochrane Linked Data Project. Wider implementation of this standard taxonomy in trial and systematic review databases and registries will further promote efficient searching, reporting, and classification of trial outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Macedo Ribeiro, Ana Freire; Bergmann, Anke; Lemos, Thiago; Pacheco, Antônio Guilherme; Mello Russo, Maitê; Santos de Oliveira, Laura Alice; de Carvalho Rodrigues, Erika
The main objective of this study was to review the literature to identify reference values for angles and distances of body segments related to upright posture in healthy adult women, measured with the Postural Assessment Software (PAS/SAPO). Electronic databases (BVS, PubMed, SciELO and Scopus) were searched using the following descriptors: evaluation, posture, photogrammetry, physical therapy, postural alignment, postural assessment, and physiotherapy. Studies that performed postural evaluation in healthy adult women with PAS/SAPO and were published in English, Portuguese or Spanish between 2005 and 2014 were included. Four studies met the inclusion criteria. Data from the included studies were grouped to establish the statistical descriptors (mean, variance, and standard deviation) of the body angles and distances. A total of 29 variables were assessed (10 in the anterior view, 16 in the right and left lateral views, and 3 in the posterior view), and their respective means and standard deviations were calculated. Reference values for the anterior and posterior views showed no symmetry between the right and left sides of the body in the frontal plane. There were also small differences in the calculated reference values for the lateral view. The reference values for quantitative evaluation of upright posture in healthy adult women estimated in the present study using PAS/SAPO could guide future studies and help clinical practice. Copyright © 2017. Published by Elsevier Inc.
A reference system for animal biometrics: application to the northern leopard frog
Petrovska-Delacretaz, D.; Edwards, A.; Chiasson, J.; Chollet, G.; Pilliod, D.S.
2014-01-01
Reference systems and public databases are available for human biometrics, but to our knowledge nothing is available for animal biometrics. This is surprising because animals are not required to give their agreement to be in a database. This paper proposes a reference system and database for the northern leopard frog (Lithobates pipiens). Both are available for reproducible experiments. Results of both open set and closed set experiments are given.
PHYTOTOX: DATABASE DEALING WITH THE EFFECT OF ORGANIC CHEMICALS ON TERRESTRIAL VASCULAR PLANTS
A new database, PHYTOTOX, dealing with the direct effects of exogenously supplied organic chemicals on terrestrial vascular plants is described. The database consists of two files, a Reference File and Effects File. The Reference File is a bibliographic file of published research...
Abraha, Iosief; Giovannini, Gianni; Serraino, Diego; Fusco, Mario; Montedori, Alessandro
2016-03-18
Breast, lung and colorectal cancers constitute the most common cancers worldwide and their epidemiology, related health outcomes and quality indicators can be studied using administrative healthcare databases. To constitute a reliable source for research, administrative healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases 9th and 10th revision codes to identify breast, lung and colorectal cancer diagnoses in administrative healthcare databases. This review protocol has been developed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. We will search the following databases: MEDLINE, EMBASE, Web of Science and the Cochrane Library, using appropriate search strategies. We will include validation studies that used administrative data to identify breast, lung and colorectal cancer diagnoses or studies that evaluated the validity of breast, lung and colorectal cancer codes in administrative data. The following inclusion criteria will be used: (1) the presence of a reference standard case definition for the disease of interest; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc) and (3) the use of data source from an administrative database. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. Ethics approval is not required. We will submit results of this study to a peer-reviewed journal for publication. 
The results will serve as a guide to identify appropriate case definitions and algorithms of breast, lung and colorectal cancers for researchers involved in validating administrative healthcare databases as well as for outcome research on these conditions that used administrative healthcare databases. CRD42015026881. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
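The test measures this review extracts (sensitivity, positive predictive value, and similar) can all be read off a standard 2×2 table comparing database codes against the reference standard. A small sketch with invented counts:

```python
# Sketch: validity measures of a database case definition against a
# reference standard, from a 2x2 table. Counts are invented.
def sensitivity(tp, fn):
    """Proportion of true cases that the codes identify."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Proportion of code-identified cases that are true cases."""
    return tp / (tp + fp)

tp, fp, fn = 90, 10, 5  # true positives, false positives, missed cases
print(f"sensitivity = {sensitivity(tp, fn):.3f}, PPV = {ppv(tp, fp):.3f}")
```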
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
Prevalence of hypertension among adolescents: systematic review and meta-analysis.
Gonçalves, Vivian Siqueira Santos; Galvão, Taís Freire; de Andrade, Keitty Regina Cordeiro; Dutra, Eliane Said; Bertolin, Maria Natacha Toral; de Carvalho, Kenia Mara Baiocchi; Pereira, Mauricio Gomes
2016-01-01
To estimate the prevalence of hypertension among adolescent Brazilian students. A systematic review of school-based cross-sectional studies was conducted. The articles were searched in the databases MEDLINE, Embase, Scopus, LILACS, SciELO, Web of Science, CAPES thesis database and Trip Database. In addition, we examined the lists of references of relevant studies to identify potentially eligible articles. No restrictions regarding publication date, language, or status applied. The studies were selected by two independent evaluators, who also extracted the data and assessed the methodological quality following eight criteria related to sampling, measuring blood pressure, and presenting results. The meta-analysis was calculated using a random effects model and analyses were performed to investigate heterogeneity. We retrieved 1,577 articles from the search and included 22 in the review. The included articles corresponded to 14,115 adolescents, 51.2% (n = 7,230) female. We observed a variety of techniques, equipment, and references used. The prevalence of hypertension was 8.0% (95%CI 5.0-11.0; I2 = 97.6%), 9.3% (95%CI 5.6-13.6; I2 = 96.4%) in males and 6.5% (95%CI 4.2-9.1; I2 = 94.2%) in females. The meta-regression failed to identify the causes of the heterogeneity among studies. Despite the differences found in the methodologies of the included studies, the results of this systematic review indicate that hypertension is prevalent in the Brazilian adolescent school population. For future investigations, we suggest the standardization of techniques, equipment, and references, aiming at improving the methodological quality of the studies.
Definition of the Beijing/W lineage of Mycobacterium tuberculosis on the basis of genetic markers.
Kremer, Kristin; Glynn, Judith R; Lillebaek, Troels; Niemann, Stefan; Kurepina, Natalia E; Kreiswirth, Barry N; Bifani, Pablo J; van Soolingen, Dick
2004-09-01
Mycobacterium tuberculosis Beijing genotype strains are highly prevalent in Asian countries and in the territory of the former Soviet Union. They are increasingly reported in other areas of the world and are frequently associated with tuberculosis outbreaks and drug resistance. Beijing genotype strains, including W strains, have been characterized by their highly similar multicopy IS6110 restriction fragment length polymorphism (RFLP) patterns, deletion of spacers 1 to 34 in the direct repeat region (Beijing spoligotype), and insertion of IS6110 in the genomic dnaA-dnaN locus. In this study the suitability and comparability of these three genetic markers to identify members of the Beijing lineage were evaluated. In a well-characterized collection of 1,020 M. tuberculosis isolates representative of the IS6110 RFLP genotypes found in The Netherlands, strains of two clades had spoligotypes characteristic of the Beijing lineage. A set of 19 Beijing reference RFLP patterns was selected to retrieve all Beijing strains from the Dutch database. These reference patterns gave a sensitivity of 98.1% and a specificity of 99.7% for identifying Beijing strains (defined by spoligotyping) in an international database of 1,084 strains. The usefulness of the reference patterns was also assessed with large DNA fingerprint databases in two other European countries and for identification strains from the W lineage found in the United States. A standardized definition for the identification of M. tuberculosis strains belonging to the Beijing/W lineage, as described in this work, will facilitate further studies on the spread and characterization of this widespread genotype family of M. tuberculosis strains.
Developmental fluoride neurotoxicity: a systematic review and meta-analysis.
Choi, Anna L; Sun, Guifan; Zhang, Ying; Grandjean, Philippe
2012-10-01
Although fluoride may cause neurotoxicity in animal models and acute fluoride poisoning causes neurotoxicity in adults, very little is known of its effects on children's neurodevelopment. We performed a systematic review and meta-analysis of published studies to investigate the effects of increased fluoride exposure and delayed neurobehavioral development. We searched the MEDLINE, EMBASE, Water Resources Abstracts, and TOXNET databases through 2011 for eligible studies. We also searched the China National Knowledge Infrastructure (CNKI) database, because many studies on fluoride neurotoxicity have been published in Chinese journals only. In total, we identified 27 eligible epidemiological studies with high and reference exposures, end points of IQ scores, or related cognitive function measures with means and variances for the two exposure groups. Using random-effects models, we estimated the standardized mean difference between exposed and reference groups across all studies. We conducted sensitivity analyses restricted to studies using the same outcome assessment and having drinking-water fluoride as the only exposure. We performed the Cochran test for heterogeneity between studies, Begg's funnel plot, and Egger test to assess publication bias, and conducted meta-regressions to explore sources of variation in mean differences among the studies. The standardized weighted mean difference in IQ score between exposed and reference populations was -0.45 (95% confidence interval: -0.56, -0.35) using a random-effects model. Thus, children in high-fluoride areas had significantly lower IQ scores than those who lived in low-fluoride areas. Subgroup and sensitivity analyses also indicated inverse associations, although the substantial heterogeneity did not appear to decrease. The results support the possibility of an adverse effect of high fluoride exposure on children's neurodevelopment. 
Future research should include detailed individual-level information on prenatal exposure, neurobehavioral performance, and covariates for adjustment.
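The pooling model used above can be sketched directly: DerSimonian-Laird random-effects pooling of per-study standardized mean differences. The (SMD, variance) pairs below are invented for illustration, not the study data.

```python
# Sketch of DerSimonian-Laird random-effects pooling of standardized
# mean differences (SMDs). Inputs are invented (estimate, variance) pairs.
def dersimonian_laird(effects):
    """Pool (estimate, variance) pairs; returns (pooled SMD, tau^2)."""
    w = [1.0 / v for _, v in effects]
    y = [e for e, _ in effects]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for _, v in effects]
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return pooled, tau2

studies = [(-0.5, 0.02), (-0.4, 0.03), (-0.45, 0.025)]
pooled, tau2 = dersimonian_laird(studies)
print(f"pooled SMD = {pooled:.3f}, tau^2 = {tau2:.4f}")
```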
Online Reference Service--How to Begin: A Selected Bibliography.
ERIC Educational Resources Information Center
Shroder, Emelie J., Ed.
1982-01-01
Materials in this bibliography were selected and recommended by members of the Use of Machine-Assisted Reference in Public Libraries Committee, Reference and Adult Services Division, American Library Association. Topics include: financial aspects, equipment and communications considerations, comparing databases and database systems, advertising…
Electronic Reference Library: Silverplatter's Database Networking Solution.
ERIC Educational Resources Information Center
Millea, Megan
Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…
A Circular Dichroism Reference Database for Membrane Proteins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace,B.; Wien, F.; Stone, T.
2006-01-01
Membrane proteins are a major product of most genomes and the target of a large number of current pharmaceuticals, yet little information exists on their structures because of the difficulty of crystallising them; hence for the most part they have been excluded from structural genomics programme targets. Furthermore, even methods such as circular dichroism (CD) spectroscopy, which seek to define secondary structure, have not been fully exploited because of technical limitations to their interpretation for membrane-embedded proteins. Empirical analyses of CD spectra are valuable for providing information on the secondary structures of proteins. However, the accuracy of the results depends on the appropriateness of the reference databases used in the analyses. Membrane proteins have different spectral characteristics than soluble proteins as a result of the low dielectric constants of membrane bilayers relative to those of aqueous solutions (Chen & Wallace (1997) Biophys. Chem. 65:65-74). To date, no CD reference database exists exclusively for the analysis of membrane proteins, and hence empirical analyses based on current reference databases derived from soluble proteins are not adequate for accurate analyses of membrane protein secondary structures (Wallace et al (2003) Prot. Sci. 12:875-884). We have therefore created a new reference database of CD spectra of integral membrane proteins whose crystal structures have been determined. To date it contains more than 20 proteins, and spans the range of secondary structures from mostly helical to mostly sheet proteins. This reference database should enable more accurate secondary structure determinations of membrane-embedded proteins and will become one of the reference database options in the CD calculation server DICHROWEB (Whitmore & Wallace (2004) NAR 32:W668-673).
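One way such a reference database is used: a measured spectrum is expressed as a linear combination of basis spectra from proteins of known structure. The three-point "spectra" below are invented toy vectors, fitted by the 2×2 normal equations; real analyses (e.g. in DICHROWEB) use full wavelength ranges and more sophisticated algorithms.

```python
# Sketch: least-squares decomposition of a measured CD spectrum into
# two reference basis spectra. All vectors are invented toy values.
helix = [1.0, 0.5, -0.5]        # basis spectrum for helical content
sheet = [0.2, -0.3, 0.4]        # basis spectrum for sheet content
measured = [0.7, -0.05, 0.15]   # constructed as 0.5*helix + 1.0*sheet

def fit_two_basis(b1, b2, obs):
    """Solve min ||c1*b1 + c2*b2 - obs|| via the 2x2 normal equations."""
    a11 = sum(x * x for x in b1)
    a22 = sum(x * x for x in b2)
    a12 = sum(x * y for x, y in zip(b1, b2))
    r1 = sum(x * y for x, y in zip(b1, obs))
    r2 = sum(x * y for x, y in zip(b2, obs))
    det = a11 * a22 - a12 * a12
    return (a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det

c_helix, c_sheet = fit_two_basis(helix, sheet, measured)
print(f"helix coeff = {c_helix:.2f}, sheet coeff = {c_sheet:.2f}")
```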
Astrobib: A Literature Referencing System Compatible with the AAS/WGAS Latex Macros
NASA Astrophysics Data System (ADS)
Ferguson, H. C.
1993-12-01
Perhaps the most tedious part of preparing an article is dealing with the references: keeping track of which have been cited and formatting the reference section at the end of the paper in accordance with a particular journal's requirements. This package aims to simplify this task, while remaining compatible with the AAS/WGAS latex macros (as well as the latex styles distributed by A&A and MNRAS). For lack of a better name, we call this package Astrobib. The astrobib package can be used on two levels. The first uses the standard ``bibtex'' software to collect all the references cited in the text and format the reference list at the end of the paper according to the style requirements of the journal. All we have done here is to modify the public-domain ``chicago.bst'' bibtex styles to produce citations in the formats required by ApJ, AJ, A&A, MNRAS, and PASP. All implement, to first order, the reference formats specified in the 1992 or 1993 ``Instructions to Authors'' of the different journals. If the paper is rejected by MNRAS, changing three lines will allow it to be printed in ApJ format. The second level overcomes two drawbacks of bibtex: the tedious use of braces and commas in the bibliography database, and the requirement that the author remember citation keys, typically constructed from the authors' initials and the date. With Astrobib the bibliography is kept in a much simpler database (based on the Unix `refer' style) and a couple of Unix-specific programs parse the database into bibtex format and preprocess the text to convert ``loose'' citations into bibtex citation keys. Loose citations allow the author to cite just a few authors (in any order) and perhaps the year or a word of the title of the conference proceedings. Documentation and instructions for electronic access to the package will be available at the meeting.
Support for this work was provided by the SERC and by NASA through grant HF1043 awarded by the STScI which is operated by AURA, Inc., for NASA under contract NAS5-26555.
de Groot, Mark C H; Schuerch, Markus; de Vries, Frank; Hesse, Ulrik; Oliva, Belén; Gil, Miguel; Huerta, Consuelo; Requena, Gema; de Abajo, Francisco; Afonso, Ana S; Souverein, Patrick C; Alvarez, Yolanda; Slattery, Jim; Rottenkolber, Marietta; Schmiedl, Sven; Van Dijk, Liset; Schlienger, Raymond G; Reynolds, Robert; Klungel, Olaf H
2014-05-01
The annual prevalence of antiepileptic drug (AED) prescribing reported in the literature differs considerably among European countries due to use of different type of data sources, time periods, population distribution, and methodologic differences. This study aimed to measure prevalence of AED prescribing across seven European routine health care databases in Spain, Denmark, The Netherlands, the United Kingdom, and Germany using a standardized methodology and to investigate sources of variation. Analyses on the annual prevalence of AEDs were stratified by sex, age, and AED. Overall prevalences were standardized to the European 2008 reference population. Prevalence of any AED varied from 88 per 10,000 persons (The Netherlands) to 144 per 10,000 in Spain and Denmark in 2001. In all databases, prevalence increased linearly: from 6% in Denmark to 15% in Spain each year since 2001. This increase could be attributed entirely to an increase in "new," recently marketed AEDs while prevalence of AEDs that have been available since the mid-1990s, hardly changed. AED use increased with age for both female and male patients up to the ages of 80 to 89 years old and tended to be somewhat higher in female than in male patients between the ages of 40 and 70. No differences between databases in the number of AEDs used simultaneously by a patient were found. We showed that during the study period of 2001-2009, AED prescribing increased in five European Union (EU) countries and that this increase was due entirely to the newer AEDs marketed since the 1990s. Using a standardized methodology, we showed consistent trends across databases and countries over time. Differences in age and sex distribution explained only part of the variation between countries. Therefore, remaining variation in AED use must originate from other differences in national health care systems. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.
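Direct standardization to a reference population, as used above, weights stratum-specific rates by the reference population's age structure so that countries with different demographics become comparable. The age-band rates and reference counts below are invented, not the European 2008 standard.

```python
# Sketch of direct standardization: weight stratum-specific prevalence
# rates by a reference population. All numbers are invented.
def directly_standardized_rate(stratum_rates, reference_counts):
    """Reference-population-weighted average of stratum rates."""
    total = sum(reference_counts)
    return sum(r * n for r, n in zip(stratum_rates, reference_counts)) / total

rates_per_10000 = [60, 90, 150, 220]          # AED prevalence by age band
reference_pop = [20000, 30000, 30000, 20000]  # reference population sizes
std = directly_standardized_rate(rates_per_10000, reference_pop)
print(f"standardized prevalence: {std:.1f} per 10,000")
```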
Whistleblowing: An integrative literature review of data-based studies involving nurses.
Jackson, Debra; Hickman, Louise D; Hutchinson, Marie; Andrew, Sharon; Smith, James; Potgieter, Ingrid; Cleary, Michelle; Peters, Kath
2014-10-27
Abstract Aim To summarise and critique the research literature about whistleblowing and nurses. Background Whistleblowing is identified as a crucial issue in the maintenance of healthcare standards, and nurses are frequently involved in whistleblowing events. Despite the importance of this issue, to our knowledge an evaluation of this body of data-based literature has not been undertaken. Method An integrative literature review approach was used to summarise and critique the research literature. Five databases, including Medline, CINAHL, PubMed, Health Science: Nursing/Academic Edition, and Google, were searched using terms including 'whistleblow*' and 'nurs*'. In addition, relevant journals were examined, as well as the reference lists of retrieved papers. Papers published during the years 2007-2013 were selected for inclusion. Findings Fifteen papers were identified, capturing data from nurses in seven countries. The findings of this review demonstrate a growing body of research calling on the nursing profession at large to engage and respond appropriately to issues involving suboptimal patient care or organisational wrongdoing. Conclusions Nursing plays a key role in maintaining practice standards and in reporting care that is unacceptable, although the repercussions for nurses who raise concerns are often insupportable. Overall, whistleblowing and how it influences the individual, their family, work colleagues, nursing practice, and policy requires further national and international research attention.
Moser, Richard P.; Hesse, Bradford W.; Shaikh, Abdul R.; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha
2011-01-01
Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment, a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward more efficiently than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized by the National Cancer Institute as an instantiation of Science 2.0 principles, with two overarching goals: (1) promote the use of standardized measures tied to theoretically based constructs; and (2) facilitate the sharing of harmonized data resulting from the use of standardized measures. This is done by creating an online venue, connected to the Cancer Biomedical Informatics Grid (caBIG®), where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting on, and viewing metadata about the measures and associated constructs. This paper describes the Web 2.0 principles on which the GEM database is based, describes its functionality, and discusses some of the important issues involved in creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. PMID:21521586
Lin, Long-Ze; Harnly, James M
2012-06-01
Chamomile (Matricaria chamomilla L.), tarragon (Artemisia dracunculus L.) and Mexican arnica (Heterotheca inuloides) are common Compositae spices and herbs found in the US market. They contain flavonoids and hydroxycinnamates that are potentially beneficial to human health. A standardized LC-PDA-ESI/MS profiling method was used to identify 51 flavonoids and 17 hydroxycinnamates. Many of the identifications were confirmed with authentic standards or through references in the literature or the laboratory's database. More than half of the phenolic compounds for each spice had not been previously reported. The phenolic profiles can be used for plant authentication and can be correlated with biological activities.
Microcomputer-Based Access to Machine-Readable Numeric Databases.
ERIC Educational Resources Information Center
Wenzel, Patrick
1988-01-01
Describes the use of microcomputers and relational database management systems to improve access to numeric databases by the Data and Program Library Service at the University of Wisconsin. The internal records management system, in-house reference tools, and plans to extend these tools to the entire campus are discussed. (3 references) (CLB)
Automated processing of shoeprint images based on the Fourier transform for use in forensic science.
de Chazal, Philip; Flynn, John; Reilly, Richard B
2005-03-01
The development of a system for automatically sorting a database of shoeprint images based on the outsole pattern in response to a reference shoeprint image is presented. The database images are sorted so that those from the same pattern group as the reference shoeprint are likely to be at the start of the list. A database of 476 complete shoeprint images belonging to 140 pattern groups was established with each group containing two or more examples. A panel of human observers performed the grouping of the images into pattern categories. Tests of the system using the database showed that the first-ranked database image belongs to the same pattern category as the reference image 65 percent of the time and that a correct match appears within the first 5 percent of the sorted images 87 percent of the time. The system has translational and rotational invariance so that the spatial positioning of the reference shoeprint images does not have to correspond with the spatial positioning of the shoeprint images of the database. The performance of the system for matching partial-prints was also determined.
Virgili, Gianni; Menchini, Francesca; Casazza, Giovanni; Hogg, Ruth; Das, Radha R; Wang, Xue; Michelessi, Manuele
2015-01-07
Diabetic macular oedema (DMO) is a thickening of the central retina, or the macula, and is associated with long-term visual loss in people with diabetic retinopathy (DR). Clinically significant macular oedema (CSMO) is the most severe form of DMO. Almost 30 years ago, the Early Treatment Diabetic Retinopathy Study (ETDRS) found that CSMO, diagnosed by means of stereoscopic fundus photography, leads to moderate visual loss in one of four people within three years. It also showed that grid or focal laser photocoagulation to the macula halves this risk. Recently, intravitreal injection of antiangiogenic drugs has also been used to try to improve vision in people with macular oedema due to DR. Optical coherence tomography (OCT) is based on optical reflectivity and is able to image retinal thickness and structure, producing cross-sectional and three-dimensional images of the central retina. It is widely used because it provides an objective and quantitative assessment of macular oedema, unlike the subjective fundus biomicroscopic assessment that ophthalmologists routinely use instead of photography. Optical coherence tomography is also used for quantitative follow-up of the effects of treatment of CSMO. To determine the diagnostic accuracy of OCT for detecting DMO and CSMO, defined according to the ETDRS in 1985, in patients referred to ophthalmologists after DR is detected. In the update of this review we also aimed to assess whether OCT might be considered the new reference standard for detecting DMO.
We searched the Cochrane Database of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA) and the NHS Economic Evaluation Database (NHSEED) (The Cochrane Library 2013, Issue 5), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to June 2013), EMBASE (January 1950 to June 2013), Web of Science Conference Proceedings Citation Index - Science (CPCI-S) (January 1990 to June 2013), BIOSIS Previews (January 1969 to June 2013), MEDION and the Aggressive Research Intelligence Facility database (ARIF). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 25 June 2013. We checked the bibliographies of relevant studies for additional references. We selected studies that assessed the diagnostic accuracy of any OCT model for detecting DMO or CSMO in patients with DR who were referred to eye clinics. Diabetic macular oedema and CSMO were diagnosed by means of fundus biomicroscopy by ophthalmologists or stereophotography by ophthalmologists or other trained personnel. Three authors independently extracted data on study characteristics and measures of accuracy. We assessed data using random-effects hierarchical sROC meta-analysis models. We included 10 studies (830 participants, 1387 eyes), published between 1998 and 2012. Prevalence of CSMO was 19% to 65% (median 50%) in the nine studies with CSMO as the target condition. Study quality was often unclear or at high risk of bias for QUADAS-2 items, specifically regarding study population selection and the exclusion of participants with poor-quality images. Applicability was unclear in all studies, since the professionals referring patients and the results of prior testing were not reported.
There was a specific 'unit of analysis' issue because both eyes of the majority of participants were included in the analyses as if they were independent. In nine studies providing data on CSMO (759 participants, 1303 eyes), pooled sensitivity was 0.78 (95% confidence interval (CI) 0.72 to 0.83) and specificity was 0.86 (95% CI 0.76 to 0.93). The median central retinal thickness cut-off we selected for data extraction was 250 µm (range 230 µm to 300 µm). Central CSMO was the target condition in all but two studies, and thus our results cannot be applied to non-central CSMO. Data from three studies reporting accuracy for detection of DMO (180 participants, 343 eyes) were not pooled. Sensitivities and specificities were about 0.80 in two studies and were both 1.00 in the third study. Since this review was conceived, the role of OCT has changed and it has become a key ingredient of decision-making at all levels of ophthalmic care in this field. Moreover, disagreements between OCT and fundus examination are informative, especially false positives, which are referred to as subclinical DMO and carry a higher risk of developing clinical CSMO. Using retinal thickness thresholds lower than 300 µm and the ophthalmologist's fundus assessment as the reference standard, central retinal thickness measured with OCT was not sufficiently accurate to diagnose the central type of CSMO in patients with DR referred to retina clinics. However, OCT false positives are generally cases of subclinical DMO that cannot be detected clinically but still carry an increased risk of disease progression. Therefore, the increasing availability of OCT devices, together with their precision and their ability to inform on retinal layer structure, now make OCT widely recognised as the new reference standard for assessment of DMO, even in some screening settings. Thus, this review will not be updated further.
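The pooled sensitivity and specificity figures above are derived from 2x2 counts of the index test (OCT) against the reference standard. A minimal sketch, with invented counts chosen to reproduce values of 0.78 and 0.86:

```python
# Diagnostic accuracy from a 2x2 table: index test result vs. the
# clinical reference standard. Counts are invented for illustration.

def sens_spec(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical eyes: 100 with CSMO and 100 without, at a 250 um cut-off
sens, spec = sens_spec(tp=78, fp=14, fn=22, tn=86)
print(sens, spec)  # 0.78 0.86
```

The review's actual pooling uses random-effects hierarchical sROC models across studies, which this per-study calculation does not attempt to reproduce.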
NASA Astrophysics Data System (ADS)
Viallon, J.; Moussay, P.; Wielgosz, R.; Bebic, J.; Norris, J. E.; Guenther, F.
2016-01-01
As part of the on-going key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the Directorate of Measures and Precious Metals (DMDM) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM), via a transfer standard maintained by the National Institute of Standards and Technology (NIST). The instruments have been compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, W.; von Laven, G.M.; Parker, T.
1993-09-01
The Bibliographic Retrieval System (BARS) is a database management system specially designed to retrieve bibliographic references. Two databases are available: (i) the Sandia Shock Compression (SSC) database, which contains over 5700 references to the literature on stress waves in solids and their applications, and (ii) the Shock Physics Index (SPHINX), which includes over 8000 further references on stress waves in solids, material properties at intermediate and low rates, ballistic and hypervelocity impact, and explosive or shock fabrication methods. There is some overlap between the information in the two databases.
De-MA: a web Database for electron Microprobe Analyses to assist EMP lab manager and users
NASA Astrophysics Data System (ADS)
Allaz, J. M.
2012-12-01
Lab managers and users of electron microprobe (EMP) facilities require comprehensive yet flexible documentation structures, as well as an efficient scheduling mechanism. A single online database system for managing reservations and providing information on standards, quantitative and qualitative setups (element mapping, etc.), and X-ray data has been developed for this purpose. This system is particularly useful in multi-user facilities where experience ranges from beginner to highly experienced. New and occasional facility users will find these tools extremely useful in developing and maintaining high-quality, reproducible, and efficient analyses. This user-friendly database is available through the web, and uses MySQL as the database engine and PHP/HTML as the scripting language (dynamic website). The database includes several tables for standards information, X-ray lines, X-ray element mapping, PHA, element setups, and the agenda. It is configurable for up to five different EMPs in a single lab, each of them having up to five spectrometers and as many diffraction crystals as required. Installation should be done on a web server supporting PHP/MySQL, although installation on a personal computer is possible using third-party freeware to create a local Apache server and to enable PHP/MySQL. Since it is web-based, any user outside the EMP lab can access this database at any time, through any web browser and on any operating system. Access can be secured using general password protection (e.g. htaccess). The web interface consists of six main menus. (1) "Standards" lists the standards defined in the database and displays detailed information on each (e.g. material type, name, reference, comments, and analyses). Images such as EDS spectra or BSE images can be associated with a standard.
(2) "Analyses" lists typical setups for quantitative analyses and allows calculation of a mineral composition from a mineral formula, or of a mineral formula from a fixed number of oxygens or cations (using an analysis in element or oxide weight-%); the latter includes re-calculation of H2O/CO2 based on stoichiometry, and an oxygen correction for F and Cl. Another option lists the available standards and possible peak or background interferences for a series of elements. (3) "X-ray maps" lists the setups recommended for element mapping using WDS, and provides a map calculator to facilitate map setup and to estimate the total mapping time. (4) "X-ray data" lists all X-ray lines for a specific element (K, L, M, absorption edges, and satellite peaks) in terms of energy, wavelength, and peak position. A check for possible interferences on peak or background positions is also possible. Theoretical X-ray peak positions for each crystal are calculated from the 2d spacing of the crystal and the wavelength of each line. (5) "Agenda" displays the reservation dates for each month and for each EMP lab defined. It also offers a reservation request option, the request being sent by email to the EMP manager for approval. (6) Finally, "Admin" is password restricted and contains all the options necessary to manage the database through user-friendly forms. Installation of this database is easy, and knowledge of HTML, PHP, or MySQL is unnecessary to install, configure, manage, or use it. A working database is accessible at http://cub.geoloweb.ch.
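The formula re-calculation on a fixed oxygen basis offered by the "Analyses" menu is a standard mineralogical normalization. A generic sketch follows, with illustrative oxide data and a forsterite-like test analysis; it is not the database's actual code:

```python
# Recast an oxide weight-% analysis as a mineral formula on a fixed
# oxygen basis (e.g. 4 O for olivine). Simplified, anhydrous sketch.

# oxide -> (molar mass in g/mol, cations per oxide formula, oxygens per oxide formula)
OXIDES = {
    "SiO2": (60.08, 1, 2),
    "MgO": (40.30, 1, 1),
    "FeO": (71.84, 1, 1),
}

def formula_on_oxygen_basis(wt_pct, oxygens=4.0):
    """Return cations per `oxygens` anions for an oxide wt-% analysis."""
    # moles of oxygen contributed by each oxide
    moles_o = {ox: wt / OXIDES[ox][0] * OXIDES[ox][2] for ox, wt in wt_pct.items()}
    scale = oxygens / sum(moles_o.values())
    # moles of cations, rescaled to the chosen oxygen basis
    return {ox: wt / OXIDES[ox][0] * OXIDES[ox][1] * scale
            for ox, wt in wt_pct.items()}

# Ideal forsterite (Mg2SiO4) is ~42.7 wt-% SiO2 and ~57.3 wt-% MgO
cations = formula_on_oxygen_basis({"SiO2": 42.7, "MgO": 57.3}, oxygens=4.0)
print(cations)  # Si close to 1, Mg close to 2
```

The H2O/CO2 re-calculation and the F/Cl oxygen correction mentioned above would add extra terms to the oxygen sum, which this sketch omits.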
Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A
2017-11-01
Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Power, O.; Stock, M.
2016-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.b, a comparison of the 10 V voltage reference standards of the BIPM and the National Standards Authority of Ireland - National Metrology Laboratory (NSAI - NML), Dublin, Ireland, was carried out in January and February 2016. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM7 (Z7) and BIPM9 (Z9), were transported by freight to NSAI-NML. At NSAI-NML, the reference standard for DC voltage at the 10 V level consists of a group of characterized Zener diode-based electronic voltage standards. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the group standard. At the BIPM the travelling standards were calibrated, before and after the measurements at NSAI-NML, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by NSAI - NML, at the level of 10 V, at NSAI - NML, UNML, and those assigned by the BIPM, at the BIPM, UBIPM, at the reference date of 31 January 2016: UNML - UBIPM = +0.22 µV; uc = 1.35 µV, at 10 V, where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at NSAI-NML, based on KJ-90, and the uncertainty related to the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Power, O.
2015-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.b, a comparison of the 10 V voltage reference standards of the BIPM and the National Standards Authority of Ireland - National Metrology Laboratory (NSAI - NML), Dublin, Ireland, was carried out in February and March 2015. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM6 (Z6) and BIPMC (ZC), were transported by freight to NSAI-NML. At NSAI-NML, the reference standard for DC voltage at the 10 V level consists of a group of characterized Zener diode-based electronic voltage standards. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the group standard. At the BIPM the travelling standards were calibrated, before and after the measurements at NSAI-NML, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by NSAI - NML, at the level of 10 V, at NSAI - NML, UNML, and those assigned by the BIPM, at the BIPM, UBIPM, at the reference date of 24 February 2015: UNML - UBIPM = -0.82 µV; uc = 1.35 µV, at 10 V, where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at NSAI-NML, based on KJ-90, and the uncertainty related to the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Improved Infrastructure for CDMS and JPL Molecular Spectroscopy Catalogues
NASA Astrophysics Data System (ADS)
Endres, Christian; Schlemmer, Stephan; Drouin, Brian; Pearson, John; Müller, Holger S. P.; Schilke, P.; Stutzki, Jürgen
2014-06-01
Over the past years a new infrastructure for atomic and molecular databases has been developed within the framework of the Virtual Atomic and Molecular Data Centre (VAMDC). Standards for the representation of atomic and molecular data, as well as a set of protocols, have been established which now allow data to be retrieved from various databases through one portal and combined easily. Apart from spectroscopic databases such as the Cologne Database for Molecular Spectroscopy (CDMS), the Jet Propulsion Laboratory microwave, millimeter and submillimeter spectral line catalogue (JPL) and the HITRAN database, various databases on molecular collisions (BASECOL, KIDA) and reactions (UMIST) are connected. Together with other groups within the VAMDC consortium, we are working on common user tools to simplify access for new users and to tailor data requests for users with specific needs. This comprises in particular tools to support the analysis of complex observational data obtained with the ALMA telescope. In this presentation, requests to CDMS and JPL will be used to explain the basic concepts and the tools provided by VAMDC. In addition, a new portal to CDMS will be presented which has a number of new features, in particular meaningful quantum numbers, references linked to data points, access to state energies, and improved documentation. Fit files are available for download, and queries to other databases are possible.
Al-Rshaidat, Mamoon M D; Snider, Allison; Rosebraugh, Sydney; Devine, Amanda M; Devine, Thomas D; Plaisance, Laetitia; Knowlton, Nancy; Leray, Matthieu
2016-09-01
High-throughput sequencing (HTS) of DNA barcodes (metabarcoding), particularly when combined with standardized sampling protocols, is one of the most promising approaches for censusing overlooked cryptic invertebrate communities. We present biodiversity estimates based on sequencing of the cytochrome c oxidase subunit 1 (COI) gene for coral reefs of the Gulf of Aqaba, a semi-enclosed system in the northern Red Sea. Samples were obtained from standardized sampling devices (Autonomous Reef Monitoring Structures (ARMS)) deployed for 18 months. DNA barcoding of non-sessile specimens >2 mm revealed 83 OTUs in six phyla, of which only 25% matched a reference sequence in public databases. Metabarcoding of the 2 mm - 500 μm and sessile bulk fractions revealed 1197 OTUs in 15 animal phyla, of which only 4.9% matched reference barcodes. These results highlight the scarcity of COI data for cryptobenthic organisms of the Red Sea. Compared with data obtained using similar methods, our results suggest that Gulf of Aqaba reefs are less diverse than two Pacific coral reefs but much more diverse than an Atlantic oyster reef at a similar latitude. The standardized approaches used here show promise for establishing baseline data on biodiversity, monitoring the impacts of environmental change, and quantifying patterns of diversity at regional and global scales.
KEY COMPARISON: Final report of the SIM 60Co absorbed-dose-to-water comparison SIM.RI(I)-K4
NASA Astrophysics Data System (ADS)
Ross, C. K.; Shortt, K. R.; Saravi, M.; Meghzifene, A.; Tovar, V. M.; Barbosa, R. A.; da Silva, C. N.; Carrizales, L.; Seltzer, S. M.
2008-01-01
Transfer chambers were used to compare the standards for 60Co absorbed dose to water maintained by seven laboratories. Six of the laboratories were members of the Sistema Interamericano de Metrología (SIM) regional metrology organization while the seventh was the International Atomic Energy Agency (IAEA) laboratory in Vienna. The National Research Council (NRC) acted as the pilot laboratory for the comparison. Because of the participation of laboratories holding primary standards, the comparison results could be linked to the key comparison reference value maintained by the Bureau International des Poids et Mesures (BIPM). The results for all laboratories were within the expanded uncertainty (two standard deviations) of the reference value. The estimated relative standard uncertainty on the comparison between any pair of laboratories ranged from 0.6% to 1.4%. The largest discrepancy between any two laboratories was 1.3%. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCRI Section I, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
KEY COMPARISON: Final report of the SIM 60Co air-kerma comparison SIM.RI(I)-K1
NASA Astrophysics Data System (ADS)
Ross, C. K.; Shortt, K. R.; Saravi, M.; Meghzifene, A.; Tovar, V. M.; Barbosa, R. A.; da Silva, C. N.; Carrizales, L.; Seltzer, S. M.
2008-01-01
Transfer chambers were used to compare the standards for 60Co air kerma maintained by seven laboratories. Six of the laboratories are members of the Sistema Interamericano de Metrología (SIM) regional metrology organization while the seventh is the International Atomic Energy Agency (IAEA) laboratory in Vienna. The National Research Council (NRC) acted as the pilot laboratory for the comparison. Because of the participation of laboratories holding primary standards, the comparison results could be linked to the key comparison reference value maintained by the Bureau International des Poids et Mesures (BIPM). The results for all laboratories were within the expanded uncertainty (two standard deviations) of the reference value. The estimated relative standard uncertainty of the comparison between any pair of laboratories ranged from 0.5% to 1.0%. The largest discrepancy between any two laboratories was 1.0%. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCRI Section I, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
WOVOdat as a worldwide resource to improve eruption forecasts
NASA Astrophysics Data System (ADS)
Widiwijayanti, Christina; Costa, Fidel; Zar Win Nang, Thin; Tan, Karine; Newhall, Chris; Ratdomopurbo, Antonius
2015-04-01
During periods of volcanic unrest, volcanologists need to interpret signs of unrest to forecast whether an eruption is likely to occur. Some volcanic eruptions are preceded by signs such as seismic activity, surface deformation, or gas emissions; but not all volcanoes give signs, and not all signs are necessarily followed by an eruption. Volcanoes behave differently. Precursory signs of an eruption are sometimes very short, less than an hour, but can also last weeks, months, or even years. Some volcanoes are regularly active and closely monitored, while others aren't. Often, the record of precursors to historical eruptions of a volcano isn't enough to allow a forecast of its future activity, so volcanologists must refer to monitoring data of unrest and eruptions at similar volcanoes. WOVOdat is the World Organization of Volcano Observatories' Database of volcanic unrest: an international effort to develop common standards for compiling and storing data on volcanic unrest in a centralized database, freely web-accessible for reference during volcanic crises, comparative studies, and basic research on pre-eruption processes. WOVOdat will be to volcanology what an epidemiological database is to medicine. We have so far incorporated about 15% of worldwide unrest data into WOVOdat, covering more than 100 eruption episodes and including volcanic background data, eruptive histories, monitoring data (seismic, deformation, gas, hydrology, thermal, fields, and meteorology), monitoring metadata, and supporting data such as reports, images, maps and videos. Nearly all data in WOVOdat are time-stamped and geo-referenced. Along with creating the database of volcanic unrest, WOVOdat is also developing web tools to help users query, visualize, and compare data, which can further be used for probabilistic eruption forecasting.
Reference to WOVOdat will be especially helpful at volcanoes that have not erupted in historical or 'instrumental' time and thus for which no previous data exist. The more data in WOVOdat, the more useful it will be. We actively solicit relevant data contributions from volcano observatories, other institutions, and individual researchers. Detailed information and documentation about the database and how to use it can be found at www.wovodat.org.
Intake of energy and nutrients; harmonization of Food Composition Databases.
Martinez-Victoria, Emilio; Martinez de Victoria, Ignacio; Martinez-Burgos, M Alba
2015-02-26
Food composition databases (FCDBs) provide detailed information about the nutritional composition of foods. The conversion of food consumption into nutrient intake needs a food composition database (FCDB) which lists the mean nutritional values for a given food portion. The limitations of FCDBs are sometimes little known by their users. Multicentre studies have raised several methodological challenges in standardizing nutritional assessments of food composition and nutrient intake across different populations and geographical areas. Differences between FCDBs include those attributed to technical matters, such as the description of foods, calculation of energy and definition of nutrients, analytical methods, and principles for recipe calculation. Such differences need to be identified and eliminated before comparing data from different studies, especially when dietary data are related to a health outcome. Efforts to standardize FCDBs worldwide have been ongoing since 1984 (INFOODS, EPIC, EuroFIR, etc.). Food composition data can be gathered from different sources such as private company analyses, universities, government laboratories, and the food industry. They can also be taken from the scientific literature or even from food labels. There are different proposals for evaluating the quality of food composition data. For the development of an FCDB it is fundamental to document, in as much detail as possible, each data value for the different components and nutrients of a food. The objective of the AECOSAN (Agencia Española de Consumo Seguridad Alimentaria y Nutrición) and BEDCA (Base de Datos Española de Composición de Alimentos) association was the development and support of a reference FCDB in Spain according to the standards to be defined in Europe. BEDCA is currently the only FCDB developed in Spain with compiled and documented data following EuroFIR standards. Copyright AULA MEDICA EDICIONES 2015. Published by AULA MEDICA. All rights reserved.
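The conversion step the abstract describes — multiplying the amount of each food consumed by its per-100 g composition values — can be sketched as follows. The foods and composition figures below are purely illustrative, not drawn from any real FCDB:

```python
# Illustrative per-100 g composition values; a real FCDB entry would carry
# many more nutrients plus documentation for each value.
FCDB = {
    "white bread": {"energy_kcal": 265.0, "protein_g": 9.0},
    "whole milk":  {"energy_kcal": 61.0,  "protein_g": 3.2},
}

def nutrient_intake(consumption_g):
    """Convert food consumption (grams per food) into total nutrient intake."""
    totals = {}
    for food, grams in consumption_g.items():
        for nutrient, per_100g in FCDB[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + grams / 100.0 * per_100g
    return totals

intake = nutrient_intake({"white bread": 50.0, "whole milk": 200.0})
# 50 g bread + 200 g milk -> 254.5 kcal and 10.9 g protein with these values
```

Harmonization matters precisely because two FCDBs may disagree on the per-100 g values, the nutrient definitions, or the food descriptions that feed this calculation.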
Document creation, linking, and maintenance system
Claghorn, Ronald [Pasco, WA
2011-02-15
A document creation and citation system designed to maintain a database of reference documents. The content of a selected document may be automatically scanned and indexed by the system. The selected documents may also be manually indexed by a user prior to the upload. The indexed documents may be uploaded and stored within a database for later use. The system allows a user to generate new documents by selecting content within the reference documents stored within the database and inserting the selected content into a new document. The system allows the user to customize and augment the content of the new document. The system also generates citations to the selected content retrieved from the reference documents. The citations may be inserted into the new document in the appropriate location and format, as directed by the user. The new document may be uploaded into the database and included with the other reference documents. The system also maintains the database of reference documents so that when changes are made to a reference document, the author of a document referencing the changed document will be alerted to make appropriate changes to his document. The system also allows visual comparison of documents so that the user may see differences in the text of the documents.
Selecting a database for literature searches in nursing: MEDLINE or CINAHL?
Brazier, H; Begley, C M
1996-10-01
This study compares the usefulness of the MEDLINE and CINAHL databases for students on post-registration nursing courses. We searched for nine topics, using title words only. Identical searches of the two databases retrieved 1162 references, of which 88% were in MEDLINE, 33% in CINAHL and 20% in both sources. The relevance of the references was assessed by student reviewers. The positive predictive value of CINAHL (70%) was higher than that of MEDLINE (54%), but MEDLINE produced more than twice as many relevant references as CINAHL. The sensitivity of MEDLINE was 85% (95% CI 82-88%), and that of CINAHL was 41% (95% CI 37-45%). To assess the ease of obtaining the references, we developed an index of accessibility, based on the holdings of a number of Irish and British libraries. Overall, 47% of relevant references were available in the students' own library, and 64% could be obtained within 48 hours. There was no difference between the two databases overall, but when two topics relating specifically to the organization of nursing were excluded, references found in MEDLINE were significantly more accessible. We recommend that MEDLINE should be regarded as the first choice of bibliographic database for any subject other than one related strictly to the organization of nursing.
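The retrieval metrics reported in this study follow from simple counts of retrieved and relevant references; a minimal sketch (the counts below are invented for illustration, not the study's data):

```python
import math

def ppv(relevant_retrieved, total_retrieved):
    """Positive predictive value: fraction of retrieved references that are relevant."""
    return relevant_retrieved / total_retrieved

def sensitivity(relevant_retrieved, total_relevant):
    """Fraction of all relevant references that the database retrieved."""
    return relevant_retrieved / total_relevant

def wald_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion, one common choice for such intervals."""
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Invented example: 400 references retrieved, 280 judged relevant,
# out of 500 relevant references existing across both databases.
p = ppv(280, 400)            # 0.70
s = sensitivity(280, 500)    # 0.56
lo, hi = wald_ci(s, 500)
```

A database can thus have the higher precision (as CINAHL did) while the other retrieves far more of the relevant literature (as MEDLINE did).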
MetaBar - a tool for consistent contextual data acquisition and standards compliant submission.
Hankeln, Wolfgang; Buttigieg, Pier Luigi; Fink, Dennis; Kottmann, Renzo; Yilmaz, Pelin; Glöckner, Frank Oliver
2010-06-30
Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors such as sampling location, time, and depth/altitude, generally referred to as metadata or contextual data. The consistent capture and structured submission of these data are crucial for integrated data analysis and ecosystem modeling. The application MetaBar has been developed to support consistent contextual data acquisition. MetaBar is a spreadsheet- and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft Excel spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier, and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL), and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). The MetaBar software supports the typical workflow from data acquisition and field sampling to contextual-data-enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization.
The ample export functionalities and the INSDC submission support enable the exchange of data across disciplines and safeguard contextual data.
MetaBar - a tool for consistent contextual data acquisition and standards compliant submission
2010-01-01
Background Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors such as sampling location, time, and depth/altitude, generally referred to as metadata or contextual data. The consistent capture and structured submission of these data are crucial for integrated data analysis and ecosystem modeling. The application MetaBar has been developed to support consistent contextual data acquisition. Results MetaBar is a spreadsheet- and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft® Excel® spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier, and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL), and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). Conclusion The MetaBar software supports the typical workflow from data acquisition and field sampling to contextual-data-enriched sequence submission to an INSDC database.
The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable the exchange of data across disciplines and safeguard contextual data. PMID:20591175
NASA Technical Reports Server (NTRS)
Decker, Ryan; Burns, Lee; Merry, Carl; Harrington, Brian
2008-01-01
NASA's Space Shuttle utilizes atmospheric thermodynamic properties to evaluate structural dynamics and vehicle flight performance impacts by the atmosphere during ascent. Statistical characteristics of atmospheric thermodynamic properties at Kennedy Space Center (KSC) used in Space Shuttle vehicle assessments are contained in the Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) Database. The database contains tabulations of monthly and annual means (mu), standard deviations (sigma), and skewness of wind and thermodynamic variables, covering wind, thermodynamic, humidity, and hydrostatic parameters at 1 km resolution intervals from 0-30 km and 2 km resolution intervals from 30-70 km. Multiple revisions of the CCAFS RRA database have been developed since the initial RRA was published in 1963 (1971, 1983, and 2006). The Space Shuttle program utilized the 1983 version for deriving "hot" and "cold" atmospheres, atmospheric density dispersions for use in vehicle certification analyses, and selection of atmospheric thermodynamic profiles for use in vehicle ascent design and certification analyses. During STS-114 launch preparations in July 2005, atmospheric density observations between 50-80 kft exceeded the density limits used for aerodynamic ascent heating constraints in vehicle certification analyses. Mission-specific analyses were conducted and concluded that the density bias resulted in small changes to heating rates and integrated heat loading on the vehicle. In 2001, the Air Force Combat Climatology Center began developing an updated RRA for CCAFS.
Structured Forms Reference Set of Binary Images (SFRS)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access) The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Pantelic-Babic, J.; Sofranac, Z.; Cincar Vujovic, T.
2016-01-01
A comparison of the Josephson array voltage standards of the Bureau International des Poids et Mesures (BIPM) and the Directorate of Measures and Precious Metals (DMDM), Belgrade, Serbia, was carried out in June 2015 at the level of 10 V. For this exercise, options A and B of the BIPM.EM-K10.b comparison protocol were applied. Option B required the BIPM to provide a reference voltage for measurement by the DMDM using its Josephson standard with its own measuring device. Option A required the DMDM to provide a reference voltage with its Josephson voltage standard for measurement by the BIPM using an analogue nanovoltmeter and associated measurement loop. Since no sufficiently stable voltage could be achieved in this configuration, a digital detector was used. In all cases the BIPM array was kept floating from ground. The final results were in good agreement within the combined relative standard uncertainty of 1.5 parts in 10^10 for the nominal voltage of 10 V. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Vlad, D.
2015-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.a and b, a comparison of the 1 V and 10 V voltage reference standards of the BIPM and the Service Métrologie—Metrologische Dienst (SMD), Brussels, Belgium, was carried out from October to November 2014. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM4 (Z4) and BIPM5 (Z5), were transported by freight to SMD and back to the BIPM. At SMD, the reference standard for DC voltage is a Josephson Voltage Standard (JVS). The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the primary standard. At the BIPM, the travelling standards were calibrated, before and after the measurements at SMD, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by SMD, at the level of 1.018 V and 10 V, at SMD, USMD, and those assigned by the BIPM, at the BIPM, UBIPM, at the reference date of 5 November 2014: USMD - UBIPM = 0.14 μV; uc = 0.07 μV, at 1 V; USMD - UBIPM = 0.09 μV; uc = 0.49 μV, at 10 V, where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at SMD, based on KJ-90, and the uncertainty related to the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Sengebush, F.
2015-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.a and b, a comparison of the 1 V and 10 V voltage reference standards of the BIPM and the Justervesenet (JV), Kjeller, Norway, was carried out from January to February 2015. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM4 (Z4) and BIPM5 (Z5), were transported by freight to JV and back to the BIPM. At JV, the reference standard for DC voltage is a Josephson Voltage Standard. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the primary standard. At the BIPM, the travelling standards were calibrated, before and after the measurements at JV, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by JV, at the level of 1.018 V and 10 V, at JV, UJV, and those assigned by the BIPM, at the BIPM, UBIPM, at the reference date of 28 January 2015: UJV - UBIPM = 0.23 μV; uc = 0.03 μV, at 1 V; UJV - UBIPM = 0.63 μV; uc = 0.28 μV, at 10 V, where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at JV, based on KJ-90, and the uncertainty related to the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Pimsut, S.
2015-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.a and b, a comparison of the 1 V and 10 V voltage reference standards of the BIPM and the National Institute of Metrology (Thailand), NIMT, was carried out from October to December 2014. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPMA (ZA) and BIPM6 (Z6), were transported by freight to NIMT and back to the BIPM. At NIMT, the reference standard for DC voltage is a Josephson Voltage Standard. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the primary standard. At the BIPM, the travelling standards were calibrated, before and after the measurements at NIMT, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by NIMT, at the level of 1.018 V and 10 V, at NIMT, UNIMT, and those assigned by the BIPM, at the BIPM, UBIPM, at the reference date of 23 November 2014: UNIMT - UBIPM = 0.16 μV; uc = 0.14 μV, at 1 V; UNIMT - UBIPM = -0.03 μV; uc = 0.11 μV, at 10 V, where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at NIMT, based on KJ-90, and the uncertainty related to the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
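The temperature and pressure corrections mentioned in these Zener comparison reports amount to a linear adjustment of each travelling standard's output. A minimal sketch, with purely illustrative coefficients and reference conditions (real coefficients are determined individually for each travelling standard):

```python
def correct_zener_emf(v_measured, temp_c, pressure_hpa,
                      c_temp, c_press,
                      ref_temp_c=25.0, ref_press_hpa=1013.25):
    """Remove the linear temperature and pressure dependence from a Zener reading.

    c_temp is in V per degC and c_press in V per hPa; the reference conditions
    here are assumptions for the sketch, not values from the reports.
    """
    return (v_measured
            - c_temp * (temp_c - ref_temp_c)
            - c_press * (pressure_hpa - ref_press_hpa))

# Illustrative coefficients of 0.1 uV/degC and 0.02 uV/hPa on a 10 V output:
v = correct_zener_emf(10.000001, temp_c=27.0, pressure_hpa=1023.25,
                      c_temp=1e-7, c_press=2e-8)
# Both excursions pull the reading up by 0.2 uV, so 0.4 uV is subtracted.
```

Corrections at this level matter because the comparison differences themselves are only a few tenths of a microvolt.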
SFINX-a drug-drug interaction database designed for clinical decision support systems.
Böttiger, Ylva; Laine, Kari; Andersson, Marine L; Korhonen, Tuomas; Molin, Björn; Ovesjö, Marie-Louise; Tirkkonen, Tuire; Rane, Anders; Gustafsson, Lars L; Eiermann, Birgit
2009-06-01
The aim was to develop a drug-drug interaction database (SFINX) to be integrated into decision support systems or to be used in website solutions for clinical evaluation of interactions. Key elements such as substance properties and names, drug formulations, text structures and references were defined before development of the database. Standard operating procedures for literature searches, text writing rules and a classification system for clinical relevance and documentation level were determined. ATC codes, CAS numbers and country-specific codes for substances were identified and quality assured to ensure safe integration of SFINX into other data systems. Much effort was put into giving short and practical advice regarding clinically relevant drug-drug interactions. SFINX includes over 8,000 interaction pairs and is integrated into Swedish and Finnish computerised decision support systems. Over 31,000 physicians and pharmacists are receiving interaction alerts through SFINX. User feedback is collected for continuous improvement of the content. SFINX is a potentially valuable tool delivering instant information on drug interactions during prescribing and dispensing.
FIREDOC users manual, 3rd edition
NASA Astrophysics Data System (ADS)
Jason, Nora H.
1993-12-01
FIREDOC is the on-line bibliographic database which reflects the holdings (published reports, journal articles, conference proceedings, books, and audiovisual items) of the Fire Research Information Services (FRIS) at the Building and Fire Research Laboratory (BFRL), National Institute of Standards and Technology (NIST). This manual provides step-by-step procedures for entering and exiting the database via telecommunication lines, as well as a number of techniques for searching the database and processing the results of the searches. This Third Edition is necessitated by the change to a UNIX platform. The new computer allows for faster response time if searching via a modem and, in addition, offers Internet accessibility. FIREDOC may be used with personal computers, using DOS or Windows, or with Macintosh computers and workstations. A new section on how to access the Internet is included, as well as one on how to obtain the references of interest. Appendix F: Quick Guide to Getting Started will be useful to both modem and Internet users.
DNA variant databases improve test accuracy and phenotype prediction in Alport syndrome.
Savige, Judy; Ars, Elisabet; Cotton, Richard G H; Crockett, David; Dagher, Hayat; Deltas, Constantinos; Ding, Jie; Flinter, Frances; Pont-Kingdon, Genevieve; Smaoui, Nizar; Torra, Roser; Storey, Helen
2014-06-01
X-linked Alport syndrome is a form of progressive renal failure caused by pathogenic variants in the COL4A5 gene. More than 700 variants have been described and a further 400 are estimated to be known to individual laboratories but are unpublished. The major genetic testing laboratories for X-linked Alport syndrome worldwide have established a Web-based database for published and unpublished COL4A5 variants ( https://grenada.lumc.nl/LOVD2/COL4A/home.php?select_db=COL4A5 ). This conforms with the recommendations of the Human Variome Project: it uses the Leiden Open Variation Database (LOVD) format, describes variants according to the human reference sequence with standardized nomenclature, indicates likely pathogenicity and associated clinical features, and credits the submitting laboratory. The database includes non-pathogenic and recurrent variants, and is linked to another COL4A5 mutation database and relevant bioinformatics sites. Access is free. Increasing the number of COL4A5 variants in the public domain helps patients, diagnostic laboratories, clinicians, and researchers. The database improves the accuracy and efficiency of genetic testing because its variants are already categorized for pathogenicity. The description of further COL4A5 variants and clinical associations will improve our ability to predict phenotype and our understanding of collagen IV biochemistry. The database for X-linked Alport syndrome represents a model for databases in other inherited renal diseases.
NASA Astrophysics Data System (ADS)
Viallon, Joële; Moussay, Philippe; Wielgosz, Robert; Hodges, Joe; Norris, James E.
2017-01-01
As part of the ongoing key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the National Institute of Standards and Technology (NIST) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Prototype of web-based database of surface wave investigation results for site classification
NASA Astrophysics Data System (ADS)
Hayashi, K.; Cakir, R.; Martin, A. J.; Craig, M. S.; Lorenzo, J. M.
2016-12-01
As active and passive surface wave methods are becoming popular for evaluating the site response of earthquake ground motion, demand for a database of investigation results is also increasing. Seismic ground motion depends not only on 1D velocity structure but also on 2D and 3D structures, so spatial information on S-wave velocity must be considered in ground motion prediction. A database can support the construction of 2D and 3D underground models. Inversion of surface wave data is essentially non-unique, so other information must be incorporated into the processing. A database of existing geophysical, geological, and geotechnical investigation results can provide indispensable information to improve the accuracy and reliability of investigations. Most investigations, however, are carried out by individual organizations, and investigation results are rarely stored in a unified and organized database. To study and discuss an appropriate database and digital standard format for surface wave investigations, we developed a prototype of a web-based database to store observed data and processing results of surface wave investigations that we have performed at more than 400 sites in the U.S. and Japan. The database was constructed on a web server using MySQL and PHP so that users can access the database through the Internet from anywhere with any device. All data are registered in the database with their locations, and users can search geophysical data through Google Maps. The database stores dispersion curves, horizontal-to-vertical spectral ratios, and S-wave velocity profiles at each site, saved in XML files as digital data so that users can review and reuse them. The database also stores a published 3D deep basin and crustal structure model, which users can refer to during the processing of surface wave data.
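A geo-searchable store like the one described (MySQL/PHP in the original) reduces to a table keyed by location plus bounding-box queries. The schema and field names below are assumptions for illustration, using SQLite so the sketch is self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE site (
        id        INTEGER PRIMARY KEY,
        name      TEXT,
        latitude  REAL,
        longitude REAL,
        vs30      REAL   -- time-averaged S-wave velocity to 30 m depth (m/s)
    )
""")
conn.execute(
    "INSERT INTO site (name, latitude, longitude, vs30) VALUES (?, ?, ?, ?)",
    ("Example Site", 35.0, -120.0, 360.0),
)
# A bounding-box query of the kind a map front end would issue.
rows = conn.execute(
    "SELECT name, vs30 FROM site "
    "WHERE latitude BETWEEN 34 AND 36 AND longitude BETWEEN -121 AND -119"
).fetchall()
```

In the real system the per-site dispersion curves, H/V spectral ratios, and velocity profiles would be attached to each row, e.g. as references to the stored XML files.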
A Database of Herbaceous Vegetation Responses to Elevated Atmospheric CO2 (NDP-073)
Jones, Michael H [The Ohio State Univ., Columbus, OH (United States); Curtis, Peter S [The Ohio State Univ., Columbus, OH (United States); Cushman, Robert M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brenkert, Antoinette L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
1999-01-01
To perform a statistically rigorous meta-analysis of research results on the response by herbaceous vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled from the published literature. Seventy-eight independent CO2-enrichment studies, covering 53 species and 26 response parameters, reported mean response, sample size, and variance of the response (either as standard deviation or standard error). An additional 43 studies, covering 25 species and 6 response parameters, did not report variances. This numeric data package accompanies the Carbon Dioxide Information Analysis Center's (CDIAC's) NDP-072, which provides similar information for woody vegetation. This numeric data package contains a 30-field data set of CO2-exposure experiment responses by herbaceous plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data sets, and this documentation file (which includes SAS and Fortran codes to read the ASCII data file; SAS is a registered trademark of the SAS Institute, Inc., Cary, North Carolina 27511).
Schousboe, John T; Tanner, S Bobo; Leslie, William D
2014-01-01
Whether to use young male or young female reference data to calculate bone mineral density (BMD) T-scores in men remains controversial. The Third National Health and Nutrition Examination Survey (NHANES III) data show that the mean and standard deviation of femoral neck and total hip BMD are greater in young men than in young women, and therefore differences in T-scores at these sites using NHANES III female vs male norms become smaller as BMD decreases. In contrast, manufacturer-specific reference databases generally assume similar standard deviations of BMD in men and women. Using NHANES III reference data for the femoral neck and total hip, respectively, we found that men with T-scores of -2.5 when young male norms are used have T-scores of -2.4 and -2.3 when young female norms are used. Using manufacturer-specific reference data, we found that men with T-scores of -2.5 when young male norms are used at the femoral neck, total hip, lumbar spine, or one-third forearm would have T-scores ranging from -2.4 to -0.4 when young female norms are used, depending on skeletal site and densitometer manufacturer. The change in the proportion of men diagnosed with osteoporosis when young female norms are used instead of young male reference data differs substantially according to skeletal site and densitometer manufacturer. Copyright © 2014 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
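The T-score comparisons above rest on one formula: the deviation of a patient's BMD from the young-adult reference mean, in units of the reference standard deviation. A minimal sketch (the reference means and SDs below are invented for illustration, not NHANES III or manufacturer values):

```python
def t_score(bmd, ref_mean, ref_sd):
    """T-score: number of SDs the BMD lies above/below the young-adult reference mean."""
    return (bmd - ref_mean) / ref_sd

# Invented femoral-neck reference values (g/cm^2): in this illustration young
# men have both a higher mean and a larger SD than young women.
male_mean, male_sd = 0.93, 0.14
female_mean, female_sd = 0.86, 0.11

bmd = male_mean - 2.5 * male_sd                   # a man at T = -2.5 on male norms
t_male = t_score(bmd, male_mean, male_sd)         # -2.5 by construction
t_female = t_score(bmd, female_mean, female_sd)   # shifts when mean and SD differ
```

Because the same measured BMD is divided by a different SD and offset by a different mean, the diagnostic threshold of T = -2.5 captures different sets of men depending on which reference population is used.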
Jacob, Francis; Guertler, Rea; Naim, Stephanie; Nixdorf, Sheri; Fedier, André; Hacker, Neville F.; Heinzelmann-Schwarz, Viola
2013-01-01
Reverse transcription quantitative polymerase chain reaction (RT-qPCR) is a standard technique in most laboratories. The selection of reference genes is essential for data normalization, and the selection of suitable reference genes remains critical. Our aims were to 1) review the literature since implementation of the MIQE guidelines in order to identify the degree of acceptance; 2) compare various algorithms in terms of expression stability; and 3) identify a set of suitable and most reliable reference genes for a variety of human cancer cell lines. A PubMed database review was performed and publications since 2009 were selected. Twelve putative reference genes were profiled in normal and various cancer cell lines (n = 25) using 2-step RT-qPCR. Investigated reference genes were ranked according to their expression stability by five algorithms (geNorm, Normfinder, BestKeeper, comparative ΔCt, and RefFinder). Our review revealed 37 publications, two thirds on patient samples and one third on cell lines. qPCR efficiency was given in 68.4% of all publications, but only 28.9% of all studies provided RNA/cDNA amounts and standard curves. The geNorm and Normfinder algorithms were used in combination in 60.5%. In our selection of 25 cancer cell lines, we identified HSPCB, RRN18S, and RPS13 as the most stably expressed reference genes. In the subset of ovarian cancer cell lines, the reference genes were PPIA, RPS13, and SDHA, clearly demonstrating the necessity of selecting genes depending on the research focus. Moreover, a cohort of at least three suitable reference genes needs to be established in advance of the experiments, according to the guidelines. For establishing a set of reference genes for gene normalization, we recommend the use of ideally three reference genes selected by at least three stability algorithms. The unfortunate lack of compliance with the MIQE guidelines reflects that these need to be further established in the research community. PMID:23554992
Prevalence of hypertension among adolescents: systematic review and meta-analysis
Gonçalves, Vivian Siqueira Santos; Galvão, Taís Freire; de Andrade, Keitty Regina Cordeiro; Dutra, Eliane Said; Bertolin, Maria Natacha Toral; de Carvalho, Kenia Mara Baiocchi; Pereira, Mauricio Gomes
2016-01-01
OBJECTIVE To estimate the prevalence of hypertension among adolescent Brazilian students. METHODS A systematic review of school-based cross-sectional studies was conducted. The articles were searched in the databases MEDLINE, Embase, Scopus, LILACS, SciELO, Web of Science, the CAPES thesis database, and Trip Database. In addition, we examined the reference lists of relevant studies to identify potentially eligible articles. No restrictions regarding publication date, language, or publication status were applied. The studies were selected by two independent evaluators, who also extracted the data and assessed the methodological quality following eight criteria related to sampling, measuring blood pressure, and presenting results. The meta-analysis was performed using a random-effects model, and analyses were conducted to investigate heterogeneity. RESULTS We retrieved 1,577 articles from the search and included 22 in the review. The included articles corresponded to 14,115 adolescents, 51.2% (n = 7,230) female. We observed a variety of techniques, equipment, and references used. The prevalence of hypertension was 8.0% (95%CI 5.0–11.0; I2 = 97.6%): 9.3% (95%CI 5.6–13.6; I2 = 96.4%) in males and 6.5% (95%CI 4.2–9.1; I2 = 94.2%) in females. The meta-regression failed to identify the causes of the heterogeneity among studies. CONCLUSIONS Despite the differences found in the methodologies of the included studies, the results of this systematic review indicate that hypertension is prevalent in the Brazilian adolescent school population. For future investigations, we suggest the standardization of techniques, equipment, and references, aiming at improving the methodological quality of the studies. PMID:27253903
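Random-effects pooling of prevalences with an I² heterogeneity statistic, as reported above, is typically done with the DerSimonian–Laird estimator. A minimal sketch, using hypothetical study proportions and variances rather than the review's data:

```python
import math

def dersimonian_laird(estimates, variances):
    """DerSimonian-Laird random-effects pooling, as commonly used for
    prevalence meta-analysis. Returns the pooled estimate, a 95% CI,
    and the I^2 heterogeneity statistic (in percent)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical prevalences and within-study variances for four studies
pooled, ci, i2 = dersimonian_laird([0.080, 0.093, 0.065, 0.120],
                                   [0.0004, 0.0009, 0.0003, 0.0008])
```

When the between-study variance tau² is large, as the I² values near 97% suggest for this review, the random-effects weights flatten toward equality and the confidence interval widens accordingly.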
Vullo, Carlos M; Romero, Magdalena; Catelli, Laura; Šakić, Mustafa; Saragoni, Victor G; Jimenez Pleguezuelos, María Jose; Romanini, Carola; Anjos Porto, Maria João; Puente Prieto, Jorge; Bofarull Castro, Alicia; Hernandez, Alexis; Farfán, María José; Prieto, Victoria; Alvarez, David; Penacino, Gustavo; Zabalza, Santiago; Hernández Bolaños, Alejandro; Miguel Manterola, Irati; Prieto, Lourdes; Parsons, Thomas
2016-03-01
The GHEP-ISFG Working Group has recognized the importance of assisting DNA laboratories to gain expertise in handling DVI or missing persons identification (MPI) projects, which involve the need for large-scale genetic profile comparisons. Eleven laboratories participated in a DNA matching exercise to identify victims from a hypothetical conflict with 193 missing persons. The post mortem database comprised 87 skeletal remain profiles from a secondary mass grave displaying a minimal number of 58 individuals with evidence of commingling. The reference database was represented by 286 family reference profiles with diverse pedigrees. The goal of the exercise was to correctly discover re-associations and family matches. The results of direct matching for commingled remains re-associations were correct and fully concordant among all laboratories. However, the kinship analysis for missing persons identifications showed variable results among the participants. One group of laboratories produced correct, concordant results, but nearly half of the others showed discrepant results, exhibiting likelihood ratio differences of several orders of magnitude in some cases. Three main errors were detected: (a) some laboratories did not use the complete reference family genetic data to report the match with the remains, (b) the identity and/or non-identity hypotheses were sometimes wrongly expressed in the likelihood ratio calculations, and (c) many laboratories did not properly evaluate the prior odds for the event. The results suggest that large-scale profile comparison for DVI or MPI is a challenge for forensic genetics laboratories, and that the statistical treatment of DNA matching and the Bayesian framework should be better standardized among laboratories. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
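Error (c), neglect of the prior odds, is easy to show numerically: posterior odds are the likelihood ratio times the prior odds, and with 193 missing persons a flat prior for any one candidate identity is roughly 1 in 193. The LR value below is illustrative only, not taken from the exercise.

```python
def posterior_probability(lr, prior_odds):
    """Bayes: posterior odds = likelihood ratio x prior odds,
    returned here as a posterior probability of identity."""
    post_odds = lr * prior_odds
    return post_odds / (1.0 + post_odds)

# The same LR under a naive 1:1 prior vs. a flat 1-in-193 prior
lr = 10_000.0
naive = posterior_probability(lr, 1.0)
informed = posterior_probability(lr, 1.0 / 193.0)
```

An LR of 10,000 that looks conclusive under a naive even prior drops noticeably once the realistic 1-in-193 prior is applied, which is exactly why ignoring the prior odds inflates the apparent strength of an identification.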
ERIC Educational Resources Information Center
Harzbecker, Joseph, Jr.
1993-01-01
Describes the National Institute of Health's GenBank DNA sequence database and how it can be accessed through the Internet. A real reference question, which was answered successfully using the database, is reproduced to illustrate and elaborate on the potential of the Internet for information retrieval. (10 references) (KRN)
ERIC Educational Resources Information Center
Kurhan, Scott H.; Griffing, Elizabeth A.
2011-01-01
Reference services in public libraries are changing dramatically. The Internet, online databases, and shrinking budgets are all making it necessary for non-traditional reference staff to become familiar with online reference tools. Recognizing the need for cross-training, Chesapeake Public Library (CPL) developed a program called the Database…
Planning for CD-ROM in the Reference Department.
ERIC Educational Resources Information Center
Graves, Gail T.; And Others
1987-01-01
Outlines the evaluation criteria used by the reference department at the Williams Library at the University of Mississippi in selecting databases and hardware used in CD-ROM workstations. The factors discussed include database coverage, costs, and security. (CLB)
Structure elucidation of organic compounds aided by the computer program system SCANNET
NASA Astrophysics Data System (ADS)
Guzowska-Swider, B.; Hippe, Z. S.
1992-12-01
Recognition of chemical structure is a very important problem currently solved by molecular spectroscopy, particularly IR, UV, NMR and Raman spectroscopy, and mass spectrometry. Nowadays, solution of the problem is frequently aided by the computer. SCANNET is a computer program system for structure elucidation of organic compounds, developed by our group. The structure recognition of an unknown substance is made by comparing its spectrum with successive reference spectra of standard compounds, i.e. chemical compounds of known chemical structure, stored in a spectral database. The SCANNET system consists of six different spectral databases for the following analytical methods: IR, UV, 13C-NMR, 1H-NMR and Raman spectroscopy, and mass spectrometry. To elucidate a structure, a chemist can use one of these spectral methods or a combination of them and search the appropriate databases. As a result of searching each spectral database, the user obtains a list of chemical substances whose spectra are identical and/or similar to the spectrum input into the computer. The final information obtained from searching the spectral databases is a list of chemical substances having all the examined spectra, for each type of spectroscopy, identical or similar to those of the unknown compound.
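The compare-against-reference-spectra step can be sketched as a similarity search over binned intensity vectors. Cosine similarity stands in here for whatever matching metric SCANNET actually uses, and the compound names and spectra are toy data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two binned intensity vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a))
           * math.sqrt(sum(y * y for y in b)))
    return num / den if den else 0.0

def search(query, library, threshold=0.95):
    """Return reference spectra at least `threshold` similar to the
    query, best match first (toy metric and data, not SCANNET's)."""
    hits = [(name, cosine(query, spec)) for name, spec in library.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)

# Hypothetical 4-bin spectra for three reference compounds
library = {"ethanol":  [0.1, 0.9, 0.3, 0.0],
           "methanol": [0.2, 0.8, 0.1, 0.1],
           "benzene":  [0.9, 0.0, 0.0, 0.7]}
hits = search([0.1, 0.85, 0.3, 0.0], library)
```

Combining several spectral methods, as the abstract describes, amounts to intersecting the hit lists from each database so that only compounds matching all examined spectra survive.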
Normal Databases for the Relative Quantification of Myocardial Perfusion
Rubeaux, Mathieu; Xu, Yuan; Germano, Guido; Berman, Daniel S.; Slomka, Piotr J.
2016-01-01
Purpose of review Myocardial perfusion imaging (MPI) with SPECT is performed clinically worldwide to detect and monitor coronary artery disease (CAD). MPI allows an objective quantification of myocardial perfusion at stress and rest. This established technique relies on normal databases to compare patient scans against reference normal limits. In this review, we aim to introduce the process of MPI quantification with normal databases and describe the associated perfusion quantitative measures that are used. Recent findings New equipment and new software reconstruction algorithms have been introduced which require the development of new normal limits. The appearance and regional count variations of a normal MPI scan may differ between these new scanners and standard Anger cameras. Therefore, these new systems may require the determination of new normal limits to achieve optimal accuracy in relative myocardial perfusion quantification. Accurate diagnostic and prognostic results rivaling those obtained by expert readers can be obtained by this widely used technique. Summary Throughout this review, we emphasize the importance of the different normal databases and the need for specific databases relative to distinct imaging procedures. Use of appropriate normal limits allows optimal quantification of MPI by taking into account subtle image differences due to the hardware and software used, and the population studied. PMID:28138354
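Comparing a patient scan against normal limits usually reduces to a per-segment z-score against the normal database's mean and standard deviation. The segment names, counts, and cutoff below are hypothetical, not any vendor's algorithm.

```python
def perfusion_defects(patient, normal_mean, normal_sd, z_cutoff=-2.5):
    """Flag segments whose count z-score against the normal database
    falls below the cutoff (toy 3-segment example, not a real
    17-segment model or commercial implementation)."""
    flagged = []
    for seg, count in patient.items():
        z = (count - normal_mean[seg]) / normal_sd[seg]
        if z < z_cutoff:
            flagged.append(seg)
    return flagged

# Hypothetical normalized segment counts
patient     = {"apical": 55, "septal": 72, "lateral": 70}
normal_mean = {"apical": 70, "septal": 75, "lateral": 72}
normal_sd   = {"apical": 5,  "septal": 6,  "lateral": 5}
defects = perfusion_defects(patient, normal_mean, normal_sd)
```

This is also why scanner-specific normal databases matter: if the means and standard deviations were derived on a different camera or reconstruction, the z-scores shift and segments can be flagged or missed spuriously.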
Tidball, Moira M; Tidball, Keith G; Curtis, Paul
2014-01-01
We highlighted gaps in nutritional data for wild game meat and wild caught fish that have a regulated harvesting season in New York State, and examined the possible role that wild game and fish play in current trends towards consumption of local, healthy meat sources. This project is part of a larger study that examines family food decision-making, and explores possibilities for leveraging the locavore movement in support of consumption of wild game and fish.
TRENDS: A flight test relational database user's guide and reference manual
NASA Technical Reports Server (NTRS)
Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.
1994-01-01
This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. This report has been written to assist novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for the new and intermediate-skilled users. Examples of the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage of TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a referable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test Database Management System, Jan. 1990, written by the same authors.
Structured Forms Reference Set of Binary Images II (SFRS2)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access) The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms with the IRS 1040 Package X for the year 1988.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 488
NASA Technical Reports Server (NTRS)
1999-01-01
This report lists reports, articles and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract.
ReprDB and panDB: minimalist databases with maximal microbial representation.
Zhou, Wei; Gay, Nicole; Oh, Julia
2018-01-18
Profiling of shotgun metagenomic samples is hindered by a lack of unified microbial reference genome databases that (i) assemble genomic information from all open access microbial genomes, (ii) have relatively small sizes, and (iii) are compatible to various metagenomic read mapping tools. Moreover, computational tools to rapidly compile and update such databases to accommodate the rapid increase in new reference genomes do not exist. As a result, database-guided analyses often fail to profile a substantial fraction of metagenomic shotgun sequencing reads from complex microbiomes. We report pipelines that efficiently traverse all open access microbial genomes and assemble non-redundant genomic information. The pipelines result in two species-resolution microbial reference databases of relatively small sizes: reprDB, which assembles microbial representative or reference genomes, and panDB, for which we developed a novel iterative alignment algorithm to identify and assemble non-redundant genomic regions in multiple sequenced strains. With the databases, we managed to assign taxonomic labels and genome positions to the majority of metagenomic reads from human skin and gut microbiomes, demonstrating a significant improvement over a previous database-guided analysis on the same datasets. reprDB and panDB leverage the rapid increases in the number of open access microbial genomes to more fully profile metagenomic samples. Additionally, the databases exclude redundant sequence information to avoid inflated storage or memory space and indexing or analyzing time. Finally, the novel iterative alignment algorithm significantly increases efficiency in pan-genome identification and can be useful in comparative genomic analyses.
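The panDB idea of assembling only non-redundant genomic regions can be sketched with a toy k-mer filter: each strain contributes only material whose k-mers have not been seen in previously processed strains. The real pipeline uses an iterative alignment algorithm; this simplification just shows why a redundant strain adds nothing to the database.

```python
def build_pan(sequences, k=8):
    """Toy sketch of non-redundant pan-genome accumulation: keep only
    genomic material whose k-mers were unseen in earlier strains.
    Stand-in for panDB's iterative alignment, not its actual method."""
    seen = set()
    pan = []
    for seq in sequences:
        novel = []
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer not in seen:
                seen.add(kmer)
                novel.append(kmer)
        if novel:
            pan.append("".join(novel))
    return pan

# The second strain is identical to the first and contributes nothing;
# the third is entirely new and is kept.
pan = build_pan(["ACGTACGTAAGG", "ACGTACGTAAGG", "TTTTCCCCGGGGAAAA"])
```

Excluding redundant sequence in this way is what keeps the database size, index size, and alignment time from growing linearly with the number of sequenced strains.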
NASA Astrophysics Data System (ADS)
Power, O.; Chayramy, R.; Solve, S.; Stock, M.
2014-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.b, a comparison of the 10 V voltage reference standards of the BIPM and the National Standards Authority of Ireland-National Metrology Laboratory (NSAI-NML), Dublin, Ireland, was carried out from January to February 2013. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM_8 (Z8) and BIPM_9 (Z9), were transported by freight to NSAI-NML. At NSAI-NML, the reference standard for DC voltage at the 10 V level consists of a group of characterized Zener diode-based electronic voltage standards. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the group standard. At the BIPM the travelling standards were calibrated, before and after the measurements at NSAI-NML, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the value assigned to the DC voltage standard by NSAI-NML, at the level of 10 V, at NSAI-NML, UNML, and that assigned by the BIPM, at the BIPM, UBIPM, at the reference date of 5 February 2013: UNML - UBIPM = -0.63 µV, uc = 1.31 µV, at 10 V, where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at NSAI-NML, based on KJ-90, and the uncertainty related to the comparison. The comparison results show that the voltage standards maintained by NSAI-NML and the BIPM were equivalent, within their stated standard uncertainties, on the mean date of the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/.
The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
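The combined standard uncertainty uc quoted in these comparison reports is, in the usual GUM treatment, the root-sum-of-squares of the individual components (volt representation at each laboratory, transfer-standard behaviour, comparison measurements). The component breakdown below is hypothetical, chosen only to illustrate how a 1.31 µV total can arise and how equivalence is then judged.

```python
import math

def combined_uncertainty(components):
    """Root-sum-of-squares combination of standard uncertainty
    components (GUM-style, uncorrelated components assumed)."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical breakdown at 10 V, in µV: Zener transfer-standard
# behaviour (dominant), volt representations, comparison measurements
u_c = combined_uncertainty([1.25, 0.30, 0.25])
difference = -0.63  # µV, the reported UNML - UBIPM
equivalent = abs(difference) < 2 * u_c  # within expanded uncertainty (k = 2)
```

With the travelling Zener standards, the transfer behaviour (drift, temperature, and pressure sensitivity) typically dominates the budget, which is why uc at 10 V here is much larger than the Josephson-level uncertainties themselves.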
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Abdel Mageed, Hala M.; Aladdin, Omar M.; Raouf, M. Helmy A.
2015-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.a and b, a comparison of the 1 V and 10 V voltage reference standards of the BIPM and the National Institute for Standards (NIS), Giza, Egypt, was carried out from August to September 2014. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPMB (ZB) and BIPMC (ZC), were transported as hand luggage on board an airplane to NIS and back to BIPM. At NIS, the reference standard for DC voltage is a Josephson Voltage Standard. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the primary standard. At the BIPM, the travelling standards were calibrated, before and after the measurements at NIS, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to the DC voltage standards by NIS, at the levels of 1.018 V and 10 V, at NIS, UNIS, and those assigned by the BIPM, at the BIPM, UBIPM, at the reference date of 7 September 2014: UNIS - UBIPM = 0.09 µV (uc = 0.08 µV) at 1 V; UNIS - UBIPM = 0.22 µV (uc = 0.14 µV) at 10 V, where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at NIS, based on KJ-90, and the uncertainty related to the comparison. This is a satisfactory result. The comparison result shows that the voltage standards maintained by NIS and the BIPM were equivalent, within their stated standard uncertainties, on the mean date of the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/.
The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Power, O.; Stock, M.
2014-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.b, a comparison of the 10 V voltage reference standards of the BIPM and the National Standards Authority of Ireland-National Metrology Laboratory (NSAI-NML), Dublin, Ireland, was carried out in February and March 2014. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM_4 (Z4) and BIPM_5 (Z5), were transported by freight to NSAI-NML. At NSAI-NML, the reference standard for DC voltage at the 10 V level consists of a group of characterized Zener diode-based electronic voltage standards. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the group standard. At the BIPM the travelling standards were calibrated, before and after the measurements at NSAI-NML, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the value assigned to the DC voltage standard by NSAI-NML, at the level of 10 V, at NSAI-NML, UNML, and that assigned by the BIPM, at the BIPM, UBIPM, at the reference date of 10 March 2014: UNML - UBIPM = -0.64 µV, uc = 1.35 µV, at 10 V, where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at NSAI-NML, based on KJ-90, and the uncertainty related to the comparison. The comparison results show that the voltage standards maintained by NSAI-NML and the BIPM were equivalent, within their stated standard uncertainties, on the mean date of the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/.
The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
The land management and operations database (LMOD)
USDA-ARS?s Scientific Manuscript database
This paper presents the design, implementation, deployment, and application of the Land Management and Operations Database (LMOD). LMOD is the single authoritative source for land management and operations reference data within the USDA enterprise data warehouse. LMOD supports modeling appl...
Fire-induced water-repellent soils, an annotated bibliography
Kalendovsky, M.A.; Cannon, S.H.
1997-01-01
The development and nature of water-repellent, or hydrophobic, soils are important issues in evaluating hillslope response to fire. The following annotated bibliography was compiled to consolidate existing published research on the topic. Emphasis was placed on the types, causes, effects and measurement techniques of water repellency, particularly with respect to wildfires and prescribed burns. Each annotation includes a general summary of the respective publication, as well as highlights of interest to this focus. Although some references on the development of water repellency without fires, the chemistry of hydrophobic substances, and remediation of water-repellent conditions are included, coverage of these topics is not intended to be comprehensive. To develop this database, the GeoRef, Agricola, and Water Resources Abstracts databases were searched for appropriate references, and the bibliographies of each reference were then reviewed for additional entries. Additional references will be added to this bibliography as they become available. The annotated bibliography can be accessed on the Web at http://geohazards.cr.usgs.gov/html_files/landslides/ofr97-720/biblio.html. A database consisting of the references and keywords is available through a link at the above address. This database was compiled using EndNote2 plus software by Niles and Associates, which is required to search the database.
Materials, processes, and environmental engineering network
NASA Technical Reports Server (NTRS)
White, Margo M.
1993-01-01
The Materials, Processes, and Environmental Engineering Network (MPEEN) was developed as a central holding facility for materials testing information generated by the Materials and Processes Laboratory. It contains information from other NASA centers and outside agencies, and also includes the NASA Environmental Information System (NEIS) and Failure Analysis Information System (FAIS) data. Environmental replacement materials information is a newly developed focus of MPEEN. This database is the NASA Environmental Information System, NEIS, which is accessible through MPEEN. Environmental concerns are addressed regarding materials identified by the NASA Operational Environment Team, NOET, to be hazardous to the environment. An environmental replacement technology database is contained within NEIS. Environmental concerns about materials are identified by NOET, and control or replacement strategies are formed. This database also contains the usage and performance characteristics of these hazardous materials. In addition to addressing environmental concerns, MPEEN contains one of the largest materials databases in the world. Over 600 users access this network on a daily basis. There is information available on failure analysis, metals and nonmetals testing, materials properties, standard and commercial parts, foreign alloy cross-reference, Long Duration Exposure Facility (LDEF) data, and Materials and Processes Selection List data.
Postel, Alexander; Schmeiser, Stefanie; Zimmermann, Bernd; Becher, Paul
2016-01-01
Molecular epidemiology has become an indispensable tool in the diagnosis of diseases and in tracing the infection routes of pathogens. Due to advances in conventional sequencing and the development of high throughput technologies, the field of sequence determination is in the process of being revolutionized. Platforms for sharing sequence information and providing standardized tools for phylogenetic analyses are becoming increasingly important. The database (DB) of the European Union (EU) and World Organisation for Animal Health (OIE) Reference Laboratory for classical swine fever offers one of the world’s largest semi-public virus-specific sequence collections combined with a module for phylogenetic analysis. The classical swine fever (CSF) DB (CSF-DB) became a valuable tool for supporting diagnosis and epidemiological investigations of this highly contagious disease in pigs with high socio-economic impacts worldwide. The DB has been re-designed and now allows for the storage and analysis of traditionally used, well established genomic regions and of larger genomic regions including complete viral genomes. We present an application example for the analysis of highly similar viral sequences obtained in an endemic disease situation and introduce the new geographic “CSF Maps” tool. The concept of this standardized and easy-to-use DB with an integrated genetic typing module is suited to serve as a blueprint for similar platforms for other human or animal viruses. PMID:27827988
Nutrient estimation from an FFQ developed for a black Zimbabwean population
Merchant, Anwar T; Dehghan, Mahshid; Chifamba, Jephat; Terera, Getrude; Yusuf, Salim
2005-01-01
Background There is little information in the literature on methods of food composition database development to calculate nutrient intake from food frequency questionnaire (FFQ) data. The aim of this study is to describe the development of an FFQ and a food composition table to calculate nutrient intake in a Black Zimbabwean population. Methods Trained interviewers collected 24-hour dietary recalls (24 hr DR) from high and low income families in urban and rural Zimbabwe. Based on these data and input from local experts we developed an FFQ, containing a list of frequently consumed foods, standard portion sizes, and categories of consumption frequency. We created a food composition table of the foods found in the FFQ so that we could compute nutrient intake. We used the USDA nutrient database as the main resource because it is relatively complete, updated, and easily accessible. To choose the food item in the USDA nutrient database that most closely matched the nutrient content of the local food we referred to a local food composition table. Results Almost all the participants ate sadza (maize porridge) at least 5 times a week, and about half had matemba (fish) and caterpillar more than once a month. Nutrient estimates obtained from the FFQ data by using the USDA and Zimbabwean food composition tables were similar for total energy intake (intraclass correlation coefficient (ICC) = 0.99) and carbohydrate (ICC = 0.99), but different for vitamin A (ICC = 0.53) and total folate (ICC = 0.68). Conclusion We have described a standardized process of FFQ and food composition database development for a Black Zimbabwean population. PMID:16351722
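The agreement statistic reported above can be reproduced with a one-way random-effects ICC over paired nutrient estimates (the same FFQ coded with each food composition table). The paired energy values below are hypothetical:

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for paired measurements:
    between-subject variance relative to total variance. `pairs` holds
    (estimate_table_A, estimate_table_B) per participant (toy data)."""
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    subj_means = [(a + b) / 2 for a, b in pairs]
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for (a, b), m in zip(pairs, subj_means)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical total energy intakes (kcal): USDA-coded vs. local-coded
pairs = [(1800, 1750), (2100, 2190), (1650, 1600), (2400, 2350)]
icc = icc_oneway(pairs)
```

High ICCs for energy and carbohydrate versus low ICCs for vitamin A and folate are exactly the pattern expected when macronutrient values transfer well between tables but micronutrient values depend heavily on which food item was matched.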
Proteomic Identification of Monoclonal Antibodies from Serum
2015-01-01
Characterizing the in vivo dynamics of the polyclonal antibody repertoire in serum, such as that which might arise in response to stimulation with an antigen, is difficult due to the presence of many highly similar immunoglobulin proteins, each specified by distinct B lymphocytes. These challenges have precluded the use of conventional mass spectrometry for antibody identification based on peptide mass spectral matches to a genomic reference database. Recently, progress has been made using bottom-up analysis of serum antibodies by nanoflow liquid chromatography/high-resolution tandem mass spectrometry combined with a sample-specific antibody sequence database generated by high-throughput sequencing of individual B cell immunoglobulin variable domains (V genes). Here, we describe how intrinsic features of antibody primary structure, most notably the interspersed segments of variable and conserved amino acid sequences, generate recurring patterns in the corresponding peptide mass spectra of V gene peptides, greatly complicating the assignment of correct sequences to mass spectral data. We show that the standard method of decoy-based error modeling fails to account for the error introduced by these highly similar sequences, leading to a significant underestimation of the false discovery rate. Because of these effects, antibody-derived peptide mass spectra require increased stringency in their interpretation. The use of filters based on the mean precursor ion mass accuracy of peptide-spectrum matches is shown to be particularly effective in distinguishing between “true” and “false” identifications. These findings highlight important caveats associated with the use of standard database search and error-modeling methods with nonstandard data sets and custom sequence databases. PMID:24684310
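The target-decoy FDR estimate and the precursor mass accuracy filter that the abstract contrasts can be sketched as follows; the PSM tuples, scores, and thresholds are toy values, not the study's data.

```python
def decoy_fdr(psms, score_cutoff):
    """Standard target-decoy FDR estimate at a score cutoff:
    FDR ~ decoys / targets among the accepted matches. Each PSM is
    (search score, precursor mass error in ppm, is_decoy)."""
    kept = [p for p in psms if p[0] >= score_cutoff]
    decoys = sum(1 for p in kept if p[2])
    targets = len(kept) - decoys
    return decoys / targets if targets else 0.0

def mass_accuracy_filter(psms, ppm_limit=5.0):
    """Keep only matches whose precursor mass error is within the limit,
    the extra stringency recommended for antibody V-gene peptides."""
    return [p for p in psms if abs(p[1]) <= ppm_limit]

# Toy PSMs: two decoys sneak in with large mass errors
psms = [(9.0, 1.2, False), (8.5, 0.8, False), (8.0, 12.0, True),
        (7.5, 2.0, False), (7.0, 9.5, True)]
filtered = mass_accuracy_filter(psms)
```

The abstract's point is that for highly similar V-gene sequences the decoy count underestimates the true error, so the mass-accuracy filter removes plausible-scoring but mass-inaccurate matches that decoy counting alone would let through.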
Asling-Monemi, Kajsa; Peña, Rodolfo; Ellsberg, Mary Carroll; Persson, Lars Ake
2003-01-01
OBJECTIVE: To investigate the impact of violence against mothers on mortality risks for their offspring before 5 years of age in Nicaragua. METHODS: From a demographic database covering a random sample of urban and rural households in León, Nicaragua, we identified all live births among women aged 15-49 years. Cases were defined as those who had died before the age of 5 years, between January 1993 and June 1996. For each case, two referents, matched for sex and age at death, were selected from the database. A total of 110 mothers of the cases and 203 mothers of the referents were interviewed using a standard questionnaire covering mothers' experience of physical and sexual violence. The data were analysed for the association between maternal experience of violence and infant and under-5 mortality. FINDINGS: A total of 61% of mothers of cases had a lifetime experience of physical and/or sexual violence compared with 37% of mothers of referents, with a significant association being found between such experiences and mortality among their offspring. Other factors associated with higher infant and under-5 mortality were mother's education (no formal education), age (older), and parity (multiparity). CONCLUSIONS: The results suggest an association between physical and sexual violence against mothers, either before or during pregnancy, and an increased risk of under-5 mortality of their offspring. The type and severity of violence was probably more relevant to the risk than the timing, and violence may impact child health through maternal stress or care-giving behaviours rather than through direct trauma itself. PMID:12640470
NASA Astrophysics Data System (ADS)
Benková, Miroslava; Makovnik, Stefan; Mickan, Bodo; Arias, Roberto; Chahine, Khaled; Funaki, Tatsuya; Li, Chunhui; Choi, Hae Man; Seredyuk, Denys; Su, Chun-Min; Windenberg, Christophe; Wright, John
2014-01-01
The comparison CCM.FF-K6.2011 was organized for the purpose of determining the degree of equivalence of the national standards for low-pressure gas flow measurement over the range (2 to 100) m3/h. A rotary gas meter was used as a transfer standard. The measurements were performed at prescribed reference conditions. Eleven laboratories from four RMOs participated in this key comparison—EURAMET: PTB, Germany; SMU, Slovakia; LNE-LADG, France; SIM: NIST, USA; CENAM, Mexico; APMP: NMIJ AIST, Japan; KRISS, Korea; NMI, Australia; NIM, China; CMS, Chinese Taipei; COOMET: GP Ivano-Frankivs'kstandart-metrologia, Ukraine. All participants reported independent traceability chains to the SI. All results were used in the determination of the key comparison reference value (KCRV) and the uncertainty of the KCRV. The reference value was determined at each flow separately following procedure A presented by M G Cox. The degree of equivalence with the KCRV was also calculated for each flow and laboratory. All reported results were consistent with the KCRV. This KCRV can now be used in further regional comparisons. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
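Procedure A computes the KCRV as the inverse-variance weighted mean, checks consistency with an observed chi-squared, and reports each laboratory's degree of equivalence relative to the KCRV. A sketch with toy meter-error values and uncertainties, not the comparison's data:

```python
import math

def kcrv_procedure_a(values, uncerts):
    """Procedure A (Cox): KCRV = inverse-variance weighted mean with a
    chi-squared consistency check; degrees of equivalence are
    d_i = x_i - x_ref with u(d_i) = sqrt(u_i^2 - u_ref^2) for
    participants included in the KCRV."""
    w = [1.0 / u ** 2 for u in uncerts]
    xref = sum(wi * xi for wi, xi in zip(w, values)) / sum(w)
    uref = math.sqrt(1.0 / sum(w))
    chi2 = sum((x - xref) ** 2 / u ** 2 for x, u in zip(values, uncerts))
    doe = [(x - xref, math.sqrt(max(u ** 2 - uref ** 2, 0.0)))
           for x, u in zip(values, uncerts)]
    return xref, uref, chi2, doe

# Toy meter errors (%) and standard uncertainties at one flow point
xref, uref, chi2, doe = kcrv_procedure_a([1.00, 1.20, 0.90],
                                         [0.10, 0.20, 0.10])
```

The weighted mean pulls the KCRV toward the laboratories with the smallest uncertainties, and the comparison is declared consistent when the observed chi-squared does not exceed the 95th percentile for the relevant degrees of freedom.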
Code of Federal Regulations, 2011 CFR
2011-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions... Product Safety Information Database. (2) Commission or CPSC means the Consumer Product Safety Commission... Information Database, also referred to as the Database, means the database on the safety of consumer products...
Standardization of XML Database Exchanges and the James Webb Space Telescope Experience
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.
2007-01-01
Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities, such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS), to assist in the definition of a common eXtensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper discusses standardization of the XML database exchange format, the tools used, and the JWST experience, as well as future work with both commercial and government XML standards groups.
larvalign: Aligning Gene Expression Patterns from the Larval Brain of Drosophila melanogaster.
Muenzing, Sascha E A; Strauch, Martin; Truman, James W; Bühler, Katja; Thum, Andreas S; Merhof, Dorit
2018-01-01
The larval brain of the fruit fly Drosophila melanogaster is a small, tractable model system for neuroscience. Genes for fluorescent marker proteins can be expressed in defined, spatially restricted neuron populations. Here, we introduce the methods for 1) generating a standard template of the larval central nervous system (CNS), 2) spatial mapping of expression patterns from different larvae into a reference space defined by the standard template. We provide a manually annotated gold standard that serves for evaluation of the registration framework involved in template generation and mapping. A method for registration quality assessment enables the automatic detection of registration errors, and a semi-automatic registration method allows one to correct registrations, which is a prerequisite for a high-quality, curated database of expression patterns. All computational methods are available within the larvalign software package: https://github.com/larvalign/larvalign/releases/tag/v1.0.
Dietary choline and betaine intakes vary in an adult multiethnic population.
Yonemori, Kim M; Lim, Unhee; Koga, Karin R; Wilkens, Lynne R; Au, Donna; Boushey, Carol J; Le Marchand, Loïc; Kolonel, Laurence N; Murphy, Suzanne P
2013-06-01
Choline and betaine are important nutrients for human health, but reference food composition databases for these nutrients became available only recently. We tested the feasibility of using these databases to estimate dietary choline and betaine intakes among ethnically diverse adults who participated in the Multiethnic Cohort (MEC) Study. Of the food items (n = 965) used to quantify intakes for the MEC FFQ, 189 items were exactly matched with items in the USDA Database for the Choline Content of Common Foods for total choline, choline-containing compounds, and betaine, and 547 items were matched to the USDA National Nutrient Database for Standard Reference for total choline (n = 547) and 148 for betaine. When a match was not found, choline and betaine values were imputed based on the same food with a different form (124 food items for choline, 300 for choline compounds, 236 for betaine), a similar food (n = 98, 284, and 227, respectively) or the closest item in the same food category (n = 6, 191, and 157, respectively), or the values were assumed to be zero (n = 1, 1, and 8, respectively). The resulting mean intake estimates for choline and betaine among 188,147 MEC participants (aged 45-75) varied by sex (372 and 154 mg/d in men, 304 and 128 mg/d in women, respectively; P-heterogeneity < 0.0001) and by race/ethnicity among Caucasians, African Americans, Japanese Americans, Latinos, and Native Hawaiians (P-heterogeneity < 0.0001), largely due to the variation in energy intake. Our findings demonstrate the feasibility of assessing choline and betaine intake and characterize the variation in intake that exists in a multiethnic population.
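The imputation cascade described above (exact match, same food in a different form, similar food, category default, then zero) can be sketched as a simple fallback lookup. The food names and nutrient values below are hypothetical illustrations, not USDA database entries:

```python
# Sketch of the fallback matching cascade: try increasingly loose
# matches against reference tables, assuming zero when nothing matches.
def lookup_choline(food, exact, other_form, similar, category_default):
    """Return a choline value (mg/100 g) via the fallback cascade."""
    for table in (exact, other_form, similar):
        if food in table:
            return table[food]
    return category_default.get(food, 0.0)  # final fallback: assume zero

# Hypothetical reference tables
exact = {"egg, whole, raw": 251.0}
other_form = {"egg, whole, fried": 272.0}
similar = {"egg substitute": 20.0}
category_default = {"unknown dairy item": 16.0}

value = lookup_choline("egg, whole, fried", exact, other_form,
                       similar, category_default)
```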
Error and Uncertainty in the Accuracy Assessment of Land Cover Maps
NASA Astrophysics Data System (ADS)
Sarmento, Pedro Alexandre Reis
Traditionally, the accuracy of land cover maps is assessed by comparing them with a reference database intended to represent the "real" land cover, with the comparison reported as thematic accuracy measures derived from confusion matrices. However, these reference databases are themselves only a representation of reality: they contain errors arising from human uncertainty in assigning the land cover class that best characterizes a given area, which biases the thematic accuracy measures reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that integrates the human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impact that this uncertainty may have on the thematic accuracy measures reported to end users. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we studied the utility of fuzzy set theory, and more precisely fuzzy arithmetic, for better understanding the human uncertainty associated with the elaboration of reference databases and its impact on the thematic accuracy measures derived from confusion matrices. For this purpose, linguistic values transformed into fuzzy intervals that express the uncertainty in the elaboration of reference databases were used to compute fuzzy confusion matrices. The proposed methodology is illustrated with a case study assessing the accuracy of a land cover map of Continental Portugal derived from Medium Resolution Imaging Spectrometer (MERIS) imagery. The results demonstrate that including human uncertainty in reference databases provides much more information about the quality of land cover maps than the traditional approach to accuracy assessment.
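The interval-arithmetic idea can be sketched by treating each confusion-matrix cell as an interval count and propagating bounds to overall accuracy. The 2-class matrix below is illustrative, not the dissertation's data, and the bound propagation is a simplified stand-in for full fuzzy arithmetic:

```python
# Sketch: confusion-matrix cells as interval counts [low, high]
# reflecting interpreter uncertainty, propagated to accuracy bounds.
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def accuracy_bounds(diag, off_diag):
    """Bounds on diagonal/(diagonal + off-diagonal). Because both sums
    refer to the same samples, the result stays within [0, 1]."""
    low = diag[0] / (diag[0] + off_diag[1])
    high = diag[1] / (diag[1] + off_diag[0])
    return (low, high)

# Fuzzy confusion matrix: cell [i][j] is an interval count of samples
# with reference class i assigned to map class j (hypothetical values).
cm = [[(80, 90), (5, 12)],
      [(4, 10), (70, 85)]]

diag, off_diag = (0, 0), (0, 0)
for i, row in enumerate(cm):
    for j, cell in enumerate(row):
        if i == j:
            diag = interval_add(diag, cell)
        else:
            off_diag = interval_add(off_diag, cell)

overall_accuracy = accuracy_bounds(diag, off_diag)  # (lower, upper)
```

Reporting an accuracy interval rather than a single number is what conveys the extra information about map quality discussed above.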
Kirişli, H A; Schaap, M; Metz, C T; Dharampal, A S; Meijboom, W B; Papadopoulou, S L; Dedic, A; Nieman, K; de Graaf, M A; Meijs, M F L; Cramer, M J; Broersen, A; Cetin, S; Eslami, A; Flórez-Valencia, L; Lor, K L; Matuszewski, B; Melki, I; Mohr, B; Oksüz, I; Shahzad, R; Wang, C; Kitslaar, P H; Unal, G; Katouzian, A; Örkisz, M; Chen, C M; Precioso, F; Najman, L; Masood, S; Ünay, D; van Vliet, L; Moreno, R; Goldenberg, R; Vuçini, E; Krestin, G P; Niessen, W J; van Walsum, T
2013-12-01
Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in past decades, computed tomography angiography (CTA) has rapidly emerged and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms devised to detect and quantify coronary artery stenoses and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts' manual annotations. A database consisting of 48 multicenter, multivendor cardiac CTA datasets with corresponding reference standards is described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. Copyright © 2013 Elsevier B.V. All rights reserved.
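Automatic lumen segmentations are typically compared with expert annotations via overlap measures; a minimal sketch using the Dice coefficient (the framework's actual metrics may differ):

```python
# Sketch: Dice overlap between an automatic and a manual segmentation,
# each represented as a set of labeled voxel coordinates.
def dice(a, b):
    """Dice coefficient between two voxel label sets, in [0, 1]."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical voxel sets (x, y, z) for one small lumen cross-section
auto = {(1, 1, 0), (1, 2, 0), (2, 2, 0), (3, 2, 0)}
manual = {(1, 1, 0), (1, 2, 0), (2, 2, 0), (2, 3, 0)}
score = dice(auto, manual)
```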
Wang, Zhendi; Li, K; Lambert, P; Yang, Chun
2007-01-12
On 15 August 2001, a tire fire took place at the Pneu Lavoie Facility in Gatineau, Quebec, in which 4000 to 6000 new and recycled tires were stored along with other potentially hazardous materials. Comprehensive gas chromatography-mass spectrometry (GC-MS) analyses were performed on the tire fire samples to facilitate detailed chemical composition characterization of toxic polycyclic aromatic hydrocarbons (PAHs) and other organic compounds in the samples. It was found that significant amounts of PAHs, particularly the high-ring-number PAHs, were generated during the fire. In total, 165 PAH compounds, including 13 isomers of molecular weight (MW) 302, 10 isomers of MW 278, 10 isomers of MW 276, 7 isomers of MW 252, 7 isomers of MW 228, and 8 isomers of MW 216, were positively identified in the tire fire wipe samples for the first time. Numerous S-, O-, and N-containing PAH compounds were also detected. The identification and characterization of the PAH isomers were mainly based on: (1) a positive match of mass spectral data of the PAH isomers with the NIST authentic mass spectra database; (2) a positive match of the GC retention indices (I) of PAHs with authentic standards and with those reported in the literature; (3) agreement of the PAH elution order with the NIST (US National Institute of Standards and Technology) Standard Reference Material 1597 for a complex mixture of PAHs from coal tar; (4) a positive match of the distribution patterns of PAH isomers in the SIM mode between the tire fire samples and the NIST Standard Reference Materials and well-characterized reference oils. Quantitation of target PAHs was done on the GC-MS in the selected ion monitoring (SIM) mode using the internal standard method. The relative response factors (RRF) for target PAHs were obtained from analyses of authentic PAH standard compounds. Alkylated PAH homologues were quantitated using straight baseline integration of each level of alkylation.
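Internal-standard quantitation with relative response factors, as described above, can be sketched as follows. The peak areas and concentrations are illustrative, not the study's measurements:

```python
# Sketch of internal-standard quantitation with a relative response
# factor (RRF), as commonly used in SIM-mode GC-MS.
def rrf(area_analyte, conc_analyte, area_is, conc_is):
    """RRF from a calibration run of an authentic standard."""
    return (area_analyte / area_is) * (conc_is / conc_analyte)

def quantify(area_analyte, area_is, conc_is, rrf_value):
    """Analyte concentration in the sample from SIM peak areas."""
    return (area_analyte / area_is) * conc_is / rrf_value

# Calibration: 2.0 ng/uL pyrene standard with 1.0 ng/uL internal standard
f = rrf(area_analyte=90000, conc_analyte=2.0, area_is=50000, conc_is=1.0)
# Sample run against the same internal standard
c = quantify(area_analyte=126000, area_is=70000, conc_is=1.0, rrf_value=f)
```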
San Miguel Moragas, Joan; Reddy, Rajgopal R; Hernández Alfaro, Federico; Mommaerts, Maurice Y
2015-07-01
The ideal lip augmentation technique provides the longest period of efficacy, lowest complication rate, and best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with meta-regression will focus on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. A systematic search for all English, French, Spanish, German, Italian, Portuguese and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, Sweetswise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Willey Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level of evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to more conclusively determine the most appropriate approach for this procedure. IV. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Foster, Joseph M; Moreno, Pablo; Fabregat, Antonio; Hermjakob, Henning; Steinbeck, Christoph; Apweiler, Rolf; Wakelam, Michael J O; Vizcaíno, Juan Antonio
2013-01-01
Protein sequence databases are the pillar upon which modern proteomics is supported, representing a stable reference space of predicted and validated proteins. One example of such resources is UniProt, enriched with both expertly curated and automatic annotations. Taken largely for granted, similarly mature resources are not yet available in some other "omics" fields, lipidomics being one of them. While having a seasoned community of wet lab scientists, lipidomics lies significantly behind proteomics in the adoption of data standards and other core bioinformatics concepts. This work aims to reduce the gap by developing a UniProt-equivalent resource called 'LipidHome', providing theoretically generated lipid molecules and useful metadata. Using the 'FASTLipid' Java library, a database was populated with theoretical lipids generated from a set of community-agreed chemical bounds. In parallel, a web application was developed to present the information and provide computational access via a web service. Designed specifically to accommodate high-throughput mass spectrometry based approaches, lipids are organised into a hierarchy that reflects the variety in the structural resolution of lipid identifications. Additionally, cross-references to other lipid-related resources and papers that cite specific lipids were used to annotate lipid records. The web application encompasses a browser for viewing lipid records and a 'tools' section where an MS1 search engine is currently implemented. LipidHome can be accessed at http://www.ebi.ac.uk/apweiler-srv/lipidhome.
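Theoretical lipid generation from chemical bounds, in the spirit of FASTLipid, can be sketched by enumerating carbon/double-bond combinations and computing monoisotopic masses. The bounds below are assumptions, not LipidHome's actual ones; the diacyl-phosphatidylcholine formula C(n+8)H(2n-2d+16)NO8P used here is the standard one for that class:

```python
# Sketch: enumerate theoretical phosphatidylcholine (PC) species over
# ranges of total acyl carbons (c) and double bonds (d), computing the
# monoisotopic mass from atom counts.
from itertools import product

# Monoisotopic atomic masses (u)
M = {"C": 12.0, "H": 1.007825, "O": 15.994915, "N": 14.003074, "P": 30.973762}

def pc_mass(carbons, double_bonds):
    """Monoisotopic mass of a diacyl PC: C(c+8) H(2c-2d+16) N O8 P."""
    c = carbons + 8
    h = 2 * carbons - 2 * double_bonds + 16
    return c * M["C"] + h * M["H"] + M["N"] + 8 * M["O"] + M["P"]

# Hypothetical community-agreed bounds: 28-44 acyl carbons, 0-6 double bonds
species = {
    f"PC({c}:{d})": round(pc_mass(c, d), 4)
    for c, d in product(range(28, 45), range(0, 7))
}
```

PC(34:1), a common membrane lipid, comes out at its known monoisotopic mass of about 759.578 u, a useful sanity check for the formula.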
Secure Indoor Localization Based on Extracting Trusted Fingerprint
Yin, Xixi; Zheng, Yanliu; Wang, Chun
2018-01-01
Indoor localization based on WiFi has attracted a lot of research effort because of the widespread application of WiFi. Fingerprinting techniques have received much attention due to their simplicity and compatibility with existing hardware. However, existing fingerprinting localization algorithms may not resist abnormal received signal strength indication (RSSI), such as unexpected environmental changes, impaired access points (APs) or the introduction of new APs. Traditional fingerprinting algorithms do not consider the problem of new APs and impaired APs in the environment when using RSSI. In this paper, we propose a secure fingerprinting localization (SFL) method that is robust to variable environments, impaired APs and the introduction of new APs. In the offline phase, a voting mechanism and a fingerprint database update method are proposed. We use the mutual cooperation between reference anchor nodes to update the fingerprint database, which can reduce the interference caused by the user measurement data. We analyze the standard deviation of RSSI, mobilize the reference points in the database to vote on APs and then calculate the trust factors of APs based on the voting results. In the online phase, we first make a judgment about the new APs and the broken APs, then extract the secure fingerprints according to the trusted factors of APs and obtain the localization results by using the trusted fingerprints. In the experiment section, we demonstrate the proposed method and find that the proposed strategy can resist abnormal RSSI and can improve the localization accuracy effectively compared with the existing fingerprinting localization algorithms. PMID:29401755
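The offline voting step can be sketched as follows: each reference point votes on an AP by checking whether the AP's current RSSI deviates from the stored fingerprint by more than k standard deviations, and the trust factor is the fraction of positive votes. The threshold and the data below are hypothetical, not the paper's exact parameters:

```python
# Sketch of the AP trust-factor voting: reference points compare a
# freshly measured RSSI with their stored (mean, std) fingerprint.
def trust_factor(stored, current, k=2.0):
    """stored: list of (mean_rssi_dbm, std_rssi_db) per reference point;
    current: newly measured RSSI per reference point for one AP."""
    votes = 0
    for (mean, std), rssi in zip(stored, current):
        if abs(rssi - mean) <= k * max(std, 1.0):  # floor std at 1 dB
            votes += 1
    return votes / len(stored)

stored = [(-48.0, 2.0), (-60.0, 3.0), (-72.0, 2.5), (-55.0, 2.0)]
healthy = [-47.0, -61.5, -71.0, -54.0]       # AP behaving normally
impaired = [-80.0, -85.0, -90.0, -88.0]      # AP with abnormal RSSI

t_ok = trust_factor(stored, healthy)
t_bad = trust_factor(stored, impaired)
```

In the online phase, fingerprints from APs whose trust factor falls below a threshold would be excluded before localization.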
Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert
2014-06-01
The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.
[Experience and present situation of Western China Gastric Cancer Collaboration].
Hu, Jiankun; Zhang, Weihan; Western China Gastric Cancer Collaboration, China
2017-03-25
The Western China Gastric Cancer Collaboration (WCGCC) was founded in Chongqing, China in 2011. At the early stage of the collaboration there were only about 20 centers; there are now 36 centers from western China, including Sichuan, Chongqing, Yunnan, Shanxi, Guizhou, Gansu, Qinghai, Xinjiang, Ningxia and Tibet. During the past few years, the WCGCC has routinely organized standardized gastric cancer treatment tours and training courses on minimally invasive surgical treatment of gastric cancer and on clinical research methodology for members of the collaboration. The WCGCC has also maintained a multicenter gastric cancer database since 2011, with data entry and management modeled on the national gastric cancer registration system of the Japan Gastric Cancer Association; 190 data items have unified definitions and entry standards drawn from the Japanese gastric cancer guidelines. The database now includes about 11 872 gastric cancer cases, and this paper introduces initial results from these cases. Next, the collaboration will conduct retrospective studies based on this database to analyze the clinicopathological characteristics of patients in western China. The WCGCC has also launched a prospective study: the collaboration's first randomized clinical trial compares postoperative quality of life between different reconstruction methods after total gastrectomy (WCGCC-1202, ClinicalTrials.gov Identifier: NCT02110628). It began in 2015 and is now in the recruitment period. In the next steps, we will improve the quality of the database and optimize its management processes, engage in more exchanges and cooperation with the Chinese Cochrane Center, and reinforce the foundations of clinical trials research methodology. With regard to standardized surgical treatment of gastric cancer, we will further strengthen communication with other international centers in order to improve both the treatment and the research of gastric cancer in Western China.
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Simionescu, M.; Cîrneanu, L.
2014-01-01
As part of the ongoing BIPM key comparisons BIPM.EM-K11.a and b, a comparison of the 1 V and 10 V voltage reference standards of the BIPM and the Institut National de Metrologie (INM), Bucharest, Romania, was carried out from August to October 2013. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM_7 (Z7) and BIPM_8 (Z8), were transported by freight to INM. At INM, the reference standard for DC voltage is a Josephson Voltage Standard. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the primary standard. At the BIPM, the travelling standards were calibrated, before and after the measurements at INM, with the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by INM, U_INM, at the level of 1.018 V and 10 V, and those assigned by the BIPM, U_BIPM, at the reference date of 6 September 2013: U_INM - U_BIPM = -0.014 µV (u_c = 0.051 µV) at 1 V, and U_INM - U_BIPM = -0.43 µV (u_c = 0.34 µV) at 10 V, where u_c is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at INM, based on KJ-90, and the uncertainty related to the comparison. These are satisfactory results: the voltage standards maintained by INM and the BIPM were equivalent, within the comparison uncertainty, on the mean date of the comparison. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
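The temperature and pressure correction mentioned above is, in practice, a linear model around reference conditions. A sketch with hypothetical coefficients (not the characterisation values of BIPM_7 or BIPM_8):

```python
# Sketch: refer a Zener travelling standard's measured EMF to reference
# internal temperature and ambient pressure using a linear model.
def corrected_emf(measured_v, temp_c, pressure_hpa,
                  temp_coeff_uv_per_c, pressure_coeff_uv_per_hpa,
                  ref_temp_c=30.0, ref_pressure_hpa=1013.25):
    """Return the EMF (V) referred to reference conditions."""
    correction_uv = (temp_coeff_uv_per_c * (temp_c - ref_temp_c)
                     + pressure_coeff_uv_per_hpa
                     * (pressure_hpa - ref_pressure_hpa))
    return measured_v - correction_uv * 1e-6  # µV -> V

# Hypothetical 10 V measurement with hypothetical coefficients
v = corrected_emf(10.000012, temp_c=30.4, pressure_hpa=1003.25,
                  temp_coeff_uv_per_c=0.5, pressure_coeff_uv_per_hpa=-0.02)
```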
Cassagne, Carole; Ranque, Stéphane; Normand, Anne-Cécile; Fourquet, Patrick; Thiebault, Sandrine; Planard, Chantal; Hendrickx, Marijke; Piarroux, Renaud
2011-01-01
MALDI-TOF MS recently emerged as a valuable identification tool for bacteria and yeasts and revolutionized the daily clinical laboratory routine, but it has not been established for routine mould identification. This study aimed to validate a standardized procedure for MALDI-TOF MS-based mould identification in the clinical laboratory. First, pre-extraction and extraction procedures were optimized. With this standardized procedure, a reference spectra library of 143 mould strains was built. Then, the mould isolates cultured from sequential clinical samples were prospectively subjected to this MALDI-TOF MS-based identification assay. MALDI-TOF MS-based identification was considered correct if it was concordant with the phenotypic identification; otherwise, the gold standard was DNA sequence comparison-based identification. The optimized procedure comprised culture on Sabouraud-gentamicin-chloramphenicol agar followed by chemical extraction of the fungal colonies with formic acid and acetonitrile. The identification was done using a reference database built with references from at least four culture replicates. Over five months, 197 clinical isolates were analyzed; 20 were excluded because they were not identified at the species level. The MALDI-TOF MS-based approach correctly identified 87% (154/177) of the isolates analyzed in routine clinical laboratory activity. It failed in 12% (21/177), whose species were not represented in the reference library. Among the remaining 156 isolates, identification was correct in 154: one Beauveria bassiana was not identified and one Rhizopus oryzae was misidentified as Mucor circinelloides. This work's seminal finding is that a standardized procedure can be used for MALDI-TOF MS-based identification of a wide array of clinically relevant mould species. It thus makes it possible to identify moulds in the routine clinical laboratory setting and opens new avenues for the development of an integrated MALDI-TOF MS-based solution for the identification of any clinically relevant microorganism.
Automatic summary generating technology of vegetable traceability for information sharing
NASA Astrophysics Data System (ADS)
Zhenxuan, Zhang; Minjing, Peng
2017-06-01
In order to solve the problems of excessive data entry and the consequent high costs of data collection in vegetable traceability applications, an automatic summary generation technology for vegetable traceability information sharing was proposed. The proposed technology offers farmers an effective way to share real-time vegetable planting information on social networking platforms, enhancing their brands and attracting more customers. In this research, the factors influencing vegetable traceability for customers were analyzed to establish sub-indicators and target indicators, and a computing model was proposed based on the collected parameter values of the planted vegetables and the legal standards on food safety. The proposed standard parameter model involves five steps: accessing the database, establishing target indicators, establishing sub-indicators, establishing a standard reference model, and computing indicator scores. By building on and optimizing food safety and traceability standards, the proposed technology could be accepted by more and more farmers and customers.
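The five-step scoring model can be sketched as a weighted aggregation of sub-indicator scores against a standard reference model. The indicator names, limits, and weights below are hypothetical, not the paper's actual parameters:

```python
# Sketch: score each sub-indicator against a standard reference range,
# then aggregate into a target-indicator score with weights.
def score_sub_indicator(value, low, high):
    """1.0 inside the standard range, decaying linearly outside it."""
    if low <= value <= high:
        return 1.0
    span = high - low
    distance = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - distance / span)

reference_model = {      # (lower bound, upper bound) per sub-indicator
    "pesticide_residue_mg_kg": (0.0, 0.5),
    "heavy_metal_mg_kg": (0.0, 0.2),
    "days_since_last_spray": (7, 365),
}
weights = {"pesticide_residue_mg_kg": 0.5,
           "heavy_metal_mg_kg": 0.3,
           "days_since_last_spray": 0.2}

collected = {"pesticide_residue_mg_kg": 0.3,   # hypothetical field data
             "heavy_metal_mg_kg": 0.25,
             "days_since_last_spray": 14}

target_score = sum(
    weights[k] * score_sub_indicator(collected[k], *reference_model[k])
    for k in reference_model)
```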
NASA Astrophysics Data System (ADS)
Viallon, Joële; Idrees, Faraz; Moussay, Philippe; Wielgosz, Robert; Ntsasa, Napo G.; Tshilongo, James; Mphaphuli, Gumani E.; Norris, James E.; Hodges, Joseph T.
2018-01-01
As part of the on-going key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of South Africa, maintained by the National Metrology Institute of South Africa (NMISA), and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM), via a transfer standard maintained by the National Institute of Standards and Technology (NIST). The instruments were compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Allen, Felicity; Pon, Allison; Greiner, Russ; Wishart, David
2016-08-02
We describe a tool, competitive fragmentation modeling for electron ionization (CFM-EI), that, given a chemical structure (e.g., in SMILES or InChI format), computationally predicts an electron ionization mass spectrum (EI-MS), i.e., the type of mass spectrum commonly generated by gas chromatography-mass spectrometry. The predicted spectra produced by this tool can be used for putative compound identification, complementing measured spectra in reference databases by expanding the range of compounds able to be considered when the availability of measured spectra is limited. The tool extends CFM-ESI, a recently developed method for computational prediction of electrospray tandem mass spectra (ESI-MS/MS), but unlike CFM-ESI, CFM-EI can handle odd-electron ions and isotopes and incorporates an artificial neural network. Tests on EI-MS data from the NIST database demonstrate that CFM-EI is able to model fragmentation likelihoods in low-resolution EI-MS data, producing predicted spectra whose dot product scores are significantly better than full-enumeration "bar-code" spectra. CFM-EI also outperformed previously reported results for MetFrag, MOLGEN-MS, and Mass Frontier on one compound identification task. It also outperformed MetFrag in a range of other compound identification tasks involving a much larger data set containing both derivatized and nonderivatized compounds. While replicate EI-MS measurements of chemical standards are still a more accurate point of comparison, CFM-EI's predictions provide a much-needed alternative when no reference standard is available for measurement. CFM-EI is available at https://sourceforge.net/projects/cfm-id/ for download and at http://cfmid.wishartlab.com as a web service.
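The dot-product score used to compare predicted and measured spectra is a cosine-style similarity over the union of m/z peaks. A minimal sketch with plain intensities (library scoring schemes often also apply m/z weighting, omitted here):

```python
# Sketch: cosine-style dot-product similarity between two mass spectra
# represented as {m/z: intensity} dictionaries.
import math

def dot_product_score(spec_a, spec_b):
    """Similarity in [0, 1] over the union of observed m/z peaks."""
    mzs = set(spec_a) | set(spec_b)
    a = [spec_a.get(mz, 0.0) for mz in mzs]
    b = [spec_b.get(mz, 0.0) for mz in mzs]
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

# Hypothetical measured and predicted spectra (relative intensities)
measured = {41: 30.0, 43: 100.0, 58: 85.0, 71: 10.0}
predicted = {41: 25.0, 43: 90.0, 58: 80.0, 72: 5.0}
score = dot_product_score(measured, predicted)
```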
The NASA MSFC Earth Global Reference Atmospheric Model-2007 Version
NASA Technical Reports Server (NTRS)
Leslie, F.W.; Justus, C.G.
2008-01-01
Reference or standard atmospheric models have long been used for design and mission planning of various aerospace systems. The NASA/Marshall Space Flight Center (MSFC) Global Reference Atmospheric Model (GRAM) was developed in response to the need for a design reference atmosphere that provides complete global geographical variability and complete altitude coverage (surface to orbital altitudes), as well as complete seasonal and monthly variability of the thermodynamic variables and wind components. A unique feature of GRAM is that, in addition to providing the geographical, height, and monthly variation of the mean atmospheric state, it includes the ability to simulate spatial and temporal perturbations in these atmospheric parameters (e.g., fluctuations due to turbulence and other atmospheric perturbation phenomena). A summary comparing GRAM features to the characteristics and features of other reference or standard atmospheric models can be found in the Guide to Reference and Standard Atmosphere Models. The original GRAM has undergone a series of improvements over the years with recent additions and changes. The software program is called Earth-GRAM2007 to distinguish it from similar programs for other bodies (e.g., Mars, Venus, Neptune, and Titan). However, in order to make this Technical Memorandum (TM) more readable, the software will be referred to simply as GRAM07 or GRAM unless additional clarity is needed. Section 1 provides an overview of the basic features of GRAM07, including the newly added features. Section 2 provides a more detailed description of GRAM07 and how the model output is generated. Section 3 presents sample results. Appendices A and B describe the Global Upper Air Climatic Atlas (GUACA) data and the Global Gridded Upper Air Statistics (GGUAS) database. Appendix C provides instructions for compiling and running GRAM07. Appendix D gives a description of the required NAMELIST format input. Appendix E gives sample output.
Appendix F provides a list of available parameters to enable the user to generate special output. Appendix G gives an example and guidance on incorporating GRAM07 as a subroutine in other programs such as trajectory codes or orbital propagation routines.
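The perturbation capability summarized above amounts to adding statistically correlated random deviations to the mean atmospheric state. A minimal sketch of the idea, using a first-order autoregressive (AR(1)) perturbation along a profile; the Gaussian form, correlation parameter, and scaling are illustrative assumptions, not GRAM07's actual perturbation model:

```python
import math
import random

def perturbed_profile(means, sigmas, corr=0.9, seed=1):
    """Mean profile plus vertically correlated random perturbations.

    Sketch only: an AR(1) standard-normal process scaled by the local
    standard deviation.  The correlation value and Gaussian form are
    assumptions for illustration, not GRAM07's actual model.
    """
    rng = random.Random(seed)
    out, z = [], 0.0
    for mean, sigma in zip(means, sigmas):
        # Correlated standard-normal deviate carried along the profile.
        z = corr * z + math.sqrt(1.0 - corr**2) * rng.gauss(0.0, 1.0)
        out.append(mean + sigma * z)
    return out
```

With all sigmas zero the mean state is returned unchanged, which mirrors how GRAM can be run in mean-only or perturbed mode.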
Construction and comparative evaluation of different activity detection methods in brain FDG-PET.
Buchholz, Hans-Georg; Wenzel, Fabian; Gartenschläger, Martin; Thiele, Frank; Young, Stewart; Reuss, Stefan; Schreckenberger, Mathias
2015-08-18
We constructed and evaluated reference brain FDG-PET databases for usage by three software programs (Computer-aided diagnosis for dementia (CAD4D), Statistical Parametric Mapping (SPM) and NEUROSTAT), which allow a user-independent detection of dementia-related hypometabolism in patients' brain FDG-PET. Thirty-seven healthy volunteers were scanned in order to construct brain FDG reference databases, which reflect the normal, age-dependent glucose consumption in human brain, using either software. Databases were compared to each other to assess the impact of different stereotactic normalization algorithms used by either software package. In addition, performance of the new reference databases in the detection of altered glucose consumption in the brains of patients was evaluated by calculating statistical maps of regional hypometabolism in FDG-PET of 20 patients with confirmed Alzheimer's dementia (AD) and of 10 non-AD patients. Extent (hypometabolic volume referred to as cluster size) and magnitude (peak z-score) of detected hypometabolism was statistically analyzed. Differences between the reference databases built by CAD4D, SPM or NEUROSTAT were observed. Due to the different normalization methods, altered spatial FDG patterns were found. When analyzing patient data with the reference databases created using CAD4D, SPM or NEUROSTAT, similar characteristic clusters of hypometabolism in the same brain regions were found in the AD group with either software. However, larger z-scores were observed with CAD4D and NEUROSTAT than those reported by SPM. Better concordance with CAD4D and NEUROSTAT was achieved using the spatially normalized images of SPM and an independent z-score calculation. The three software packages identified the peak z-scores in the same brain region in 11 of 20 AD cases, and there was concordance between CAD4D and SPM in 16 AD subjects. 
The clinical evaluation of brain FDG-PET of 20 AD patients with either CAD4D-, SPM- or NEUROSTAT-generated databases from an identical reference dataset showed similar patterns of hypometabolism in the brain regions known to be involved in AD. The extent of hypometabolism and peak z-score appeared to be influenced by the calculation method used in each software package rather than by different spatial normalization parameters.
Singh, Surya K; Patel, Vivek H; Gupta, Balram
2017-06-19
The mainstay of diagnosis of osteoporosis is the dual-energy X-ray absorptiometry (DXA) scan measuring areal bone mineral density (BMD) (g/cm2). The aim of the present study was to compare the Indian Council of Medical Research database (ICMRD) and the Lunar ethnic reference database of DXA scans in the diagnosis of osteoporosis in male patients. In this retrospective study, all male patients who underwent a DXA scan were included. The areal BMD (g/cm2) was measured at either the lumbar spine (L1-L4) or the total hip using the Lunar DXA machine (software version 8.50) manufactured by GE Medical Systems (Shanghai, China). The Indian Council of Medical Research published reference data for BMD in the Indian population, derived from a population-based study conducted in healthy Indian individuals, which were used to analyze the BMD results from the Lunar DXA scan. The 2 sets of results were compared using the statistical software SPSS for Windows (version 16; SPSS Inc., Chicago, IL). A total of 238 male patients with a mean age of 57.2 yr (standard deviation ±15.9) were included. Overall, 26.4% (66/250) and 2.8% (7/250) of the scanned sites were classified in the osteoporosis group according to the Lunar database and the ICMRD, respectively. Out of the 250 DXA scan sites, 28.8% (19/66) and 60.0% (40/66) of the cases classified as osteoporosis by the Lunar database were reclassified as normal and osteopenia by the ICMRD, respectively. In conclusion, the Indian Council of Medical Research data underestimated the degree of osteoporosis in male subjects, which might result in treatment being deferred. In view of the discrepancy, the decision on treatment of osteoporosis should be based on multiple fracture risk factors and less heavily on the BMD T-score alone. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
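The reclassifications described above follow directly from how a T-score depends on the reference database. A minimal sketch using the WHO densitometric cut-offs; the BMD value and the two reference means/SDs below are hypothetical, chosen only to show how the same measurement can change category:

```python
def t_score(bmd, ref_mean, ref_sd):
    """Areal BMD T-score: SDs from the young-adult reference mean."""
    return (bmd - ref_mean) / ref_sd

def who_category(t):
    """WHO densitometric classification from a T-score."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"
```

With a hypothetical BMD of 0.70 g/cm2, a reference mean of 1.00 (SD 0.10) gives T = -3.0 (osteoporosis), while a lower reference mean of 0.85 (SD 0.10) gives T = -1.5 (osteopenia): the same scan, two categories, which is the mechanism behind the Lunar-versus-ICMRD discrepancy.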
Irinyi, Laszlo; Serena, Carolina; Garcia-Hermoso, Dea; Arabatzis, Michael; Desnos-Ollivier, Marie; Vu, Duong; Cardinali, Gianluigi; Arthur, Ian; Normand, Anne-Cécile; Giraldo, Alejandra; da Cunha, Keith Cassia; Sandoval-Denis, Marcelo; Hendrickx, Marijke; Nishikaku, Angela Satie; de Azevedo Melo, Analy Salles; Merseguel, Karina Bellinghausen; Khan, Aziza; Parente Rocha, Juliana Alves; Sampaio, Paula; da Silva Briones, Marcelo Ribeiro; e Ferreira, Renata Carmona; de Medeiros Muniz, Mauro; Castañón-Olivares, Laura Rosio; Estrada-Barcenas, Daniel; Cassagne, Carole; Mary, Charles; Duan, Shu Yao; Kong, Fanrong; Sun, Annie Ying; Zeng, Xianyu; Zhao, Zuotao; Gantois, Nausicaa; Botterel, Françoise; Robbertse, Barbara; Schoch, Conrad; Gams, Walter; Ellis, David; Halliday, Catriona; Chen, Sharon; Sorrell, Tania C; Piarroux, Renaud; Colombo, Arnaldo L; Pais, Célia; de Hoog, Sybren; Zancopé-Oliveira, Rosely Maria; Taylor, Maria Lucia; Toriello, Conchita; de Almeida Soares, Célia Maria; Delhaes, Laurence; Stubbe, Dirk; Dromer, Françoise; Ranque, Stéphane; Guarro, Josep; Cano-Lira, Jose F; Robert, Vincent; Velegraki, Aristea; Meyer, Wieland
2015-05-01
Human and animal fungal pathogens are a growing threat worldwide, leading to emerging infections and creating new risks for established ones. There is a growing need for rapid and accurate identification of pathogens to enable early diagnosis and targeted antifungal therapy. Morphological and biochemical identification methods are time-consuming and require trained experts. Alternatively, molecular methods such as DNA barcoding, a powerful and easy tool for rapid monophasic identification, offer a practical approach for species identification that is less demanding in terms of taxonomic expertise. However, its widespread use is still limited by a lack of quality-controlled reference databases and the evolving recognition and definition of new fungal species/complexes. An international consortium of medical mycology laboratories was formed with the aim of establishing a quality-controlled ITS database under the umbrella of the ISHAM working group on "DNA barcoding of human and animal pathogenic fungi." A new database, containing 2800 ITS sequences representing 421 fungal species, was established, providing the medical community with a freely accessible tool at http://www.isham.org/ and http://its.mycologylab.org/ to rapidly and reliably identify most agents of mycoses. The sequences included in the new database were used to evaluate the variation and overall utility of the ITS region for the identification of pathogenic fungi at the intra- and interspecies levels. The average intraspecies variation ranged from 0 to 2.25%. This highlighted selected pathogenic fungal species, such as the dermatophytes and emerging yeasts, for which additional molecular methods/genetic markers are required for reliable identification from clinical and veterinary specimens. © The Author 2015. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the mitochondrion) on the WWW.
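The AND/OR term searching of capability (i) can be sketched in a few lines. This is not dbEngine's actual implementation (which was a CGI-BIN program); the record layout and the 'id'/'text' field names are assumptions for illustration, not MitoDat's schema:

```python
def matches(text, terms, mode="AND"):
    """True if text contains all (AND) or any (OR) of the search terms."""
    lowered = text.lower()
    hits = [term.lower() in lowered for term in terms]
    return all(hits) if mode == "AND" else any(hits)

def search(records, terms, mode="AND"):
    """Filter a list of record dicts by combinations of terms.

    The 'text' field name is a hypothetical stand-in for whatever
    columns a spreadsheet export would provide.
    """
    return [r for r in records if matches(r["text"], terms, mode)]
```

In a CGI setting each matching record would then be rendered as a hypertext link rather than returned as a Python dict.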
Assessing operating characteristics of CAD algorithms in the absence of a gold standard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy Choudhury, Kingshuk; Paik, David S.; Yi, Chin A.
2010-04-15
Purpose: The authors examine potential bias when using a reference reader panel as "gold standard" for estimating operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. Methods: A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): Free search markings of four radiologists were compared to markings from four different CAD assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed to test hypotheses of differences between detection protocols. Results: In simulation studies, reader panel based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value <0.0001) than LCA (ARB -2% to -6%). Specificity was well estimated by both reader panel (ARB -0.6% to -0.5%) and LCA (ARB 1.4%-0.5%). Among the 1145 lesion candidates considered by the LIDC, the LCA estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than that of CAD assisted readers (68%). Average false positives per patient for reference readers (0.95) was not significantly lower (p-value 0.28) than for CAD assisted readers (1.27). Conclusions: Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means for evaluating diagnostic accuracy.
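Latent class analysis of the kind proposed above can be fit with a simple EM algorithm under the paper's conditional-independence assumption. The sketch below is a generic two-class (Hui-Walter style) estimator, not the authors' implementation; the starting values and iteration count are arbitrary choices:

```python
import numpy as np

def lca_em(results, prev=0.5, n_iter=200):
    """EM for a two-class latent class model with conditionally
    independent binary tests (Hui-Walter style sketch).

    results: (n_subjects, n_tests) array of 0/1 detections.
    Returns estimated prevalence, per-test sensitivity, specificity.
    Initial values are arbitrary starting points (assumption).
    """
    X = np.asarray(results, dtype=float)
    n, k = X.shape
    se = np.full(k, 0.8)   # initial sensitivities
    sp = np.full(k, 0.8)   # initial specificities
    p = prev               # initial prevalence
    for _ in range(n_iter):
        # E-step: posterior probability each case is truly positive,
        # multiplying per-test likelihoods (conditional independence).
        l1 = p * np.prod(se ** X * (1 - se) ** (1 - X), axis=1)
        l0 = (1 - p) * np.prod((1 - sp) ** X * sp ** (1 - X), axis=1)
        w = l1 / (l1 + l0)
        # M-step: re-estimate prevalence and operating characteristics.
        p = w.mean()
        se = (w[:, None] * X).sum(axis=0) / w.sum()
        sp = ((1 - w)[:, None] * (1 - X)).sum(axis=0) / (1 - w).sum()
    return p, se, sp
```

No reading is ever labeled "truth": the latent lesion status is inferred jointly with each protocol's sensitivity and specificity, which is exactly what removes the reference-panel bias.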
Hrovatin, Karin; Kunej, Tanja
2018-01-01
Historically, sex was determined by observation, which is not always feasible. Nowadays, genetic methods prevail due to their accuracy, simplicity, low cost, and time-efficiency. However, there is no comprehensive review enabling an overview and development of the field. The studies are heterogeneous, lacking a standardized reporting strategy. Therefore, our aim was to collect genetic sexing assays for mammals and assemble them in a catalogue with unified terminology. Publications were extracted from online databases using key words such as sexing and molecular. The collected data were supplemented with species and gene IDs and the type of sex-specific sequence variant (SSSV). We developed a catalogue and graphic presentation of diagnostic tests for molecular sex determination of mammals, based on 58 papers published from 2/1991 to 10/2016. The catalogue consists of five categories: species, genes, SSSVs, methods, and references. Based on the analysis of published literature, we propose minimal requirements for reporting, consisting of: species scientific name and ID, genetic sequence with name and ID, SSSV, methodology, genomic coordinates (e.g., restriction sites, SSSVs), amplification system, and description of detected amplicons and controls. The present study summarizes vast knowledge that has up to now been scattered across databases, representing the first step toward standardization of molecular sexing, enabling a better overview of existing tests and facilitating the planned design of novel tests. The project is ongoing; collecting additional publications, monitoring the development of the field, and standardizing data presentation are needed.
Developmental Fluoride Neurotoxicity: A Systematic Review and Meta-Analysis
Sun, Guifan; Zhang, Ying; Grandjean, Philippe
2012-01-01
Background: Although fluoride may cause neurotoxicity in animal models and acute fluoride poisoning causes neurotoxicity in adults, very little is known of its effects on children's neurodevelopment. Objective: We performed a systematic review and meta-analysis of published studies to investigate the association between increased fluoride exposure and delayed neurobehavioral development. Methods: We searched the MEDLINE, EMBASE, Water Resources Abstracts, and TOXNET databases through 2011 for eligible studies. We also searched the China National Knowledge Infrastructure (CNKI) database, because many studies on fluoride neurotoxicity have been published in Chinese journals only. In total, we identified 27 eligible epidemiological studies with high and reference exposures, end points of IQ scores, or related cognitive function measures with means and variances for the two exposure groups. Using random-effects models, we estimated the standardized mean difference between exposed and reference groups across all studies. We conducted sensitivity analyses restricted to studies using the same outcome assessment and having drinking-water fluoride as the only exposure. We performed the Cochran test for heterogeneity between studies, Begg's funnel plot, and the Egger test to assess publication bias, and conducted meta-regressions to explore sources of variation in mean differences among the studies. Results: The standardized weighted mean difference in IQ score between exposed and reference populations was –0.45 (95% confidence interval: –0.56, –0.35) using a random-effects model. Thus, children in high-fluoride areas had significantly lower IQ scores than those who lived in low-fluoride areas. Subgroup and sensitivity analyses also indicated inverse associations, although the substantial heterogeneity did not appear to decrease. Conclusions: The results support the possibility of an adverse effect of high fluoride exposure on children's neurodevelopment.
Future research should include detailed individual-level information on prenatal exposure, neurobehavioral performance, and covariates for adjustment. PMID:22820538
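The random-effects pooling of standardized mean differences described in the Methods can be sketched with the standard Cohen's d and DerSimonian-Laird estimators. This is a generic illustration of the technique, not the authors' code, and the example numbers in the usage note are invented:

```python
import math

def smd(m1, s1, n1, m0, s0, n0):
    """Standardized mean difference (Cohen's d) between an exposed
    group (m1, s1, n1) and a reference group (m0, s0, n0), with its
    approximate sampling variance."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n0 - 1) * s0**2) / (n1 + n0 - 2))
    d = (m1 - m0) / pooled_sd
    var = (n1 + n0) / (n1 * n0) + d**2 / (2 * (n1 + n0))
    return d, var

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with a 95% CI (DerSimonian-Laird)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

Each study contributes one (d, var) pair from its group means and variances; when the heterogeneity statistic Q exceeds its degrees of freedom, tau2 becomes positive and down-weights precise but discordant studies.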
AN ASSESSMENT OF GROUND TRUTH VARIABILITY USING A "VIRTUAL FIELD REFERENCE DATABASE"
A "Virtual Field Reference Database (VFRDB)" was developed from field measurement data that included location and time, physical attributes, flora inventory, and digital imagery (camera) documentation for 1,011 sites in the Neuse River basin, North Carolina. The sampling f...
NASA Astrophysics Data System (ADS)
Avison, Janine; Barham, Richard
2014-01-01
This document and the accompanying spreadsheets constitute the final report for key comparison CCAUV.A-K5 on the pressure calibration of laboratory standard microphones in the frequency range from 2 Hz to 10 kHz. Twelve national measurement institutes took part in the key comparison and the National Physical Laboratory piloted the project. Two laboratory standard microphones IEC type LS1P were circulated to the participants and results in the form of regular calibration certificates were collected throughout the project. One of the microphones was subsequently deemed to have compromised stability for the purpose of deriving a reference value. Consequently the key comparison reference value (KCRV) has been made based on the weighted mean results for sensitivity level and for sensitivity phase from just one of the microphones. Corresponding degrees of equivalence (DoEs) have also been calculated and are presented. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCAUV, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
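The weighted-mean KCRV and degrees of equivalence described above follow a standard recipe, sketched below. The uncertainty reduction u_i^2 - u_KCRV^2 applies when a participant's result contributes to the KCRV, and the k=2 coverage factor is the usual convention; the details of the actual CCAUV.A-K5 analysis may differ:

```python
import math

def kcrv_weighted_mean(values, uncs):
    """Inverse-variance weighted mean and its standard uncertainty."""
    w = [1.0 / u**2 for u in uncs]
    kcrv = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    u_kcrv = math.sqrt(1.0 / sum(w))
    return kcrv, u_kcrv

def degrees_of_equivalence(values, uncs):
    """d_i = x_i - KCRV with expanded (k=2) uncertainty.

    The reduction u_i^2 - u_KCRV^2 accounts for the correlation between
    a participant's result and a KCRV it contributed to.
    """
    kcrv, u_kcrv = kcrv_weighted_mean(values, uncs)
    return [(v - kcrv, 2.0 * math.sqrt(u**2 - u_kcrv**2))
            for v, u in zip(values, uncs)]
```

Dropping the unstable microphone, as the report did, simply means excluding its results from the value/uncertainty lists before computing the weighted mean.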
Tephrabase: A tephrochronological database
NASA Astrophysics Data System (ADS)
Newton, Anthony
2015-04-01
Development of Tephrabase, a tephrochronological database, began over 20 years ago, and it was launched in June 1995 as one of the earliest scientific databases on the web. Tephrabase was designed from the start to include a wide range of tephrochronological data, including location, depth of the layer, geochemical composition (major to trace elements), physical properties (colour, grainsize, and mineral components), dating (both absolute/historical and radiometric), details of eruptions and the history of volcanic centres, as well as a reference database. Currently, Tephrabase contains details of over 1000 sites where tephra layers have been found, 3500 tephra layers, 3500 geochemical analyses and 2500 references. Tephrabase was originally developed to include tephra layers in Iceland and those of Icelandic origin found in NW Europe; it now also includes data on tephra layers from central Mexico and from the Laacher See eruption. The latter was developed as a supplement to the Iceland-centric nature of the rest of Tephrabase. A further extension to Tephrabase has seen the development of an automated method of producing tephra stratigraphic columns, calculating sediment accumulation rates between dated tephra layers in multiple profiles and mapping tephra layers across the landscape. Whilst Tephrabase has been successful and continues to be developed and updated, there are several issues which need to be addressed. More tephrochronological databases need to be developed, and these should allow connected/shared searches. This would provide worldwide coverage, but also the flexibility to develop spin-off small-scale extensions, such as those described above. Data uploading needs to be improved and simplified. This includes the need to clarify issues of quality control. Again, a common, standards-led approach seems appropriate. Researchers also need to be encouraged to contribute data to these databases.
Tephrabase was designed to include a variety of data, including physical properties and trace element compositions of the tephra layers. However, Tephrabase does not yet contain these data; Tephrabase and other databases need to include them. Tephra databases need to not only record details about tephra layers, but should also be tools for understanding environmental change and volcanic histories. This can be achieved through development of the databases themselves and through the creation of portals which draw data from multiple data sources.
A carcinogenic potency database of the standardized results of animal bioassays
Gold, Lois Swirsky; Sawyer, Charles B.; Magaw, Renae; Backman, Georganne M.; De Veciana, Margarita; Levinson, Robert; Hooper, N. Kim; Havender, William R.; Bernstein, Leslie; Peto, Richard; Pike, Malcolm C.; Ames, Bruce N.
1984-01-01
The preceding paper described our numerical index of carcinogenic potency, the TD50 and the statistical procedures adopted for estimating it from experimental data. This paper presents the Carcinogenic Potency Database, which includes results of about 3000 long-term, chronic experiments of 770 test compounds. Part II is a discussion of the sources of our data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. Part III is a guide to the plot of results presented in Part IV. A number of appendices are provided to facilitate use of the database. The plot includes information about chronic cancer tests in mammals, such as dose and other aspects of experimental protocol, histopathology and tumor incidence, TD50 and its statistical significance, dose response, author's opinion and literature reference. The plot readily permits comparisons of carcinogenic potency and many other aspects of cancer tests; it also provides quantitative information about negative tests. The range of carcinogenic potency is over 10 million-fold. PMID:6525996
Optics survivability support, volume 2
NASA Astrophysics Data System (ADS)
Wild, N.; Simpson, T.; Busdeker, A.; Doft, F.
1993-01-01
This volume of the Optics Survivability Support Final Report contains plots of all the data contained in the computerized Optical Glasses Database. All of these plots are accessible through the Database, but are included here as a convenient reference. The first three pages summarize the types of glass included, with a description of the radiation source, test date, and the original data reference. This information is included in the database as a macro button labeled 'LLNL DATABASE'. Following this summary is an Abbe chart showing which glasses are included and where they lie as a function of ν_d and n_d. This chart is also callable through the database as a macro button labeled 'ABBEC'.
Wright, T.L.; Takahashi, T.J.
1998-01-01
The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available for download from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.
Nuclear Science References Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B., E-mail: pritychenko@bnl.gov; Běták, E.; Singh, B.
2014-06-15
The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).
A standards-based clinical information system for HIV/AIDS.
Stitt, F W
1995-01-01
To create a clinical data repository to interface the Veterans Administration (VA) Decentralized Hospital Computer Program (DHCP) and a departmental clinical information system for the management of HIV patients. This system supports record-keeping, decision-making, reporting, and analysis. The database development was designed to overcome two impediments to successful implementations of clinical databases: (i) lack of a standard reference data model; and (ii) lack of a universal standard for medical concept representation. Health Level Seven (HL7) is a standard protocol that specifies the implementation of interfaces between two computer applications (sender and receiver) from different vendors or sources for electronic data exchange in the health care environment. This eliminates or substantially reduces the custom interface programming and program maintenance that would otherwise be required. HL7 defines the data to be exchanged, the timing of the interchange, and the communication of errors to the application. The formats are generic in nature and must be configured to meet the needs of the two applications involved. The standard conceptually operates at the seventh level of the ISO model for Open Systems Interconnection (OSI). HL7 simply defines the data elements that are exchanged as abstract messages, and does not prescribe the exact bit stream of the messages that flow over the network. Lower level network software developed according to the OSI model may be used to encode and decode the actual bit stream. The OSI protocols are not universally implemented and, therefore, a set of encoding rules for defining the exact representation of a message must be specified. The VA has created an HL7 module to assist DHCP applications in exchanging health care information with other applications using the HL7 protocol. The DHCP HL7 module consists of a set of utility routines and files that provide a generic interface to the HL7 protocol for all DHCP applications.
The VA's DHCP core modules are in standard use at 169 hospitals, and the role of the VA system in health care delivery has been discussed elsewhere. This development was performed at the Miami VA Medical Center Special Immunology Unit, where a database was created for an HIV patient registry in 1987. Over 2,300 patients have been entered into a database that supports a problem-oriented summary of the patient's clinical record. The interface to the VA DHCP was designed and implemented to capture information from the patient treatment file, pharmacy, laboratory, radiology, and other modules. We obtained a suite of programs for implementing the HL7 encoding rules from Columbia-Presbyterian Medical Center in New York, written in ANSI C. This toolkit isolates our application programs from the details of the HL7 encoding rules, and allows them to deal with abstract messages at the programming level. While HL7 has become a standard for healthcare message exchange, SQL (Structured Query Language) is the standard for database definition, data manipulation, and query. The target database (Stitt F.W. The Problem-Oriented Medical Synopsis: a patient-centered clinical information system. Proc 17 SCAMC. 1993:88-93) provides clinical workstation functionality. Medical concepts are encoded using a preferred terminology derived from over 15 sources that include the Unified Medical Language System and SNOMed International (Stitt F.W. The Problem-Oriented Medical Synopsis: coding, indexing, and classification sub-model. Proc 18 SCAMC, 1994: in press). The databases were modeled using Information Engineering CASE tools, and were written using relational database utilities, including embedded SQL in C (ESQL/C). We linked ESQL/C programs to the HL7 toolkit to allow data to be inserted, deleted, or updated, under transaction control. A graphical format will be used to display the entity-rel
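The abstract HL7 messages discussed above are, on the wire, delimiter-separated text: segments separated by carriage returns, fields by '|'. A naive splitter makes that structure concrete; the sample message and its field values are invented for illustration, and a real parser must also handle the MSH-1/MSH-2 separator conventions, repetition, and escape sequences:

```python
def parse_hl7(message):
    """Split a pipe-delimited HL7 v2 message into {segment_id: [field lists]}.

    Naive sketch: ignores component (^), repetition (~) and escape
    handling, and the special numbering of MSH fields.
    """
    parsed = {}
    for segment in message.strip().split("\r"):
        fields = segment.split("|")
        parsed.setdefault(fields[0], []).append(fields[1:])
    return parsed

# Hypothetical ADT admission message (all values invented).
msg = ("MSH|^~\\&|DHCP|VAMC|SYNOPSIS|UNIT|199501011200||ADT^A01|0001|P|2.1\r"
       "PID|1||123456||DOE^JOHN")
```

An encoding-rules toolkit like the one obtained from Columbia-Presbyterian sits exactly at this layer, so that application code works with the parsed abstract message rather than the raw bit stream.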
Montedori, Alessandro; Abraha, Iosief; Chiatti, Carlos; Cozzolino, Francesco; Orso, Massimiliano; Luchetta, Maria Laura; Rimland, Joseph M; Ambrosio, Giuseppe
2016-09-15
Administrative healthcare databases are useful to investigate the epidemiology, health outcomes, quality indicators and healthcare utilisation concerning peptic ulcers and gastrointestinal bleeding, but the databases need to be validated in order to be a reliable source for research. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases, 9th and 10th Revision (ICD-9 and ICD-10) codes for peptic ulcer and upper gastrointestinal bleeding diagnoses. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. We will include validation studies that used administrative data to identify peptic ulcer disease and upper gastrointestinal bleeding diagnoses or studies that evaluated the validity of peptic ulcer and upper gastrointestinal bleeding codes in administrative data. The following inclusion criteria will be used: (a) the presence of a reference standard case definition for the diseases of interest; (b) the presence of at least one test measure (eg, sensitivity, etc) and (c) the use of an administrative database as a source of data. Pairs of reviewers will independently abstract data using standardised forms and will evaluate quality using the checklist of the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocol (PRISMA-P) 2015 statement. Ethics approval is not required given that this is a protocol for a systematic review. We will submit the results of this study to a peer-reviewed journal for publication.
The results will serve as a guide for researchers validating administrative healthcare databases to determine appropriate case definitions for peptic ulcer disease and upper gastrointestinal bleeding, as well as to perform outcome research using administrative healthcare databases of these conditions. CRD42015029216. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Interpretation guidelines of a standard Y-chromosome STR 17-plex PCR-CE assay for crime casework.
Roewer, Lutz; Geppert, Maria
2012-01-01
Y-STR analysis is an invaluable tool for examining evidence in sexual assault cases and in other forensic casework. Unambiguous detection of the male component in DNA mixtures with a high female background remains the main field of application of forensic Y-STR haplotyping. In recent years, powerful technologies, including a 17-locus multiplex PCR assay, have been introduced in forensic laboratories. At the same time, statistical methods have been developed and adapted for the interpretation of a nonrecombining, linearly inherited marker such as the Y chromosome, which shows a strongly clustered geographical distribution due to its uniparental inheritance and the patrilocality of ancestral groups. Large population databases, namely the Y-STR Haplotype Reference Database (YHRD), have been established to assess the evidentiary value of Y-STR matches by means of frequency estimation methods (counting and extrapolation).
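The counting method mentioned above is simple to state: a haplotype's frequency is estimated by its relative count in the reference database. A minimal sketch with invented haplotypes; the (x+1)/(N+1) variant shown is one common conservative convention, not necessarily the exact estimator used by YHRD:

```python
def counting_estimate(haplotype, database):
    """Plain counting estimate of a Y-STR haplotype frequency: the number
    of database entries matching the haplotype, divided by the size N."""
    x = sum(1 for h in database if h == haplotype)
    return x / len(database)

def augmented_estimate(haplotype, database):
    """A conservative variant, (x + 1) / (N + 1), which assigns a nonzero
    frequency even to haplotypes never observed in the database."""
    x = sum(1 for h in database if h == haplotype)
    return (x + 1) / (len(database) + 1)

# Toy database of four haplotypes over three loci (repeat numbers invented).
db = [(14, 29, 23), (14, 29, 23), (15, 30, 24), (16, 28, 22)]
f_plain = counting_estimate((14, 29, 23), db)   # observed twice in four
f_aug = augmented_estimate((17, 31, 25), db)    # never observed
```

Because Y-STR loci are linked, the whole haplotype is counted as a unit; multiplying per-locus frequencies, as one would for unlinked autosomal loci, is not valid here.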
Jobs within a 30-minute transit ride - Service
This mapping service summarizes the total number of jobs that can be reached within 30 minutes by transit. EPA modeled accessibility via transit by calculating total travel time between block group centroids, inclusive of walking to/from transit stops, wait times, and transfers. Block groups that can be accessed in 30 minutes or less from the origin block group are considered accessible. Values reflect public transit service in December 2012 and employment counts in 2010. Coverage is limited to census block groups within metropolitan regions served by transit agencies that share their service data in a standardized format called GTFS. All variable names refer to variables in EPA's Smart Location Database. For instance, EmpTot10_sum summarizes total employment (EmpTot10) in block groups that are reachable within a 30-minute transit and walking commute. See the Smart Location Database User Guide for full variable descriptions.
Jobs within a 30-minute transit ride - Download
A collection of performance indicators for consistently comparing neighborhoods (census block groups) across the US in regard to their accessibility to jobs or workers via public transit service. Accessibility was modeled by calculating total travel time between block group centroids, inclusive of walking to/from transit stops, wait times, and transfers. Block groups that can be accessed in 30 minutes or less from the origin block group are considered accessible. Indicators reflect public transit service in December 2012 and employment/worker counts in 2010. Coverage is limited to census block groups within metropolitan regions served by transit agencies that share their service data in a standardized format called GTFS. All variable names refer to variables in EPA's Smart Location Database. For instance, EmpTot10_sum summarizes total employment (EmpTot10) in block groups that are reachable within a 30-minute transit and walking commute. See the Smart Location Database User Guide for full variable descriptions.
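The accessibility indicator described above reduces to summing employment over all destination block groups whose total door-to-door transit travel time from the origin falls within the threshold. A minimal sketch with an invented travel-time matrix; the variable names echo, but are not taken from, the Smart Location Database:

```python
def jobs_within_threshold(origin, travel_times, employment, threshold_min=30):
    """Sum employment (e.g. an EmpTot10-style job count) over block groups
    reachable from the origin within the threshold. travel_times maps
    (origin, destination) pairs to total minutes including walk access,
    wait time, and transfers."""
    return sum(
        employment[dest]
        for (o, dest), minutes in travel_times.items()
        if o == origin and minutes <= threshold_min
    )

# Hypothetical travel times (minutes) and job counts per block group.
times = {("A", "A"): 0, ("A", "B"): 18, ("A", "C"): 29, ("A", "D"): 41}
jobs = {"A": 500, "B": 1200, "C": 300, "D": 2000}
emp_tot_sum = jobs_within_threshold("A", times, jobs)  # A, B, C qualify; D does not
```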
APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES
An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...
Aerospace Medicine and Biology: A Continuing Bibliography. Supplement 476
NASA Technical Reports Server (NTRS)
1998-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-1998-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract.
Thermodynamics of Enzyme-Catalyzed Reactions Database
National Institute of Standards and Technology Data Gateway
SRD 74 Thermodynamics of Enzyme-Catalyzed Reactions Database (Web, free access) The Thermodynamics of Enzyme-Catalyzed Reactions Database contains thermodynamic data on enzyme-catalyzed reactions that have been recently published in the Journal of Physical and Chemical Reference Data (JPCRD). For each reaction the following information is provided: the reference for the data, the reaction studied, the name of the enzyme used and its Enzyme Commission number, the method of measurement, the data and an evaluation thereof.
Almeida, Mathieu; Hébert, Agnès; Abraham, Anne-Laure; Rasmussen, Simon; Monnet, Christophe; Pons, Nicolas; Delbès, Céline; Loux, Valentin; Batto, Jean-Michel; Leonard, Pierre; Kennedy, Sean; Ehrlich, Stanislas Dusko; Pop, Mihai; Montel, Marie-Christine; Irlinger, Françoise; Renault, Pierre
2014-12-13
Microbial communities of traditional cheeses are complex and insufficiently characterized. The origin, safety and functional role in cheese making of these microbial communities are still not well understood. Metagenomic analysis of these communities by high-throughput shotgun sequencing is a promising approach to characterizing their genomic and functional profiles. Such analyses, however, critically depend on the availability of appropriate reference genome databases against which the sequencing reads can be aligned. We built a reference genome catalog suitable for short-read metagenomic analysis using a low-cost sequencing strategy. We selected 142 bacteria isolated from dairy products, belonging to 137 different species and 67 genera, and succeeded in reconstructing the draft genome of 117 of them at a standard or high quality level, including isolates from the genera Kluyvera, Luteococcus and Marinilactibacillus, previously missing from public databases. To demonstrate the potential of this catalog, we analysed the microbial composition of the surface of two smear cheeses and one blue-veined cheese, and showed that a significant part of the microbiota of these traditional cheeses was composed of microorganisms newly sequenced in our study. Our study provides data which, combined with publicly available genome references, represent the most expansive catalog to date of cheese-associated bacteria. Using this extended dairy catalog, we revealed the presence in traditional cheese of dominant microorganisms not deliberately inoculated, mainly Gram-negative species such as Pseudoalteromonas haloplanktis or Psychrobacter immobilis, that may contribute to the characteristics of cheese produced through traditional methods.
A Relational Database System for Student Use.
ERIC Educational Resources Information Center
Fertuck, Len
1982-01-01
Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)
Clauson, Kevin A; Polen, Hyla H; Peak, Amy S; Marsh, Wallace A; DiScala, Sandra L
2008-11-01
Clinical decision support tools (CDSTs) on personal digital assistants (PDAs) and online databases assist healthcare practitioners who make decisions about dietary supplements. To assess and compare the content of PDA dietary supplement databases and their online counterparts used as CDSTs. A total of 102 question-and-answer pairs were developed within 10 weighted categories of the most clinically relevant aspects of dietary supplement therapy. PDA versions of AltMedDex, Lexi-Natural, Natural Medicines Comprehensive Database, and Natural Standard and their online counterparts were assessed by scope (percent of correct answers present), completeness (3-point scale), ease of use, and a composite score integrating all 3 criteria. Descriptive statistics and inferential statistics, including a chi-square test, Scheffé's multiple comparison test, McNemar's test, and the Wilcoxon signed rank test, were used to analyze data. The scope scores for PDA databases were: Natural Medicines Comprehensive Database 84.3%, Natural Standard 58.8%, Lexi-Natural 50.0%, and AltMedDex 36.3%, with Natural Medicines Comprehensive Database statistically superior (p < 0.01). Completeness scores were: Natural Medicines Comprehensive Database 78.4%, Natural Standard 51.0%, Lexi-Natural 43.5%, and AltMedDex 29.7%. Lexi-Natural was superior in ease of use (p < 0.01). Composite scores for PDA databases were: Natural Medicines Comprehensive Database 79.3, Natural Standard 53.0, Lexi-Natural 48.0, and AltMedDex 32.5, with Natural Medicines Comprehensive Database superior (p < 0.01). There was no difference between the scope for PDA and online database pairs with Lexi-Natural (50.0% and 53.9%, respectively) or Natural Medicines Comprehensive Database (84.3% and 84.3%, respectively) (p > 0.05), whereas differences existed for AltMedDex (36.3% vs 74.5%, respectively) and Natural Standard (58.8% vs 80.4%, respectively) (p < 0.01). 
For composite scores, AltMedDex and Natural Standard online were better than their PDA counterparts (p < 0.01). Natural Medicines Comprehensive Database achieved significantly higher scope, completeness, and composite scores compared with other dietary supplement PDA CDSTs in this study. There was no difference between the PDA and online databases for Lexi-Natural and Natural Medicines Comprehensive Database, whereas online versions of AltMedDex and Natural Standard were significantly better than their PDA counterparts.
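The abstract does not give the exact formula used to combine scope, completeness, and ease of use into the composite score; a simple weighted-average sketch of the general idea, with invented weights:

```python
def composite_score(scope_pct, completeness_pct, ease_pct,
                    weights=(0.5, 0.3, 0.2)):
    """Illustrative weighted composite of the three evaluation criteria.
    The weights here are invented; the study's actual weighting scheme
    is not stated in the abstract."""
    w_scope, w_comp, w_ease = weights
    return w_scope * scope_pct + w_comp * completeness_pct + w_ease * ease_pct

# Invented ease-of-use score (70.0) combined with two scores quoted above.
score = composite_score(84.3, 78.4, 70.0)
```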
2016-01-01
The widespread use of ultrasonography places it in a key position for use in the risk stratification of thyroid nodules. The French proposal is a five-tier system, our version of a thyroid imaging reporting and database system (TI-RADS), which includes a standardized vocabulary and report and a quantified risk assessment. It allows the selection of the nodules that should be referred for fine-needle aspiration biopsies. Effort should be directed towards merging the different risk stratification systems utilized around the world and testing this unified system with multi-center studies. PMID:26324117
NASA Astrophysics Data System (ADS)
Roach, Colin; Carlsson, Johan; Cary, John R.; Alexander, David A.
2002-11-01
The National Transport Code Collaboration (NTCC) has developed an array of software, including a data client/server. The data server, which is written in C++, serves local data (in the ITER Profile Database format) as well as remote data (by accessing one or several MDS+ servers). The client, a web-invocable Java applet, provides a uniform, intuitive, user-friendly, graphical interface to the data server. The uniformity of the interface relieves the user from the trouble of mastering the differences between different data formats and lets them focus on the essentials: plotting and viewing the data. The user runs the client by visiting a web page using any Java-capable Web browser. The client is automatically downloaded and run by the browser. A reference to the data server is then retrieved via the standard Web protocol (HTTP). The communication between the client and the server is then handled by the mature, industry-standard CORBA middleware. CORBA has bindings for all common languages and many high-quality implementations are available (both Open Source and commercial). The NTCC data server has been installed at the ITPA International Multi-tokamak Confinement Profile Database, which is hosted by the UKAEA at Culham Science Centre. The installation of the data server is protected by an Internet firewall. To make it accessible to clients outside the firewall, some modifications of the server were required. The working version of the ITPA confinement profile database is not open to the public. Authentication of legitimate users is done utilizing built-in Java security features to demand a password to download the client. We present an overview of the NTCC data client/server and some details of how the CORBA firewall-traversal issues were resolved and how the user authentication is implemented.
Computational Thermochemistry of Jet Fuels and Rocket Propellants
NASA Technical Reports Server (NTRS)
Crawford, T. Daniel
2002-01-01
The design of new high-energy density molecules as candidates for jet and rocket fuels is an important goal of modern chemical thermodynamics. The NASA Glenn Research Center is home to a database of thermodynamic data for over 2000 compounds related to this goal, in the form of least-squares fits of heat capacities, enthalpies, and entropies as functions of temperature over the range 300-6000 K. The Chemical Equilibrium with Applications (CEA) program, written and maintained by researchers at NASA Glenn over the last fifty years, makes use of this database for modeling the performance of potential rocket propellants. During its long history, the NASA Glenn database has been developed based on experimental results and data published in the scientific literature, such as the standard JANAF tables. The recent development of efficient computational techniques based on quantum chemical methods provides an alternative source of information for expansion of such databases. For example, it is now possible to model dissociation or combustion reactions of small molecules to high accuracy using techniques such as coupled cluster theory or density functional theory. Unfortunately, the current applicability of reliable computational models is limited to relatively small molecules containing only around a dozen (non-hydrogen) atoms. We propose to extend the applicability of coupled cluster theory, often referred to as the 'gold standard' of quantum chemical methods, to molecules containing 30-50 non-hydrogen atoms. The centerpiece of this work is the concept of local correlation, in which the description of the electron interactions, known as electron correlation effects, is reduced to only the most important localized components. Such an advance has the potential to greatly expand the current reach of computational thermochemistry and thus to have a significant impact on the theoretical study of jet and rocket propellants.
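The least-squares fits mentioned above are polynomial fits of heat capacity (and related functions) in temperature. As an illustration, the older seven-coefficient NASA polynomial form for Cp/R can be evaluated as below; note that the current NASA Glenn database uses a nine-coefficient variant, and the coefficients shown are invented, not taken from the database:

```python
def cp_over_R(T, a):
    """Dimensionless heat capacity Cp/R from the classic seven-coefficient
    NASA polynomial form, whose first five coefficients give
    Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
    (the remaining two coefficients serve the enthalpy and entropy fits)."""
    a1, a2, a3, a4, a5 = a[:5]
    return a1 + a2 * T + a3 * T**2 + a4 * T**3 + a5 * T**4

# Invented coefficients for illustration only (not a real species fit).
coeffs = (3.5, 1.0e-4, 0.0, 0.0, 0.0, 0.0, 0.0)
cp1000 = cp_over_R(1000.0, coeffs)  # Cp/R at 1000 K for the toy fit
```

Multiplying by the gas constant R recovers Cp in J/(mol·K); separate coefficient sets are normally fitted for low- and high-temperature ranges.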
Endo, Akira; Shiraishi, Atsushi; Fushimi, Kiyohide; Murata, Kiyoshi; Otomo, Yasuhiro
2017-06-07
The aim of this study was to evaluate the associations of severe trauma patient volume with survival benefit and health care costs. The effect of trauma patient volume on survival benefit is inconclusive, and reports on its effects on health care costs are scarce. We conducted a retrospective observational study, including trauma patients who were transferred to government-approved tertiary emergency hospitals, or hospitals with an intensive care unit that provided an equivalent quality of care, using a Japanese nationwide administrative database. We categorized hospitals according to their annual severe trauma patient volumes [1 to 50 (reference), 51 to 100, 101 to 150, 151 to 200, and ≥201]. We evaluated the associations of volume categories with in-hospital survival and total cost per admission using a mixed-effects model adjusting for patient severity and hospital characteristics. A total of 116,329 patients from 559 hospitals were analyzed. Significantly increased in-hospital survival rates were observed in the second, third, fourth, and highest volume categories compared with the reference category [94.2% in the highest volume category vs 88.8% in the reference category, adjusted odds ratio (95% confidence interval, 95% CI) = 1.75 (1.49-2.07)]. Furthermore, significantly lower costs (in US dollars) were observed in the second and fourth categories [mean (standard deviation) for fourth vs reference = $17,800 ($17,378) vs $20,540 ($32,412), adjusted difference (95% CI) = -$2559 (-$3896 to -$1221)]. Hospitals with high volumes of severe trauma patients were significantly associated with a survival benefit and lower total cost per admission.
Code of Federal Regulations, 2012 CFR
2012-10-01
... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...
An Online Resource for Flight Test Safety Planning
NASA Technical Reports Server (NTRS)
Lewis, Greg
2007-01-01
A viewgraph presentation describing an online database for flight test safety techniques is shown. The topics include: 1) Goal; 2) Test Hazard Analyses; 3) Online Database Background; 4) Data Gathering; 5) NTPS Role; 6) Organizations; 7) Hazard Titles; 8) FAR Paragraphs; 9) Maneuver Name; 10) Identified Hazard; 11) Matured Hazard Titles; 12) Loss of Control Causes; 13) Mitigations; 14) Database Now Open to the Public; 15) FAR Reference Search; 16) Record Field Search; 17) Keyword Search; and 18) Results of FAR Reference Search.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and an NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning: a typical HVS captures scenes by sparsity coding and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is accomplished with the model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), which yields a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. 
We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains the following distortion types: 227 JPEG2000 images, 233 JPEG images, 174 white-noise images, 174 Gaussian-blur images, and 174 fast-fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach can assess the quality of many kinds of distorted images and exhibits superior accuracy and monotonicity.
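The LS-SVM regression step can be illustrated in miniature. The sketch below fits a single-feature linear regressor in closed form with a ridge penalty, which is the essence of a linear-kernel LS-SVM; in the paper the inputs would be sparse-coding features of the image and the targets DMOS values. All numbers here are invented:

```python
def lssvm_linear_fit(X, y, lam=0.1):
    """Closed-form least-squares fit of y ~ w*x + b with a ridge penalty
    lam on w (the bias is unregularized). With a linear kernel, LS-SVM
    regression reduces to exactly this kind of regularized least squares."""
    n = len(X)
    sxx = sum(x * x for x in X) + lam          # regularized Gram term
    sx, sy = sum(X), sum(y)
    sxy = sum(x * t for x, t in zip(X, y))
    det = sxx * n - sx * sx                    # 2x2 normal-equation determinant
    w = (sxy * n - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return w, b

def predict(x, w, b):
    """Predicted quality score for a new feature value."""
    return w * x + b

# Toy training data: one invented feature vs. subjective quality score.
feats = [1.0, 2.0, 3.0, 4.0]
dmos = [2.0, 4.0, 6.0, 8.0]
w, b = lssvm_linear_fit(feats, dmos)
```

A real implementation would use a kernelized solver over high-dimensional sparse-code features; the closed-form 2x2 solve is only for exposition.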
Oostdik, Kathryn; Lenz, Kristy; Nye, Jeffrey; Schelling, Kristin; Yet, Donald; Bruski, Scott; Strong, Joshua; Buchanan, Clint; Sutton, Joel; Linner, Jessica; Frazier, Nicole; Young, Hays; Matthies, Learden; Sage, Amber; Hahn, Jeff; Wells, Regina; Williams, Natasha; Price, Monica; Koehler, Jody; Staples, Melisa; Swango, Katie L; Hill, Carolyn; Oyerly, Karen; Duke, Wendy; Katzilierakis, Lesley; Ensenberger, Martin G; Bourdeau, Jeanne M; Sprecher, Cynthia J; Krenke, Benjamin; Storts, Douglas R
2014-09-01
The original CODIS database based on 13 core STR loci has been overwhelmingly successful for matching suspects with evidence. Yet there remain situations that argue for inclusion of more loci and increased discrimination. The PowerPlex(®) Fusion System allows simultaneous amplification of the following loci: Amelogenin, D3S1358, D1S1656, D2S441, D10S1248, D13S317, Penta E, D16S539, D18S51, D2S1338, CSF1PO, Penta D, TH01, vWA, D21S11, D7S820, D5S818, TPOX, DYS391, D8S1179, D12S391, D19S433, FGA, and D22S1045. The comprehensive list of loci amplified by the system generates a profile compatible with databases based on either the expanded CODIS or European Standard Set (ESS) requirements. Developmental validation testing followed SWGDAM guidelines and demonstrated the quality and robustness of the PowerPlex(®) Fusion System across a number of variables. Consistent and high-quality results were compiled using data from 12 separate forensic and research laboratories. The results verify that the PowerPlex(®) Fusion System is a robust and reliable STR-typing multiplex suitable for human identification. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
EPA's Toxicity Reference Database (ToxRefDB) was developed by the National Center for Computational Toxicology in partnership with EPA's Office of Pesticide Programs, to store data derived from in vivo animal toxicity studies [www.epa.gov/ncct/toxrefdb/]. The initial build of To...
Design of a diagnostic encyclopaedia using AIDA.
van Ginneken, A M; Smeulders, A W; Jansen, W
1987-01-01
Diagnostic Encyclopaedia Workstation (DEW) is the name of a digital encyclopaedia constructed to contain reference knowledge with respect to the pathology of the ovary. Comparing DEW with the common sources of reference knowledge (i.e. books) leads to the following advantages of DEW: it contains more verbal knowledge, pictures and case histories, and it offers information adjusted to the needs of the user. Based on an analysis of the structure of this reference knowledge we have chosen AIDA to develop a relational database and we use a video-disc player to contain the pictorial part of the database. The system consists of a database input version and a read-only run version. The design of the database input version is discussed. Reference knowledge for ovary pathology requires 1-3 Mbytes of memory. At present 15% of this amount is available. The design of the run version is based on an analysis of which information must necessarily be specified to the system by the user to access a desired item of information. Finally, the use of AIDA in constructing DEW is evaluated.
The purpose of this SOP is to outline a standard approach to naming and defining variables, data types, and data entry forms. This procedure applies to all working databases created during the NHEXAS project and the "Border" study. Keywords: databases; standards.
The National...
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2017-08-18
The objective of this research is to compare relational and non-relational (NoSQL) database system approaches for storing, retrieving, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of increasing complexity, which were performed on them. Similar appropriate results available in the literature have also been considered. Relational and non-relational NoSQL database systems both show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases generally perform better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
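The study's central empirical observation, near-linear query-time growth with very different slopes per back end, can be summarized by fitting a least-squares slope to measured execution times against database size. A sketch with invented timings:

```python
def linear_slope(sizes, times):
    """Ordinary least-squares slope of execution time versus database size.
    If query time grows roughly linearly, this slope is the single number
    that distinguishes the back ends."""
    n = len(sizes)
    mx = sum(sizes) / n
    my = sum(times) / n
    num = sum((x - mx) * (y - my) for x, y in zip(sizes, times))
    den = sum((x - mx) ** 2 for x in sizes)
    return num / den

# Hypothetical timings (seconds) at three database sizes; the numbers
# are invented, chosen only to mimic "steeper relational, shallower NoSQL".
sizes = [10_000, 100_000, 1_000_000]
sql_times = [0.5, 5.2, 51.0]     # relational: steeper growth
nosql_times = [0.2, 1.9, 18.5]   # document-based NoSQL: shallower growth
```

In a real benchmark each timing would be the median of repeated runs of the same standardized query against ISO/EN 13606 extracts.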
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
NASA Astrophysics Data System (ADS)
Power, O.; Solve, S.; Chayramy, R.; Stock, M.
2012-01-01
As part of the on-going BIPM key comparison BIPM.EM-K11.b, a comparison of the 10 V voltage reference standards of the BIPM and the National Standards Authority of Ireland-National Metrology Laboratory (NSAI-NML), Dublin, Ireland, was carried out from February to March 2012. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM_C (ZC) and BIPM_D (ZD), were transported by freight to NSAI-NML. At NSAI-NML, the reference standard for DC voltage at the 10 V level consists of a group of characterized Zener diode-based electronic voltage standards. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the group standard. At the BIPM the travelling standards were calibrated, before and after the measurements at NSAI-NML, with the Josephson voltage standard. Results of all measurements were corrected for the dependence of the output voltages on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the value assigned to the DC voltage standard by NSAI-NML, at the level of 10 V, at NSAI-NML, U_NML, and that assigned by the BIPM, at the BIPM, U_BIPM, at the reference date of 23 February 2012: U_NML - U_BIPM = +0.83 µV, u_c = 1.35 µV, at 10 V, where u_c is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at NSAI-NML, based on K_J-90, and the uncertainty related to the comparison. The final result is impacted by the anomalous offset between the NSAI-NML results for the two transfer standards. The reason for this offset has not been determined. However, the difference remains within the total combined standard uncertainty. Therefore, the comparison result shows that the voltage standards maintained by NSAI-NML and the BIPM were equivalent, within their stated expanded uncertainties, on the mean date of the comparison. Main text. 
To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Tsybovskii, I S; Veremeichik, V M; Kotova, S A; Kritskaya, S V; Evmenenko, S A; Udina, I G
2017-02-01
For the Republic of Belarus, development of a forensic reference database on the basis of 18 autosomal microsatellites (STR) is described, using a population dataset (N = 1040), a “familial” genotypic dataset (N = 2550) obtained from paternity testing casework, and a dataset of genotypes from a criminal registration database (N = 8756). The population samples studied consist of 80% ethnic Belarusians and 20% individuals of other nationality or of mixed origin (by questionnaire data). Genotypes of 12,346 inhabitants of the Republic of Belarus from 118 regional samples, typed at 18 autosomal microsatellites, are included: 16 tetranucleotide STR (D2S1338, TPOX, D3S1358, CSF1PO, D5S818, D8S1179, D7S820, THO1, vWA, D13S317, D16S539, D18S51, D19S433, D21S11, F13B, and FGA) and two pentanucleotide STR (Penta D and Penta E). The samples studied are in Hardy–Weinberg equilibrium according to the distribution of genotypes at the 18 STR. Significant differences were not detected between discrete populations or between samples from various historical ethnographic regions of the Republic of Belarus (Western and Eastern Polesie, Podneprovye, Ponemanye, Poozerye, and Center), which indicates the absence of prominent genetic differentiation. Statistically significant differences between the studied genotypic datasets also were not detected, which made it possible to combine the datasets and consider the total sample as a unified forensic reference database for the 18 “criminalistic” STR loci. Differences in the distribution of autosomal STR alleles between the reference database of the Republic of Belarus and those of Russians and Ukrainians also were not detected, consistent with the close genetic relationship of the three Eastern Slavic nations mediated by common origin and intense mutual migrations. Significant differences at individual STR loci between the reference database of the Republic of Belarus and populations of Southern and Western Slavs were observed. 
The necessity of using original reference database for support of forensic expertise practice in the Republic of Belarus was demonstrated.
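The Hardy–Weinberg check mentioned above can be illustrated with a minimal sketch: a chi-square goodness-of-fit of observed genotype counts against the frequencies expected under equilibrium at one STR locus. The allele labels and sample below are invented for illustration, not taken from the Belarusian dataset.

```python
from collections import Counter
from itertools import combinations_with_replacement

def hwe_chi_square(genotypes):
    """Chi-square goodness-of-fit of observed genotype counts against
    Hardy-Weinberg expectations at one autosomal STR locus.
    `genotypes` is a list of (allele_a, allele_b) tuples."""
    n = len(genotypes)
    allele_counts = Counter(a for g in genotypes for a in g)
    total_alleles = 2 * n
    freq = {a: c / total_alleles for a, c in allele_counts.items()}
    obs = Counter(tuple(sorted(g)) for g in genotypes)
    chi2 = 0.0
    for a, b in combinations_with_replacement(sorted(freq), 2):
        # Expected genotype frequency: p^2 for homozygotes, 2pq otherwise
        p = freq[a] ** 2 if a == b else 2 * freq[a] * freq[b]
        expected = p * n
        if expected > 0:
            chi2 += (obs.get((a, b), 0) - expected) ** 2 / expected
    return chi2

# Toy sample of 100 individuals at a hypothetical tetranucleotide locus;
# counts are chosen to match HWE expectations exactly, so chi2 is 0
sample = [(8, 8)] * 16 + [(8, 11)] * 48 + [(11, 11)] * 36
print(round(hwe_chi_square(sample), 3))  # 0.0
```

In practice an exact test is preferred for multiallelic STR loci with sparse genotype tables; the chi-square form above only shows the shape of the computation.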
Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-01-01
Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
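The vocabulary-mapping step reported above (96% to 99% of condition records mapped) can be sketched as a lookup from source codes to standard concept identifiers, with the mapping rate tracked as a quality metric. The lookup table and concept IDs below are placeholders; a real OMOP ETL would query the standardized vocabulary tables rather than a hard-coded dictionary.

```python
# Hypothetical mapping of (vocabulary, source code) to a standard concept id.
# These entries are invented for illustration only.
SOURCE_TO_CONCEPT = {
    ("ICD9CM", "250.00"): 201826,
    ("ICD9CM", "401.9"): 320128,
}

def map_records(records):
    """Map (vocabulary, code) pairs to standard concepts; return the
    mapped rows and the fraction successfully mapped."""
    mapped, unmapped = [], 0
    for vocab, code in records:
        concept_id = SOURCE_TO_CONCEPT.get((vocab, code))
        if concept_id is None:
            unmapped += 1          # flagged for manual review in a real ETL
        else:
            mapped.append((vocab, code, concept_id))
    rate = len(mapped) / len(records) if records else 0.0
    return mapped, rate

rows = [("ICD9CM", "250.00"), ("ICD9CM", "401.9"), ("ICD9CM", "V99.9")]
_, rate = map_records(rows)
print(f"{rate:.0%} mapped")  # 67% mapped
```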
Maes, Dirk; Vanreusel, Wouter; Herremans, Marc; Vantieghem, Pieter; Brosens, Dimitri; Gielen, Karin; Beck, Olivier; Van Dyck, Hans; Desmet, Peter; Natuurpunt, Vlinderwerkgroep
2016-01-01
Abstract In this data paper, we describe two datasets derived from two sources, which collectively represent the most complete overview of butterflies in Flanders and the Brussels Capital Region (northern Belgium). The first dataset (further referred to as the INBO dataset – http://doi.org/10.15468/njgbmh) contains 761,660 records of 70 species and is compiled by the Research Institute for Nature and Forest (INBO) in cooperation with the Butterfly working group of Natuurpunt (Vlinderwerkgroep). It is derived from the database Vlinderdatabank at the INBO, which consists of (historical) collection and literature data (1830-2001), for which all butterfly specimens in institutional and available personal collections were digitized and all entomological and other relevant publications were checked for butterfly distribution data. It also contains observations and monitoring data for the period 1991-2014. The latter type were collected by a (small) butterfly monitoring network where butterflies were recorded using a standardized protocol. The second dataset (further referred to as the Natuurpunt dataset – http://doi.org/10.15468/ezfbee) contains 612,934 records of 63 species and is derived from the database http://waarnemingen.be, hosted at the nature conservation NGO Natuurpunt in collaboration with Stichting Natuurinformatie. This dataset contains butterfly observations by volunteers (citizen scientists), mainly since 2008. Together, these datasets currently contain a total of 1,374,594 records, which are georeferenced using the centroid of their respective 5 × 5 km² Universal Transverse Mercator (UTM) grid cell. Both datasets are published as open data and are available through the Global Biodiversity Information Facility (GBIF). PMID:27199606
Souza, C A; Oliveira, T C; Crovella, S; Santos, S M; Rabêlo, K C N; Soriano, E P; Carvalho, M V D; Junior, A F Caldas; Porto, G G; Campello, R I C; Antunes, A A; Queiroz, R A; Souza, S M
2017-04-28
The use of Y chromosome haplotypes, important for the detection of sexual crimes in forensics, has gained prominence with the use of databases that incorporate these genetic profiles in their systems. Here, we optimized and validated an amplification protocol for Y chromosome profile retrieval in reference samples using less material than commercial kits require. FTA® cards (Flinders Technology Associates) were used to support the oral cells of male individuals, which were amplified directly using the SwabSolution reagent (Promega). First, we optimized and validated the process to define the volume and cycling conditions. Three reference samples and nineteen 1.2 mm-diameter perforated discs were used per sample. Amplification of one or two discs (samples) with the PowerPlex® Y23 kit (Promega) was performed using 25, 26, and 27 thermal cycles. Twenty percent, 32%, and 100% reagent volumes, one disc, and 26 cycles were used for the control per sample. Thereafter, all samples (N = 270) were amplified using 27 cycles, one disc, and 32% reagents (optimized conditions). Data were analyzed using a study of balance values between fluorophore colors. In the samples analyzed with 20% volume, an imbalance was observed in peak heights, both within and between dyes. In samples amplified with 32% reagents, the values obtained for the intra-color and inter-color standard balance calculations for verification of the quality of the analyzed peaks were similar to those of samples amplified with 100% of the recommended volume. The quality of the profiles obtained with 32% reagents was suitable for insertion into databases.
Diet History Questionnaire: Database Revision History
The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.
Initiative for standardization of reporting genetics of male infertility.
Traven, Eva; Ogrinc, Ana; Kunej, Tanja
2017-02-01
The number of publications on research of male infertility is increasing. Technologies used in research of male infertility generate complex results and various types of data that need to be appropriately managed, arranged, and made available to other researchers for further use. In our previous study, we collected over 800 candidate loci for male fertility in seven mammalian species. However, the continuation of the work towards a comprehensive database of candidate genes associated with different types of idiopathic human male infertility is challenging due to fragmented information, obtained from a variety of technologies and various omics approaches. Results are published in different forms and usually need to be extracted from the text, which hinders the gathering of information. Standardized reporting of genetic anomalies as well as causative and risk factors of male infertility therefore presents an important issue. The aim of the study was to collect examples of diverse genomic loci published in association with human male infertility and to propose a standardized format for reporting genetic causes of male infertility. From the currently available data we have selected 75 studies reporting 186 representative genomic loci which have been proposed as genetic risk factors for male infertility. Based on collected and formatted data, we suggested a first step towards unification of reporting the genetics of male infertility in original and review studies. The proposed initiative consists of five relevant data types: 1) genetic locus, 2) race/ethnicity, number of participants (infertile/controls), 3) methodology, 4) phenotype (clinical data, disease ontology, and disease comorbidity), and 5) reference. The proposed form for standardized reporting presents a baseline for further optimization with additional genetic and clinical information.
This data standardization initiative will enable faster multi-omics data integration, database development and sharing, establishing more targeted hypotheses, and facilitating biomarker discovery.
Widdifield, Jessica; Bernatsky, Sasha; Paterson, J Michael; Tu, Karen; Ng, Ryan; Thorne, J Carter; Pope, Janet E; Bombardier, Claire
2013-10-01
Health administrative data can be a valuable tool for disease surveillance and research. Few studies have rigorously evaluated the accuracy of administrative databases for identifying rheumatoid arthritis (RA) patients. Our aim was to validate administrative data algorithms to identify RA patients in Ontario, Canada. We performed a retrospective review of a random sample of 450 patients from 18 rheumatology clinics. Using rheumatologist-reported diagnosis as the reference standard, we tested and validated different combinations of physician billing, hospitalization, and pharmacy data. One hundred forty-nine rheumatology patients were classified as having RA and 301 were classified as not having RA based on our reference standard definition (study RA prevalence 33%). Overall, algorithms that included physician billings had excellent sensitivity (range 94-100%). Specificity and positive predictive value (PPV) were modest to excellent and increased when algorithms included multiple physician claims or specialist claims. The addition of RA medications did not significantly improve algorithm performance. The algorithm of "(1 hospitalization RA code ever) OR (3 physician RA diagnosis codes [claims] with ≥1 by a specialist in a 2-year period)" had a sensitivity of 97%, specificity of 85%, PPV of 76%, and negative predictive value of 98%. Most RA patients (84%) had an RA diagnosis code present in the administrative data within ±1 year of a rheumatologist's documented diagnosis date. We demonstrated that administrative data can be used to identify RA patients with a high degree of accuracy. RA diagnosis date and disease duration are fairly well estimated from administrative data in jurisdictions of universal health care insurance. Copyright © 2013 by the American College of Rheumatology.
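The best-performing case definition above is stated explicitly: one hospitalization RA code ever, OR three physician RA diagnosis claims with at least one from a specialist within a 2-year period. A minimal sketch of applying it to per-patient claims data, with invented record shapes (the real study ran against Ontario administrative databases, not tuples like these):

```python
from datetime import date, timedelta

def meets_ra_definition(hospitalization_codes, claims):
    """Apply the abstract's validated algorithm: (1 hospitalization RA
    code ever) OR (3 physician RA claims, >=1 by a specialist, within
    any 2-year window). `claims` is a list of (date, is_specialist)."""
    if hospitalization_codes:
        return True
    claims = sorted(claims)
    window = timedelta(days=730)  # 2-year window
    for i, (start, _) in enumerate(claims):
        in_window = [c for c in claims[i:] if c[0] - start <= window]
        if len(in_window) >= 3 and any(spec for _, spec in in_window):
            return True
    return False

claims = [(date(2010, 1, 5), False),
          (date(2010, 6, 1), True),
          (date(2011, 2, 10), False)]
print(meets_ra_definition([], claims))  # True: 3 claims, 1 specialist, <2 years
```

Sliding the window from each claim in turn ensures any qualifying 2-year span is found, not just spans anchored at the first claim.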
Analysis of the NMI01 marker for a population database of cannabis seeds.
Shirley, Nicholas; Allgeier, Lindsay; Lanier, Tommy; Coyle, Heather Miller
2013-01-01
We have analyzed the distribution of genotypes at a single hexanucleotide short tandem repeat (STR) locus in a Cannabis sativa seed database along with seed-packaging information. This STR locus is defined by the polymerase chain reaction amplification primers CS1F and CS1R and is referred to as NMI01 (for National Marijuana Initiative) in our study. The population database consists of seed seizures of two categories: seed samples from packages labeled and unlabeled with respect to seed bank source. In a population database of 93 processed seeds, including 12 labeled Cannabis varieties, the observed genotypes generated from single seeds exhibited between one and three peaks (potentially six alleles if in homozygous state). The total number of observed genotypes was 54, making this marker highly specific and highly individualizing, even among seeds of common lineage. Cluster analysis associated many but not all of the handwritten labeled seed varieties tested to date, as well as the National Park seizure, to our known reference database containing Mr. Nice Seedbank and Sensi Seeds commercially packaged reference samples. © 2012 American Academy of Forensic Sciences.
The EpiSLI Database: A Publicly Available Database on Speech and Language
ERIC Educational Resources Information Center
Tomblin, J. Bruce
2010-01-01
Purpose: This article describes a database that was created in the process of conducting a large-scale epidemiologic study of specific language impairment (SLI). As such, this database will be referred to as the EpiSLI database. Children with SLI have unexpected and unexplained difficulties learning and using spoken language. Although there is no…
Vanlierde, A; Soyeurt, H; Gengler, N; Colinet, F G; Froidmont, E; Kreuzer, M; Grandl, F; Bell, M; Lund, P; Olijhoek, D W; Eugène, M; Martin, C; Kuhla, B; Dehareng, F
2018-05-09
Evaluation and mitigation of enteric methane (CH₄) emissions from ruminant livestock, in particular from dairy cows, have acquired global importance for sustainable, climate-smart cattle production. Based on CH₄ reference measurements obtained with the SF₆ tracer technique to determine ruminal CH₄ production, a current equation permits evaluation of individual daily CH₄ emissions of dairy cows based on milk Fourier transform mid-infrared (FT-MIR) spectra. However, the respiration chamber (RC) technique is considered to be more accurate than SF₆ for measuring CH₄ production from cattle. This study aimed to develop an equation that allows estimating CH₄ emissions of lactating cows recorded in an RC from corresponding milk FT-MIR spectra and to challenge its robustness and relevance through validation processes and its application to a milk spectral database. This would permit confirming the conclusions drawn with the existing equation based on SF₆ reference measurements regarding the potential to estimate daily CH₄ emissions of dairy cows from milk FT-MIR spectra. A total of 584 RC reference CH₄ measurements (mean ± standard deviation of 400 ± 72 g of CH₄/d) and corresponding standardized milk mid-infrared spectra were obtained from 148 individual lactating cows between 7 and 321 d in milk in 5 European countries (Germany, Switzerland, Denmark, France, and Northern Ireland). The developed equation based on RC measurements showed calibration and cross-validation coefficients of determination of 0.65 and 0.57, respectively, which is lower than those obtained earlier by the equation based on 532 SF₆ measurements (0.74 and 0.70, respectively). This means that the RC-based model is unable to explain the variability observed in the corresponding reference data as well as the SF₆-based model. The standard errors of calibration and cross-validation were lower for the RC model (43 and 47 g/d vs. 66 and 70 g/d for the SF₆ version, respectively), indicating that the model based on RC data was closer to actual values. The root mean squared error (RMSE) of calibration of 42 g/d represents only 10% of the overall daily CH₄ production, which is 23 g/d lower than the RMSE for the SF₆-based equation. During the external validation step an RMSE of 62 g/d was observed. When the RC equation was applied to a standardized spectral database of milk recordings collected in the Walloon region of Belgium between January 2012 and December 2017 (1,515,137 spectra from 132,658 lactating cows in 1,176 different herds), an average ± standard deviation of 446 ± 51 g of CH₄/d was estimated, which is consistent with the range of the values measured using both RC and SF₆ techniques. This study confirmed that milk FT-MIR spectra could be used as a potential proxy to estimate daily CH₄ emissions from dairy cows provided that the variability to predict is covered by the model. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
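The RMSE figures quoted above (42 g/d calibration, 62 g/d external validation) follow the standard definition; a minimal sketch of the computation, with invented toy values rather than the study's data:

```python
import math

def rmse(predicted, reference):
    """Root mean squared error between model predictions and
    reference measurements (here, g CH4/d)."""
    assert len(predicted) == len(reference) and predicted
    return math.sqrt(
        sum((p - r) ** 2 for p, r in zip(predicted, reference))
        / len(predicted)
    )

# Invented toy values (g CH4/d), only to show the computation
ref = [380, 420, 450, 400]
pred = [370, 430, 440, 410]
print(rmse(pred, ref))  # 10.0
```

Expressing the RMSE as a fraction of the mean reference value, as the abstract does (42 g/d is about 10% of 400 g/d), gives a scale-free sense of model accuracy.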
NASA Astrophysics Data System (ADS)
Viallon, Joële; Moussay, Philippe; Idrees, Faraz; Wielgosz, Robert; Sanchez, Carmen; Morillo Gomez, Pilar
2015-01-01
As part of the ongoing key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the Instituto de Salud Carlos III (ISCIII) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Viallon, Joële; Moussay, Philippe; Idrees, Faraz; Wielgosz, Robert; Morillo Gomez, Pilar; Sánchez, Carmen
2011-01-01
As part of the ongoing key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the Instituto de Salud Carlos III (ISCIII) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
NASA Astrophysics Data System (ADS)
Viallon, Joële; Moussay, Philippe; Wielgosz, Robert; Morillo Gomez, Pilar; Sánchez Blaya, Carmen
2009-01-01
As part of the on-going key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the Instituto de Salud Carlos III (ISCIII) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone mole fraction range of 0 nmol/mol to 500 nmol/mol. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
NASA Astrophysics Data System (ADS)
Viallon, Joële; Moussay, Philippe; Wielgosz, Robert; Sanchez, Carmen; Morillo Gomez, Pilar
2017-01-01
As part of the ongoing key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the Instituto de Salud Carlos III (ISCIII) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Viallon, Joële; Moussay, Philippe; Idrees, Faraz; Wielgosz, Robert; Morillo Gomez, Pilar; Sánchez, Carmen
2013-01-01
As part of the ongoing key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the Instituto de Salud Carlos III (ISCIII) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Viallon, Joële; Idrees, Faraz; Moussay, Philippe; Wielgosz, Robert; Sweeney, Bryan; Quincey, Paul
2018-01-01
As part of the ongoing key comparison BIPM.QM-K1, a comparison has been performed between the ozone standard of the United Kingdom maintained by the National Physical Laboratory (NPL) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone amount-of-substance fraction range of 0 nmol/mol to 500 nmol/mol. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Viallon, Joële; Moussay, Philippe; Wielgosz, Robert; Heikens, Dita; van der Veen, Adrian
2017-01-01
As part of the ongoing key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the Netherlands maintained by the Van Swinden Laboratory (VSL) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone amount-of-substance fraction range from 0 nmol/mol to 500 nmol/mol. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
CODATA recommended values of the fundamental constants
NASA Astrophysics Data System (ADS)
Mohr, Peter J.; Taylor, Barry N.
2000-11-01
A review is given of the latest Committee on Data for Science and Technology (CODATA) adjustment of the values of the fundamental constants. The new set of constants, referred to as the 1998 values, replaces the values recommended for international use by CODATA in 1986. The values of the constants, and particularly the Rydberg constant, are of relevance to the calculation of precise atomic spectra. The standard uncertainty (estimated standard deviation) of the new recommended value of the Rydberg constant, which is based on precision frequency metrology and a detailed analysis of the theory, is approximately 1/160 times the uncertainty of the 1986 value. The new set of recommended values as well as a searchable bibliographic database that gives citations to the relevant literature is available on the World Wide Web at physics.nist.gov/constants and physics.nist.gov/constantsbib, respectively.
Mellor, David; Fuller-Tyszkiewicz, Matthew; McCabe, Marita P; Ricciardelli, Lina A; Skouteris, Helen; Mussap, Alexander J
2014-01-01
This study aimed to identify cultural-level variables that may influence the extent to which adolescents from different cultural groups are dissatisfied with their bodies. A sample of 1730 male and 2000 female adolescents from Australia, Fiji, Malaysia, Tonga, Tongans in New Zealand, China, Chile, and Greece completed measures of body satisfaction and the sociocultural influences on body image and body change questionnaire, and self-reported their height and weight. Country gross domestic product and national obesity rates were recorded using global databases. Prevalence of obesity/overweight and cultural endorsement of appearance standards explained variance in individual-level body dissatisfaction (BD) scores, even after controlling for the influence of individual differences in body mass index and internalization of appearance standards. Cultural-level variables may account for the development of adolescent BD.
Gradishar, William; Johnson, KariAnne; Brown, Krystal; Mundt, Erin; Manley, Susan
2017-07-01
There is a growing move to consult public databases following receipt of a genetic test result from a clinical laboratory; however, the well-documented limitations of these databases call into question how often clinicians will encounter discordant variant classifications that may introduce uncertainty into patient management. Here, we evaluate discordance in BRCA1 and BRCA2 variant classifications between a single commercial testing laboratory and a public database commonly consulted in clinical practice. BRCA1 and BRCA2 variant classifications were obtained from ClinVar and compared with the classifications from a reference laboratory. Full concordance and discordance were determined for variants whose ClinVar entries were of the same pathogenicity (pathogenic, benign, or uncertain). Variants with conflicting ClinVar classifications were considered partially concordant if ≥1 of the listed classifications agreed with the reference laboratory classification. Four thousand two hundred and fifty unique BRCA1 and BRCA2 variants were available for analysis. Overall, 73.2% of classifications were fully concordant and 12.3% were partially concordant. The remaining 14.5% of variants had discordant classifications, most of which had a definitive classification (pathogenic or benign) from the reference laboratory compared with an uncertain classification in ClinVar (14.0%). Here, we show that discrepant classifications between a public database and single reference laboratory potentially account for 26.7% of variants in BRCA1 and BRCA2. The time and expertise required of clinicians to research these discordant classifications call into question the practicality of checking all test results against a database and suggest that discordant classifications should be interpreted with these limitations in mind. With the increasing use of clinical genetic testing for hereditary cancer risk, accurate variant classification is vital to ensuring appropriate medical management.
There is a growing move to consult public databases following receipt of a genetic test result from a clinical laboratory; however, we show that up to 26.7% of variants in BRCA1 and BRCA2 have discordant classifications between ClinVar and a reference laboratory. The findings presented in this paper serve as a note of caution regarding the utility of database consultation. © AlphaMed Press 2017.
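The full/partial/discordant scheme defined above maps directly onto a small classifier. This is a sketch of the stated rules, not the study's actual pipeline; the class labels are simplified strings:

```python
def classify_concordance(lab_class, clinvar_classes):
    """Classify agreement between a reference-laboratory call and the
    (possibly conflicting) classifications listed for the same variant
    in a public database, per the scheme in the abstract:
    full      - all database entries match the laboratory call
    partial   - conflicting entries, but >=1 matches
    discordant- no entry matches."""
    unique = set(clinvar_classes)
    if unique == {lab_class}:
        return "full"
    if lab_class in unique:
        return "partial"
    return "discordant"

print(classify_concordance("pathogenic", ["pathogenic"]))       # full
print(classify_concordance("benign", ["benign", "uncertain"]))  # partial
print(classify_concordance("benign", ["uncertain"]))            # discordant
```

Note the abstract's 26.7% figure combines the partial (12.3%) and discordant (14.5%, rounded) buckets, i.e. every variant where at least one database entry disagrees with the laboratory.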
Pardo-Hernandez, Hector; Urrútia, Gerard; Barajas-Nava, Leticia A; Buitrago-Garcia, Diana; Garzón, Julieth Vanessa; Martínez-Zapata, María José; Bonfill, Xavier
2017-06-13
Systematic reviews provide the best evidence on the effect of health care interventions. They rely on comprehensive access to the available scientific literature. Electronic search strategies alone may not suffice, requiring the implementation of a handsearching approach. We have developed a database to provide an Internet-based platform from which handsearching activities can be coordinated, including a procedure to streamline the submission of these references into CENTRAL, the Cochrane Collaboration Central Register of Controlled Trials. We developed a database and performed a descriptive analysis. Through brainstorming and discussion among stakeholders involved in handsearching projects, we designed a database that met identified needs that had to be addressed in order to ensure the viability of handsearching activities. Three handsearching teams pilot tested the proposed database. Once the final version of the database was approved, we proceeded to train the staff involved in handsearching. The proposed database is called BADERI (Database of Iberoamerican Clinical Trials and Journals, by its initials in Spanish). BADERI was officially launched in October 2015, and it can be accessed at www.baderi.com/login.php free of cost. BADERI has an administration subsection, from which the roles of users are managed; a references subsection, where information associated with identified controlled clinical trials (CCTs) can be entered; a reports subsection, from which reports can be generated to track and analyse the results of handsearching activities; and a built-in free text search engine. BADERI allows all references to be exported in ProCite files that can be directly uploaded into CENTRAL. To date, 6284 references to CCTs have been uploaded to BADERI and sent to CENTRAL. The identified CCTs were published in a total of 420 journals related to 46 medical specialties. The year of publication ranged between 1957 and 2016.
BADERI allows the efficient management of handsearching activities across different countries and institutions. References to all CCTs available in BADERI can be readily submitted to CENTRAL for their potential inclusion in systematic reviews.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 490
NASA Technical Reports Server (NTRS)
1999-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-1999-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract. Two indexes (subject and author) are included after the abstract section.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 498
NASA Technical Reports Server (NTRS)
2000-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-1999-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 487
NASA Technical Reports Server (NTRS)
1999-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-1999-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract. Two indexes, subject and author, are included after the abstract section.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 482
NASA Technical Reports Server (NTRS)
1999-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-1999-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract.
Aerospace Medicine and Biology: A Continuing Bibliography With Indexes. Supplement 502
NASA Technical Reports Server (NTRS)
2000-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-2000-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract. Two indexes, subject and author, are included after the abstract section.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 489
NASA Technical Reports Server (NTRS)
1999-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-1999-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 477
NASA Technical Reports Server (NTRS)
1998-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-1998-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 478
NASA Technical Reports Server (NTRS)
1998-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-1998-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract.
Aerospace Medicine and Biology: A Continuing Bibliography with Indexes. Supplement 504
NASA Technical Reports Server (NTRS)
2000-01-01
This supplemental issue of Aerospace Medicine and Biology, A Continuing Bibliography with Indexes (NASA/SP-2000-7011) lists reports, articles, and other documents recently announced in the NASA STI Database. In its subject coverage, Aerospace Medicine and Biology concentrates on the biological, physiological, psychological, and environmental effects to which humans are subjected during and following simulated or actual flight in the Earth's atmosphere or in interplanetary space. References describing similar effects on biological organisms of lower order are also included. Such related topics as sanitary problems, pharmacology, toxicology, safety and survival, life support systems, exobiology, and personnel factors receive appropriate attention. Applied research receives the most emphasis, but references to fundamental studies and theoretical principles related to experimental development also qualify for inclusion. Each entry in the publication consists of a standard bibliographic citation accompanied, in most cases, by an abstract. Two indexes, subject and author, are included after the abstract section.
Cooper, Laurel; Meier, Austin; Laporte, Marie-Angélique; Elser, Justin L; Mungall, Chris; Sinn, Brandon T; Cavaliere, Dario; Carbon, Seth; Dunn, Nathan A; Smith, Barry; Qu, Botong; Preece, Justin; Zhang, Eugene; Todorovic, Sinisa; Gkoutos, Georgios; Doonan, John H; Stevenson, Dennis W; Arnaud, Elizabeth
2018-01-01
The Planteome project (http://www.planteome.org) provides a suite of reference and species-specific ontologies for plants and annotations to genes and phenotypes. Ontologies serve as common standards for semantic integration of a large and growing corpus of plant genomics, phenomics and genetics data. The reference ontologies include the Plant Ontology, Plant Trait Ontology and the Plant Experimental Conditions Ontology developed by the Planteome project, along with the Gene Ontology, Chemical Entities of Biological Interest, Phenotype and Attribute Ontology, and others. The project also provides access to species-specific Crop Ontologies developed by various plant breeding and research communities from around the world. We provide integrated data on plant traits, phenotypes, and gene function and expression from 95 plant taxa, annotated with reference ontology terms. The Planteome project is developing a plant gene annotation platform, Planteome Noctua, to facilitate community engagement. All the Planteome ontologies are publicly available and are maintained at the Planteome GitHub site (https://github.com/Planteome) for sharing, tracking revisions and new requests. The annotated data are freely accessible from the ontology browser (http://browser.planteome.org/amigo) and our data repository. PMID:29186578
Vitamin and Mineral Supplement Fact Sheets
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Matlejoane, A. M.; Magagula, L.; Stock, M.
2018-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.a and b, a comparison of the 1.018 V and 10 V voltage reference standards of the BIPM and the National Metrology Institute of South Africa, NMISA (South Africa), was carried out from April to June 2017. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPMA (ZA) and BIPMB (ZB), were transported by freight to NMISA and back to the BIPM. To keep the Zeners powered during transportation, a voltage stabilizer developed by the BIPM was connected in parallel to the internal battery. It consists of a set of two batteries, electrically protected against overcharge and deep discharge, easy to recharge, and designed to power two transfer standards for ten consecutive days. At NMISA, the reference standard for DC voltage is a Josephson Voltage Standard. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the primary standard. At the BIPM, the travelling standards were calibrated, before and after the measurements at NMISA, against the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by NMISA (U_NMISA), at the level of 1.018 V and 10 V, and those assigned by the BIPM (U_BIPM), at the reference dates of 19 and 18 May 2017, respectively: U_NMISA - U_BIPM = +0.07 μV (u_c = 0.02 μV) at 1.018 V, and U_NMISA - U_BIPM = +0.001 μV (u_c = 0.34 μV) at 10 V, where u_c is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at NMISA, based on K_J-90, and the uncertainty related to the comparison.
Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
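The environmental correction applied to the Zener readings in these comparisons is, in standard practice, a linear model in temperature and pressure. A minimal sketch in Python; the reference conditions and the two coefficients below are invented for illustration (real coefficients are determined individually for each instrument):

```python
def corrected_emf(v_measured, temp_c, pressure_hpa,
                  t_ref=23.0, p_ref=1013.25,
                  temp_coeff_uv_per_k=0.1, pres_coeff_uv_per_hpa=-0.02):
    """Refer a Zener standard's output voltage (in volts) back to
    reference temperature (degrees C) and pressure (hPa).
    Coefficient values are illustrative, not instrument data."""
    dv_uv = (temp_coeff_uv_per_k * (temp_c - t_ref)
             + pres_coeff_uv_per_hpa * (pressure_hpa - p_ref))
    return v_measured - dv_uv * 1e-6  # remove the environmental shift

# 1 K warmer and 10 hPa lower than reference: shift of 0.3 uV removed.
v = corrected_emf(10.000001, temp_c=24.0, pressure_hpa=1003.25)
```

At reference conditions the correction vanishes, so both laboratories' readings are compared on a common footing.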
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Ben Salah, B.; Mallat, A.; Abene, L.; Stock, M.
2016-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.a and b, a comparison of the 1.018 V and 10 V voltage reference standards of the BIPM and the Laboratoire de Métrologie Electrique, DEFNAT (Tunisia), was carried out from February to March 2016. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPMC (ZC) and BIPM6 (Z6), were transported by freight to DEFNAT and back to the BIPM. To keep the Zeners powered during transportation, a BIPM in-house voltage stabiliser was connected in parallel to the internal battery. The voltage stabiliser consists of a set of two batteries, electrically protected against overcharge and deep discharge, easy to recharge, and designed to power two transfer standards for ten consecutive days. At DEFNAT, the reference standard for DC voltage is a Josephson Voltage Standard. The output EMF (electromotive force) of each travelling standard was measured by direct comparison with the primary standard. At the BIPM, the travelling standards were calibrated, before and after the measurements at DEFNAT, against the Josephson Voltage Standard. Results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by DEFNAT (U_DEFNAT), at the level of 1.018 V and 10 V, and those assigned by the BIPM (U_BIPM), at the reference date of 26 February 2016: U_DEFNAT - U_BIPM = +0.07 μV (u_c = 0.04 μV) at 1.018 V, and U_DEFNAT - U_BIPM = +0.38 μV (u_c = 0.10 μV) at 10 V, where u_c is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at DEFNAT, based on K_J-90, and the uncertainty related to the comparison.
Daiba, Akito; Inaba, Niro; Ando, Satoshi; Kajiyama, Naoki; Yatsuhashi, Hiroshi; Terasaki, Hiroshi; Ito, Atsushi; Ogasawara, Masanori; Abe, Aki; Yoshioka, Junichi; Hayashida, Kazuhiro; Kaneko, Shuichi; Kohara, Michinori; Ito, Satoru
2004-03-19
We have designed and established a low-density (295 genes) cDNA microarray for the prediction of IFN efficacy in hepatitis C patients. To obtain precise and consistent microarray data, we collected a data set from three spots for each gene (mRNA) under three different scanning conditions. We also established an artificial reference RNA, representing pseudo-inflammatory conditions, from established hepatocyte cell lines supplemented with synthetic RNAs for 48 inflammatory genes. We also developed a novel algorithm that replaces the standard hierarchical-clustering method and allows the large data set to be handled with ease. This algorithm uses a standard space database (SSDB) as a key scale to calculate the Mahalanobis distance (MD) from the center of gravity of the SSDB. We further used the scaled distance sMD = MD/k, where k is a tuning parameter, as the predictive value. The prediction of the efficacy of conventional IFN mono-therapy was 100% accurate for non-responders (NR) vs. transient responders (TR)/sustained responders (SR) (P < 0.0005). Finally, we show that this method is acceptable for clinical application.
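The sMD score described in this abstract can be sketched in a few lines of NumPy. A minimal illustration only: the function name, the pseudo-inverse for numerical stability, and the toy data are assumptions, not details from the paper.

```python
import numpy as np

def smd_score(sample, reference, k=1.0):
    """Mahalanobis distance of `sample` from the centroid of a
    reference ("standard space") data set, scaled by parameter k."""
    reference = np.asarray(reference, dtype=float)
    mu = reference.mean(axis=0)                # center of gravity of the SSDB
    cov = np.cov(reference, rowvar=False)      # covariance across reference samples
    cov_inv = np.linalg.pinv(cov)              # pseudo-inverse for stability
    diff = np.asarray(sample, dtype=float) - mu
    md = float(np.sqrt(diff @ cov_inv @ diff)) # Mahalanobis distance
    return md / k                              # sMD = MD / k

# Toy example: reference cluster near the origin, one distant test sample.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(50, 3))
score = smd_score([5.0, 5.0, 5.0], ref, k=2.0)
```

A sample at the centroid scores 0; larger scores indicate expression profiles far from the standard space, which is how a cutoff on sMD can separate responder classes.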
Zhang, Peifen; Dreher, Kate; Karthikeyan, A.; Chi, Anjo; Pujar, Anuradha; Caspi, Ron; Karp, Peter; Kirkup, Vanessa; Latendresse, Mario; Lee, Cynthia; Mueller, Lukas A.; Muller, Robert; Rhee, Seung Yon
2010-01-01
Metabolic networks reconstructed from sequenced genomes or transcriptomes can help visualize and analyze large-scale experimental data, predict metabolic phenotypes, discover enzymes, engineer metabolic pathways, and study metabolic pathway evolution. We developed a general approach for reconstructing metabolic pathway complements of plant genomes. Two new reference databases were created and added to the core of the infrastructure: a comprehensive, all-plant reference pathway database, PlantCyc, and a reference enzyme sequence database, RESD, for annotating metabolic functions of protein sequences. PlantCyc (version 3.0) includes 714 metabolic pathways and 2,619 reactions from over 300 species. RESD (version 1.0) contains 14,187 literature-supported enzyme sequences from across all kingdoms. We used RESD, PlantCyc, and MetaCyc (an all-species reference metabolic pathway database), in conjunction with the pathway prediction software Pathway Tools, to reconstruct a metabolic pathway database, PoplarCyc, from the recently sequenced genome of Populus trichocarpa. PoplarCyc (version 1.0) contains 321 pathways with 1,807 assigned enzymes. Comparing PoplarCyc (version 1.0) with AraCyc (version 6.0, Arabidopsis [Arabidopsis thaliana]) showed comparable numbers of pathways distributed across all domains of metabolism in both databases, except for a higher number of AraCyc pathways in secondary metabolism and a 1.5-fold increase in carbohydrate metabolic enzymes in PoplarCyc. Here, we introduce these new resources and demonstrate the feasibility of using them to identify candidate enzymes for specific pathways and to analyze metabolite profiling data through concrete examples. These resources can be searched by text or BLAST, browsed, and downloaded from our project Web site (http://plantcyc.org). PMID:20522724
Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-05-01
To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization, and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded only because of identified data quality issues in the source systems; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria when using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
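The mapping-coverage figures quoted above (e.g. 96% to 99% of condition records) come down to counting how many source codes resolve to a standard concept. A toy sketch; the codes and concept identifiers below are made up for illustration and are not the actual OMOP vocabulary tables:

```python
# Hypothetical source-to-standard mapping table. Real OMOP ETLs use the
# Standardized Vocabularies (e.g. ICD-9-CM -> SNOMED), not this toy dict.
source_to_standard = {
    "250.00": "C1",   # illustrative standard concept id
    "401.9": "C2",
    "V70.0": None,    # source code with no standard mapping
}

def mapping_coverage(records, mapping):
    """Fraction of source records that map to a standard concept."""
    if not records:
        return 0.0
    mapped = sum(1 for code in records if mapping.get(code) is not None)
    return mapped / len(records)

records = ["250.00", "401.9", "V70.0", "250.00"]
coverage = mapping_coverage(records, source_to_standard)  # 3 of 4 records map
```

Running this per domain (conditions, drugs) yields exactly the kind of per-database coverage summary the study reports.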
Ferdynus, C; Huiart, L
2016-09-01
Administrative health databases such as the French National Health Insurance Database (SNIIRAM) are a major tool for answering numerous public health research questions. However, the use of such data requires complex and time-consuming data management. Our objective was to develop and make available a tool to optimize cohort constitution within administrative health databases. We developed a process to extract, transform and load (ETL) data from various heterogeneous sources into a standardized data warehouse. This data warehouse is architected as a star schema corresponding to the i2b2 star schema model. We then evaluated the performance of this ETL using data from a pharmacoepidemiology research project conducted in the SNIIRAM database. The ETL we developed comprises a set of functionalities for creating SAS scripts. Data can be integrated into a standardized data warehouse. As part of the performance assessment of this ETL, we integrated a dataset from the SNIIRAM comprising more than 900 million rows in less than three hours using a desktop computer. This enables patient selection from the standardized data warehouse within seconds of the request. The ETL described in this paper provides a tool which is effective and compatible with all administrative health databases, without requiring complex database servers. This tool should simplify cohort constitution in health databases; the standardization of warehouse data facilitates collaborative work between research teams. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
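An i2b2-style star schema of the kind this ETL targets can be illustrated with an in-memory SQLite database. The table and column names follow the i2b2 convention (observation_fact plus dimension tables), but the schema is heavily simplified and the rows are invented:

```python
import sqlite3

# Minimal star schema: one central fact table keyed to patient and
# concept dimensions, as in i2b2's OBSERVATION_FACT design.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient_dimension (patient_num INTEGER PRIMARY KEY, sex TEXT);
CREATE TABLE concept_dimension (concept_cd TEXT PRIMARY KEY, name TEXT);
CREATE TABLE observation_fact (
    patient_num INTEGER, concept_cd TEXT, start_date TEXT);
""")
con.execute("INSERT INTO patient_dimension VALUES (1, 'F')")
con.execute("INSERT INTO concept_dimension VALUES ('ATC:C07', 'Beta blocker')")
con.execute("INSERT INTO observation_fact VALUES (1, 'ATC:C07', '2015-06-01')")

# Cohort constitution becomes a single join over the star.
rows = con.execute("""
    SELECT p.patient_num FROM observation_fact f
    JOIN patient_dimension p ON p.patient_num = f.patient_num
    WHERE f.concept_cd = 'ATC:C07'
""").fetchall()
```

Because every observation lands in one fact table with standardized codes, patient selection is a single indexed query rather than bespoke per-source data management.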
Murugaiyan, J; Ahrholdt, J; Kowbel, V; Roesler, U
2012-05-01
The possibility of using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) for rapid identification of pathogenic and non-pathogenic species of the genus Prototheca has been recently demonstrated. A unique reference database of MALDI-TOF MS profiles for type and reference strains of the six generally accepted Prototheca species was established. The database quality was reinforced after the acquisition of 27 spectra for selected Prototheca strains, with three biological and technical replicates for each of 18 type and reference strains of Prototheca and four strains of Chlorella. This provides reproducible and unique spectra covering a wide m/z range (2000-20 000 Da) for each of the strains used in the present study. The reproducibility of the spectra was further confirmed by employing composite correlation index calculation and main spectra library (MSP) dendrogram creation, available with MALDI Biotyper software. The MSP dendrograms obtained were comparable with the 18S rDNA sequence-based dendrograms. These reference spectra were successfully added to the Bruker database, and the efficiency of identification was evaluated by cross-reference-based and unknown Prototheca identification. It is proposed that the addition of further strains would reinforce the reference spectra library for rapid identification of Prototheca strains to the genus and species/genotype level. © 2011 The Authors. Clinical Microbiology and Infection © 2011 European Society of Clinical Microbiology and Infectious Diseases.
Design of a Multi Dimensional Database for the Archimed DataWarehouse.
Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine
2005-01-01
The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the data warehouse database: 1) the granularity of the database, which refers to the level of detail or summarization of data; 2) the database model and architecture, describing how data are presented to end users and how new data are integrated; 3) the life cycle of the database, which ensures long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that make up the transactional Hospital Information System (HIS). Concurrently, building the data warehouse incrementally has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in source systems feeding the data warehouse.
National Water Quality Standards Database (NWQSD)
The National Water Quality Standards Database (NWQSD) provides access to EPA and state water quality standards (WQS) information in text, tables, and maps. This data source was last updated in December 2007 and will no longer be updated.
Annual Review of Database Development: 1992.
ERIC Educational Resources Information Center
Basch, Reva
1992-01-01
Reviews recent trends in databases and online systems. Topics discussed include new access points for established databases; acquisitions, consolidations, and competition between vendors; European coverage; international services; online reference materials, including telephone directories; political and legal materials and public records;…
Detection and Rectification of Distorted Fingerprints.
Si, Xuanbin; Feng, Jianjiang; Zhou, Jie; Luo, Yuxuan
2015-03-01
Elastic distortion of fingerprints is one of the major causes of false non-matches. While this problem affects all fingerprint recognition applications, it is especially dangerous in negative recognition applications, such as watchlist and deduplication applications, in which malicious users may purposely distort their fingerprints to evade identification. In this paper, we propose novel algorithms to detect and rectify skin distortion based on a single fingerprint image. Distortion detection is viewed as a two-class classification problem, for which the registered ridge orientation map and period map of a fingerprint are used as the feature vector and an SVM classifier is trained to perform the classification task. Distortion rectification (or, equivalently, distortion field estimation) is viewed as a regression problem, where the input is a distorted fingerprint and the output is the distortion field. To solve this problem, a database (called the reference database) of various distorted reference fingerprints and corresponding distortion fields is built in the offline stage; in the online stage, the nearest neighbor of the input fingerprint is found in the reference database and the corresponding distortion field is used to transform the input fingerprint into a normal one. Promising results have been obtained on three databases containing many distorted fingerprints, namely FVC2004 DB1, the Tsinghua Distorted Fingerprint database, and the NIST SD27 latent fingerprint database.
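The online rectification step is essentially a nearest-neighbor lookup in the offline reference database. A minimal sketch, assuming fingerprints have already been reduced to fixed-length feature vectors; the real method uses registered orientation and period maps, and the field shapes here are invented:

```python
import numpy as np

def rectify(features, ref_features, ref_fields):
    """Return the precomputed distortion field of the reference
    fingerprint nearest (in feature space) to the input."""
    d = np.linalg.norm(ref_features - features, axis=1)  # distance to each reference
    nearest = int(np.argmin(d))                          # index of the closest match
    return ref_fields[nearest]                           # its stored distortion field

# Toy offline database: 3 reference feature vectors, each paired with
# a 2x2 grid of 2-D displacement vectors as its "distortion field".
ref_feats = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
ref_fields = np.arange(24, dtype=float).reshape(3, 2, 2, 2)
field = rectify(np.array([0.9, 1.2]), ref_feats, ref_fields)
```

The returned field would then be applied (inverted) to warp the distorted input back toward a normal fingerprint before matching.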
Building a Patient-Reported Outcome Metric Database: One Hospital's Experience.
Rana, Adam J
2016-06-01
A number of provisions exist within the Patient Protection and Affordable Care Act that focus on improving the delivery of health care in the United States, including quality of care. From a total joint arthroplasty perspective, the issue of quality increasingly refers to quantifying patient-reported outcome metrics (PROMs). This article describes one hospital's experience in building and maintaining an electronic PROM database for a practice of 6 board-certified orthopedic surgeons. The surgeons advocated to and worked with the hospital to contract with a joint registry database company and hire a research assistant. They implemented a standardized process in which all surgical patients fill out patient-reported outcome questionnaires at designated intervals. To date, the group has collected patient-reported outcome data for >4500 cases. The data are frequently used in different venues at the hospital, including orthopedic quality-metric and research meetings. In addition, the results were used to develop an annual outcome report. The annual report is given to patients and primary care providers, and portions of it are being used in discussions with insurance carriers. Building an electronic database to collect PROMs is a group undertaking and requires a physician champion. A considerable amount of work needs to be done up front to make its introduction a success. Once established, a PROM database can provide a significant amount of information and data that can be used effectively in multiple capacities. Copyright © 2016 Elsevier Inc. All rights reserved.
Negative Effects of Learning Spreadsheet Management on Learning Database Management
ERIC Educational Resources Information Center
Vágner, Anikó; Zsakó, László
2015-01-01
A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…
Techniques of Photometry and Astrometry with APASS, Gaia, and Pan-STARRs Results (Abstract)
NASA Astrophysics Data System (ADS)
Green, W.
2017-12-01
(Abstract only) The databases with the APASS DR9, Gaia DR1, and Pan-STARRS 3pi DR1 data releases are publicly available for use. A certain amount of data mining is involved in downloading and managing these reference stars. This paper discusses the use of these databases to acquire accurate photometric references, as well as techniques for improving results. Images are prepared in the usual way: zero, dark, and flat-field corrections, with WCS solutions from Astrometry.net. Images are then processed with SExtractor to produce an ASCII table of identifying photometric features. The database manages photometric catalogs and images converted to ASCII tables. Scripts convert the files into SQL and assimilate them into database tables. Using SQL techniques, each image star is merged with reference data to produce publishable results. The VYSOS project has over 13,000 images of the ONC5 field to process, with roughly 100 total fields in the campaign. This paper provides the overview for this daunting task.
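Merging image stars with catalog reference stars amounts to a positional cross-match within a small tolerance. A plain-Python sketch using a small-angle plane approximation; the star names and coordinates are invented, and a production pipeline would do this in SQL or with a spatial index:

```python
import math

def crossmatch(image_stars, ref_stars, tol_arcsec=2.0):
    """Match each detected star (name, ra, dec in degrees) to the nearest
    reference star (id, ra, dec, mag) within the tolerance."""
    tol_deg = tol_arcsec / 3600.0
    matches = []
    for name, ra, dec in image_stars:
        best = None
        for ref_id, rra, rdec, mag in ref_stars:
            # Approximate angular separation for small offsets.
            dra = (ra - rra) * math.cos(math.radians(dec))
            sep = math.hypot(dra, dec - rdec)
            if sep <= tol_deg and (best is None or sep < best[0]):
                best = (sep, ref_id, mag)
        matches.append((name, best[1] if best else None,
                        best[2] if best else None))
    return matches

image = [("s1", 83.8221, -5.3911)]
refs = [("APASS-1", 83.8222, -5.3912, 12.3), ("APASS-2", 84.0, -5.0, 10.1)]
matched = crossmatch(image, refs)
```

Each matched pair carries the catalog magnitude needed to compute a photometric zero point for the frame.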
DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis
NASA Astrophysics Data System (ADS)
Pernigotti, D.; Belis, C. A.
2018-05-01
DeltaSA is an R package and a Java on-line tool developed at the EC Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor-analytical models (source identification) and the evaluation of model performance. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical-profile and time-series similarity. In this study, a sensitivity analysis of the model performance criteria is carried out using the results of a synthetic dataset for which "a priori" references are available. The consensus-modulated standard deviation punc is the best choice for model performance evaluation when a conservative approach is adopted.
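The ensemble-average reference and indicator-based evaluation can be illustrated with a minimal sketch; the function names, data, and the |z| <= 2 acceptance rule are assumptions for illustration, not the DeltaSA implementation:

```python
# Reference values are taken as the ensemble mean across accepted
# participants, and a candidate result is judged by a z-score against
# the ensemble spread. Values are fabricated source contributions.
def ensemble_reference(results):
    """Ensemble mean and sample standard deviation of accepted results."""
    n = len(results)
    mean = sum(results) / n
    var = sum((x - mean) ** 2 for x in results) / (n - 1)
    return mean, var ** 0.5

def z_score(candidate, ref_mean, ref_sd):
    return (candidate - ref_mean) / ref_sd

accepted = [10.2, 9.8, 10.5, 10.1, 9.9]   # e.g. ug/m3 from screened labs
mean, sd = ensemble_reference(accepted)
print(abs(z_score(10.3, mean, sd)) <= 2)  # common |z| <= 2 acceptance rule
```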
Decelle, Johan; Romac, Sarah; Stern, Rowena F; Bendif, El Mahdi; Zingone, Adriana; Audic, Stéphane; Guiry, Michael D; Guillou, Laure; Tessier, Désiré; Le Gall, Florence; Gourvil, Priscillia; Dos Santos, Adriana L; Probert, Ian; Vaulot, Daniel; de Vargas, Colomban; Christen, Richard
2015-11-01
Photosynthetic eukaryotes have a critical role as the main producers in most ecosystems of the biosphere. The ongoing environmental metabarcoding revolution opens the prospect of holistic, ecosystem-wide biological studies of these organisms, in particular the unicellular microalgae, which often lack distinctive morphological characters and have complex life cycles. To interpret environmental sequences, metabarcoding necessarily relies on taxonomically curated databases containing reference sequences of the targeted gene (or barcode) from identified organisms. To date, no such reference framework exists for photosynthetic eukaryotes. In this study, we built the PhytoREF database, which contains 6490 plastidial 16S rDNA reference sequences originating from a large diversity of eukaryotes representing all known major photosynthetic lineages. We compiled 3333 amplicon sequences available from public databases and 879 sequences extracted from plastidial genomes, and generated 411 novel sequences from cultured marine microalgal strains belonging to different eukaryotic lineages. A total of 1867 environmental Sanger 16S rDNA sequences were also included in the database. Stringent quality filtering and a phylogeny-based taxonomic classification were applied to each 16S rDNA sequence. The database mainly focuses on marine microalgae, but sequences from land plants (representing half of the PhytoREF sequences) and freshwater taxa were also included to broaden the applicability of PhytoREF to different aquatic and terrestrial habitats. PhytoREF, accessible via a web interface (http://phytoref.fr), is a new resource in molecular ecology to foster the discovery, assessment and monitoring of the diversity of photosynthetic eukaryotes using high-throughput sequencing. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Yu, Li-Juan; Wan, Wenchao; Karton, Amir
2016-11-01
We evaluate the performance of standard and modified MPn procedures for a wide set of thermochemical and kinetic properties, including atomization energies, structural isomerization energies, conformational energies, and reaction barrier heights. The reference data are obtained at the CCSD(T)/CBS level by means of the Wn thermochemical protocols. We find that none of the MPn-based procedures show acceptable performance for the challenging W4-11 and BH76 databases. For the other thermochemical/kinetic databases, the MP2.5 and MP3.5 procedures provide the most attractive accuracy-to-computational cost ratios. The MP2.5 procedure results in a weighted-total-root-mean-square deviation (WTRMSD) of 3.4 kJ/mol, whilst the computationally more expensive MP3.5 procedure results in a WTRMSD of 1.9 kJ/mol (the same WTRMSD obtained for the CCSD(T) method in conjunction with a triple-zeta basis set). We also assess the performance of the computationally economical CCSD(T)/CBS(MP2) method, which provides the best overall performance for all the considered databases, including W4-11 and BH76.
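The weighted-total-root-mean-square deviation (WTRMSD) quoted above can be sketched as follows; weighting each database's RMSD by its number of entries is an assumption for illustration, not necessarily the exact weighting scheme of the paper:

```python
import math

# Per-database RMSD of deviations from reference energies (kJ/mol),
# combined into a single weighted total. All input values are fabricated.
def rmsd(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def wtrmsd(databases):
    """databases: dict name -> list of deviations from reference values."""
    total = sum(len(v) for v in databases.values())
    return sum(len(v) / total * rmsd(v) for v in databases.values())

dbs = {
    "isomerization": [1.0, -2.0, 1.5],
    "barriers": [3.0, -1.0],
}
print(round(wtrmsd(dbs), 2))
```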
van Prehn, Joffrey; van Veen, Suzanne Q; Schelfaut, Jacqueline J G; Wessels, Els
2016-05-01
We compared the Vitek MS and Microflex MALDI-TOF mass spectrometry platforms for species differentiation within the Streptococcus mitis group, with PCR assays targeted at lytA, Spn9802, and recA as the reference standard. The Vitek MS correctly identified 10/11 Streptococcus pneumoniae, 13/13 Streptococcus pseudopneumoniae, and 12/13 S. mitis/oralis. The Microflex correctly identified 9/11 S. pneumoniae, 0/13 S. pseudopneumoniae, and 13/13 S. mitis/oralis. MALDI-TOF is a powerful tool for species determination within the mitis group, but diagnostic accuracy varies depending on the platform and database used. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Green Space, Violence, and Crime: A Systematic Review.
Bogar, Sandra; Beyer, Kirsten M
2016-04-01
To determine the state of evidence on relationships among urban green space, violence, and crime in the United States. Major bibliographic databases were searched for studies meeting inclusion criteria. Additional studies were culled from study references and authors' personal collections. Comparison among studies was limited by variations in study design and measurement, and results were mixed. However, more evidence supports the positive impact of green space on violence and crime, indicating great potential for green space to shape health-promoting environments. Numerous factors influence the relationships among green space, crime, and violence. Additional research and standardization among research studies are needed to better understand these relationships. © The Author(s) 2015.
U.S. initiatives to strengthen forensic science & international standards in forensic DNA.
Butler, John M
2015-09-01
A number of initiatives are underway in the United States in response to the 2009 critique of forensic science by a National Academy of Sciences committee. This article provides a broad review of activities including efforts of the White House National Science and Technology Council Subcommittee on Forensic Science and a partnership between the Department of Justice (DOJ) and the National Institute of Standards and Technology (NIST) to create the National Commission on Forensic Science and the Organization of Scientific Area Committees. These initiatives are seeking to improve policies and practices of forensic science. Efforts to fund research activities and aid technology transition and training in forensic science are also covered. The second portion of the article reviews standards in place or in development around the world for forensic DNA. Documentary standards are used to help define written procedures to perform testing. Physical standards serve as reference materials for calibration and traceability purposes when testing is performed. Both documentary and physical standards enable reliable data comparison, and standard data formats and common markers or testing regions are crucial for effective data sharing. Core DNA markers provide a common framework and currency for constructing DNA databases with compatible data. Recent developments in expanding core DNA markers in Europe and the United States are discussed. Published by Elsevier Ireland Ltd.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Code of Federal Regulations, 2012 CFR
2012-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Evaluation of consumer drug information databases.
Choi, J A; Sullivan, J; Pankaskie, M; Brufsky, J
1999-01-01
To evaluate prescription drug information contained in six consumer drug information databases available on CD-ROM, and to make health care professionals aware of the information provided, so that they may appropriately recommend these databases for use by their patients. Observational study of six consumer drug information databases: The Corner Drug Store, Home Medical Advisor, Mayo Clinic Family Pharmacist, Medical Drug Reference, Mosby's Medical Encyclopedia, and PharmAssist. Information on 20 frequently prescribed drugs was evaluated in each database. The databases were ranked using a point-scale system based on primary and secondary assessment criteria. For the primary assessment, 20 categories of information based on those included in the 1998 edition of the USP DI Volume II, Advice for the Patient: Drug Information in Lay Language were evaluated for each of the 20 drugs, and each database could earn up to 400 points (for example, 1 point was awarded if the database mentioned a drug's mechanism of action). For the secondary assessment, the inclusion of 8 additional features that could enhance the utility of the databases was evaluated (for example, 1 point was awarded if the database contained a picture of the drug), and each database could earn up to 8 points. The results of the primary and secondary assessments, listed in order of highest to lowest number of points earned, are as follows: Primary assessment--Mayo Clinic Family Pharmacist (379), Medical Drug Reference (251), PharmAssist (176), Home Medical Advisor (113.5), The Corner Drug Store (98), and Mosby's Medical Encyclopedia (18.5); secondary assessment--The Mayo Clinic Family Pharmacist (8), The Corner Drug Store (5), Mosby's Medical Encyclopedia (5), Home Medical Advisor (4), Medical Drug Reference (4), and PharmAssist (3). 
The Mayo Clinic Family Pharmacist was the most accurate and complete source of prescription drug information based on the USP DI Volume II and would be an appropriate database for health care professionals to recommend to patients.
NASA Astrophysics Data System (ADS)
Rochat, Bertrand
2017-04-01
High-resolution (HR) MS instruments recording HR full scans allow analysts to go beyond pre-acquisition choices. Untargeted acquisition can reveal unexpected compounds or concentrations and can be performed as a preliminary diagnostic attempt. The revealed compounds must then be identified for interpretation. Whereas reference standards are mandatory to confirm an identification, the diverse information collected from HRMS allows identifying unknown compounds with a relatively high degree of confidence without reference standards injected in the same analytical sequence. However, there is a need to evaluate the degree of confidence in putative identifications, possibly before further targeted analyses. This is why a confidence scale and a score for the identification of (non-peptidic) known-unknowns, defined as compounds with entries in databases, are proposed for (LC-)HRMS data. The scale is based on two representative documents, edited by the European Commission (2002/657/EC) and the Metabolomics Standards Initiative (MSI), in an attempt to build a bridge between the communities of metabolomics and screening labs. With this confidence scale, an identification (ID) score is determined as [a number, a letter, and a number] (e.g., 2D3) from the following three criteria: I, a General Identification Category (1, confirmed; 2, putatively identified; 3, annotated compounds/classes; and 4, unknown); II, a Chromatography Class based on the relative retention time (from the narrowest tolerance, A, to no chromatographic references, D); and III, an Identification Point Level (1, very high; 2, high; and 3, normal) based on the number of identification points collected. Three putative identification examples of known-unknowns are presented.
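Composing the proposed ID score from the three criteria can be sketched as follows; the helper function is hypothetical, not the authors' implementation:

```python
# Build an ID score string such as "2D3" from the three criteria
# described in the abstract above. The valid values follow the text;
# the function itself is an illustrative assumption.
def id_score(category, chrom_class, point_level):
    """category: 1 confirmed, 2 putative, 3 annotated, 4 unknown.
    chrom_class: 'A' (narrowest RRT tolerance) .. 'D' (no reference).
    point_level: 1 very high, 2 high, 3 normal."""
    assert category in (1, 2, 3, 4)
    assert chrom_class in "ABCD"
    assert point_level in (1, 2, 3)
    return f"{category}{chrom_class}{point_level}"

print(id_score(2, "D", 3))  # putative ID with no chromatographic reference
```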
Content validity of manual spinal palpatory exams - A systematic review
Najm, Wadie I; Seffinger, Michael A; Mishra, Shiraz I; Dickerson, Vivian M; Adams, Alan; Reinsch, Sibylle; Murphy, Linda S; Goodman, Arnold F
2003-01-01
Background Many health care professionals use spinal palpatory exams as a primary and well-accepted part of the evaluation of spinal pathology. However, few studies have explored the validity of spinal palpatory exams. To evaluate the status of the current scientific evidence, we conducted a systematic review to assess the content validity of spinal palpatory tests used to identify spinal neuro-musculoskeletal dysfunction. Methods Review of eleven databases and a hand search of peer-reviewed literature, published between 1965–2002, was undertaken. Two blinded reviewers abstracted pertinent data from the retrieved papers, using a specially developed quality-scoring instrument. Five papers met the inclusion/exclusion criteria. Results Three of the five papers included in the review explored the content validity of motion tests. Two of these papers focused on identifying the level of fixation (decreased mobility) and one focused on range of motion. All three studies used a mechanical model as a reference standard. Two of the five papers included in the review explored the validity of pain assessment using the visual analogue scale or the subjects' own report as reference standards. Overall the sensitivity of studies looking at range of motion tests and pain varied greatly. Poor sensitivity was reported for range of motion studies regardless of the examiner's experience. A slightly better sensitivity (82%) was reported in one study that examined cervical pain. Conclusions The lack of acceptable reference standards may have contributed to the weak sensitivity findings. Given the importance of spinal palpatory tests as part of the spinal evaluation and treatment plan, effort is required by all involved disciplines to create well-designed and implemented studies in this area. PMID:12734016
Jaakkimainen, R Liisa; Bronskill, Susan E; Tierney, Mary C; Herrmann, Nathan; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra; Widdifield, Jessica; Tu, Karen
2016-08-10
Population-based surveillance of Alzheimer's and related dementias (AD-RD) incidence and prevalence is important for chronic disease management and health system capacity planning. Algorithms based on health administrative data have been successfully developed for many chronic conditions. The increasing use of electronic medical records (EMRs) by family physicians (FPs) provides a novel reference standard by which to evaluate these algorithms as FPs are the first point of contact and providers of ongoing medical care for persons with AD-RD. We used FP EMR data as the reference standard to evaluate the accuracy of population-based health administrative data in identifying older adults with AD-RD over time. This retrospective chart abstraction study used a random sample of EMRs for 3,404 adults over 65 years of age from 83 community-based FPs in Ontario, Canada. AD-RD patients identified in the EMR were used as the reference standard against which algorithms identifying cases of AD-RD in administrative databases were compared. The highest performing algorithm was "one hospitalization code OR (three physician claims codes at least 30 days apart in a two year period) OR a prescription filled for an AD-RD specific medication" with sensitivity 79.3% (confidence interval (CI) 72.9-85.8%), specificity 99.1% (CI 98.8-99.4%), positive predictive value 80.4% (CI 74.0-86.8%), and negative predictive value 99.0% (CI 98.7-99.4%). This resulted in an age- and sex-adjusted incidence of 18.1 per 1,000 persons and adjusted prevalence of 72.0 per 1,000 persons in 2010/11. Algorithms developed from health administrative data are sensitive and specific for identifying older adults with AD-RD.
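The accuracy measures quoted above follow from a standard 2x2 confusion table of algorithm results against the EMR reference standard; a minimal sketch, with fabricated counts chosen only to approximate the reported values:

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table.
# The counts are made up for illustration, not taken from the study.
def accuracy_measures(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

m = accuracy_measures(tp=130, fp=32, fn=34, tn=3208)
print({k: round(v, 3) for k, v in m.items()})
```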
Shrestha, Swastina; Dave, Amish J; Losina, Elena; Katz, Jeffrey N
2016-07-07
Administrative health care data are frequently used to study disease burden and treatment outcomes in many conditions, including osteoarthritis (OA). OA is a chronic condition with significant disease burden, affecting over 27 million adults in the US. There are few studies examining the performance of administrative data algorithms to diagnose OA. The purpose of this study is to perform a systematic review of administrative data algorithms for OA diagnosis and to evaluate the diagnostic characteristics of algorithms based on their restrictiveness and reference standards. Two reviewers independently screened English-language articles published in Medline, Embase, PubMed, and Cochrane databases that used administrative data to identify OA cases. Each algorithm was classified as restrictive or less restrictive based on the number and type of administrative codes required to satisfy the case definition. We recorded the sensitivity and specificity of algorithms and calculated the positive likelihood ratio (LR+) and positive predictive value (PPV) based on an assumed OA prevalence of 0.1, 0.25, and 0.50. The search identified 7 studies that used 13 algorithms. Of these 13 algorithms, 5 were classified as restrictive and 8 as less restrictive. Restrictive algorithms had lower median sensitivity and higher median specificity compared to less restrictive algorithms when the reference standards were self-report and American College of Rheumatology (ACR) criteria. The algorithms compared to a reference standard of physician diagnosis had higher sensitivity and specificity than those compared to self-reported diagnosis or ACR criteria. Restrictive algorithms are more specific for OA diagnosis and can be used to identify cases when false positives have higher costs, e.g., interventional studies. Less restrictive algorithms are more sensitive and suited for studies that attempt to identify all cases, e.g., screening programs.
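The LR+ and prevalence-dependent PPV mentioned above follow from standard formulas (LR+ = sensitivity / (1 - specificity), and PPV via Bayes' rule); a brief sketch with illustrative sensitivity and specificity values:

```python
# Positive likelihood ratio and PPV at an assumed prevalence.
# The sensitivity/specificity inputs are fabricated for illustration.
def positive_lr(sens, spec):
    return sens / (1 - spec)

def ppv(sens, spec, prevalence):
    tp = sens * prevalence              # true-positive rate in population
    fp = (1 - spec) * (1 - prevalence)  # false-positive rate in population
    return tp / (tp + fp)

sens, spec = 0.60, 0.95
print(round(positive_lr(sens, spec), 1))
print([round(ppv(sens, spec, p), 2) for p in (0.1, 0.25, 0.5)])
```

This makes the paper's point concrete: the same algorithm yields very different PPVs as the assumed prevalence moves from 0.1 to 0.5.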
2016-04-01
Transition from Standard Reference Material 2806a to Standard Reference Material 2806b for Light Obscuration Particle Counting. Joel Schmitigal, Force Projection.
Kelishadi, Roya; Marateb, Hamid Reza; Mansourian, Marjan; Ardalan, Gelayol; Heshmat, Ramin; Adeli, Khosrow
2016-08-01
This study aimed to determine, for the first time, age- and gender-specific reference intervals for biomarkers of bone, metabolism, nutrition, and obesity in a nationally representative sample of Iranian children and adolescents. We assessed the data of blood samples obtained from healthy Iranian children and adolescents, aged 7 to 19 years. The reference intervals of glucose, lipid profile, liver enzymes, zinc, copper, chromium, magnesium, and 25-hydroxyvitamin D [25(OH)D] were determined according to the Clinical and Laboratory Standards Institute C28-A3 guidelines. The reference intervals were partitioned by age and gender using the Harris-Boyd method. The study population consisted of 4800 school students (50% boys, mean age 13.8 years). Twelve chemistry analytes were partitioned by age and gender, displaying the range of results between the 2.5th and 97.5th percentiles. Significant differences between boys and girls existed only at 18 to 19 years of age, for low-density lipoprotein cholesterol. 25(OH)D was the only analyte whose reference interval was similar across all age groups and both sexes. This study presents the first national database of reference intervals for a number of biochemical markers in Iranian children and adolescents, and the first report of its kind from the Middle East and North Africa. The findings underscore the importance of providing reference intervals for different ethnicities and in various regions.
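The 2.5th-97.5th percentile bounds that define such nonparametric reference intervals can be sketched as follows; this uses a simple rank-based percentile on fabricated data, while CLSI C28-A3 additionally prescribes minimum sample sizes and outlier handling not shown here:

```python
# Nonparametric reference interval: the central 95% of results from
# healthy subjects, bounded by the 2.5th and 97.5th percentiles.
def percentile(sorted_vals, p):
    """Linear-interpolation percentile on pre-sorted data (0 <= p <= 100)."""
    k = (len(sorted_vals) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

def reference_interval(values):
    s = sorted(values)
    return percentile(s, 2.5), percentile(s, 97.5)

glucose = [4.1 + 0.02 * i for i in range(101)]  # fabricated healthy results
lo, hi = reference_interval(glucose)
print(round(lo, 2), round(hi, 2))
```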
Critical assessment of human metabolic pathway databases: a stepping stone for future integration
2011-01-01
Background Multiple pathway databases are available that describe the human metabolic network and have proven their usefulness in many applications, ranging from the analysis and interpretation of high-throughput data to their use as a reference repository. However, so far the various human metabolic networks described by these databases have not been systematically compared and contrasted, nor has the extent to which they differ been quantified. For a researcher using these databases for particular analyses of human metabolism, it is crucial to know the extent of the differences in content and their underlying causes. Moreover, the outcomes of such a comparison are important for ongoing integration efforts. Results We compared the genes, EC numbers and reactions of five frequently used human metabolic pathway databases. The overlap is surprisingly low, especially at the reaction level, where the databases agree on only 3% of the 6968 reactions they contain in total. Even for the well-established tricarboxylic acid cycle, the databases agree on only 5 of the 30 reactions in total. We identified the main causes for the lack of overlap. Importantly, the databases are partly complementary. Other explanations include the number of steps in which a conversion is described and the number of possible alternative substrates listed. Missing metabolite identifiers and ambiguous names for metabolites also affect the comparison. Conclusions Our results show that each of the five networks compared provides a valuable piece of the puzzle of the complete reconstruction of the human metabolic network. To enable integration of the networks, in addition to the need for standardizing metabolite names and identifiers, the conceptual differences between the databases should be resolved. Considerable manual intervention is required to reach the ultimate goal of a unified and biologically accurate model for studying the systems biology of human metabolism. 
Our comparison provides a stepping stone for such an endeavor. PMID:21999653
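The reaction-level overlap computation described in the Results above can be illustrated with a toy sketch; the identifiers are made up, and a real comparison would first require the metabolite and reaction name normalization the paper discusses:

```python
# Shared versus combined reaction content across several databases,
# modeled as sets of normalized reaction identifiers (fabricated here).
dbs = {
    "db1": {"R00200", "R00341", "R01518", "R00703"},
    "db2": {"R00200", "R00341", "R09999"},
    "db3": {"R00200", "R01518", "R08888"},
}
union = set().union(*dbs.values())          # all reactions combined
shared = set.intersection(*dbs.values())    # reactions all databases agree on
print(len(shared), len(union), round(100 * len(shared) / len(union)))
```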
A survey of the current status of web-based databases indexing Iranian journals.
Merat, Shahin; Khatibzadeh, Shahab; Mesgarpour, Bita; Malekzadeh, Reza
2009-05-01
The scientific output of Iran has increased rapidly in recent years. Unfortunately, most papers are published in journals that are not indexed by popular indexing systems, and many of them are in Persian without English translation. This makes the results of Iranian scientific research unavailable to other researchers, including Iranians. The aim of this study was to evaluate the quality of current web-based databases indexing scientific articles published in Iran. We identified web-based databases which indexed scientific journals published in Iran using popular search engines. The sites were then subjected to a series of tests to evaluate their coverage, search capabilities, stability, accuracy of information, consistency, accessibility, ease of use, and other features. Results were compared with each other to identify the strengths and shortcomings of each site. Five web sites were identified. None had complete coverage of Iranian scientific journals. The search capabilities were less than optimal in most sites. English translations of research titles, author names, keywords, and abstracts of Persian-language articles did not follow standards. Numerous typing errors make searches ineffective and citation indexing unreliable. Some sites did not cover abstracts. None of the currently available indexing sites is capable of presenting Iranian research to the international scientific community. The government should intervene by enforcing policies designed to facilitate indexing through a systematic approach. The policies should address Iranian journals, authors, and indexing sites. Iranian journals should be required to provide their indexing data, including references, electronically; authors should provide correct indexing information to journals; and indexing sites should improve their software to meet standards set by the government.
The Toxicity Reference Database (ToxRefDB) contains approximately 30 years and $2 billion worth of animal studies. ToxRefDB allows scientists and the interested public to search and download thousands of animal toxicity testing results for hundreds of chemicals that were previously found only in paper documents. Currently, there are 474 chemicals in ToxRefDB, primarily the data rich pesticide active ingredients, but the number will continue to expand.
Online Database Coverage of Pharmaceutical Journals.
ERIC Educational Resources Information Center
Snow, Bonnie
1984-01-01
Describes compilation of data concerning pharmaceutical journal coverage in online databases which aid information providers in collection development and database selection. Methodology, results (a core collection, overlap, timeliness, geographic scope), and implications are discussed. Eight references and a list of 337 journals indexed online in…
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
16 CFR § 1102.6 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Background and Definitions § 1102.6 Definitions. (a... Database. (2) Commission or CPSC means the Consumer Product Safety Commission. (3) Consumer product means a... private labeler. (7) Publicly Available Consumer Product Safety Information Database, also referred to as...
Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.
ERIC Educational Resources Information Center
Pieska, K. A. O.
1986-01-01
Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Karbasy, Kimiya; Lin, Danny C C; Stoianov, Alexandra; Chan, Man Khun; Bevilacqua, Victoria; Chen, Yunqi; Adeli, Khosrow
2016-04-01
The CALIPER program is a national research initiative aimed at closing the gaps in pediatric reference intervals. CALIPER previously reported reference intervals for endocrine and special chemistry markers on Abbott immunoassays. We now report new pediatric reference intervals for immunoassays on the Beckman Coulter Immunoassay Systems and assess platform-specific differences in reference values. A total of 711 healthy children and adolescents from birth to <19 years of age were recruited from the community. Serum samples were collected for measurement of 29 biomarkers on the Beckman Coulter Immunoassay Systems. Statistically relevant age and/or gender-based partitions were determined, outliers removed, and reference intervals calculated in accordance with Clinical and Laboratory Standards Institute (CLSI) EP28-A3c guidelines. Complex profiles were observed for all 29 analytes, necessitating unique age and/or sex-specific partitions. Overall, changes in analyte concentrations observed over the course of development were similar to trends previously reported, and are consistent with biochemical and physiological changes that occur during childhood. Marked differences were observed for some assays including progesterone, luteinizing hormone and follicle-stimulating hormone where reference intervals were higher than those reported on Abbott immunoassays and parathyroid hormone where intervals were lower. This study highlights the importance of determining reference intervals specific for each analytical platform. The CALIPER Pediatric Reference Interval database will enable accurate diagnosis and laboratory assessment of children monitored by Beckman Coulter Immunoassay Systems in health care institutions worldwide. These reference intervals must however be validated by individual labs for the local pediatric population as recommended by CLSI.
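An outlier-exclusion step of the kind mentioned above can be sketched with Tukey's 1.5 x IQR fences; this particular rule is an assumption for illustration, as CLSI EP28-A3c describes several acceptable outlier methods:

```python
# Drop gross outliers before computing reference intervals, using
# Tukey fences at 1.5 * IQR. Data values are fabricated.
def remove_outliers(values):
    s = sorted(values)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]  # crude quartiles for the sketch
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]

data = [5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 12.9]  # one gross outlier
print(remove_outliers(data))
```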
MOCAT: A Metagenomics Assembly and Gene Prediction Toolkit
Li, Junhua; Chen, Weineng; Chen, Hua; Mende, Daniel R.; Arumugam, Manimozhiyan; Pan, Qi; Liu, Binghang; Qin, Junjie; Wang, Jun; Bork, Peer
2012-01-01
MOCAT is a highly configurable, modular pipeline for fast, standardized processing of single or paired-end sequencing data generated by the Illumina platform. The pipeline uses state-of-the-art programs to quality control, map, and assemble reads from metagenomic samples sequenced at a depth of several billion base pairs, and predict protein-coding genes on assembled metagenomes. Mapping against reference databases allows for read extraction or removal, as well as abundance calculations. Relevant statistics for each processing step can be summarized into multi-sheet Excel documents and queryable SQL databases. MOCAT runs on UNIX machines and integrates seamlessly with the SGE and PBS queuing systems, commonly used to process large datasets. The open source code and modular architecture allow users to modify or exchange the programs that are utilized in the various processing steps. Individual processing steps and parameters were benchmarked and tested on artificial, real, and simulated metagenomes resulting in an improvement of selected quality metrics. MOCAT can be freely downloaded at http://www.bork.embl.de/mocat/. PMID:23082188
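The abundance-calculation step mentioned above (mapping reads against reference databases) can be sketched generically. The length normalization shown is a common convention and an assumption here, not necessarily MOCAT's exact scheme:

```python
from collections import Counter

def relative_abundance(alignments, ref_lengths):
    """Length-normalized relative abundance from read alignments.

    alignments: iterable of (read_id, reference_id) pairs
    ref_lengths: dict mapping reference_id -> sequence length in bp
    """
    counts = Counter(ref for _, ref in alignments)
    # Divide read counts by reference length so long genomes are not over-counted
    coverage = {ref: counts[ref] / ref_lengths[ref] for ref in counts}
    total = sum(coverage.values())
    return {ref: cov / total for ref, cov in coverage.items()}
```

For example, two reads on a 2000 bp reference and one read on a 1000 bp reference yield equal length-normalized coverage, hence equal abundance.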
A Correction for IUE UV Flux Distributions from Comparisons with CALSPEC
NASA Astrophysics Data System (ADS)
Bohlin, Ralph C.; Bianchi, Luciana
2018-04-01
A collection of spectral energy distributions (SEDs) is available in the Hubble Space Telescope (HST) CALSPEC database that is based on calculated model atmospheres for pure hydrogen white dwarfs (WDs). A much larger set (∼100,000) of UV SEDs covering the range (1150–3350 Å) with somewhat lower quality is available in the IUE database. IUE low-dispersion flux distributions are compared with CALSPEC to provide a correction that places IUE fluxes on the CALSPEC scale. While IUE observations are repeatable to only 4%–10% in regions of good sensitivity, the average flux corrections have a precision of 2%–3%. Our re-calibration places the IUE flux scale on the current UV reference standard and is relevant for any project based on IUE archival data, including our planned comparison of GALEX to the corrected IUE fluxes. IUE SEDs may be used to plan observations and cross-calibrate data from future missions, so the IUE flux calibration must be consistent with HST instrumental calibrations to the best possible precision.
Life Cycle Assessment for desalination: a review on methodology feasibility and reliability.
Zhou, Jin; Chang, Victor W-C; Fane, Anthony G
2014-09-15
As concerns of natural resource depletion and environmental degradation caused by desalination increase, research studies of the environmental sustainability of desalination are growing in importance. Life Cycle Assessment (LCA) is an ISO-standardized method and is widely applied to evaluate the environmental performance of desalination. This study reviews more than 30 desalination LCA studies published since the 2000s and identifies two major issues in need of improvement. The first is feasibility, covering three elements that support the implementation of LCA for desalination: accounting methods, supporting databases, and life cycle impact assessment approaches. The second is reliability, addressing three essential aspects that drive uncertainty in results: the incompleteness of the system boundary, the unrepresentativeness of the database, and the omission of uncertainty analysis. This work can serve as a preliminary LCA reference for desalination specialists, but will also strengthen LCA as an effective method to evaluate the environmental footprint of desalination alternatives. Copyright © 2014 Elsevier Ltd. All rights reserved.
Data-Based Decision Making in Education: Challenges and Opportunities
ERIC Educational Resources Information Center
Schildkamp, Kim, Ed.; Lai, Mei Kuin, Ed.; Earl, Lorna, Ed.
2013-01-01
In a context where schools are held more and more accountable for the education they provide, data-based decision making has become increasingly important. This book brings together scholars from several countries to examine data-based decision making. Data-based decision making in this book refers to making decisions based on a broad range of…
Gorman, Sean K; Slavik, Richard S; Lam, Stefanie
2012-01-01
Background: Clinicians commonly rely on tertiary drug information references to guide drug dosages for patients who are receiving continuous renal replacement therapy (CRRT). It is unknown whether the dosage recommendations in these frequently used references reflect the most current evidence. Objective: To determine the presence and accuracy of drug dosage recommendations for patients undergoing CRRT in 4 drug information references. Methods: Medications commonly prescribed during CRRT were identified from an institutional medication inventory database, and evidence-based dosage recommendations for this setting were developed from the primary and secondary literature. The American Hospital Formulary Service-Drug Information (AHFS-DI), Micromedex 2.0 (specifically the DRUGDEX and Martindale databases), and the 5th edition of Drug Prescribing in Renal Failure (DPRF5) were assessed for the presence of drug dosage recommendations in the CRRT setting. The dosage recommendations in these tertiary references were compared with the recommendations derived from the primary and secondary literature to determine concordance. Results: Evidence-based drug dosage recommendations were developed for 33 medications administered in patients undergoing CRRT. The AHFS-DI provided no dosage recommendations specific to CRRT, whereas the DPRF5 provided recommendations for 27 (82%) of the medications and the Micromedex 2.0 application for 20 (61%) (13 [39%] in the DRUGDEX database and 16 [48%] in the Martindale database, with 9 medications covered by both). The dosage recommendations were in concordance with evidence-based recommendations for 12 (92%) of the 13 medications in the DRUGDEX database, 26 (96%) of the 27 in the DPRF5, and all 16 (100%) of those in the Martindale database. Conclusions: One prominent tertiary drug information resource provided no drug dosage recommendations for patients undergoing CRRT.
However, 2 of the databases in an Internet-based medical information application and the latest edition of a renal specialty drug information resource provided recommendations for a majority of the medications investigated. Most dosage recommendations were similar to those derived from the primary and secondary literature. The most recent edition of the DPRF is the preferred source of information when prescribing dosage regimens for patients receiving CRRT. PMID:22783029
PMAG: Relational Database Definition
NASA Astrophysics Data System (ADS)
Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.
2002-12-01
The Scripps Center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses on the general workflow that results in the determination of typical paleomagnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition it belongs to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data become available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download their needed data. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses.
They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the site location (latitude, longitude, elevation), geography (continent, country, region), geological setting (lithospheric plate or block, tectonic setting), geological age (age range, timescale name, stratigraphic position) and materials (rock type, classification, alteration state). Each data point and method description is also related to its peer-reviewed reference [citation ID] as archived in the EarthRef Reference Database (ERR). This guarantees direct traceability all the way to its original source, where the user can find the bibliography of each PMAG reference along with every abstract, data table, technical note and/or appendix that are available in digital form and that can be downloaded as PDF/JPEG images and Microsoft Excel/Word data files. This may help scientists and teachers in performing their research since they have easy access to all the scientific data. It also allows for checking potential errors during the digitization process. Please visit the PMAG website at http://earthref.org/PMAG/ for more information.
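The specimen → sample → site traceability chain described above maps naturally onto relational tables. A minimal sketch follows; the table and column names are illustrative assumptions, not the actual EarthRef.org Oracle schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site     (site_id INTEGER PRIMARY KEY, latitude REAL, longitude REAL);
CREATE TABLE sample   (sample_id INTEGER PRIMARY KEY,
                       site_id INTEGER REFERENCES site(site_id));
CREATE TABLE specimen (specimen_id INTEGER PRIMARY KEY,
                       sample_id INTEGER REFERENCES sample(sample_id),
                       raw_moment REAL);  -- raw measurements live at specimen level
""")
con.execute("INSERT INTO site VALUES (1, 32.87, -117.25)")
con.execute("INSERT INTO sample VALUES (10, 1)")
con.execute("INSERT INTO specimen VALUES (100, 10, 2.5e-6)")

# Site-level means are derived products, recomputable when new specimen data arrive
row = con.execute("""
    SELECT s.site_id, AVG(sp.raw_moment)
    FROM specimen sp JOIN sample sa ON sp.sample_id = sa.sample_id
                     JOIN site s    ON sa.site_id = s.site_id
    GROUP BY s.site_id
""").fetchone()
```

Because each row keeps a foreign key to its parent, every derived value can be traced back through sample and site to the original specimen measurement, which is the traceability property the record describes.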
SpliceDisease database: linking RNA splicing and disease.
Wang, Juan; Zhang, Jie; Li, Kaibo; Zhao, Wei; Cui, Qinghua
2012-01-01
RNA splicing is an important aspect of gene regulation in many organisms. Splicing of RNA is regulated by complicated mechanisms involving numerous RNA-binding proteins and the intricate network of interactions among them. Mutations in cis-acting splicing elements or its regulatory proteins have been shown to be involved in human diseases. Defects in pre-mRNA splicing process have emerged as a common disease-causing mechanism. Therefore, a database integrating RNA splicing and disease associations would be helpful for understanding not only the RNA splicing but also its contribution to disease. In SpliceDisease database, we manually curated 2337 splicing mutation disease entries involving 303 genes and 370 diseases, which have been supported experimentally in 898 publications. The SpliceDisease database provides information including the change of the nucleotide in the sequence, the location of the mutation on the gene, the reference Pubmed ID and detailed description for the relationship among gene mutations, splicing defects and diseases. We standardized the names of the diseases and genes and provided links for these genes to NCBI and UCSC genome browser for further annotation and genomic sequences. For the location of the mutation, we give direct links of the entry to the respective position/region in the genome browser. The users can freely browse, search and download the data in SpliceDisease at http://cmbi.bjmu.edu.cn/sdisease.
Pongor, Lőrinc S; Vera, Roberto; Ligeti, Balázs
2014-01-01
Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance between speed and sensitivity and as a result, species or strain level identification is often inaccurate and low abundance pathogens can sometimes be missed. We have developed Taxoner, an open source, taxon assignment pipeline that includes a fast aligner (e.g. Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than, BLAST, but requires two orders of magnitude less running time, meaning that it can be run on desktop or laptop computers. Taxoner is slower than approaches that use small marker databases but is more sensitive due to the comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner.
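Pipelines of this kind commonly resolve reads that align to several taxa with a lowest-common-ancestor rule. The abstract does not specify Taxoner's exact assignment logic, so the sketch below is a generic illustration of that idea:

```python
def lowest_common_ancestor(paths):
    """Assign a read to the lowest common ancestor of all taxa it maps to.

    paths: list of root-to-leaf lineage tuples, e.g.
           ("Bacteria", "Proteobacteria", "Escherichia", "E. coli")
    """
    lca = []
    # Walk the lineages rank by rank; stop at the first disagreement
    for ranks in zip(*paths):
        if all(r == ranks[0] for r in ranks):
            lca.append(ranks[0])
        else:
            break
    return tuple(lca)
```

A read hitting two species of different genera within Proteobacteria would thus be reported at the phylum level rather than as a (possibly wrong) species call.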
Final report on key comparison APMP.M.P-K13 in hydraulic gauge pressure from 50 MPa to 500 MPa
NASA Astrophysics Data System (ADS)
Kajikawa, Hiroaki; Kobata, Tokihiko; Yadav, Sanjay; Jian, Wu; Changpan, Tawat; Owen, Neville; Yanhua, Li; Hung, Chen-Chuan; Ginanjar, Gigin; Choi, In-Mook
2015-01-01
This report describes the results of a key comparison of hydraulic high-pressure standards at nine National Metrology Institutes (NMIs: NMIJ/AIST, NPLI, NMC/A*STAR, NIMT, NMIA, NIM, CMS/ITRI, KIM-LIPI, and KRISS) within the framework of the Asia-Pacific Metrology Programme (APMP) in order to determine their degrees of equivalence in the pressure range from 50 MPa to 500 MPa in gauge mode. The pilot institute was the National Metrology Institute of Japan (NMIJ/AIST). All participating institutes used hydraulic pressure balances as their pressure standards. A pressure balance with a free-deformational piston-cylinder assembly was used as the transfer standard. Three piston-cylinder assemblies, only one at a time, were used to complete the measurements in the period from November 2010 to January 2013. Ten participants completed their measurements and reported the pressure-dependent effective areas of the transfer standard at specified pressures with the associated uncertainties. Since one of the participants withdrew its results, the measurement results of the nine remaining participants were finally compared. The results were linked to the CCM.P-K13 reference values through the results of two linking laboratories, NMIJ/AIST and NPLI. The degrees of equivalence were evaluated by the relative deviations of the participants' results from the CCM.P-K13 key comparison reference values, and their associated combined expanded (k=2) uncertainties. The results of all nine participating NMIs agree with the CCM.P-K13 reference values within their expanded (k=2) uncertainties in the entire pressure range from 50 MPa to 500 MPa. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
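The degrees of equivalence described above reduce to a simple computation: the deviation of a participant's result from the key comparison reference value, compared against the expanded (k=2) uncertainty of that deviation. A minimal sketch, assuming uncorrelated uncertainties (which ignores the correlation actually introduced by the linking laboratories):

```python
import math

def degree_of_equivalence(x_lab, u_lab, x_ref, u_ref, k=2):
    """Deviation from the key comparison reference value, its expanded
    (k=2) uncertainty, and whether equivalence holds (deviation covered
    by the expanded uncertainty)."""
    d = x_lab - x_ref
    # Combined standard uncertainty of the difference (uncorrelated inputs assumed)
    u_d = math.sqrt(u_lab**2 + u_ref**2)
    U_d = k * u_d
    return d, U_d, abs(d) <= U_d
```

For an effective area reported as 100.02 with standard uncertainty 0.03 against a reference of 100.00 with uncertainty 0.04, the deviation 0.02 is well inside the expanded uncertainty 0.10, so the laboratory agrees with the reference value.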
NASA Astrophysics Data System (ADS)
Viallon, Joële; Idrees, Faraz; Moussay, Philippe; Wielgosz, Robert; Fentanes, Oscar; Benítez, Ángeles; Ordoñez, Daniel
2018-01-01
As part of the ongoing key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of Mexico maintained by the National Institute of Ecology and Climate Change (INECC) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone amount-of-substance fraction range from 0 nmol/mol to 500 nmol/mol. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
The Italian VLBI Network: First Results and Future Perspectives
NASA Astrophysics Data System (ADS)
Stagni, Matteo; Negusini, Monia; Bianco, Giuseppe; Sarti, Pierguido
2016-12-01
A first 24-hour Italian VLBI geodetic experiment, involving the Medicina, Noto, and Matera antennas and modeled on a standard IVS EUROPE session, was successfully performed. In 2014, starting from the correlator output, a geodetic database was created and a typical small-network solution, presented here, was achieved. From this promising result we have planned new observations in 2016, involving the three Italian geodetic antennas. This could be the beginning of a possible routine activity, creating a data set that can be combined with GNSS observations to contribute to the National Geodetic Reference Datum. Particular care should be taken in the scheduling of the new experiments in order to optimize the number of usable observations. These observations can be used to study and plan future experiments in which the time and frequency standards can be given by an optical fiber link, thus having a common clock at different VLBI stations.
NASA Astrophysics Data System (ADS)
Pascoe, C. L.
2017-12-01
The Coupled Model Intercomparison Project (CMIP) has coordinated climate model experiments involving multiple international modelling teams since 1995. This has led to a better understanding of past, present, and future climate. The 2017 sixth phase of the CMIP process (CMIP6) consists of a suite of common experiments and 21 separate CMIP-Endorsed Model Intercomparison Projects (MIPs), making a total of 244 separate experiments. Precise descriptions of the suite of CMIP6 experiments have been captured in a Common Information Model (CIM) database by the Earth System Documentation Project (ES-DOC). The database contains descriptions of forcings, model configuration requirements, ensemble information and citation links, as well as text descriptions and information about the rationale for each experiment. The database was built from statements about the experiments found in the academic literature, the MIP submissions to the World Climate Research Programme (WCRP), WCRP summary tables and correspondence with the principal investigators for each MIP. The database was collated using spreadsheets which are archived in the ES-DOC Github repository and then rendered on the ES-DOC website. A diagrammatic view of the workflow of building the database of experiment metadata for CMIP6 is shown in the attached figure. The CIM provides the formalism to collect detailed information from diverse sources in a standard way across all the CMIP6 MIPs. The ES-DOC documentation acts as a unified reference for CMIP6 information to be used both by data producers and consumers. This is especially important given the federated nature of the CMIP6 project. Because the CIM allows forcing constraints and other experiment attributes to be referred to by more than one experiment, we can streamline the process of collecting information from modelling groups about how they set up their models for each experiment.
End users of the climate model archive will be able to ask questions enabled by the interconnectedness of the metadata such as "Which MIPs make use of experiment A?" and "Which experiments use forcing constraint B?".
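Queries such as "which experiments use forcing constraint B?" become single lookups once the experiment metadata are inverted into an index keyed by the shared attribute. A toy sketch; the experiment names and field values are illustrative, not the real ES-DOC records:

```python
from collections import defaultdict

# Toy experiment metadata (illustrative entries only)
experiments = {
    "historical": {"mips": ["CMIP"],         "forcings": ["GHG", "aerosols"]},
    "ssp585":     {"mips": ["ScenarioMIP"],  "forcings": ["GHG"]},
    "piControl":  {"mips": ["CMIP", "DECK"], "forcings": []},
}

def invert(experiments, field):
    """Build an inverted index: attribute value -> set of experiments using it."""
    index = defaultdict(set)
    for name, meta in experiments.items():
        for value in meta[field]:
            index[value].add(name)
    return index

by_forcing = invert(experiments, "forcings")
```

The same inversion over the "mips" field answers "which MIPs make use of experiment A?", which is exactly the interconnectedness the record highlights.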
Bulla, O; Poncet, A; Alberio, L; Asmis, L M; Gähler, A; Graf, L; Nagler, M; Studt, J-D; Tsakiris, D A; Fontana, P
2017-07-01
Measuring factor VIII (FVIII) activity can be challenging when it has been modified, such as when FVIII is pegylated to increase its circulating half-life. Use of a product-specific reference standard may help avoid this issue. Evaluate the impact of using a product-specific reference standard for measuring the FVIII activity of BAX 855 - a pegylated FVIII - in eight of Switzerland's main laboratories. Factor VIII-deficient plasma, spiked with five different concentrations of BAX 855, plus a control FVIII sample, was sent to the participating laboratories. They measured FVIII activity using either a one-stage assay (OSA) or a chromogenic assay (CA) against their local or a product-specific reference standard. When using a local reference standard, there was an overestimation of BAX 855 activity compared to the target concentrations, both with the OSA and the CA. The use of a product-specific reference standard reduced this effect: mean recovery ranged from 127.7% to 213.5% using the OSA with local reference standards, compared to 110% to 183.8% with a product-specific reference standard, and from 146.3% to 182.4% using the CA with local reference standards compared to 72.7% to 103.7% with a product-specific reference standard. In this in vitro study, the type of reference standard had a major impact on the measurement of BAX 855 activity. Evaluation was more accurate and precise when using a product-specific reference standard. © 2017 John Wiley & Sons Ltd.
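The recovery figures quoted above follow from a simple calculation: measured activity as a percentage of the target (spiked) concentration, averaged over spike levels. A minimal sketch with made-up numbers (not data from the study):

```python
def mean_recovery(measured, target):
    """Mean percent recovery of measured FVIII activity against the
    target (nominal spiked) concentrations, across spike levels."""
    recoveries = [100 * m / t for m, t in zip(measured, target)]
    return sum(recoveries) / len(recoveries)
```

A laboratory that reads 60 and 120 IU/dL where 50 and 100 IU/dL were spiked reports a mean recovery of 120%, i.e. a 20% overestimation of the kind the local reference standards produced.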
Challenges in developing medicinal plant databases for sharing ethnopharmacological knowledge.
Ningthoujam, Sanjoy Singh; Talukdar, Anupam Das; Potsangbam, Kumar Singh; Choudhury, Manabendra Dutta
2012-05-07
Major research contributions in ethnopharmacology have generated vast amounts of data associated with medicinal plants. Computerized databases facilitate data management and analysis, making coherent information available to researchers, planners and other users. Web-based databases also facilitate knowledge transmission and feed the circle of information exchange between ethnopharmacological studies and the public audience. However, despite the development of many medicinal plant databases, a lack of uniformity is still discernible. This calls for defining a common standard to achieve the common objectives of ethnopharmacology. The aim of the study is to review the diversity of approaches in storing ethnopharmacological information in databases and to provide some minimal standards for these databases. A survey for articles on medicinal plant databases was conducted on the Internet using selective keywords. Grey literature and printed materials were also searched for information. Listed resources were critically analyzed for their approaches in content type, focus area and software technology. A necessity for rapid incorporation of traditional knowledge by compiling primary data has been felt. While citation collection is a common approach for information compilation, it cannot fully assimilate the local literature that reflects traditional knowledge. A need for defining standards for systematic evaluation and for checking the quality and authenticity of the data is felt. Databases focusing on thematic areas, viz. traditional medicine systems, regional aspects, diseases and phytochemical information, are analyzed. Issues pertaining to data standards, data linking and unique identification need to be addressed in addition to general issues like lack of updates and sustainability. Against the background of the present study, suggestions have been made on some minimum standards for the development of medicinal plant databases.
In spite of variations in approaches, the existence of many overlapping features indicates redundancy of resources and efforts. As the development of global data in a single database may not be possible in view of culture-specific differences, efforts can be directed to specific regional areas. The existing scenario calls for a collaborative approach to defining a common standard in medicinal plant databases for knowledge sharing and scientific advancement. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Herlin, Christian; Doucet, Jean Charles; Bigorre, Michèle; Khelifa, Hatem Cheikh; Captier, Guillaume
2013-10-01
Treacher Collins syndrome (TCS) is a severe and complex craniofacial malformation affecting the facial skeleton and soft tissues. The palate as well as the external and middle ear are also affected, but the prognosis is mainly related to neonatal airway management. Methods of zygomatico-orbital reconstruction are numerous and currently use primarily autologous bone, lyophilized cartilage, alloplastic implants, or even free flaps. This work developed a reliable "customized" method of zygomatico-orbital bony reconstruction using a generic reference model tailored to each patient. From a standard computed tomography (CT) acquisition, we studied qualitatively and quantitatively the skeleton of four individuals with TCS whose age was between 6 and 20 years. In parallel, we studied 40 controls of the same ages to obtain a morphometric reference database. Surgical simulation was carried out using validated software used in craniofacial surgery. The zygomatic hypoplasia was quantitatively and morphologically very pronounced in all TCS individuals. Orbital involvement was mainly morphological, with volumes comparable to those of controls of the same age. The control database was used to create three-dimensional computer models to be used in the manufacture of cutting guides for autologous cranial bone grafts or alloplastic implants perfectly adapted to each patient's morphology. Presurgical simulation was also used to fabricate custom positioning guides permitting a simple and reliable surgical procedure. The use of a virtual database allowed us to design a reliable and reproducible skeletal reconstruction method for this rare and complex syndrome. The use of presurgical simulation tools seems essential in this type of craniofacial malformation to increase the reliability of these uncommon and complex surgical procedures, and to ensure stable results over time. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Rudnick, Paul A.; Markey, Sanford P.; Roth, Jeri; Mirokhin, Yuri; Yan, Xinjian; Tchekhovskoi, Dmitrii V.; Edwards, Nathan J.; Thangudu, Ratna R.; Ketchum, Karen A.; Kinsinger, Christopher R.; Mesri, Mehdi; Rodriguez, Henry; Stein, Stephen E.
2016-01-01
The Clinical Proteomic Tumor Analysis Consortium (CPTAC) has produced large proteomics datasets from the mass spectrometric interrogation of tumor samples previously analyzed by The Cancer Genome Atlas (TCGA) program. The availability of the genomic and proteomic data is enabling proteogenomic study for both reference (i.e., contained in major sequence databases) and non-reference markers of cancer. The CPTAC labs have focused on colon, breast, and ovarian tissues in the first round of analyses; spectra from these datasets were produced from 2D LC-MS/MS analyses and represent deep coverage. To reduce the variability introduced by disparate data analysis platforms (e.g., software packages, versions, parameters, sequence databases, etc.), the CPTAC Common Data Analysis Platform (CDAP) was created. The CDAP produces both peptide-spectrum-match (PSM) reports and gene-level reports. The pipeline processes raw mass spectrometry data according to the following: (1) Peak-picking and quantitative data extraction, (2) database searching, (3) gene-based protein parsimony, and (4) false discovery rate (FDR)-based filtering. The pipeline also produces localization scores for the phosphopeptide enrichment studies using the PhosphoRS program. Quantitative information for each of the datasets is specific to the sample processing, with PSM and protein reports containing the spectrum-level or gene-level (“rolled-up”) precursor peak areas and spectral counts for label-free or reporter ion log-ratios for 4plex iTRAQ™. The reports are available in simple tab-delimited formats and, for the PSM-reports, in mzIdentML. The goal of the CDAP is to provide standard, uniform reports for all of the CPTAC data, enabling comparisons between different samples and cancer types as well as across the major ‘omics fields. PMID:26860878
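Step (4) of the pipeline, FDR-based filtering, is typically implemented with the target-decoy approach: PSMs are ranked by score and retained while the decoy-to-target ratio stays at or below the threshold. The CDAP's exact procedure is not specified in this record, so the sketch below illustrates the generic idea only:

```python
def filter_at_fdr(psms, threshold=0.01):
    """Target-decoy FDR filtering: keep the highest-scoring PSMs while the
    estimated FDR (#decoys / #targets) stays at or below the threshold.

    psms: list of (score, is_decoy) tuples
    """
    kept, targets, decoys = [], 0, 0
    for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets > threshold:
            break  # everything below this score would exceed the FDR threshold
        kept.append((score, is_decoy))
    return kept
```

Tightening the threshold prunes the lower-scoring tail of the ranked list, trading sensitivity for confidence in the retained identifications.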
Adams, Denise; Wu, Taixiang; Yang, Xunzhe; Tai, Shusheng; Vohra, Sunita
2009-10-07
Chronic fatigue is increasingly common. Conventional medical care is limited in treating chronic fatigue, leading some patients to use traditional Chinese medicine therapies, including herbal medicine. To assess the effectiveness of traditional Chinese medicine herbal products in treating idiopathic chronic fatigue and chronic fatigue syndrome. The following databases were searched for terms related to traditional Chinese medicine, chronic fatigue, and clinical trials: CCDAN Controlled Trials Register (July 2009), MEDLINE (1966-2008), EMBASE (1980-2008), AMED (1985-2008), CINAHL (1982-2008), PSYCHINFO (1985-2008), CENTRAL (Issue 2 2008), the Chalmers Research Group PedCAM Database (2004), VIP Information (1989-2008), CNKI (1976-2008), OCLC Proceedings First (1992-2008), Conference Papers Index (1982-2008), and Dissertation Abstracts (1980-2008). Reference lists of included studies and review articles were examined and experts in the field were contacted for knowledge of additional studies. Selection criteria included published or unpublished randomized controlled trials (RCTs) of participants diagnosed with idiopathic chronic fatigue or chronic fatigue syndrome comparing traditional Chinese medicinal herbs with placebo, conventional standard of care (SOC), or no treatment/wait lists. The outcome of interest was fatigue. 13 databases were searched for RCTs investigating TCM herbal products for the treatment of chronic fatigue. Over 2400 references were located. Studies were screened and assessed for inclusion criteria by two authors. No studies that met all inclusion criteria were identified. Although studies examining the use of TCM herbal products for chronic fatigue were located, methodologic limitations resulted in the exclusion of all studies. Of note, many of the studies labelled as RCTs and conducted in China did not utilize rigorous randomization procedures. Improvements in methodology in future studies are required for meaningful synthesis of data.
Computational assessment of model-based wave separation using a database of virtual subjects.
Hametner, Bernhard; Schneider, Magdalena; Parragh, Stephanie; Wassertheurer, Siegfried
2017-11-07
The quantification of arterial wave reflection is an important area of interest in arterial pulse wave analysis. It can be achieved by wave separation analysis (WSA) if both the aortic pressure waveform and the aortic flow waveform are known. For better applicability, several mathematical models have been established to estimate aortic flow solely based on pressure waveforms. The aim of this study is to investigate and verify the model-based wave separation of the ARCSolver method on virtual pulse wave measurements. The study is based on an open access virtual database generated via simulations. Seven cardiac and arterial parameters were varied within physiological healthy ranges, leading to a total of 3325 virtual healthy subjects. For assessing the model-based ARCSolver method computationally, this method was used to perform WSA based on the aortic root pressure waveforms of the virtual patients. As a reference, the values of WSA using both the pressure and flow waveforms provided by the virtual database were taken. The investigated parameters showed a good overall agreement between the model-based method and the reference. Mean differences and standard deviations were -0.05 ± 0.02 AU for characteristic impedance, -3.93 ± 1.79 mmHg for forward pressure amplitude, 1.37 ± 1.56 mmHg for backward pressure amplitude and 12.42 ± 4.88% for reflection magnitude. The results indicate that the mathematical blood flow model of the ARCSolver method is a feasible surrogate for a measured flow waveform and provides a reasonable way to assess arterial wave reflection non-invasively in healthy subjects. Copyright © 2017 Elsevier Ltd. All rights reserved.
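The separation step itself is a simple linear decomposition once pressure, flow, and characteristic impedance are available (the ARCSolver flow model is proprietary and not reproduced here). A minimal sketch with synthetic, non-physiological waveforms and an assumed impedance value:

```python
import numpy as np

def wave_separation(p, q, zc):
    """Classical linear wave separation: split measured pressure into
    forward- and backward-travelling components, given the flow waveform
    q and the characteristic impedance zc."""
    p_forward = (p + zc * q) / 2.0
    p_backward = (p - zc * q) / 2.0
    return p_forward, p_backward

# Synthetic, illustrative waveforms -- not physiological measurements.
t = np.linspace(0.0, 1.0, 200)
q = np.maximum(np.sin(2.0 * np.pi * t), 0.0)            # ejection-like flow
p = 80.0 + 30.0 * q + 8.0 * np.sin(4.0 * np.pi * t)     # pressure with a reflected part
zc = 0.2                                                 # assumed characteristic impedance

p_f, p_b = wave_separation(p, q, zc)
# Reflection magnitude: ratio of backward to forward pulse amplitudes.
rm = np.ptp(p_b) / np.ptp(p_f)
```

By construction the two components sum back to the measured pressure, which is a useful sanity check when implementing the decomposition.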
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
Data tables for the 1993 National Transit Database section 15 report year
DOT National Transportation Integrated Search
1994-12-01
The Data Tables For the 1993 National Transit Database Section 15 Report Year is one of three publications comprising the 1993 Annual Report. Also referred to as the National Transit Database Reporting System, it is administered by the Federal Transi...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS FISH AND... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
US EPA Nonattainment Areas and Designations
This web service contains the following state level layers:Ozone 8-hr (1997 standard), Ozone 8-hr (2008 standard), Lead (2008 standard), SO2 1-hr (2010 standard), PM2.5 24hr (2006 standard), PM2.5 Annual (1997 standard), PM2.5 Annual (2012 standard), and PM10 (1987 standard). Full FGDC metadata records for each layer may be found by clicking the layer name at the web service endpoint (https://gispub.epa.gov/arcgis/rest/services/OAR_OAQPS/NonattainmentAreas/MapServer) and viewing the layer description. These layers identify areas in the U.S. where air pollution levels have not met the National Ambient Air Quality Standards (NAAQS) for criteria air pollutants and have been designated "nonattainment" areas (NAA). The data are updated weekly from an OAQPS internal database. However, that does not necessarily mean the data have changed. The EPA Office of Air Quality Planning and Standards (OAQPS) has set National Ambient Air Quality Standards for six principal pollutants, which are called criteria pollutants. Under provisions of the Clean Air Act, which is intended to improve the quality of the air we breathe, EPA is required to set National Ambient Air Quality Standards for six common air pollutants. These commonly found air pollutants (also known as criteria pollutants) are found all over the United States. They are particle pollution (often referred to as particulate matter), ground-level ozone, carbon monoxide, sulfur oxides, nitrogen oxides, and lead. For each
NASA Astrophysics Data System (ADS)
Brissebrat, Guillaume; Fleury, Laurence; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Asencio, Nicole; Favot, Florence; Roussot, Odile
2013-04-01
The AMMA information system aims at expediting data and scientific results communication inside the AMMA community and beyond. It has already been adopted as the data management system by several projects and is meant to become a reference information system about West Africa area for the whole scientific community. The AMMA database and the associated on line tools have been developed and are managed by two French teams (IPSL Database Centre, Palaiseau and OMP Data Service, Toulouse). The complete system has been fully duplicated and is operated by AGRHYMET Regional Centre in Niamey, Niger. The AMMA database contains a wide variety of datasets: - about 250 local observation datasets, that cover geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health...) They come from either operational networks or scientific experiments, and include historical data in West Africa from 1850; - 1350 outputs of a socio-economics questionnaire; - 60 operational satellite products and several research products; - 10 output sets of meteorological and ocean operational models and 15 of research simulations. Database users can access all the data using either the portal http://database.amma-international.org or http://amma.agrhymet.ne/amma-data. Different modules are available. The complete catalogue provides access to metadata (i.e. information about the datasets) that are compliant with the international standards (ISO19115, INSPIRE...). Registration pages enable users to read and sign the data and publication policy, and to apply for a user database account. The data access interface enables users to easily build a data extraction request by selecting various criteria like location, time, parameters...
At present, the AMMA database counts more than 740 registered users and processes about 80 data requests every month. In order to monitor day-to-day meteorological and environment information over West Africa, some quick look and report display websites have been developed. They met the operational needs for the observational teams during the AMMA 2006 (http://aoc.amma-international.org) and FENNEC 2011 (http://fenoc.sedoo.fr) campaigns. But they also enable scientific teams to share physical indices along the monsoon season (http://misva.sedoo.fr from 2011). A collaborative WIKINDX tool has been set on line in order to manage scientific publications and communications of interest to AMMA (http://biblio.amma-international.org). Now the bibliographic database counts about 1200 references. It is the most exhaustive document collection about African Monsoon available for all. Every scientist is invited to make use of the different AMMA on line tools and data. Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
Data pre-processing in record linkage to find the same companies from different databases
NASA Astrophysics Data System (ADS)
Gunawan, D.; Lubis, M. S.; Arisandi, D.; Azzahry, B.
2018-03-01
As public agencies, the Badan Pelayanan Perizinan Terpadu (BPPT) and the Badan Lingkungan Hidup (BLH) of Medan city manage process to obtain a business license from the public. However, each agency might have a different corporate data because of a separate data input process, even though the data may refer to the same company’s data. Therefore, it is required to identify and correlate data that refer to the same company which lie in different data sources. This research focuses on data pre-processing such as data cleaning, text pre-processing, indexing and record comparison. In addition, this research implements data matching using support vector machine algorithm. The result of this algorithm will be used to record linkage of data that can be used to identify and connect the company’s data based on the degree of similarity of each data. Previous data will be standardized in accordance with the format and structure appropriate to the stage of preprocessing data. After analyzing data pre-processing, we found that both database structures are not designed to support data integration. We decide that the data matching can be done with blocking criteria such as company name and the name of the owner (or applicant). In addition to data pre-processing, the data matching identified 90 pairs of records with a high level of similarity.
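The cleaning, blocking, and comparison stages described above can be sketched with standard-library string similarity. The field names, cleaning rules, and 0.9 threshold below are illustrative assumptions, not the study's actual configuration:

```python
from difflib import SequenceMatcher
from collections import defaultdict

def clean(name):
    """Basic standardisation: lower-case, strip punctuation and common
    Indonesian legal-form tokens (the token list is illustrative)."""
    name = name.lower()
    for tok in (".", ",", "pt ", "cv "):
        name = name.replace(tok, " ")
    return " ".join(name.split())

def block_key(record):
    # Block on the first token of the cleaned company name.
    return clean(record["company"]).split()[0]

def candidate_pairs(db_a, db_b):
    """Compare records only within matching blocks, avoiding the full
    cross product of the two databases."""
    index = defaultdict(list)
    for r in db_b:
        index[block_key(r)].append(r)
    for a in db_a:
        for b in index.get(block_key(a), []):
            yield a, b

def similarity(a, b):
    return SequenceMatcher(None, clean(a["company"]), clean(b["company"])).ratio()

# Hypothetical records from the two agencies.
bppt = [{"company": "PT. Maju Jaya"}, {"company": "PT. Sinar Abadi"}]
blh = [{"company": "pt maju jaya"}, {"company": "PT Karya Baru"}]
matches = [(a, b) for a, b in candidate_pairs(bppt, blh) if similarity(a, b) > 0.9]
```

In the study itself the comparison vectors feed a support vector machine classifier rather than a fixed threshold; the sketch only shows the pre-processing and blocking idea.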
Influence of gender in the recognition of basic facial expressions: A critical literature review
Forni-Santos, Larissa; Osório, Flávia L
2015-01-01
AIM: To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. METHODS: We made a systematic search with the search terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender or sex) in PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy and latency and emotional intensity. The article selection was performed according to parameters set by COCHRANE. The reference lists of the articles found through the database search were checked for additional references of interest. RESULTS: In respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seems to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous in respect to the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. CONCLUSION: The analysis of the studies conducted to date does not allow for definite conclusions concerning the role of the observer’s gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation. PMID:26425447
Collaborative Data Publication Utilizing the Open Data Repository's (ODR) Data Publisher
NASA Technical Reports Server (NTRS)
Stone, N.; Lafuente, B.; Bristow, T.; Keller, R. M.; Downs, R. T.; Blake, D.; Fonda, M.; Dateo, C.; Pires, A.
2017-01-01
Introduction: For small communities in diverse fields such as astrobiology, publishing and sharing data can be a difficult challenge. While large, homogenous fields often have repositories and existing data standards, small groups of independent researchers have few options for publishing standards and data that can be utilized within their community. In conjunction with teams at NASA Ames and the University of Arizona, the Open Data Repository's (ODR) Data Publisher has been conducting ongoing pilots to assess the needs of diverse research groups and to develop software to allow them to publish and share their data collaboratively. Objectives: The ODR's Data Publisher aims to provide an easy-to-use, easy-to-implement software tool that will allow researchers to create and publish database templates and related data. The end product will facilitate both human-readable interfaces (web-based with embedded images, files, and charts) and machine-readable interfaces utilizing semantic standards. Characteristics: The Data Publisher software runs on the standard LAMP (Linux, Apache, MySQL, PHP) stack to provide the widest server base available. The software is based on Symfony (www.symfony.com), which provides a robust framework for creating extensible, object-oriented software in PHP. The software interface consists of a template designer where individual or master database templates can be created. A master database template can be shared by many researchers to provide a common metadata standard that will set a compatibility standard for all derivative databases. Individual researchers can then extend their instance of the template with custom fields, file storage, or visualizations that may be unique to their studies. This allows groups to create compatible databases for data discovery and sharing purposes while still providing the flexibility needed to meet the needs of scientists in rapidly evolving areas of research.
Research: As part of this effort, a number of ongoing pilot and test projects are currently in progress. The Astrobiology Habitable Environments Database Working Group is developing a shared database standard using the ODR's Data Publisher and has a number of example databases where astrobiology data are shared. Soon these databases will be integrated via the template-based standard. Work with this group helps determine what data researchers in these diverse fields need to share and archive. Additionally, this pilot helps determine what standards are viable for sharing these types of data from internally developed standards to existing open standards such as the Dublin Core (http://dublincore.org) and Darwin Core (http://rs.twdg.org) metadata standards. Further studies are ongoing with the University of Arizona Department of Geosciences where a number of mineralogy databases are being constructed within the ODR Data Publisher system. Conclusions: Through the ongoing pilots and discussions with individual researchers and small research teams, a definition of the tools desired by these groups is coming into focus. As the software development moves forward, the goal is to meet the publication and collaboration needs of these scientists in an unobtrusive and functional way.
James Webb Space Telescope XML Database: From the Beginning to Today
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Fatig, Curtis C.
2005-01-01
The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 various flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using an XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard for the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has exchanged the XML database with the Eclipse, EPOCH, ASIST ground systems, Portable spacecraft simulator (PSS), a front-end system, and Integrated Trending and Plotting System (ITPS) successfully.
This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.
Mikaelyan, Aram; Köhler, Tim; Lampert, Niclas; Rohland, Jeffrey; Boga, Hamadi; Meuser, Katja; Brune, Andreas
2015-10-01
Recent developments in sequencing technology have given rise to a large number of studies that assess bacterial diversity and community structure in termite and cockroach guts based on large amplicon libraries of 16S rRNA genes. Although these studies have revealed important ecological and evolutionary patterns in the gut microbiota, classification of the short sequence reads is limited by the taxonomic depth and resolution of the reference databases used in the respective studies. Here, we present a curated reference database for accurate taxonomic analysis of the bacterial gut microbiota of dictyopteran insects. The Dictyopteran gut microbiota reference Database (DictDb) is based on the Silva database but was significantly expanded by the addition of clones from 11 mostly unexplored termite and cockroach groups, which increased the inventory of bacterial sequences from dictyopteran guts by 26%. The taxonomic depth and resolution of DictDb was significantly improved by a general revision of the taxonomic guide tree for all important lineages, including a detailed phylogenetic analysis of the Treponema and Alistipes complexes, the Fibrobacteres, and the TG3 phylum. The performance of this first documented version of DictDb (v. 3.0) using the revised taxonomic guide tree in the classification of short-read libraries obtained from termites and cockroaches was highly superior to that of the current Silva and RDP databases. DictDb uses an informative nomenclature that is consistent with the literature also for clades of uncultured bacteria and provides an invaluable tool for anyone exploring the gut community structure of termites and cockroaches. Copyright © 2015 Elsevier GmbH. All rights reserved.
CHERNOLITTM. Chernobyl Bibliographic Search System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, F., Jr.; Kennedy, R.A.; Mahaffey, J.A.
1992-03-02
The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.
Chernobyl Bibliographic Search System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, Jr, F.; Kennedy, R. A.; Mahaffey, J. A.
1992-05-11
The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.
Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service
NASA Astrophysics Data System (ADS)
Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.
2016-06-01
Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there is little, if any, information available about the quality of these data sources. In order to easily provide this data quality information we presented the architecture of a web service for the automation of quality control of spatial datasets running over a Web Processing Service (WPS). For quality procedures that require an external reference dataset, like positional accuracy or completeness, the architecture permits using a reference dataset. However, this reference dataset is not ageless, since it suffers the natural time degradation inherent to geospatial features. In order to mitigate this problem we propose the Time Degradation & Updating Module which intends to apply assessed data as a tool to maintain the reference database updated. The main idea is to utilize datasets sent to the quality evaluation service as a source of 'candidate data elements' for the updating of the reference database. After the evaluation, if some elements of a candidate dataset reach a determined quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe that the outcomes can be applied in the search of a full-automatic on-line quality evaluation platform.
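The acceptance logic the module implies — a candidate element replaces a reference element only if it passes the quality threshold and is newer than what the reference holds — can be sketched as follows. The identifiers, threshold value, and quality measure are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    fid: str          # element identifier (hypothetical)
    geometry: tuple   # simplified placeholder for a real geometry
    quality: float    # evaluated quality measure in [0, 1]
    observed: str     # ISO acquisition date of the element

@dataclass
class ReferenceDB:
    """Minimal sketch of a Time Degradation & Updating Module:
    evaluated candidate elements that reach the required quality level
    replace older elements in the reference database."""
    threshold: float = 0.95
    features: dict = field(default_factory=dict)

    def update_from(self, candidates):
        accepted = 0
        for f in candidates:
            current = self.features.get(f.fid)
            # Accept only if quality passes AND the candidate is newer
            # than the element the reference currently holds (if any).
            if f.quality >= self.threshold and (
                current is None or f.observed > current.observed
            ):
                self.features[f.fid] = f
                accepted += 1
        return accepted

ref = ReferenceDB()
ref.features["road-17"] = Feature("road-17", (0, 0), 0.97, "2010-05-01")
candidates = [
    Feature("road-17", (0, 1), 0.98, "2015-03-20"),  # newer, high quality
    Feature("road-42", (5, 5), 0.80, "2015-03-20"),  # below threshold
]
n_accepted = ref.update_from(candidates)
```

A production version would of course evaluate quality per procedure (positional accuracy, completeness) inside the WPS rather than carry a precomputed score.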
NASA Technical Reports Server (NTRS)
McMillin, Naomi; Allen, Jerry; Erickson, Gary; Campbell, Jim; Mann, Mike; Kubiatko, Paul; Yingling, David; Mason, Charlie
1999-01-01
The objective was to experimentally evaluate the longitudinal and lateral-directional stability and control characteristics of the Reference H configuration at supersonic and transonic speeds. A series of conventional and alternate control devices were also evaluated at supersonic and transonic speeds. A database on the conventional and alternate control devices was to be created for use in the HSR program.
Critical Need for Plutonium and Uranium Isotopic Standards with Lower Uncertainties
Mathew, Kattathu Joseph; Stanley, Floyd E.; Thomas, Mariam R.; ...
2016-09-23
Certified reference materials (CRMs) traceable to national and international safeguards database are a critical prerequisite for ensuring that nuclear measurement systems are free of systematic biases. CRMs are used to validate measurement processes associated with nuclear analytical laboratories. Diverse areas related to nuclear safeguards are impacted by the quality of the CRM standards available to analytical laboratories. These include: nuclear forensics, radio-chronometry, national and international safeguards, stockpile stewardship, nuclear weapons infrastructure and nonproliferation, fuel fabrication, waste processing, radiation protection, and environmental monitoring. For the past three decades the nuclear community has been confronted with the strange situation that improvements in measurement data quality resulting from the improved accuracy and precision achievable with modern multi-collector mass spectrometers could not be fully exploited due to large uncertainties associated with CRMs available from New Brunswick Laboratory (NBL) that are used for instrument calibration and measurement control. Similar conditions prevail for both plutonium and uranium isotopic standards and for impurity element standards in uranium matrices. Herein, the current status of U and Pu isotopic standards available from NBL is reviewed. Critical areas requiring improvement in the quality of the nuclear standards to enable the U. S. and international safeguards community to utilize the full potential of modern multi-collector mass spectrometer instruments are highlighted.
Critical Need for Plutonium and Uranium Isotopic Standards with Lower Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathew, Kattathu Joseph; Stanley, Floyd E.; Thomas, Mariam R.
Certified reference materials (CRMs) traceable to national and international safeguards database are a critical prerequisite for ensuring that nuclear measurement systems are free of systematic biases. CRMs are used to validate measurement processes associated with nuclear analytical laboratories. Diverse areas related to nuclear safeguards are impacted by the quality of the CRM standards available to analytical laboratories. These include: nuclear forensics, radio-chronometry, national and international safeguards, stockpile stewardship, nuclear weapons infrastructure and nonproliferation, fuel fabrication, waste processing, radiation protection, and environmental monitoring. For the past three decades the nuclear community has been confronted with the strange situation that improvements in measurement data quality resulting from the improved accuracy and precision achievable with modern multi-collector mass spectrometers could not be fully exploited due to large uncertainties associated with CRMs available from New Brunswick Laboratory (NBL) that are used for instrument calibration and measurement control. Similar conditions prevail for both plutonium and uranium isotopic standards and for impurity element standards in uranium matrices. Herein, the current status of U and Pu isotopic standards available from NBL is reviewed. Critical areas requiring improvement in the quality of the nuclear standards to enable the U. S. and international safeguards community to utilize the full potential of modern multi-collector mass spectrometer instruments are highlighted.
Manheim, F.T.; Buchholtz ten Brink, Marilyn R.; Mecray, E.L.
1998-01-01
A comprehensive database of sediment chemistry and environmental parameters has been compiled for Boston Harbor and Massachusetts Bay. This work illustrates methodologies for rescuing and validating sediment data from heterogeneous historical sources. It greatly expands spatial and temporal data coverage of estuarine and coastal sediments. The database contains about 3500 samples containing inorganic chemical, organic, texture and other environmental data dating from 1955 to 1994. Cooperation with local and federal agencies as well as universities was essential in locating and screening documents for the database. More than 80% of references utilized came from sources with limited distribution (gray literature). Task sharing was facilitated by a comprehensive and clearly defined data dictionary for sediments. It also served as a data entry template and flat file format for data processing and as a basis for interpretation and graphical illustration. Standard QA/QC protocols are usually inapplicable to historical sediment data. In this work outliers and data quality problems were identified by batch screening techniques that also provide visualizations of data relationships and geochemical affinities. No data were excluded, but qualifying comments warn users of problem data. For Boston Harbor, the proportion of irreparable or seriously questioned data was remarkably small (<5%), although concentration values for metals and organic contaminants spanned 3 orders of magnitude for many elements or compounds. Data from the historical database provide alternatives to dated cores for measuring changes in surficial sediment contamination level with time. The data indicate that spatial inhomogeneity in harbor environments can be large with respect to sediment-hosted contaminants. 
Boston Inner Harbor surficial sediments showed decreases in concentrations of Cu, Hg, and Zn of 40 to 60% over a 17-year period.
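The batch screening described above can be illustrated with a simple robust outlier flag. This is a minimal sketch, assuming per-analyte screening in log space with a median/MAD (modified z-score) rule; the function name, threshold, and concentration values are illustrative, not taken from the Boston Harbor database.

```python
import math
import statistics

def flag_outliers(values, threshold=3.5):
    """Flag values whose modified z-score (median/MAD, computed in
    log10 space to cope with concentrations spanning orders of
    magnitude) exceeds the threshold."""
    logs = [math.log10(v) for v in values]
    med = statistics.median(logs)
    mad = statistics.median(abs(x - med) for x in logs)
    if mad == 0:
        return [False] * len(values)
    return [abs(0.6745 * (x - med) / mad) > threshold for x in logs]

# Illustrative Cu concentrations (ppm); the last value is an
# implausible spike that gets flagged for review, not deleted --
# matching the "qualify, don't exclude" policy described above.
cu_ppm = [12, 35, 48, 90, 150, 410, 22000]
flags = flag_outliers(cu_ppm)
```

Flagged samples would receive a qualifying comment rather than being removed, consistent with the database policy of excluding no data.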
Database of Standardized Questionnaires About Walking & Bicycling
This database contains questionnaire items and a list of validation studies for standardized items related to walking and biking. The items come from multiple national and international physical activity questionnaires.
A systematic review and metaanalysis of energy intake and weight gain in pregnancy.
Jebeile, Hiba; Mijatovic, Jovana; Louie, Jimmy Chun Yu; Prvan, Tania; Brand-Miller, Jennie C
2016-04-01
Gestational weight gain within the recommended range produces optimal pregnancy outcomes, yet many women exceed the guidelines. Official recommendations to increase energy intake by ∼1000 kJ/day in pregnancy may be excessive. To determine by metaanalysis of relevant studies whether greater increments in energy intake from early to late pregnancy corresponded to greater or excessive gestational weight gain. We systematically searched electronic databases for observational and intervention studies published from 1990 to the present. The databases included Ovid Medline, Cochrane Library, Excerpta Medica DataBASE (EMBASE), Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Science Direct. In addition, we hand-searched the reference lists of all identified articles. Studies were included if they reported gestational weight gain and energy intake in early and late gestation in women of any age with a singleton pregnancy. The search also encompassed journals from both developed and developing countries. Studies were individually assessed for quality based on the Quality Criteria Checklist obtained from the Evidence Analysis Manual: Steps in the academy evidence analysis process. Publication bias was assessed with a funnel plot of standardized mean difference against standard error. Identified studies were meta-analyzed and stratified by body mass index, study design, dietary methodology, and country status (developed/developing) using a random-effects model. Of 2487 articles screened, 18 studies met inclusion criteria. On average, women gained 12.0 (2.8) kg (standardized mean difference = 1.306, P < .0005) yet reported only a small increment in energy intake that did not reach statistical significance (∼475 kJ/day, standardized mean difference = 0.266, P = .016).
Irrespective of baseline body mass index, study design, dietary methodology, or country status, changes in energy intake were not significantly correlated to the amount of gestational weight gain (r = 0.321, P = .11). Despite rapid physiologic weight gain, women report little or no change in energy intake during pregnancy. Current recommendations to increase energy intake by ∼ 1000 kJ/day may, therefore, encourage excessive weight gain and adverse pregnancy outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.
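A random-effects pooling of standardized mean differences, the model class named in the abstract, can be sketched with the DerSimonian-Laird estimator. The per-study effects and variances below are illustrative, not the data from this review.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool standardized mean differences under a DerSimonian-Laird
    random-effects model: estimate between-study variance tau^2 from
    Cochran's Q, then re-weight each study by 1/(v_i + tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se

smd = [0.21, 0.35, 0.18, 0.40]    # per-study standardized mean differences
var = [0.02, 0.03, 0.015, 0.05]   # per-study variances (illustrative)
pooled, se = dersimonian_laird(smd, var)
```

When Q does not exceed its degrees of freedom, tau^2 collapses to zero and the estimate reduces to the fixed-effect result, which is why heterogeneous study sets widen the pooled confidence interval.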
Intelligent communication assistant for databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobson, G.; Shaked, V.; Rowley, S.
1983-01-01
An intelligent communication assistant for databases, called FRED (front end for databases) is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. FRED is a second generation of natural language front-ends for databases and intends to solve two critical interface problems existing between end-users and databases: connectivity and communication problems. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as the direction of future work. 10 references.
NASA Astrophysics Data System (ADS)
Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.
2009-12-01
Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in ColdFusion middleware, to a Drupal 6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features of the existing system to Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of Drupal 6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal 6-powered front end; 2) a standard SQL port (used to provide a MapServer interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive the semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.
WOVOdat, A Worldwide Volcano Unrest Database, to Improve Eruption Forecasts
NASA Astrophysics Data System (ADS)
Widiwijayanti, C.; Costa, F.; Win, N. T. Z.; Tan, K.; Newhall, C. G.; Ratdomopurbo, A.
2015-12-01
WOVOdat is the World Organization of Volcano Observatories' Database of Volcanic Unrest: an international effort to develop common standards for compiling and storing data on volcanic unrest in a centralized database that is freely web-accessible for reference during volcanic crises, comparative studies, and basic research on pre-eruption processes. WOVOdat will be to volcanology as an epidemiological database is to medicine. Despite the large spectrum of monitoring techniques, interpreting monitoring data throughout the evolution of unrest and making timely forecasts remain the most challenging tasks for volcanologists. The field of eruption forecasting is becoming more quantitative, based on understanding of pre-eruptive magmatic processes and the dynamic interaction between the variables at play in a volcanic system. Such forecasts must also acknowledge and express uncertainties; therefore most current research in this field focuses on the application of event-tree analysis to reflect multiple possible scenarios and the probability of each scenario. Such forecasts are critically dependent on comprehensive and authoritative global volcano unrest data sets - the very information currently collected in WOVOdat. As the database becomes more complete, Boolean searches, side-by-side digital (and thus scalable) comparisons of unrest, and pattern recognition will generate reliable results. Statistical distributions obtained from WOVOdat can then be used to estimate the probabilities of each scenario following specific patterns of unrest. We have established the main web interface for data submission and visualization, and have now incorporated ~20% of worldwide unrest data into the database, covering more than 100 eruptive episodes.
In the upcoming years we will concentrate on acquiring data from volcano observatories, developing a robust data query interface, optimizing data mining, and creating tools by which WOVOdat can be used for probabilistic eruption forecasting. The more data in WOVOdat, the more useful it will be.
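The event-tree analysis mentioned above multiplies conditional probabilities along each branch to obtain the joint probability of each scenario. A minimal sketch; the tree structure and all branch probabilities are illustrative, not WOVOdat statistics.

```python
def scenario_probabilities(tree, prefix="", p=1.0, out=None):
    """Compute the joint probability of each leaf scenario in a small
    event tree. Each branch maps a label to a pair (conditional
    probability, subtree-or-None); leaves carry the product of the
    conditional probabilities along their path."""
    if out is None:
        out = {}
    for label, (prob, subtree) in tree.items():
        path = f"{prefix}/{label}" if prefix else label
        if subtree is None:
            out[path] = p * prob
        else:
            scenario_probabilities(subtree, path, p * prob, out)
    return out

# unrest -> magmatic or non-magmatic; magmatic -> eruption or stall
tree = {
    "magmatic": (0.6, {"eruption": (0.3, None), "stall": (0.7, None)}),
    "non-magmatic": (0.4, None),
}
probs = scenario_probabilities(tree)
# probs["magmatic/eruption"] is ~0.18; all scenarios sum to 1
```

In practice the conditional probabilities at each node would be estimated from the empirical unrest distributions that WOVOdat is designed to accumulate.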
NASA Astrophysics Data System (ADS)
Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in the form of raster and vector layers linked with non-spatial information. Comprehensive data are required to handle emergency situations in their different phases. The database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum required data, including base, thematic, and infrastructure layers, needed to handle disasters. Disaster-specific information is required to handle a particular situation such as flood, cyclone, forest fire, earthquake, landslide, or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of adequate resolution for visualisation and analysis in disaster management. The database must therefore be interconnected and comprehensive to meet the requirements of Emergency Management. This kind of integrated, comprehensive, and structured database is required to deliver the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as availability of data, sharing policies, compatible geospatial standards, and data interoperability.
Therefore, to facilitate using, sharing, and integrating spatial data, there is a need to define standards for building emergency database systems. These include aspects such as i) data integration procedures, namely a standard coding scheme, schema, metadata format, and spatial format; ii) database organisation mechanisms covering data management, catalogues, and data models; and iii) database dissemination through a suitable environment as a standard service for effective service dissemination. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising multi-scale, multi-source data for effective emergency response using customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.
NASA Technical Reports Server (NTRS)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos
1992-01-01
The goal of the Universal Index System (UIS) is to provide an easy-to-use and reliable interface to many different kinds of database systems. The impetus for this system was to simplify database index management for users, thus encouraging the use of indexes. As the idea grew into an actual system design, increasing database performance by facilitating the use of time-saving techniques at the user level became a theme of the project. This Final Report describes the Design, the Implementation of UIS, and its Language Interfaces. It also includes the User's Guide and the Reference Manual.
Parkhill, Anne; Hill, Kelvin
2009-03-01
The Australian National Stroke Foundation appointed a search specialist to find the best available evidence for the second edition of its Clinical Guidelines for Acute Stroke Management. To identify the relative effectiveness of differing evidence sources for the guideline update. We searched and reviewed references from five valid evidence sources for clinical and economic questions: (i) electronic databases; (ii) reference lists of relevant systematic reviews, guidelines, and/or primary studies; (iii) table of contents of a number of key journals for the last 6 months; (iv) internet/grey literature; and (v) experts. Reference sources were recorded, quantified, and analysed. In the clinical portion of the guidelines document, there was a greater use of previous knowledge and sources other than electronic databases for evidence, while there was a greater use of electronic databases for the economic section. The results confirmed that searchers need to be aware of the context and range of sources for evidence searches. For best available evidence, searchers cannot rely solely on electronic databases and need to encompass many different media and sources.
PHASE I MATERIALS PROPERTY DATABASE DEVELOPMENT FOR ASME CODES AND STANDARDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Weiju; Lin, Lianshan
2013-01-01
To support the ASME Boiler and Pressure Vessel Codes and Standards (BPVC) in the modern information era, development of a web-based materials property database has been initiated under the supervision of the ASME Committee on Materials. To achieve efficiency, the project draws heavily upon experience from development of the Gen IV Materials Handbook and the Nuclear System Materials Handbook. The effort is divided into two phases. Phase I is planned to deliver a materials data file warehouse that offers a depository for various files containing raw data and background information, and Phase II will provide a relational digital database with advanced features facilitating digital data processing and management. Population of the database will start with materials property data for nuclear applications and expand to data covering the entire ASME Codes and Standards, including the piping codes, as the database structure is continuously optimized. The ultimate goal of the effort is to establish a sound cyber infrastructure that supports ASME Codes and Standards development and maintenance.
Fault displacement hazard assessment for nuclear installations based on IAEA safety standards
NASA Astrophysics Data System (ADS)
Fukushima, Y.
2016-12-01
In the IAEA Safety Requirements NS-R-3, surface fault displacement hazard assessment (FDHA) is required for the siting of nuclear installations. If any capable faults exist at a candidate site, the IAEA recommends the consideration of alternative sites. However, owing to progress in palaeoseismological investigations, capable faults may be found at existing sites. In such a case, the IAEA recommends evaluating safety using probabilistic FDHA (PFDHA), an empirical approach based on a still quite limited database. A basic and crucial improvement is therefore to enlarge the database. In 2015, the IAEA produced TecDoc-1767 on palaeoseismology as a reference for the identification of capable faults. Another IAEA Safety Report, No. 85, on ground motion simulation based on fault rupture modelling, provides an annex introducing recent PFDHAs and fault displacement simulation methodologies. The IAEA expanded the FDHA project to cover both the probabilistic approach and physics-based fault rupture modelling. The first approach needs a refinement of the empirical methods by building a worldwide database, and the second needs to shift from a kinematic to a dynamic scheme. The two approaches can complement each other, since simulated displacements can fill the gaps of a sparse database and geological observations can be used to calibrate the simulations. The IAEA supported a workshop in October 2015 to discuss the existing databases with the aim of creating a common worldwide database; a consensus on a unified database was reached. The next milestone is to fill the database with as many fault rupture data sets as possible. Another IAEA working group held a workshop in November 2015 to discuss state-of-the-art PFDHA as well as simulation methodologies. The two groups joined a consultancy meeting in February 2016, shared information, identified issues, discussed goals and outputs, and scheduled future meetings. Now we may aim at coordinating activities for the whole set of FDHA tasks jointly.
MALDI-TOF mass spectrometry as a potential tool for Trichomonas vaginalis identification.
Calderaro, Adriana; Piergianni, Maddalena; Montecchini, Sara; Buttrini, Mirko; Piccolo, Giovanna; Rossi, Sabina; Arcangeletti, Maria Cristina; Medici, Maria Cristina; Chezzi, Carlo; De Conto, Flora
2016-06-10
Trichomonas vaginalis is a flagellated protozoan causing trichomoniasis, a sexually transmitted human infection, with around 276.4 million new cases estimated by the World Health Organization. Culture is the gold-standard method for the diagnosis of T. vaginalis infection. Recently, immunochromatographic assays and PCR assays for the detection of T. vaginalis antigen or DNA, respectively, have also become available. Although the known genome sequence of T. vaginalis has made proteomic studies possible, few data are available on the overall proteomic expression profile of T. vaginalis. The aim of this study was to investigate the potential application of MALDI-TOF MS as a new tool for the identification of T. vaginalis. Twenty-one isolates were analysed by MALDI-TOF MS after the creation of a Main Spectrum Profile (MSP) from a T. vaginalis reference strain (G3) and its addition to the Bruker Daltonics database, which does not include any protozoan profiles. This was achieved after the development of a new identification method created by modifying the range setting (6-10 kDa) for the MALDI-TOF MS analysis, in order to exclude overlapping peaks derived from the culture media used in this study. Two MSP reference spectra were created in two different ranges: 3-15 kDa (the standard range setting) and 6-10 kDa (the new range setting). Both MSP spectra were deposited in the MALDI BioTyper database for the identification of additional T. vaginalis strains. All 21 strains analysed in this study were correctly identified using the new identification method. This study demonstrated that changes to the standard MALDI-TOF MS parameters usually used to identify bacteria and fungi allowed identification of the protozoan T. vaginalis. It shows the usefulness of MALDI-TOF MS for the reliable identification of microorganisms grown on complex liquid media, such as the protozoan T. vaginalis, on the basis of the whole protein profile rather than single markers, using a "new range setting" different from that developed for bacteria and fungi.
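The effect of the modified range setting can be illustrated by restricting a peak list to the 6-10 kDa window before database matching. This is a minimal sketch; the peak values and function are illustrative and do not represent the Bruker BioTyper software interface.

```python
def restrict_range(peaks, lo=6000.0, hi=10000.0):
    """Keep only spectral peaks inside a mass window, mimicking the
    6-10 kDa 'new range setting' used above to exclude peaks
    contributed by the culture medium. Peaks are (m/z, relative
    intensity) pairs."""
    return [(mz, inten) for mz, inten in peaks if lo <= mz <= hi]

# Illustrative spectrum: peaks at 3.2 and 12.5 kDa (outside the
# window, e.g. medium-derived) are dropped before MSP matching.
spectrum = [(3200.0, 0.9), (6800.0, 0.7), (9100.0, 0.4), (12500.0, 0.8)]
windowed = restrict_range(spectrum)
```

Only the windowed peaks would then be compared against the deposited MSP reference spectra.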
Bloch-Mouillet, E
1999-01-01
This paper aims to provide technical and practical advice about finding references using Current Contents on disk (Macintosh or PC) or via the Internet (FTP). Seven editions are published each week. They are all organized in the same way and have the same search engine. The Life Sciences edition, extensively used in medical research, is presented here in detail, as an example. This methodological note explains, in French, how to use this reference database. It is designed to be a practical guide for browsing and searching the database, and particularly for creating search profiles adapted to the needs of researchers.
Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.
ERIC Educational Resources Information Center
Conlon, Sumali Pin-Ngern; And Others
1993-01-01
Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)
Evaluation of Database Coverage: A Comparison of Two Methodologies.
ERIC Educational Resources Information Center
Tenopir, Carol
1982-01-01
Describes experiment which compared two techniques used for evaluating and comparing database coverage of a subject area, e.g., "bibliography" and "subject profile." Differences in time, cost, and results achieved are compared by applying techniques to field of volcanology using two databases, Geological Reference File and GeoArchive. Twenty…
Comprehensive Thematic T-matrix Reference Database: a 2013-2014 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadezhda T.; Khlebtsov, Nikolai G.; Wriedt, Thomas; Videen, Gorden
2014-01-01
This paper is the sixth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists several earlier publications not incorporated in the original database and previous updates.
Development and applications of the EntomopathogenID MLSA database for use in agricultural systems
USDA-ARS?s Scientific Manuscript database
The current study reports the development and application of a publicly accessible, curated database of Hypocrealean entomopathogenic fungi sequence data. The goal was to provide a platform for users to easily access sequence data from reference strains. The database can be used to accurately identi...
ERIC Educational Resources Information Center
Cotton, P. L.
1987-01-01
Defines two types of online databases: source, referring to those intended to be complete in themselves, whether full-text or abstracts; and bibliographic, meaning those that are not complete. Predictions are made about the future growth rate of these two types of databases, as well as full-text versus abstract databases. (EM)
NASA Astrophysics Data System (ADS)
Solve, S.; Chayramy, R.; Stock, M.; Pantelic-Babic, J.; Sofranac, Z.; Zivkovic, V.
2015-01-01
As part of the ongoing BIPM key comparison BIPM.EM-K11.a and b, a comparison of the 1 V and 10 V voltage reference standards of the BIPM and the Directorate of Measures and Precious Metals (DMDM), Beograd, Serbia, was carried out from January to March 2014. Two BIPM Zener diode-based travelling standards (Fluke 732B), BIPM6 (Z6) and BIPMA (ZA), were transported by freight to DMDM. At DMDM, the reference standard for DC voltage is a Josephson Voltage Standard, and the output electromotive force of each travelling standard was measured by direct comparison with this primary standard. At the BIPM, the travelling standards were calibrated, before and after the measurements at DMDM, with the Josephson Voltage Standard. The results of all measurements were corrected for the dependence of the output voltages of the Zener standards on internal temperature and ambient atmospheric pressure. The final result of the comparison is presented as the difference between the values assigned to DC voltage standards by DMDM (UDMDM), at the level of 1.018 V and 10 V, and those assigned by the BIPM (UBIPM), at the reference date of 13 February 2014:
UDMDM - UBIPM = 0.094 µV (uc = 0.072 µV) at 1 V
UDMDM - UBIPM = 0.39 µV (uc = 0.12 µV) at 10 V
where uc is the combined standard uncertainty associated with the measured difference, including the uncertainty of the representation of the volt at the BIPM and at DMDM, based on KJ-90, and the uncertainty related to the comparison. The results at the 10 V level are not covered by the uncertainties with a coverage factor of 2. After distribution of Draft A, DMDM discovered that its pressure gauge was defective. Some considerations on the correction to apply to the comparison result, and the corresponding uncertainties, are presented in the report. Nevertheless, the above results fully cover the CMCs of DMDM, which are significantly larger.
No corrections for temperature and pressure are applied in calibrations of customers' secondary standards. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
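The temperature and pressure correction applied to the Zener travelling standards has the generic linear form U_corr = U_meas - c_T(T - T_ref) - c_P(P - P_ref). The sketch below uses hypothetical coefficient values and readings, not the BIPM's actual characterization data.

```python
def corrected_voltage(u_meas, temp_c, pressure_hpa,
                      c_temp, c_press, ref_temp=23.0, ref_press=1013.25):
    """Correct a Zener standard's output (volts) for internal
    temperature (degC) and ambient pressure (hPa) with a linear
    model. Coefficients c_temp (V/degC) and c_press (V/hPa) are
    instrument-specific; the values used below are illustrative."""
    return (u_meas
            - c_temp * (temp_c - ref_temp)
            - c_press * (pressure_hpa - ref_press))

# A 10 V output read at 23.4 degC and 990 hPa, with hypothetical
# sensitivities of 0.05 uV/degC and 0.002 uV/hPa:
u = corrected_voltage(10.0000012, 23.4, 990.0,
                      c_temp=0.05e-6, c_press=0.002e-6)
```

A defective pressure gauge, as reported by DMDM, would bias the pressure term of exactly this kind of correction, which is why the report discusses the corresponding uncertainty.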
[A systematic evaluation of application of the web-based cancer database].
Huang, Tingting; Liu, Jialin; Li, Yong; Zhang, Rui
2013-10-01
In order to support the theory and practice of web-based cancer database development in China, we performed a systematic evaluation to assess the state of development of web-based cancer databases at home and abroad. We carried out computer-based retrieval of the Ovid-MEDLINE, SpringerLink, EBSCOhost, Wiley Online Library and CNKI databases for papers published between Jan. 1995 and Dec. 2011, and hand-searched the references of these papers. We selected qualified papers according to pre-established inclusion and exclusion criteria, and carried out information extraction and analysis. Searching the online databases yielded 1244 papers, and checking the reference lists identified another 19 articles. Thirty-one articles met the inclusion and exclusion criteria; we extracted and assessed the evidence from these. Analysis showed that the U.S.A. ranked first, accounting for 26% of the databases. Thirty-nine percent of the web-based cancer databases are comprehensive cancer databases. Among single-cancer databases, breast cancer and prostatic cancer rank highest, each accounting for 10%. Thirty-two percent of the cancer databases are associated with cancer gene information. As for technical applications, MySQL and PHP were the most widely applied, at nearly 23% each.
Lu, Tu-Lin; Li, Jin-Ci; Yu, Jiang-Yong; Cai, Bao-Chang; Mao, Chun-Qin; Yin, Fang-Zhou
2014-01-01
Traditional Chinese medicine (TCM) reference standards play an important role in the quality control of Chinese herbal pieces. This paper reviews the development of TCM reference standards. By analyzing the 2010 edition of the Chinese Pharmacopoeia, the application of TCM reference standards in the quality control of Chinese herbal pieces is summarized, and the problems existing in the system are put forward. In the process of improving the quality control of Chinese herbal pieces, various advanced methods and technologies should be used to research the characteristic reference standards of Chinese herbal pieces, and more reasonable reference standards should be introduced into the quality control system. This article discusses solutions with respect to TCM reference standards, and the future development of quality control for Chinese herbal pieces is considered.
Caracausi, Maria; Piovesan, Allison; Antonaros, Francesca; Strippoli, Pierluigi; Vitale, Lorenza; Pelleri, Maria Chiara
2017-09-01
The ideal reference, or control, gene for the study of gene expression in a given organism should be expressed at a medium‑high level for easy detection, should be expressed at a constant/stable level throughout different cell types and within the same cell type undergoing different treatments, and should maintain these features through as many different tissues of the organism. From a biological point of view, these theoretical requirements of an ideal reference gene appear to be best suited to housekeeping (HK) genes. Recent advancements in the quality and completeness of human expression microarray data and in their statistical analysis may provide new clues toward the quantitative standardization of human gene expression studies in biology and medicine, both cross‑ and within‑tissue. The systematic approach used by the present study is based on the Transcriptome Mapper tool and exploits the automated reassignment of probes to corresponding genes, intra‑ and inter‑sample normalization, elaboration and representation of gene expression values in linear form within an indexed and searchable database with a graphical interface recording quantitative levels of expression, expression variability and cross‑tissue width of expression for more than 31,000 transcripts. The present study conducted a meta‑analysis of a pool of 646 expression profile data sets from 54 different human tissues and identified actin γ 1 as the HK gene that best fits the combination of all the traditional criteria to be used as a reference gene for general use; two ribosomal protein genes, RPS18 and RPS27, and one aquaporin gene, POM121 transmembrane nucleporin C, were also identified. 
The present study provided a list of tissue‑ and organ‑specific genes that may be most suited for the following individual tissues/organs: Adipose tissue, bone marrow, brain, heart, kidney, liver, lung, ovary, skeletal muscle and testis; and also provides in these cases a representative, quantitative portrait of the relative, typical gene‑expression profile in the form of searchable database tables.
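The selection criteria described above (medium-high mean expression, low variability, wide cross-tissue expression) can be sketched as a simple ranking. ACTG1 (actin gamma 1) is the gene the study singles out; the expression values, the second gene name, and the scoring rule below are toy illustrations, not the study's data or method.

```python
import statistics

def rank_reference_genes(expr):
    """Rank candidate reference genes by: low coefficient of
    variation (stability), then high mean expression (detectability),
    then wide cross-tissue expression. Lower score tuples rank
    earlier."""
    scores = {}
    for gene, values in expr.items():
        mean = statistics.fmean(values)
        cv = statistics.stdev(values) / mean          # variability
        width = sum(v > 0 for v in values) / len(values)  # fraction of tissues expressing
        scores[gene] = (cv, -mean, -width)
    return sorted(scores, key=scores.get)

profiles = {               # expression across 5 tissues (arbitrary units)
    "ACTG1": [90, 100, 95, 105, 98],
    "GENE_X": [5, 200, 0, 40, 12],   # hypothetical unstable gene
}
ranking = rank_reference_genes(profiles)
```

The stably and broadly expressed candidate ranks first; a gene absent or erratic across tissues falls to the bottom, mirroring the study's criteria for an ideal reference gene.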
Molecular Identification of Commercialized Medicinal Plants in Southern Morocco
Krüger, Åsa; Rydberg, Anders; Abbad, Abdelaziz; Björk, Lars; Martin, Gary
2012-01-01
Background Medicinal plant trade is important for local livelihoods. However, many medicinal plants are difficult to identify when they are sold as roots, powders or bark. DNA barcoding involves using a short, agreed-upon region of a genome as a unique identifier for species - ideally, as a global standard. Research Question What are the functionality, efficacy and accuracy of barcoding for identifying root material, using medicinal plant roots sold by herbalists in Marrakech, Morocco, as a test dataset? Methodology In total, 111 root samples were sequenced for four proposed barcode regions: rpoC1, psbA-trnH, matK and ITS. Sequences were searched against a tailored reference database of Moroccan medicinal plants and their closest relatives using BLAST and Blastclust, and through inference of RAxML phylograms of the aligned market and reference samples. Principal Findings Sequencing success was high for rpoC1, psbA-trnH, and ITS, but low for matK. Searches using rpoC1 alone resulted in a number of ambiguous identifications, indicating insufficient DNA variation for accurate species-level identification. Combining rpoC1, psbA-trnH and ITS allowed the majority of the market samples to be identified to genus level. For a minority of the market samples, the barcoding identification differed significantly from previous hypotheses based on the vernacular names. Conclusions/Significance Endemic plant species are commercialized in Marrakech. Adulteration is common, and this may indicate that the products are becoming locally endangered. Nevertheless, the majority of the traded roots belong to species that are common and not known to be endangered. A significant conclusion from our results is that unknown samples are more difficult to identify than earlier suggested, especially if the reference sequences were obtained from different populations.
A global barcoding database should therefore contain sequences from different populations of the same species to assure the reference sequences characterize the species throughout its distributional range. PMID:22761800
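Combining the three informative loci into a genus-level call, as described above, amounts to requiring agreement across per-locus identifications. A minimal sketch; the consensus rule and the species names in the sample are hypothetical illustrations, not the study's pipeline output.

```python
def consensus_identification(hits):
    """Combine per-locus identifications (e.g. top BLAST hits against
    a reference database) into a genus-level call, accepted only when
    all loci with a hit agree on the genus; otherwise return None."""
    genera = {name.split()[0] for name in hits.values() if name}
    return genera.pop() if len(genera) == 1 else None

# Hypothetical per-locus top hits for one market root sample:
sample = {"rpoC1": "Anacyclus pyrethrum",
          "psbA-trnH": "Anacyclus pyrethrum",
          "ITS": "Anacyclus valentinus"}
genus = consensus_identification(sample)
```

Here the loci disagree at species level but agree at genus level, matching the paper's finding that the combined markers resolve most market samples only to genus.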
NASA Astrophysics Data System's New Data
NASA Astrophysics Data System (ADS)
Eichhorn, G.; Accomazzi, A.; Demleitner, M.; Grant, C. S.; Kurtz, M. J.; Murray, S. S.
2000-05-01
The NASA Astrophysics Data System has greatly increased its data holdings. The Physics database now contains almost 900,000 references and the Astronomy database almost 550,000 references. The Instrumentation database has almost 600,000 references. The scanned articles in the ADS Article Service are increasing in number continuously. Almost 1 million pages have been scanned so far. Recently the abstracts books from the Lunar and Planetary Science Conference have been scanned and put on-line. The Monthly Notices of the Royal Astronomical Society are currently being scanned back to Volume 1. This is the last major journal to be completely scanned and on-line. In cooperation with a conservation project of the Harvard libraries, microfilms of historical observatory literature are currently being scanned. This will provide access to an important part of the historical literature. The ADS can be accessed at: http://adswww.harvard.edu This project is funded by NASA under grant NCC5-189.
Learning-based automatic detection of severe coronary stenoses in CT angiographies
NASA Astrophysics Data System (ADS)
Melki, Imen; Cardon, Cyril; Gogin, Nicolas; Talbot, Hugues; Najman, Laurent
2014-03-01
3D cardiac computed tomography angiography (CCTA) is becoming a standard routine for non-invasive heart disease diagnosis. Thanks to its high negative predictive value, CCTA is increasingly used to decide whether or not the patient should be considered for invasive angiography. However, an accurate assessment of cardiac lesions using this modality is still a time-consuming task and requires a high degree of clinical expertise. Thus, providing an automatic tool to assist clinicians during the diagnosis task is highly desirable. In this work, we propose a fully automatic approach for accurate severe cardiac stenosis detection. Our algorithm uses Random Forest classification to detect stenotic areas. First, the classifier is trained on 18 cardiac CT exams with a CTA reference standard. Then, the classification result is used to detect severe stenoses (with a narrowing degree higher than 50%) in a database of 30 cardiac CT exams. Features that best capture the different stenosis configurations are extracted along the vessel centerlines at different scales. To ensure robustness to vessel direction and scale changes, we extract features inside cylindrical patterns with variable directions and radii, ensuring that the regions of interest contain only the vessel walls. The algorithm is evaluated using the Rotterdam Coronary Artery Stenoses Detection and Quantification Evaluation Framework. The evaluation is performed using reference-standard quantifications obtained from quantitative coronary angiography (QCA) and consensus reading of CTA. The obtained results show that we can reliably detect severe stenoses with a sensitivity of 64%.
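The severity criterion used above (a lesion is "severe" when diameter narrowing exceeds 50% relative to a healthy reference diameter) reduces to simple arithmetic, sketched below. The function names, example diameters, and threshold handling are illustrative, not taken from the authors' implementation.

```python
# Minimal sketch of the severity criterion: percent diameter stenosis
# relative to a reference (healthy) segment, with lesions above 50%
# flagged as severe. Values and names are illustrative only.

def narrowing_degree(lumen_diameter, reference_diameter):
    """Percent diameter stenosis relative to the reference segment."""
    return 100.0 * (1.0 - lumen_diameter / reference_diameter)

def flag_severe(diameters_along_centerline, reference_diameter, threshold=50.0):
    """Indices of centerline points whose narrowing exceeds the threshold."""
    return [i for i, d in enumerate(diameters_along_centerline)
            if narrowing_degree(d, reference_diameter) > threshold]

# A hypothetical vessel tapering to 1.2 mm against a 3.0 mm reference:
diameters = [3.0, 2.8, 2.0, 1.2, 2.6, 3.0]
severe_points = flag_severe(diameters, reference_diameter=3.0)
```

Here only the 1.2 mm point (60% narrowing) crosses the 50% threshold; in the paper this decision is driven by Random Forest scores over cylindrical features rather than raw diameters alone.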
Blue guardian: an open architecture for rapid ISR demonstration
NASA Astrophysics Data System (ADS)
Barrett, Donald A.; Borntrager, Luke A.; Green, David M.
2016-05-01
Throughout the Department of Defense (DoD), acquisition, platform integration, and life-cycle costs for weapons systems have continued to rise. Although Open Architecture (OA) interface standards are one of the primary methods being used to reduce these costs, the Air Force Rapid Capabilities Office (AFRCO) has extended the OA concept and chartered the Open Mission Systems (OMS) initiative with industry to develop and demonstrate a consensus-based, non-proprietary OA standard for integrating subsystems and services into airborne platforms. The new OMS standard provides the capability to decouple vendor-specific sensors, payloads, and service implementations from platform-specific architectures, and is still in the early stages of maturation and demonstration. The Air Force Research Laboratory (AFRL) Sensors Directorate has developed the Blue Guardian program to demonstrate advanced sensing technology utilizing open architectures in operationally relevant environments. Over the past year, Blue Guardian has developed a platform architecture using the Air Force's OMS reference architecture and conducted a ground and flight test program with multiple payload combinations. Systems tested included a variety of vendor-unique Full Motion Video (FMV) systems, a Wide Area Motion Imagery (WAMI) system, a multi-mode radar system, processing and database functions, multiple decompression algorithms, multiple communications systems, and a suite of software tools. Initial results of the Blue Guardian program show the promise of OA for DoD acquisitions, especially for Intelligence, Surveillance and Reconnaissance (ISR) payload applications. Specifically, the OMS reference architecture was extremely useful in reducing the cost and time required for integrating new systems.
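The decoupling idea at the heart of the abstract above, platform code depending only on a vendor-neutral payload contract so that sensors can be swapped without platform changes, can be sketched as follows. This is a hypothetical illustration of the pattern, not the OMS standard's actual interfaces; all class and method names are invented.

```python
# Hypothetical sketch of OMS-style decoupling: the platform integrates
# against an abstract payload interface, so vendor-specific sensor
# implementations can be swapped without touching platform code.
# All names here are illustrative, not drawn from the OMS standard.

from abc import ABC, abstractmethod


class SensorPayload(ABC):
    """Vendor-neutral contract the platform integrates against."""

    @abstractmethod
    def capability(self) -> str:
        """Short label for the mission capability this payload provides."""

    @abstractmethod
    def acquire(self) -> dict:
        """Collect one unit of sensor data (simplified to a dict here)."""


class VendorFmvCamera(SensorPayload):
    """One vendor's Full Motion Video implementation."""
    def capability(self):
        return "FMV"
    def acquire(self):
        return {"type": "FMV", "frames": 30}


class VendorWamiSensor(SensorPayload):
    """A different vendor's Wide Area Motion Imagery implementation."""
    def capability(self):
        return "WAMI"
    def acquire(self):
        return {"type": "WAMI", "coverage_km2": 40}


class Platform:
    """Platform code never references a concrete vendor class."""
    def __init__(self):
        self.payloads = []
    def integrate(self, payload: SensorPayload):
        self.payloads.append(payload)
    def mission_capabilities(self):
        return sorted(p.capability() for p in self.payloads)


platform = Platform()
platform.integrate(VendorFmvCamera())
platform.integrate(VendorWamiSensor())
```

Because `Platform` sees only the `SensorPayload` abstraction, replacing one vendor's FMV camera with another's requires no platform modification, which is the cost-and-time reduction the abstract attributes to the OMS reference architecture.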