Sample records for database release technology

  1. Investigation of an artificial intelligence technology--Model trees. Novel applications for an immediate release tablet formulation database.

    PubMed

    Shao, Q; Rowe, R C; York, P

    2007-06-01

    This study investigated an artificial intelligence technology, model trees, as a modelling tool applied to an immediate release tablet formulation database. The modelling performance was compared with artificial neural networks, which are well established and widely applied in the pharmaceutical product formulation field. The predictability of generated models was validated on unseen data and judged by the correlation coefficient R². Output from the model tree analyses produced multivariate linear equations which predicted tablet tensile strength, disintegration time, and drug dissolution profiles of similar quality to neural network models. However, additional and valuable knowledge hidden in the formulation database was extracted from these equations. It is concluded that, as a transparent technology, model trees are useful tools for formulators.
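
    The "multivariate linear equations" the abstract refers to are the leaf models of a model tree: the tree splits the input space, then fits a plain linear model in each leaf, which is what keeps the result readable for formulators. A minimal single-split sketch (M5-style, with hypothetical data; not the authors' implementation) looks like this:

```python
import numpy as np

def fit_linear(X, y):
    # Least-squares linear model with intercept -- the per-leaf equation.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def model_tree_predict(X, y, split_feature, threshold, X_new):
    # One-split model tree: fit a separate linear model on each side of
    # the threshold, then route new samples to the matching leaf model.
    # The leaf coefficients stay human-readable, which is the
    # "transparency" advantage the abstract notes over neural networks.
    left = X[:, split_feature] <= threshold
    coef_l = fit_linear(X[left], y[left])
    coef_r = fit_linear(X[~left], y[~left])
    A_new = np.hstack([X_new, np.ones((len(X_new), 1))])
    go_left = X_new[:, split_feature] <= threshold
    return np.where(go_left, A_new @ coef_l, A_new @ coef_r)

# Example: piecewise-linear data with a breakpoint at x = 0.5.
X = np.linspace(0.0, 1.0, 21).reshape(-1, 1)
y = np.where(X[:, 0] <= 0.5, X[:, 0], 2 * X[:, 0] - 0.5)
pred = model_tree_predict(X, y, split_feature=0, threshold=0.5,
                          X_new=np.array([[0.25], [0.75]]))
```

    A real model tree learner (e.g. M5) would also search for the best split and prune, but each leaf still reduces to an equation like the ones the study mined for formulation knowledge.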

  2. Clay and Polymer-Based Composites Applied to Drug Release: A Scientific and Technological Prospection.

    PubMed

    Meirelles, Lyghia Maria Araújo; Raffin, Fernanda Nervo

    2017-01-01

    There has been a growing trend in recent years towards the development of hybrid materials, called composites, based on clay and polymers, whose innovative properties render them attractive for drug release. The objective of this manuscript was to conduct a review of original articles on this topic published over the last decade and of the body of patents related to these carriers. A scientific prospection was carried out spanning the period from 2005 to 2015 on the Web of Science database. The technological prospection encompassed the United States Patent and Trademark Office, the European Patent Office, the World International Patent Office and the National Institute of Industrial Property databases, filtering patents with the code A61K. The survey revealed a rise in the number of publications over the past decade, confirming the potential of these hybrids for use in pharmaceutical technology. Through interaction between polymer and clay, the mechanical and thermal properties of composites are enhanced, promoting stable, controlled drug release in biological media. The clay most cited in the articles analyzed was montmorillonite, owing to its high surface area and capacity for ion exchange. The polymeric part is commonly obtained by copolymerization, particularly using acrylate derivatives. The hybrid materials are obtained mainly in particulate form on a nanometric scale, attaining a modified release profile often sensitive to stimuli in the media. A low number of patents related to the topic were found. The World International Patent Office had the highest number of lodged patents, while Japan was the country which published the most patents. A need to broaden the application of this technology to include more therapeutic classes was identified. Moreover, the absence of regulation of nanomaterials might explain the disparity between scientific and technological output.

  3. Struggling with Excellence in All We Do: Is the Lure of New Technology Affecting How We Process Our Members’ Information?

    DTIC Science & Technology

    2016-02-01

    Approved for public release: distribution unlimited. Disclaimer: The views expressed in this academic research paper are those of the author... is managed today is far too complex and riddled with risk. Why is a member’s information duplicated across multiple disparate databases? To better... databases. The purpose of this paper is to provide a viable solution within a given set of constraints that the Air Force can implement. Utilizing the

  4. Whole genome sequencing of elite rice cultivars as a comprehensive information resource for marker assisted selection

    USDA-ARS's Scientific Manuscript database

    Current advances in sequencing technologies and bioinformatics make it possible to determine a nearly complete genomic background of rice, a staple food for the poor. Consequently, comprehensive databases of variation among thousands of varieties are currently being assembled and released. Proper analysi...

  5. Design and Implementation of Campus Application APP Based on Android

    NASA Astrophysics Data System (ADS)

    Zhu, Dongxu; Liu, Yabin; Pi, Xianlei; Zhou, Weixiang; Huang, Meng

    2017-07-01

    This paper presents the design and implementation of a campus application app based on Android technology, taking "Internet + campus" as its entry point. Built on a GIS (Geographic Information System) spatial database and combining GIS spatial analysis, Java development, and Android development technologies, the system server adopts the Model View Controller architecture to make efficient use of campus information and provide real-time learning and daily-life information for campus students. The released app, "Fingertips on the Institute of Disaster Prevention Science and Technology," offers campus students of all grades convenient access to life, learning, and entertainment services.

  6. Technology transfer at NASA - A librarian's view

    NASA Technical Reports Server (NTRS)

    Buchan, Ronald L.

    1991-01-01

    The NASA programs, publications, and services promoting the transfer and utilization of aerospace technology developed by and for NASA are briefly surveyed. Topics addressed include the corporate sources of NASA technical information and its interest for corporate users of information services; the IAA and STAR abstract journals; NASA/RECON, NTIS, and the AIAA Aerospace Database; the RECON Space Commercialization file; the Computer Software Management and Information Center file; company information in the RECON database; and services to small businesses. Also discussed are the NASA publications Tech Briefs and Spinoff, the Industrial Applications Centers, NASA continuing bibliographies on management and patent abstracts (indexed using the NASA Thesaurus), the Index to NASA News Releases and Speeches, and the Aerospace Research Information Network (ARIN).

  7. Automated Aerial Refueling Concept of Operations

    DTIC Science & Technology

    2017-06-09

    and their associated contractors. This document is suitable for release in the public domain; it may be included in DOD and NATO databases such as... Navigation System (INS) technology to provide high-availability, high-integrity, four-dimensional guidance. A robust datalink will be needed with the

  8. 15 CFR Supplement No. 1 to Part 734 - Questions and Answers-Technology and Software Subject to the EAR

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... software and databases, at wholesale and retail. Our products are available by mail order to any member of.... Release of information by instruction in catalog courses and associated teaching laboratories of academic... proprietary business does not qualify as an “academic institution” within the meaning of § 734.9 of this part...

  9. 15 CFR Supplement No. 1 to Part 734 - Questions and Answers-Technology and Software Subject to the EAR

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... software and databases, at wholesale and retail. Our products are available by mail order to any member of.... Release of information by instruction in catalog courses and associated teaching laboratories of academic... proprietary business does not qualify as an “academic institution” within the meaning of § 734.9 of this part...

  10. 15 CFR Supplement No. 1 to Part 734 - Questions and Answers-Technology and Software Subject to the EAR

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... software and databases, at wholesale and retail. Our products are available by mail order to any member of.... Release of information by instruction in catalog courses and associated teaching laboratories of academic... proprietary business does not qualify as an “academic institution” within the meaning of § 734.9 of this part...

  11. 15 CFR Supplement No. 1 to Part 734 - Questions and Answers-Technology and Software Subject to the EAR

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... software and databases, at wholesale and retail. Our products are available by mail order to any member of.... Release of information by instruction in catalog courses and associated teaching laboratories of academic... proprietary business does not qualify as an “academic institution” within the meaning of § 734.9 of this part...

  12. 32 CFR 1800.21 - Processing of requests for records.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... components reasonably believed to hold responsive records. (b) Database of “officially released information... database of “officially released information” which contains copies of documents released by NACIC. Searches of this database can be accomplished expeditiously. Moreover, requests that are specific and well...

  13. 32 CFR 1800.21 - Processing of requests for records.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... components reasonably believed to hold responsive records. (b) Database of “officially released information... database of “officially released information” which contains copies of documents released by NACIC. Searches of this database can be accomplished expeditiously. Moreover, requests that are specific and well...

  14. 32 CFR 1800.21 - Processing of requests for records.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... components reasonably believed to hold responsive records. (b) Database of “officially released information... database of “officially released information” which contains copies of documents released by NACIC. Searches of this database can be accomplished expeditiously. Moreover, requests that are specific and well...

  15. 32 CFR 1800.21 - Processing of requests for records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... components reasonably believed to hold responsive records. (b) Database of “officially released information... database of “officially released information” which contains copies of documents released by NACIC. Searches of this database can be accomplished expeditiously. Moreover, requests that are specific and well...

  16. 32 CFR 1900.21 - Processing of requests for records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Information Act Amendments of 1996. (b) Database of “officially released information.” As an alternative to extensive tasking and as an accommodation to many requesters, the Agency maintains a database of “officially released information” which contains copies of documents released by this Agency. Searches of this database...

  17. The Sequenced Angiosperm Genomes and Genome Databases.

    PubMed

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide essential resources for human life, such as food, energy, oxygen, and materials. They have also shaped the evolution of humans, animals, and the planet itself. Despite the numerous advances in genome reports and sequencing technologies, no review covers all the released angiosperm genomes and the genome databases for data sharing. Based on the rapid advances and innovations in database reconstruction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases: databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a regularly updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology.

  18. The Sequenced Angiosperm Genomes and Genome Databases

    PubMed Central

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide essential resources for human life, such as food, energy, oxygen, and materials. They have also shaped the evolution of humans, animals, and the planet itself. Despite the numerous advances in genome reports and sequencing technologies, no review covers all the released angiosperm genomes and the genome databases for data sharing. Based on the rapid advances and innovations in database reconstruction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases: databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a regularly updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology. PMID:29706973

  19. Release of ToxCastDB and ExpoCastDB databases

    EPA Science Inventory

    EPA has released two databases - the Toxicity Forecaster database (ToxCastDB) and a database of chemical exposure studies (ExpoCastDB) - that scientists and the public can use to access chemical toxicity and exposure data. ToxCastDB users can search and download data from over 50...

  20. Process of formulating USDA's Expanded Flavonoid Database for the Assessment of Dietary intakes: a new tool for epidemiological research.

    PubMed

    Bhagwat, Seema A; Haytowitz, David B; Wasswa-Kintu, Shirley I; Pehrsson, Pamela R

    2015-08-14

    The scientific community continues to be interested in potential links between flavonoid intakes and beneficial health effects associated with certain chronic diseases such as CVD, some cancers and type 2 diabetes. Three separate flavonoid databases (Flavonoids, Isoflavones and Proanthocyanidins) developed by the USDA Agricultural Research Service since 1999 with frequent updates have been used to estimate dietary flavonoid intakes, and investigate their health effects. However, each of these databases contains only a limited number of foods. The USDA has constructed a new Expanded Flavonoids Database for approximately 2900 commonly consumed foods, using analytical values from their existing flavonoid databases (Flavonoid Release 3.1 and Isoflavone Release 2.0) as the foundation to calculate values for all twenty-nine flavonoid compounds included in these two databases. Thus, the new database provides full flavonoid profiles for twenty-nine predominant dietary flavonoid compounds for every food in the database. Original analytical values in Flavonoid Release 3.1 and Isoflavone Release 2.0 for corresponding foods were retained in the newly constructed database. Proanthocyanidins are not included in the expanded database. The process of formulating the new database includes various calculation techniques. This article describes the process of populating values for the twenty-nine flavonoid compounds for every food in the dataset, along with challenges encountered and resolutions suggested. The new expanded flavonoid database, released on the Nutrient Data Laboratory's website, provides uniformity in the estimation of flavonoid content in foods and will be a valuable tool for epidemiological studies to assess dietary intakes.

  1. USDA National Nutrient Database for Standard Reference, Release 24

    USDA-ARS's Scientific Manuscript database

    The USDA Nutrient Database for Standard Reference, Release 24 contains data for over 7,900 food items for up to 146 food components. It replaces the previous release, SR23, issued in September 2010. Data in SR24 supersede values in the printed Handbooks and previous electronic releases of the databa...

  2. RNAimmuno: A database of the nonspecific immunological effects of RNA interference and microRNA reagents

    PubMed Central

    Olejniczak, Marta; Galka-Marciniak, Paulina; Polak, Katarzyna; Fligier, Andrzej; Krzyzosiak, Wlodzimierz J.

    2012-01-01

    The RNAimmuno database was created to provide easy access to information regarding the nonspecific effects generated in cells by RNA interference triggers and microRNA regulators. Various RNAi and microRNA reagents, which differ in length and structure, often cause non-sequence-specific immune responses, in addition to triggering the intended sequence-specific effects. The activation of the cellular sensors of foreign RNA or DNA may lead to the induction of type I interferon and proinflammatory cytokine release. Subsequent changes in the cellular transcriptome and proteome may result in adverse effects, including cell death during therapeutic treatments or the misinterpretation of experimental results in research applications. The manually curated RNAimmuno database gathers the majority of the published data regarding the immunological side effects that are caused in investigated cell lines, tissues, and model organisms by different reagents. The database is accessible at http://rnaimmuno.ibch.poznan.pl and may be helpful in the further application and development of RNAi- and microRNA-based technologies. PMID:22411954

  3. RNAimmuno: a database of the nonspecific immunological effects of RNA interference and microRNA reagents.

    PubMed

    Olejniczak, Marta; Galka-Marciniak, Paulina; Polak, Katarzyna; Fligier, Andrzej; Krzyzosiak, Wlodzimierz J

    2012-05-01

    The RNAimmuno database was created to provide easy access to information regarding the nonspecific effects generated in cells by RNA interference triggers and microRNA regulators. Various RNAi and microRNA reagents, which differ in length and structure, often cause non-sequence-specific immune responses, in addition to triggering the intended sequence-specific effects. The activation of the cellular sensors of foreign RNA or DNA may lead to the induction of type I interferon and proinflammatory cytokine release. Subsequent changes in the cellular transcriptome and proteome may result in adverse effects, including cell death during therapeutic treatments or the misinterpretation of experimental results in research applications. The manually curated RNAimmuno database gathers the majority of the published data regarding the immunological side effects that are caused in investigated cell lines, tissues, and model organisms by different reagents. The database is accessible at http://rnaimmuno.ibch.poznan.pl and may be helpful in the further application and development of RNAi- and microRNA-based technologies.

  4. USDA food and nutrient databases provide the infrastructure for food and nutrition research, policy, and practice.

    PubMed

    Ahuja, Jaspreet K C; Moshfegh, Alanna J; Holden, Joanne M; Harris, Ellen

    2013-02-01

    The USDA food and nutrient databases provide the basic infrastructure for food and nutrition research, nutrition monitoring, policy, and dietary practice. They have had a long history that goes back to 1892 and are unique, as they are the only databases available in the public domain that perform these functions. There are 4 major food and nutrient databases released by the Beltsville Human Nutrition Research Center (BHNRC), part of the USDA's Agricultural Research Service. These include the USDA National Nutrient Database for Standard Reference, the Dietary Supplement Ingredient Database, the Food and Nutrient Database for Dietary Studies, and the USDA Food Patterns Equivalents Database. The users of the databases are diverse and include federal agencies, the food industry, health professionals, restaurants, software application developers, academia and research organizations, international organizations, and foreign governments, among others. Many of these users have partnered with BHNRC to leverage funds and/or scientific expertise to work toward common goals. The use of the databases has increased tremendously in the past few years, especially the breadth of uses. These new uses of the data are bound to increase with the increased availability of technology and public health emphasis on diet-related measures such as sodium and energy reduction. Hence, continued improvement of the databases is important, so that they can better address these challenges and provide reliable and accurate data.

  5. Teach with Databases: Toxics Release Inventory. [Multimedia].

    ERIC Educational Resources Information Center

    Barracato, Jay; Spooner, Barbara

    This curriculum unit provides students with real world applications of science as it pertains to toxic releases into the environment. This boxed package contains the Toxics Release Inventory (TRI) Teacher's Guide, TRI Database Basics guide, comprehensive TRI compact disk with user's guide, "Getting Started: A Guide to Bringing Environmental…

  6. USDA National Nutrient Database for Standard Reference, release 28

    USDA-ARS's Scientific Manuscript database

    The USDA National Nutrient Database for Standard Reference, Release 28 contains data for nearly 8,800 food items for up to 150 food components. SR28 replaces the previous release, SR27, originally issued in August 2014. Data in SR28 supersede values in the printed handbooks and previous electronic...

  7. USDA National Nutrient Database for Standard Reference, Release 25

    USDA-ARS's Scientific Manuscript database

    The USDA National Nutrient Database for Standard Reference, Release 25 (SR25) contains data for over 8,100 food items for up to 146 food components. It replaces the previous release, SR24, issued in September 2011. Data in SR25 supersede values in the printed handbooks and previous electronic releas...

  8. New Formulations of Methylphenidate for the Treatment of Attention-Deficit/Hyperactivity Disorder: Pharmacokinetics, Efficacy, and Tolerability.

    PubMed

    Cortese, Samuele; D'Acunto, Giulia; Konofal, Eric; Masi, Gabriele; Vitiello, Benedetto

    2017-02-01

    Psychostimulants are the recommended first-line pharmacological treatment for attention-deficit/hyperactivity disorder (ADHD). Methylphenidate is one of the most commonly used psychostimulants worldwide. Given that immediate-release and/or tablet/capsule formulations may decrease adherence to methylphenidate treatment, several drug companies have been developing novel long-acting and/or liquid/chewable formulations that may improve adherence as well as (for long-acting formulations) reduce abuse potential, decrease stigma associated with multiple administrations per day, and decrease the potential for adverse effects related to dosage peak. Here, we review the pharmacokinetics, efficacy, and tolerability of novel formulations of methylphenidate that are in development or have been approved by the US FDA or European Medicines Agency (EMA) in the last 5 years. We searched the websites of the FDA, EMA, ClinicalTrials.gov, and the pertinent drug companies. We also searched PubMed, Ovid databases (MEDLINE, PsycINFO, Embase + Embase classic), and ISI Web of Knowledge (Web of Science [Science Citation Index Expanded], Biological Abstracts, Biosis, Food Science and Technology Abstracts) to retrieve any additional pertinent information. We found data from trials for the following compounds: (1) methylphenidate extended-release oral suspension (MEROS; NWP06, Quillivant™); (2) methylphenidate extended-release chewable capsules (NWP09, QuilliChew ER™); (3) methylphenidate hydrochloride extended-release capsules (Aptensio XR™); (4) methylphenidate extended-release orally disintegrating tablets (XR-ODT; NT-0102, Cotempla™); (5) ORADUR technology (once-daily tamper-resistant formulation) methylphenidate sustained release (SR); and (6) methylphenidate modified-release (HLD-200; Bejorna™). Overall, available evidence based on trials suggests these compounds have good efficacy and tolerability. Future research should further explore the effectiveness and tolerability of these new formulations as well as their potential to improve adherence to treatment in the 'real world' via pragmatic trials.

  9. JANIS-2: An Improved Version of the NEA Java-based Nuclear Data Information System

    NASA Astrophysics Data System (ADS)

    Soppera, N.; Henriksson, H.; Nouri, A.; Nagel, P.; Dupont, E.

    2005-05-01

    JANIS (JAva-based Nuclear Information Software) is a display program designed to facilitate the visualisation and manipulation of nuclear data. Its objective is to allow the user of nuclear data to access numerical and graphical representations without prior knowledge of the storage format. It offers maximum flexibility for the comparison of different nuclear data sets. Features included in the latest release are described, such as direct access to centralised databases through Java Servlet technology.

  10. JANIS-2: An Improved Version of the NEA Java-based Nuclear Data Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soppera, N.; Henriksson, H.; Nagel, P.

    2005-05-24

    JANIS (JAva-based Nuclear Information Software) is a display program designed to facilitate the visualisation and manipulation of nuclear data. Its objective is to allow the user of nuclear data to access numerical and graphical representations without prior knowledge of the storage format. It offers maximum flexibility for the comparison of different nuclear data sets. Features included in the latest release are described, such as direct access to centralised databases through Java Servlet technology.

  11. Learning about Severe Combined Immunodeficiency (SCID)

    MedlinePlus

    ... Release: February 22, 2005. X-linked SCID mutation database (IL2RGbase). On Other Sites: Development of population-based ...

  12. Recent updates and developments to plant genome size databases

    PubMed Central

    Garcia, Sònia; Leitch, Ilia J.; Anadon-Rosell, Alba; Canela, Miguel Á.; Gálvez, Francisco; Garnatje, Teresa; Gras, Airy; Hidalgo, Oriane; Johnston, Emmeline; Mas de Xaxars, Gemma; Pellicer, Jaume; Siljak-Yakovlev, Sonja; Vallès, Joan; Vitales, Daniel; Bennett, Michael D.

    2014-01-01

    Two plant genome size databases have been recently updated and/or extended: the Plant DNA C-values database (http://data.kew.org/cvalues), and GSAD, the Genome Size in Asteraceae database (http://www.asteraceaegenomesize.com). While the first provides information on nuclear DNA contents across land plants and some algal groups, the second is focused on one of the largest and most economically important angiosperm families, Asteraceae. Genome size data have numerous applications: they can be used in comparative studies on genome evolution, or as a tool to appraise the cost of whole-genome sequencing programs. The growing interest in genome size and increasing rate of data accumulation have necessitated the continued update of these databases. Currently, the Plant DNA C-values database (Release 6.0, Dec. 2012) contains data for 8510 species, while GSAD has 1219 species (Release 2.0, June 2013), representing increases of 17 and 51%, respectively, in the number of species with genome size data, compared with previous releases. Here we provide overviews of the most recent releases of each database, and outline new features of GSAD. The latter include (i) a tool to visually compare genome size data between species, (ii) the option to export data and (iii) a webpage containing information about flow cytometry protocols. PMID:24288377

  13. Ten years of change: National Library of Medicine TOXMAP gets a new look.

    PubMed

    Hochstein, Colette; Gemoets, Darren; Goshorn, Jeanne

    2014-01-01

    The United States National Library of Medicine (NLM) TOXNET® databases <http://toxnet.nlm.nih.gov> provide broad coverage of environmental health information covering a wide variety of topics, including access to the U.S. Environmental Protection Agency (EPA)'s Toxics Release Inventory (TRI) data. The NLM web-based geographic information system (GIS), TOXMAP® <http://toxmap.nlm.nih.gov/>, provides interactive maps which show where TRI chemicals are released into the environment and links to TOXNET for information about these chemicals. TOXMAP also displays locations of Superfund sites on the EPA National Priority List, as well as information about the chemical contaminants at these sites. This column focuses on a new version of TOXMAP which brings it up to date with current web GIS technologies and user expectations.

  14. Freshwater Biological Traits Database (Final Report)

    EPA Science Inventory

    EPA announced the release of the final report, Freshwater Biological Traits Database. This report discusses the development of a database of freshwater biological traits. The database combines several existing traits databases into an online format. The database is also...

  15. Biodiversity research in the “big data” era: GigaScience and Pensoft work together to publish the most data-rich species description

    PubMed Central

    2013-01-01

    With the publication of the first eukaryotic species description, combining transcriptomic, DNA barcoding, and micro-CT imaging data, GigaScience and Pensoft demonstrate how classical taxonomic description of a new species can be enhanced by applying new generation molecular methods, and novel computing and imaging technologies. This 'holistic' approach in taxonomic description of a new species of cave-dwelling centipede is published in the Biodiversity Data Journal (BDJ), with coordinated data release in the GigaScience GigaDB database. PMID:24229463

  16. Biodiversity research in the "big data" era: GigaScience and Pensoft work together to publish the most data-rich species description.

    PubMed

    Edmunds, Scott C; Hunter, Chris I; Smith, Vincent; Stoev, Pavel; Penev, Lyubomir

    2013-10-28

    With the publication of the first eukaryotic species description, combining transcriptomic, DNA barcoding, and micro-CT imaging data, GigaScience and Pensoft demonstrate how classical taxonomic description of a new species can be enhanced by applying new generation molecular methods, and novel computing and imaging technologies. This 'holistic' approach in taxonomic description of a new species of cave-dwelling centipede is published in the Biodiversity Data Journal (BDJ), with coordinated data release in the GigaScience GigaDB database.

  17. The Reference Genome Sequence of Saccharomyces cerevisiae: Then and Now

    PubMed Central

    Engel, Stacia R.; Dietrich, Fred S.; Fisk, Dianna G.; Binkley, Gail; Balakrishnan, Rama; Costanzo, Maria C.; Dwight, Selina S.; Hitz, Benjamin C.; Karra, Kalpana; Nash, Robert S.; Weng, Shuai; Wong, Edith D.; Lloyd, Paul; Skrzypek, Marek S.; Miyasato, Stuart R.; Simison, Matt; Cherry, J. Michael

    2014-01-01

    The genome of the budding yeast Saccharomyces cerevisiae was the first eukaryotic genome to be completely sequenced. It was released in 1996 as the result of a worldwide effort by hundreds of researchers. In the time since, the yeast genome has been intensively studied by geneticists, molecular biologists, and computational scientists all over the world. Maintenance and annotation of the genome sequence have long been provided by the Saccharomyces Genome Database, one of the original model organism databases. To deepen our understanding of the eukaryotic genome, the S. cerevisiae strain S288C reference genome sequence recently received its first major update since 1996. The new version, called “S288C 2010,” was determined from a single yeast colony using modern sequencing technologies and serves as the anchor for further innovations in yeast genomic science. PMID:24374639

  18. Tag Content Access Control with Identity-based Key Exchange

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Rong, Chunming

    2010-09-01

    Radio Frequency Identification (RFID) technology, used to identify objects and users, has recently been applied in many areas such as retail and supply chains. Preventing tag content from unauthorized readout is a core RFID privacy problem. The hash-lock access control protocol makes a tag release its content only to a reader that knows the secret key shared between them. However, to obtain this shared secret key, the reader needs to communicate with a back-end database. In this paper, we propose an identity-based secret key exchange approach to generate the secret key required by the hash-lock access control protocol. With this approach, the back-end database connection is no longer needed, and the tag cloning problem is eliminated at the same time.
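    The basic hash-lock step described above can be illustrated in a few lines of Python. This is a minimal sketch with illustrative names (Tag, Reader); it shows only the classic hash-lock unlock exchange, not the identity-based key exchange the paper proposes.

```python
import hashlib
import os

def h(data):
    # one-way function used by the hash-lock protocol
    return hashlib.sha256(data).digest()

class Tag:
    """A locked tag stores metaID = h(key) and releases its content
    only to a reader that presents the matching key."""
    def __init__(self, content, key):
        self.content = content
        self.meta_id = h(key)  # the key itself is never stored on the tag

    def query(self):
        return self.meta_id    # a locked tag answers queries with metaID only

    def unlock(self, key):
        if key is not None and h(key) == self.meta_id:
            return self.content  # key matches: release content
        return None              # wrong or missing key: stay locked

class Reader:
    """Holds a metaID -> key table, the role the back-end database
    plays in the classic hash-lock protocol."""
    def __init__(self):
        self.keys = {}

    def register(self, key):
        self.keys[h(key)] = key

    def read(self, tag):
        meta_id = tag.query()         # 1. obtain metaID from the tag
        key = self.keys.get(meta_id)  # 2. look up the shared secret key
        return tag.unlock(key)        # 3. present the key to unlock

key = os.urandom(16)
tag = Tag(b"item-4711", key)
reader = Reader()
reader.register(key)
print(reader.read(tag))  # b'item-4711'
```

    A reader without the registered key gets nothing back; removing the need for that key table on a networked database is exactly what the identity-based key exchange in the paper aims to achieve.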

  19. Over 20 years of reaction access systems from MDL: a novel reaction substructure search algorithm.

    PubMed

    Chen, Lingran; Nourse, James G; Christie, Bradley D; Leland, Burton A; Grier, David L

    2002-01-01

    From REACCS, to MDL ISIS/Host Reaction Gateway, and most recently to MDL Relational Chemistry Server, a new product based on Oracle data cartridge technology, MDL's reaction database management and retrieval systems have undergone great changes. The evolution of the system architecture is briefly discussed. The evolution of MDL reaction substructure search (RSS) algorithms is detailed. This article mainly describes a novel RSS algorithm. This algorithm is based on a depth-first search approach and is able to fully and prospectively use reaction specific information, such as reacting center and atom-atom mapping (AAM) information. The new algorithm has been used in the recently released MDL Relational Chemistry Server and allows the user to precisely find reaction instances in databases while minimizing unrelated hits. Finally, the existing and new RSS algorithms are compared with several examples.
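    To illustrate the general idea behind depth-first substructure search with backtracking, the following Python sketch embeds a labelled query graph into a target molecule. It is a bare-bones illustration only: MDL's algorithm additionally exploits reacting-centre and atom-atom mapping information, which is not modelled here, and the graph encoding and all names are assumptions of this sketch.

```python
def substructure_match(query, target):
    """Depth-first search for an embedding of `query` into `target`.
    Each graph: {'atoms': {id: label}, 'bonds': set of frozenset pairs}.
    Returns one mapping query-atom -> target-atom, or None."""
    q_atoms = list(query['atoms'])

    def neighbors(graph, a):
        return {next(iter(b - {a})) for b in graph['bonds'] if a in b}

    def extend(mapping):
        if len(mapping) == len(q_atoms):
            return dict(mapping)            # all query atoms placed
        qa = q_atoms[len(mapping)]          # next query atom to place
        for ta, label in target['atoms'].items():
            if ta in mapping.values() or label != query['atoms'][qa]:
                continue                    # atom used, or element mismatch
            # every already-mapped neighbour of qa must be bonded to ta
            ok = all(frozenset({ta, mapping[qn]}) in target['bonds']
                     for qn in neighbors(query, qa) if qn in mapping)
            if ok:
                mapping[qa] = ta
                result = extend(mapping)
                if result:
                    return result
                del mapping[qa]             # dead end: backtrack
        return None

    return extend({})

# Toy example: find a C-O fragment inside ethanol-like C-C-O
query  = {'atoms': {0: 'C', 1: 'O'}, 'bonds': {frozenset({0, 1})}}
target = {'atoms': {0: 'C', 1: 'C', 2: 'O'},
          'bonds': {frozenset({0, 1}), frozenset({1, 2})}}
print(substructure_match(query, target))  # {0: 1, 1: 2}
```

    A real RSS engine would prune this search much more aggressively, for example by requiring matched atoms to share reacting-centre flags, which is how reaction-specific information cuts down unrelated hits.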

  20. 40 CFR 312.26 - Reviews of Federal, State, Tribal, and local government records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... use restrictions, applicable to the subject property. (c) With regard to nearby or adjoining properties, the review of federal, tribal, state, and local government records or databases of government... records of reported releases or threatened releases. Such records or databases containing such records and...

  1. Data Processing Factory for the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan

    2002-12-01

    The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.

  2. GOBASE—a database of mitochondrial and chloroplast information

    PubMed Central

    O'Brien, Emmet A.; Badidi, Elarbi; Barbasiewicz, Ania; deSousa, Cristina; Lang, B. Franz; Burger, Gertraud

    2003-01-01

    GOBASE is a relational database containing integrated sequence, RNA secondary structure and biochemical and taxonomic information about organelles. GOBASE release 6 (summer 2002) contains over 130 000 mitochondrial sequences, an increase of 37% over the previous release, and more than 30 000 chloroplast sequences in a new auxiliary database. To handle this flood of new data, we have designed and implemented GOpop, a Java system for population and verification of the database. We have also implemented a more powerful and flexible user interface using the PHP programming language. http://megasun.bch.umontreal.ca/gobase/gobase.html. PMID:12519975

  3. TNAURice: Database on rice varieties released from Tamil Nadu Agricultural University

    PubMed Central

    Ramalingam, Jegadeesan; Arul, Loganathan; Sathishkumar, Natarajan; Vignesh, Dhandapani; Thiyagarajan, Katiannan; Samiyappan, Ramasamy

    2010-01-01

    We developed TNAURice, a database of the rice varieties released from a public institution, Tamil Nadu Agricultural University (TNAU), Coimbatore, India. Backed by MS-SQL with ASP.NET at the front end, the database provides information on both quantitative and qualitative descriptors of the rice varieties, including their parental details. Through a user-friendly search utility, the database can be searched effectively by varietal descriptors, and the entire contents are navigable as well. The database is handy for plant breeders involved in varietal improvement programs when deciding on the choice of parental lines. TNAURice is available for public access at http://www.btistnau.org/germdefault.aspx. PMID:21364829

  4. TNAURice: Database on rice varieties released from Tamil Nadu Agricultural University.

    PubMed

    Ramalingam, Jegadeesan; Arul, Loganathan; Sathishkumar, Natarajan; Vignesh, Dhandapani; Thiyagarajan, Katiannan; Samiyappan, Ramasamy

    2010-11-27

    We developed TNAURice, a database of the rice varieties released from a public institution, Tamil Nadu Agricultural University (TNAU), Coimbatore, India. Backed by MS-SQL with ASP.NET at the front end, the database provides information on both quantitative and qualitative descriptors of the rice varieties, including their parental details. Through a user-friendly search utility, the database can be searched effectively by varietal descriptors, and the entire contents are navigable as well. The database is handy for plant breeders involved in varietal improvement programs when deciding on the choice of parental lines. TNAURice is available for public access at http://www.btistnau.org/germdefault.aspx.

  5. A Ruby API to query the Ensembl database for genomic features.

    PubMed

    Strozzi, Francesco; Aerts, Jan

    2011-04-01

    The Ensembl database makes genomic features available via its Genome Browser. It is also possible to access the underlying data through a Perl API for advanced querying. We have developed a full-featured Ruby API to the Ensembl databases, providing the same functionality as the Perl interface with additional features. A single Ruby API is used to access different releases of the Ensembl databases and is also able to query multi-species databases. Most functionality of the API is provided using the ActiveRecord pattern. The library depends on introspection to make it release independent. The API is available through the Rubygem system and can be installed with the command gem install ruby-ensembl-api.

  6. HUNT: launch of a full-length cDNA database from the Helix Research Institute.

    PubMed

    Yudate, H T; Suwa, M; Irie, R; Matsui, H; Nishikawa, T; Nakamura, Y; Yamaguchi, D; Peng, Z Z; Yamamoto, T; Nagai, K; Hayashi, K; Otsuki, T; Sugiyama, T; Ota, T; Suzuki, Y; Sugano, S; Isogai, T; Masuho, Y

    2001-01-01

    The Helix Research Institute (HRI) in Japan is releasing 4356 HUman Novel Transcripts and related information in the newly established HUNT database. The institute is a joint research project principally funded by the Japanese Ministry of International Trade and Industry, and the clones were sequenced in the governmental New Energy and Industrial Technology Development Organization (NEDO) Human cDNA Sequencing Project. The HUNT database contains an extensive amount of annotation from advanced analysis and represents an essential bioinformatics contribution towards understanding gene function. The HRI human cDNA clones were obtained from full-length-enriched cDNA libraries constructed with the oligo-capping method and have yielded novel full-length cDNA sequences. A large fraction has little similarity to any proteins of known function, and to obtain clues about possible function we have developed original analysis procedures. Any putative function deduced here can be validated or refuted by complementary analysis results. The user can also extract information from specific categories such as PROSITE patterns, PFAM domains, PSORT localization, transmembrane helices, and clones with GENIUS structure assignments. The HUNT database can be accessed at http://www.hri.co.jp/HUNT.

  7. Database of Sources of Environmental Releases of Dioxin-Like Compounds in the United States

    EPA Science Inventory

    The Database of Sources of Environmental Releases of Dioxin-like Compounds in the United States (US)...

    Freshwater Biological Traits Database (Data Sources)

    EPA Science Inventory

    When EPA released the final report, Freshwater Biological Traits Database, it referenced numerous data sources that are included below. The Traits Database report covers the development of a database of freshwater biological traits with additional traits that are relevan...

  8. DOE technology information management system database study report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  9. IPD-MHC 2.0: an improved inter-species database for the study of the major histocompatibility complex

    PubMed Central

    Maccari, Giuseppe; Robinson, James; Ballingall, Keith; Guethlein, Lisbeth A.; Grimholt, Unni; Kaufman, Jim; Ho, Chak-Sum; de Groot, Natasja G.; Flicek, Paul; Bontrop, Ronald E.; Hammond, John A.; Marsh, Steven G. E.

    2017-01-01

    The IPD-MHC Database project (http://www.ebi.ac.uk/ipd/mhc/) collects and expertly curates sequences of the major histocompatibility complex from non-human species and provides the infrastructure and tools to enable accurate analysis. Since the first release of the database in 2003, IPD-MHC has grown and currently hosts a number of specific sections, with more than 7000 alleles from 70 species, including non-human primates, canines, felines, equids, ovids, suids, bovins, salmonids and murids. These sequences are expertly curated and made publicly available through an open access website. The IPD-MHC Database is a key resource in its field, and this has led to an average of 1500 unique visitors and more than 5000 viewed pages per month. As the database has grown in size and complexity, it has created a number of challenges in maintaining and organizing information, particularly the need to standardize nomenclature and taxonomic classification, while incorporating new allele submissions. Here, we describe the latest database release, the IPD-MHC 2.0 and discuss planned developments. This release incorporates sequence updates and new tools that enhance database queries and improve the submission procedure by utilizing common tools that are able to handle the varied requirements of each MHC-group. PMID:27899604

  10. First release of the Dietary Supplement Ingredient Database: Nutrient estimates and methodology for 18 vitamins and minerals in adult multivitamin/minerals (MVMs)

    USDA-ARS?s Scientific Manuscript database

    The Dietary Supplement Ingredient Database (DSID) is a federal initiative to provide analytical validation of ingredients in dietary supplements. The first release on vitamins and minerals in adult MVMs is now available. Multiple lots of >100 representative adult MVMs were chemically analyzed for ...

  11. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data.

    PubMed

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-09-18

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer to the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions, and importantly gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data

    PubMed Central

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-01-01

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer to the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions, and importantly gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. PMID:25990738

  13. Post-OPC verification using a full-chip pattern-based simulation verification method

    NASA Astrophysics Data System (ADS)

    Hung, Chi-Yuan; Wang, Ching-Heng; Ma, Cliff; Zhang, Gary

    2005-11-01

    In this paper, we evaluated and investigated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes (0.13 um, 0.11 um, and 90 nm) were used in the investigation. Although our OPC technology has proven robust in general, the variety of tape-outs with complicated design styles and technologies makes it difficult to develop a "complete" or bullet-proof OPC algorithm that covers every possible layout pattern. In the evaluation, model-based post-OPC checking found errors in several of the dozens of databases examined; such errors can be costly in manufacturing (reticle and wafer processing) and, more importantly, can delay production. From this full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test-chip designs may not provide sufficient coverage across the range of designs to be produced in the process, so fatal errors (such as pinching or bridging), poor CD distribution, and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development and post-OPC verification after production release of the OPC. Lastly, we discuss how the new pattern-based verification tool differs from conventional edge-based tools and summarize the advantages of our new tool and methodology: 1) accuracy: superior inspection algorithms, down to 1 nm accuracy, with the new pattern-based approach; 2) high-speed performance: pattern-centric algorithms give the best full-chip inspection efficiency; 3) powerful analysis capability: flexible error distribution, grouping, interactive viewing, and hierarchical pattern extraction to narrow down to unique patterns/cells.

  14. Towards G2G: Systems of Technology Database Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Bell, David

    2005-01-01

    We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data, for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams, and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing, and modifying components of technology database systems. G2G will interoperate information and knowledge distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include an integrated approach to sustaining effective management of technology investments while allowing the various technology database systems to be independently managed. The integration technology will comply with emerging open standards, so applications can be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will build on NASA's breakthroughs in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database systems.

  15. Diet History Questionnaire: Database Revision History

    Cancer.gov

    The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.

  16. RNAcentral: an international database of ncRNA sequences

    DOE PAGES

    Williams, Kelly Porter

    2014-10-28

    The field of non-coding RNA biology has been hampered by the lack of availability of a comprehensive, up-to-date collection of accessioned RNA sequences. Here we present the first release of RNAcentral, a database that collates and integrates information from an international consortium of established RNA sequence databases. The initial release contains over 8.1 million sequences, including representatives of all major functional classes. A web portal (http://rnacentral.org) provides free access to data, search functionality, cross-references, source code and an integrated genome browser for selected species.

  17. The research of network database security technology based on web service

    NASA Astrophysics Data System (ADS)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and the security levels of networked databases, studies network database security technology, and analyzes in detail a sub-key encryption algorithm, which was successfully applied in a campus one-card system. The realization process of the encryption algorithm is discussed; the method can serve as a reference in many fields, particularly in management information system security and e-commerce.
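    The paper does not reproduce its sub-key algorithm, so the Python sketch below only illustrates the general idea of sub-key encryption as commonly described: each protected field is encrypted under a distinct sub-key derived from a master key, so exposure of one field key reveals nothing about the others. All names are hypothetical, and the XOR keystream is a toy cipher for illustration, not production cryptography.

```python
import hashlib
import hmac

def subkey(master: bytes, record_id: str, field: str) -> bytes:
    # Derive a per-record, per-field sub-key from the master key
    # (illustrative derivation; the paper's scheme is not specified).
    return hmac.new(master, f"{record_id}/{field}".encode(),
                    hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher: XOR data against hash-derived blocks.
    # Symmetric, so the same call encrypts and decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

master = b"campus-card-master-key"
cipher = xor_stream(subkey(master, "stu-001", "balance"), b"142.50")
plain  = xor_stream(subkey(master, "stu-001", "balance"), cipher)
print(plain)  # b'142.50'
```

    Because every (record, field) pair gets its own sub-key, a database administrator can be granted the sub-key for one column without ever holding the master key.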

  18. IPD-MHC 2.0: an improved inter-species database for the study of the major histocompatibility complex.

    PubMed

    Maccari, Giuseppe; Robinson, James; Ballingall, Keith; Guethlein, Lisbeth A; Grimholt, Unni; Kaufman, Jim; Ho, Chak-Sum; de Groot, Natasja G; Flicek, Paul; Bontrop, Ronald E; Hammond, John A; Marsh, Steven G E

    2017-01-04

    The IPD-MHC Database project (http://www.ebi.ac.uk/ipd/mhc/) collects and expertly curates sequences of the major histocompatibility complex from non-human species and provides the infrastructure and tools to enable accurate analysis. Since the first release of the database in 2003, IPD-MHC has grown and currently hosts a number of specific sections, with more than 7000 alleles from 70 species, including non-human primates, canines, felines, equids, ovids, suids, bovins, salmonids and murids. These sequences are expertly curated and made publicly available through an open access website. The IPD-MHC Database is a key resource in its field, and this has led to an average of 1500 unique visitors and more than 5000 viewed pages per month. As the database has grown in size and complexity, it has created a number of challenges in maintaining and organizing information, particularly the need to standardize nomenclature and taxonomic classification, while incorporating new allele submissions. Here, we describe the latest database release, the IPD-MHC 2.0 and discuss planned developments. This release incorporates sequence updates and new tools that enhance database queries and improve the submission procedure by utilizing common tools that are able to handle the varied requirements of each MHC-group. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. XML technology planning database : lessons learned

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Neff, Jon M.

    2005-01-01

    A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return on investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.
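    A dictionary entry of the kind described, relating a technology's capabilities to mission requirements through a standardized taxonomy, might look like the following fragment. The element and attribute names here are hypothetical illustrations, not taken from the actual XCALIBR schema.

```xml
<!-- Hypothetical XCALIBR-style dictionary entry; names are illustrative. -->
<technology id="solar-electric-propulsion">
  <taxonomy mission="deep-space" system="propulsion" subsystem="thruster"/>
  <capability name="specific-impulse" value="3000" units="s"/>
  <satisfies requirement="delta-v" mission="outer-planet-orbiter"/>
</technology>
```

    Encoding the taxonomy as attributes on each entry is what lets ROI analyses join requirement and capability records without a fixed relational schema.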

  1. Preliminary geologic map of the Oat Mountain 7.5' quadrangle, Southern California: a digital database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, Russell H.

    1995-01-01

    This database, identified as "Preliminary Geologic Map of the Oat Mountain 7.5' Quadrangle, southern California: A Digital Database," has been approved for release and publication by the Director of the USGS. Although this database has been reviewed and is substantially complete, the USGS reserves the right to revise the data pursuant to further analysis and review. This database is released on condition that neither the USGS nor the U. S. Government may be held liable for any damages resulting from its use. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping consult Yerkes and Campbell (1993). More specific information about the units may be available in the original sources.

  2. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS AND SANCTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States...

  3. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States, or by a United...

  4. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States, or by a United...

  5. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States, or by a United...

  6. 31 CFR 560.418 - Release of technology or software in the United States or a third country.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Release of technology or software in... IRANIAN TRANSACTIONS AND SANCTIONS REGULATIONS Interpretations § 560.418 Release of technology or software in the United States or a third country. The release of technology or software in the United States...

  7. Data Management System

    NASA Technical Reports Server (NTRS)

    1997-01-01

    CENTRA 2000 Inc., a wholly owned subsidiary of Auto-trol Technology, obtained permission to use software originally developed at Johnson Space Center for the Space Shuttle and early Space Station projects. To support their enormous information-handling needs, a product data management, electronic document management, and workflow system was designed. Initially, just 33 database tables comprised the original software, which was later expanded to about 100 tables. This system, now called CENTRA 2000, is designed for quick implementation and supports the engineering process from preliminary design through release to production. CENTRA 2000 can also handle audit histories and provides a means to ensure new information is distributed. The product has 30 production sites worldwide.

  8. Database for volcanic processes and geology of Augustine Volcano, Alaska

    USGS Publications Warehouse

    McIntire, Jacqueline; Ramsey, David W.; Thoms, Evan; Waitt, Richard B.; Beget, James E.

    2012-01-01

    This digital release contains information used to produce the geologic map published as Plate 1 in U.S. Geological Survey Professional Paper 1762 (Waitt and Begét, 2009). The main component of this digital release is a geologic map database prepared using geographic information systems (GIS) applications. This release also contains links to files to view or print the map plate, accompanying measured sections, and main report text from Professional Paper 1762. It should be noted that Augustine Volcano erupted in 2006, after the completion of the geologic mapping shown in Professional Paper 1762 and presented in this database. Information on the 2006 eruption can be found in U.S. Geological Survey Professional Paper 1769. For the most up-to-date information on the status of Alaska volcanoes, please refer to the U.S. Geological Survey Volcano Hazards Program website.

  9. Physiological Parameters Database for PBPK Modeling (External Review Draft)

    EPA Science Inventory

    EPA released for public comment a physiological parameters database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence. It also contains similar data for an...

  10. U.S. Quaternary Fault and Fold Database Released

    NASA Astrophysics Data System (ADS)

    Haller, Kathleen M.; Machette, Michael N.; Dart, Richard L.; Rhea, B. Susan

    2004-06-01

    A comprehensive online compilation of Quaternary-age faults and folds throughout the United States was recently released by the U.S. Geological Survey, with cooperation from state geological surveys, academia, and the private sector. The Web site at http://Qfaults.cr.usgs.gov/ contains searchable databases and related geo-spatial data that characterize earthquake-related structures that could be potential seismic sources for large-magnitude (M > 6) earthquakes.

  11. The Smart Aerial Release Machine, a Universal System for Applying the Sterile Insect Technique

    PubMed Central

    Mubarqui, Ruben Leal; Perez, Rene Cano; Kladt, Roberto Angulo; Lopez, Jose Luis Zavala; Parker, Andrew; Seck, Momar Talla; Sall, Baba; Bouyer, Jérémy

    2014-01-01

    Background Beyond insecticides, alternative methods to control insect pests for agriculture and vectors of diseases are needed. Management strategies involving the mass-release of living control agents have been developed, including genetic control with sterile insects and biological control with parasitoids, for which aerial release of insects is often required. Aerial release in genetic control programmes often involves the use of chilled sterile insects, which can improve dispersal, survival and competitiveness of sterile males. Currently available means of aerially releasing chilled fruit flies are, however, insufficiently precise to ensure homogeneous distribution at low release rates, and no device is available for tsetse. Methodology/Principal Findings Here we present the smart aerial release machine, a new design by the Mubarqui Company, based on the use of vibrating conveyors. The machine is controlled through Bluetooth by a tablet with the Android operating system, including a completely automatic guidance and navigation system (MaxNav software). The tablet is also connected to an online relational database facilitating the preparation of flight schedules and automatic storage of flight reports. The new machine was compared with a conveyor release machine in Mexico using two fruit fly species (Anastrepha ludens and Ceratitis capitata), and we obtained better dispersal homogeneity (% of positive traps, p<0.001) for both species and better recapture rates for Anastrepha ludens (p<0.001), especially at low release densities (<1500 per ha). We also demonstrated that the machine can replace paper boxes for aerial release of tsetse in Senegal. Conclusions/Significance This technology limits damage to the insects and allows a large range of release rates, from 10 flies/km2 for tsetse flies up to 600,000 flies/km2 for fruit flies. The potential of this machine to release other species like mosquitoes is discussed. Plans and operating instructions for the machine are provided to allow its use worldwide. PMID:25036274

  12. The smart aerial release machine, a universal system for applying the sterile insect technique.

    PubMed

    Leal Mubarqui, Ruben; Perez, Rene Cano; Kladt, Roberto Angulo; Lopez, Jose Luis Zavala; Parker, Andrew; Seck, Momar Talla; Sall, Baba; Bouyer, Jérémy

    2014-01-01

    Beyond insecticides, alternative methods to control insect pests for agriculture and vectors of diseases are needed. Management strategies involving the mass-release of living control agents have been developed, including genetic control with sterile insects and biological control with parasitoids, for which aerial release of insects is often required. Aerial release in genetic control programmes often involves the use of chilled sterile insects, which can improve dispersal, survival and competitiveness of sterile males. Currently available means of aerially releasing chilled fruit flies are, however, insufficiently precise to ensure homogeneous distribution at low release rates, and no device is available for tsetse. Here we present the smart aerial release machine, a new design by the Mubarqui Company, based on the use of vibrating conveyors. The machine is controlled through Bluetooth by a tablet with the Android operating system, including a completely automatic guidance and navigation system (MaxNav software). The tablet is also connected to an online relational database facilitating the preparation of flight schedules and automatic storage of flight reports. The new machine was compared with a conveyor release machine in Mexico using two fruit fly species (Anastrepha ludens and Ceratitis capitata), and we obtained better dispersal homogeneity (% of positive traps, p<0.001) for both species and better recapture rates for Anastrepha ludens (p<0.001), especially at low release densities (<1500 per ha). We also demonstrated that the machine can replace paper boxes for aerial release of tsetse in Senegal. This technology limits damage to the insects and allows a large range of release rates, from 10 flies/km2 for tsetse flies up to 600,000 flies/km2 for fruit flies. The potential of this machine to release other species like mosquitoes is discussed. Plans and operating instructions for the machine are provided to allow its use worldwide.

  13. Alternative treatment technology information center computer database system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.

    1995-10-01

    The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  14. Database security and encryption technology research and application

    NASA Astrophysics Data System (ADS)

    Zhu, Li-juan

    2013-03-01

    The main purpose of this paper is to discuss the current problem of database information leakage and the important role played by message encryption techniques in database security, as well as the principle of MD5 encryption technology and its use in websites and applications. The article is divided into an introduction, an overview of MD5 encryption technology, the use of MD5 encryption technology, and a final summary. Through its treatment of requirements and applications, the paper gives readers a more detailed and clear understanding of the principle of MD5, its importance in database security, and its use.
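
    The MD5 usage the paper describes can be sketched in a few lines of Python with the standard-library hashlib module. This is a generic illustration, not code from the paper; note that MD5 is now considered cryptographically broken, so modern systems should prefer salted password-hashing schemes such as bcrypt or Argon2.

```python
import hashlib

def md5_hex(value: str) -> str:
    """Return the hex MD5 digest of a UTF-8 string."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()

# Storing a digest instead of the plaintext means a leaked database
# column reveals only the hash, not the original value.
digest = md5_hex("abc")
print(digest)  # 900150983cd24fb0d6963f7d28e17f72
```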

  15. Applications of Technology to CAS Data-Base Production.

    ERIC Educational Resources Information Center

    Weisgerber, David W.

    1984-01-01

    Reviews the economic importance of applying computer technology to Chemical Abstracts Service database production from 1973 to 1983. Database building, technological applications for editorial processing (online editing, Author Index Manufacturing System), and benefits (increased staff productivity, reduced rate of increase of cost of services,…

  16. Advances in real-time magnetic resonance imaging of the vocal tract for speech science and technology research.

    PubMed

    Toutios, Asterios; Narayanan, Shrikanth S

    2016-01-01

    Real-time magnetic resonance imaging (rtMRI) of the moving vocal tract during running speech production is an important emerging tool for speech production research providing dynamic information of a speaker's upper airway from the entire mid-sagittal plane or any other scan plane of interest. There have been several advances in the development of speech rtMRI and corresponding analysis tools, and their application to domains such as phonetics and phonological theory, articulatory modeling, and speaker characterization. An important recent development has been the open release of a database that includes speech rtMRI data from five male and five female speakers of American English each producing 460 phonetically balanced sentences. The purpose of the present paper is to give an overview and outlook of the advances in rtMRI as a tool for speech research and technology development.

  17. Advances in real-time magnetic resonance imaging of the vocal tract for speech science and technology research

    PubMed Central

    TOUTIOS, ASTERIOS; NARAYANAN, SHRIKANTH S.

    2016-01-01

    Real-time magnetic resonance imaging (rtMRI) of the moving vocal tract during running speech production is an important emerging tool for speech production research providing dynamic information of a speaker's upper airway from the entire mid-sagittal plane or any other scan plane of interest. There have been several advances in the development of speech rtMRI and corresponding analysis tools, and their application to domains such as phonetics and phonological theory, articulatory modeling, and speaker characterization. An important recent development has been the open release of a database that includes speech rtMRI data from five male and five female speakers of American English each producing 460 phonetically balanced sentences. The purpose of the present paper is to give an overview and outlook of the advances in rtMRI as a tool for speech research and technology development. PMID:27833745

  18. Potentials of Advanced Database Technology for Military Information Systems

    DTIC Science & Technology

    2001-04-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010866. TITLE: Potentials of Advanced Database Technology for Military Information Systems. Sunil Choenni and Ben Bruggeman, National Aerospace Laboratory NLR, P.O. Box 90502, 1006 BM Amsterdam. ... application of advanced information technology, including database technology, as underpinning ...

  19. Implications of Multilingual Interoperability of Speech Technology for Military Use (Les implications de l’interoperabilite multilingue des technologies vocales pour applications militaires)

    DTIC Science & Technology

    2004-09-01

    Databases 2-2; 2.3.1 Translanguage English Database 2-2; 2.3.2 Australian National Database of Spoken Language 2-3; 2.3.3 Strange Corpus 2-3; 2.3.4 ... of some relevance to speech technology research. 2.3.1 Translanguage English Database: In a daring plan, Joseph Mariani, then at LIMSI-CNRS, proposed to ... non-native speakers. The database is known as the 'Translanguage English Database' but is often referred to as the 'terrible English database.' About 28 ...

  20. USDA Branded Food Products Database, Release 2

    USDA-ARS?s Scientific Manuscript database

    The USDA Branded Food Products Database is the ongoing result of a Public-Private Partnership (PPP), whose goal is to enhance public health and the sharing of open data by complementing the USDA National Nutrient Database for Standard Reference (SR) with nutrient composition of branded foods and pri...

  1. Genetically modified crops: Brazilian law and overview.

    PubMed

    Marinho, C D; Martins, F J O; Amaral Júnior, A T; Gonçalves, L S A; dos Santos, O J A P; Alves, D P; Brasileiro, B P; Peternelli, L A

    2014-07-07

    In Brazil, the first genetically modified (GM) crop was released in 1998, and it is estimated that 84, 78, and 50% of crop areas containing soybean, corn, and cotton, respectively, were transgenic in 2012. This intense and rapid adoption rate confirms that the choice to use technology has been the main factor in developing national agriculture. Thus, this review focuses on understanding these dynamics in the context of farmers, trade relations, and legislation. To accomplish this goal, a survey was conducted using the database of the National Cultivar Registry and the National Service for Plant Variety Protection of the Ministry of Agriculture, Livestock and Supply [Ministério da Agricultura, Pecuária e Abastecimento (MAPA)] between 1998 and October 13, 2013. To date, 36 events have been released: five for soybeans, 18 for corn, 12 for cotton, and one for beans. From these events, 1395 cultivars have been developed and registered: 582 for soybean, 783 for corn and 30 for cotton. Monsanto owns 73.05% of the technologies used to develop these cultivars, while the Dow AgroScience - DuPont partnership and Syngenta have 16.34 and 4.37% ownership, respectively. Thus, the provision of transgenic seeds by these companies is an oligopoly supported by legislation. Moreover, there has been a rapid replacement of conventional crops by GM crops, whose technologies belong almost exclusively to four multinational companies, with the major ownership by Monsanto. These results reflect a warning to the government of the increased dependence on multinational corporations for key agricultural commodities.

  2. Development of a database system for near-future climate change projections under the Japanese National Project SI-CAT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Y.; Kawahara, S.; Araki, F.; Matsuoka, D.; Ishikawa, Y.; Fujita, M.; Sugimoto, S.; Okada, Y.; Kawazoe, S.; Watanabe, S.; Ishii, M.; Mizuta, R.; Murata, A.; Kawase, H.

    2017-12-01

    Analyses of large ensemble data are quite useful for producing probabilistic projections of climate change effects. Ensemble data of "+2K future climate simulations" are currently produced by the Japanese national project "Social Implementation Program on Climate Change Adaptation Technology (SI-CAT)" as part of a database for Policy Decision making for Future climate change (d4PDF; Mizuta et al. 2016) produced by the Program for Risk Information on Climate Change. Those data consist of global warming simulations and regional downscaling simulations. Considering that those data volumes are too large (a few petabytes) to download to a user's local computer, a user-friendly system is required to search and download only the data that satisfy a user's request. Under SI-CAT, we are developing "a database system for near-future climate change projections" that provides functions for users to find the data they need. The database system mainly consists of a relational database, a data download function, and a user interface. The relational database, built on PostgreSQL, is the key component among them. Temporally and spatially compressed data are registered in the relational database. As a first step, we have developed the relational database for precipitation, temperature, and typhoon track data, according to requests by SI-CAT members. The data download function, based on the Open-source Project for a Network Data Access Protocol (OPeNDAP), allows users to download temporally and spatially extracted data based on search results obtained from the relational database. We have also developed a web-based user interface for using the relational database and the data download function. A prototype of the database system is currently in operational testing on our local server. The database system will be released on the Data Integration and Analysis System Program (DIAS) in fiscal year 2017. The techniques used in the database system might also be quite useful for simulation and observational data in other research fields. We report the current status of development and some case studies of the database system.
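
    The search workflow described above — register compressed metadata in a relational database, then query it to locate downloadable subsets — can be sketched as follows. SQLite stands in here for PostgreSQL, and the table schema, region names, and URLs are illustrative assumptions, not the actual SI-CAT design.

```python
import sqlite3

# Minimal sketch of the search side of such a system, using SQLite in
# place of PostgreSQL. Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE simulations (
        id INTEGER PRIMARY KEY,
        variable TEXT,        -- e.g. 'precipitation', 'temperature'
        region TEXT,          -- spatial subset the record covers
        start_year INTEGER,
        end_year INTEGER,
        opendap_url TEXT      -- where the extracted data can be downloaded
    )
""")
conn.executemany(
    "INSERT INTO simulations (variable, region, start_year, end_year, opendap_url) "
    "VALUES (?, ?, ?, ?, ?)",
    [
        ("precipitation", "Kyushu", 2031, 2050, "https://example.org/dap/precip_kyushu"),
        ("temperature", "Hokkaido", 2031, 2050, "https://example.org/dap/temp_hokkaido"),
    ],
)

# Find +2K-era precipitation data for a region of interest; the result
# points the user at the OPeNDAP endpoint for the extracted subset.
rows = conn.execute(
    "SELECT opendap_url FROM simulations WHERE variable = ? AND region = ?",
    ("precipitation", "Kyushu"),
).fetchall()
print(rows[0][0])
```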

  3. Designing Corporate Databases to Support Technology Innovation

    ERIC Educational Resources Information Center

    Gultz, Michael Jarett

    2012-01-01

    Based on a review of the existing literature on database design, this study proposed a unified database model to support corporate technology innovation. This study assessed potential support for the model based on the opinions of 200 technology industry executives, including Chief Information Officers, Chief Knowledge Officers and Chief Learning…

  4. Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.
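
    The idea of a centralized, architecture-independent technology database keyed to timeframes can be sketched as below; the entry fields, technology names, and numbers are hypothetical, not drawn from the actual TTB.

```python
from dataclasses import dataclass

# Hypothetical sketch of a technology database keyed to timeframes, in
# the spirit of the ATLAS Technology Tool Box; fields are illustrative.
@dataclass(frozen=True)
class TechEntry:
    name: str
    timeframe: int              # earliest year the technology is assumed available
    specific_impulse_s: float   # example performance parameter

TTB = [
    TechEntry("chemical propulsion", 2005, 450.0),
    TechEntry("nuclear thermal propulsion", 2020, 900.0),
]

def available(year: int) -> list:
    """Return entries whose assumed availability date is not later than `year`."""
    return [t for t in TTB if t.timeframe <= year]

# An architecture model would draw only technologies valid for its timeframe.
print([t.name for t in available(2010)])  # ['chemical propulsion']
```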

  5. Beyond volume: hospital-based healthcare technology as a predictor of mortality for cardiovascular patients in Korea.

    PubMed

    Kim, Jae-Hyun; Lee, Yunhwan; Park, Eun-Cheol

    2016-06-01

    To examine whether hospital-based healthcare technology is related to 30-day postoperative mortality rates after adjusting for hospital volume of cardiovascular surgical procedures. This study used the National Health Insurance Service-Cohort Sample Database from 2002 to 2013, which was released by the Korean National Health Insurance Service. A total of 11,109 cardiovascular surgical procedure patients were analyzed. The primary analysis was based on logistic regression models to examine our hypothesis. After adjusting for hospital volume of cardiovascular surgical procedures as well as for all other confounders, the odds ratio (OR) of 30-day mortality in low healthcare technology hospitals was 1.567 times higher (95% confidence interval [CI] = 1.069-2.297) than in those with high healthcare technology. We also found that, overall, cardiovascular surgical patients treated in low healthcare technology hospitals, regardless of the extent of cardiovascular surgical procedures, had the highest 30-day mortality rate. Although the results of our study provide scientific evidence for a hospital volume-mortality relationship in cardiovascular surgical patients, the independent effect of hospital-based healthcare technology is strong, resulting in a lower mortality rate. As hospital characteristics such as clinical pathways and protocols are likely to also play an important role in mortality, further research is required to explore their respective contributions.
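
    The abstract's headline numbers follow from standard logistic-regression arithmetic: an odds ratio is the exponential of the model coefficient, and its confidence interval comes from exponentiating the coefficient plus or minus 1.96 standard errors. The sketch below reproduces the reported OR of 1.567; the standard error is an assumed value chosen to match the reported interval, not a figure from the paper.

```python
import math

# An odds ratio is the exponential of the logistic-regression coefficient.
beta_low_tech = math.log(1.567)   # coefficient implied by OR = 1.567
odds_ratio = math.exp(beta_low_tech)
print(round(odds_ratio, 3))  # 1.567

# A 95% CI on the OR comes from exponentiating beta +/- 1.96 * SE(beta).
se = 0.195  # hypothetical standard error, chosen to match the abstract
ci = (math.exp(beta_low_tech - 1.96 * se), math.exp(beta_low_tech + 1.96 * se))
print(round(ci[0], 3), round(ci[1], 3))  # close to the reported 1.069-2.297
```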

  6. 77 FR 66617 - HIT Policy and Standards Committees; Workgroup Application Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... Database AGENCY: Office of the National Coordinator for Health Information Technology, HHS. ACTION: Notice of New ONC HIT FACA Workgroup Application Database. The Office of the National Coordinator (ONC) has launched a new Health Information Technology Federal Advisory Committee Workgroup Application Database...

  7. A procedure for matching truck crash records with hazardous material release incidents and a comparative analysis of the determinants of truck crashes with hazardous material releases.

    DOT National Transportation Integrated Search

    2012-06-01

    In the current study, we quantified the number and location of hazardous release crashes and identified the events leading : to crashes, as well as the type of material released. This study, for the first time, combined two federal databases: the U.S...

  8. Alaska Division of Geological & Geophysical Surveys

    Science.gov Websites

    Alaska Division of Geological & Geophysical Surveys website. ... geologic hazards to buildings, roads, bridges, and other installations and structures (AS 41.08.020). Headlines: New release! Active faults and seismic hazards in Alaska - MP 160. New release! The Alaska Volcano Observatory ...

  9. JAMSTEC multibeam surveys and submersible dives around the Hawaiian Islands: a collaborative Japan-USA exploration of Hawaii's deep seafloor

    USGS Publications Warehouse

    Robinson, Joel E.; Eakins, Barry W.; Kanamatsu, Toshiya; Naka, Jiro; Takahashi, Eiichi; Satake, Kenji; Smith, John R.; Clague, David A.; Yokose, Hisayoshi

    2006-01-01

    This database release, USGS Data Series 171, contains data collected during four Japan-USA collaborative cruises that characterize the seafloor around the Hawaiian Islands. The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) sponsored cruises in 1998, 1999, 2001, and 2002, to build a greater understanding of the deep marine geology around the Hawaiian Islands. During these cruises, scientists surveyed over 600,000 square kilometers of the seafloor with a hull-mounted multibeam seafloor-mapping sonar system (SEA BEAM® 2112), observed the seafloor and collected samples using robotic and manned submersible dives, collected dredge and piston-core samples, and performed single-channel seismic surveys.

  10. BioServices: a common Python package to access biological Web Services programmatically.

    PubMed

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio

    2013-12-15

    Web interfaces provide access to numerous biological databases. Many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
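
    The object-oriented wrapping approach the abstract credits for easing the addition of new services can be illustrated with a small sketch. The class and endpoint names below are hypothetical, chosen to show the pattern; they do not reproduce the actual BioServices API, and no network access is performed.

```python
from urllib.parse import urlencode

# A minimal sketch of the object-oriented wrapping idea: each Web Service
# becomes a class that hides URL construction behind methods, so adding a
# new service only requires subclassing the shared base.
class RESTService:
    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def build_url(self, path: str, **params) -> str:
        url = f"{self.base_url}/{path.lstrip('/')}"
        return f"{url}?{urlencode(params)}" if params else url

class UniProtLike(RESTService):
    """Hypothetical wrapper for a UniProt-style search endpoint."""
    def search_url(self, query: str, fmt: str = "tab") -> str:
        return self.build_url("search", query=query, format=fmt)

service = UniProtLike("https://www.example.org/uniprot")
print(service.search_url("ZAP70"))
```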

  11. Geologic Map Database of Texas

    USGS Publications Warehouse

    Stoeser, Douglas B.; Shock, Nancy; Green, Gregory N.; Dumonceaux, Gayle M.; Heran, William D.

    2005-01-01

    The purpose of this report is to release a digital geologic map database for the State of Texas. This database was compiled for the U.S. Geological Survey (USGS) Minerals Program, National Surveys and Analysis Project, whose goal is a nationwide assemblage of geologic, geochemical, geophysical, and other data. This release makes the geologic data from the Geologic Map of Texas available in digital format. Original clear film positives provided by the Texas Bureau of Economic Geology were photographically enlarged onto Mylar film. These films were scanned, georeferenced, digitized, and attributed by Geologic Data Systems (GDS), Inc., Denver, Colorado. Project oversight and quality control was the responsibility of the U.S. Geological Survey. ESRI ArcInfo coverages, AMLs, and shapefiles are provided.

  12. Conservation and the 4 Rs, which are rescue, rehabilitation, release, and research.

    PubMed

    Pyke, Graham H; Szabo, Judit K

    2018-02-01

    Vertebrate animals can be injured or threatened with injury through human activities, thus warranting their "rescue." Details of wildlife rescue, rehabilitation, release, and associated research (our 4 Rs) are often recorded in large databases, resulting in a wealth of available information. This information has huge research potential and can contribute to understanding of animal biology, anthropogenic impacts on wildlife, and species conservation. However, such databases have been little used, few studies have evaluated factors influencing success of rehabilitation and/or release, recommended actions to conserve threatened species have rarely arisen, and direct benefits for species conservation are yet to be demonstrated. We therefore recommend that additional research be based on data from rescue, rehabilitation, and release of animals that is broader in scope than previous research and would have community support. © 2017 Society for Conservation Biology.

  13. How gamma radiation processing systems are benefiting from the latest advances in information technology

    NASA Astrophysics Data System (ADS)

    Gibson, Wayne H.; Levesque, Daniel

    2000-03-01

    This paper discusses how gamma irradiation plants are putting the latest advances in computer and information technology to use for better process control, cost savings, and strategic advantage. Some irradiator operations are gaining significant benefits by integrating computer technology and robotics with real-time information processing, multi-user databases, and communication networks. The paper reports on several irradiation facilities that are making good use of client/server LANs, user-friendly graphics interfaces, supervisory control and data acquisition (SCADA) systems, distributed I/O with real-time sensor devices, trending analysis, real-time product tracking, dynamic product scheduling, and automated dosimetry reading. These plants are lowering costs through fast and reliable reconciliation of dosimetry data, easier validation to GMP requirements, optimized production flow, and faster release of sterilized products to market. There is a trend in the manufacturing sector towards total automation using "predictive process control". Real-time verification of process parameters "on-the-run" allows control parameters to be adjusted appropriately, before the process strays out of limits. When this technology is applied to the gamma radiation process, control will be based on monitoring key parameters such as time and making adjustments during the process to optimize quality and throughput. Dosimetry results will be used as a quality control measurement rather than as a final monitor for the release of the product. Results are correlated with the irradiation process data to quickly and confidently reconcile variations. Ultimately, a parametric process control system utilizing responsive control, feedback, and verification will not only increase productivity and process efficiency but can also result in operating within tighter dose control set points.
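
    The feedback idea behind "predictive process control" — adjust a control parameter mid-run so the process stays within limits — can be sketched with a toy dose calculation. All rates and set points below are illustrative, not plant data.

```python
# Hypothetical sketch of mid-run feedback for an irradiator: the dwell
# time is planned from the nominal dose rate, then recomputed when
# real-time sensors report the actual delivered rate.
target_dose_kGy = 25.0
nominal_rate_kGy_per_h = 5.0
planned_dwell_h = target_dose_kGy / nominal_rate_kGy_per_h

# Mid-run, sensors report the source is delivering 4% less than nominal;
# extend the dwell time so the accumulated dose still hits its set point.
measured_rate_kGy_per_h = 4.8
adjusted_dwell_h = target_dose_kGy / measured_rate_kGy_per_h
print(round(planned_dwell_h, 3), round(adjusted_dwell_h, 3))  # 5.0 5.208
```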

  14. The Technology Education Graduate Research Database, 1892-2000. CTTE Monograph.

    ERIC Educational Resources Information Center

    Reed, Philip A., Ed.

    The Technology Education Graduate Research Database (TEGRD) was designed in two parts. The first part was a 384 page bibliography of theses and dissertations from 1892-2000. The second part was an online, searchable database of graduate research completed within technology education from 1892 to the present. The primary goals of the project were:…

  15. New database facilitates characterization of flavonoid intake, sources, and positive associations with diet quality among U.S. adults

    USDA-ARS?s Scientific Manuscript database

    Epidemiologic studies show inverse associations between flavonoid intake and chronic disease risk. However, a lack of comprehensive databases of the flavonoid content of foods has hindered efforts to fully characterize population intake. Using a newly released database of flavonoid values, we soug...

  16. The ASTRAL Compendium in 2004

    DOE R&D Accomplishments Database

    Chandonia, John-Marc; Hon, Gary; Walker, Nigel S.; Lo Conte, Loredana; Koehl, Patrice; Levitt, Michael; Brenner, Steven E.

    2003-09-15

    The ASTRAL compendium provides several databases and tools to aid in the analysis of protein structures, particularly through the use of their sequences. Partially derived from the SCOP database of protein structure domains, it includes sequences for each domain and other resources useful for studying these sequences and domain structures. The current release of ASTRAL contains 54,745 domains, more than three times as many as the initial release four years ago. ASTRAL has undergone major transformations in the past two years. In addition to several complete updates each year, ASTRAL is now updated on a weekly basis with preliminary classifications of domains from newly released PDB structures. These classifications are available as a stand-alone database, as well as integrated into other ASTRAL databases such as representative subsets. To enhance the utility of ASTRAL to structural biologists, all SCOP domains are now made available as PDB-style coordinate files as well as sequences. In addition to sequences and representative subsets based on SCOP domains, sequences and subsets based on PDB chains are newly included in ASTRAL. Several search tools have been added to ASTRAL to facilitate retrieval of data by individual users and automated methods.

  17. The UMIST database for astrochemistry 2006

    NASA Astrophysics Data System (ADS)

    Woodall, J.; Agúndez, M.; Markwick-Kemper, A. J.; Millar, T. J.

    2007-05-01

    Aims: We present a new version of the UMIST Database for Astrochemistry, the fourth such version to be released to the public. The current version contains some 4573 binary gas-phase reactions, an increase of 10% from the previous (1999) version, among 420 species, of which 23 are new to the database. Methods: Major updates have been made to ion-neutral reactions, neutral-neutral reactions, particularly at low temperature, and dissociative recombination reactions. We have included for the first time the interstellar chemistry of fluorine. In addition to the usual database, we have also released a reaction set in which the effects of dipole-enhanced ion-neutral rate coefficients are included. Results: These two reaction sets have been used in a dark cloud model and the results of these models are presented and discussed briefly. The database and associated software are available on the World Wide Web at www.udfa.net. Tables 1, 2, 4 and 9 are only available in electronic form at http://www.aanda.org
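
    Gas-phase two-body rate coefficients in the UMIST database are stored as triplets (alpha, beta, gamma) and evaluated with the standard parameterization k(T) = alpha * (T/300)^beta * exp(-gamma/T). The sketch below evaluates that formula with placeholder coefficients rather than values from the release.

```python
import math

# Two-body rate coefficients in the UMIST database are parameterized as
#   k(T) = alpha * (T/300)^beta * exp(-gamma/T)   [cm^3 s^-1]
# The (alpha, beta, gamma) values used here are illustrative placeholders.
def rate_coefficient(alpha: float, beta: float, gamma: float, T: float) -> float:
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# At T = 300 K with gamma = 0 the rate reduces to alpha itself.
k = rate_coefficient(1.0e-9, -0.5, 0.0, 300.0)
print(k)  # 1e-09
```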

  18. Gramene database in 2010: updates and extensions.

    PubMed

    Youens-Clark, Ken; Buckler, Ed; Casstevens, Terry; Chen, Charles; Declerck, Genevieve; Derwent, Paul; Dharmawardhana, Palitha; Jaiswal, Pankaj; Kersey, Paul; Karthikeyan, A S; Lu, Jerry; McCouch, Susan R; Ren, Liya; Spooner, William; Stein, Joshua C; Thomason, Jim; Wei, Sharon; Ware, Doreen

    2011-01-01

    Now in its 10th year, the Gramene database (http://www.gramene.org) has grown from its primary focus on rice, the first fully-sequenced grass genome, to become a resource for major model and crop plants including Arabidopsis, Brachypodium, maize, sorghum, poplar and grape in addition to several species of rice. Gramene began with the addition of an Ensembl genome browser and has expanded in the last decade to become a robust resource for plant genomics hosting a wide array of data sets including quantitative trait loci (QTL), metabolic pathways, genetic diversity, genes, proteins, germplasm, literature, ontologies and a fully-structured markers and sequences database integrated with genome browsers and maps from various published studies (genetic, physical, bin, etc.). In addition, Gramene now hosts a variety of web services including a Distributed Annotation Server (DAS), BLAST and a public MySQL database. Twice a year, Gramene releases a major build of the database and makes interim releases to correct errors or to make important updates to software and/or data.

  19. National Solar Radiation Database 1991-2005 Update: User's Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilcox, S.

    2007-04-01

    This manual describes how to obtain and interpret the data products from the updated 1991-2005 National Solar Radiation Database (NSRDB). This is an update of the original 1961-1990 NSRDB released in 1992.

  20. From LDEF to a national Space Environment and Effects (SEE) program: A natural progression

    NASA Technical Reports Server (NTRS)

    Bowles, David E.; Calloway, Robert L.; Funk, Joan G.; Kinard, William H.; Levine, Arlene S.

    1995-01-01

    As the LDEF program draws to a close, it leaves in place the fundamental building blocks for a Space Environment and Effects (SEE) program. Results from LDEF data analyses and investigations now form a substantial core of knowledge on the long-term effects of the space environment on materials, systems, and structures. In addition, these investigations form the basic structure of a critically needed SEE archive and database system. An agency-wide effort is required to capture all elements of a SEE program to provide a more comprehensive and focused approach to understanding the space environment, determining the best techniques for both flight and ground-based experimentation, updating the models which predict both the environments and their effects on subsystems and spacecraft, and, finally, ensuring that this multitudinous information is properly maintained and inserted into spacecraft design programs. Many parts and pieces of a SEE program already exist at various locations to fulfill specific needs. The primary purpose of this program, under the direction of the Office of Advanced Concepts and Technology (OACT) in NASA Headquarters, is to take advantage of these parts; apply synergies where possible; identify and, when possible, fill in gaps; and coordinate and advocate a comprehensive SEE program. The SEE program must coordinate and support the efforts of well-established technical communities wherein the bulk of the work will continue to be done. The SEE program will consist of a NASA-led SEE Steering Committee, consisting of government and industry users, with the responsibility for coordination between technology developers and NASA customers; and Technical Working Groups with primary responsibility for program technical content in response to user needs.
The Technical Working Groups are as follows: Materials and Processes; Plasma and Fields; Ionizing Radiation; Meteoroids and Orbital Debris; Neutral External Contamination; Thermosphere, Thermal, and Solar Conditions; Electromagnetic Effects; Integrated Assessments and Databases. Specific technology development tasks will be solicited through a NASA Research Announcement to be released in May of 1994. The areas in which tasks are solicited include: (1) engineering environment definitions, (2) environments and effects design guidelines, (3) environments and effects assessment models and databases, and (4) flight/ground simulation/technology assessment data.

  1. From LDEF to a national Space Environment and Effects (SEE) program: A natural progression

    NASA Astrophysics Data System (ADS)

    Bowles, David E.; Calloway, Robert L.; Funk, Joan G.; Kinard, William H.; Levine, Arlene S.

    1995-02-01

    As the LDEF program draws to a close, it leaves in place the fundamental building blocks for a Space Environment and Effects (SEE) program. Results from LDEF data analyses and investigations now form a substantial core of knowledge on the long-term effects of the space environment on materials, systems, and structures. In addition, these investigations form the basic structure of a critically needed SEE archive and database system. An agency-wide effort is required to capture all elements of a SEE program to provide a more comprehensive and focused approach to understanding the space environment, determining the best techniques for both flight and ground-based experimentation, updating the models which predict both the environments and their effects on subsystems and spacecraft, and, finally, ensuring that this multitudinous information is properly maintained and inserted into spacecraft design programs. Many parts and pieces of a SEE program already exist at various locations to fulfill specific needs. The primary purpose of this program, under the direction of the Office of Advanced Concepts and Technology (OACT) in NASA Headquarters, is to take advantage of these parts; apply synergies where possible; identify and, when possible, fill in gaps; and coordinate and advocate a comprehensive SEE program. The SEE program must coordinate and support the efforts of well-established technical communities wherein the bulk of the work will continue to be done. The SEE program will consist of a NASA-led SEE Steering Committee, consisting of government and industry users, with the responsibility for coordination between technology developers and NASA customers; and Technical Working Groups with primary responsibility for program technical content in response to user needs.
The Technical Working Groups are as follows: Materials and Processes; Plasma and Fields; Ionizing Radiation; Meteoroids and Orbital Debris; Neutral External Contamination; Thermosphere, Thermal, and Solar Conditions; Electromagnetic Effects; Integrated Assessments and Databases. Specific technology development tasks will be solicited through a NASA Research Announcement to be released in May of 1994. The areas in which tasks are solicited include: (1) engineering environment definitions, (2) environments and effects design guidelines, (3) environments and effects assessment models and databases, and (4) flight/ground simulation/technology assessment data.

  2. Database for the geologic map of upper Eocene to Holocene volcanic and related rocks in the Cascade Range, Washington

    USGS Publications Warehouse

    Barron, Andrew D.; Ramsey, David W.; Smith, James G.

    2014-01-01

    This digital database contains information used to produce the geologic map published as Sheet 1 in U.S. Geological Survey Miscellaneous Investigations Series Map I-2005. (Sheet 2 of Map I-2005 shows sources of geologic data used in the compilation and is available separately). Sheet 1 of Map I-2005 shows the distribution and relations of volcanic and related rock units in the Cascade Range of Washington at a scale of 1:500,000. This digital release is produced from stable materials originally compiled at 1:250,000 scale that were used to publish Sheet 1. The database therefore contains more detailed geologic information than is portrayed on Sheet 1. This is most noticeable in the database as expanded polygons of surficial units and the presence of additional strands of concealed faults. No stable compilation materials exist for Sheet 1 at 1:500,000 scale. The main component of this digital release is a spatial database prepared using geographic information systems (GIS) applications. This release also contains links to files to view or print the map sheet, main report text, and accompanying mapping reference sheet from Map I-2005. For more information on volcanoes in the Cascade Range in Washington, Oregon, or California, please refer to the U.S. Geological Survey Volcano Hazards Program website.

  3. SkyMapper Southern Survey: First Data Release (DR1)

    NASA Astrophysics Data System (ADS)

    Wolf, Christian; Onken, Christopher A.; Luvaul, Lance C.; Schmidt, Brian P.; Bessell, Michael S.; Chang, Seo-Won; Da Costa, Gary S.; Mackey, Dougal; Martin-Jones, Tony; Murphy, Simon J.; Preston, Tim; Scalzo, Richard A.; Shao, Li; Smillie, Jon; Tisserand, Patrick; White, Marc C.; Yuan, Fang

    2018-02-01

    We present the first data release of the SkyMapper Southern Survey, a hemispheric survey carried out with the SkyMapper Telescope at Siding Spring Observatory in Australia. Here, we present the survey strategy, data processing, catalogue construction, and database schema. The first data release dataset includes over 66 000 images from the Shallow Survey component, covering an area of 17 200 deg² in all six SkyMapper passbands uvgriz, while the full area covered by any passband exceeds 20 000 deg². The catalogues contain over 285 million unique astrophysical objects, complete to roughly 18 mag in all bands. We compare our griz point-source photometry with the Pan-STARRS1 first data release and note an RMS scatter of 2%. The internal reproducibility of SkyMapper photometry is on the order of 1%. Astrometric precision is better than 0.2 arcsec based on comparison with the Gaia first data release. We describe the end-user database, through which data are presented to the world community, and provide some illustrative science queries.
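    A quoted RMS scatter against an external catalogue, as above, comes from comparing matched point-source magnitudes. A toy sketch of that comparison, assuming plain lists of matched magnitudes (the median-offset subtraction stands in for a zero-point correction; this is not the survey's actual pipeline):

```python
import math

def rms_scatter(mags_a, mags_b):
    """RMS of magnitude differences between two matched catalogues, after
    subtracting the median difference (a crude zero-point correction)."""
    diffs = sorted(a - b for a, b in zip(mags_a, mags_b))
    n = len(diffs)
    median = (diffs[n // 2] + diffs[(n - 1) // 2]) / 2.0
    return math.sqrt(sum((d - median) ** 2 for d in diffs) / n)
```

    Perfectly matched photometry, or photometry differing only by a constant offset, yields zero scatter; real cross-matches leave a residual that summarises the combined noise of the two surveys.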

  4. Preliminary Geologic Map of the Topanga 7.5' Quadrangle, Southern California: A Digital Database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, R.H.

    1995-01-01

    INTRODUCTION This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping consult Yerkes and Campbell (1994). More specific information about the units may be available in the original sources. The content and character of the database and methods of obtaining it are described herein. The geologic map database itself, consisting of three ARC coverages and one base layer, can be obtained over the Internet or by magnetic tape copy as described below. The processes of extracting the geologic map database from the tar file, and importing the ARC export coverages (procedure described herein), will result in the creation of an ARC workspace (directory) called 'topnga.' The database was compiled using ARC/INFO version 7.0.3, a commercial Geographic Information System (Environmental Systems Research Institute, Redlands, California), with version 3.0 of the menu interface ALACARTE (Fitzgibbon and Wentworth, 1991, Fitzgibbon, 1991, Wentworth and Fitzgibbon, 1991). It is stored in uncompressed ARC export format (ARC/INFO version 7.x) in a compressed UNIX tar (tape archive) file. The tar file was compressed with gzip, and may be uncompressed with gzip, which is available free of charge via the Internet from the gzip Home Page (http://w3.teaser.fr/~jlgailly/gzip). A tar utility is required to extract the database from the tar file. 
    This utility is included in most UNIX systems, and can be obtained free of charge via the Internet from Internet Literacy's Common Internet File Formats Webpage (http://www.matisse.net/files/formats.html). ARC/INFO export files (files with the .e00 extension) can be converted into ARC/INFO coverages in ARC/INFO (see below) and can be read by some other Geographic Information Systems, such as MapInfo via ArcLink and ESRI's ArcView (version 1.0 for Windows 3.1 to 3.11 is available for free from ESRI's web site: http://www.esri.com). This release differs from the original digital database in three respects: 1. Different base layer - The original digital database included separates clipped out of the Los Angeles 1:100,000 sheet. This release includes a vectorized scan of a scale-stable negative of the Topanga 7.5 minute quadrangle. 2. Map projection - The files in the original release were in polyconic projection. The projection used in this release is state plane, which allows for the tiling of adjacent quadrangles. 3. File compression - The files in the original release were compressed with UNIX compression. The files in this release are compressed with gzip.
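    The gunzip-then-tar extraction described above can be sketched in a few lines; Python's tarfile module handles the gzip layer transparently, so a separate decompression step is unnecessary. The paths here are illustrative:

```python
import tarfile
from pathlib import Path

def extract_database(archive_path, dest_dir):
    """Extract a gzip-compressed UNIX tar archive (such as the 'topnga'
    workspace described above) into dest_dir and return the member names."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    # mode "r:gz" decompresses the gzip layer transparently, so no
    # separate gunzip step is needed.
    with tarfile.open(archive_path, mode="r:gz") as tar:
        members = tar.getnames()
        tar.extractall(dest)
    return members
```

    Extracting into an empty destination directory reproduces the workspace layout the archive was built with, e.g. a top-level 'topnga' directory containing the ARC export files.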

  5. Development of an aquatic pathogen database (AquaPathogen X) and its utilization in tracking emerging fish virus pathogens in North America

    USGS Publications Warehouse

    Emmenegger, E.J.; Kentop, E.; Thompson, T.M.; Pittam, S.; Ryan, A.; Keon, D.; Carlino, J.A.; Ranson, J.; Life, R.B.; Troyer, R.M.; Garver, K.A.; Kurath, G.

    2011-01-01

    The AquaPathogen X database is a template for recording information on individual isolates of aquatic pathogens and is freely available for download (http://wfrc.usgs.gov). This database can accommodate the nucleotide sequence data generated in molecular epidemiological studies along with the myriad of abiotic and biotic traits associated with isolates of various pathogens (e.g. viruses, parasites and bacteria) from multiple aquatic animal host species (e.g. fish, shellfish and shrimp). The cataloguing of isolates from different aquatic pathogens simultaneously is a unique feature of the AquaPathogen X database, which can be used in surveillance of emerging aquatic animal diseases and elucidation of key risk factors associated with pathogen incursions into new water systems. An application of the template database that stores the epidemiological profiles of fish virus isolates, called Fish ViroTrak, was also developed. Exported records for two aquatic rhabdovirus species emerging in North America were used in the implementation of two separate web-accessible databases: the Molecular Epidemiology of Aquatic Pathogens infectious haematopoietic necrosis virus (MEAP-IHNV) database (http://gis.nacse.org/ihnv/) released in 2006 and the MEAP-viral haemorrhagic septicaemia virus (http://gis.nacse.org/vhsv/) database released in 2010.

  6. HMDB 3.0--The Human Metabolome Database in 2013.

    PubMed

    Wishart, David S; Jewison, Timothy; Guo, An Chi; Wilson, Michael; Knox, Craig; Liu, Yifeng; Djoumbou, Yannick; Mandal, Rupasri; Aziat, Farid; Dong, Edison; Bouatra, Souhaila; Sinelnikov, Igor; Arndt, David; Xia, Jianguo; Liu, Philip; Yallou, Faizath; Bjorndahl, Trent; Perez-Pineiro, Rolando; Eisner, Roman; Allen, Felicity; Neveu, Vanessa; Greiner, Russ; Scalbert, Augustin

    2013-01-01

    The Human Metabolome Database (HMDB) (www.hmdb.ca) is a resource dedicated to providing scientists with the most current and comprehensive coverage of the human metabolome. Since its first release in 2007, the HMDB has been used to facilitate research for nearly 1000 published studies in metabolomics, clinical biochemistry and systems biology. The most recent release of HMDB (version 3.0) has been significantly expanded and enhanced over the 2009 release (version 2.0). In particular, the number of annotated metabolite entries has grown from 6500 to more than 40,000 (a 600% increase). This enormous expansion is a result of the inclusion of both 'detected' metabolites (those with measured concentrations or experimental confirmation of their existence) and 'expected' metabolites (those for which biochemical pathways are known or human intake/exposure is frequent but the compound has yet to be detected in the body). The latest release also has greatly increased the number of metabolites with biofluid or tissue concentration data, the number of compounds with reference spectra and the number of data fields per entry. In addition to this expansion in data quantity, new database visualization tools and new data content have been added or enhanced. These include better spectral viewing tools, more powerful chemical substructure searches, an improved chemical taxonomy and better, more interactive pathway maps. This article describes these enhancements to the HMDB, which was previously featured in the 2009 NAR Database Issue.

  7. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    NASA Astrophysics Data System (ADS)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.
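    A SPARQL port over relational metadata ultimately rests on mapping table rows to RDF triples. A deliberately simplified sketch of that mapping, with made-up URIs and column names rather than the actual BCO-DMO schema:

```python
def row_to_triples(subject_uri, row, predicate_base="http://example.org/meta/"):
    """Turn one relational metadata row (a dict of column -> value) into
    simple N-Triples-style statements. The URIs and column names here are
    illustrative placeholders, not the real BCO-DMO vocabulary."""
    triples = []
    for column, value in row.items():
        if value is None:
            continue  # NULL columns produce no statement
        triples.append(f'<{subject_uri}> <{predicate_base}{column}> "{value}" .')
    return triples

# Hypothetical dataset record with one NULL column:
triples = row_to_triples(
    "http://example.org/dataset/42",
    {"title": "CTD casts", "chief_scientist": None, "year": 2004},
)
```

    Each non-NULL column becomes one statement, so a faceted search engine can index the same metadata the SQL port serves.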

  8. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  9. How controlled release technology can aid gene delivery.

    PubMed

    Jo, Jun-Ichiro; Tabata, Yasuhiko

    2015-01-01

    Many types of gene delivery systems have been developed to enhance the level of gene expression. Controlled release technology is a feasible gene delivery approach that extends the duration of expression by maintaining genes at the injection site and releasing them in a controlled manner. This technology can reduce the adverse effects of bolus-dose administration and avoid repeated administration. Biodegradable biomaterials are useful as materials for controlled release-based gene delivery, and various biodegradable biomaterials have been developed. Controlled release-based gene delivery plays a critical role in both conventional gene therapy and genetic engineering. In gene therapy, the therapeutic gene is released from biodegradable biomaterial matrices around the tissue to be treated. On the other hand, intracellular controlled release of genes from sub-micrometre-sized matrices is required for genetic engineering. Genetic engineering is feasible for cell transplantation as well as for research in stem cell biology and medicine. DNA hydrogels containing a therapeutic gene sequence and exosomes containing individual-specific nucleic acids may become candidates for controlled release carriers. Technologies to deliver genes to cell aggregates will play an important role in the promotion of regenerative research and therapy.
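    As a point of reference for what "released in a controlled manner" means quantitatively, the textbook first-order release profile M(t)/M∞ = 1 − e^(−kt) can be sketched as below. This is a standard model from release kinetics generally, not a model taken from the review above:

```python
import math

def fraction_released(t, k):
    """Cumulative fraction of payload released at time t under a simple
    first-order model: M(t)/M_inf = 1 - exp(-k*t). The rate constant k
    reflects how quickly the biodegradable matrix lets the payload out."""
    return 1.0 - math.exp(-k * t)

# A slowly degrading matrix (small k) stretches release over a long window,
# in contrast to the instantaneous availability of a bolus dose.
profile = [fraction_released(day, 0.1) for day in range(0, 31, 10)]
```

    A small rate constant keeps the released fraction well below 1 for a long time, which is the point of depot-style delivery versus a bolus.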

  10. Release of the gPhoton Database of GALEX Photon Events

    NASA Astrophysics Data System (ADS)

    Fleming, Scott W.; Million, Chase; Shiao, Bernie; Tucker, Michael; Loyd, R. O. Parke

    2016-01-01

    The GALEX spacecraft surveyed much of the sky in two ultraviolet bands between 2003 and 2013 with non-integrating microchannel plate detectors. The Mikulski Archive for Space Telescopes (MAST) has made more than one trillion photon events observed by the spacecraft available, stored as a 130 TB database, along with an open-source, python-based software package to query this database and create calibrated lightcurves or images from these data at user-defined spatial and temporal scales. In particular, MAST users can now conduct photometry at the intra-visit level (timescales of seconds and minutes). The software, along with the fully populated database, was officially released in Aug. 2015, and improvements to both software functionality and data calibration are ongoing. We summarize the current calibration status of the gPhoton software, along with examples of early science enabled by gPhoton that include stellar flares, AGN, white dwarfs, exoplanet hosts, novae, and nearby galaxies.
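    Building a lightcurve at user-defined temporal scales, as described above, amounts to counting photon events in time bins of chosen width. A toy sketch on plain timestamp lists, independent of the actual gPhoton API and without its calibration steps:

```python
def bin_lightcurve(event_times, bin_width):
    """Count photon events in contiguous time bins of user-chosen width
    (seconds). Returns raw counts per bin; a real pipeline such as gPhoton
    would also apply detector and exposure-time corrections."""
    if not event_times:
        return []
    t0 = min(event_times)
    n_bins = int((max(event_times) - t0) // bin_width) + 1
    counts = [0] * n_bins
    for t in event_times:
        counts[int((t - t0) // bin_width)] += 1
    return counts
```

    Because the detectors are non-integrating, the same event list can be re-binned at seconds or minutes after the fact, which is what enables intra-visit photometry.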

  11. Radar signature generation for feature-aided tracking research

    NASA Astrophysics Data System (ADS)

    Piatt, Teri L.; Sherwood, John U.; Musick, Stanton H.

    2005-05-01

    Accurately associating sensor kinematic reports to known tracks, new tracks, or clutter is one of the greatest obstacles to effective track estimation. Feature-aiding is one technology that is emerging to address this problem, and it is expected that adding target features will aid report association by enhancing track accuracy and lengthening track life. The Sensors Directorate of the Air Force Research Laboratory is sponsoring a challenge problem called Feature-Aided Tracking of Stop-move Objects (FATSO). The long-range goal of this research is to provide a full suite of public data and software to encourage researchers from government, industry, and academia to participate in radar-based feature-aided tracking research. The FATSO program is currently releasing a vehicle database coupled to a radar signature generator. The completed FATSO system will incorporate this database/generator into a Monte Carlo simulation environment for evaluating multiplatform/multitarget tracking scenarios. The currently released data and software contain the following: eight target models, including a tank, ammo hauler, and self-propelled artillery vehicles; and a radar signature generator capable of producing SAR and HRR signatures of all eight modeled targets in almost any configuration or articulation. In addition, the signature generator creates Z-buffer data, label map data, and radar cross-section predictions and allows the user to add noise to an image while varying sensor-target geometry (roll, pitch, yaw, squint). Future capabilities of this signature generator, such as scene models and EO signatures, as well as details of the complete FATSO testbed, are outlined.

  12. Using bibliographic databases in technology transfer

    NASA Technical Reports Server (NTRS)

    Huffman, G. David

    1987-01-01

    When technology developed for a specific purpose is used in another application, the process is called technology transfer--the application of an existing technology to a new use or user for purposes other than those for which the technology was originally intended. Using Bibliographical Databases in Technology Transfer deals with demand-pull transfer, technology transfer that arises from need recognition, and is a guide for conducting demand-pull technology transfer studies. It can be used by a researcher as a self-teaching manual or by an instructor as a classroom text. A major problem of technology transfer is finding applicable technology to transfer. Described in detail is the solution to this problem, the use of computerized, bibliographic databases, which currently contain virtually all documented technology of the past 15 years. A general framework for locating technology is described. NASA technology organizations and private technology transfer firms are listed for consultation.

  13. 12 CFR 1204.7 - Are there any exemptions from the Privacy Act?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... & Evaluative Files Database,” “FHFA-OIG Investigative & Evaluative MIS Database,” “FHFA-OIG Hotline Database... investigation or evaluation. (ii) From 5 U.S.C. 552a(d)(1), because release of investigative or evaluative... or evaluative techniques and procedures. (iii) From 5 U.S.C. 552a(d)(2), because amendment or...

  14. 12 CFR 1204.7 - Are there any exemptions from the Privacy Act?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... & Evaluative Files Database,” “FHFA-OIG Investigative & Evaluative MIS Database,” “FHFA-OIG Hotline Database... investigation or evaluation. (ii) From 5 U.S.C. 552a(d)(1), because release of investigative or evaluative... or evaluative techniques and procedures. (iii) From 5 U.S.C. 552a(d)(2), because amendment or...

  15. 12 CFR 1204.7 - Are there any exemptions from the Privacy Act?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... & Evaluative Files Database,” “FHFA-OIG Investigative & Evaluative MIS Database,” “FHFA-OIG Hotline Database... investigation or evaluation. (ii) From 5 U.S.C. 552a(d)(1), because release of investigative or evaluative... or evaluative techniques and procedures. (iii) From 5 U.S.C. 552a(d)(2), because amendment or...

  16. Database Creation and Statistical Analysis: Finding Connections Between Two or More Secondary Storage Device

    DTIC Science & Technology

    2017-09-01

    Naval Postgraduate School, Monterey, California. Thesis: Database Creation and Statistical Analysis: Finding Connections Between Two or More Secondary Storage Device. Approved for public release; distribution is unlimited. Only title-page and table-of-contents fragments are indexed for this record: 1.1 Problem and Motivation; 1.2 DOD Applicability; 1.3 Research

  17. MaizeGDB update: New tools, data, and interface for the maize model organism database

    USDA-ARS?s Scientific Manuscript database

    MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, ...

  18. Reasons for 2011 Release of the Evaluated Nuclear Data Library (ENDL2011.0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D.; Escher, J.; Hoffman, R.

    LLNL's Computational Nuclear Physics Group and Nuclear Theory and Modeling Group have collaborated to create the 2011 release of the Evaluated Nuclear Data Library (ENDL2011). ENDL2011 is designed to support LLNL's current and future nuclear data needs. This database is currently the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles, surpassing ENDL2009.0 [1]. The ENDL2011 release [2] contains 918 transport-ready evaluations in the neutron sub-library alone. ENDL2011 was assembled with strong support from the ASC program, leveraged with support from NNSA science campaigns and the DOE/Office of Science US Nuclear Data Program.

  19. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  20. Construction of Pará rubber tree genome and multi-transcriptome database accelerates rubber researches.

    PubMed

    Makita, Yuko; Kawashima, Mika; Lau, Nyok Sean; Othman, Ahmad Sofiman; Matsui, Minami

    2018-01-19

    Natural rubber is an economically important material. Currently the Pará rubber tree, Hevea brasiliensis, is the main commercial source. Little is known about rubber biosynthesis at the molecular level. Next-generation sequencing (NGS) technologies have yielded draft genomes of three rubber cultivars and a variety of RNA sequencing (RNA-seq) data. However, no current genome or transcriptome databases (DB) are organized by gene. A gene-oriented database would provide valuable support for rubber research. Based on our original draft genome sequence of H. brasiliensis RRIM600, we constructed a rubber tree genome and transcriptome DB. Our DB provides genome information including gene functional annotations and multi-transcriptome data of RNA-seq, full-length cDNAs including PacBio Isoform sequencing (Iso-Seq), ESTs and genome-wide transcription start sites (TSSs) derived from CAGE technology. Using our original and publicly available RNA-seq data, we calculated co-expressed genes for identifying functionally related gene sets and/or genes regulated by the same transcription factor (TF). Users can access multi-transcriptome data through both a gene-oriented web page and a genome browser. For the gene searching system, we provide keyword search, sequence homology search and gene expression search; users can also select their expression threshold easily. The rubber genome and transcriptome DB provides the rubber tree genome sequence and multi-transcriptomics data. This DB is useful for comprehensive understanding of the rubber transcriptome. It will assist both industrial and academic researchers working on rubber and on economically important close relatives such as R. communis, M. esculenta and J. curcas. The Rubber Transcriptome DB release 2017.03 is accessible at http://matsui-lab.riken.jp/rubber/.
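    Co-expression with a user-selected threshold, as described above, is typically computed from correlation of per-sample expression profiles. A self-contained sketch using Pearson correlation; the function names, gene names and values are illustrative, not the database's actual pipeline:

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression profiles
    (equal-length lists of per-sample expression values)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpressed(profile, others, threshold=0.9):
    """Names of genes whose profiles correlate with `profile` at or above
    the user-chosen threshold, as in a co-expression search."""
    return [name for name, p in others.items() if pearson(profile, p) >= threshold]
```

    A gene scaled up or down uniformly across samples still scores 1.0, so the search groups genes by expression shape rather than absolute level.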

  1. Architecture Knowledge for Evaluating Scalable Databases

    DTIC Science & Technology

    2015-01-16

    problems, arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. Architects must...longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable “NoSQL” databases [11] have emerged...This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly

  2. Online resources for news about toxicology and other environmental topics.

    PubMed

    South, J C

    2001-01-12

    Technology has revolutionized researchers' ability to find and retrieve news stories and press releases. Thanks to electronic library systems and telecommunications--notably the Internet--computer users in seconds can sift through millions of articles to locate mainstream articles about toxicology and other environmental topics. But that does not mean it is easy to find what one is looking for. There is a confusing array of databases and services that archive news articles and press releases: (1) some are free; others cost thousands of dollars a year to access, (2) some include hundreds of newspaper and magazine titles; others cover only one publication, (3) some contain archives going back decades; others have just the latest news, (4) some offer only journalistically balanced reports from mainstream news sources; others mix news with opinions and advocacy and include reports from obscure or biased sources. This article explores ways to find news online - particularly news about toxicology, hazardous chemicals, environmental health and the environment in general. The article covers web sites devoted to environmental news; sites and search engines for general-interest news; newspaper archives; commercial information services; press release distribution services and archives; and other resources and strategies for finding articles in the popular press about toxicology and the environment.

  3. The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog)

    PubMed Central

    MacArthur, Jacqueline; Bowler, Emily; Cerezo, Maria; Gil, Laurent; Hall, Peggy; Hastings, Emma; Junkins, Heather; McMahon, Aoife; Milano, Annalisa; Morales, Joannella; Pendlington, Zoe May; Welter, Danielle; Burdett, Tony; Hindorff, Lucia; Flicek, Paul; Cunningham, Fiona; Parkinson, Helen

    2017-01-01

    The NHGRI-EBI GWAS Catalog has provided data from published genome-wide association studies since 2008. In 2015, the database was redesigned and relocated to EMBL-EBI. The new infrastructure includes a new graphical user interface (www.ebi.ac.uk/gwas/), ontology supported search functionality and an improved curation interface. These developments have improved the data release frequency by increasing automation of curation and providing scaling improvements. The range of available Catalog data has also been extended with structured ancestry and recruitment information added for all studies. The infrastructure improvements also support scaling for larger arrays, exome and sequencing studies, allowing the Catalog to adapt to the needs of evolving study design, genotyping technologies and user needs in the future. PMID:27899670

  4. A New Approach To Secure Federated Information Bases Using Agent Technology.

    ERIC Educational Resources Information Center

    Weippi, Edgar; Klug, Ludwig; Essmayr, Wolfgang

    2003-01-01

    Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…

  5. EST databases and web tools for EST projects.

    PubMed

    Shen, Yao-Qing; O'Brien, Emmet; Koski, Liisa; Lang, B Franz; Burger, Gertraud

    2009-01-01

    This chapter outlines key considerations for constructing and implementing an EST database. Instead of showing the technological details step by step, emphasis is put on the design of an EST database suited to the specific needs of EST projects and how to choose the most suitable tools. Using TBestDB as an example, we illustrate the essential factors to be considered for database construction and the steps for data population and annotation. This process employs technologies such as PostgreSQL, Perl, and PHP to build the database and interface, and tools such as AutoFACT for data processing and annotation. We discuss these in comparison to other available technologies and tools, and explain the reasons for our choices.
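    As a rough illustration of the relational backbone such an EST database rests on, the sketch below uses Python's built-in sqlite3 in place of PostgreSQL (which TBestDB actually uses); the table and column names are hypothetical, not TBestDB's:

    ```python
    import sqlite3

    # In-memory stand-in for a project EST database. Schema is illustrative
    # only: ESTs grouped into clusters, with a functional annotation per
    # cluster (e.g. as assigned by a tool such as AutoFACT).
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE cluster (
        cluster_id INTEGER PRIMARY KEY,
        consensus  TEXT,
        annotation TEXT
    );
    CREATE TABLE est (
        est_id     INTEGER PRIMARY KEY,
        cluster_id INTEGER REFERENCES cluster(cluster_id),
        sequence   TEXT
    );
    """)
    conn.execute("INSERT INTO cluster VALUES (1, 'ACGT', 'putative kinase')")
    conn.execute("INSERT INTO est VALUES (101, 1, 'ACGTACGT')")
    conn.execute("INSERT INTO est VALUES (102, 1, 'ACGTTTGT')")

    # Typical project query: how many ESTs support each annotated cluster?
    rows = conn.execute("""
    SELECT c.annotation, COUNT(e.est_id)
    FROM cluster c JOIN est e ON e.cluster_id = c.cluster_id
    GROUP BY c.cluster_id
    """).fetchall()
    ```

    The web interface layer (PHP in TBestDB's case) would issue queries of this shape against the project schema.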

  6. Database System Design and Implementation for Marine Air-Traffic-Controller Training

    DTIC Science & Technology

    2017-06-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis; approved for public release, distribution is unlimited. DATABASE SYSTEM DESIGN AND IMPLEMENTATION FOR MARINE AIR-TRAFFIC-CONTROLLER TRAINING... This project focused on the design, development, and implementation of a centralized

  7. Explanatory Supplement to the AllWISE Data Release Products

    NASA Astrophysics Data System (ADS)

    Cutri, R. M.; Wright, E. L.; Conrow, T.; Fowler, J. W.; Eisenhardt, P. R. M.; Grillmair, C.; Kirkpatrick, J. D.; Masci, F.; McCallon, H. L.; Wheelock, S. L.; Fajardo-Acosta, S.; Yan, L.; Benford, D.; Harbut, M.; Jarrett, T.; Lake, S.; Leisawitz, D.; Ressler, M. E.; Stanford, S. A.; Tsai, C. W.; Liu, F.; Helou, G.; Mainzer, A.; Gettings, D.; Gonzalez, A.; Hoffman, D.; Marsh, K. A.; Padgett, D.; Skrutskie, M. F.; Beck, R. P.; Papin, M.; Wittman, M.

    2013-11-01

    The AllWISE program builds upon the successful Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010) mission by combining data from all WISE and NEOWISE (Mainzer et al. 2011) survey phases to form the most comprehensive view of the mid-infrared sky currently available. By combining the data from two complete sky coverage epochs in an advanced data processing system, AllWISE has generated new products that have enhanced photometric sensitivity and accuracy, and improved astrometric precision compared with the earlier WISE All-Sky Data Release. Exploiting the 6 month baseline between the WISE sky coverage epochs enables AllWISE to measure source motions for the first time, and to compute improved flux variability statistics. AllWISE data release products include: a Source Catalog that contains 4-band fluxes, positions, apparent motion measurements, and flux variability statistics for over 747 million objects detected at SNR>5 in the combined exposures; a Multiepoch Photometry Database containing over 42 billion time-tagged, single-exposure fluxes for each object detected on the combined exposures; and an Image Atlas of 18,240 4-band calibrated FITS images, depth-of-coverage and noise maps that cover the sky produced by coadding nearly 7.9 million single-exposure images from the cryogenic and post-cryogenic survey phases. The Explanatory Supplement to the AllWISE Data Release Products is a general guide for users of the AllWISE data. The Supplement contains detailed descriptions of the format and characteristics of the AllWISE data products, as well as a summary of cautionary notes that describe known limitations. The Supplement is an on-line document that is updated frequently to provide the most current information for users of the AllWISE data products. 
The Explanatory Supplement is maintained at: http://wise2.ipac.caltech.edu/docs/release/allwise/expsup/index.html AllWISE makes use of data from WISE, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration.

  8. Relational Database Technology: An Overview.

    ERIC Educational Resources Information Center

    Melander, Nicole

    1987-01-01

    Describes the development of relational database technology as it applies to educational settings. Discusses some of the new tools and models being implemented in an effort to provide educators with technologically advanced ways of answering questions about education programs and data. (TW)

  9. Materials to clinical devices: technologies for remotely triggered drug delivery.

    PubMed

    Timko, Brian P; Kohane, Daniel S

    2012-11-01

    Technologies in which a remote trigger is used to release drug from an implanted or injected device could enable on-demand release profiles that enhance therapeutic effectiveness or reduce systemic toxicity. A number of new materials have been developed that exhibit sensitivity to light, ultrasound, or electrical or magnetic fields. Delivery systems that incorporate these materials might be triggered externally by the patient, parent or physician to provide flexible control of dose magnitude and timing. To review injectable or implantable systems that are candidates for translation to the clinic, or ones that have already undergone clinical trials. Also considered are applicability in pediatrics and prospects for the future of drug delivery systems. We performed literature searches of the PubMed and Science Citation Index databases for articles in English that reported triggerable drug delivery devices, and for articles reporting related materials and concepts. Approaches to remotely-triggered systems that have clinical potential were identified. Ideally, these systems have been engineered to exhibit controlled on-state release kinetics, low baseline leak rates, and reproducible dosing across multiple cycles. Advances in remotely-triggered drug delivery have been brought about by the convergence of numerous scientific and engineering disciplines, and this convergence is likely to play an important part in the current trend to develop systems that provide more than one therapeutic modality. Preclinical systems must be carefully assessed for biocompatibility, and engineered to ensure pharmacokinetics within the therapeutic window. Future drug delivery systems may incorporate additional modalities, such as closed-loop sensing or onboard power generation, enabling more sophisticated drug delivery regimens. Copyright © 2012 Elsevier HS Journals, Inc. All rights reserved.

  10. Explanatory Supplement to the WISE All-Sky Release Products

    NASA Technical Reports Server (NTRS)

    2012-01-01

    The Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010) surveyed the entire sky at 3.4, 4.6, 12 and 22 microns in 2010, achieving 5-sigma point source sensitivities per band better than 0.08, 0.11, 1 and 6 mJy in unconfused regions on the ecliptic. The WISE All-Sky Data Release, conducted on March 14, 2012, incorporates all data taken during the full cryogenic mission phase, 7 January 2010 to 6 August 2010, that were processed with improved calibrations and reduction algorithms. Release data products include: (1) an Atlas of 18,240 match-filtered, calibrated and coadded image sets; (2) a Source Catalog containing positions and four-band photometry for over 563 million objects; and (3) an Explanatory Supplement. Ancillary products include a Reject Table that contains 284 million detections that were not selected for the Source Catalog because they are low signal-to-noise ratio or spurious detections of image artifacts, an archive of over 1.5 million sets of calibrated WISE Single-exposure images, a database of 9.4 billion source extractions from those single images, and moving object tracklets identified by the NEOWISE program (Mainzer et al. 2011). The WISE All-Sky Data Release products supersede those from the WISE Preliminary Data Release (Cutri et al. 2011). The Explanatory Supplement to the WISE All-Sky Data Release Products is a general guide for users of the WISE data. The Supplement contains an overview of the WISE mission, facilities, and operations, a detailed description of WISE data processing algorithms, a guide to the content and formats of the image and tabular data products, and cautionary notes that describe known limitations of the All-Sky Release products. Instructions for accessing the WISE data products via the services of the NASA/IPAC Infrared Science Archive are provided. 
The Supplement also provides analyses of the achieved sky coverage, photometric and astrometric characteristics and completeness and reliability of the All-Sky Release data products. The WISE All-Sky Release Explanatory Supplement is an on-line document that is updated frequently to provide the most current information for users of the WISE data products. The Explanatory Supplement is maintained at: http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/index.html WISE is a joint project of the University of California, Los Angeles and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. NEOWISE is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the Planetary Science Division of the National Aeronautics and Space Administration.

  11. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
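    The document-centric flexibility this comparison credits to NoSQL and XML stores can be illustrated by keeping a hierarchical clinical record as a single JSON document; all field names and values below are invented for illustration:

    ```python
    import json

    # A hierarchical clinical record as one document, the shape favored by
    # document-oriented (NoSQL or native-XML) stores: free text, numbers,
    # and nested repeating groups live together in a single unit.
    record = {
        "patient_id": "P001",
        "encounters": [
            {"date": "2012-03-01",
             "notes": "free-text progress note",
             "labs": [{"test": "glucose", "value": 5.4, "unit": "mmol/L"}]},
            {"date": "2012-04-15",
             "notes": "follow-up",
             "labs": []},
        ],
    }

    doc = json.dumps(record)      # stored and retrieved as one unit
    restored = json.loads(doc)

    # Adding a new nested element requires no schema migration; this is
    # the flexibility/extensibility advantage the comparison reports,
    # versus a relational design that would need new tables or columns.
    restored["encounters"][0]["labs"].append(
        {"test": "HbA1c", "value": 6.1, "unit": "%"})
    ```

    A relational representation of the same record would flatten the nested lists into patient, encounter, and lab tables joined by keys.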

  12. Can Nanofluidic Chemical Release Enable Fast, High Resolution Neurotransmitter-Based Neurostimulation?

    PubMed

    Jones, Peter D; Stelzle, Martin

    2016-01-01

    Artificial chemical stimulation could provide improvements over electrical neurostimulation. Physiological neurotransmission between neurons relies on the nanoscale release and propagation of specific chemical signals to spatially-localized receptors. Current knowledge of nanoscale fluid dynamics and nanofluidic technology allows us to envision artificial mechanisms to achieve fast, high resolution neurotransmitter release. Substantial technological development is required to reach this goal. Nanofluidic technology, rather than microfluidic, will be necessary; this should come as no surprise given the nanofluidic nature of neurotransmission. This perspective reviews the state of the art of high resolution electrical neuroprostheses and their anticipated limitations. Chemical release rates from nanopores are compared to rates achieved at synapses and with iontophoresis. A review of microfluidic technology justifies the analysis that microfluidic control of chemical release would be insufficient. Novel nanofluidic mechanisms are discussed, and we propose that hydrophobic gating may allow control of chemical release suitable for mimicking neurotransmission. The limited understanding of hydrophobic gating in artificial nanopores and the challenges of fabrication and large-scale integration of nanofluidic components are emphasized. Development of suitable nanofluidic technology will require dedicated, long-term efforts over many years.

  13. The Dietary Supplement Ingredient Database (DSID) - 3 release.

    USDA-ARS?s Scientific Manuscript database

    The Dietary Supplement Ingredient Database (DSID) provides analytically-derived estimates of ingredient content in dietary supplement (DS) products sold in the United States. DSID was developed by the Nutrient Data Laboratory (NDL) within the Agricultural Research Service, U.S. Department of Agricu...

  14. 15 CFR 740.7 - Computers (APP).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... License Exception. (2) Access and release restrictions. (i)[Reserved] (ii) Technology and source code. Technology and source code eligible for License Exception APP may not be released to nationals of Cuba, Iran...

  15. 48 CFR 1509.170-7 - Release of ratings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Performance System will have direct access to all Reports, including those of EPA, in the National Institutes of Health's database. Information on EPA contractors' performance ratings may also be obtained by... ACQUISITION PLANNING CONTRACTOR QUALIFICATIONS Contractor Performance Evaluations 1509.170-7 Release of...

  16. Leveraging Relational Technology through Industry Partnerships.

    ERIC Educational Resources Information Center

    Brush, Leonard M.; Schaller, Anthony J.

    1988-01-01

    Carnegie Mellon University has leveraged its technological expertise with database management systems (DBMS) into joint technological and developmental partnerships with DBMS and application software vendors. Carnegie's relational database strategy, the strategy of partnerships and how they were formed, and how the partnerships are doing are…

  17. Addition of a breeding database in the Genome Database for Rosaceae

    PubMed Central

    Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie

    2013-01-01

    Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. Development of new infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program and subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage lists individuals with parents in common and results from Individual Variety pages link to all data available on each chosen individual including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner and retrieve and compare performance data from multiple selections, years and sites, and to output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management. 
Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox PMID:24247530

  18. Addition of a breeding database in the Genome Database for Rosaceae.

    PubMed

    Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie

    2013-01-01

    Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. Development of new infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program and subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage lists individuals with parents in common and results from Individual Variety pages link to all data available on each chosen individual including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner and retrieve and compare performance data from multiple selections, years and sites, and to output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management. 
Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox.

  19. Update on NASA Space Shuttle Earth Observations Photography on the laser videodisc for rapid image access

    NASA Technical Reports Server (NTRS)

    Lulla, Kamlesh

    1994-01-01

    There have been many significant improvements in the public access to the Space Shuttle Earth Observations Photography Database. New information is provided for the user community on the recently released videodisc of this database. Topics covered included the following: earlier attempts; our first laser videodisc in 1992; the new laser videodisc in 1994; and electronic database access.

  20. [Effect of 3D printing technology on pelvic fractures:a Meta-analysis].

    PubMed

    Zhang, Yu-Dong; Wu, Ren-Yuan; Xie, Ding-Ding; Zhang, Lei; He, Yi; Zhang, Hong

    2018-05-25

    To evaluate, through a meta-analysis of the published literature, the effect of 3D printing technology applied in the surgical treatment of pelvic fractures. The PubMed database, EMCC database, CBM database, CNKI database, VIP database and Wanfang database were searched from the date of database foundation to August 2017 to collect the controlled clinical trials in which 3D printing technology was applied in preoperative planning of pelvic fracture surgery. The retrieved literatures were screened according to predefined inclusion and exclusion criteria, and quality evaluation was performed. Then, the available data were extracted and analyzed with the RevMan5.3 software. Totally 9 controlled clinical trials including 638 cases were chosen. Among them, 279 cases were assigned to the 3D printing technology group and 359 cases to the conventional group. The Meta-analysis results showed that the operative time [SMD=-2.81, 95%CI(-3.76, -1.85)], intraoperative blood loss [SMD=-3.28, 95%CI(-4.72, -1.85)] and the rate of complication [OR=0.47, 95%CI(0.25, 0.87)] in the 3D printing technology group were all lower than those in the conventional group; the excellent and good rate of pelvic fracture reduction [OR=2.09, 95%CI(1.32, 3.30)] and postoperative pelvic functional restoration [OR=1.94, 95%CI(1.15, 3.28)] in the 3D printing technology group were all superior to those in the conventional group. 3D printing technology applied in the surgical treatment of pelvic fractures has the advantages of shorter operative time, less intraoperative blood loss and lower rate of complication, and can improve the quality of pelvic fracture reduction and the recovery of postoperative pelvic function. Copyright© 2018 by the China Journal of Orthopaedics and Traumatology Press.
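    Pooled odds ratios like those reported above are conventionally obtained by inverse-variance pooling on the log-odds scale, one of the standard fixed-effect methods implemented in software such as RevMan. A sketch with made-up 2×2 counts, not data from the included trials:

    ```python
    import math

    # Hypothetical complication counts per study:
    # (events in 3D group, n 3D, events in conventional group, n conventional)
    studies = [(3, 30, 8, 35), (2, 25, 6, 30), (4, 40, 10, 45)]

    weights, log_ors = [], []
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c                  # non-events in each arm
        log_or = math.log((a * d) / (b * c))   # log odds ratio for one study
        var = 1/a + 1/b + 1/c + 1/d            # approximate variance of log OR
        weights.append(1 / var)                # inverse-variance weight
        log_ors.append(log_or)

    # Fixed-effect pooled estimate and 95% confidence interval.
    pooled = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    or_pooled = math.exp(pooled)
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    ```

    An OR below 1 with a confidence interval excluding 1 would correspond to the lower complication rate reported for the 3D printing group.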

  1. Reinventing MaizeGDB

    USDA-ARS?s Scientific Manuscript database

    The transition from the Maize Database (MaizeDB) to the Maize Genetics and Genomics Database (MaizeGDB) turns 20 this year, and such a significant milestone must be celebrated! With the release of the B73 reference sequence and more sequenced genomes on the way, the maize community needs to address various opportunitie...

  2. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Read-only database. 1400.13 Section 1400.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY AND DEPARTMENT OF JUSTICE ACCIDENTAL RELEASE PREVENTION REQUIREMENTS; RISK MANAGEMENT PROGRAMS UNDER THE CLEAN AIR ACT SECTION 112(r)(7...

  3. ECLSS evolution: Advanced instrumentation interface requirements. Volume 3: Appendix C

    NASA Technical Reports Server (NTRS)

    1991-01-01

    An Advanced ECLSS (Environmental Control and Life Support System) Technology Interfaces Database was developed primarily to provide ECLSS analysts with a centralized and portable source of ECLSS technologies interface requirements data. The database contains 20 technologies which were previously identified in the MDSSC ECLSS Technologies database. The primary interfaces of interest in this database are fluid, electrical, data/control interfaces, and resupply requirements. Each record contains fields describing the function and operation of the technology. Fields include: an interface diagram, a description of applicable design points and operating ranges, and an explanation of data, as required. A complete set of data was entered for six of the twenty components including Solid Amine Water Desorbed (SAWD), Thermoelectric Integrated Membrane Evaporation System (TIMES), Electrochemical Carbon Dioxide Concentrator (EDC), Solid Polymer Electrolysis (SPE), Static Feed Electrolysis (SFE), and BOSCH. Additional data was collected for Reverse Osmosis Water Reclamation-Potable (ROWRP), Reverse Osmosis Water Reclamation-Hygiene (ROWRH), Static Feed Solid Polymer Electrolyte (SFSPE), Trace Contaminant Control System (TCCS), and Multifiltration Water Reclamation - Hygiene (MFWRH). A summary of the database contents is presented in this report.

  4. Keyless Entry: Building a Text Database Using OCR Technology.

    ERIC Educational Resources Information Center

    Grotophorst, Clyde W.

    1989-01-01

    Discusses the use of optical character recognition (OCR) technology to produce an ASCII text database. A tutorial on digital scanning and OCR is provided, and a systems integration project which used the Calera CDP-3000XF scanner and text retrieval software to construct a database of dissertations at George Mason University is described. (four…
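    After OCR produces ASCII text, retrieval over such a database typically relies on an inverted index mapping terms to documents. A toy sketch (document texts invented; this stands in for, and is far simpler than, the text retrieval software the project actually used):

    ```python
    import re
    from collections import defaultdict

    # Hypothetical OCR output: two scanned documents reduced to ASCII text.
    docs = {
        1: "A study of database administration practices in universities.",
        2: "Optical character recognition accuracy for scanned dissertations.",
    }

    # Build the inverted index: term -> set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in re.findall(r"[a-z]+", text.lower()):
            index[token].add(doc_id)

    def search(term):
        """Return the set of document ids containing the term."""
        return index.get(term.lower(), set())
    ```

    Real OCR output would also need error-tolerant matching, since recognition mistakes put misspelled tokens into the index.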

  5. A Database Practicum for Teaching Database Administration and Software Development at Regis University

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2013-01-01

    This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers and can inspire students to…

  6. 2009.1 Revision of the Evaluated Nuclear Data Library (ENDL2009.1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, I. J.; Beck, B.; Descalle, M. A.

    LLNL’s Computational Nuclear Data and Theory Group have created a 2009.1 revised release of the Evaluated Nuclear Data Library (ENDL2009.1). This library is designed to support LLNL’s current and future nuclear data needs and will be employed in nuclear reactor, nuclear security and stockpile stewardship simulations with ASC codes. The ENDL2009 database was the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles. It was assembled with strong support from the ASC PEM and Attribution programs, leveraged with support from Campaign 4 and the DOE/Office of Science’s US Nuclear Data Program. This document lists the revisions and fixes made in a new release called ENDL2009.1, by comparing with the existing data in the original release, which is now called ENDL2009.0. These changes are made in conjunction with the revisions for ENDL2011.1, so that both the .1 releases are as free as possible of known defects.

  7. 2009.3 Revision of the Evaluated Nuclear Data Library (ENDL2009.3)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, I. J.; Beck, B.; Descalle, M. A.

    LLNL's Computational Nuclear Data and Theory Group have created a 2009.3 revised release of the Evaluated Nuclear Data Library (ENDL2009.3). This library is designed to support LLNL's current and future nuclear data needs and will be employed in nuclear reactor, nuclear security and stockpile stewardship simulations with ASC codes. The ENDL2009 database was the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles. It was assembled with strong support from the ASC PEM and Attribution programs, leveraged with support from Campaign 4 and the DOE/Office of Science's US Nuclear Data Program. This document lists the revisions and fixes made in a new release called ENDL2009.3, by comparing with the existing data in the previous release ENDL2009.2. These changes are made in conjunction with the revisions for ENDL2011.3, so that both the .3 releases are as free as possible of known defects.

  8. Current Abstracts Nuclear Reactors and Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bales, J.D.; Hicks, S.C.

    1993-01-01

    This publication Nuclear Reactors and Technology (NRT) announces on a monthly basis the current worldwide information available from the open literature on nuclear reactors and technology, including all aspects of power reactors, components and accessories, fuel elements, control systems, and materials. This publication contains the abstracts of DOE reports, journal articles, conference papers, patents, theses, and monographs added to the Energy Science and Technology Database during the past month. Also included are US information obtained through acquisition programs or interagency agreements and international information obtained through the International Energy Agency's Energy Technology Data Exchange or government-to-government agreements. The digests in NRT and other citations to information on nuclear reactors back to 1948 are available for online searching and retrieval on the Energy Science and Technology Database and Nuclear Science Abstracts (NSA) database. Current information, added daily to the Energy Science and Technology Database, is available to DOE and its contractors through the DOE Integrated Technical Information System. Customized profiles can be developed to provide current information to meet each user's needs.

  9. Nuclear Reactors and Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cason, D.L.; Hicks, S.C.

    1992-01-01

    This publication, Nuclear Reactors and Technology (NRT), announces on a monthly basis the current worldwide information available from the open literature on nuclear reactors and technology, including all aspects of power reactors, components and accessories, fuel elements, control systems, and materials. It contains the abstracts of DOE reports, journal articles, conference papers, patents, theses, and monographs added to the Energy Science and Technology Database during the past month. Also included are US information obtained through acquisition programs or interagency agreements and international information obtained through the International Energy Agency's Energy Technology Data Exchange or government-to-government agreements. The digests in NRT and other citations to information on nuclear reactors back to 1948 are available for online searching and retrieval on the Energy Science and Technology Database and the Nuclear Science Abstracts (NSA) database. Current information, added daily to the Energy Science and Technology Database, is available to DOE and its contractors through the DOE Integrated Technical Information System. Customized profiles can be developed to provide current information to meet each user's needs.

  10. Natural and technologic hazardous material releases during and after natural disasters: a review.

    PubMed

    Young, Stacy; Balluz, Lina; Malilay, Josephine

    2004-04-25

    Natural disasters may be powerful and prominent mechanisms of direct and indirect hazardous material (hazmat) releases. Hazardous materials that are released as the result of a technologic malfunction precipitated by a natural event are referred to as natural-technologic or na-tech events. Na-tech events pose unique environmental and human hazards. Disaster-associated hazardous material releases are of concern, given increases in population density and accelerating industrial development in areas subject to natural disasters. These trends increase the probability of catastrophic future disasters and the potential for mass human exposure to hazardous materials released during disasters. This systematic review summarizes direct and indirect disaster-associated releases, as well as environmental contamination and adverse human health effects that have resulted from natural disaster-related hazmat incidents. Thorough examination of historic disaster-related hazmat releases can be used to identify future threats and improve mitigation and prevention efforts.

  11. Rat Genome and Model Resources.

    PubMed

    Shimoyama, Mary; Smith, Jennifer R; Bryda, Elizabeth; Kuramoto, Takashi; Saba, Laura; Dwinell, Melinda

    2017-07-01

    Rats remain a major model for studying disease mechanisms and discovery, validation, and testing of new compounds to improve human health. The rat's value continues to grow as indicated by the more than 1.4 million publications (second to human) at PubMed documenting important discoveries using this model. Advanced sequencing technologies, genome modification techniques, and the development of embryonic stem cell protocols ensure the rat remains an important mammalian model for disease studies. The 2004 release of the reference genome has been followed by the production of complete genomes for more than two dozen individual strains utilizing NextGen sequencing technologies; their analyses have identified over 80 million variants. This explosion in genomic data has been accompanied by the ability to selectively edit the rat genome, leading to hundreds of new strains through multiple technologies. A number of resources have been developed to provide investigators with access to precision rat models, comprehensive datasets, and sophisticated software tools necessary for their research. Those profiled here include the Rat Genome Database, PhenoGen, Gene Editing Rat Resource Center, Rat Resource and Research Center, and the National BioResource Project for the Rat in Japan. © The Author 2017. Published by Oxford University Press.

  12. EPA’s SPECIATE 4.4 Database: Development and Uses

    EPA Science Inventory

    SPECIATE is the U.S. Environmental Protection Agency's (EPA) repository of volatile organic gas and particulate matter (PM) speciation profiles for air pollution sources. EPA released SPECIATE 4.4 in early 2014 and, in total, the SPECIATE 4.4 database includes 5,728 PM, VOC, total...

  14. Development of expert systems for analyzing electronic documents

    NASA Astrophysics Data System (ADS)

    Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.

    2018-05-01

    The paper analyses a database management system (DBMS). Expert systems, databases, and database technology have become essential components of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.

  15. An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data

    DTIC Science & Technology

    2016-01-01

    these technologies. Backend technologies: Java 1.8, mysql-connector-java-5.0.8.jar, Tomcat, VirtualBox, Kali MANET virtual machine. Frontend technologies: LAMPP. Database: MySQL Server. The SEDAP database settings and structure are described in this section ... contains all the backend Java functionality, including the web services, and should be placed in the webapps directory inside the Tomcat installation

  16. An Overview of ARL’s Multimodal Signatures Database and Web Interface

    DTIC Science & Technology

    2007-12-01

    ActiveX components, which hindered distribution due to license agreements and run-time license software required to use such components. ... The database consists of multimodal signature data files in the HDF5 format. Generally, each signature file contains all the ancillary ... only contains information in the database, Web interface, and signature files that is releasable to the public. The Web interface consists of static ...

  17. JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles

    PubMed Central

    Portales-Casamar, Elodie; Thongjuea, Supat; Kwon, Andrew T.; Arenillas, David; Zhao, Xiaobei; Valen, Eivind; Yusuf, Dimas; Lenhard, Boris; Wasserman, Wyeth W.; Sandelin, Albin

    2010-01-01

    JASPAR (http://jaspar.genereg.net) is the leading open-access database of matrix profiles describing the DNA-binding patterns of transcription factors (TFs) and other proteins interacting with DNA in a sequence-specific manner. Its fourth major release is the largest expansion of the core database to date: the database now holds 457 non-redundant, curated profiles. The new entries include the first batch of profiles derived from ChIP-seq and ChIP-chip whole-genome binding experiments, and 177 yeast TF binding profiles. The introduction of a yeast division brings the convenience of JASPAR to an active research community. As binding models are refined by newer data, the JASPAR database now uses versioning of matrices: in this release, 12% of the older models were updated to improved versions. Classification of TF families has been improved by adopting a new DNA-binding domain nomenclature. A curated catalog of mammalian TFs is provided, extending the use of the JASPAR profiles to additional TFs belonging to the same structural family. The changes in the database set the system ready for more rapid acquisition of new high-throughput data sources. Additionally, three new special collections provide matrix profile data produced by recent alternative high-throughput approaches. PMID:19906716
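    Matrix profiles of the kind JASPAR curates are typically applied by converting a position frequency matrix (PFM) into a log-odds position weight matrix (PWM) and summing the per-position weights over a candidate binding site. A minimal sketch of that standard procedure, using a hypothetical 4-position matrix rather than an actual JASPAR profile:

```python
import math

# Hypothetical 4-position frequency matrix (rows: A, C, G, T); this is an
# illustration of the matrix-profile format, not a real JASPAR entry.
pfm = {
    "A": [20, 1, 1, 18],
    "C": [1, 2, 1, 1],
    "G": [2, 20, 1, 2],
    "T": [1, 1, 21, 3],
}

def pwm_from_pfm(pfm, background=0.25, pseudocount=0.8):
    """Convert counts to log2-odds weights against a uniform background."""
    n_pos = len(next(iter(pfm.values())))
    totals = [sum(pfm[b][i] for b in "ACGT") for i in range(n_pos)]
    return {
        b: [
            math.log2(((pfm[b][i] + pseudocount) /
                       (totals[i] + 4 * pseudocount)) / background)
            for i in range(n_pos)
        ]
        for b in "ACGT"
    }

def score(pwm, site):
    """Sum per-position log-odds weights for a candidate binding site."""
    return sum(pwm[base][i] for i, base in enumerate(site))

pwm = pwm_from_pfm(pfm)
# A site matching the consensus (AGTA) outscores a mismatched one.
print(score(pwm, "AGTA") > score(pwm, "CCCC"))
```

    The pseudocount avoids taking the logarithm of zero for bases never observed at a position; real scanning tools use the same idea with tuned values.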

  18. Predicting Consequences of Technological Disasters from Natural Hazard Events: Challenges and Opportunities Associated with Industrial Accident Data Sources

    NASA Astrophysics Data System (ADS)

    Wood, M.

    2009-04-01

    The increased focus on the possibility of technological accidents caused by natural events (Natech) is foreseen to continue for years to come. Experts in prevention, mitigation and preparation activities associated with natural events will therefore increasingly need to borrow data and expertise traditionally associated with the technological fields to carry out their work. An important question is how useful the data are for understanding the consequences of such Natech events. Data and case studies provided on major industrial accidents tend to focus on lessons learned for re-engineering the process. While consequence data are reported at least nominally in most reports, their precision, quality and completeness are often lacking. Consequence data that are often or sometimes available but not reported can include the severity and type of injuries, distance of victims from the source, exposure measurements, volume of the release, population in potentially affected zones, and weather conditions. Yet these are precisely the types of data that will aid natural hazard experts in land-use planning and emergency response activities when a Natech event may be foreseen. This work discusses the results of a study of consequence data from accidents involving toxic releases reported in the EU's MARS accident database. The study analysed the precision, quality and completeness of three categories of reported consequence data: the description of health effects, consequence assessment and chemical risk assessment factors, and emergency response information. This work reports the findings from this study and discusses how natural hazards experts might interact with industrial accident experts to promote more consistent and accurate reporting of the data that will be useful in consequence-based activities.

  19. Release of (and lessons learned from mining) a pioneering large toxicogenomics database.

    PubMed

    Sandhu, Komal S; Veeramachaneni, Vamsi; Yao, Xiang; Nie, Alex; Lord, Peter; Amaratunga, Dhammika; McMillian, Michael K; Verheyen, Geert R

    2015-07-01

    We release the Janssen Toxicogenomics database. This rat liver gene-expression database was generated using Codelink microarrays and has been used over the past years within Janssen to derive signatures for multiple end points and to classify proprietary compounds. The release consists of gene-expression responses to 124 compounds, selected to give broad coverage of liver-active compounds. A selection of the compounds was also analyzed on Affymetrix microarrays. The release includes results of an in-house reannotation pipeline to Entrez gene annotations, which classifies probes into different confidence classes. High-confidence, unambiguously annotated probes were used to create gene-level data, which served as the starting point for cross-platform comparisons. Connectivity map-based similarity methods show excellent agreement between Codelink and Affymetrix runs of the same samples. We also compared our dataset with the Japanese Toxicogenomics Project and observed reasonable agreement, especially for compounds with stronger gene signatures. We describe an R package containing the gene-level data and show how it can be used for expression-based similarity searches. As expected, the cross-platform correspondence is smaller when the data are compared with an independent dataset such as TG-GATEs. We hope that this collection of gene-expression profiles will be incorporated into the toxicogenomics pipelines of users.
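    Expression-based similarity searches of the kind described above compare a query gene signature against stored compound profiles and rank the compounds. A minimal sketch using Spearman rank correlation as the similarity measure; real connectivity mapping uses enrichment statistics, and the compound names and expression values here are hypothetical:

```python
def rank(values):
    """Ranks 1..n by ascending value (assumes no ties)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0.0] * len(values)
    for i, idx in enumerate(order):
        r[idx] = float(i + 1)
    return r

def spearman(a, b):
    """Spearman rank correlation via the classic sum-of-squared-rank-differences formula."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical query signature (log-ratios for five genes) and two stored profiles.
query = [2.1, -1.3, 0.4, 1.8, -0.6]
profiles = {
    "compound_A": [1.9, -1.0, 0.2, 2.2, -0.4],   # similar response
    "compound_B": [-1.5, 2.0, -0.3, -1.9, 0.8],  # opposite response
}

# The most "connected" compound is the one with the highest rank correlation.
best = max(profiles, key=lambda c: spearman(query, profiles[c]))
print(best)  # → compound_A
```

    Anti-correlated profiles (here compound_B, rho = -0.9) are themselves informative in connectivity mapping, suggesting an opposing pharmacological effect.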

  20. US EPA's SPECIATE 4.4 Database: Development and Uses

    EPA Science Inventory

    SPECIATE is the U.S. Environmental Protection Agency’s (EPA) repository of volatile organic gas and particulate matter (PM) speciation profiles of air pollution sources. EPA released SPECIATE 4.4 in early 2014 and, in total, the SPECIATE 4.4 database includes 5,728 PM, volatile o...

  1. High Throughput Experimental Materials Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakutayev, Andriy; Perkins, John; Schwarting, Marcus

    The mission of the High Throughput Experimental Materials Database (HTEM DB) is to enable the discovery of new materials with useful properties by releasing large amounts of high-quality experimental data to the public. The HTEM DB contains information about materials obtained from high-throughput experiments at the National Renewable Energy Laboratory (NREL).

  2. EPA’s SPECIATE 4.4 Database: Bridging Data Sources and Data Users

    EPA Science Inventory

    SPECIATE is the U.S. Environmental Protection Agency's (EPA) repository of volatile organic gas and particulate matter (PM) speciation profiles for air pollution sources. EPA released SPECIATE 4.4 in early 2014 and, in total, the SPECIATE 4.4 database includes 5,728 PM, VOC, total...

  3. New perspectives in toxicological information management, and the role of ISSTOX databases in assessing chemical mutagenicity and carcinogenicity.

    PubMed

    Benigni, Romualdo; Battistelli, Chiara Laura; Bossa, Cecilia; Tcheremenskaia, Olga; Crettaz, Pierre

    2013-07-01

    Currently, the public has access to a variety of databases containing mutagenicity and carcinogenicity data. These resources are crucial for the toxicologists and regulators involved in the risk assessment of chemicals, which necessitates access to all the relevant literature and the capability to search across toxicity databases using both biological and chemical criteria. Towards the larger goal of screening chemicals for a wide range of toxicity end points of potential interest, publicly available resources across a large spectrum of biological and chemical data space must be effectively harnessed with current and evolving information technologies (i.e. systematised, integrated and mined) if long-term screening and prediction objectives are to be achieved. A key to rapid progress in the field of chemical toxicity databases is the combination of information technology with the chemical structure as the identifier of the molecules. This permits an enormous range of operations (e.g. retrieving chemicals or chemical classes, describing the content of databases, finding similar chemicals, crossing biological and chemical interrogations, etc.) that more classical databases cannot support. This article describes progress in the technology of toxicity databases, including the concepts of Chemical Relational Databases and Toxicological Standardized Controlled Vocabularies (ontologies). It then describes the ISSTOX cluster of toxicological databases at the Istituto Superiore di Sanità, which consists of freely available databases characterised by the use of modern information technologies and by curation of the quality of the biological data. Finally, the article provides examples of analyses and results made possible by ISSTOX.

  4. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    PubMed

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA are rather limited. We therefore reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g. name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed.

  5. Ensembl core software resources: storage and programmatic access for DNA sequence and genome annotation.

    PubMed

    Ruffier, Magali; Kähäri, Andreas; Komorowska, Monika; Keenan, Stephen; Laird, Matthew; Longden, Ian; Proctor, Glenn; Searle, Steve; Staines, Daniel; Taylor, Kieron; Vullo, Alessandro; Yates, Andrew; Zerbino, Daniel; Flicek, Paul

    2017-01-01

    The Ensembl software resources are a stable infrastructure to store, access and manipulate genome assemblies and their functional annotations. The Ensembl 'Core' database and Application Programming Interface (API) was our first major piece of software infrastructure and remains at the centre of all of our genome resources. Since its initial design more than fifteen years ago, the number of publicly available genomic, transcriptomic and proteomic datasets has grown enormously, accelerated by continuous advances in DNA-sequencing technology. Initially intended to provide annotation for the reference human genome, we have extended our framework to support the genomes of all species as well as richer assembly models. Cross-referenced links to other informatics resources facilitate searching our database with a variety of popular identifiers such as UniProt and RefSeq. Our comprehensive and robust framework storing a large diversity of genome annotations in one location serves as a platform for other groups to generate and maintain their own tailored annotation. We welcome reuse and contributions: our databases and APIs are publicly available, all of our source code is released with a permissive Apache v2.0 licence at http://github.com/Ensembl and we have an active developer mailing list ( http://www.ensembl.org/info/about/contact/index.html ). http://www.ensembl.org. © The Author(s) 2017. Published by Oxford University Press.

  6. Pan-STARRS Data Release 1

    NASA Astrophysics Data System (ADS)

    Flewelling, Heather

    2017-01-01

    We present an overview of the first and second Pan-STARRS data releases (DR1 and DR2), and how to use the Published Science Products Subsystem (PSPS) and the Pan-STARRS Science Interface (PSI) to access the images and the catalogs. The data will be available from the STScI MAST archive. The PSPS is an SQL Server database that can be queried via script or web interface. The database provides relative photometry, astrometry and object associations, making it easy to perform searches across the entire sky, as well as tools to generate light curves of individual objects as a function of time. Both data releases use the 3pi survey, which has 5 filters (g,r,i,z,y), roughly 60 epochs (12 per filter), and covers 3/4 of the sky, everything north of -30 degrees declination. The first data release (DR1) will contain stack images, mean attribute catalogs, and static sky catalogs based on the stacks. The second data release (DR2) will contain the time domain data. For the images, this will include single exposures that have been detrended and warped. For the catalogs, this will include catalogs of all exposures as well as forced photometry.

  7. Pan-STARRS Data Release 2

    NASA Astrophysics Data System (ADS)

    Flewelling, Heather

    2018-01-01

    On December 19, 2016, Pan-STARRS released the stacked images, mean attributes catalogs, and static sky catalogs for the 3pi survey, in 5 filters (g,r,i,z,y), covering 3/4 of the sky, everything north of -30 degrees in declination. This set of data is called Data Release 1 (DR1), and it is available to all at http://panstarrs.stsci.edu. It contains more than 10 billion objects; 3 billion of those objects have stack photometry. We give an update on the progress of the forthcoming Data Release 2 (DR2) database, which will provide time domain catalogs and single exposures for the 3pi survey. This includes 3pi data taken between 2010 and 2014, covering approximately 60 epochs per patch of sky, and includes measurements detected in the single exposures as well as forced photometry measurements (photometry measured on single exposures using the positions of sources detected in the stacks). We also provide information on future releases (DR3 and beyond), which will contain the rest of the 3pi database (specifically, the data products related to difference imaging), as well as the data products for the Medium Deep (MD) survey.

  8. [Conceptual foundations of creation of branch database of technology and intellectual property rights owned by scientific institutions, organizations, higher medical educational institutions and enterprises of healthcare sphere of Ukraine].

    PubMed

    Horban', A Ie

    2013-09-01

    The implementation of state policy in the field of technology transfer in the medical branch is considered, with the aim of implementing the Law of Ukraine of 02.10.2012 No. 5407-VI "On Amendments to the Law of Ukraine 'On State Regulation of Activity in the Field of Technology Transfer'", namely ensuring the formation of a branch database of technologies and intellectual property rights owned by scientific institutions, organizations, higher medical educational institutions and enterprises of the healthcare sphere of Ukraine and established from the budget. An analysis of international and domestic experience in processing information about intellectual property rights and of systems supporting the transfer of new technologies is presented. The main conceptual principles for creating this branch database of technology transfer and a branch technology transfer network are defined.

  9. Design of an expert system for the development and formulation of push-pull osmotic pump tablets containing poorly water-soluble drugs.

    PubMed

    Zhang, Zhi-hong; Dong, Hong-ye; Peng, Bo; Liu, Hong-fei; Li, Chun-lei; Liang, Min; Pan, Wei-san

    2011-05-30

    The purpose of this article was to build an expert system for the development and formulation of push-pull osmotic pump tablets (PPOP). Hundreds of PPOP formulations containing different poorly water-soluble drugs and pharmaceutically acceptable excipients were studied. The knowledge base, comprising a database and a rule base, was built from the reported results of these formulations and the experience available from other researchers. The prediction model for release behavior was built using a back-propagation (BP) neural network, which is well suited to nonlinear mapping and learning. The formulation design model was established on top of the release-behavior prediction model, which forms the nucleus of the inference engine. Finally, the expert system program was constructed in VB.NET with SQL Server. Expert systems are one of the most popular areas of artificial intelligence, yet to date no expert system has been available for the formulation of controlled release dosage forms. Moreover, osmotic pump technology (OPT) is steadily maturing worldwide, so it is worthwhile to apply expert systems to OPT. Famotidine, a water-insoluble drug, was chosen as the model drug to validate the applicability of the developed expert system. Copyright © 2011 Elsevier B.V. All rights reserved.
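    A back-propagation network of the kind used for the release-behavior prediction model can be sketched minimally as a single hidden layer trained by gradient descent. The three inputs and single output below are hypothetical stand-ins for normalized formulation variables and a release attribute, not the authors' actual model:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One hidden layer of 4 sigmoid units mapping 3 inputs to 1 output.
n_in, n_hidden = 3, 4
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]

# Toy training pairs: (formulation variables, observed release fraction).
data = [([0.2, 0.7, 0.1], 0.3), ([0.9, 0.1, 0.5], 0.8), ([0.4, 0.4, 0.4], 0.5)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return h, sigmoid(sum(w * hi for w, hi in zip(w2, h)))

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

before = mse()
for _ in range(2000):                      # plain stochastic gradient descent
    for x, t in data:
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)      # output-layer delta
        for j in range(n_hidden):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # hidden-layer delta
            w2[j] -= 0.5 * d_out * h[j]
            for i in range(n_in):
                w1[j][i] -= 0.5 * d_h * x[i]
print(mse() < before)  # training reduces the fit error
```

    A production model would add bias terms, validation data and early stopping; the point here is only the shape of the forward pass and the back-propagated deltas.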

  10. Alaska Geochemical Database - Mineral Exploration Tool for the 21st Century - PDF of presentation

    USGS Publications Warehouse

    Granitto, Matthew; Schmidt, Jeanine M.; Labay, Keith A.; Shew, Nora B.; Gamble, Bruce M.

    2012-01-01

    The U.S. Geological Survey has created a geochemical database of geologic material samples collected in Alaska. This database is readily accessible to anyone with access to the Internet. Designed as a tool for mineral or environmental assessment, land management, or mineral exploration, the initial version of the Alaska Geochemical Database - U.S. Geological Survey Data Series 637 - contains geochemical, geologic, and geospatial data for 264,158 samples collected from 1962-2009: 108,909 rock samples; 92,701 sediment samples; 48,209 heavy-mineral-concentrate samples; 6,869 soil samples; and 7,470 mineral samples. In addition, the Alaska Geochemical Database contains mineralogic data for 18,138 nonmagnetic-fraction heavy mineral concentrates, making it the first U.S. Geological Survey database of this scope that contains both geochemical and mineralogic data. Examples from the Alaska Range will illustrate potential uses of the Alaska Geochemical Database in mineral exploration. Data from the Alaska Geochemical Database have been extensively checked for accuracy of sample media description, sample site location, and analytical method using U.S. Geological Survey sample-submittal archives and U.S. Geological Survey publications (plus field notebooks and sample site compilation base maps from the Alaska Technical Data Unit in Anchorage, Alaska). The database is also the repository for nearly all previously released U.S. Geological Survey Alaska geochemical datasets. Although the Alaska Geochemical Database is a fully relational database in Microsoft® Access 2003 and 2010 formats, these same data are also provided as a series of spreadsheet files in Microsoft® Excel 2003 and 2010 formats, and as ASCII text files. A DVD version of the Alaska Geochemical Database was released in October 2011, as U.S. Geological Survey Data Series 637, and data downloads are available at http://pubs.usgs.gov/ds/637/. 
Also, all Alaska Geochemical Database data have been incorporated into the interactive U.S. Geological Survey Mineral Resource Data web portal, available at http://mrdata.usgs.gov/.

  11. Combining new technologies for effective collection development: a bibliometric study using CD-ROM and a database management program.

    PubMed Central

    Burnham, J F; Shearer, B S; Wall, J C

    1992-01-01

    Librarians have used bibliometrics for many years to assess collections and to provide data for making selection and deselection decisions. With the advent of new technology, specifically CD-ROM databases and reprint-file database management programs, new cost-effective procedures can be developed. This paper describes a recent multidisciplinary study conducted by two library faculty members and one allied health faculty member to test a bibliometric method that used the MEDLINE and CINAHL databases on CD-ROM and the Papyrus database management program to produce a new collection development methodology. PMID:1600424

  12. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    PubMed

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
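    The trade-off the study measures can be illustrated by storing one dbSNP-style annotation both ways: fixed columns with a join table on the relational side versus a single nested document on the NoSQL side. In this sketch the schema and the record are hypothetical, and a real MongoDB instance is replaced by an in-memory list of dicts so the example is self-contained (SQLite stands in for the relational database):

```python
import sqlite3

# Hypothetical dbSNP-style annotation; not a real variant record.
variant = {"rsid": "rs0000000", "chrom": "1", "pos": 123456,
           "alleles": ["A", "G"], "gene": "HYPOTHETICAL1"}

# Relational model: fixed columns; the multi-valued allele field
# needs a second table and a join to reassemble the record.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE snp (rsid TEXT PRIMARY KEY, chrom TEXT, pos INTEGER, gene TEXT)")
db.execute("CREATE TABLE allele (rsid TEXT, base TEXT)")
db.execute("INSERT INTO snp VALUES (?, ?, ?, ?)",
           (variant["rsid"], variant["chrom"], variant["pos"], variant["gene"]))
db.executemany("INSERT INTO allele VALUES (?, ?)",
               [(variant["rsid"], a) for a in variant["alleles"]])
row = db.execute("SELECT pos FROM snp WHERE rsid = ?", (variant["rsid"],)).fetchone()

# Document model: the whole annotation is one nested record, queried directly
# with no join; this flexibility is what document stores trade schema rigor for.
collection = [variant]
doc = next(d for d in collection if d["rsid"] == "rs0000000")

print(row[0] == doc["pos"])  # both models return the same annotation
```

    The document model keeps nested fields like the allele list in place, while the relational model must normalize them; which is faster at scale is exactly the question the benchmark above answers empirically.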

  13. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    ERIC Educational Resources Information Center

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  14. A Toolkit for Active Object-Oriented Databases with Application to Interoperability

    NASA Technical Reports Server (NTRS)

    King, Roger

    1996-01-01

    In our original proposal we stated that our research would 'develop a novel technology that provides a foundation for collaborative information processing.' The essential ingredient of this technology is the notion of 'deltas,' which are first-class values representing collections of proposed updates to a database. The Heraclitus framework provides a variety of algebraic operators for building up, combining, inspecting, and comparing deltas. Deltas can be directly applied to the database to yield a new state, or used 'hypothetically' in queries against the state that would arise if the delta were applied. The central point here is that the step of elevating deltas to 'first-class' citizens in database programming languages will yield tremendous leverage on the problem of supporting updates in collaborative information processing. In short, our original intention was to develop the theoretical and practical foundation for a technology based on deltas in an object-oriented database context, develop a toolkit for active object-oriented databases, and apply this toward collaborative information processing.

  16. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research

    PubMed Central

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

Background Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, and their use has surged in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA are rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Method Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g. name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Results Forty databases, 20 from Thailand and 20 from Japan, were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available from public sources. Conclusion Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed. PMID:26560127

  17. Video Discs in Libraries.

    ERIC Educational Resources Information Center

    Barker, Philip

    1986-01-01

    Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…

  18. The use of intelligent database systems in acute pancreatitis--a systematic review.

    PubMed

    van den Heever, Marc; Mittal, Anubhav; Haydock, Matthew; Windsor, John

    2014-01-01

Acute pancreatitis (AP) is a complex disease with multiple aetiological factors, wide-ranging severity, and multiple challenges to effective triage and management. Databases, data mining and machine learning algorithms (MLAs), including artificial neural networks (ANNs), may assist by storing and interpreting data from multiple sources, potentially improving clinical decision-making. The aims were to: 1) identify database technologies used to store AP data, 2) collate and categorise variables stored in AP databases, 3) identify the MLA technologies, including ANNs, used to analyse AP data, and 4) identify clinical and non-clinical benefits and obstacles in establishing a national or international AP database. A comprehensive systematic search of online reference databases was performed. The predetermined inclusion criteria were all papers discussing 1) databases, 2) data mining or 3) MLAs pertaining to AP, independently assessed by two reviewers with conflicts resolved by a third author. Forty-three papers were included. Three data mining technologies and five ANN methodologies were reported in the literature, and 187 collected variables were identified. ANNs increase the accuracy of severity prediction: one study showed ANNs had a sensitivity of 0.89 and specificity of 0.96 six hours after admission, compared with 0.80 and 0.85 respectively for APACHE II (cutoff score ≥8). Problems with databases were incomplete data, missing clinical data, and diagnostic reliability. This is the first systematic review examining the use of databases, MLAs and ANNs in the management of AP. The clinical benefits these technologies have over current systems, and other advantages to adopting them, are identified. Copyright © 2013 IAP and EPC. Published by Elsevier B.V. All rights reserved.
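The performance figures quoted above (sensitivity 0.89 and specificity 0.96 for the ANN versus 0.80 and 0.85 for APACHE II) are standard confusion-matrix metrics. As a minimal illustration of how they are computed from binary predictions (not the review's code; labels and data are invented):

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary labels: 1 = severe AP, 0 = mild AP."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 4 severe and 4 mild cases
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```

A classifier that misses one severe case (a false negative) loses sensitivity; one that flags a mild case as severe (a false positive) loses specificity, which is why the pair is reported together.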

  19. Framework for Optimizing Selection of Interspecies Correlation Estimation Models to Address Species Diversity and Toxicity Gaps in an Aquatic Database

    EPA Science Inventory

    The Chemical Aquatic Fate and Effects (CAFE) database is a tool that facilitates assessments of accidental chemical releases into aquatic environments. CAFE contains aquatic toxicity data used in the development of species sensitivity distributions (SSDs) and the estimation of ha...

  20. Space Object Radiometric Modeling for Hardbody Optical Signature Database Generation

    DTIC Science & Technology

    2009-09-01

This presentation summarizes recent activity in monitoring spacecraft health status using passive remote optical nonimaging measurements. It is beneficial to the observer/analyst to understand the fundamental optical signature variability associated with these detection and... Approved for public release; distribution is unlimited.

  1. A meta-analysis of bacterial diversity in the feces of cattle

    USDA-ARS?s Scientific Manuscript database

    In this study, we conducted a meta-analysis on 16S rRNA gene sequences of bovine fecal origin that are publicly available in the RDP database. A total of 13663 sequences including 603 isolate sequences were identified in the RDP database (Release 11, Update 1), where 13447 sequences were assigned t...

  2. Potential use of routine databases in health technology assessment.

    PubMed

    Raftery, J; Roderick, P; Stevens, A

    2005-05-01

    To develop criteria for classifying databases in relation to their potential use in health technology (HT) assessment and to apply them to a list of databases of relevance in the UK. To explore the extent to which prioritized databases could pick up those HTs being assessed by the National Coordinating Centre for Health Technology Assessment (NCCHTA) and the extent to which these databases have been used in HT assessment. To explore the validation of the databases and their cost. Electronic databases. Key literature sources. Experienced users of routine databases. A 'first principles' examination of the data necessary for each type of HT assessment was carried out, supplemented by literature searches and a historical review. The principal investigators applied the criteria to the databases. Comments of the 'keepers' of the prioritized databases were incorporated. Details of 161 topics funded by the NHS R&D Health Technology Assessment (HTA) programme were reviewed iteratively by the principal investigators. Uses of databases in HTAs were identified by literature searches, which included the title of each prioritized database as a keyword. Annual reports of databases were examined and 'keepers' queried. The validity of each database was assessed using criteria based on a literature search and involvement by the authors in a national academic network. The costs of databases were established from annual reports, enquiries to 'keepers' of databases and 'guesstimates' based on cost per record. For assessing effectiveness, equity and diffusion, routine databases were classified into three broad groups: (1) group I databases, identifying both HTs and health states, (2) group II databases, identifying the HTs, but not a health state, and (3) group III databases, identifying health states, but not an HT. Group I datasets were disaggregated into clinical registries, clinical administrative databases and population-oriented databases. 
Group III were disaggregated into adverse event reporting, confidential enquiries, disease-only registers and health surveys. Databases in group I can be used not only to assess effectiveness but also to assess diffusion and equity. Databases in group II can only assess diffusion. Group III has restricted scope for assessing HTs, except for analysis of adverse events. For use in costing, databases need to include unit costs or prices. Some databases included unit cost as well as a specific HT. A list of around 270 databases was identified at the level of UK, England and Wales or England (over 1000 including Scotland, Wales and Northern Ireland). Allocation of these to the above groups identified around 60 databases with some potential for HT assessment, roughly half to group I. Eighteen clinical registers were identified as having the greatest potential although the clinical administrative datasets had potential mainly owing to their inclusion of a wide range of technologies. Only two databases were identified that could directly be used in costing. The review of the potential capture of HTs prioritized by the UK's NHS R&D HTA programme showed that only 10% would be captured in these databases, mainly drugs prescribed in primary care. The review of the use of routine databases in any form of HT assessment indicated that clinical registers were mainly used for national comparative audit. Some databases have only been used in annual reports, usually time trend analysis. A few peer-reviewed papers used a clinical register to assess the effectiveness of a technology. Accessibility is suggested as a barrier to using most databases. Clinical administrative databases (group Ib) have mainly been used to build population needs indices and performance indicators. A review of the validity of used databases showed that although internal consistency checks were common, relatively few had any form of external audit. 
Some comparative audit databases have data scrutinised by participating units. Issues around coverage and coding have, in general, received little attention. NHS funding of databases has been mainly for 'Central Returns' for management purposes, which excludes those databases with the greatest potential for HT assessment. Funding sources for databases varied, but some are unfunded, relying on goodwill. The total cost of databases in group I plus selected databases from groups II and III has been estimated at £50 million, or around 0.1% of annual NHS spend. A few databases with limited potential for HT assessment account for the bulk of spending. Suggestions for policy include clarification of responsibility for the strategic development of databases, improved resourcing, and issues around coding, confidentiality, ownership and access, maintenance of clinical support, optimal use of information technology, filling gaps and remedying deficiencies. Recommendations for researchers include closer policy links between routine data and R&D, and selective investment in the more promising databases. Recommended research topics include optimal capture and coding of the range of HTs, international comparisons of the role, funding and use of routine data in healthcare systems, and use of routine databases in trials and in modelling. Independent evaluations are recommended for information strategies (such as those around the National Service Frameworks and various collaborations) and for electronic patient and health records.
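The three-group classification described above reduces to a simple decision rule over two attributes of a database: whether it identifies the health technology and whether it identifies a health state. A sketch of that rule (illustrative only; the attribute names are my own, not the report's):

```python
def classify_database(identifies_ht: bool, identifies_health_state: bool) -> str:
    """Assign a routine database to the report's groups.
    Group I (both) can assess effectiveness, equity and diffusion;
    group II (HT only) can only assess diffusion;
    group III (health state only) has restricted scope, mainly adverse events."""
    if identifies_ht and identifies_health_state:
        return "I"
    if identifies_ht:
        return "II"
    if identifies_health_state:
        return "III"
    return "unclassified"

print(classify_database(True, True))    # clinical registry -> I
print(classify_database(True, False))   # prescribing data -> II
print(classify_database(False, True))   # disease-only register -> III
```

The rule makes explicit why only group I databases support effectiveness assessment: linking an intervention to an outcome requires both attributes in the same record source.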

  3. MitoNuc: a database of nuclear genes coding for mitochondrial proteins. Update 2002.

    PubMed

    Attimonelli, Marcella; Catalano, Domenico; Gissi, Carmela; Grillo, Giorgio; Licciulli, Flavio; Liuni, Sabino; Santamaria, Monica; Pesole, Graziano; Saccone, Cecilia

    2002-01-01

Mitochondria, besides their central role in energy metabolism, have recently been found to be involved in a number of basic processes of cell life and to contribute to the pathogenesis of many degenerative diseases. All functions of mitochondria depend on the interaction of nuclear and organelle genomes. Mitochondrial genomes have been extensively sequenced and analysed, and data have been collected in several specialised databases. In order to collect information on nuclear-coded mitochondrial proteins, we developed MitoNuc, a database containing detailed information on sequenced nuclear genes coding for mitochondrial proteins in Metazoa. The MitoNuc database can be retrieved through SRS and is available via the web site http://bighost.area.ba.cnr.it/mitochondriome, where other mitochondrial databases developed by our group, the complete list of the sequenced mitochondrial genomes, links to other mitochondrial sites and related information are available. The MitoAln database, related to MitoNuc in the previous release and reporting the multiple alignments of the relevant homologous protein-coding regions, is no longer supported in the present release. In order to preserve the links among MitoNuc entries derived from homologous proteins, a new field has been defined in the database: the cluster identifier, an alphanumeric code used to identify each cluster of homologous proteins. A comment field derived from the corresponding SWISS-PROT entry has been introduced; this reports clinical data related to dysfunction of the protein. The logical scheme of the MitoNuc database has been implemented in the ORACLE DBMS. This will allow end-users to retrieve data through a friendly interface that will soon be implemented.

  4. Environmental Carcinogen Releases and Lung Cancer Mortality in Rural-Urban Areas of the United States

    ERIC Educational Resources Information Center

    Luo, Juhua; Hendryx, Michael

    2011-01-01

    Purpose: Environmental hazards are unevenly distributed across communities and populations; however, little is known about the distribution of environmental carcinogenic pollutants and lung cancer risk across populations defined by race, sex, and rural-urban setting. Methods: We used the Toxics Release Inventory (TRI) database to conduct an…

  5. Public Release of Pan-STARRS Data

    NASA Astrophysics Data System (ADS)

    Flewelling, Heather; Consortium, panstarrs

    2015-08-01

Pan-STARRS 1 is a 1.8 meter survey telescope, located on Haleakala, Hawaii, with a 1.4 Gigapixel camera, a 7 square degree field of view, and 5 filters (g,r,i,z,y). The public release of data, which is available to everyone, consists of 4 years of data taken between May 2010 and April 2014. Two of the surveys available in the public release are the 3pi survey and the Medium Deep (MD) survey. The 3pi survey has roughly 60 epochs (12 per filter) covering 3/4 of the sky and everything north of -30 degrees declination. The MD survey consists of 10 fields, observed in a couple of filters each night, usually 8 exposures per filter per field, for about 4000 epochs per MD field. The available data products are accessed through the “Postage Stamp Server” and through the Published Science Products Subsystem (PSPS), both of which are available through the Pan-STARRS Science Interface (PSI). The Postage Stamp Server provides images and catalogs for different stages of processing on single exposures, stack images, difference images, and forced photometry. The PSPS is a SQLServer database that can be queried via script or web interface, with a database for each MD field and a large database for the 3pi survey. This database has relative photometry and astrometry and object associations, making it easy to do searches across the entire sky, as well as tools to generate lightcurves of individual objects as a function of time.

  6. Technology and the Modern Library.

    ERIC Educational Resources Information Center

    Boss, Richard W.

    1984-01-01

    Overview of the impact of information technology on libraries highlights turnkey vendors, bibliographic utilities, commercial suppliers of records, state and regional networks, computer-to-computer linkages, remote database searching, terminals and microcomputers, building local databases, delivery of information, digital telefacsimile,…

  7. LIPS database with LIPService: a microscopic image database of intracellular structures in Arabidopsis guard cells.

    PubMed

    Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro

    2013-05-16

Intracellular configuration is an important feature of cell status. Recent advances in microscopic imaging techniques allow us to easily obtain a large number of microscopic images of intracellular structures. In this circumstance, automated microscopic image recognition techniques are of extreme importance to future phenomics/visible screening approaches. However, there was no benchmark microscopic image dataset for intracellular organelles in a specified plant cell type. We previously established the Live Images of Plant Stomata (LIPS) database, a publicly available collection of optical-section images of various intracellular structures of plant guard cells, as a model system of environmental signal perception and transduction. Here we report recent updates to the LIPS database and the establishment of a new database interface, LIPService. We updated the LIPS dataset and established LIPService to promote efficient inspection of intracellular structure configurations. Cell nuclei, microtubules, actin microfilaments, mitochondria, chloroplasts, endoplasmic reticulum, peroxisomes, endosomes, Golgi bodies, and vacuoles can be filtered using probe names or morphometric parameters such as stomatal aperture. In addition to the serial optical sectional images of the original LIPS database, new volume-rendering data for easy web browsing of three-dimensional intracellular structures have been released to allow easy inspection of their configurations or relationships with cell status/morphology. We also demonstrated the utility of the new LIPS image database for automated organelle recognition of images from another plant cell image database with image clustering analyses. The updated LIPS database provides a benchmark image dataset for representative intracellular structures in Arabidopsis guard cells. The newly released LIPService allows users to inspect the relationship between organellar three-dimensional configurations and morphometric parameters.

  8. Development and Evaluation of High Bioavailable Sustained-Release Nimodipine Tablets Prepared with Monolithic Osmotic Pump Technology.

    PubMed

    Kong, Hua; Yu, Fanglin; Liu, Yan; Yang, Yang; Li, Mingyuan; Cheng, Xiaohui; Hu, Xiaoqin; Tang, Xuemei; Li, Zhiping; Mei, Xingguo

    2018-01-01

Frequent administration caused by a short half-life, and low bioavailability due to poor solubility and a low dissolution rate, limit the further application of poorly water-soluble nimodipine, although several new indications have been developed. To overcome these shortcomings, sophisticated technologies had to be used, since the dose of nimodipine was not too low and the addition of solubilizers could not resolve the problem of poor release. The purpose of this study was to obtain sustained and complete release of nimodipine with a simple and easily industrialized technology. Expandable monolithic osmotic pump tablets containing nimodipine combined with poloxamer 188 and carboxymethylcellulose sodium were prepared. The factors affecting drug release, including the amounts of solubilizing agent, expanding agent and retarding agent in the core tablet and of porogenic agent in the semipermeable film, were optimized. The release behavior was investigated both in vitro and in beagle dogs. It was proved that the anticipated release of nimodipine could be realized in vitro. Sustained and complete release of nimodipine was also realized in beagles: the mean residence time of nimodipine from the osmotic pump system was longer and Cmax was lower than those from the sustained-release tablets on the market, while there was no difference in AUC(0-t) between the monolithic osmotic pump tablets and the marketed sustained-release tablets. It was reasonable to believe that sustained and complete release of poorly water-soluble nimodipine could be realized by using simple expandable monolithic osmotic pump technology combined with a surfactant. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  9. Access To The PMM's Pixel Database

    NASA Astrophysics Data System (ADS)

    Monet, D.; Levine, S.

    1999-12-01

The U.S. Naval Observatory Flagstaff Station is in the process of enabling access to the Precision Measuring Machine (PMM) program's pixel database. The initial release will include the pixels from the PMM's scans of the Palomar Observatory Sky Survey I (POSS-I) -O and -E surveys, the Whiteoak Extension, the European Southern Observatory-R survey, the Science and Engineering Council-J, -EJ, and -ER surveys, and the Anglo-Australian Observatory-R survey. (The SERC-ER and AAO-R surveys are currently incomplete.) As time allows, access to the POSS-II -J, -F, and -N surveys, the Palomar Infrared Milky Way Atlas, the Yale/San Juan Southern Proper Motion survey, and plates rejected by various surveys will be added. (POSS-II -J and -F are complete, but -N was never finished.) Eventually, some 10 Tbytes of pixel data will be available. Due to funding and technology limitations, the initial interface will have only limited functionality, and access time will be slow since the archive is stored on Digital Linear Tape (DLT). Usage of the pixel data will be restricted to non-commercial, scientific applications, and agreements on copyright issues have yet to be finalized. The poster presentation will give the URL.

  10. The Pan-STARRS pipeline and data products

    NASA Astrophysics Data System (ADS)

    Flewelling, Heather

    2018-01-01

I will give a brief overview of the pipeline, database, and data products for Pan-STARRS1 data release 1 (DR1) and data release 2 (DR2). DR1 and DR2 provide access to data from the Pan-STARRS1 3pi survey, a survey which covers ¾ of the sky over 4 years (2010-2014), everything with a declination greater than -30, in 5 filters (g,r,i,z,y), with at least 12 epochs per filter per area of sky. DR1, released in December 2016 and available to the public at http://stsci.panstarrs.edu, consists of two parts: the stacked images, with a 5 sigma depth of (23.3,23.2,23.1,22.3,21.3) for (g,r,i,z,y), and the catalog database, which consists of 10 billion distinct objects, their mean properties from single exposures, and stack photometry. DR2, to be released early 2018, will contain the individual exposure images, with a 5 sigma depth of (22.0,21.8,21.5,20.9,19.7) for (g,r,i,z,y), and the time domain catalogs, from the 374k individual exposures taken for the 3pi survey. I will primarily focus on the catalog database, describing a subset of the tables and different use cases for them. Specifically, I will describe the major tables and metadata of DR1 (objects, their mean properties, and stack photometry), when different tables should be used, and the basics of how to filter the data.
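The catalog database described above is queried with SQL via script or web interface. As a purely illustrative sketch, here is a helper that builds a simple box-search query string; the table and column names (ObjectThin, MeanObject, objID, raMean, decMean, gMeanPSFMag, rMeanPSFMag) are assumptions based on the published DR1 schema description and should be verified against the live server before use:

```python
def box_search_sql(ra, dec, radius_deg, maglim=22.0):
    """Build a simple box-search query around (ra, dec), in degrees.

    Table/column names are assumed, not confirmed; a real cone search
    would also handle RA wrap-around at 0/360 and prefer the server's
    own geometry helper functions where available.
    """
    return (
        "SELECT o.objID, o.raMean, o.decMean, m.gMeanPSFMag, m.rMeanPSFMag "
        "FROM ObjectThin AS o JOIN MeanObject AS m ON o.objID = m.objID "
        f"WHERE o.raMean BETWEEN {ra - radius_deg} AND {ra + radius_deg} "
        f"AND o.decMean BETWEEN {dec - radius_deg} AND {dec + radius_deg} "
        f"AND m.gMeanPSFMag < {maglim}"
    )

print(box_search_sql(150.0, 2.0, 0.5))
```

Filtering on a mean magnitude column like this is the kind of "use the mean-properties table first, then drill into per-epoch detections" workflow the talk describes.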

  11. [Construction and application of special analysis database of geoherbs based on 3S technology].

    PubMed

    Guo, Lan-ping; Huang, Lu-qi; Lv, Dong-mei; Shao, Ai-juan; Wang, Jian

    2007-09-01

In this paper, the structure, data sources, and data codes of "the spatial analysis database of geoherbs" based on 3S technology are introduced, and the essential functions of the database, such as data management, remote sensing, spatial interpolation, spatial statistics, spatial analysis and development, are described. Finally, two examples of database usage are given: one is the classification and calculation of the NDVI index of remote sensing images in the geoherbal area of Atractylodes lancea; the other is an adaptation analysis of A. lancea. These indicate that "the spatial analysis database of geoherbs" has bright prospects for the spatial analysis of geoherbs.

  12. Generalized Database Management System Support for Numeric Database Environments.

    ERIC Educational Resources Information Center

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  13. Hydrogen Release Compound (HRC®) Barrier Application At The North Of Basin F Site, Rocky Mountain Arsenal; Innovative Technology Evaluation Report

    EPA Science Inventory

    This Innovative Technology Evaluation Report documents the results of a demonstration of the hydrogen release compound (HRC®) barrier technology developed by Regenesis Bioremediation Products, Inc., of San Clemente, California. HRC® is a proprietary, food-q...

  14. [A Terahertz Spectral Database Based on Browser/Server Technique].

    PubMed

    Zhang, Zhuo-yong; Song, Yue

    2015-09-01

With the solution of key scientific and technical problems and the development of instrumentation, the application of terahertz technology in various fields has received more and more attention. Owing to its unique advantages, terahertz technology shows a broad future in fast, non-damaging detection, as well as many other fields. Terahertz technology combined with other complementary methods can be used to tackle many difficult practical problems that could not be solved before. One of the critical points for further development of practical terahertz detection methods is a good and reliable terahertz spectral database. We recently developed a BS (browser/server)-based terahertz spectral database, designing its main structure and functions to fulfil practical requirements. The database now includes more than 240 items, with spectral information collected from three sources: (1) citation from other terahertz spectral databases abroad; (2) published literature; and (3) spectral data measured in our laboratory. The present paper introduces the basic structure and fundamental functions of the terahertz spectral database developed in our laboratory. One of the key functions of this THz database is the calculation of optical parameters: parameters including the absorption coefficient and refractive index can be calculated from input THz time-domain spectra. The other main functions and search methods of the browser/server-based terahertz spectral database are also discussed. The database search system provides users with convenient functions including user registration, inquiry, display of spectral figures and molecular structures, spectral matching, etc. The THz database system provides an on-line search function for registered users. Registered users can compare an input THz spectrum with the spectra in the database and, based on the resulting correlation coefficient, perform the search quickly and conveniently. Our terahertz spectral database can be accessed at http://www.teralibrary.com. The proposed terahertz spectral database contains spectral information only so far and will be improved in the future. We hope this terahertz spectral database can provide users with powerful, convenient, and highly efficient functions, and promote broader applications of terahertz technology.
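The "calculation of optical parameters" mentioned above is, in standard THz time-domain spectroscopy, done from the complex transfer function between a sample pulse and a reference pulse: the phase gives the refractive index and the amplitude gives the absorption coefficient. A minimal sketch on simulated pulses (not the database's code; the sample thickness d, the neglect of Fresnel losses, and all pulse parameters are assumptions):

```python
import numpy as np

c = 3.0e8                           # speed of light, m/s
d = 1.0e-3                          # assumed sample thickness: 1 mm
n_true, alpha_true = 1.5, 2000.0    # illustrative index and absorption (1/m)

# Simulate a reference THz pulse and a transmitted sample pulse:
# the sample delays the pulse by (n-1)d/c and attenuates the field
# by exp(-alpha*d/2).
dt, N = 1e-14, 4096
t = np.arange(N) * dt
ref = np.exp(-((t - 5e-12) / 0.3e-12) ** 2) * np.cos(2 * np.pi * 1e12 * (t - 5e-12))
shift = int(round((n_true - 1) * d / c / dt))            # optical delay, samples
sam = np.exp(-alpha_true * d / 2) * np.roll(ref, shift)  # delayed, attenuated

# Transfer function H(w) = E_sam(w) / E_ref(w) in the frequency domain.
f = np.fft.rfftfreq(N, dt)
H = np.fft.rfft(sam) / np.fft.rfft(ref)
w = 2 * np.pi * f
phi = np.unwrap(np.angle(H))

band = (f > 0.3e12) & (f < 2.0e12)                   # usable bandwidth
n_est = 1.0 - c * phi[band] / (w[band] * d)          # n(w) from the phase
alpha_est = -(2.0 / d) * np.log(np.abs(H[band]))     # alpha(w) from the amplitude

print(round(float(np.median(n_est)), 2), round(float(np.median(alpha_est))))
```

On this synthetic data the recovered median index and absorption match the simulated values; with measured spectra, windowing, phase unwrapping and Fresnel corrections would all need more care.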

  15. Relational Data Bases--Are You Ready?

    ERIC Educational Resources Information Center

    Marshall, Dorothy M.

    1989-01-01

    Migrating from a traditional to a relational database technology requires more than traditional project management techniques. An overview of what to consider before migrating to relational database technology is presented. Leadership, staffing, vendor support, hardware, software, and application development are discussed. (MLW)

  16. Solar Sail Propulsion Technology Readiness Level Database

    NASA Technical Reports Server (NTRS)

    Adams, Charles L.

    2004-01-01

The NASA In-Space Propulsion Technology (ISPT) Projects Office has been sponsoring two solar sail system design and development hardware demonstration activities over the past 20 months. Able Engineering Company (AEC) of Goleta, CA is leading one team and L Garde, Inc. of Tustin, CA is leading the other team. Component, subsystem and system fabrication and testing have been completed successfully. The goal of these activities is to advance the technology readiness level (TRL) of solar sail propulsion from 3 towards 6 by 2006. These activities will culminate in the deployment and testing of 20-meter solar sail system ground demonstration hardware in the 30-meter-diameter thermal-vacuum chamber at NASA Glenn's Plum Brook Station in 2005. This paper will describe the features of a computer database system that documents the results of the solar sail development activities to date. Illustrations of the hardware components and systems, test results, analytical models, the relevant space environment definition and the current TRL assessment, as stored and manipulated within the database, are presented. This database could serve as a central repository for all data related to the advancement of solar sail technology sponsored by the ISPT, providing an up-to-date assessment of the TRL of this technology. Current plans are to eventually make the database available to the solar sail community through the Space Transportation Information Network (STIN).

  17. Human Systems Priority Steering Council

    DTIC Science & Technology

    2011-11-08

NDIA 8th Annual Disruptive Technologies Conference, 8 November 2011. Distribution Statement A: Approved for public release; distribution is unlimited.

  18. Position-sensitive radiation monitoring (surface contamination monitor). Innovative technology summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1999-06-01

The Shonka Research Associates, Inc. Position-Sensitive Radiation Monitor both detects surface radiation and automatically prepares an electronic survey map/report of the surveyed area. The electronically recorded map can be downloaded to a personal computer for review, and a map/report can be generated for inclusion in work packages. Switching from beta-gamma detection to alpha detection is relatively simple and entails moving a switch position to alpha and adjusting the voltage level to an alpha detection level. No field calibration is required when switching from beta-gamma to alpha detection. The system can be used for free-release surveys because it meets the federal detection-level sensitivity limits required for surface survey instrumentation. This technology is superior to the traditionally used floor contamination monitor (FCM) and hand-held survey instrumentation because it can precisely register locations of radioactivity and accurately correlate contamination levels to specific locations. Additionally, it can collect and store continuous radiological data in database format, which can be used to produce real-time imagery as well as automated graphics of survey data. Its flexible design can accommodate a variety of detectors. The cost of the innovative technology is 13% to 57% lower than traditional methods. This technology is suited for radiological surveys of flat surfaces at US Department of Energy (DOE) nuclear facility decontamination and decommissioning (D and D) sites or similar public or commercial sites.

  19. A Database for Decision-Making in Training and Distributed Learning Technology

    DTIC Science & Technology

    1998-04-01

    developer must answer these questions: ♦ Who will develop the courseware? Should we outsource? ♦ What media should we use? How much will it cost? ♦ What...to develop, the database can be useful for answering staffing questions and planning transitions to technology-assisted courses. The database...of distributed learning curricula in comparison to traditional methods. To develop a military-wide distributed learning plan, the existing course

  20. PDB-wide collection of binding data: current status of the PDBbind database.

    PubMed

    Liu, Zhihai; Li, Yan; Han, Li; Li, Jie; Liu, Jie; Zhao, Zhixiong; Nie, Wei; Liu, Yuchen; Wang, Renxiao

    2015-02-01

    Molecular recognition between biological macromolecules and organic small molecules plays an important role in various life processes. Both structural information and binding data of biomolecular complexes are indispensable for depicting the underlying mechanism in such an event. The PDBbind database was created to collect experimentally measured binding data for the biomolecular complexes throughout the Protein Data Bank (PDB). It thus provides the linkage between structural information and energetic properties of biomolecular complexes, which is especially desirable for computational studies or statistical analyses. Since its first public release in 2004, the PDBbind database has been updated on an annual basis. The latest release (version 2013) provides experimental binding affinity data for 10,776 biomolecular complexes in PDB, including 8302 protein-ligand complexes and 2474 other types of complexes. In this article, we will describe the current methods used for compiling PDBbind and the updated status of this database. We will also review some typical applications of PDBbind published in the scientific literature. All contents of this database are freely accessible at the PDBbind-CN Web server at http://www.pdbbind-cn.org/. wangrx@mail.sioc.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
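The linkage PDBbind provides between PDB structures and binding affinities is distributed as plain-text index files. As a minimal sketch of consuming such data, the following parses a simplified index whose column layout (PDB code, resolution, year, -log(Kd/Ki)) is assumed for illustration and is not the exact PDBbind file format:

```python
# Sketch of parsing a PDBbind-style plain-text index file.
# The column layout (PDB code, resolution, year, -log(Kd/Ki)) is an
# assumption for illustration, not the exact PDBbind format.

def parse_index(text):
    """Parse index lines into records, skipping '#' comment lines."""
    records = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        code, resolution, year, neg_log_k = line.split()[:4]
        records.append({
            "pdb": code,
            "resolution": float(resolution),
            "year": int(year),
            "neg_log_k": float(neg_log_k),  # higher = tighter binding
        })
    return records

sample = """\
# PDB  resol.  year  -log(Kd/Ki)
1abc   1.80    2005  6.52
2xyz   2.10    2011  8.97
"""

entries = parse_index(sample)
tightest = max(entries, key=lambda r: r["neg_log_k"])
print(tightest["pdb"])  # the complex with the highest affinity
```

Records parsed this way can feed directly into the kind of statistical analyses or scoring-function benchmarks the abstract mentions.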

  1. THE NASA AMES PAH IR SPECTROSCOPIC DATABASE VERSION 2.00: UPDATED CONTENT, WEB SITE, AND ON(OFF)LINE TOOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boersma, C.; Mattioda, A. L.; Allamandola, L. J.

    A significantly updated version of the NASA Ames PAH IR Spectroscopic Database, the first major revision since its release in 2010, is presented. The current version, version 2.00, contains 700 computational and 75 experimental spectra compared, respectively, with 583 and 60 in the initial release. The spectra span the 2.5-4000 μm (4000-2.5 cm⁻¹) range. New tools are available on the site that allow one to analyze spectra in the database and compare them with imported astronomical spectra, as well as a suite of IDL object classes (a collection of programs utilizing IDL's object-oriented programming capabilities), called the AmesPAHdbIDLSuite, that permits offline analysis. Most noteworthy among the additions are the extension of the computational spectroscopic database to include a number of significantly larger polycyclic aromatic hydrocarbons (PAHs), the ability to visualize the molecular atomic motions corresponding to each vibrational mode, and a new tool that allows one to perform a non-negative least-squares fit of an imported astronomical spectrum with PAH spectra in the computational database. Finally, a methodology is described in the Appendix, and implemented using the AmesPAHdbIDLSuite, that allows the user to enforce charge balance during the fitting procedure.
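The non-negative least-squares fit mentioned above decomposes an observed spectrum into a non-negative mixture of database template spectra. A minimal stand-in sketch, using synthetic Gaussian "template" spectra and simple multiplicative updates rather than the suite's actual NNLS solver:

```python
import numpy as np

# Toy non-negative least-squares fit of an "observed" spectrum as a
# mixture of template spectra, in the spirit of the database's NNLS
# fitting tool. The template spectra are synthetic Gaussian bands, and
# multiplicative updates stand in for a library NNLS solver.

wavenumber = np.linspace(500.0, 2000.0, 300)

def band(center, width):
    """A synthetic Gaussian emission band (stand-in for a PAH spectrum)."""
    return np.exp(-((wavenumber - center) / width) ** 2)

templates = np.column_stack([band(800.0, 60.0), band(1600.0, 90.0)])
observed = templates @ np.array([0.3, 0.7])  # known mixture

# Multiplicative updates keep the fitted weights non-negative throughout.
w = np.full(templates.shape[1], 0.5)
AtA = templates.T @ templates
Atb = templates.T @ observed
for _ in range(5000):
    w *= Atb / (AtA @ w + 1e-12)

print(np.round(w, 3))  # close to the true mixture [0.3, 0.7]
```

A real fit would substitute the database's computational PAH spectra for the Gaussian templates; the non-negativity constraint is what makes the recovered weights interpretable as abundances.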

  2. WEB-BASED DATABASE ON RENEWAL TECHNOLOGIES ...

    EPA Pesticide Factsheets

    As U.S. utilities continue to shore up their aging infrastructure, renewal needs now represent over 43% of annual expenditures compared to new construction for drinking water distribution and wastewater collection systems (Underground Construction [UC], 2016). An increased understanding of renewal options will ultimately assist drinking water utilities in reducing water loss and help wastewater utilities to address infiltration and inflow issues in a cost-effective manner. It will also help to extend the service lives of both drinking water and wastewater mains. This research effort involved collecting case studies on the use of various trenchless pipeline renewal methods and providing the information in an online searchable database. The overall objective was to further support technology transfer and information sharing regarding emerging and innovative renewal technologies for water and wastewater mains. The result of this research is a Web-based, searchable database that utility personnel can use to obtain technology performance and cost data, as well as case study references. The renewal case studies include: technologies used; the conditions under which the technology was implemented; costs; lessons learned; and utility contact information. The online database also features a data mining tool for automated review of the technologies selected and cost data. Based on a review of the case study results and industry data, several findings are presented on trenchless renewal technologies.

  3. Nuclear science abstracts (NSA) database 1948--1974 (on the Internet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Nuclear Science Abstracts (NSA) is a comprehensive abstract and index collection of the international nuclear science and technology literature for the period 1948 through 1976. Included are scientific and technical reports of the US Atomic Energy Commission, US Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Coverage of the literature since 1976 is provided by the Energy Science and Technology Database. Approximately 25% of the records in the file contain abstracts. These are from the following volumes of the print Nuclear Science Abstracts: Volumes 12--18, Volume 29, and Volume 33. The database contains over 900,000 bibliographic records. All aspects of nuclear science and technology are covered, including: Biomedical Sciences; Metals, Ceramics, and Other Materials; Chemistry; Nuclear Materials and Waste Management; Environmental and Earth Sciences; Particle Accelerators; Engineering; Physics; Fusion Energy; Radiation Effects; Instrumentation; Reactor Technology; Isotope and Radiation Source Technology. The database includes all records contained in Volume 1 (1948) through Volume 33 (1976) of the printed version of Nuclear Science Abstracts (NSA). This worldwide coverage includes books, conference proceedings, papers, patents, dissertations, engineering drawings, and journal literature. This database is now available for searching through the GOV. Research Center (GRC) service. GRC is a single online web-based search service to well-known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.

  4. RNA-Seq Analysis of Cocos nucifera: Transcriptome Sequencing and De Novo Assembly for Subsequent Functional Genomics Approaches

    PubMed Central

    Xia, Wei; Mason, Annaliese S.; Xia, Zhihui; Qiao, Fei; Zhao, Songlin; Tang, Haoru

    2013-01-01

    Background: Cocos nucifera (coconut), a member of the Arecaceae family, is an economically important woody palm grown in tropical regions. Despite its agronomic importance, previous germplasm assessment studies have relied solely on morphological and agronomical traits. Molecular biology techniques have been scarcely used in assessment of genetic resources and for improvement of important agronomic and quality traits in Cocos nucifera, mostly due to the absence of available sequence information. Methodology/Principal Findings: To provide basic information for molecular breeding and further molecular biological analysis in Cocos nucifera, we applied RNA-seq technology and de novo assembly to gain a global overview of the Cocos nucifera transcriptome from mixed tissue samples. Using Illumina sequencing, we obtained 54.9 million short reads and conducted de novo assembly to obtain 57,304 unigenes with an average length of 752 base pairs. Sequence comparison between assembled unigenes and released cDNA sequences of Cocos nucifera and Elaeis guineensis indicated that the assembled sequences were of high quality. Approximately 99.9% of unigenes were novel compared to the released coconut EST sequences. Using BLASTX, 68.2% of unigenes were successfully annotated based on the Genbank non-redundant (Nr) protein database. The annotated unigenes were then further classified using the Gene Ontology (GO), Clusters of Orthologous Groups (COG) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases. Conclusions/Significance: Our study provides a large quantity of novel genetic information for Cocos nucifera. This information will act as a valuable resource for further molecular genetic studies and breeding in coconut, as well as for isolation and characterization of functional genes involved in different biochemical pathways in this important tropical crop species. PMID:23555859

  5. RNA-Seq analysis of Cocos nucifera: transcriptome sequencing and de novo assembly for subsequent functional genomics approaches.

    PubMed

    Fan, Haikuo; Xiao, Yong; Yang, Yaodong; Xia, Wei; Mason, Annaliese S; Xia, Zhihui; Qiao, Fei; Zhao, Songlin; Tang, Haoru

    2013-01-01

    Cocos nucifera (coconut), a member of the Arecaceae family, is an economically important woody palm grown in tropical regions. Despite its agronomic importance, previous germplasm assessment studies have relied solely on morphological and agronomical traits. Molecular biology techniques have been scarcely used in assessment of genetic resources and for improvement of important agronomic and quality traits in Cocos nucifera, mostly due to the absence of available sequence information. To provide basic information for molecular breeding and further molecular biological analysis in Cocos nucifera, we applied RNA-seq technology and de novo assembly to gain a global overview of the Cocos nucifera transcriptome from mixed tissue samples. Using Illumina sequencing, we obtained 54.9 million short reads and conducted de novo assembly to obtain 57,304 unigenes with an average length of 752 base pairs. Sequence comparison between assembled unigenes and released cDNA sequences of Cocos nucifera and Elaeis guineensis indicated that the assembled sequences were of high quality. Approximately 99.9% of unigenes were novel compared to the released coconut EST sequences. Using BLASTX, 68.2% of unigenes were successfully annotated based on the Genbank non-redundant (Nr) protein database. The annotated unigenes were then further classified using the Gene Ontology (GO), Clusters of Orthologous Groups (COG) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases. Our study provides a large quantity of novel genetic information for Cocos nucifera. This information will act as a valuable resource for further molecular genetic studies and breeding in coconut, as well as for isolation and characterization of functional genes involved in different biochemical pathways in this important tropical crop species.

  6. The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog).

    PubMed

    MacArthur, Jacqueline; Bowler, Emily; Cerezo, Maria; Gil, Laurent; Hall, Peggy; Hastings, Emma; Junkins, Heather; McMahon, Aoife; Milano, Annalisa; Morales, Joannella; Pendlington, Zoe May; Welter, Danielle; Burdett, Tony; Hindorff, Lucia; Flicek, Paul; Cunningham, Fiona; Parkinson, Helen

    2017-01-04

    The NHGRI-EBI GWAS Catalog has provided data from published genome-wide association studies since 2008. In 2015, the database was redesigned and relocated to EMBL-EBI. The new infrastructure includes a new graphical user interface (www.ebi.ac.uk/gwas/), ontology supported search functionality and an improved curation interface. These developments have improved the data release frequency by increasing automation of curation and providing scaling improvements. The range of available Catalog data has also been extended with structured ancestry and recruitment information added for all studies. The infrastructure improvements also support scaling for larger arrays, exome and sequencing studies, allowing the Catalog to adapt to the needs of evolving study design, genotyping technologies and user needs in the future. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Senoo, Tetsuo

    As computer technology, communication technology and others have progressed, many corporations have come to place the construction and use of their own databases at the center of their information activities and to develop those activities in new directions. This paper considers how information management in a corporation is affected by changing management and technology environments, and clarifies and generalizes what in-house databases should be constructed and utilized from the viewpoints of requirements to be met, types and forms of information handled, indexing, use type and frequency, evaluation method and so on. The author outlines an information system of Matsushita called MATIS (Matsushita Technical Information System) as an actual example, and describes the present status of, and some points to keep in mind when, constructing and utilizing the REP, BOOK and SYMP databases.

  8. International Soil Carbon Network (ISCN) Database v3-1

    DOE Data Explorer

    Nave, Luke [University of Michigan] (ORCID:0000000182588335); Johnson, Kris [USDA-Forest Service]; van Ingen, Catharine [Microsoft Research]; Agarwal, Deborah [Lawrence Berkeley National Laboratory] (ORCID:0000000150452396); Humphrey, Marty [University of Virginia]; Beekwilder, Norman [University of Virginia]

    2016-01-01

    The ISCN is an international scientific community devoted to the advancement of soil carbon research. The ISCN manages an open-access, community-driven soil carbon database. This is version 3-1 of the ISCN Database, released in December 2015. It gathers 38 separate dataset contributions, totalling 67,112 sites with data from 71,198 soil profiles and 431,324 soil layers. For more information about the ISCN, its scientific community and resources, data policies and partner networks visit: http://iscn.fluxdata.org/.

  9. 77 FR 71089 - Pilot Loading of Aeronautical Database Updates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-29

    ... the use of newer systems and data-transfer mechanisms such as those employing wireless technology. In... which enables wireless updating of systems and databases. The current regulation does not accommodate... maintenance); Recordkeeping requirements; Training for pilots; Technological advancements in data-transfer...

  10. LymPHOS 2.0: an update of a phosphosite database of primary human T cells

    PubMed Central

    Nguyen, Tien Dung; Vidal-Cortes, Oriol; Gallardo, Oscar; Abian, Joaquin; Carrascal, Montserrat

    2015-01-01

    LymPHOS is a web-oriented database containing peptide and protein sequences and spectrometric information on the phosphoproteome of primary human T-lymphocytes. Current release 2.0 contains 15,566 phosphorylation sites from 8273 unique phosphopeptides and 4937 proteins, which corresponds to a 45-fold increase over the original database description. It now includes quantitative data on phosphorylation changes after time-dependent treatment with activators of the TCR-mediated signal transduction pathway. Sequence data quality has also been improved with the use of multiple search engines for database searching. LymPHOS can be publicly accessed at http://www.lymphos.org. Database URL: http://www.lymphos.org. PMID:26708986

  11. JICST Factual Database JICST DNA Database

    NASA Astrophysics Data System (ADS)

    Shirokizawa, Yoshiko; Abe, Atsushi

    Japan Information Center of Science and Technology (JICST) has started the on-line service of DNA database in October 1988. This database is composed of EMBL Nucleotide Sequence Library and Genetic Sequence Data Bank. The authors outline the database system, data items and search commands. Examples of retrieval session are presented.

  12. Investigation of bioaerosols released from swine farms using conventional and alternative waste treatment and management technologies

    USGS Publications Warehouse

    Ko, G.; Simmons, O. D.; Likirdopulos, C.A.; Worley-Davis, L.; Williams, M.; Sobsey, M.D.

    2008-01-01

    Microbial air pollution from concentrated animal feeding operations (CAFOs) has raised concerns about potential public health and environmental impacts. We investigated the levels of bioaerosols released from two swine farms using conventional lagoon-sprayfield technology and ten farms using alternative waste treatment and management technologies in the United States. In total, 424 microbial air samples taken at the 12 CAFOs were analyzed for several indicator and pathogenic microorganisms, including culturable bacteria and fungi, fecal coliform, Escherichia coli, Clostridium perfringens, bacteriophage, and Salmonella. At all of the investigated farms, bacterial concentrations at the downwind boundary were higher than those at the upwind boundary, suggesting that the farms are sources of microbial air contamination. In addition, fecal indicator microorganisms were found more frequently near barns and treatment technology sites than upwind or downwind of the farms. Approximately 4.5% (19/424), 1.2% (5/424), 22.2% (94/424), and 12.3% (53/424) of samples were positive for fecal coliform, E. coli, Clostridium, and total coliphage, respectively. Based on statistical comparison of airborne fecal indicator concentrations at alternative treatment technology farms compared to control farms with conventional technology, three alternative waste treatment technologies appear to perform better at reducing the airborne release of fecal indicator microorganisms during on-farm treatment and management processes. These results demonstrate that airborne microbial contaminants are released from swine farms and pose possible exposure risks to farm workers and nearby neighbors. However, the release of airborne microorganisms appears to decrease significantly through the use of certain alternative waste management and treatment technologies. © 2008 American Chemical Society.

  13. Risk analysis of technological hazards: Simulation of scenarios and application of a local vulnerability index.

    PubMed

    Sanchez, E Y; Represa, S; Mellado, D; Balbi, K B; Acquesta, A D; Colman Lerner, J E; Porta, A A

    2018-06-15

    The potential impact of a technological accident can be assessed by risk estimation. Taking this into account, the latent or potential condition can be warned of and mitigated. In this work we propose a methodology to estimate the risk of technological hazards, focused on two components. The first is the processing of meteorological databases to define the most probable and conservative study scenario, and the second is the application of a local social vulnerability index to classify the population. In this case study, the risk was estimated for a hypothetical release of liquefied ammonia in a meat-packing industry in the city of La Plata, Argentina. The method consists of integrating the toxic threat zone simulated with ALOHA software and the layer of sociodemographic classification of the affected population. The results show the areas associated with higher risks of exposure to ammonia, which are worth addressing for the prevention of disasters in the region. Advantageously, this systemic approach is methodologically flexible, as it can be applied in various scenarios based on the available information on both the exposed population and the local meteorology. Furthermore, this methodology optimizes the processing of the input data and its calculation. Copyright © 2018 Elsevier B.V. All rights reserved.
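The overlay described, combining a modeled threat zone with a vulnerability classification, can be caricatured in a few lines. All zone names, threat scores and vulnerability classes below are invented for illustration; the study itself uses ALOHA plume output and a local social vulnerability index:

```python
# Toy sketch of the two-layer overlay: combine a modeled threat level
# with a social-vulnerability class per zone to rank areas by risk.
# Zone names, threat scores and vulnerability classes are invented.

threat = {            # modeled toxic-concentration score per zone
    "zone_a": 0.9,
    "zone_b": 0.4,
    "zone_c": 0.1,
}
vulnerability = {     # sociodemographic class (1 = low, 3 = high)
    "zone_a": 2,
    "zone_b": 3,
    "zone_c": 1,
}

# A simple product combines the two layers into a single risk score.
risk = {zone: threat[zone] * vulnerability[zone] for zone in threat}
ranked = sorted(risk, key=risk.get, reverse=True)
print(ranked)  # zones ordered from highest to lowest estimated risk
```

In the real workflow each "zone" would be a GIS polygon from the plume model intersected with census-derived vulnerability layers, but the risk-as-hazard-times-vulnerability combination is the same idea.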

  14. Scalable Power-Component Models for Concept Testing

    DTIC Science & Technology

    2011-08-17

    Scalable Power-Component Models for Concept Testing, Mazzola, et al. UNCLASSIFIED: Dist A. Approved for public release. Presented at the 2011 NDIA Ground Vehicle Systems Engineering and Technology Symposium (GVSETS).

  15. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...
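The interchange the rule relies on, one database tool reading tables created by another, can be sketched without an ODBC driver manager by copying a table between two independent SQLite databases through Python's DB-API; the table and column names are invented for illustration:

```python
import sqlite3

# The regulation's point is interchange: ODBC lets one database package
# import data created by another. As a stand-in (real ODBC requires a
# driver manager), this copies a table between two independent SQLite
# databases via Python's DB-API. Table and column names are invented.

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

# Populate the "source" database, as if produced by one software package.
src.execute("CREATE TABLE evidence (item TEXT, amount REAL)")
src.executemany("INSERT INTO evidence VALUES (?, ?)",
                [("track-miles", 120.5), ("fuel-cost", 9800.0)])

# Import the rows into the "destination" database for evaluation.
dst.execute("CREATE TABLE evidence (item TEXT, amount REAL)")
rows = src.execute("SELECT item, amount FROM evidence").fetchall()
dst.executemany("INSERT INTO evidence VALUES (?, ?)", rows)

count = dst.execute("SELECT COUNT(*) FROM evidence").fetchone()[0]
print(count)  # both rows are now accessible from the second database
```

With an actual ODBC driver the same DB-API calls would run against whatever package produced the filing, which is exactly the manipulability the rule requires.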

  16. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  17. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  18. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  19. Miniature stick-packaging--an industrial technology for pre-storage and release of reagents in lab-on-a-chip systems.

    PubMed

    van Oordt, Thomas; Barb, Yannick; Smetana, Jan; Zengerle, Roland; von Stetten, Felix

    2013-08-07

    Stick-packaging of goods in tubular-shaped composite-foil pouches has become a popular technology for food and drug packaging. We miniaturized stick-packaging for use in lab-on-a-chip (LOAC) systems to pre-store and on-demand release the liquid and dry reagents in a volume range of 80-500 μl. An integrated frangible seal enables the pressure-controlled release of reagents and simplifies the layout of LOAC systems, thereby making the package a functional microfluidic release unit. The frangible seal is adjusted to defined burst pressures ranging from 20 to 140 kPa. The applied ultrasonic welding process allows the packaging of temperature sensitive reagents. Stick-packs have been successfully tested applying recovery tests (where 99% (STDV = 1%) of 250 μl pre-stored liquid is released), long-term storage tests (where there is loss of only <0.5% for simulated 2 years) and air transport simulation tests. The developed technology enables the storage of a combination of liquid and dry reagents. It is a scalable technology suitable for rapid prototyping and low-cost mass production.

  20. Comparison of ASTER Global Emissivity Database (ASTER-GED) With In-Situ Measurements in Italian Volcanic Areas

    NASA Astrophysics Data System (ADS)

    Silvestri, M.; Musacchio, M.; Buongiorno, M. F.; Amici, S.; Piscini, A.

    2015-12-01

    LP DAAC released the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Emissivity Database (GED) datasets on April 2, 2014. The database was developed by the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL), California Institute of Technology. The database includes land surface emissivities derived from ASTER data acquired over the contiguous United States, Africa, the Arabian Peninsula, Australia, Europe, and China. In this work we compare ground measurements of emissivity, acquired by means of a Micro-FTIR (Fourier Thermal Infrared spectrometer) instrument, with the ASTER emissivity map extracted from ASTER-GED and with the emissivity obtained from single ASTER scenes. Through this analysis we investigate the differences between the ASTER-GED dataset (a season-independent average over 2000-2008) and fall in-situ emissivity measurements, as well as the role of the different spatial resolutions of ASTER and MODIS (90 m and 1 km, respectively) by comparing both with the in-situ measurements. Possible differences can also be due to the different algorithms used for emissivity estimation: the Temperature and Emissivity Separation algorithm for the ASTER TIR bands (Gillespie et al., 1998) and the classification-based emissivity method (Snyder et al., 1998) for MODIS. In-situ emissivity measurements were collected during dedicated field campaigns on Mt. Etna volcano and the Solfatara of Pozzuoli. Gillespie, A. R., Matsunaga, T., Rokugawa, S., & Hook, S. J. (1998). Temperature and emissivity separation from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. IEEE Transactions on Geoscience and Remote Sensing, 36, 1113-1125. Snyder, W.C., Wan, Z., Zhang, Y., & Feng, Y.-Z. (1998). Classification-based emissivity for land surface temperature measurement from space. International Journal of Remote Sensing, 19, 2753-2574.

  1. Rice Annotation Project Database (RAP-DB): an integrative and interactive database for rice genomics.

    PubMed

    Sakai, Hiroaki; Lee, Sung Shin; Tanaka, Tsuyoshi; Numa, Hisataka; Kim, Jungsok; Kawahara, Yoshihiro; Wakimoto, Hironobu; Yang, Ching-chia; Iwamoto, Masao; Abe, Takashi; Yamada, Yuko; Muto, Akira; Inokuchi, Hachiro; Ikemura, Toshimichi; Matsumoto, Takashi; Sasaki, Takuji; Itoh, Takeshi

    2013-02-01

    The Rice Annotation Project Database (RAP-DB, http://rapdb.dna.affrc.go.jp/) has been providing a comprehensive set of gene annotations for the genome sequence of rice, Oryza sativa (japonica group) cv. Nipponbare. Since the first release in 2005, RAP-DB has been updated several times along with the genome assembly updates. Here, we present our newest RAP-DB based on the latest genome assembly, Os-Nipponbare-Reference-IRGSP-1.0 (IRGSP-1.0), which was released in 2011. We detected 37,869 loci by mapping transcript and protein sequences of 150 monocot species. To provide plant researchers with highly reliable and up to date rice gene annotations, we have been incorporating literature-based manually curated data, and 1,626 loci currently incorporate literature-based annotation data, including commonly used gene names or gene symbols. Transcriptional activities are shown at the nucleotide level by mapping RNA-Seq reads derived from 27 samples. We also mapped the Illumina reads of a Japanese leading japonica cultivar, Koshihikari, and a Chinese indica cultivar, Guangluai-4, to the genome and show alignments together with the single nucleotide polymorphisms (SNPs) and gene functional annotations through a newly developed browser, Short-Read Assembly Browser (S-RAB). We have developed two satellite databases, Plant Gene Family Database (PGFD) and Integrative Database of Cereal Gene Phylogeny (IDCGP), which display gene family and homologous gene relationships among diverse plant species. RAP-DB and the satellite databases offer simple and user-friendly web interfaces, enabling plant and genome researchers to access the data easily and facilitating a broad range of plant research topics.

  2. Human spaceflight technology needs-a foundation for JSC's technology strategy

    NASA Astrophysics Data System (ADS)

    Stecklein, J. M.

    Human space exploration has always been heavily influenced by goals to achieve a specific mission on a specific schedule. This approach drove rapid technology development, the rapidity of which added risks and became a major driver for costs and cost uncertainty. The National Aeronautics and Space Administration (NASA) is now approaching the extension of human presence throughout the solar system by balancing a proactive yet less schedule-driven development of technology with opportunistic scheduling of missions as the needed technologies are realized. This approach should provide cost effective, low risk technology development that will enable efficient and effective manned spaceflight missions. As a first step, the NASA Human Spaceflight Architecture Team (HAT) has identified a suite of critical technologies needed to support future manned missions across a range of destinations, including in cis-lunar space, near earth asteroid visits, lunar exploration, Mars moons, and Mars exploration. The challenge now is to develop a strategy and plan for technology development that efficiently enables these missions over a reasonable time period, without increasing technology development costs unnecessarily due to schedule pressure, and subsequently mitigating development and mission risks. NASA's Johnson Space Center (JSC), as the nation's primary center for human exploration, is addressing this challenge through an innovative approach in allocating Internal Research and Development funding to projects. The HAT Technology Needs (Tech Needs) Database has been developed to correlate across critical technologies and the NASA Office of Chief Technologist Technology Area Breakdown Structure (TABS). The TechNeeds Database illuminates that many critical technologies may support a single technical capability gap, that many HAT technology needs may map to a single TABS technology discipline, and that a single HAT technology need may map to multiple TABS technology disciplines. 
The TechNeeds Database greatly clarifies understanding of the complex relationships of critical technologies to mission and architecture element needs. Extensions to the core TechNeeds Database allow JSC to factor in and appropriately weight JSC core technology competencies, and considerations of commercialization potential and partnership potential. The inherent coupling among these, along with an appropriate importance weighting, has provided an initial prioritization for allocation of technology development research funding at JSC. The HAT Technology Needs Database, with a core of built-in reports, clarifies and communicates complex technology needs for cost effective human space exploration so that an organization seeking to assure that research prioritization supports human spaceflight of the future can be successful.

  3. Human Spaceflight Technology Needs - A Foundation for JSC's Technology Strategy

    NASA Technical Reports Server (NTRS)

    Stecklein, Jonette M.

    2013-01-01

    Human space exploration has always been heavily influenced by goals to achieve a specific mission on a specific schedule. This approach drove rapid technology development, the rapidity of which adds risks as well as provides a major driver for costs and cost uncertainty. The National Aeronautics and Space Administration (NASA) is now approaching the extension of human presence throughout the solar system by balancing a proactive yet less schedule-driven development of technology with opportunistic scheduling of missions as the needed technologies are realized. This approach should provide cost effective, low risk technology development that will enable efficient and effective manned spaceflight missions. As a first step, the NASA Human Spaceflight Architecture Team (HAT) has identified a suite of critical technologies needed to support future manned missions across a range of destinations, including in cis-lunar space, near earth asteroid visits, lunar exploration, Mars moons, and Mars exploration. The challenge now is to develop a strategy and plan for technology development that efficiently enables these missions over a reasonable time period, without increasing technology development costs unnecessarily due to schedule pressure, and subsequently mitigating development and mission risks. NASA's Johnson Space Center (JSC), as the nation's primary center for human exploration, is addressing this challenge through an innovative approach in allocating Internal Research and Development funding to projects. The HAT Technology Needs (TechNeeds) Database has been developed to correlate across critical technologies and the NASA Office of Chief Technologist Technology Area Breakdown Structure (TABS). 
The TechNeeds Database illuminates that many critical technologies may support a single technical capability gap, that many HAT technology needs may map to a single TABS technology discipline, and that a single HAT technology need may map to multiple TABS technology disciplines. The TechNeeds Database greatly clarifies the complex relationships of critical technologies to mission and architecture element needs. Extensions to the core TechNeeds Database allow JSC to factor in and appropriately weight JSC Center Core Technology Competencies, as well as considerations of commercialization and partnership potential. The inherent coupling among these, along with an appropriate importance weighting, has provided an initial prioritization for allocating technology development research funding at JSC. The HAT Technology Needs Database, with a core of built-in reports, clarifies and communicates complex technology needs for cost-effective human space exploration, so that an organization seeking to ensure that research prioritization supports the human spaceflight of the future can be successful.

  4. GOSSS-DR1: The First Data Release of the Galactic O-star Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Sota, Alfredo; Maíz Apellániz, Jesús; Barbá, Rodolfo H.; Walborn, Nolan R.; Alfaro, Emilio J.; Gamen, Roberto C.; Morrell, Nidia I.; Arias, Julia I.; Gallego Calvente, A. T.

    2013-06-01

    Coinciding with this meeting, we are publishing the first data release of GOSSS. This release contains [a] revised spectral classifications and [b] blue-violet R~2500 spectra in FITS format for ~400 Galactic O stars, including all those brighter than B=8. DR1 (and future releases) will take place through GOSC, the Galactic O-Star Catalog (http://gosc.iaa.es), which will be updated for the occasion. Since 2011, GOSC has run on a MySQL database and allows queries based on coordinates, spectral class, photometry, and other parameters. Future data releases will include the rest of the stars observed in GOSSS (currently 1521, with ~1000 more planned in the next two years).
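
The kinds of catalog queries described above (by spectral class, photometry, coordinates) can be sketched against a small stand-in table. The schema, star names, and magnitudes below are illustrative assumptions, not the actual GOSC schema, and sqlite3 stands in for MySQL:

```python
import sqlite3

# Illustrative, in-memory stand-in for a star-catalog table; the
# column names and rows are assumptions, not the real GOSC schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE stars (
    name TEXT, ra_deg REAL, dec_deg REAL,
    spectral_class TEXT, b_mag REAL)""")
conn.executemany(
    "INSERT INTO stars VALUES (?, ?, ?, ?, ?)",
    [("zeta Oph", 249.29, -10.57, "O9.5V", 2.6),
     ("theta1 Ori C", 83.82, -5.39, "O7Vp", 5.1),
     ("HD 93129A", 160.94, -59.55, "O2If*", 7.3)],
)

# Query by spectral-class prefix with a brightness cut (B <= 8),
# mirroring the release criterion mentioned in the abstract.
rows = conn.execute(
    "SELECT name FROM stars WHERE spectral_class LIKE 'O%' AND b_mag <= 8 "
    "ORDER BY b_mag").fetchall()
print([r[0] for r in rows])
```

The same pattern extends naturally to coordinate-box constraints (`WHERE ra_deg BETWEEN ? AND ?`) of the sort a web catalog front end would issue.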

  5. Databases in the Central Government : State-of-the-art and the Future

    NASA Astrophysics Data System (ADS)

    Ohashi, Tomohiro

    The Management and Coordination Agency of the Prime Minister's Office conducted a questionnaire survey of all Japanese Ministries and Agencies in November 1985 on the status of databases produced, or planned to be produced, by the central government. According to the results, 132 databases had been produced across 19 Ministries and Agencies. Many of these are held by the Defence Agency, the Ministry of Construction, the Ministry of Agriculture, Forestry & Fisheries, and the Ministry of International Trade & Industry, in fields such as architecture and civil engineering, science and technology, R&D, agriculture, forestry, and fishery. However, only 39 percent of the produced databases are available to other Ministries and Agencies; the remaining 60 percent are unavailable, largely because they are in-house databases. The survey results are summarized, and the databases produced by the central government are introduced under the headings of (1) databases commonly used by all Ministries and Agencies, (2) integrated databases, (3) statistical databases, and (4) bibliographic databases. Future problems are also described from the viewpoints of technology development and the mutual use of databases.

  6. Integrating heterogeneous databases in clustered medical care environments using object-oriented technology

    NASA Astrophysics Data System (ADS)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

    The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data among applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena it is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into an integrated conceptual global model. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without compromising the autonomy of the underlying databases.
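
As a minimal sketch of this kind of integration (with hypothetical site schemas and field names, not the authors' actual object-oriented model), two site-specific records can be lifted into one global view:

```python
# Hypothetical site schemas: two clinics store similar patient data
# under different field names; a global view reconciles them.
cancer_center = {"pt_id": "C-17", "family_name": "Doe", "dob": "1961-04-02"}
diabetes_clinic = {"patient_no": "D-42", "surname": "Doe", "birth_date": "1961-04-02"}

# Per-source mappings from local field names to global attributes.
MAPPINGS = {
    "cancer_center": {"pt_id": "patient_id", "family_name": "last_name",
                      "dob": "date_of_birth"},
    "diabetes_clinic": {"patient_no": "patient_id", "surname": "last_name",
                        "birth_date": "date_of_birth"},
}

def to_global(record, source):
    """Translate a local record into the integrated global schema."""
    mapping = MAPPINGS[source]
    return {mapping[k]: v for k, v in record.items()}

a = to_global(cancer_center, "cancer_center")
b = to_global(diabetes_clinic, "diabetes_clinic")
# Both views now share one vocabulary, so records can be compared directly.
print(a["date_of_birth"] == b["date_of_birth"])  # True
```

The key design point is that each source keeps its own schema (autonomy) while queries run against the shared vocabulary.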

  7. Service management at CERN with Service-Now

    NASA Astrophysics Data System (ADS)

    Toteva, Z.; Alvarez Alonso, R.; Alvarez Granda, E.; Cheimariou, M.-E.; Fedorko, I.; Hefferman, J.; Lemaitre, S.; Clavo, D. Martin; Martinez Pedreira, P.; Pera Mira, O.

    2012-12-01

    The Information Technology (IT) and the General Services (GS) departments at CERN have decided to combine their extensive experience in support for IT and non-IT services towards a common goal: to bring the services closer to the end user based on Information Technology Infrastructure Library (ITIL) best practice. The collaborative efforts have so far produced definitions for the incident and request fulfilment processes, which are based on a unique two-dimensional service catalogue that combines both the user and support-team views of all services. After an extensive evaluation of the available industrial solutions, Service-now was selected as the tool to implement the CERN service-management processes. The initial release of the tool provided an attractive web portal for the users and successfully implemented two basic ITIL processes: incident management and request fulfilment. It also integrated with the CERN personnel databases and the LHC GRID ticketing system. Subsequent releases continued to integrate with other third-party tools, such as the facility management systems of CERN, and to implement new processes such as change management. Independently of those new development activities, it was decided to simplify the request fulfilment process in order to achieve easier acceptance by the CERN user community. We believe that, due to the high modularity of the Service-now tool, the parallel design of ITIL processes (e.g., event management) and non-ITIL processes (e.g., computer centre hardware management) will be easily achieved. This presentation describes the experience we have acquired and the techniques followed to achieve the CERN customization of the Service-now tool.

  8. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...-compatible format. All databases must be supported with adequate documentation on data attributes, SQL...

  9. A New Methodology for Systematic Exploitation of Technology Databases.

    ERIC Educational Resources Information Center

    Bedecarrax, Chantal; Huot, Charles

    1994-01-01

    Presents the theoretical aspects of a data analysis methodology that can help transform sequential raw data from a database into useful information, using the statistical analysis of patents as an example. Topics discussed include relational analysis and a technology watch approach. (Contains 17 references.) (LRW)

  10. Curriculum Connection. Take Technology Outdoors.

    ERIC Educational Resources Information Center

    Dean, Bruce Robert

    1992-01-01

    Technology can support hands-on science as elementary students use computers to formulate field guides to nature surrounding their school. Students examine other field guides; open databases for recording information; collect, draw, and identify plants, insects, and animals; enter data into the database; then generate a computerized field guide.…

  11. User Guide to the Aircraft Cumulative Probability Chart Template

    DTIC Science & Technology

    2009-07-01

    Defence Science and Technology Organisation (DSTO), AeroStructures Technologies, DSTO-TR-2332, 506 Lorimer St, Fishermans Bend, Victoria 3207, Australia. ABSTRACT: To ensure aircraft structural integrity is maintained to an acceptable level...cracking (or failure) which may be used to assess the life of aircraft structures. RELEASE LIMITATION: Approved for public release.

  12. Engineered Resilient Systems (ERS) S&T Priority Description and Roadmap

    DTIC Science & Technology

    2011-11-08

    Presented at the NDIA 8th Annual Disruptive Technologies Conference, November 8-9, 2011, Washington, DC. Distribution Statement A: Approved for public release; distribution is unlimited.

  13. Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information

    DTIC Science & Technology

    2017-11-01

    Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information, by Decetria Akole and Michael Chen. Approved for public release; distribution is unlimited.

  14. SCOPe: Manual Curation and Artifact Removal in the Structural Classification of Proteins - extended Database.

    PubMed

    Chandonia, John-Marc; Fox, Naomi K; Brenner, Steven E

    2017-02-03

    SCOPe (Structural Classification of Proteins-extended, http://scop.berkeley.edu) is a database of relationships between protein structures that extends the Structural Classification of Proteins (SCOP) database. SCOP is an expert-curated ordering of domains from the majority of proteins of known structure in a hierarchy according to structural and evolutionary relationships. SCOPe classifies the majority of protein structures released since SCOP development concluded in 2009, using a combination of manual curation and highly precise automated tools, aiming to have the same accuracy as fully hand-curated SCOP releases. SCOPe also incorporates and updates the ASTRAL compendium, which provides several databases and tools to aid in the analysis of the sequences and structures of proteins classified in SCOPe. SCOPe continues high-quality manual classification of new superfamilies, a key feature of SCOP. Artifacts such as expression tags are now separated into their own class, in order to distinguish them from the homology-based annotations in the remainder of the SCOPe hierarchy. SCOPe 2.06 contains 77,439 Protein Data Bank entries, double the 38,221 structures classified in SCOP. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  15. JEnsembl: a version-aware Java API to Ensembl data systems.

    PubMed

    Paterson, Trevor; Law, Andy

    2012-11-01

    The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor for embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes, and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive), thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).
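
A rough sketch of the version-aware mapping idea, in Python rather than Java for brevity; the table and column names are invented for illustration and are not the real Ensembl schema or the JEnsembl API:

```python
# Minimal sketch of version-aware schema mapping in the spirit of
# JEnsembl's configuration module: the schema version selects how a
# logical request is translated into SQL. Table/column names are
# hypothetical, not the actual Ensembl schema.
SCHEMA_MAP = {
    # schema version -> (table name, column holding the stable ID)
    "67": ("gene", "stable_id"),
    "48": ("gene_stable_id", "stable_id"),
}

def gene_id_query(schema_version):
    """Build the right SQL for whichever schema version a server runs."""
    table, column = SCHEMA_MAP[schema_version]
    return f"SELECT {column} FROM {table}"

# One client can thus talk to a current and an archived instance at once.
print(gene_id_query("67"))
print(gene_id_query("48"))
```

The point of keeping the mapping in versioned configuration rather than code is that supporting a new (or archived) release means adding an entry, not rewriting the client.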

  16. Novel Method for Recruiting Representative At-Risk Individuals into Cancer Prevention Trials: Online Health Risk Assessment in Employee Wellness Programs.

    PubMed

    Hui, Siu-Kuen Azor; Miller, Suzanne M; Hazuda, Leah; Engelman, Kimberly; Ellerbeck, Edward F

    2016-09-01

    Participation in cancer prevention trials (CPT) is lower than 3% among high-risk healthy individuals, and racial/ethnic minorities are the most under-represented. Novel recruitment strategies are therefore needed. Online health risk assessment (HRA) serves as a gateway component of nearly all employee wellness programs (EWPs) and may be a missed opportunity. This study aimed to explore employees' interest in, willingness toward, motivators for, and barriers to releasing their HRA responses to an external secure research database for recruitment purposes. We used qualitative research methods (focus group and individual interviews) to examine employees' interest and willingness in releasing their online HRA responses to an external, secure database to register as potential CPT participants. Fifteen structured interviews (40% of study participants were of racial/ethnic minority) were conducted, and responses reached saturation after four interviews. All employees showed interest and willingness to release their online HRA responses to register as potential CPT participants. Content analyses revealed that 91% of participants were motivated to do so, and the major motivators were to (1) obtain help in finding personally relevant prevention trials, (2) help people they know who are affected by cancer, and/or (3) increase knowledge about CPT. A subset of participants (45%) expressed barriers to releasing their HRA responses due to concerns about the credibility and security of the external database. Online HRA may be a feasible but underutilized recruitment method for cancer prevention trials. EWP-sponsored HRA shows promise for the development of a large, centralized registry of racially/ethnically representative potential CPT participants.

  17. Novel method for recruiting representative at-risk individuals into cancer prevention trials: on-line health risk assessment in employee wellness programs

    PubMed Central

    Hui, Siu-kuen Azor; Miller, Suzanne M.; Hazuda, Leah; Engelman, Kimberly; Ellerbeck, Edward F.

    2015-01-01

    Participation in cancer prevention trials (CPT) is lower than 3% among high-risk healthy individuals, and racial/ethnic minorities are the most under-represented. Novel recruitment strategies are therefore needed. On-line health risk assessment (HRA) serves as a gateway component of nearly all employee wellness programs (EWP) and may be a missed opportunity. This study aimed to explore employees' interest in, willingness toward, motivators for, and barriers to releasing their HRA responses to an external secure research database for recruitment purposes. We used qualitative research methods (focus group and individual interviews) to examine employees' interest and willingness in releasing their on-line HRA responses to an external, secure database to register as potential CPT participants. Fifteen structured interviews (40% of study participants were of racial/ethnic minority) were conducted, and responses reached saturation after four interviews. All employees showed interest and willingness to release their on-line HRA responses to register as potential CPT participants. Content analyses revealed that 91% of participants were motivated to do so, and the major motivators were to: 1) obtain help in finding personally relevant prevention trials, 2) help people they know who are affected by cancer, and/or 3) increase knowledge about CPT. A subset of participants (45%) expressed barriers to releasing their HRA responses due to concerns about the credibility and security of the external database. On-line HRA may be a feasible but underutilized recruitment method for cancer prevention trials. EWP-sponsored HRA shows promise for the development of a large, centralized registry of racially/ethnically representative potential CPT participants. PMID:26507744

  18. An international aerospace information system - A cooperative opportunity

    NASA Technical Reports Server (NTRS)

    Blados, Walter R.; Cotter, Gladys A.

    1992-01-01

    This paper presents for consideration new possibilities for uniting the various aerospace database efforts into a cooperative international aerospace database initiative that can optimize the cost-benefit equation for all members. The development of astronautics and aeronautics in individual nations has led to initiatives for national aerospace databases. Technological developments in information technology and science, as well as the reality of scarce resources, make it necessary to reconsider the mutually beneficial possibilities offered by cooperation and international resource sharing.

  19. Quality Attribute-Guided Evaluation of NoSQL Databases: A Case Study

    DTIC Science & Technology

    2015-01-16

    evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study. Keywords: NoSQL, distributed...technology, namely that of big data, software systems [1]. At the heart of big data systems are a collection of database technologies that are more...born organizations such as Google and Amazon [3][4], along with those of numerous other big data innovators, have created a variety of open source and

  20. Second NASA Technical Interchange Meeting (TIM): Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    ONeil, D. A.; Mankins, J. C.; Christensen, C. B.; Gresham, E. C.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS), a spreadsheet analysis tool suite, applies parametric equations for sizing and lifecycle cost estimation. Performance, operation, and programmatic data used by the equations come from a Technology Tool Box (TTB) database. In this second TTB Technical Interchange Meeting (TIM), technologists, system model developers, and architecture analysts discussed methods for modeling technology decisions in spreadsheet models, identified specific technology parameters, and defined detailed development requirements. This Conference Publication captures the consensus of the discussions and provides narrative explanations of the tool suite, the database, and applications of ATLAS within NASA's changing environment.
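
The parametric-equation-plus-parameter-database pattern can be sketched as follows; the equation form, technology names, and coefficients are illustrative assumptions, not actual ATLAS or TTB content:

```python
# Hedged sketch of a parametric sizing/cost estimate drawing its
# coefficients from a Technology Tool Box-style parameter store.
# All names and numbers below are invented for illustration.
TTB = {
    "solar_array": {"specific_mass_kg_per_kw": 10.0, "cost_per_kg": 50_000.0},
    "nuclear":     {"specific_mass_kg_per_kw": 25.0, "cost_per_kg": 80_000.0},
}

def size_and_cost(technology, power_kw):
    """Size a power system and estimate hardware cost parametrically."""
    params = TTB[technology]
    mass_kg = params["specific_mass_kg_per_kw"] * power_kw
    cost = mass_kg * params["cost_per_kg"]
    return mass_kg, cost

mass, cost = size_and_cost("solar_array", 20.0)
print(mass, cost)  # 200.0 10000000.0
```

Separating the equations from the parameter store is what lets analysts swap in updated technology data without touching the sizing model, which mirrors the TTB/spreadsheet split the abstract describes.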

  1. Health technology management: a database analysis as support of technology managers in hospitals.

    PubMed

    Miniati, Roberto; Dori, Fabrizio; Iadanza, Ernesto; Fregonara, Mario M; Gentili, Guido Biffi

    2011-01-01

    Technology management in healthcare must continually respond and adapt to new improvements in medical equipment. Multidisciplinary approaches that consider the interaction of different technologies, their use, and user skills are necessary in order to improve safety and quality. An easy and sustainable methodology is vital to Clinical Engineering (CE) services in healthcare organizations in order to define criteria for technology acquisition and replacement. This article underlines the critical aspects of technology management in hospitals by providing appropriate indicators for benchmarking CE services, referring exclusively to the maintenance database of the CE department at the Careggi Hospital in Florence, Italy.

  2. Hawai`i forest bird monitoring database: Database dictionary

    USGS Publications Warehouse

    Camp, Richard J.; Genz, Ayesha

    2017-01-01

    Between 1976 and 1981, the U.S. Fish and Wildlife Service (now U.S. Geological Survey – Pacific Island Ecosystems Research Center [USGS-PIERC]) conducted systematic surveys of forest birds and plant communities on all the main Hawaiian Islands, except O‘ahu, as part of the Hawai‘i Forest Bird Surveys (HFBS). Results of this monumental effort have guided conservation efforts and provided the basis for many plant and bird recovery plans and land acquisition decisions in Hawai‘i. Unfortunately, these estimates and range maps are now seriously outdated, hindering modern conservation decision-making and recovery planning. HFBIDP staff work closely with land managers and others to identify the location of bird populations in need of protection. In addition, HFBIDP is able to assess field collection methods, census areas, and survey frequency for their effectiveness. Survey and geographical data are refined and released in successive versions, each more inclusive, detailed, and accurate than the previous release. Incrementally releasing data gives land managers and survey coordinators reasonably good data to work with early on rather than waiting for the release of ‘perfect’ data, ‘perfectly’ analyzed. Consequently, summary results are available in a timely manner.

  3. Directory of Assistive Technology: Data Sources.

    ERIC Educational Resources Information Center

    Council for Exceptional Children, Reston, VA. Center for Special Education Technology.

    The annotated directory describes in detail both on-line and print databases in the area of assistive technology for individuals with disabilities. For each database, the directory provides the name, address, and telephone number of the sponsoring organization; disability areas served; number of hardware and software products; types of information…

  4. The Effect of Relational Database Technology on Administrative Computing at Carnegie Mellon University.

    ERIC Educational Resources Information Center

    Golden, Cynthia; Eisenberger, Dorit

    1990-01-01

    Carnegie Mellon University's decision to standardize its administrative system development efforts on relational database technology and structured query language is discussed and its impact is examined in one of its larger, more widely used applications, the university information system. Advantages, new responsibilities, and challenges of the…

  5. Utility-Scale Energy Technology Capacity Factors | Energy Analysis | NREL

    Science.gov Websites

    This chart indicates the range of recent capacity factor estimates for utility-scale technologies. For technology cost and performance estimates, please visit the Transparent Cost Database website for NREL's information regarding vehicles, biofuels, and electricity generation.

  6. Understanding transit accidents using the National Transit Database and the role of Transit Intelligent Vehicle Initiative Technology in reducing accidents

    DOT National Transportation Integrated Search

    2004-06-01

    This report documents the results of bus accident data analysis using the 2002 National Transit Database (NTD) and discusses the potential of using advanced technology being studied and developed under the U.S. Department of Transportation's (U.S. ...

  7. Physical Samples Linked Data in Action

    NASA Astrophysics Data System (ADS)

    Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.

    2017-12-01

    Most data and metadata related to physical samples currently reside in isolated relational databases driven by diverse data models. The challenge of sharing, interchanging, and integrating data across these different relational databases motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), the SPARQL query language, and the Web Ontology Language (OWL). In the last few years, we have released four knowledge graphs centered on physical samples: the System for Earth Sample Registration (SESAR), the USGS National Geochemical Database (NGDC), the Ocean Biogeographic Information System (OBIS), and the EarthChem Database. Currently the four knowledge graphs contain over 12 million facts (triples) about objects of interest to the geoscience domain. Choosing appropriate domain ontologies to represent the context of the data is the core of the whole work. The GeoLink ontology, developed by the EarthCube GeoLink project, was used at the top level to represent common concepts such as person, organization, and cruise. The physical sample ontology developed by the Interdisciplinary Earth Data Alliance (IEDA) and the Darwin Core vocabulary were used at the second level to describe details about geological samples and biological diversity. We also focused on finding and building the best tool chains to support the whole life cycle of publishing our linked data, including information retrieval, linked-data browsing, and data visualization. Currently, Morph, Virtuoso Server, LodView, LodLive, and YASGUI are employed for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Persistent digital identifiers are another main point we concentrated on. 
Open Researcher & Contributor IDs (ORCIDs), International Geo Sample Numbers (IGSNs), Global Research Identifier Database (GRID) IDs, and other persistent identifiers were used to link resources from the various graphs by person, sample, organization, cruise, etc. This work is supported by the EarthCube "GeoLink" project (NSF# ICER14-40221 and others) and the "USGS-IEDA Partnership to Support a Data Lifecycle Framework and Tools" project (USGS# G13AC00381).
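
A toy illustration of the triple-based linking described above; the identifier values and predicate names are made up, and a real deployment would use an RDF triplestore and SPARQL rather than this in-memory stand-in:

```python
# A toy RDF-style graph as (subject, predicate, object) triples,
# linking samples (IGSN), a person (ORCID), and a cruise. All
# identifiers and predicate names here are invented for illustration.
triples = [
    ("igsn:ABC123", "collectedBy", "orcid:0000-0002-1825-0097"),
    ("igsn:ABC123", "collectedOn", "cruise:KN195-05"),
    ("igsn:XYZ789", "collectedBy", "orcid:0000-0002-1825-0097"),
]

def query(graph, s=None, p=None, o=None):
    """SPARQL-like triple-pattern match: None acts as a wildcard."""
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# All samples collected by one researcher, found via the shared ORCID:
hits = query(triples, p="collectedBy", o="orcid:0000-0002-1825-0097")
print(sorted(t[0] for t in hits))  # ['igsn:ABC123', 'igsn:XYZ789']
```

Because the persistent identifier is the join key, the same pattern works across separately published graphs, which is the point of the linked-data approach.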

  8. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    USGS Publications Warehouse

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  9. The Gypsy Database (GyDB) of mobile genetic elements: release 2.0

    PubMed Central

    Llorens, Carlos; Futami, Ricardo; Covelli, Laura; Domínguez-Escribá, Laura; Viu, Jose M.; Tamarit, Daniel; Aguilar-Rodríguez, Jose; Vicente-Ripolles, Miguel; Fuster, Gonzalo; Bernet, Guillermo P.; Maumus, Florian; Munoz-Pomer, Alfonso; Sempere, Jose M.; Latorre, Amparo; Moya, Andres

    2011-01-01

    This article introduces the second release of the Gypsy Database of Mobile Genetic Elements (GyDB 2.0): a research project devoted to the evolutionary dynamics of viruses and transposable elements based on their phylogenetic classification (per lineage and protein domain). The Gypsy Database (GyDB) is a long-term project that is continuously progressing and that, owing to the high molecular diversity of mobile elements, needs to be completed in several stages. GyDB 2.0 has been powered with a wiki to allow other researchers to participate in the project. The current database stage and scope are long terminal repeat (LTR) retroelements and relatives. GyDB 2.0 is an update based on the analysis of Ty3/Gypsy, Retroviridae, Ty1/Copia and Bel/Pao LTR retroelements and the Caulimoviridae pararetroviruses of plants. Among other features, this update adds: (i) a variety of descriptions and reviews distributed in multiple web pages; (ii) protein-based phylogenies, where phylogenetic levels are assigned to distinct classified elements; (iii) a collection of multiple alignments, lineage-specific hidden Markov models and consensus sequences, called the GyDB collection; (iv) updated RefSeq databases and BLAST and HMM servers to facilitate sequence characterization of new LTR retroelement and caulimovirus queries; and (v) a bibliographic server. GyDB 2.0 is available at http://gydb.org. PMID:21036865

  10. The Gypsy Database (GyDB) of mobile genetic elements: release 2.0.

    PubMed

    Llorens, Carlos; Futami, Ricardo; Covelli, Laura; Domínguez-Escribá, Laura; Viu, Jose M; Tamarit, Daniel; Aguilar-Rodríguez, Jose; Vicente-Ripolles, Miguel; Fuster, Gonzalo; Bernet, Guillermo P; Maumus, Florian; Munoz-Pomer, Alfonso; Sempere, Jose M; Latorre, Amparo; Moya, Andres

    2011-01-01

    This article introduces the second release of the Gypsy Database of Mobile Genetic Elements (GyDB 2.0): a research project devoted to the evolutionary dynamics of viruses and transposable elements based on their phylogenetic classification (per lineage and protein domain). The Gypsy Database (GyDB) is a long-term project that is continuously progressing and that, owing to the high molecular diversity of mobile elements, needs to be completed in several stages. GyDB 2.0 has been powered with a wiki to allow other researchers to participate in the project. The current database stage and scope are long terminal repeat (LTR) retroelements and relatives. GyDB 2.0 is an update based on the analysis of Ty3/Gypsy, Retroviridae, Ty1/Copia and Bel/Pao LTR retroelements and the Caulimoviridae pararetroviruses of plants. Among other features, this update adds: (i) a variety of descriptions and reviews distributed in multiple web pages; (ii) protein-based phylogenies, where phylogenetic levels are assigned to distinct classified elements; (iii) a collection of multiple alignments, lineage-specific hidden Markov models and consensus sequences, called the GyDB collection; (iv) updated RefSeq databases and BLAST and HMM servers to facilitate sequence characterization of new LTR retroelement and caulimovirus queries; and (v) a bibliographic server. GyDB 2.0 is available at http://gydb.org.

  11. Suitability Screening Test for Marine Corps Air Traffic Controllers

    DTIC Science & Technology

    2012-12-01

    Suitability Screening Test for Marine Corps Air Traffic Controllers, NPRST-TR-13-1, by Karen M. Walker, Ph.D., William L. Farmer, Ph.D., and Rebecca C. Roberts, MS; Navy Personnel Research, Studies, and Technology, 5720 Integrity Drive, Millington. Reviewed by David Alderton, Ph.D.; approved and released by D. M. Cashbaugh. Approved for public release; distribution is unlimited.

  12. SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.

    PubMed

    Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan

    2014-08-15

    The Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for graphically querying biological Semantic Web databases. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including those of the recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This graphical way of creating queries for biological Semantic Web databases considerably improves usability, as it removes the need to know specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
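In the spirit of the drag & drop builder described above, a visual query graph can be serialized into SPARQL text. The sketch below is only an illustration of that conversion, not SPARQLGraph's actual API; the `build_query` helper and the predicate IRIs are assumptions.

```python
# Hypothetical sketch: render a visual query graph (variables plus
# subject-predicate-object edges) into a SPARQL SELECT query string.

def build_query(variables, triples):
    """Render a basic SELECT query from (subject, predicate, object) triples."""
    where = "\n  ".join(f"{s} {p} {o} ." for s, p, o in triples)
    return f"SELECT {' '.join(variables)}\nWHERE {{\n  {where}\n}}"

# Example graph: proteins and the genes encoding them (IRIs are illustrative).
query = build_query(
    ["?protein", "?gene"],
    [("?protein", "a", "<http://purl.uniprot.org/core/Protein>"),
     ("?protein", "<http://purl.uniprot.org/core/encodedBy>", "?gene")],
)
print(query)
```

A real builder would additionally track endpoint URLs and prefix declarations, but the core step is exactly this graph-to-text serialization.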

  13. Nomenclature for the KIR of non-human species.

    PubMed

    Robinson, James; Guethlein, Lisbeth A; Maccari, Giuseppe; Blokhuis, Jeroen; Bimber, Benjamin N; de Groot, Natasja G; Sanderson, Nicholas D; Abi-Rached, Laurent; Walter, Lutz; Bontrop, Ronald E; Hammond, John A; Marsh, Steven G E; Parham, Peter

    2018-06-04

    The increasing number of Killer Immunoglobulin-like Receptor (KIR) sequences available for non-human primate species and cattle has prompted development of a centralized database, guidelines for a standardized nomenclature, and minimum requirements for database submission. The guidelines and nomenclature are based on those used for human KIR and incorporate modifications made for inclusion of non-human species in the companion IPD-NHKIR database. Included in this first release are the rhesus macaque (Macaca mulatta), chimpanzee (Pan troglodytes), orangutan (Pongo abelii and Pongo pygmaeus), and cattle (Bos taurus).

  14. Realization of Real-Time Clinical Data Integration Using Advanced Database Technology

    PubMed Central

    Yoo, Sooyoung; Kim, Boyoung; Park, Heekyong; Choi, Jinwook; Chun, Jonghoon

    2003-01-01

    As information and communication technologies have advanced, interest in mobile health care systems has grown. In order to obtain information seamlessly from distributed and fragmented clinical data held by heterogeneous institutions, we need solutions that integrate data. In this article, we introduce a method for information integration based on real-time message communication using triggers and advanced database technologies. Messages were devised to conform to HL7, a standard for electronic data exchange in healthcare environments. The HL7-based system provides an integrated environment in which we are able to manage the complexities of medical data. We developed this message communication interface to generate and parse HL7 messages automatically from the database point of view. We discuss how easily real-time data exchange is performed in the clinical information system while placing minimal load on the database system. PMID:14728271
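The trigger-driven exchange described above ultimately generates and parses HL7 v2 messages, which are segment-per-line, pipe-delimited records. A minimal sketch of that generate/parse round trip (the helper functions and all segment contents are illustrative, not the paper's implementation):

```python
# Hedged sketch of HL7 v2-style pipe-delimited messaging.

def build_hl7(segments):
    """Join segments (each a list of fields) into one HL7 v2 message."""
    return "\r".join("|".join(fields) for fields in segments)

def parse_hl7(message):
    """Split a message back into a dict keyed by segment name."""
    parsed = {}
    for seg in message.split("\r"):
        fields = seg.split("|")
        parsed[fields[0]] = fields
    return parsed

# Illustrative lab-result message: header (MSH) plus patient identification (PID).
msg = build_hl7([
    ["MSH", "^~\\&", "LAB", "HOSPITAL", "", "", "202301010930", "", "ORU^R01"],
    ["PID", "1", "", "12345", "", "DOE^JOHN"],
])
record = parse_hl7(msg)
```

A database trigger on, say, a results table would call `build_hl7` on insert and hand the message to the interface engine; the receiving system runs the parse step.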

  15. Trends of anthropogenic mercury emissions from 1970-2008 using the global EDGARv4 database: the role of increasing emission mitigation by the energy sector and the chlor-alkali industry

    NASA Astrophysics Data System (ADS)

    Muntean, M.; Janssens-Maenhout, G.; Olivier, J. G.; Guizzardi, D.; Dentener, F. J.

    2012-12-01

    The Emission Database for Global Atmospheric Research (EDGAR) describes time series of emissions of man-made greenhouse gases and short-lived atmospheric pollutants from 1970-2008. EDGARv4 is continuously updated to respond to the needs of both the scientific community and environmental policy makers. Mercury, a toxic pollutant with bioaccumulation properties, is included in the forthcoming EDGARv4.3 release, thereby enriching the spectrum of multi-pollutant sources. Three different forms of mercury are distinguished: gaseous elemental mercury (Hg0), divalent mercury compounds (Hg2+) and particulate associated mercury (Hg-P). A complete inventory of mercury emission sources has been developed at country level using the EDGAR technology-based methodology together with international activity statistics, technology-specific abatement measures, and emission factors from EMEP/EEA (2009), USEPA AP 42 and the scientific literature. A comparison of the EDGAR mercury emission data to the widely used UNEP inventory shows consistent emissions across most sectors for the year 2005. The different shares of mercury emissions by region and by sector will be presented, with special emphasis on the region-specific mercury emission mitigation potential. We provide a comprehensive ex-post analysis of the mitigation of mercury emissions achieved between 1970 and 2008 by end-of-pipe abatement measures in the power generation sector and by technology changes in the chlor-alkali industry. Given the local-scale impacts of mercury, we have paid special attention to the spatial distribution of emissions. The default EDGAR population proxy data were used only to distribute emissions from the residential and solid waste incineration sectors; other sectors use point-source data for power plants, industrial plants, and gold and mercury mines. The 2008 mercury emission distribution will be presented, which shows emission hot spots on a 0.1° × 0.1° resolution grid map.
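The technology-based methodology mentioned above combines activity statistics with technology shares, emission factors, and abatement efficiencies. A hedged sketch of the general form of such an estimate follows; every number below is invented for illustration and is not an EDGAR value.

```python
# Illustrative technology-based emission estimate:
# emissions = sum over technologies of
#   activity * technology share * emission factor * (1 - removal efficiency)

def emissions(activity, technologies):
    """activity: fuel use (arbitrary units); technologies: (share, EF, abatement)."""
    total = 0.0
    for share, emission_factor, abatement_eff in technologies:
        total += activity * share * emission_factor * (1.0 - abatement_eff)
    return total

# Made-up example: one fuel, two plant technologies with different controls.
coal_power = emissions(
    activity=1000.0,
    technologies=[
        (0.6, 0.10, 0.90),   # 60% of plants: EF 0.10, 90% Hg removal
        (0.4, 0.10, 0.30),   # 40% of plants: same EF, only 30% removal
    ],
)
```

The ex-post mitigation analysis described in the abstract amounts to evaluating this sum with and without the abatement terms and comparing the totals over time.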

  16. Soil Organic Carbon for Global Benefits - assessing potential SOC increase under SLM technologies worldwide and evaluating tradeoffs and gains of upscaling SLM technologies

    NASA Astrophysics Data System (ADS)

    Wolfgramm, Bettina; Hurni, Hans; Liniger, Hanspeter; Ruppen, Sebastian; Milne, Eleanor; Bader, Hans-Peter; Scheidegger, Ruth; Amare, Tadele; Yitaferu, Birru; Nazarmavloev, Farrukh; Conder, Malgorzata; Ebneter, Laura; Qadamov, Aslam; Shokirov, Qobiljon; Hergarten, Christian; Schwilch, Gudrun

    2013-04-01

    There is a fundamental mutual interest between enhancing soil organic carbon (SOC) in the world's soils and the objectives of the major global environmental conventions (UNFCCC, UNCBD, UNCCD). While there is evidence at the case study level that sustainable land management (SLM) technologies increase SOC stocks and SOC related benefits, there are no quantitative data available on the potential for increasing SOC benefits from different SLM technologies, especially from case studies in developing countries, and a clear understanding of the trade-offs related to SLM up-scaling is missing. This study aims to assess the potential increase of SOC under SLM technologies worldwide and to evaluate tradeoffs and gains in up-scaling SLM for case studies in Tajikistan, Ethiopia and Switzerland. It makes use of the SLM technologies documented in the online database of the World Overview of Conservation Approaches and Technologies (WOCAT). The study consists of three components: 1) identifying SOC benefits contributing to the major global environmental issues for SLM technologies worldwide as documented in the WOCAT global database; 2) validation of SOC storage potentials and SOC benefit predictions for SLM technologies from the WOCAT database using results from existing comparative case studies at the plot level, soil spectral libraries, and standardized documentation of ecosystem services from the WOCAT database; 3) understanding trade-offs and win-win scenarios of up-scaling SLM technologies from the plot to the household and landscape level using material flow analysis. This study builds on the premise that the most promising way to increase benefits from land management is to consider already existing sustainable strategies. Such SLM technologies, documented from all over the world, are accessible in a standardized way in the WOCAT online database.
The study thus evaluates SLM technologies from the WOCAT database by calculating the potential SOC storage increase and related benefits by comparing SOC estimates before-and-after establishment of the SLM technology. These results are validated using comparative case studies of plots with-and-without SLM technologies (existing SLM systems versus surrounding, degrading systems). In view of upscaling SLM technologies, it is crucial to understand tradeoffs and gains supporting or hindering the further spread. Systemic biomass management analysis using material flow analysis allows quantifying organic carbon flows and storages for different land management options at the household, but also at landscape level. The study shows results relevant for science, policy and practice for accounting, monitoring and evaluating SOC related ecosystem services: - A comprehensive methodology for SLM impact assessments allowing quantification of SOC storage and SOC related benefits under different SLM technologies, and - Improved understanding of upscaling options for SLM technologies and tradeoffs as well as win-win opportunities for biomass management, SOC content increase, and ecosystem services improvement at the plot and household level.
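The before-and-after comparison described above rests on the standard SOC stock calculation: stock (t C/ha) = depth (m) × bulk density (t/m³) × SOC concentration (mass fraction) × 10,000 m²/ha. A minimal sketch under illustrative values (none of the numbers come from the WOCAT database):

```python
# Standard soil organic carbon stock formula for a single soil layer.

def soc_stock(depth_m, bulk_density_t_m3, soc_fraction):
    """Return SOC stock in t C/ha for one layer (1 ha = 10,000 m^2)."""
    return depth_m * bulk_density_t_m3 * soc_fraction * 10_000

# Illustrative before/after estimate for a 30 cm topsoil layer.
before = soc_stock(0.3, 1.3, 0.012)   # degraded plot
after = soc_stock(0.3, 1.2, 0.018)    # same plot under an SLM technology
gain = after - before                 # potential SOC storage increase, t C/ha
```

Note that bulk density typically drops as SOC rises, so both inputs change between the two states; comparing stocks rather than concentrations keeps the estimate consistent.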

  17. Middle Level Teachers' Perceptions of Interim Reading Assessments: An Exploratory Study of Data-Based Decision Making

    ERIC Educational Resources Information Center

    Reed, Deborah K.

    2015-01-01

    This study explored the data-based decision making of 12 teachers in grades 6-8 who were asked about their perceptions and use of three required interim measures of reading performance: oral reading fluency (ORF), retell, and a benchmark comprised of released state test items. Focus group participants reported they did not believe the benchmark or…

  18. Automating Relational Database Design for Microcomputer Users.

    ERIC Educational Resources Information Center

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  19. Marine and Hydrokinetic Data | Geospatial Data Science | NREL

    Science.gov Websites

    The U.S. Department of Energy's Marine and Hydrokinetic Technology Database provides information on marine and hydrokinetic energy projects and technologies. The database includes wave, tidal, current, and ocean thermal energy. The U.S. wave energy resource was assessed using a 51-month Wavewatch III hindcast database.

  20. Energy science and technology database (on the internet). Online data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The Energy Science and Technology Database (EDB) is a multidisciplinary file containing worldwide references to basic and applied scientific and technical research literature. The information is collected for use by government managers, researchers at the national laboratories, and other research efforts sponsored by the U.S. Department of Energy, and the results of this research are transferred to the public. Abstracts are included for records from 1976 to the present. The EDB also contains the Nuclear Science Abstracts, a comprehensive abstract and index collection covering the international nuclear science and technology literature for the period 1948 through 1976. Included are scientific and technical reports of the U.S. Atomic Energy Commission, U.S. Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Approximately 25% of the records in the file contain abstracts. Nuclear Science Abstracts contains over 900,000 bibliographic records, and the entire Energy Science and Technology Database contains over 3 million bibliographic records. This database is now available for searching through the GOV.Research-Center (GRC) service, a single online web-based search service for well-known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.

  1. NATIONAL RESPONSE TEAM TECHNICAL ASSISTANCE ...

    EPA Pesticide Factsheets

    This document provides technical information on a wide range of activities to aid in the response to an intentional release of anthrax in urban environments. It includes initial actions when a potential release is discovered, health and safety issues for responders, sampling and analysis methods, decontamination technologies, decontamination waste disposal, and communication with the public.

  2. Recent Updates to the System Advisor Model (SAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiOrio, Nicholas A

    The System Advisor Model (SAM) is a mature suite of techno-economic models for many renewable energy technologies that can be downloaded for free as a desktop application or software development kit. SAM is used for system-level modeling, including generating performance profiles. Updates covered here include the release of the code as an open source project on GitHub, the ability to download data directly into SAM from the National Solar Radiation Database (NSRDB), and updates to a user-interface macro that assists with PV system sizing. A brief update on SAM's battery model and its integration with the detailed photovoltaic model will also be discussed. Finally, an outline of planned work for the next year will be presented, including the addition of a bifacial model, support for multiple MPPT inputs for detailed inverter modeling, and the addition of a model for inverter thermal behavior.

  3. Behavioral and social sciences at the National Institutes of Health: Methods, measures, and data infrastructures as a scientific priority.

    PubMed

    Riley, William T

    2017-01-01

    The National Institutes of Health Office of Behavioral and Social Sciences Research (OBSSR) recently released its strategic plan for 2017-2021. This plan focuses on three equally important strategic priorities: 1) improve the synergy of basic and applied behavioral and social sciences research, 2) enhance and promote the research infrastructure, methods, and measures needed to support a more cumulative and integrated approach to behavioral and social sciences research, and 3) facilitate the adoption of behavioral and social sciences research findings in health research and in practice. This commentary focuses on scientific priority two and future directions in measurement science, technology, data infrastructure, behavioral ontologies, and big data methods and analytics that have the potential to transform the behavioral and social sciences into more cumulative, data rich sciences that more efficiently build on prior research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Reflections on CD-ROM: Bridging the Gap between Technology and Purpose.

    ERIC Educational Resources Information Center

    Saviers, Shannon Smith

    1987-01-01

    Provides a technological overview of CD-ROM (Compact Disc-Read Only Memory), an optically-based medium for data storage offering large storage capacity, computer-based delivery system, read-only medium, and economic mass production. CD-ROM database attributes appropriate for information delivery are also reviewed, including large database size,…

  5. Proposal for Implementing Multi-User Database (MUD) Technology in an Academic Library.

    ERIC Educational Resources Information Center

    Filby, A. M. Iliana

    1996-01-01

    Explores the use of MOO (multi-user object oriented) virtual environments in academic libraries to enhance reference services. Highlights include the development of multi-user database (MUD) technology from gaming to non-recreational settings; programming issues; collaborative MOOs; MOOs as distinguished from other types of virtual reality; audio…

  6. A COMPARISON: ORGANIC EMISSIONS FROM HAZARDOUS WASTE INCINERATORS VERSUS THE 1990 TOXICS RELEASE INVENTORY AIR RELEASES.

    EPA Science Inventory

    Incineration is often the preferred technology for disposing of hazardous waste and remediating Superfund sites. The effective implementation of this technology is frequently impeded by strong public opposition to hazardous waste incineration (HWI). One of the reasons cited for...

  7. The Pfam protein families database: towards a more sustainable future.

    PubMed

    Finn, Robert D; Coggill, Penelope; Eberhardt, Ruth Y; Eddy, Sean R; Mistry, Jaina; Mitchell, Alex L; Potter, Simon C; Punta, Marco; Qureshi, Matloob; Sangrador-Vegas, Amaia; Salazar, Gustavo A; Tate, John; Bateman, Alex

    2016-01-04

    In the last two years the Pfam database (http://pfam.xfam.org) has undergone a substantial reorganisation to reduce the effort involved in making a release, thereby permitting more frequent releases. Arguably the most significant of these changes is that Pfam is now primarily based on the UniProtKB reference proteomes, with the counts of matched sequences and species reported on the website restricted to this smaller set. Building families on reference proteomes sequences brings greater stability, which decreases the amount of manual curation required to maintain them. It also reduces the number of sequences displayed on the website, whilst still providing access to many important model organisms. Matches to the full UniProtKB database are, however, still available and Pfam annotations for individual UniProtKB sequences can still be retrieved. Some Pfam entries (1.6%) which have no matches to reference proteomes remain; we are working with UniProt to see if sequences from them can be incorporated into reference proteomes. Pfam-B, the automatically-generated supplement to Pfam, has been removed. The current release (Pfam 29.0) includes 16 295 entries and 559 clans. The facility to view the relationship between families within a clan has been improved by the introduction of a new tool. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Database for the Quaternary and Pliocene Yellowstone Plateau volcanic field of Wyoming, Idaho, and Montana (Database for Professional Paper 729-G)

    USGS Publications Warehouse

    Koch, Richard D.; Ramsey, David W.; Christiansen, Robert L.

    2011-01-01

    The superlative hot springs, geysers, and fumarole fields of Yellowstone National Park are vivid reminders of a recent volcanic past. Volcanism on an immense scale largely shaped the unique landscape of central and western Yellowstone Park, and intimately related tectonism and seismicity continue even now. Furthermore, the volcanism that gave rise to Yellowstone's hydrothermal displays was only part of a long history of late Cenozoic eruptions in southern and eastern Idaho, northwestern Wyoming, and southwestern Montana. The late Cenozoic volcanism of Yellowstone National Park, although long believed to have occurred in late Tertiary time, is now known to have been of latest Pliocene and Pleistocene age. The eruptions formed a complex plateau of voluminous rhyolitic ash-flow tuffs and lavas, but basaltic lavas too have erupted intermittently around the margins of the rhyolite plateau. Volcanism almost certainly will recur in the Yellowstone National Park region. This digital release contains all the information used to produce the geologic maps published as plates in U.S. Geological Survey Professional Paper 729-G (Christiansen, 2001). The main component of this digital release is a geologic map database prepared using geographic information systems (GIS) applications. This release also contains files to view or print the geologic maps and main report text from Professional Paper 729-G.

  9. Learning lessons from Natech accidents - the eNATECH accident database

    NASA Astrophysics Data System (ADS)

    Krausmann, Elisabeth; Girgin, Serkan

    2016-04-01

    When natural hazards impact industrial facilities that house or process hazardous materials, fires, explosions and toxic releases can occur. This type of accident is commonly referred to as a Natech accident. In order to prevent the recurrence of accidents or to better mitigate their consequences, lessons-learned studies using available accident data are usually carried out. Through post-accident analysis, conclusions can be drawn on the most common damage and failure modes and hazmat release paths, particularly vulnerable storage and process equipment, and the hazardous materials most commonly involved in these types of accidents. These analyses also lend themselves to identifying technical and organisational risk-reduction measures that require improvement or are missing. Industrial accident databases are commonly used for retrieving sets of Natech accident case histories for further analysis. These databases contain accident data from the open literature, government authorities or in-company sources. The quality of reported information is not uniform and exhibits different levels of detail and accuracy. This is due to the difficulty of finding qualified information sources, especially in situations where accident reporting by the industry or by authorities is not compulsory, e.g. when spill quantities are below the reporting threshold. Data collection then has to rely on voluntary record keeping, often by non-experts. The level of detail is particularly non-uniform for Natech accident data, depending on whether the consequences of the Natech event were major or minor and whether comprehensive information was available for reporting. In addition to the reporting bias towards high-consequence events, industrial accident databases frequently lack information on the severity of the triggering natural hazard, as well as on the failure modes that led to the hazmat release.
This makes it difficult to reconstruct the dynamics of the accident and renders the development of equipment vulnerability models linking the natural-hazard severity to the observed damage almost impossible. As a consequence, the European Commission has set up the eNATECH database for the systematic collection of Natech accident data and near misses. The database exhibits the more sophisticated accident representation required to capture the characteristics of Natech events and is publicly accessible at http://enatech.jrc.ec.europa.eu. This presentation outlines the general lessons-learning process, introduces the eNATECH database and its specific structure, and discusses natural-hazard specific lessons learned and features common to Natech accidents triggered by different natural hazards.

  10. A Knowledge Database on Thermal Control in Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Hirasawa, Shigeki; Satoh, Isao

    A prototype version of a knowledge database on thermal control in manufacturing processes, specifically molding, semiconductor manufacturing, and micro-scale manufacturing, has been developed. The knowledge database has search functions for technical data, evaluated benchmark data, academic papers, and patents. The database also displays trends and future roadmaps for research topics, and it has quick-calculation functions for basic design. This paper summarizes present and future research topics on thermal control in manufacturing engineering in order to collate the information into the knowledge database. In the molding process, the initial mold and melt temperatures are very important parameters. Thermal control is also related to many semiconductor processes, where the main parameter is temperature variation across wafers; accurate in-situ temperature measurement of wafers is important. Many technologies are likewise being developed to manufacture micro-structures. Accordingly, the knowledge database will help further advance these technologies.

  11. TOXMAP

    MedlinePlus

    Two Ways to Explore Toxic Chemicals in Your Community: TOXMAP classic provides an Advanced Search of the TOXNET group of databases related to toxicology, hazardous chemicals, environmental health, and toxic releases.

  12. [Oral controlled release dosage forms].

    PubMed

    Mehuys, Els; Vervaet, Chris

    2010-06-01

    Several technologies to control drug release from oral dosage forms have been developed. Drug release can be regulated in several ways: sustained release, whereby the drug is released slowly over a prolonged period of time, postponed release, whereby drug release is delayed until passage from the stomach into the intestine (via enteric coating), and targeted release, whereby the drug is targeted to a specific location of the gastrointestinal tract. This article reviews the various oral controlled release dosage forms on the market.

  13. A case Study of Applying Object-Relational Persistence in Astronomy Data Archiving

    NASA Astrophysics Data System (ADS)

    Yao, S. S.; Hiriart, R.; Barg, I.; Warner, P.; Gasson, D.

    2005-12-01

    The NOAO Science Archive (NSA) team is developing a comprehensive domain model to capture the science data in the archive. Java and an object model derived from the domain model well address the application layer of the archive system. However, since RDBMS is the best proven technology for data management, the challenge is the paradigm mismatch between the object and the relational models. Transparent object-relational mapping (ORM) persistence is a successful solution to this challenge. In the data modeling and persistence implementation of NSA, we are using Hibernate, a well-accepted ORM tool, to bridge the object model in the business tier and the relational model in the database tier. Thus, the database is isolated from the Java application. The application queries directly on objects using a DBMS-independent object-oriented query API, which frees the application developers from low-level JDBC and SQL so that they can focus on the domain logic. We present the detailed design of the NSA R3 (Release 3) data model and object-relational persistence, including mapping, retrieving and caching. Persistence layer optimization and performance tuning will be analyzed. The system is being built on J2EE, so the integration of Hibernate into the EJB container and the transaction management are also explored.
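Hibernate itself is Java, but the pattern the abstract describes (domain objects isolated from SQL behind a persistence layer) can be sketched in a few lines. The example below uses Python's stdlib sqlite3 purely as an analogy for the pattern; the class names and schema are invented, not NSA's.

```python
# Minimal data-mapper sketch: the domain object knows nothing about SQL;
# all persistence concerns live in the mapper class.
import sqlite3

class Observation:
    """Domain object from the (hypothetical) archive model; no SQL inside."""
    def __init__(self, obs_id, target):
        self.obs_id, self.target = obs_id, target

class ObservationMapper:
    """Persistence layer: translates between objects and relational rows."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS obs (id TEXT PRIMARY KEY, target TEXT)")

    def save(self, obs):
        self.conn.execute("INSERT INTO obs VALUES (?, ?)", (obs.obs_id, obs.target))

    def find(self, obs_id):
        row = self.conn.execute(
            "SELECT id, target FROM obs WHERE id = ?", (obs_id,)).fetchone()
        return Observation(*row) if row else None

conn = sqlite3.connect(":memory:")
mapper = ObservationMapper(conn)
mapper.save(Observation("n1", "M31"))
restored = mapper.find("n1")
```

An ORM like Hibernate generates this mapping layer from declarative metadata and adds caching and transactions on top, but the object/relational translation step is the same idea.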

  14. A Comparison of Organic Emissions from Hazardous Waste Incinerators Versus the 1990 Toxics Release Inventory Air Releases

    EPA Science Inventory

    Incineration is often the preferred technology for disposing of hazardous waste and remediating Superfund sites. The effective implementation of this technology is frequently impeded by strong public opposition to hazardous waste incineration (HWI). One of the reasons cited for t...

  15. The Long Valley Caldera GIS database

    USGS Publications Warehouse

    Battaglia, Maurizio; Williams, M.J.; Venezky, D.Y.; Hill, D.P.; Langbein, J.O.; Farrar, C.D.; Howle, J.F.; Sneed, M.; Segall, P.

    2003-01-01

    This database provides an overview of the studies being conducted by the Long Valley Observatory in eastern California from 1975 to 2001. The database includes geologic, monitoring, and topographic datasets related to Long Valley caldera. The CD-ROM contains a scan of the original geologic map of the Long Valley region by R. Bailey. Real-time data of the current activity of the caldera (including earthquakes, ground deformation and the release of volcanic gas), information about volcanic hazards and the USGS response plan are available online at the Long Valley observatory web page (http://lvo.wr.usgs.gov). If you have any comments or questions about this database, please contact the Scientist in Charge of the Long Valley observatory.

  16. CADDIS Volume 5. Causal Databases: CADLink

    EPA Pesticide Factsheets

    CADLink, an improved tool for searching and organizing literature-based evidence, will be released in Fall 2016. The original CADDIS literature resource, CADLit, is unavailable as we make these improvements.

  17. Drug delivery systems with modified release for systemic and biophase bioavailability.

    PubMed

    Leucuta, Sorin E

    2012-11-01

    This review describes the most important new generations of pharmaceutical systems: medicines with extended release, controlled release pharmaceutical systems, and pharmaceutical systems for the targeted delivery of drug substances. The latest advances and approaches for delivering small molecular weight drugs and other biologically active agents such as proteins and nucleic acids require novel delivery technologies, the success of a drug often depending on the delivery method. All these dosage forms are qualitatively superior to medicines with immediate release, in that they ensure optimal drug concentrations adapted to the specific demands of different disease particularities of the body. Drug delivery by these pharmaceutical formulations has the benefit of improving product efficacy and safety, as well as patient convenience and compliance. This paper describes the biopharmaceutical, pharmacokinetic, pharmacologic and technological principles in the design of drug delivery systems with modified release, as well as the formulation criteria for prolonged and controlled release drug delivery systems. The paper presents pharmaceutical prolonged and controlled release dosage forms intended for different routes of administration: oral, ocular, transdermal, parenteral, pulmonary, and mucoadhesive, but also orally fast dissolving tablets, gastroretentive drug delivery systems, colon-specific drug delivery systems, pulsatile drug delivery systems, and carrier- or ligand-mediated transport for site-specific or receptor drug targeting. Specific technologies are described for the dosage forms with modified release, together with examples of marketed products and current research in these areas.

  18. Enzyme-triggered compound release using functionalized antimicrobial peptide derivatives (ESI available; DOI: 10.1039/c6sc04435b)

    PubMed Central

    Kashibe, Masayoshi; Matsumoto, Kengo; Hori, Yuichiro

    2017-01-01

    Controlled release is one of the key technologies for medical innovation, and many stimulus-responsive nanocarriers have been developed to utilize this technology. Enzyme activity is one of the most useful stimuli, because many enzymes are specifically activated in diseased tissues. However, controlled release stimulated by enzyme activity has not been frequently reported. One of the reasons for this is the lack of versatility of carriers. Most of the reported stimulus-responsive systems involve a sophisticated design and a complicated process for the synthesis of stimulus-responsive nanocarrier components. The purpose of this study was to develop versatile controlled release systems triggered by various stimuli, including enzyme activity, without modifying the nanocarrier components. We developed two controlled release systems, both of which comprised a liposome as the nanocarrier and a membrane-damaging peptide, temporin L (TL), and its derivatives as the release-controllers. One system utilized branched peptides for proteases, and the other utilized phosphopeptides for phosphatases. In our systems, the target enzymes converted the non-membrane-damaging TL derivatives into membrane-damaging peptides and released the liposome inclusion. We demonstrated the use of our antimicrobial peptide-based controlled release systems for different enzymes and showed the promise of this technology as a novel theranostic tool. PMID:28451373

  19. An Examination of Selected Software Testing Tools: 1992

    DTIC Science & Technology

    1992-12-01

    Fragmentary DTIC snippet (figure and page references elided): using the historical test database, the test management and problem reporting tools were examined with the sample test database provided by each supplier; Metrics Manager is supported by an industry database that allows tracking the impact of new methods, organizational structures, and technologies.

  20. TeachAstronomy.com - Digitizing Astronomy Resources

    NASA Astrophysics Data System (ADS)

    Hardegree-Ullman, Kevin; Impey, C. D.; Austin, C.; Patikkal, A.; Paul, M.; Ganesan, N.

    2013-06-01

    Teach Astronomy—a new, free online resource—can be used as a teaching tool in non-science-major introductory college-level astronomy courses, and as a reference guide for casual learners and hobbyists. Digital content available on Teach Astronomy includes: a comprehensive introductory astronomy textbook by Chris Impey, Wikipedia astronomy articles, images from the Astronomy Picture of the Day archives and the new AstroPix database, two-to-three-minute topical video clips by Chris Impey, podcasts from the 365 Days of Astronomy archives, and an RSS feed of astronomy news from Science Daily. Teach Astronomy features an original technology called the Wikimap to cluster, display, and navigate site search results. Development of Teach Astronomy was motivated by steep increases in textbook prices, the rapid adoption of digital resources by students and the public, and the modern capabilities of digital technology. This past spring semester Teach Astronomy was used as a content supplement to lectures in a massive open online course (MOOC) taught by Chris Impey. Usage of Teach Astronomy has been growing steadily since its initial release in August of 2012. The site has users in all corners of the country and is being used as a primary teaching tool in at least four states.

  1. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  2. Directory of On-Line Networks, Databases and Bulletin Boards on Assistive Technology. Second Edition. RESNA Technical Assistance Project.

    ERIC Educational Resources Information Center

    RESNA: Association for the Advancement of Rehabilitation Technology, Washington, DC.

    This resource directory provides a selective listing of electronic networks, online databases, and bulletin boards that highlight technology-related services and products. For each resource, the following information is provided: name, address, and telephone number; description; target audience; hardware/software needs to access the system;…

  3. Examining the Factors That Contribute to Successful Database Application Implementation Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Nworji, Alexander O.

    2013-01-01

    Most organizations spend millions of dollars due to the impact of improperly implemented database application systems as evidenced by poor data quality problems. The purpose of this quantitative study was to use, and extend, the technology acceptance model (TAM) to assess the impact of information quality and technical quality factors on database…

  4. An Overview to Research on Education Technology Based on Constructivist Learning Approach

    ERIC Educational Resources Information Center

    Asiksoy, Gulsum; Ozdamli, Fezile

    2017-01-01

    The aim of this research is to determine the trends of education technology research on the Constructivist Learning Approach published in the ScienceDirect database between 2010 and 2016. It also aims to guide researchers who will do studies in this field. After scanning the database, 81 articles published in ScienceDirect's database…

  5. In need of combined topography and bathymetry DEM

    NASA Astrophysics Data System (ADS)

    Kisimoto, K.; Hilde, T.

    2003-04-01

    In many geoscience applications, digital elevation models (DEMs) are now commonly used at different scales and greater resolutions thanks to great advances in computer technology. Increasing the accuracy/resolution of the models and the coverage of the terrain (global models) has been the goal of users as mapping technology has improved and computers have become faster and cheaper. The ETOPO5 model (5 arc-minute spatial resolution, land and seafloor), initially developed in 1988 by Margo Edwards, then at Washington University, St. Louis, MO, was the only global terrain model for a long time; it is now being replaced by three new topographic and bathymetric DEMs: ETOPO2 (2 arc-minute spatial resolution, land and seafloor), the GTOPO30 land model with a spatial resolution of 30 arc seconds (ca. 1 km at the equator), and the 'GEBCO 1-Minute Global Bathymetric Grid' ocean-floor model with a spatial resolution of 1 arc minute (ca. 2 km at the equator). These DEMs are products of projects in which existing and/or new datasets were compiled and reprocessed to meet users' new requirements. These ongoing efforts are valuable, and support should be continued to refine and update these DEMs. A different approach to creating a global bathymetric (seafloor) database also exists: a method to estimate seafloor topography from satellite altimetry combined with existing ships' conventional sounding data was devised, and a beautiful global seafloor database was created and made public by W. H. Smith and D. T. Sandwell in 1997. The big advantage of this database is its uniformity of coverage, i.e. there is no large area where depths are missing; it has a spatial resolution of 2 arc minutes. Another important effort is the creation of regional, not global, seafloor databases with much finer resolutions in many countries.
    The Japan Hydrographic Department compiled and released a 500 m-grid topography database around Japan, J-EGG500, in 1999. Although this database covers only a small portion of the Earth, it has been highly appreciated in the academic community, and its quality surprised the general public when it was displayed in 3D imagery. It can be combined rather smoothly with the finer land DEM of 250 m spatial resolution (Japan250m.grd, K. Kisimoto, 2000). One of the most important applications of such a combined DEM of topography and bathymetry is tsunami modeling; understanding of the coastal environment, and management and development of the coastal region, are other fields in need of these data. There is, however, an important issue to consider when creating a combined DEM of topography and bathymetry at finer resolutions. The problem arises from the discrepancy between the standard datum planes, or reference levels, used for topographic leveling and bathymetric sounding. Land topography (altitude) is defined by leveling from a single reference point determined by the average mean sea level; in other words, land height is measured from the geoid. Depth charts, on the other hand, are based on depths measured from a locally determined reference sea-surface level, taken as the long-term average of the lowest tidal height. So, to create a combined DEM of topography and bathymetry at very fine scales, we need to remove this inconsistency between height and depth across the coastal region: height and depth should be physically continuous relative to a single reference datum across the coast within such new high-resolution DEMs. (N.B. The coastline is neither the 'altitude-zero line' nor the 'depth-zero line'; it is defined locally as the long-term average of the highest tide level.) All of this said, we still need a lot of work on the ocean side.
    Global coverage with detailed bathymetric mapping is still poor. Seafloor imaging and other geophysical measurements and experiments should be organized and conducted in international and interdisciplinary ways more than ever. We need continued technological advancement and application of this technology in the marine sciences, as well as more enthusiastic seagoing researchers. Recent seafloor-mapping technology and quality, in both bathymetry and imagery, are very promising and compare favorably with terrain mapping. At the poster session we discuss and present recent achievements and needs in seafloor mapping, using several of the most up-to-date global and regional DEMs available to the science community.
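    The datum reconciliation described above can be sketched in a few lines. This is an illustrative example only (the grids, the 0.8 m chart-datum offset, and the function name are invented, not taken from J-EGG500 or Japan250m.grd): depths referenced to a local chart datum are converted to negative heights above the geoid before being merged with the land DEM.

```python
import numpy as np

# Hypothetical offset (m): the local chart datum (lowest tide) is assumed
# to sit 0.8 m below the geoid used for land leveling.
CHART_DATUM_TO_GEOID = -0.8

def merge_dem(land, bathy, datum_offset=CHART_DATUM_TO_GEOID):
    """Combine land heights (m above geoid, NaN at sea) with depths
    (m below local chart datum, NaN on land) into one grid referenced
    to the geoid."""
    # Depths become negative heights, then shift to the geoid reference.
    bathy_geoid = -bathy + datum_offset
    return np.where(np.isnan(land), bathy_geoid, land)

land = np.array([[12.0, 3.5, np.nan], [8.0, np.nan, np.nan]])
bathy = np.array([[np.nan, np.nan, 4.0], [np.nan, 2.0, 10.0]])
dem = merge_dem(land, bathy)
print(dem)  # land cells keep their height; sea cells become negative heights
```

A point 4 m below chart datum ends up 4.8 m below the geoid in the combined grid, so height and depth are continuous across the coast relative to a single datum.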

  6. Critical Needs for Robust and Reliable Database for Design and Manufacturing of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Singh, M.

    1999-01-01

    Ceramic matrix composite (CMC) components are being designed, fabricated, and tested for a number of high-temperature, high-performance applications in aerospace and ground-based systems. The critical need for and the role of reliable and robust databases for the design and manufacturing of ceramic matrix composites are presented. A number of issues related to engineering design, manufacturing technologies, and joining and attachment technologies are also discussed. Examples of various ongoing activities in the areas of composite databases, designing to codes and standards, and design for manufacturing are given.

  7. Database Development for Ocean Impacts: Imaging, Outreach, and Rapid Response

    DTIC Science & Technology

    2012-09-30

    Approved for public release; distribution is unlimited. Fragmentary DTIC snippet (report-form fields elided): Database Development for Ocean Impacts: Imaging, Outreach, and Rapid Response; hoses (Applied Ocean Physics & Engineering department, WHOI) … to evaluate wear and location in mooring optical cables used in the Right Whale monitoring …

  8. US Army Research Laboratory Visualization Framework Design Document

    DTIC Science & Technology

    2016-01-01

    Fragmentary DTIC snippet: this section highlights each module in the ARL-VF (ConfigAgent, MultiTouch, VizDatabase, VizController, TUIO, VizDaemon, TestPoint), and subsequent sections provide details on how each module interacts; the sequence diagram in Fig. 4 shows this interaction. Approved for public release; distribution unlimited.

  9. JEnsembl: a version-aware Java API to Ensembl data systems

    PubMed Central

    Paterson, Trevor; Law, Andy

    2012-01-01

    Motivation: The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. Results: The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing ‘through time’ comparative analyses to be performed. Availability: Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net). Contact: jensembl-develop@lists.sf.net, andy.law@roslin.ed.ac.uk, trevor.paterson@roslin.ed.ac.uk PMID:22945789
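    The "versioned mappings from database schema to code objects" idea above can be illustrated with a small sketch. This is a hypothetical Python illustration, not the JEnsembl Java API or its actual configuration format, and the release numbers and column names are invented: a registry keyed by database release resolves each schema version to the names the client code expects, so one client can transparently talk to current and archived database instances.

```python
# Invented example mappings: for each registered release, the schema-level
# name that backs each logical field the code objects expose.
SCHEMA_MAPPINGS = {
    60: {"gene_symbol": "display_label", "location": "seq_region"},
    84: {"gene_symbol": "display_xref",  "location": "seq_region"},
}

def mapping_for(release):
    """Return the mapping for the newest registered release <= `release`,
    so archived and current schema versions are handled transparently."""
    candidates = [r for r in sorted(SCHEMA_MAPPINGS) if r <= release]
    if not candidates:
        raise ValueError(f"no schema mapping known for release {release}")
    return SCHEMA_MAPPINGS[candidates[-1]]

# An archived release falls back to the older mapping; a current one
# picks up the renamed column.
print(mapping_for(75)["gene_symbol"])
print(mapping_for(84)["gene_symbol"])
```

Keeping these mappings in versioned configuration rather than code is what lets a single installation connect to several database releases at once, as the abstract describes.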

  10. Databases on biotechnology and biosafety of GMOs.

    PubMed

    Degrassi, Giuliano; Alexandrova, Nevena; Ripandelli, Decio

    2003-01-01

    Due to the involvement of scientific, industrial, commercial and public sectors of society, the complexity of the issues concerning the safety of genetically modified organisms (GMOs) for the environment, agriculture, and human and animal health calls for a wide coverage of information. Accordingly, development of the field of biotechnology, along with concerns related to the fate of released GMOs, has led to a rapid development of tools for disseminating such information. As a result, there is a growing number of databases aimed at collecting and storing information related to GMOs. Most of the sites deal with information on environmental releases, field trials, transgenes and related sequences, regulations and legislation, risk assessment documents, and literature. Databases are mainly established and managed by scientific, national or international authorities, and are addressed towards scientists, government officials, policy makers, consumers, farmers, environmental groups and civil society representatives. This complexity can lead to an overlapping of information. The purpose of the present review is to analyse the relevant databases currently available on the web, providing comments on their vastly different information and on the structure of the sites pertaining to different users. A preliminary overview on the development of these sites during the last decade, at both the national and international level, is also provided.

  11. Implantable microencapsulated dopamine (DA): prolonged functional release of DA in denervated striatal tissue.

    PubMed

    McRae, A; Hjorth, S; Mason, D; Dillon, L; Tice, T

    1990-01-01

    Biodegradable controlled-release microcapsule systems made with the biocompatible, biodegradable polyester excipient poly(DL-lactide-co-glycolide) constitute an exciting new technology for drug delivery to the central nervous system (CNS). The present study describes functional observations indicating that implantation of dopamine (DA) microcapsules encapsulated within two different polymer excipients into denervated striatal tissue assures a prolonged release of the transmitter in vivo. This technology has considerable potential for basic and possibly clinical research.

  12. National transportation technology plan

    DOT National Transportation Integrated Search

    2000-05-01

    The National Science and Technology Council (NSTC) Committee on Technology, Subcommittee on Transportation Research and Development (R&D), has created a National Transportation Technology Plan that builds on the initial Technology Plan released in 19...

  13. Lidar and Dial application for detection and identification: a proposal to improve safety and security

    NASA Astrophysics Data System (ADS)

    Gaudio, P.; Malizia, A.; Gelfusa, M.; Murari, A.; Parracino, S.; Poggi, L. A.; Lungaroni, M.; Ciparisse, J. F.; Di Giovanni, D.; Cenciarelli, O.; Carestia, M.; Peluso, E.; Gabbarini, V.; Talebzadeh, S.; Bellecci, C.

    2017-01-01

    Nowadays the intentional release of chemical contaminants into the air (in both open and confined environments) is a dramatic source of risk for public health worldwide. The need for high-tech networks composed of software, diagnostics, decision-support systems and cyber-security tools is urging all stakeholders (military, public, and research and academic entities) to create innovative solutions to face this problem and improve both safety and security. The Quantum Electronics and Plasma Physics (QEP) Research Group of the University of Rome Tor Vergata has been working since the 1960s on the development of laser-based technologies for the stand-off detection of contaminants in the air. Up to now, four demonstrators have been developed (two LIDAR-based and two DIAL-based) and were used in experimental campaigns throughout 2015. These systems and technologies can be used together to create an innovative solution to the problem of public safety and security: a network of detection systems. A low-cost LIDAR-based system has been tested in an urban area to detect pollutants from urban traffic; in this paper the authors show the results obtained in the city of Crotone (southern Italy). This system can be used as a first alarm and can be coupled with an identification system to investigate the nature of the threat. A laboratory DIAL-based system has been used to create a database of absorption spectra of chemical substances that could be released into the atmosphere; these spectra can be considered the fingerprints of the substances to be identified. To build the database, absorption measurements in a cell under different conditions are in progress, and the first results are presented in this paper.
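    The fingerprint-matching step implied above can be sketched simply. This is a toy illustration, not the QEP group's actual processing chain; the substances, reference spectra, and wavelength band are all invented: a measured differential-absorption spectrum is identified by correlating it against each reference "fingerprint" in the database.

```python
import numpy as np

def best_match(measured, database):
    """Return the database key whose reference spectrum has the highest
    Pearson correlation with the measured spectrum."""
    scores = {name: np.corrcoef(measured, ref)[0, 1]
              for name, ref in database.items()}
    return max(scores, key=scores.get)

# Hypothetical DIAL band (um) and two invented absorption fingerprints.
wavelengths = np.linspace(9.0, 11.0, 50)
db = {
    "substance_A": np.exp(-((wavelengths - 9.5) ** 2) / 0.02),
    "substance_B": np.exp(-((wavelengths - 10.6) ** 2) / 0.02),
}

# A noisy observation of substance B.
measured = db["substance_B"] + 0.02 * np.cos(wavelengths)
print(best_match(measured, db))  # substance_B
```

Real identification would of course account for pressure and temperature broadening and for mixtures, which is why the abstract stresses measuring cell spectra under different conditions.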

  14. Technologically enhanced naturally occurring radioactive materials.

    PubMed

    Vearrier, David; Curtis, John A; Greenberg, Michael I

    2009-05-01

    Naturally occurring radioactive materials (NORM) are ubiquitous throughout the earth's crust. Human manipulation of NORM for economic ends, such as mining, ore processing, fossil fuel extraction, and commercial aviation, may lead to what is known as "technologically enhanced naturally occurring radioactive materials," often called TENORM. The existence of TENORM results in an increased risk for human exposure to radioactivity. Workers in TENORM-producing industries may be occupationally exposed to ionizing radiation. TENORM industries may release significant amounts of radioactive material into the environment, resulting in the potential for widespread exposure to ionizing radiation. These industries include mining, phosphate processing, metal ore processing, heavy mineral sand processing, titanium pigment production, fossil fuel extraction and combustion, manufacture of building materials, thorium compounds, aviation, and scrap metal processing. A search of the PubMed database (www.pubmed.com) and Ovid Medline database (ovidsp.tx.ovid.com) was performed using a variety of search terms, including NORM, TENORM, and occupational radiation exposure. A total of 133 articles were identified, retrieved, and reviewed. Seventy-three peer-reviewed articles were chosen to be cited in this review. A number of studies have evaluated the extent of ionizing radiation exposure due to TENORM, both among workers and the general public. Quantification of radiation exposure is limited because of modeling constraints. In some occupational settings, an increased risk of cancer has been reported and postulated to be secondary to exposure to TENORM, though these reports have not been validated using toxicological principles. NORM and TENORM have the potential to cause important human health effects.
It is important that these adverse health effects are evaluated using the basic principles of toxicology, including the magnitude and type of exposure, as well as threshold and dose response.

  15. ROBIN: a platform for evaluating automatic target recognition algorithms: II. Protocols used for evaluating algorithms and results obtained on the SAGEM DS database

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    Over the past five years, the computer vision community has explored many different avenues of research for Automatic Target Recognition. Noticeable advances have been made, and we are now in a situation where large-scale evaluations of ATR technologies have to be carried out to determine the limitations of recently proposed methods and the best directions for future work. ROBIN, a project funded by the French Ministry of Defence and the French Ministry of Research, has the ambition of being a new reference for benchmarking ATR algorithms in operational contexts. This project, headed by major companies and research centers involved in Computer Vision R&D in the field of Defense (Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES), recently released a large dataset of several thousand hand-annotated infrared and RGB images of different targets in different situations. Setting up an evaluation campaign requires us to define, accurately and carefully, sets of data (both for training ATR algorithms and for their evaluation), tasks to be evaluated, and finally protocols and metrics for the evaluation. ROBIN offers interesting contributions to each of these three points. This paper first describes, justifies and defines the set of functions used in the ROBIN competitions that are relevant for evaluating ATR algorithms (Detection, Localization, Recognition and Identification). It also defines the metrics and protocol used for evaluating these functions. In the second part of the paper, the results obtained by several state-of-the-art algorithms on the SAGEM DS database (a subpart of ROBIN) are presented and discussed.

  16. Initial experiences with building a health care infrastructure based on Java and object-oriented database technology.

    PubMed

    Dionisio, J D; Sinha, U; Dai, B; Johnson, D B; Taira, R K

    1999-01-01

    A multi-tiered telemedicine system based on Java and object-oriented database technology has yielded a number of practical insights and experiences on their effectiveness and suitability as implementation bases for a health care infrastructure. The advantages and drawbacks to their use, as seen within the context of the telemedicine system's development, are discussed. Overall, these technologies deliver on their early promise, with a few remaining issues that are due primarily to their relative newness.

  17. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data-transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security-enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
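    The principle of authorization enforced inside the database engine (rather than in a layer bolted on outside it) can be illustrated with SQLite's authorizer hook from the Python standard library, which the engine consults while compiling each statement. This is only a minimal sketch of the idea, far simpler than the grid-security integration the record proposes; the `users` table and the rule denying reads of its `password` column are invented for illustration.

```python
import sqlite3

def authorizer(action, arg1, arg2, dbname, source):
    # For read actions, arg1 is the table and arg2 the column.
    # Deny any read of the (hypothetical) users.password column.
    if action == sqlite3.SQLITE_READ and (arg1, arg2) == ("users", "password"):
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
conn.set_authorizer(authorizer)

print(conn.execute("SELECT name FROM users").fetchall())  # allowed
try:
    conn.execute("SELECT password FROM users")  # vetted and rejected by the engine
except sqlite3.DatabaseError as exc:
    print("denied:", exc)
```

Because the engine itself vets every column access at statement-compilation time, no query path can bypass the policy — the property the record argues an external security layer cannot guarantee.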

  18. JASPAR 2016: a major expansion and update of the open-access database of transcription factor binding profiles

    PubMed Central

    Mathelier, Anthony; Fornes, Oriol; Arenillas, David J.; Chen, Chih-yu; Denay, Grégoire; Lee, Jessica; Shi, Wenqiang; Shyr, Casper; Tan, Ge; Worsley-Hunt, Rebecca; Zhang, Allen W.; Parcy, François; Lenhard, Boris; Sandelin, Albin; Wasserman, Wyeth W.

    2016-01-01

    JASPAR (http://jaspar.genereg.net) is an open-access database storing curated, non-redundant transcription factor (TF) binding profiles representing transcription factor binding preferences as position frequency matrices for multiple species in six taxonomic groups. For this 2016 release, we expanded the JASPAR CORE collection with 494 new TF binding profiles (315 in vertebrates, 11 in nematodes, 3 in insects, 1 in fungi and 164 in plants) and updated 59 profiles (58 in vertebrates and 1 in fungi). The introduced profiles represent an 83% expansion and 10% update when compared to the previous release. We updated the structural annotation of the TF DNA binding domains (DBDs) following a published hierarchical structural classification. In addition, we introduced 130 transcription factor flexible models trained on ChIP-seq data for vertebrates, which capture dinucleotide dependencies within TF binding sites. This new JASPAR release is accompanied by a new web tool to infer JASPAR TF binding profiles recognized by a given TF protein sequence. Moreover, we provide the users with a Ruby module complementing the JASPAR API to ease programmatic access and use of the JASPAR collection of profiles. Finally, we provide the JASPAR2016 R/Bioconductor data package with the data of this release. PMID:26531826
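    As a toy illustration of the position frequency matrices mentioned above, the sketch below turns a set of counts into a log-odds scoring matrix and scores candidate sites. The 4 × 4 counts are invented, not a real JASPAR profile, and the pseudocount and uniform background are common conventions rather than JASPAR specifics.

```python
import math

# Invented position frequency matrix: counts per position, rows = A, C, G, T.
PFM = {
    "A": [20,  1,  0,  2],
    "C": [ 1, 18,  1,  1],
    "G": [ 1,  2, 20,  1],
    "T": [ 1,  2,  2, 19],
}

def pfm_to_pwm(pfm, pseudocount=1.0, background=0.25):
    """Convert counts to log2-odds scores against a uniform background."""
    n = len(next(iter(pfm.values())))
    totals = [sum(pfm[b][i] for b in pfm) + 4 * pseudocount for i in range(n)]
    return {b: [math.log2((pfm[b][i] + pseudocount) / totals[i] / background)
                for i in range(n)]
            for b in pfm}

def score(pwm, site):
    """Sum the per-position scores of a candidate binding site."""
    return sum(pwm[base][i] for i, base in enumerate(site))

pwm = pfm_to_pwm(PFM)
print(score(pwm, "ACGT"), score(pwm, "TTTT"))  # consensus vs. poor site
```

The dinucleotide-dependent "flexible models" introduced in this release go beyond such independent-position scoring, but the matrix form above is the core data JASPAR stores.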

  19. The data and system Nikkei Telecom "Industry/Technology Information Service"

    NASA Astrophysics Data System (ADS)

    Kurata, Shizuya; Sueyoshi, Yukio

    Nihon Keizai Shimbun started supplying the "Industry/Technology Information Service" in July 1989 as part of the Nikkei Telecom package, an online information service that uses personal computers as its terminals. Previously, Nikkei's database services mainly covered areas such as the economy, corporations and markets. The new "Industry/Technology Information Service", whose main data cover industry-by-industry (semi-macro) information, is attracting a good deal of attention as the first service to supply a science- and technology-related database, an area not covered before. It is also attracting technical attention because it offers gateway access to JOIS, the first-class science and technology file in Japan. This report briefly introduces the data and system of the "Industry/Technology Information Service".

  20. Let your fingers do the walking: The projects most invaluable tool

    NASA Technical Reports Server (NTRS)

    Zirk, Deborah A.

    1993-01-01

    The barrage of information pertaining to the software being developed for a project can be overwhelming. Current status information, as well as the statistics and history of software releases, should be 'at the fingertips' of project management and key technical personnel. This paper discusses the development, configuration, capabilities, and operation of a relational database, the System Engineering Database (SEDB), which was designed to assist management in monitoring the tasks performed by the Network Control Center (NCC) Project. This database has proven to be an invaluable project tool and is utilized daily to support all project personnel.
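    A release-tracking relational database of this kind can be sketched with a couple of tables. The schema, table names, and sample rows below are invented for illustration (the actual SEDB design is not described in the record):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE releases (
        release_id INTEGER PRIMARY KEY,
        version    TEXT NOT NULL,
        released   DATE
    );
    CREATE TABLE tasks (
        task_id    INTEGER PRIMARY KEY,
        release_id INTEGER REFERENCES releases(release_id),
        title      TEXT NOT NULL,
        status     TEXT DEFAULT 'open'
    );
""")
conn.execute("INSERT INTO releases VALUES (1, '2.1', '1993-03-01')")
conn.executemany(
    "INSERT INTO tasks (release_id, title, status) VALUES (?, ?, ?)",
    [(1, "update scheduler", "closed"), (1, "fix telemetry parser", "open")])

# A status summary per release -- the kind of query management would run daily.
rows = conn.execute("""
    SELECT r.version, t.status, COUNT(*)
    FROM tasks t JOIN releases r USING (release_id)
    GROUP BY r.version, t.status ORDER BY t.status
""").fetchall()
print(rows)
```

Keeping task status joined to release history is what puts "statistics and history of software releases" one query away, as the paper describes.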

  1. SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang

    2018-06-01

    SModelS is an automated tool for the interpretation of simplified-model results from the LHC. It can decompose models of new physics obeying a Z2 symmetry into simplified-model components and compare these against a large database of experimental results. The first release of SModelS, v1.0, used only the cross-section upper-limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps make it possible to combine contributions to the same signal region from different simplified models. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on topology coverage, an extended database of experimental results, and major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper-limit and efficiency-map results are dealt with in parallel. Detailed instructions for code usage are also provided.
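    The gain from efficiency maps can be sketched with toy numbers. These are invented values, not SModelS code or a real analysis: each simplified-model topology contributes predicted events in proportion to its cross section and the signal-region efficiency, the contributions are summed, and the total is compared with the observed limit.

```python
LUMI = 36.0  # integrated luminosity in fb^-1 (hypothetical)

# (simplified-model topology, cross section in fb, efficiency in this signal region);
# topology labels and numbers are invented for illustration.
contributions = [
    ("T1", 5.0, 0.10),
    ("T2", 3.0, 0.05),
]

# Efficiency maps let contributions to the same signal region be summed.
predicted = sum(xsec * eff * LUMI for _, xsec, eff in contributions)

observed_upper_limit = 30.0  # events, hypothetical 95% CL limit
r_value = predicted / observed_upper_limit
print(f"predicted events: {predicted:.1f}, r = {r_value:.2f}")
# r > 1 would mean the combined simplified-model signal is excluded
```

With upper-limit maps alone, each topology would be compared to the limit separately, so a model whose signal is split across topologies (as here) could escape exclusion; the sum is what makes the combined constraint stronger.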

  2. Documentation of the U.S. Geological Survey Oceanographic Time-Series Measurement Database

    USGS Publications Warehouse

    Montgomery, Ellyn T.; Martini, Marinna A.; Lightsom, Frances L.; Butman, Bradford

    2008-01-02

    This report describes the instrumentation and platforms used to make the measurements; the methods used to process the data, apply quality-control criteria, and archive the data; the data storage format; and how the data are released and distributed. The report also includes instructions on how to access the data from the online database at http://stellwagen.er.usgs.gov/. As of 2016, the database contains about 5,000 files, which may include observations of current velocity, wave statistics, ocean temperature, conductivity, pressure, and light transmission at one or more depths over some duration of time.

  3. Controlled drug delivery systems: past forward and future back.

    PubMed

    Park, Kinam

    2014-09-28

    Controlled drug delivery technology has progressed over the last six decades. This progression began in 1952 with the introduction of the first sustained release formulation. The 1st generation of drug delivery (1950-1980) focused on developing oral and transdermal sustained release systems and establishing controlled drug release mechanisms. The 2nd generation (1980-2010) was dedicated to the development of zero-order release systems, self-regulated drug delivery systems, long-term depot formulations, and nanotechnology-based delivery systems. The latter part of the 2nd generation was largely focused on studying nanoparticle formulations. The Journal of Controlled Release (JCR) has played a pivotal role in the 2nd generation of drug delivery technologies, and it will continue playing a leading role in the next generation. The best path towards a productive 3rd generation of drug delivery technology requires an honest, open dialog without any preconceived ideas of the past. The drug delivery field needs to take a bold approach to designing future drug delivery formulations primarily based on today's necessities, to produce the necessary innovations. The JCR provides a forum for sharing the new ideas that will shape the 3rd generation of drug delivery technology. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Publications - AR 2015 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  5. Publications - GMC 280 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  6. The cost effectiveness of long-acting/extended-release antipsychotics for the treatment of schizophrenia: a systematic review of economic evaluations.

    PubMed

    Achilla, Evanthia; McCrone, Paul

    2013-04-01

    Antipsychotic medication is the mainstay of treatment in schizophrenia. Long-acting medication has potential advantages over daily medication in improving compliance and thus reducing hospitalization and relapse rates. The high acquisition and administration costs of such formulations raise the need for pharmacoeconomic evaluation. The aim of this article is to provide a comprehensive review of the available evidence on the cost effectiveness of long-acting/extended-release antipsychotic medication and critically appraise the strength of evidence reported in the studies from a methodological viewpoint. Relevant studies were identified by searching five electronic databases: PsycINFO, MEDLINE, EMBASE, the NHS Economic Evaluation Database and the Health Technology Assessment database (HTA). Search terms included, but were not limited to, 'long-acting injection', 'economic evaluation', 'cost-effectiveness' and 'cost-utility'. No limits were applied for publication dates and language. Full economic evaluations on long-acting/extended-release antipsychotics were eligible for inclusion. Observational studies and clinical trials were also checked for cost-effectiveness information. Conference abstracts and poster presentations on the cost effectiveness of long-acting antipsychotics were excluded. Thirty-two percent of identified studies met the selection criteria. Pertinent abstracts were reviewed independently by two reviewers. Relevant studies underwent data extraction by one reviewer and were checked by a second, with any discrepancies being clarified during consensus meetings. Eligible studies were assessed for methodological quality using the quality checklist for economic studies recommended by the NICE guideline on interventions in the treatment and management of schizophrenia. After applying the selection criteria, the final sample consisted of 28 studies. 
The majority of studies demonstrated that risperidone long-acting injection, relative to oral or other long-acting injectable drugs, was associated with cost savings and additional clinical benefits and was the dominant strategy in terms of cost effectiveness. However, olanzapine in either oral or long-acting injectable formulation dominated risperidone long-acting injection in a Slovenian and a US study. Furthermore, in two UK studies, the use of long-acting risperidone increased the hospitalization days and overall healthcare costs, relative to other atypical or typical long-acting antipsychotics. Finally, paliperidone extended-release was the most cost-effective treatment compared with atypical oral or typical long-acting formulations. From a methodological viewpoint, most studies employed decision analytic models, presented results using average cost-effectiveness ratios and conducted comprehensive sensitivity analyses to test the robustness of the results. Variations in study methodologies restrict consistent and direct comparisons across countries. The exclusion of a large body of potentially relevant conference abstracts as well as some papers being unobtainable may have increased the likelihood of misrepresenting the overall cost effectiveness of long-acting antipsychotics. Finally, the review process was restricted to qualitative assessment rather than a quantitative synthesis of results, which could provide more robust conclusions. Atypical long-acting (especially risperidone)/extended-release antipsychotic medication is likely to be a cost-effective, first-line strategy for managing schizophrenia, compared with long-acting haloperidol and other oral or depot formulations, irrespective of country-specific differences. However, inconsistencies in study methodologies and in the reporting of study findings suggest caution needs to be applied in interpreting these findings.
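
    The cost-effectiveness terminology used in this review reduces to two ratios, which can be sketched as follows; all costs and QALY figures below are hypothetical, not drawn from the reviewed studies.

```python
# Average and incremental cost-effectiveness ratios of the kind reported by
# the reviewed economic evaluations. All numbers below are hypothetical.

def acer(cost, effect):
    """Average cost-effectiveness ratio: cost per unit of effect (e.g. per QALY)."""
    return cost / effect

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy A versus comparator B.
    Returns None when A dominates B (cheaper and at least as effective)."""
    d_cost, d_effect = cost_a - cost_b, effect_a - effect_b
    if d_cost <= 0 and d_effect >= 0:
        return None  # dominance: A saves money without losing effectiveness
    return d_cost / d_effect

lai = (12000.0, 0.80)   # hypothetical long-acting injection: (cost, QALYs)
oral = (13500.0, 0.75)  # hypothetical oral comparator

print(acer(*lai))                 # cost per QALY for the LAI strategy alone
print(icer(*lai, *oral) is None)  # True -> LAI dominates the comparator
```

    "Dominance", as used in the review, is the first branch: the strategy is both cheaper and more effective, so no trade-off ratio needs to be priced.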

  7. Reactome graph database: Efficient access to complex pathway data

    PubMed Central

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902

  8. Reactome graph database: Efficient access to complex pathway data.

    PubMed

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
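
    The efficiency claim above (link-following instead of join-heavy queries) can be illustrated with a plain adjacency structure; the pathway names below are invented, and the Cypher fragment in the comment is an assumption rather than Reactome's actual schema.

```python
from collections import deque

# In a graph store, each node holds direct references to its neighbours, so a
# k-step pathway traversal follows k pointer hops instead of k relational
# self-joins. A Cypher query of the same shape might look roughly like:
#   MATCH (p:Pathway {displayName: $name})-[:hasEvent*]->(e) RETURN e
# (label and relationship names here are assumptions).
graph = {
    "Signal Transduction": ["MAPK cascade", "PI3K/AKT signalling"],
    "MAPK cascade": ["ERK activation"],
    "PI3K/AKT signalling": ["AKT target regulation"],
    "ERK activation": [],
    "AKT target regulation": [],
}

def reachable(start, graph):
    """Breadth-first traversal: every event reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(reachable("Signal Transduction", graph)))
```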

  9. Publications - AR 2008 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  10. Publications - AR 2007 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  11. Publications - AR 2001 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  12. Publications - GMC 379 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  13. Publications - AR 2002 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  14. Facts about Vitamin K

    MedlinePlus

    ... the amount of vitamin K they contain (USDA-ARS, 2015). Table 2. Sources of vitamin K. Food ... U.S. Department of Agriculture, Agricultural Research Service USDA-ARS. (2015). National Nutrient Database for Standard Reference, Release ...

  15. Materials Science News | Materials Science | NREL

    Science.gov Websites

    News Release: NREL Opens Large Database of Inorganic Thin-Film Materials

  16. Technology in Science and Mathematics Education.

    ERIC Educational Resources Information Center

    Buccino, Alphonse

    Provided are several perspectives on technology, addressing changes in learners related to technology, changes in contemporary life related to technology, and changes in subject areas related to technology (indicating that technology has created such new tools for inquiry as computer programming, word processing, online database searches, and…

  17. The HITRAN 2008 Molecular Spectroscopic Database

    NASA Technical Reports Server (NTRS)

    Rothman, Laurence S.; Gordon, Iouli E.; Barbe, Alain; Benner, D. Chris; Bernath, Peter F.; Birk, Manfred; Boudon, V.; Brown, Linda R.; Campargue, Alain; Champion, J.-P.; hide

    2009-01-01

    This paper describes the status of the 2008 edition of the HITRAN molecular spectroscopic database. The new edition is the first official public release since the 2004 edition, although a number of crucial updates had been made available online since 2004. The HITRAN compilation consists of several components that serve as input for radiative-transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are not resolved; individual line parameters and absorption cross-sections for bands in the ultraviolet; refractive indices of aerosols; tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for forty-two molecules including many of their isotopologues.
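
    As a sketch of how such line parameters enter a radiative-transfer calculation, the fragment below builds an absorption coefficient from a pressure-broadened (Lorentzian) line profile; the line positions, intensities, and half-widths are invented, not HITRAN entries.

```python
import math

def lorentz(nu, nu0, gamma):
    """Unit-area Lorentzian profile at wavenumber nu (cm^-1), centred at nu0
    with half-width gamma (cm^-1)."""
    return (gamma / math.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def absorption_coeff(nu, lines, n_density):
    """k(nu) = n * sum_i S_i * f_i(nu) for lines given as
    (position, intensity, half-width) tuples."""
    return n_density * sum(s * lorentz(nu, nu0, g) for nu0, s, g in lines)

# Invented line list: two nearby lines and a representative number density.
lines = [(1600.0, 1.0e-20, 0.07), (1600.5, 5.0e-21, 0.07)]
print(absorption_coeff(1600.0, lines, 2.5e19))  # absorption at line centre, cm^-1
```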

  18. Minimotif Miner 3.0: database expansion and significantly improved reduction of false-positive predictions from consensus sequences.

    PubMed

    Mi, Tian; Merlin, Jerlin Camilus; Deverasetty, Sandeep; Gryk, Michael R; Bill, Travis J; Brooks, Andrew W; Lee, Logan Y; Rathnayake, Viraj; Ross, Christian A; Sargeant, David P; Strong, Christy L; Watts, Paula; Rajasekaran, Sanguthevar; Schiller, Martin R

    2012-01-01

    Minimotif Miner (MnM, available at http://minimotifminer.org or http://mnm.engr.uconn.edu) is an online database for identifying new minimotifs in protein queries. Minimotifs are short contiguous peptide sequences that have a known function in at least one protein. Here we report the third release of the MnM database, which has now grown 60-fold to approximately 300,000 minimotifs. Since short minimotifs are by their nature not very complex, we also summarize a new set of false-positive filters and linear regression scoring that vastly enhance minimotif prediction accuracy on a test data set. This online database can be used to predict new functions in proteins and causes of disease.
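
    Consensus-based minimotif matching can be sketched as a plain wildcard scan, which also shows why short motifs over-predict and thus motivate MnM's false-positive filters; this uses the classic SH3-domain-binding consensus PxxP and is not the MnM pipeline itself.

```python
import re

# Scan a protein sequence for a short consensus minimotif, where 'x' means
# "any residue". Matches are non-overlapping, left to right.
def scan_motif(sequence, consensus):
    """Return (position, matched_peptide) pairs for a consensus with 'x' wildcards."""
    pattern = consensus.replace("x", ".")
    return [(m.start(), m.group()) for m in re.finditer(pattern, sequence)]

print(scan_motif("MKPRRPLPPLPSA", "PxxP"))  # [(2, 'PRRP'), (7, 'PPLP')]
```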

  19. Publications - GMC 322 | Alaska Division of Geological & Geophysical

    Science.gov Websites


  20. The Pfam protein families database

    PubMed Central

    Finn, Robert D.; Mistry, Jaina; Tate, John; Coggill, Penny; Heger, Andreas; Pollington, Joanne E.; Gavin, O. Luke; Gunasekaran, Prasad; Ceric, Goran; Forslund, Kristoffer; Holm, Liisa; Sonnhammer, Erik L. L.; Eddy, Sean R.; Bateman, Alex

    2010-01-01

    Pfam is a widely used database of protein families and domains. This article describes a set of major updates that we have implemented in the latest release (version 24.0). The most important change is that we now use HMMER3, the latest version of the popular profile hidden Markov model package. This software is ∼100 times faster than HMMER2 and is more sensitive due to the routine use of the forward algorithm. The move to HMMER3 has necessitated numerous changes to Pfam that are described in detail. Pfam release 24.0 contains 11 912 families, of which a large number have been significantly updated during the past two years. Pfam is available via servers in the UK (http://pfam.sanger.ac.uk/), the USA (http://pfam.janelia.org/) and Sweden (http://pfam.sbc.su.se/). PMID:19920124

  1. Tritium environmental transport studies at TFTR

    NASA Astrophysics Data System (ADS)

    Ritter, P. D.; Dolan, T. J.; Longhurst, G. R.

    1993-06-01

    Environmental tritium concentrations will be measured near the Tokamak Fusion Test Reactor (TFTR) to help validate dynamic models of tritium transport in the environment. For model validation the database must contain sequential measurements of tritium concentrations in key environmental compartments. Since complete containment of tritium is an operational goal, the supplementary monitoring program should be able to glean useful data from an unscheduled acute release. Portable air samplers will be used to take samples automatically every 4 hours for a week after an acute release, thus obtaining the time resolution needed for code validation. Samples of soil, vegetation, and foodstuffs will be gathered daily at the same locations as the active air monitors. The database may help validate the plant/soil/air part of tritium transport models and enhance environmental tritium transport understanding for the International Thermonuclear Experimental Reactor (ITER).

  2. DECADE web portal: toward the integration of MaGa, EarthChem and VOTW data systems to further the knowledge on Earth degassing

    NASA Astrophysics Data System (ADS)

    Cardellini, Carlo; Frigeri, Alessandro; Lehnert, Kerstin; Ash, Jason; McCormick, Brendan; Chiodini, Giovanni; Fischer, Tobias; Cottrell, Elizabeth

    2015-04-01

    The release of volatiles from the Earth's interior takes place in both volcanic and non-volcanic areas of the planet. The comprehension of such a complex process, and the improvement of current estimates of global carbon emissions, will greatly benefit from the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are neither linked nor interoperable. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing interoperability between three data systems that will make their data accessible via the DECADE portal: (1) the Smithsonian Institution's Global Volcanism Program database (VOTW) of volcanic activity data, (2) the EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions), which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. The DECADE web portal will create a powerful search engine over these databases from a single entry point and will return comprehensive multi-component datasets. A user will be able, for example, to obtain data relating to the compositions of emitted gases, the compositions and ages of the erupted products, and coincident activity for a specific volcano. This level of capability requires complete synergy between the databases, including the availability of standards-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process, allowing exploration of degassing-related datasets over previously unexplored spatial and temporal ranges.

  3. Remote modulation of neural activities via near-infrared triggered release of biomolecules.

    PubMed

    Li, Wei; Luo, Rongcong; Lin, Xudong; Jadhav, Amol D; Zhang, Zicong; Yan, Li; Chan, Chung-Yuan; Chen, Xianfeng; He, Jufang; Chen, Chia-Hung; Shi, Peng

    2015-10-01

    The capability to remotely control the release of biomolecules provides a unique opportunity to monitor and regulate neural signaling, which spans extraordinary spatial and temporal scales. While various strategies, including local perfusion, molecular "uncaging", or photosensitive polymeric materials, have been applied to achieve controlled release of neuro-active substances, it is still challenging to adopt these technologies in many experimental contexts that require a straightforward but versatile loading-releasing mechanism. Here, we develop a synthetic strategy for remotely controllable release of neuro-modulating molecules. This platform is based on microscale composite hydrogels that incorporate polypyrrole (PPy) nanoparticles as photo-thermal transducers and is triggered by near-infrared (NIR) light irradiation. Specifically, we first demonstrate the utility of our technology by recapitulating the "turning assay" and "collapse assay", which involve localized treatment of chemotactic factors (e.g. Netrin or Semaphorin 3A) to subcellular neural elements and have been extensively used in studying axonal pathfinding. On a network scale, the photo-sensitive microgels are also validated for light-controlled release of neurotransmitters (e.g. glutamate). A single NIR-triggered release is sufficient to change the dynamics of a cultured hippocampal neuron network. Taking advantage of NIR's capability to penetrate deep into live tissue, this technology is further shown to work similarly well in vivo, as evidenced by synchronized spiking activity in response to NIR-triggered delivery of glutamate in rat auditory cortex, demonstrating remote control of brain activity without any genetic modifications. Notably, our nano-composite microgels are capable of delivering various molecules, ranging from small chemicals to large proteins, without involving any crosslinking chemistry. 
Such great versatility and ease-of-use will likely make our optically-controlled delivery technology a general and important tool in cell biology research. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Using Systems Thinking to Evaluate Formative Feedback in UK Higher Education: The Case of Classroom Response Technology

    ERIC Educational Resources Information Center

    Pagano, Rosane; Paucar-Caceres, Alberto

    2013-01-01

    Providing high quality formative feedback to large and very diverse cohorts of students taking short intensive blocks of teaching (block release) has become crucial in management education provision. The paper proposes the exploitation of classroom response technology (CRT) to evaluate learning activities of students taking block release modules.…

  5. TEDS-M 2008 User Guide for the International Database. Supplement 4: TEDS-M Released Mathematics and Mathematics Pedagogy Knowledge Assessment Items

    ERIC Educational Resources Information Center

    Brese, Falk, Ed.

    2012-01-01

    The goal for selecting the released set of test items was to have approximately 25% of each of the full item sets for mathematics content knowledge (MCK) and mathematics pedagogical content knowledge (MPCK) that would represent the full range of difficulty, content, and item format used in the TEDS-M study. The initial step in the selection was to…

  6. Construction of databases: advances and significance in clinical research.

    PubMed

    Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian

    2015-12-01

    Widely used in clinical research, the database is a new type of data management automation technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.
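
    The establish/input/verify workflow described above can be sketched with SQLite; the table, field names, and validity rule below are invented for illustration.

```python
import sqlite3

# Establishing a database, entering data, and verifying it at input time.
# The CHECK clause plays the role of a data-verification rule.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE patients (
        patient_id INTEGER PRIMARY KEY,
        age        INTEGER NOT NULL CHECK (age BETWEEN 0 AND 120),
        enrolled   TEXT    NOT NULL
    )
""")

conn.execute("INSERT INTO patients VALUES (1, 54, '2015-03-02')")  # valid record
try:
    conn.execute("INSERT INTO patients VALUES (2, 999, '2015-03-02')")
except sqlite3.IntegrityError:
    print("rejected: age out of range")  # verification catches the bad entry

print(conn.execute("SELECT COUNT(*) FROM patients").fetchone()[0])  # 1
```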

  7. Explosive Growth and Advancement of the NASA/IPAC Extragalactic Database (NED)

    NASA Astrophysics Data System (ADS)

    Mazzarella, Joseph M.; Ogle, P. M.; Fadda, D.; Madore, B. F.; Ebert, R.; Baker, K.; Chan, H.; Chen, X.; Frayer, C.; Helou, G.; Jacobson, J. D.; LaGue, C.; Lo, T. M.; Pevunova, O.; Schmitz, M.; Terek, S.; Steer, I.

    2014-01-01

    The NASA/IPAC Extragalactic Database (NED) is continuing to evolve in lock-step with the explosive growth of astronomical data and advancements in information technology. A new methodology is being used to fuse data from very large surveys. Selected parameters are first loaded into a new database layer and made available in areal searches before they are cross-matched with prior NED objects. Then a programmed, rule-based statistical approach is used to identify new objects and compute cross-identifications with existing objects where possible; otherwise associations between objects are derived based on positional uncertainties or spatial resolution differences. Approximately 62 million UV sources from the GALEX All-Sky Survey and Medium Imaging Survey catalogs have been integrated into NED using this new process. The December 2013 release also contains nearly half a billion sources from the 2MASS Point Source Catalog accessible in cone searches, while the large scale cross-matching is in progress. Forthcoming updates will fuse data from All-WISE, SDSS DR12, and other very large catalogs. This work is progressing in parallel with the equally important integration of data from the literature, which is also growing rapidly. Recent updates have also included H I and CO channel maps (data cubes), as well as substantial growth in redshifts, classifications, photometry, spectra and redshift-independent distances. The By Parameters search engine now incorporates a simplified form for entry of constraints, and support for long-running queries with machine-readable output. A new tool for exploring the environments of galaxies with measured radial velocities includes informative graphics and a method to assess the incompleteness of redshift measurements. The NED user interface is also undergoing a major transformation, providing more streamlined navigation and searching, and a modern development framework for future enhancements. 
For further information, please visit our poster (Fadda et al. 2014) and stop by the NED exhibit for a demo. NED is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
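
    The rule-based cross-identification step described above hinges on angular separation versus positional uncertainty; a minimal sketch, with invented coordinates and a 2-arcsecond tolerance chosen purely for illustration:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcseconds between two (RA, Dec) positions given
    in degrees, via the spherical law of cosines."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(min(1.0, cos_sep))) * 3600.0

# Invented positions: an existing catalogued object and an incoming survey source.
known = ("ExampleGalaxy", 1.6155, 27.7089)
new_src = (1.6157, 27.7090)

sep = ang_sep_arcsec(new_src[0], new_src[1], known[1], known[2])
# Cross-identify if within tolerance; otherwise treat as a new object.
print(known[0] if sep < 2.0 else "new object")
```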

  8. PREVENTION REFERENCE MANUAL: CONTROL TECHNOLOGIES, VOL. 2. POST-RELEASE MITIGATION MEASURES FOR CONTROLLING ACCIDENTAL RELEASES OF AIR TOXICS

    EPA Science Inventory

    The volume discusses prevention and protection measures for controlling accidental releases of air toxics. The probability of accidental releases depends on the extent to which deviations (in magnitude and duration) in the process can be tolerated before a loss of chemical contai...

  9. Inorganic Crystal Structure Database (ICSD)

    National Institute of Standards and Technology Data Gateway

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe(FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  10. Satellite Communications Technology Database. Part 2

    NASA Technical Reports Server (NTRS)

    2001-01-01

    The Satellite Communications Technology Database is a compilation of data on state-of-the-art Ka-band technologies current as of January 2000. Most U.S. organizations have not published much of their Ka-band technology data, and so the great majority of this data is drawn largely from Japanese, European, and Canadian publications and Web sites. The data covers antennas, high power amplifiers, low noise amplifiers, MMIC devices, microwave/IF switch matrices, SAW devices, ASIC devices, power and data storage. The data herein is raw, and is often presented simply as the download of a table or figure from a site, showing specified technical characteristics, with no further explanation.

  11. New generic indexing technology

    NASA Technical Reports Server (NTRS)

    Freeston, Michael

    1996-01-01

    There has been no fundamental change in the dynamic indexing methods supporting database systems since the invention of the B-tree twenty-five years ago. And yet the whole classical approach to dynamic database indexing has long since become inappropriate and increasingly inadequate. We are moving rapidly from the conventional one-dimensional world of fixed-structure text and numbers to a multi-dimensional world of variable structures, objects and images, in space and time. But, even before leaving the confines of conventional database indexing, the situation is highly unsatisfactory. In fact, our research has led us to question the basic assumptions of conventional database indexing. We have spent the past ten years studying the properties of multi-dimensional indexing methods, and in this paper we draw the strands of a number of developments together - some quite old, some very new, to show how we now have the basis for a new generic indexing technology for the next generation of database systems.
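
    The contrast drawn above can be made concrete with the simplest possible multi-dimensional structure, a fixed grid: a box query touches only the cells it overlaps, something no one-dimensional B-tree ordering of (x, y) keys can guarantee. The cell size below is an arbitrary illustrative choice.

```python
from collections import defaultdict

CELL = 10.0  # arbitrary cell edge length for this sketch

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

class GridIndex:
    """Toy two-dimensional index: points are bucketed by grid cell."""
    def __init__(self):
        self.cells = defaultdict(list)

    def insert(self, x, y, payload):
        self.cells[cell_of(x, y)].append((x, y, payload))

    def box_query(self, x0, y0, x1, y1):
        """Payloads with x0 <= x <= x1 and y0 <= y <= y1, visiting only
        the grid cells the box overlaps."""
        hits = []
        for cx in range(int(x0 // CELL), int(x1 // CELL) + 1):
            for cy in range(int(y0 // CELL), int(y1 // CELL) + 1):
                for x, y, p in self.cells.get((cx, cy), []):
                    if x0 <= x <= x1 and y0 <= y <= y1:
                        hits.append(p)
        return hits

idx = GridIndex()
idx.insert(3.0, 4.0, "a")
idx.insert(35.0, 4.0, "b")
idx.insert(7.0, 8.0, "c")
print(sorted(idx.box_query(0, 0, 10, 10)))  # ['a', 'c']
```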

  12. Some Reliability Issues in Very Large Databases.

    ERIC Educational Resources Information Center

    Lynch, Clifford A.

    1988-01-01

    Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…

  13. Library Instruction and Online Database Searching.

    ERIC Educational Resources Information Center

    Mercado, Heidi

    1999-01-01

    Reviews changes in online database searching in academic libraries. Topics include librarians conducting all searches; the advent of end-user searching and the need for user instruction; compact disk technology; online public catalogs; the Internet; full text databases; electronic information literacy; user education and the remote library user;…

  14. Content Independence in Multimedia Databases.

    ERIC Educational Resources Information Center

    de Vries, Arjen P.

    2001-01-01

    Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…

  15. EPAUS9R - An Energy Systems Database for use with the Market Allocation (MARKAL) Model

    EPA Pesticide Factsheets

    EPA’s MARKAL energy system databases estimate future-year technology dispersals and associated emissions. These databases are valuable tools for exploring a variety of future scenarios for the U.S. energy-production systems that can impact climate change c

  16. The EMBL nucleotide sequence database

    PubMed Central

    Stoesser, Guenter; Baker, Wendy; van den Broek, Alexandra; Camon, Evelyn; Garcia-Pastor, Maria; Kanz, Carola; Kulikova, Tamara; Lombard, Vincent; Lopez, Rodrigo; Parkinson, Helen; Redaschi, Nicole; Sterk, Peter; Stoehr, Peter; Tuli, Mary Ann

    2001-01-01

    The EMBL Nucleotide Sequence Database (http://www.ebi.ac.uk/embl/) is maintained at the European Bioinformatics Institute (EBI) in an international collaboration with the DNA Data Bank of Japan (DDBJ) and GenBank at the NCBI (USA). Data is exchanged amongst the collaborating databases on a daily basis. The major contributors to the EMBL database are individual authors and genome project groups. Webin is the preferred web-based submission system for individual submitters, whilst automatic procedures allow incorporation of sequence data from large-scale genome sequencing centres and from the European Patent Office (EPO). Database releases are produced quarterly. Network services allow free access to the most up-to-date data collection via ftp, email and World Wide Web interfaces. EBI’s Sequence Retrieval System (SRS), a network browser for databanks in molecular biology, integrates and links the main nucleotide and protein databases plus many specialized databases. For sequence similarity searching a variety of tools (e.g. Blitz, Fasta, BLAST) are available which allow external users to compare their own sequences against the latest data in the EMBL Nucleotide Sequence Database and SWISS-PROT. PMID:11125039

  17. ENVIRONMENTALLY-BENIGN POLYTETRAFLUOROETHYLENE (PTFE) COATINGS FOR MOLD RELEASE - PHASE II

    EPA Science Inventory

    GVD proposes to develop high performance, volatile organic compound (VOC)-free and perfluorooctanoic acid (PFOA)-free, non-stick mold release coatings based on its novel polytetrafluoroethylene (PTFE) fluoropolymer technology. Most commercial mold release agents make use of...

  18. Synchronous delivery of felodipine and metoprolol tartrate using monolithic osmotic pump technology.

    PubMed

    Zhao, Shiqing; Yu, Fanglin; Liu, Nan; Di, Zhong; Yan, Kun; Liu, Yan; Li, Ying; Zhang, Hui; Yang, Yang; Yang, Zhenbo; Li, Zhiping; Mei, Xingguo

    2016-11-01

    Synchronous sustained release of two drugs is urgently needed by patients requiring long-term combination therapy. However, sophisticated technologies are generally required to achieve simultaneous delivery of two drugs, especially drugs with different physico-chemical properties. The purpose of this study was to obtain concurrent release of felodipine and metoprolol tartrate, two drugs with completely different solubilities, from a simple monolithic osmotic pump system (FMOP). Two types of blocking agents were used in the monolithic osmotic pump tablets, and synchronous sustained release from FMOP was achieved in vitro. The tablets were also administered to beagle dogs, and the plasma levels of both drugs were determined by HPLC-MS/MS. The pharmacokinetic parameters were calculated using a non-compartmental model. Cmax of both felodipine and metoprolol from the osmotic pump tablets was lower, and tmax and mean residence time of both drugs were significantly longer, than those from immediate-release tablets. These results verified the prolonged release of felodipine and metoprolol tartrate from the osmotic pump formulation. Similar absorption rates of felodipine and metoprolol in beagles were also obtained with this formulation. Therefore, concordant release of two drugs with completely different solubilities may be achieved simply by using monolithic osmotic pump technology.
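
    The non-compartmental parameters reported above (Cmax, tmax, AUC, mean residence time) can be computed from a concentration-time profile with the linear trapezoidal rule. A sketch with illustrative numbers, not the study's data:

```python
def nca_params(times, conc):
    """Non-compartmental PK metrics: Cmax, tmax, AUC, and MRT (= AUMC/AUC),
    using the linear trapezoidal rule over the sampled interval."""
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    auc = aumc = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        auc += dt * (conc[i] + conc[i - 1]) / 2.0
        aumc += dt * (times[i] * conc[i] + times[i - 1] * conc[i - 1]) / 2.0
    return cmax, tmax, auc, aumc / auc

# Illustrative plasma profile (hours, concentration units arbitrary)
t = [0, 1, 2, 4, 8, 12]
c = [0.0, 4.0, 6.0, 5.0, 2.0, 1.0]
cmax, tmax, auc, mrt = nca_params(t, c)
```

    A sustained-release formulation would show a lower Cmax and a larger MRT than an immediate-release one on the same dose, which is the comparison the abstract reports.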

  19. Centrifugal air-assisted melt agglomeration for fast-release "granulet" design.

    PubMed

    Wong, Tin Wui; Musa, Nafisah

    2012-07-01

    Conventional melt pelletization and granulation processes produce round, dense agglomerates and irregularly shaped but porous agglomerates, respectively. This study aimed to design a centrifugal air-assisted melt agglomeration technology for the manufacture of spherical yet porous "granulets" that ease downstream manufacturing and enhance drug release. A bladeless agglomerator was developed, which used a shear-free air stream to mass the powder mixture of lactose filler, polyethylene glycol binder and the poorly water-soluble drug tolbutamide into "granulets". The inclination angle and number of vanes, the air-impermeable surface area of the air guide, the processing temperature, and the binder content and molecular weight were investigated with reference to "granulet" size, shape, texture and drug release properties. Unlike fluid-bed melt agglomeration with its vertical processing air flow, the air stream in the present technology moved centrifugally, rolling the processing mass into spherical but porous "granulets" with a drug release propensity higher than that of the physical powder mixture, the unprocessed drug, and dense pellets prepared in a high-shear mixer. The fast-release attribute of the "granulets" was ascribed to the porous matrix formed with a high level of polyethylene glycol as solubilizer. The agglomeration and drug release outcomes of the centrifugal air-assisted technology are unmet by the existing high-shear and fluid-bed melt agglomeration techniques. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. [Preparation of curcumin-EC sustained-release composite particles by supercritical CO2 anti-solvent technology].

    PubMed

    Bai, Wei-li; Yan, Ting-yuan; Wang, Zhi-xiang; Huang, De-chun; Yan, Ting-xuan; Li, Ping

    2015-01-01

    Curcumin-ethyl-cellulose (EC) sustained-release composite particles were prepared using supercritical CO2 anti-solvent technology. With drug loading and yield of the inclusion complex as evaluation indexes, and on the basis of single-factor tests, an orthogonal experimental design was used to optimize the preparation process of the curcumin-EC sustained-release composite particles. Drug loading, yield, particle size distribution, scanning electron microscopy (SEM), infrared spectroscopy (IR), differential scanning calorimetry (DSC) and in vitro dissolution experiments were used to evaluate the optimal process combination. The optimized process conditions from the orthogonal experiment were as follows: crystallization temperature 45 °C, crystallization pressure 10 MPa, curcumin concentration 8 g/L, solvent flow rate 0.9 mL/min, and CO2 velocity 4 L/min. Under the optimal conditions, the average drug loading and yield of the curcumin-EC sustained-release composite particles were 33.01% and 83.97%, and the average particle size was 20.632 μm. IR and DSC analysis showed that curcumin might complex with EC. In vitro dissolution experiments showed that the curcumin-EC composite particles had a good sustained-release effect. Curcumin-EC sustained-release composite particles can thus be prepared by supercritical CO2 anti-solvent technology.

  1. Assembled modules technology for site-specific prolonged delivery of norfloxacin.

    PubMed

    Oliveira, Paulo Renato; Bernardi, Larissa Sakis; Strusi, Orazio Luca; Mercuri, Salvatore; Segatto Silva, Marcos A; Colombo, Paolo; Sonvico, Fabio

    2011-02-28

    The aim of this research was to design and study norfloxacin (NFX) release under floating conditions from compressed hydrophilic matrices of hydroxypropylmethylcellulose (HPMC) or poly(ethylene oxide) (PEO). Module assembling technology was used for drug delivery system manufacturing. Two cylindrical matrix modules with curved bases, identified as female and male, were assembled in a void configuration by friction-interlocking their concave bases, obtaining a floating release system. The drug release and floatation behavior of this assembly were investigated. Due to the higher surface area exposed to the release medium, faster release was observed for individual modules compared with their assembled configuration, independently of the polymer used and its concentration. The release curves, analyzed using the Korsmeyer exponential equation and the Peppas & Sahlin binomial equation, showed that drug release was controlled both by drug diffusion and by polymer relaxation or erosion mechanisms. However, convective transport was predominant with PEO and at low polymer content. NFX release from the PEO polymeric matrix was more erosion-dependent than from HPMC. The assembled systems were able to float in vitro for up to 240 min, indicating that this drug delivery system could provide gastro-retentive site-specific release for increasing norfloxacin bioavailability. Copyright © 2010. Published by Elsevier B.V.
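
    The Korsmeyer exponential equation mentioned above, Mt/Minf = k * t**n, is commonly fitted by linear regression on log-transformed release data; the exponent n then indicates whether diffusion or relaxation/erosion dominates. A sketch on synthetic data (the k and n values are illustrative, not the article's fits):

```python
import math

def fit_korsmeyer_peppas(times, fractions):
    """Fit Mt/Minf = k * t**n by ordinary least squares on
    log(fraction) vs log(time); returns (k, n)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope

# Synthetic profile generated with k = 0.1, n = 0.5 (Fickian diffusion)
ts = [1, 2, 4, 8, 16]
fs = [0.1 * t ** 0.5 for t in ts]
k, n = fit_korsmeyer_peppas(ts, fs)
```

    For cylindrical matrices, n near 0.45 is conventionally read as diffusion-controlled release and larger n as increasing relaxation/erosion contribution, which is the distinction the abstract draws between HPMC and PEO.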

  2. RNA-Seq reveals complex genetic response to Deepwater Horizon oil release in Fundulus grandis.

    PubMed

    Garcia, Tzintzuni I; Shen, Yingjia; Crawford, Douglas; Oleksiak, Marjorie F; Whitehead, Andrew; Walter, Ronald B

    2012-09-12

    The release of oil resulting from the blowout of the Deepwater Horizon (DH) drilling platform was one of the largest in history, discharging more than 189 million gallons of oil, and was subject to widespread application of oil dispersants. This event impacted a wide range of ecological habitats with a complex mix of pollutants whose biological impact is not yet fully understood. To better understand the effects on a vertebrate genome, we studied gene expression in the salt marsh minnow Fundulus grandis, which is local to the northern coast of the Gulf of Mexico and is a sister species of the ecotoxicological model Fundulus heteroclitus. To assess genomic changes, we quantified mRNA expression using high-throughput sequencing (RNA-Seq) in F. grandis populations in the marshes and estuaries impacted by the DH oil release. This application of RNA-Seq to a non-model, wild, and ecologically significant organism is an important evaluation of the technology for quickly assessing similar events in the future. Our de novo assembly of the RNA-Seq data produced a large set of sequences that included many duplicates and fragments. In many cases several of these could be associated with a common reference sequence by using BLAST to query a reference database. This reduced the set of significant genes to 1,070 down-regulated and 1,251 up-regulated genes. These genes indicate a broad and complex genomic response to DH oil exposure, including the expected AHR-mediated response and CYP genes. A response to hypoxic conditions and an immune response are also indicated. Several genes in the choriogenin family were down-regulated in the exposed group, a response that is consistent with AH exposure. These analyses are in agreement with oligonucleotide-based microarray analyses, and describe only a subset of the significant genes with aberrant regulation in the exposed set.
RNA-Seq may thus be successfully applied to feral and extremely polymorphic organisms that lack a genome sequence assembly in order to address timely environmental problems. Additionally, the observed changes in a large set of transcript expression levels are indicative of a complex response to the varied petroleum components to which the fish were exposed.
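
    The fragment-collapsing and regulation calls described above (many assembled fragments mapping to one reference gene, then classification as up- or down-regulated) can be sketched as follows. Gene names, fold changes, and the threshold are hypothetical, for illustration only:

```python
from collections import defaultdict

def collapse_and_classify(hits, threshold=1.0):
    """Collapse assembled fragments onto their shared reference gene by
    averaging log2 fold changes, then call regulation at |log2FC| >= threshold."""
    by_gene = defaultdict(list)
    for ref_gene, log2fc in hits:
        by_gene[ref_gene].append(log2fc)
    calls = {}
    for gene, vals in by_gene.items():
        mean = sum(vals) / len(vals)
        if mean >= threshold:
            calls[gene] = "up"
        elif mean <= -threshold:
            calls[gene] = "down"
        else:
            calls[gene] = "unchanged"
    return calls

# Two fragments hitting the same reference gene are merged into one call.
hits = [("cyp1a", 3.2), ("cyp1a", 2.8), ("chg1", -2.5), ("actb", 0.1)]
calls = collapse_and_classify(hits)
```

    In a real pipeline the (gene, fold-change) pairs would come from the BLAST mapping and expression quantification steps, and significance testing would precede any threshold call.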

  3. Cell-targetable DNA nanocapsules for spatiotemporal release of caged bioactive small molecules

    NASA Astrophysics Data System (ADS)

    Veetil, Aneesh T.; Chakraborty, Kasturi; Xiao, Kangni; Minter, Myles R.; Sisodia, Sangram S.; Krishnan, Yamuna

    2017-12-01

    Achieving triggered release of small molecules with spatial and temporal precision at designated cells within an organism remains a challenge. By combining a cell-targetable, icosahedral DNA-nanocapsule loaded with photoresponsive polymers, we show cytosolic delivery of small molecules with the spatial resolution of single endosomes in specific cells in Caenorhabditis elegans. Our technology can report on the extent of small molecules released after photoactivation as well as pinpoint the location at which uncaging of the molecules occurred. We apply this technology to release dehydroepiandrosterone (DHEA), a neurosteroid that promotes neurogenesis and neuron survival, and determined the timescale of neuronal activation by DHEA, using light-induced release of DHEA from targeted DNA nanocapsules. Importantly, sequestration inside the DNA capsule prevents photocaged DHEA from activating neurons prematurely. Our methodology can in principle be generalized to diverse neurostimulatory molecules.

  4. Calorie count - sodas and energy drinks

    MedlinePlus

    ... Accessed May 11, 2016. United States Department of Agriculture website. ChooseMyPlate.gov. Make better beverage choices. www. ... Accessed May 11, 2016. United States Department of Agriculture. National nutrient database for standard reference. Release 28. ...

  5. 77 FR 42736 - Common Formats for Patient Safety Data Collection and Event Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-20

    ... Safety Databases (NPSD). Since the initial release of the Common Formats in August 2008, AHRQ has.... The inventory includes many systems from the private sector, including prominent academic settings...

  6. 40 CFR 1400.5 - Internet access to certain off-site consequence analysis data elements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AGENCY AND DEPARTMENT OF JUSTICE ACCIDENTAL RELEASE PREVENTION REQUIREMENTS; RISK MANAGEMENT PROGRAMS... elements in the risk management plan database available on the Internet: (a) The concentration of the...

  7. An Animated Introduction to Relational Databases for Many Majors

    ERIC Educational Resources Information Center

    Dietrich, Suzanne W.; Goelman, Don; Borror, Connie M.; Crook, Sharon M.

    2015-01-01

    Database technology affects many disciplines beyond computer science and business. This paper describes two animations developed with images and color that visually and dynamically introduce fundamental relational database concepts and querying to students of many majors. The goal is for educators in diverse academic disciplines to incorporate the…
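
    A relational example of the kind such animations introduce, two tables linked by a foreign key and queried with a join, can be sketched with SQLite (the schema and data are illustrative, not from the paper):

```python
import sqlite3

# Minimal relational schema: students reference their major via a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE major (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT,
                      major_id INTEGER REFERENCES major(id));
INSERT INTO major VALUES (1, 'Biology'), (2, 'Music');
INSERT INTO student VALUES (1, 'Ana', 1), (2, 'Ben', 2), (3, 'Chloe', 1);
""")

# Join and aggregate: how many students per major?
rows = conn.execute("""
SELECT m.name, COUNT(*)
FROM student s
JOIN major m ON s.major_id = m.id
GROUP BY m.name
ORDER BY m.name
""").fetchall()
print(rows)  # [('Biology', 2), ('Music', 1)]
```

    The join and the group-by are exactly the querying concepts the animations aim to make visual for students outside computer science.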

  8. Market Pressure and Government Intervention in the Administration and Development of Molecular Databases.

    ERIC Educational Resources Information Center

    Sillince, J. A. A.; Sillince, M.

    1993-01-01

    Discusses molecular databases and the role that government and private companies play in their administration and development. Highlights include copyright and patent issues relating to public databases and the information contained in them; data quality; data structures and technological questions; the international organization of molecular…

  9. Computer Security Products Technology Overview

    DTIC Science & Technology

    1988-10-01

    13 3. DATABASE MANAGEMENT SYSTEMS ................................... 15 Definition...this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, guards and gateways, and...provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a

  10. Database Software Selection for the Egyptian National STI Network.

    ERIC Educational Resources Information Center

    Slamecka, Vladimir

    The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…

  11. 76 FR 6789 - Unlicensed Operation in the TV Broadcast Bands

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-08

    ...., Spectrum Bridge Inc., Telcordia Technologies, and WSdb LLC--as TV bands device database administrators. The TV bands databases will be used by fixed and personal portable unlicensed devices to identify unused... administrators to develop the databases that are necessary to enable the introduction of this new class of...

  12. Distributed databases for materials study of thermo-kinetic properties

    NASA Astrophysics Data System (ADS)

    Toher, Cormac

    2015-03-01

    High-throughput computational materials science provides researchers with the opportunity to rapidly generate large databases of materials properties. To rapidly add thermal properties to the AFLOWLIB consortium and Materials Project repositories, we have implemented an automated quasi-harmonic Debye model, the Automatic GIBBS Library (AGL). This enables us to screen thousands of materials for thermal conductivity, bulk modulus, thermal expansion and related properties. The search and sort functions of the online database can then be used to identify suitable materials for more in-depth study using more precise computational or experimental techniques. AFLOW-AGL source code is in the public domain and will soon be released under the GNU GPL license.
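
    The quasi-harmonic Debye model behind AGL requires numerical evaluation of Debye-function integrals. A minimal sketch of the Debye heat-capacity integral (Simpson's rule, with an illustrative Debye temperature), not AGL's implementation:

```python
import math

def debye_heat_capacity(T, theta_D, n_atoms=1):
    """Debye-model heat capacity C_V in units of the gas constant R:
    C_V/R = 9 n (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx,
    evaluated with composite Simpson's rule."""
    x_max = theta_D / T
    steps = 1000  # even number of subintervals for Simpson's rule

    def f(x):
        if x == 0:
            return 0.0  # integrand -> 0 as x -> 0
        ex = math.exp(x)
        return x ** 4 * ex / (ex - 1.0) ** 2

    h = x_max / steps
    s = f(0.0) + f(x_max)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    integral = s * h / 3.0
    return 9.0 * n_atoms * (T / theta_D) ** 3 * integral

# High-temperature limit approaches the Dulong-Petit value of 3R per atom.
cv_high = debye_heat_capacity(3000.0, 300.0)
cv_low = debye_heat_capacity(30.0, 300.0)
```

    Screening workflows like AGL evaluate such expressions over a grid of volumes and temperatures to extract bulk moduli and thermal expansion from an equation of state.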

  13. On the Suitability of Tcl/Tk for SYS

    DTIC Science & Technology

    2003-02-01

    database design, or user interface. CMU/SEI-2003-TN-001 7 4.4 Legacy Systems SYS is not now complete. The system it replaced interfaced with a dozen...a database maintained by a parent organization. Before SYS was released, many of its current users interacted directly with JSYS, so that system...rating. Rather than shades of blue, the full rainbow is exploited. Rather than window proliferation, the usual result of an action is to replace the

  14. JPEG2000 and dissemination of cultural heritage over the Internet.

    PubMed

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

    By applying the latest technologies in image compression to manage the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than on database management and indexing. We combine JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.

  15. Preliminary Integrated Geologic Map Databases for the United States: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, Rhode Island and Vermont

    USGS Publications Warehouse

    Nicholson, Suzanne W.; Dicken, Connie L.; Horton, John D.; Foose, Michael P.; Mueller, Julia A.L.; Hon, Rudi

    2006-01-01

    The rapid growth in the use of Geographic Information Systems (GIS) has highlighted the need for regional- and national-scale digital geologic maps that have standardized information about geologic age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. Although two digital geologic maps (Schruben and others, 1994; Reed and Bush, 2004) of the United States currently exist, their scales (1:2,500,000 and 1:5,000,000) are too general for many regional applications. Most states have digital geologic maps at scales of about 1:500,000, but the databases are not comparably structured and, thus, it is difficult to use the digital database for more than one state at a time. This report describes the result, for a seven-state region, of an effort by the U.S. Geological Survey to produce a series of integrated and standardized state geologic map databases covering the entire United States. In 1997, the United States Geological Survey's Mineral Resources Program initiated the National Surveys and Analysis (NSA) Project to develop national digital databases. One primary activity of this project was to compile a national digital geologic map database, utilizing state geologic maps, to support studies in the 1:250,000- to 1:1,000,000-scale range. To accomplish this, state databases were prepared using a common standard for the database structure, fields, attribution, and data dictionaries. For Alaska and Hawaii new state maps are being prepared, and the preliminary work for Alaska is being released as a series of 1:250,000-scale quadrangle reports. This document provides background information and documentation for the integrated geologic map databases of this report. This report is one of a series of such reports releasing preliminary standardized geologic map databases for the United States. 
The data products of the project consist of two main parts, the spatial databases and a set of supplemental tables relating to geologic map units. The datasets serve as a data resource to generate a variety of stratigraphic, age, and lithologic maps. This documentation is divided into four main sections: (1) description of the set of data files provided in this report, (2) specifications of the spatial databases, (3) specifications of the supplemental tables, and (4) an appendix containing the data dictionaries used to populate some fields of the spatial database and supplemental tables.

  16. New trends in combined use of gonadotropin-releasing hormone antagonists with gonadotropins or pulsatile gonadotropin-releasing hormone in ovulation induction and assisted reproductive technologies.

    PubMed

    Gordon, K; Danforth, D R; Williams, R F; Hodgen, G D

    1992-10-01

    The use of gonadotropin-releasing hormone agonists as adjunctive therapy with gonadotropins for ovulation induction in in vitro fertilization and other assisted reproductive technologies has become common clinical practice. With the recent advent of potent gonadotropin-releasing hormone antagonists free from the marked histamine-release effects that stymied earlier compounds, an attractive alternative method may be available. We have established the feasibility of combining gonadotropin-releasing hormone antagonist-induced inhibition of endogenous gonadotropins with exogenous gonadotropin therapy for ovulation induction in a nonhuman primate model. Here, the principal benefits to be gained from using the gonadotropin-releasing hormone antagonist rather than the gonadotropin-releasing hormone agonist are the immediate inhibition of pituitary gonadotropin secretion without the "flare effect," which brings greater safety and convenience for patients and the medical team and saves time and money. We have also recently demonstrated the feasibility of combining gonadotropin-releasing hormone antagonist with pulsatile gonadotropin-releasing hormone therapy for the controlled restoration of gonadotropin secretion and gonadal steroidogenesis culminating in apparently normal (singleton) ovulatory cycles. This is feasible only with gonadotropin-releasing hormone antagonists because, unlike gonadotropin-releasing hormone agonists, they achieve control of the pituitary-ovarian axis without down regulation of the gonadotropin-releasing hormone receptor system. This capacity to override gonadotropin-releasing hormone antagonist-induced suppression of pituitary-ovarian function may allow new treatment modalities to be employed for women who suffer from chronic hyperandrogenemia with polycystic ovarian disease.

  17. Insight: An ontology-based integrated database and analysis platform for epilepsy self-management research.

    PubMed

    Sahoo, Satya S; Ramesh, Priya; Welter, Elisabeth; Bukach, Ashley; Valdez, Joshua; Tatsuoka, Curtis; Bamps, Yvan; Stoll, Shelley; Jobst, Barbara C; Sajatovic, Martha

    2016-10-01

    We present Insight, an integrated database and analysis platform for epilepsy self-management research developed as part of the national Managing Epilepsy Well Network. Insight is the only available informatics platform for accessing and analyzing integrated data from multiple epilepsy self-management research studies, with several new data management features and user-friendly functionalities. The features of Insight include: (1) use of Common Data Elements defined by members of the research community and an epilepsy domain ontology for data integration and querying, (2) visualization tools to support real-time exploration of data distribution across research studies, and (3) an interactive visual query interface for provenance-enabled research cohort identification. The Insight platform contains data from five completed epilepsy self-management research studies covering various categories of data, including depression, quality of life, seizure frequency, and socioeconomic information. The data represent over 400 participants with 7552 data points. The Insight data exploration and cohort identification query interface has been developed using Ruby on Rails Web technology and the open-source Web Ontology Language Application Programming Interface (OWL API) to support ontology-based reasoning. We have developed an efficient ontology management module that automatically updates the ontology mappings each time a new version of the Epilepsy and Seizure Ontology is released. The Insight platform features a role-based access control module to authenticate and effectively manage user access to different research studies. User access to Insight is managed by the Managing Epilepsy Well Network database steering committee, consisting of representatives of all current collaborating centers of the Managing Epilepsy Well Network. 
New research studies are being continuously added to the Insight database and the size as well as the unique coverage of the dataset allows investigators to conduct aggregate data analysis that will inform the next generation of epilepsy self-management studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
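
    The role-based access control described above reduces, at its core, to a mapping from roles to the studies they may see. A minimal sketch with hypothetical role and study names (not Insight's actual implementation):

```python
# Hypothetical role-to-study permissions; in a real system these would
# live in the database and be managed by the steering committee.
ROLE_STUDY_ACCESS = {
    "steering_committee": {"study_A", "study_B", "study_C"},
    "site_investigator": {"study_A"},
}

def can_access(user_roles, study):
    """Return True if any of the user's roles grants access to the study."""
    return any(study in ROLE_STUDY_ACCESS.get(role, set())
               for role in user_roles)

allowed = can_access(["site_investigator"], "study_A")
denied = can_access(["site_investigator"], "study_B")
```

    Layering such checks in front of every query is what lets one integrated database serve multiple studies with different access policies.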

  18. LISTA, LISTA-HOP and LISTA-HON: a comprehensive compilation of protein encoding sequences and its associated homology databases from the yeast Saccharomyces.

    PubMed Central

    Dölz, R; Mossé, M O; Slonimski, P P; Bairoch, A; Linder, P

    1996-01-01

    We continued our effort to make a comprehensive database (LISTA) for the yeast Saccharomyces cerevisiae. As in previous editions, the genetic names are consistently associated with each sequence having a known and confirmed ORF. Where necessary, synonyms are given in the case of allelic duplicated sequences. Although the first publication of a sequence gives, according to our rules, the genetic name of a gene, in some instances more commonly used names are adopted to avoid nomenclature problems and the use of ancient designations that are no longer in use. In these cases the old designation is given as a synonym. Thus sequences can be found either by name or by the synonyms given in LISTA. Each entry contains the genetic name, the mnemonic from the EMBL data bank, the codon bias, the reference of the publication of the sequence, the chromosomal location as far as known, and the SWISS-PROT and EMBL accession numbers. New entries will also contain the name from the systematic sequencing efforts. Since the release of LISTA4.1 we update the database continuously. To obtain more information on the included sequences, each entry has been screened against non-redundant nucleotide and protein data bank collections, resulting in LISTA-HON and LISTA-HOP. This release includes reports from full Smith and Waterman peptide-level searches against a non-redundant protein sequence database. The LISTA database can be linked to the associated data sets or to nucleotide and protein banks by the Sequence Retrieval System (SRS). The database is available by FTP and on the World Wide Web. PMID:8594599

  19. Database extraction strategies for low-template evidence.

    PubMed

    Bleka, Øyvind; Dørum, Guro; Haned, Hinda; Gill, Peter

    2014-03-01

    Often in forensic cases, the profile of at least one of the contributors to a DNA evidence sample is unknown and a database search is needed to discover possible perpetrators. In this article we consider two types of search strategies to extract suspects from a database using methods based on probability arguments. The performance of the proposed match scores is demonstrated by carrying out a study of each match score relative to the level of allele drop-out in the crime sample, simulating low-template DNA. The efficiency was measured by random man simulation and we compared the performance using the SGM Plus kit and the ESX 17 kit for the Norwegian population, demonstrating that the latter has greatly enhanced power to discover perpetrators of crime in large national DNA databases. The code for the database extraction strategies will be prepared for release in the R-package forensim. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
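
    As a toy illustration of how allele drop-out can enter a per-locus match score (a didactic sketch only, not the match scores proposed in the article nor the forensim implementation):

```python
def dropout_match_score(evidence_alleles, suspect_genotype, d=0.2):
    """Toy per-locus score: each suspect allele missing from the low-template
    evidence profile is explained by drop-out (probability d), each observed
    allele by non-drop-out (1 - d). Illustrative values throughout."""
    evidence = set(evidence_alleles)
    score = 1.0
    for allele in suspect_genotype:
        score *= d if allele not in evidence else (1.0 - d)
    return score

# Full match: both suspect alleles observed -> (1-d)^2
full = dropout_match_score({12, 14}, (12, 14))
# Partial match: allele 14 dropped out -> (1-d) * d
partial = dropout_match_score({12}, (12, 14))
```

    A database search would rank candidates by such probabilistic scores across loci rather than requiring exact profile matches, which is what gives the approach power at high drop-out levels.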

  20. Damage to offshore infrastructure in the Gulf of Mexico by hurricanes Katrina and Rita

    NASA Astrophysics Data System (ADS)

    Cruz, A. M.; Krausmann, E.

    2009-04-01

    The damage inflicted by hurricanes Katrina and Rita on the Gulf of Mexico's (GoM) oil and gas production, both onshore and offshore, has shown the proneness of industry to Natech accidents (natural-hazard-triggered hazardous-materials releases). In order to contribute towards a better understanding of Natech events, we assessed the damage to, and hazardous-materials releases from, offshore oil and natural-gas platforms and pipelines induced by hurricanes Katrina and Rita. Data were obtained through a review of published literature and interviews with government officials and industry representatives from the affected region. We also reviewed over 60,000 records of reported hazardous-materials releases in the National Response Center's (NRC) database to identify and analyze the releases directly attributed to offshore oil and gas platforms and pipelines affected by the two hurricanes. Our results show that hurricanes Katrina and Rita destroyed at least 113 platforms and severely damaged at least 53 others. Sixty percent of the facilities destroyed were built 30 or more years earlier, prior to the adoption of the more stringent design standards that went into effect after 1977. The storms also destroyed 5 drilling rigs and severely damaged 19 mobile offshore drilling units (MODUs). Some 19 MODUs lost their moorings and became adrift during the storms, which posed a danger to existing facilities, while their dragging anchors also damaged pipelines and other infrastructure. Structural damage to platforms included toppling of sections and tilting or leaning of platforms. Possible causes of failure of structural and non-structural platform components included loading caused by wave inundation of the deck. Failure of rigs attached to platforms was also observed, resulting in significant damage to the platform or adjacent infrastructure, as well as damage to equipment, living quarters and helipads. 
The failures are attributable to tie-down components and occurred on both fixed and floating platforms. The total number of pipelines damaged by hurricanes Katrina and Rita as of May 1, 2006, was 457. Pipeline damage was mostly caused by damage to or failure of the host platform or its development and production piping, by the impact of dragging and displaced objects, and by pipeline interaction at crossings. Damage to pipelines was a major contributing factor in delaying the start-up of offshore oil and gas production. During our analysis of the NRC database we identified 611 reported hazardous-materials releases directly attributed to offshore platforms and pipelines affected by the two hurricanes. There were twice as many releases during Hurricane Katrina as during Rita; 80% or more of the releases reported in the NRC database occurred from platforms. Our analysis suggests that the majority of releases were petroleum products, such as crude oil and condensate, followed by natural gas. In both Katrina and Rita, releases were more likely in the front right quadrant of the storm. Storm-surge values were highest closer to the coastline, which may help explain the higher number of releases in shallow waters. The higher number of hazardous-materials releases from platforms during Katrina may partly be attributed to the higher wind speeds of this storm as it approached land.

  1. Identification of phylogenetic position in the Chlamydiaceae family for Chlamydia strains released from monkeys and humans with chlamydial pathology.

    PubMed

    Karaulov, Alexander; Aleshkin, Vladimir; Slobodenyuk, Vladimir; Grechishnikova, Olga; Afanasyev, Stanislav; Lapin, Boris; Dzhikidze, Eteri; Nesvizhsky, Yuriy; Evsegneeva, Irina; Voropayeva, Elena; Afanasyev, Maxim; Aleshkin, Andrei; Metelskaya, Valeria; Yegorova, Ekaterina; Bayrakova, Alexandra

    2010-01-01

    Based on the results of a comparative analysis of the relatedness and evolutionary divergence of the 16S-23S nucleotide sequences of the middle ribosomal cluster and the 23S rRNA I domain, and on identification of the phylogenetic position of Chlamydophila pneumoniae and Chlamydia trachomatis strains released from monkeys, the relatedness of these isolates to similar strains released from humans, and to strains whose nucleotide sequences are present in the GenBank electronic database, has been detected for the first time. The position of these isolates in the Chlamydiaceae family phylogenetic tree has been identified. The evolutionary position of the investigated original Chlamydia and Chlamydophila strains close to analogous strains from the GenBank electronic database has been demonstrated. Differences in the 16S-23S nucleotide sequence of the middle ribosomal cluster and the 23S rRNA I domain of plasmid-bearing and plasmid-free Chlamydia trachomatis strains released from humans and monkeys, relative to the different genotype groups (group B: B, Ba, D, Da, E, L1, L2, L2a; intermediate group: F, G, Ga), have also been revealed for the first time. Abnormality in incA chromosomal gene expression, resulting in disruption of the chlamydial developmental cycle, and a decrease in chlamydial virulence may be related to probable changes in the nucleotide sequence of the gene under consideration.

  2. The field campaigns of the European Tracer Experiment (ETEX). overview and results

    NASA Astrophysics Data System (ADS)

    Nodop, K.; Connolly, R.; Girardi, F.

    As part of the European Tracer Experiment (ETEX) two successful atmospheric experiments were carried out in October and November, 1994. Perfluorocarbon (PFC) tracers were released into the atmosphere in Monterfil, Brittany, and air samples were taken at 168 stations in 17 European countries for 72 h after the release. Upper air tracer measurements were made from three aircraft. During the first experiment a westerly air flow transported the tracer plume north-eastwards across Europe. During the second release the flow was eastwards. The results from the ground sampling network allowed the determination of the cloud evolution as far as Sweden, Poland and Bulgaria. This demonstrated that the PFT technique can be successfully applied in long-range tracer experiments up to 2000 km. Typical background concentrations of the tracer used are around 5-7 fl ℓ⁻¹ in ambient air. Concentrations in the plume ranged from 10 to above 200 fl ℓ⁻¹. The tracer release characteristics, the tracer concentrations at the ground and in upper air, the routine and additional meteorological observations at the ground level and in upper air, trajectories derived from constant-level balloons and the meteorological input fields for long-range transport models are assembled in the ETEX database. The ETEX database is accessible via the Internet. Here, an overview is given of the design of the experiment, the methods used and the data obtained.

  3. JASPAR 2016: a major expansion and update of the open-access database of transcription factor binding profiles.

    PubMed

    Mathelier, Anthony; Fornes, Oriol; Arenillas, David J; Chen, Chih-Yu; Denay, Grégoire; Lee, Jessica; Shi, Wenqiang; Shyr, Casper; Tan, Ge; Worsley-Hunt, Rebecca; Zhang, Allen W; Parcy, François; Lenhard, Boris; Sandelin, Albin; Wasserman, Wyeth W

    2016-01-04

    JASPAR (http://jaspar.genereg.net) is an open-access database storing curated, non-redundant transcription factor (TF) binding profiles representing transcription factor binding preferences as position frequency matrices for multiple species in six taxonomic groups. For this 2016 release, we expanded the JASPAR CORE collection with 494 new TF binding profiles (315 in vertebrates, 11 in nematodes, 3 in insects, 1 in fungi and 164 in plants) and updated 59 profiles (58 in vertebrates and 1 in fungi). The introduced profiles represent an 83% expansion and 10% update when compared to the previous release. We updated the structural annotation of the TF DNA binding domains (DBDs) following a published hierarchical structural classification. In addition, we introduced 130 transcription factor flexible models trained on ChIP-seq data for vertebrates, which capture dinucleotide dependencies within TF binding sites. This new JASPAR release is accompanied by a new web tool to infer JASPAR TF binding profiles recognized by a given TF protein sequence. Moreover, we provide the users with a Ruby module complementing the JASPAR API to ease programmatic access and use of the JASPAR collection of profiles. Finally, we provide the JASPAR2016 R/Bioconductor data package with the data of this release. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
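JASPAR distributes its binding profiles as plain-text position frequency matrices: one header line, then one bracketed row of per-position counts for each base. The following is a minimal, illustrative parser for that flat format and a helper that normalizes each column to probabilities; it is not the official JASPAR API or its R/Ruby packages, just a sketch of how such a record can be consumed programmatically.

```python
def parse_jaspar_pfm(text):
    """Parse a single JASPAR-style PFM record.

    Returns (matrix_id, name, counts), where counts maps each base
    to its list of per-position counts.
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    header = lines[0].lstrip(">").split(None, 1)
    matrix_id = header[0]
    name = header[1] if len(header) > 1 else ""
    counts = {}
    for row in lines[1:]:
        base, rest = row.split(None, 1)
        # Strip the surrounding brackets and parse the numbers.
        nums = rest.strip().lstrip("[").rstrip("]").split()
        counts[base] = [int(float(n)) for n in nums]
    return matrix_id, name, counts


def column_probabilities(counts):
    """Normalize each column of a count matrix to base probabilities."""
    ncols = len(next(iter(counts.values())))
    probs = {b: [] for b in counts}
    for j in range(ncols):
        total = sum(counts[b][j] for b in counts)
        for b in counts:
            probs[b].append(counts[b][j] / total)
    return probs
```

A record such as the commonly cited MA0004.1 (Arnt) example can then be turned into a probability matrix column by column for downstream scoring.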

  4. JNDMS Task Authorization 2 Report

    DTIC Science & Technology

    2013-10-01

    uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take... Technology; ITI: Information Technology Infrastructure; J2EE: Java 2 Enterprise Edition; JAR: Java Archive, an archive file format defined by Java... standards; JDBC: Java Database Connectivity; JDW: JNDMS Data Warehouse; JNDMS: Joint Network and Defence Management System; JNDMS: Joint Network Defence and

  5. Oral Drug Delivery Systems Comprising Altered Geometric Configurations for Controlled Drug Delivery

    PubMed Central

    Moodley, Kovanya; Pillay, Viness; Choonara, Yahya E.; du Toit, Lisa C.; Ndesendo, Valence M. K.; Kumar, Pradeep; Cooppan, Shivaan; Bawa, Priya

    2012-01-01

    Recent pharmaceutical research has focused on controlled drug delivery, which offers advantages over conventional methods. Adequately controlled plasma drug levels, reduced side effects, as well as improved patient compliance are some of the benefits that these systems may offer. Controlled delivery systems that can provide zero-order drug delivery have the potential for maximizing efficacy while minimizing dose frequency and toxicity. Thus, zero-order drug release is ideal in a large area of drug delivery, which has therefore led to the development of various technologies with such drug release patterns. Systems such as multilayered tablets and other geometrically altered devices have been created to perform this function. One of the principles of multilayered tablets involves creating a constant surface area for release. Polymeric materials play an important role in the functioning of these systems. Technologies developed to date include, among others: Geomatrix® multilayered tablets, which utilize specific polymers that may act as barriers to control drug release; Procise®, which has a core with an aperture that can be modified to achieve various types of drug release; core-in-cup tablets, where the core matrix is coated on one surface while the circumference forms a cup around it; donut-shaped devices, which possess a centrally placed aperture; and Dome Matrix® as well as "release modules assemblage", which can offer alternating drug release patterns. This review discusses the novel altered geometric system technologies that have been developed to provide controlled drug release, also focusing on polymers that have been employed in such developments. PMID:22312236

  6. Assessment of infrastructure functional damages caused by natural-technological disasters

    NASA Astrophysics Data System (ADS)

    Massabò, Marco; Trasforini, Eva; Traverso, Stefania; Rudari, Roberto; De Angeli, Silvia; Cecinati, Francesca; Cerruti, Valentina

    2013-04-01

    The assessment of infrastructure damage caused by technological disasters poses several challenges, from gathering the needed information on the territorial system to the definition of functionality curves for infrastructure elements (such as buildings, roads, schools) that are exposed to both natural and technological events. Moreover, areas affected by natural or natech (technological disasters triggered by natural events) disasters often have very large extensions, and a rapid survey of them to gather all the needed information is a very difficult task, for many reasons, not least the difficult access to the existing databases and resources. We use multispectral optical imagery together with other geographical and unconventional data to identify and characterize exposed elements. Our efforts in the virtual survey and during the investigation steps have different aims: to identify the vulnerability of infrastructures, buildings or activities; to calculate exposure to risk; and to estimate physical and functional damage. Subsequently, we apply specific algorithms to estimate values of acting forces and physical and functional damage. An updated picture of target areas in terms of risk-prone people, infrastructures and their connections is very important, and algorithms can be developed that provide values of systemic functionality for each network element. The methodology is applied here to a natech disaster arising from the combination of a flood event (specifically, the January 2010 flooding of the Drin and Buna rivers, which worsened road safety levels in the Shkoder area) with the subsequent overturning of a truck transporting hazardous material. The accident causes loss of containment and total release of the material. Once the release has taken place, its evolution depends on the physical state of the substance spilled (liquid, gas or dust).
As a specific case we consider the rupture of a truck transporting liquid fuels such as gasoline through downtown Shkoder. Goods entering Albania from the north pass through Shkoder; indeed, a high-traffic road connecting Albania with Montenegro and Kosovo crosses the downtown area. We consider a truck overturned in downtown Shkoder during the flooding of January 2010; the gasoline transported by the truck is completely released and a pool fire develops, damaging roads. We use the model CHESRM (Chemical Spill Risk Mapper) to identify the threat zones of the accident and as a basis for assessing the potential functional damage to other elements of the considered system. The application of the methodology shows its potential use not only in real-time emergency management or prevention but also during post-event management, for the evaluation of functional damage to the affected infrastructure (villages isolated from the rest of the network, villages unable to reach schools, hospitals or other services...) and for setting a hierarchy in restoration activities, giving priority to the reconstruction of links between primary nodes.

  7. Phenazopyridine-phthalimide nano-cocrystal: Release rate and oral bioavailability enhancement.

    PubMed

    Huang, Yu; Li, Jin-Mei; Lai, Zhi-Hui; Wu, Jun; Lu, Tong-Bu; Chen, Jia-Mei

    2017-11-15

    Both cocrystal and nanocrystal technologies have been widely used in pharmaceutical development for poorly soluble drugs. However, the synergistic effects due to the integration of these two technologies have not been well investigated. The aim of this study is to develop a nano-sized cocrystal of phenazopyridine (PAP) with phthalimide (PI) to enhance the release rate and oral bioavailability of PAP. A PAP-PI nano-cocrystal with a particle diameter of 21.4 ± 0.1 nm was successfully prepared via a sonochemical approach and characterized by powder X-ray diffraction (PXRD), differential scanning calorimetry (DSC), scanning electron microscopy (SEM) and dynamic light scattering (DLS) analysis. An in vitro release study revealed a significant release rate enhancement for the PAP-PI nano-cocrystal as compared to the PAP-PI cocrystal and the PAP hydrochloride salt. Further, a comparative oral bioavailability study in rats indicated significant improvement in Cmax and oral bioavailability (AUC0-∞) by 1.39- and 2.44-fold, respectively. This study demonstrated that this novel nano-cocrystal technology can be a promising new option to improve the release rate and absorption of poorly soluble compounds in the pharmaceutical industry. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Emergency Response Notification System (ERNS)

    EPA Science Inventory

    The Emergency Response Notification System (ERNS) is a database used to store information on notifications of oil discharges and hazardous substances releases. The ERNS program is a cooperative data sharing effort among the Environmental Protection Agency (EPA) Headquarters, the ...

  9. Effect of Nisin's Controlled Release on Microbial Growth as Modeled for Micrococcus luteus.

    PubMed

    Balasubramanian, Aishwarya; Lee, Dong Sun; Chikindas, Michael L; Yam, Kit L

    2011-06-01

    The need for safe food products has motivated food scientists and industry to find novel technologies for antimicrobial delivery for improving food safety and quality. Controlled release packaging is a novel technology that uses the package to deliver antimicrobials in a controlled manner and sustain antimicrobial stress on the targeted microorganism over the required shelf life. This work studied the effect of controlled release of nisin to inhibit growth of Micrococcus luteus (a model microorganism) using a computerized syringe pump system to mimic the release of nisin from packaging films which was characterized by an initially fast rate and a slower rate as time progressed. The results show that controlled release of nisin was strikingly more effective than instantly added ("formulated") nisin. While instant addition experiments achieved microbial inhibition only at the beginning, controlled release experiments achieved complete microbial inhibition for a longer time, even when as little as 15% of the amount of nisin was used as compared to instant addition.
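The release profile described above, initially fast and slowing as time progresses, is commonly approximated by first-order kinetics, M(t) = M∞(1 − e^(−kt)). The toy sketch below illustrates such a profile; the rate constant and total amount are hypothetical and not fitted to the paper's data.

```python
import math

def cumulative_release(t_hours, m_total, k_per_hour):
    """First-order cumulative release: fast at first, slowing over time.

    Illustrative model only; m_total and k_per_hour are hypothetical.
    """
    return m_total * (1.0 - math.exp(-k_per_hour * t_hours))

def release_rate(t_hours, m_total, k_per_hour):
    """Instantaneous release rate, the time derivative of cumulative_release."""
    return m_total * k_per_hour * math.exp(-k_per_hour * t_hours)
```

With such a profile, most of the antimicrobial is delivered early while a residual flux persists, which matches the sustained-stress rationale of controlled release packaging.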

  10. Atomic Spectra Bibliography Databases at NIST

    NASA Astrophysics Data System (ADS)

    Kramida, Alexander

    2010-03-01

    NIST's Atomic Spectroscopy Data Center maintains three online Bibliographic Databases (BD) [http://physics.nist.gov/PhysRefData/ASBib1/index.html]: Atomic Energy Levels and Spectra (AEL BD), Atomic Transition Probability (ATP BD), and Atomic Spectral Line Broadening (ALB BD). This year marks new releases of these BDs: AEL BD v.2.0, ATP BD v.9.0, and ALB BD v.3.0. These releases incorporate significant improvements in the quantity and quality of bibliographic data since the previous versions, first published in 2006. The total number of papers in the three BDs grew from 20,000 to 30,000. The data search is now made easier, and the returned content is enriched with direct links to online journal articles and universal Digital Object Identifiers. Statistics show a nearly constant flow of new publications on atomic spectroscopy, about 600 new papers published each year since 1968. New papers are inserted into our BDs every two weeks on average.

  11. Comparison of the NCI open database with seven large chemical structural databases.

    PubMed

    Voigt, J H; Bienfait, B; Wang, S; Nicklaus, M C

    2001-01-01

    Eight large chemical databases have been analyzed and compared to each other. Central to this comparison is the open National Cancer Institute (NCI) database, consisting of approximately 250 000 structures. The other databases analyzed are the Available Chemicals Directory ("ACD," from MDL, release 1.99, 3D-version); the ChemACX ("ACX," from CamSoft, Version 4.5); the Maybridge Catalog and the Asinex database (both as distributed by CamSoft as part of ChemInfo 4.5); the Sigma-Aldrich Catalog (CD-ROM, 1999 Version); the World Drug Index ("WDI," Derwent, version 1999.03); and the organic part of the Cambridge Crystallographic Database ("CSD," from Cambridge Crystallographic Data Center, 1999 Version 5.18). The database properties analyzed are internal duplication rates; compounds unique to each database; cumulative occurrence of compounds in an increasing number of databases; overlap of identical compounds between two databases; similarity overlap; diversity; and others. The crystallographic database CSD and the WDI show somewhat less overlap with the other databases than those with each other. In particular the collections of commercial compounds and compilations of vendor catalogs have a substantial degree of overlap among each other. Still, no database is completely a subset of any other, and each appears to have its own niche and thus "raison d'être". The NCI database has by far the highest number of compounds that are unique to it. Approximately 200 000 of the NCI structures were not found in any of the other analyzed databases.
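The overlap and uniqueness statistics described above reduce to set arithmetic once each database is represented by canonical structure identifiers (e.g., InChIKeys; the identifiers and database names below are hypothetical). A small sketch:

```python
def overlap_stats(databases):
    """Per-database unique-compound counts and pairwise overlaps.

    databases: dict mapping a database name to a set of canonical
    structure identifiers (hypothetical here).
    """
    names = list(databases)
    unique = {}
    for n in names:
        # Compounds found in this database and nowhere else.
        others = set().union(*(databases[m] for m in names if m != n))
        unique[n] = len(databases[n] - others)
    pairwise = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            pairwise[(a, b)] = len(databases[a] & databases[b])
    return unique, pairwise
```

The "cumulative occurrence in an increasing number of databases" statistic follows the same pattern: count, for each compound, how many of the sets contain it.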

  12. Large-scale contamination of microbial isolate genomes by Illumina PhiX control.

    PubMed

    Mukherjee, Supratim; Huntemann, Marcel; Ivanova, Natalia; Kyrpides, Nikos C; Pati, Amrita

    2015-01-01

    With the rapid growth and development of sequencing technologies, genomes have become the new go-to for exploring solutions to some of the world's biggest challenges, such as searching for alternative energy sources and exploration of genomic dark matter. However, progress in sequencing has been accompanied by its share of errors that can occur during template or library preparation, sequencing, imaging or data analysis. In this study we screened over 18,000 publicly available microbial isolate genome sequences in the Integrated Microbial Genomes database and identified more than 1000 genomes that are contaminated with PhiX, a control frequently used during Illumina sequencing runs. Approximately 10% of these genomes have been published in the literature, and 129 contaminated genomes were sequenced under the Human Microbiome Project. Raw sequence reads are prone to contamination from various sources and are usually eliminated during downstream quality control steps. Detection of PhiX-contaminated genomes indicates a lapse in either the application or effectiveness of proper quality control measures. The presence of PhiX contamination in several publicly available isolate genomes can result in additional errors when such data are used in comparative genomics analyses. Such contamination of public databases has far-reaching consequences in the form of erroneous data interpretation and analyses, and necessitates better measures to proofread raw sequences before releasing them to the broader scientific community.
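One common way to screen assemblies for control-sequence carry-over like PhiX is to compare shared k-mer content against the control's reference genome. The sketch below is illustrative only and is not the screening pipeline used in the study; the values of k and the flagging threshold are arbitrary choices for demonstration.

```python
def kmers(seq, k):
    """All k-length substrings of a sequence, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_fraction(contig, ref_kmers, k):
    """Fraction of the contig's k-mers present in the reference k-mer set."""
    ck = kmers(contig, k)
    if not ck:
        return 0.0
    return len(ck & ref_kmers) / len(ck)

def flag_contaminated(contigs, control_ref, k=21, threshold=0.5):
    """Names of contigs whose k-mer sharing with the control reference
    exceeds the threshold. k and threshold are illustrative defaults."""
    ref = kmers(control_ref, k)
    return [name for name, seq in contigs.items()
            if shared_fraction(seq, ref, k) > threshold]
```

Real screeners work on raw reads with alignment or exact k-mer indexes, but the set-intersection idea is the same.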

  13. Development of XML Schema for Broadband Digital Seismograms and Data Center Portal

    NASA Astrophysics Data System (ADS)

    Takeuchi, N.; Tsuboi, S.; Ishihara, Y.; Nagao, H.; Yamagishi, Y.; Watanabe, T.; Yanaka, H.; Yamaji, H.

    2008-12-01

    There are a number of data centers around the globe where digital broadband seismograms are open to researchers. These centers use their own user interfaces, and there is no standard for accessing and retrieving seismograms from different data centers through a unified interface. One of the emerging technologies for realizing a unified user interface across data centers is the concept of a WebService and a WebService portal. Here we have developed a prototype data center portal for digital broadband seismograms. This WebService portal uses WSDL (Web Services Description Language) to accommodate differences among the data centers. By using WSDL, alterations and additions of data center user interfaces can be easily managed. This portal, called the NINJA Portal, assumes three WebServices: (1) a database query service, (2) a seismic event data request service, and (3) a seismic continuous data request service. The current system supports both the station search of the database query service and the seismic continuous data request service. The data centers supported by the NINJA portal will initially be the OHP data center at ERI and the Pacific21 data center at IFREE/JAMSTEC. We have developed a metadata standard for seismological data based on QuakeML (developed by ETH Zurich) for parametric data and on XML-SEED (developed by IFREE/JAMSTEC) for waveform data. The prototype of the NINJA portal is now released through the IFREE web page (http://www.jamstec.go.jp/pacific21/).

  14. Global Collaboration Enhances Technology Literacy

    ERIC Educational Resources Information Center

    Cook, Linda A.; Bell, Meredith L.; Nugent, Jill; Smith, Walter S.

    2016-01-01

    Today's learners routinely use technology outside of school to communicate, collaborate, and gather information about the world around them. Classroom learning experiences are relevant when they include communication technologies such as social networking, blogging, and video conferencing, and information technologies such as databases, browsers,…

  15. Probiotic Encapsulation Technology: From Microencapsulation to Release into the Gut

    PubMed Central

    Gbassi, Gildas K.; Vandamme, Thierry

    2012-01-01

    Probiotic encapsulation technology (PET) has the potential to protect microorganisms and to deliver them into the gut. Because of the promising preclinical and clinical results, probiotics have been incorporated into a range of products. However, there are still many challenges to overcome with respect to the microencapsulation process and the conditions prevailing in the gut. This paper reviews the methodological approach of probiotics encapsulation including biomaterials selection, choice of appropriate technology, in vitro release studies of encapsulated probiotics, and highlights the challenges to be overcome in this area. PMID:24300185

  16. Nanotechnology patenting trends through an environmental lens: analysis of materials and applications

    NASA Astrophysics Data System (ADS)

    Leitch, Megan E.; Casman, Elizabeth; Lowry, Gregory V.

    2012-12-01

    Many international groups study environmental health and safety (EHS) concerns surrounding the use of engineered nanomaterials (ENMs). These researchers frequently use the "Project on Emerging Nanotechnologies" (PEN) inventory of nano-enabled consumer products to prioritize types of ENMs to study because estimates of life-cycle ENM releases to the environment can be extrapolated from the database. An alternative "snapshot" of nanomaterials likely to enter commerce can be determined from the patent literature. The goal of this research was to provide an overview of nanotechnology intellectual property trends, complementary to the PEN consumer product database, to help identify potentially "risky" nanomaterials for study by the nano-EHS community. Ten years of nanotechnology patents were examined to determine the types of nano-functional materials being patented, the chemical compositions of the ENMs, and the products in which they are likely to appear. Patenting trends indicated different distributions of nano-enabled products and materials compared to the PEN database. Recent nanotechnology patenting is dominated by electrical and information technology applications rather than the hygienic and anti-fouling applications shown by PEN. There is an increasing emphasis on patenting of nano-scale layers, coatings, and other surface modifications rather than traditional nanoparticles, and there is widespread use of nano-functional semiconductor, ceramic, magnetic, and biological materials that are currently less studied by EHS professionals. These commonly patented products and the nano-functional materials they contain may warrant life-cycle evaluations to determine the potential for environmental exposure and toxicity. The patent and consumer product lists contribute different and complementary insights into the emerging nanotechnology industry and its potential for introducing nanomaterials into the environment.

  17. Comparison of in-situ measurements and satellite-derived surface emissivity over Italian volcanic areas

    NASA Astrophysics Data System (ADS)

    Silvestri, Malvina; Musacchio, Massimo; Cammarano, Diego; Fabrizia Buongiorno, Maria; Amici, Stefania; Piscini, Alessandro

    2016-04-01

    In this work we compare ground measurements of emissivity, collected during dedicated field campaigns on the Mt. Etna and Solfatara of Pozzuoli volcanoes and acquired with a Micro-FTIR (Fourier Transform Infrared) spectrometer, with the emissivity obtained from single ASTER scenes (Advanced Spaceborne Thermal Emission and Reflection Radiometer, ASTER 05) and with the emissivity map extracted from the ASTER Global Emissivity Database (GED), released by the LP DAAC on April 2, 2014. The database was developed by the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL), California Institute of Technology, and includes land surface emissivity derived from ASTER data acquired over the contiguous United States, Africa, the Arabian Peninsula, Australia, Europe, and China. Through this analysis we investigate the differences between the ASTER-GED dataset (averaged from 2000 to 2008, independent of season) and the in-situ emissivity measurements. Moreover, the role of the different spatial resolutions characterizing ASTER and MODIS (90 m and 1 km, respectively) is analyzed by comparing both with in-situ measurements. Possible differences can also be due to the different algorithms used for the emissivity estimation: the Temperature and Emissivity Separation algorithm for the ASTER TIR bands (Gillespie et al., 1998) and the classification-based emissivity method (Snyder et al., 1998) for MODIS. Finally, land surface temperature products generated using ASTER-GED and ASTER 05 emissivity are also analyzed. Gillespie, A. R., Matsunaga, T., Rokugawa, S., & Hook, S. J. (1998). Temperature and emissivity separation from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. IEEE Transactions on Geoscience and Remote Sensing, 36, 1113-1125. Snyder, W. C., Wan, Z., Zhang, Y., & Feng, Y.-Z. (1998). Classification-based emissivity for land surface temperature measurement from space. International Journal of Remote Sensing, 19, 2753-2574.

  18. Database of Industrial Technological Information in Kanagawa : Networks for Technology Activities

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Shindo, Tadashi

    This system is a database that requires participation by its members and is premised on all its data being open. Aiming at free technological cooperation and exchange among industries, it was constructed by Kanagawa Prefecture in collaboration with enterprises located there. The input data comprise 36 items, such as major products, special and advantageous technologies, technologies wanted for cooperation, and facilities and equipment, which technologically characterize each enterprise. They are expressed in up to 2,000 characters and written in natural language, including Kanji, except for some coded items. Twenty-four search items can be accessed by natural language, so that in addition to interactive search procedures, including menu-driven ones, extensive searching is possible. The information service started in October 1986, covering data from 2,000 enterprises.

  19. Facilitating Collaboration, Knowledge Construction and Communication with Web-Enabled Databases.

    ERIC Educational Resources Information Center

    McNeil, Sara G.; Robin, Bernard R.

    This paper presents an overview of World Wide Web-enabled databases that dynamically generate Web materials and focuses on the use of this technology to support collaboration, knowledge construction, and communication. Database applications have been used in classrooms to support learning activities for over a decade, but, although business and…

  20. Implementing a Dynamic Database-Driven Course Using LAMP

    ERIC Educational Resources Information Center

    Laverty, Joseph Packy; Wood, David; Turchek, John

    2011-01-01

    This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…

  1. An Experimental Investigation of Complexity in Database Query Formulation Tasks

    ERIC Educational Resources Information Center

    Casterella, Gretchen Irwin; Vijayasarathy, Leo

    2013-01-01

    Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…

  2. A UNIMARC Bibliographic Format Database for ABCD

    ERIC Educational Resources Information Center

    Megnigbeto, Eustache

    2012-01-01

    Purpose: ABCD is a web-based open and free software suite for library management derived from the UNESCO CDS/ISIS software technology. The first version was launched officially in December 2009 with a MARC 21 bibliographic format database. This paper aims to detail the building of the UNIMARC bibliographic format database for ABCD.…

  3. Generation of an Aerothermal Data Base for the X33 Spacecraft

    NASA Technical Reports Server (NTRS)

    Roberts, Cathy; Huynh, Loc

    1998-01-01

    The X-33 experimental program is a cooperative program between industry and NASA, managed by Lockheed-Martin Skunk Works to develop an experimental vehicle to demonstrate new technologies for a single-stage-to-orbit, fully reusable launch vehicle (RLV). One of the new technologies to be demonstrated is an advanced Thermal Protection System (TPS) being designed by BF Goodrich (formerly Rohr, Inc.) with support from NASA. The calculation of an aerothermal database is crucial to identifying the critical design environment data for the TPS. The NASA Ames X-33 team has generated such a database using Computational Fluid Dynamics (CFD) analyses, engineering analysis methods and various programs to compare and interpolate the results from the CFD and the engineering analyses. This database, along with a program used to query the database, is used extensively by several X-33 team members to help them in designing the X-33. This paper will describe the methods used to generate this database, the program used to query the database, and will show some of the aerothermal analysis results for the X-33 aircraft.
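Querying tabulated aerothermal results between computed trajectory or body points comes down to interpolation over the stored data. The actual X-33 database tooling is not described in detail here, so the following is only an illustrative 1-D linear interpolation sketch of the kind of lookup such a query program performs.

```python
from bisect import bisect_left

def interpolate(table, x):
    """Linearly interpolate y at x from a sorted list of (x, y) points.

    Clamps to the end values outside the tabulated range; an illustrative
    stand-in for querying tabulated aerothermal data.
    """
    xs = [p[0] for p in table]
    ys = [p[1] for p in table]
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)  # index of the first tabulated x >= query x
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

Multi-dimensional lookups (e.g., heating rate versus both Mach number and angle of attack) repeat this along each axis.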

  4. HITCal: a software tool for analysis of video head impulse test responses.

    PubMed

    Rey-Martinez, Jorge; Batuecas-Caletrio, Angel; Matiño, Eusebi; Perez Fernandez, Nicolás

    2015-09-01

    The developed software (HITCal) may be a useful tool for the analysis and measurement of saccadic video head impulse test (vHIT) responses, and from the experience obtained during its use the authors suggest that HITCal is an excellent method for enhanced exploration of vHIT outputs. The objective was to develop a (software) method to analyze and explore the vHIT responses, mainly saccades. HITCal was written using a computational development program; the function to access a vHIT file was programmed; extended head impulse exploration and measurement tools were created; and an automated saccade analysis was developed using an experimental algorithm. For pre-release HITCal laboratory tests, a database of head impulse tests (HITs) was created with data collected retrospectively in three reference centers. This HITs database was evaluated by humans and was also computed with HITCal. The authors successfully built HITCal and have released it as open source software; the developed software was fully operative and all the proposed characteristics were incorporated in the released version. The automated saccades algorithm implemented in HITCal has good concordance with the assessment by human observers (Cohen's kappa coefficient = 0.7).
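The agreement statistic reported (Cohen's kappa = 0.7) compares observed agreement between two raters against the agreement expected by chance. It can be computed directly from the two raters' label lists; a self-contained sketch:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected comes from the raters' marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal frequencies.
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1.0 - expected)
```

Values around 0.7, as reported, are conventionally read as substantial agreement.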

  5. SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.

    2014-12-01

    Relational databases are often perceived as a poor fit in science contexts: Rigid schemas, poor support for complex analytics, unpredictable performance, significant maintenance and tuning requirements --- these idiosyncrasies often make databases unattractive in science contexts characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt them for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger scale data, complex analytics, and supports multiple back end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
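The Upload-Query-Share workflow described for SQLShare can be mimicked locally with an embedded database. The sketch below uses Python's built-in sqlite3 module; the oceanographic-style table and data are invented for illustration, and this is not the SQLShare API itself.

```python
import sqlite3

# "Upload": load a small table of CTD-style observations into an
# in-memory database (no schema design beyond the CREATE TABLE).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE casts (station TEXT, depth_m REAL, temp_c REAL)")
rows = [
    ("P1", 5.0, 12.1), ("P1", 50.0, 9.4),
    ("P2", 5.0, 11.8), ("P2", 50.0, 9.9),
]
conn.executemany("INSERT INTO casts VALUES (?, ?, ?)", rows)

# "Query": mean temperature per depth across stations, declaratively,
# instead of a hand-rolled grouping script.
result = conn.execute(
    "SELECT depth_m, AVG(temp_c) FROM casts GROUP BY depth_m ORDER BY depth_m"
).fetchall()
```

The "Share" step in a hosted service would amount to publishing the query and its result set at a URL; locally one would simply export `result`.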

  6. Non-explosive actuation for the ORBCOMM (TM) satellite

    NASA Technical Reports Server (NTRS)

    Robinson, Anthony; Courtney, Craig; Moran, Tom

    1995-01-01

    Spool-based non-explosive actuator (NEA) devices are used for three important holddown and release functions during the establishment of the ORBCOMM (TM) constellation. Non-explosive separation nuts are used to restrain and release the 26 individual satellites into low earth orbit. Cable release mechanisms based on the same technology are used to release the solar arrays and antenna boom.

  7. The human role in space (THURIS) applications study. Final briefing

    NASA Technical Reports Server (NTRS)

    Maybee, George W.

    1987-01-01

    The THURIS (The Human Role in Space) application is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data converting it into a form which can be analyzed using the THURIS cost model outputs. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.

  8. Software Re-Engineering of the Human Factors Analysis and Classification System - (Maintenance Extension) Using Object Oriented Methods in a Microsoft Environment

    DTIC Science & Technology

    2001-09-01

    replication) -- all from Visual Basic and VBA. In fact, we found that the SQL Server engine actually had a plethora of options, most formidable of ... 2002, the new SQL Server 2000 database engine, and Microsoft Visual Basic.NET. This thesis describes our use of the Spiral Development Model to ... versions of Microsoft products? Specifically, the pending release of Microsoft Office 2002, the new SQL Server 2000 database engine, and Microsoft

  9. IRIS Toxicological Review of Hexahydro-1,3,5-Trinitro-1,3,5 ...

    EPA Pesticide Factsheets

    EPA is developing an Integrated Risk Information System (IRIS) assessment of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) and has released the draft assessment for public comment. When final, the assessment will appear on the IRIS database. EPA is undertaking an update of the Integrated Risk Information System (IRIS) health assessment for RDX. The outcome of this project is an updated Toxicological Review and IRIS Summary for RDX that will be entered into the IRIS database.

  10. IRIS Toxicological Review of Benzo[a]pyrene (Public ...

    EPA Pesticide Factsheets

    EPA is developing an Integrated Risk Information System (IRIS) assessment of benzo[a]pyrene and has released the draft assessment for public comment and external peer review. When final, the assessment will appear on the IRIS database. EPA is undertaking an update of the Integrated Risk Information System (IRIS) health assessment for benzo[a]pyrene (BaP). The outcome of this project is an updated Toxicological Review and IRIS Summary for BaP that will be entered into the IRIS database.

  11. U.S. states and territories national tsunami hazard assessment, historic record and sources for waves

    NASA Astrophysics Data System (ADS)

    Dunbar, P. K.; Weaver, C.

    2007-12-01

    In 2005, the U.S. National Science and Technology Council (NSTC) released a joint report by the Subcommittee on Disaster Reduction and the U.S. Group on Earth Observations titled Tsunami Risk Reduction for the United States: A Framework for Action (Framework). The Framework outlines the President's strategy for reducing the United States tsunami risk. The first specific action called for in the Framework is to "Develop standardized and coordinated tsunami hazard and risk assessments for all coastal regions of the United States and its territories." Since NOAA is the lead agency for providing tsunami forecasts and warnings, and NOAA's National Geophysical Data Center (NGDC) catalogs information on global historic tsunamis, NOAA/NGDC was asked to take the lead in conducting the first national tsunami hazard assessment. Earthquakes or earthquake-generated landslides caused more than 85% of the tsunamis in the NGDC tsunami database. Since the United States Geological Survey (USGS) conducts research on earthquake hazards facing all of the United States and its territories, NGDC and USGS partnered to conduct the first tsunami hazard assessment for the United States and its territories. A complete tsunami hazard and risk assessment consists of a hazard assessment, an exposure and vulnerability assessment of buildings and people, and a loss assessment. This report is an interim step toward a tsunami risk assessment. The goal of this report is to provide a qualitative assessment of the United States tsunami hazard at the national level. Two different methods are used to assess the U.S. tsunami hazard. The first method involves a careful examination of the NGDC historical tsunami database, resulting in a qualitative national tsunami hazard assessment based on the distribution of runup heights and the frequency of runups. Although tsunami deaths are a measure of risk rather than hazard, the known tsunami deaths found in the NGDC database search were compared with the qualitative assessments based on frequency and amplitude. The second method involved using the USGS earthquake databases to search for possible earthquake sources near American coastlines, extending the NOAA/NGDC tsunami databases backward in time. The qualitative tsunami hazard assessment based on the results of the NGDC and USGS database searches will be presented.
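
    A qualitative ranking of the kind described above can be sketched as a simple classification over runup frequency and height; the thresholds below are illustrative assumptions, not those used in the NGDC/USGS assessment:

```python
def hazard_level(runups_per_century, max_runup_m):
    """Toy qualitative tsunami-hazard score from historical runup data.
    Thresholds are illustrative, not from the NGDC/USGS report."""
    if runups_per_century == 0:
        return "very low"
    if runups_per_century < 1 or max_runup_m < 1:
        return "low"
    if runups_per_century < 10 and max_runup_m < 5:
        return "moderate"
    return "high"

hazard_level(0, 0)     # "very low": no recorded runups
hazard_level(5, 3)     # "moderate": several small-to-mid runups per century
hazard_level(20, 10)   # "high": frequent, large runups
```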

  12. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN, a number of key database applications run on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions traditionally done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, presently the open community version of MySQL and single-instance Oracle database servers. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.

  13. DECADE Web Portal: Integrating MaGa, EarthChem and GVP Will Further Our Knowledge on Earth Degassing

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Frigeri, A.; Lehnert, K. A.; Ash, J.; McCormick, B.; Chiodini, G.; Fischer, T. P.; Cottrell, E.

    2014-12-01

    The release of gases from the Earth's interior to the exosphere takes place in both volcanic and non-volcanic areas of the planet. Fully understanding this complex process requires the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are neither linked nor interoperable. We are developing interoperability between three of these, which will support more powerful synoptic studies of degassing. The three data systems that will make their data accessible via the DECADE portal are: (1) the Smithsonian Institution's Global Volcanism Program database (GVP) of volcanic activity data, (2) EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions), which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. These databases are developed and maintained by institutions or groups of experts in a specific field, and data are archived in formats specific to these databases. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing a web portal that will create a powerful search engine over these databases from a single entry point. The portal will return comprehensive multi-component datasets, based on the search criteria selected by the user. For example, a single geographic or temporal search will return data relating to compositions of emitted gases and erupted products, the age of the erupted products, and coincident activity at the volcano. The development of this level of capability for the DECADE Portal requires complete synergy between these databases, including availability of standards-based web services (WMS, WFS) at all data systems. 
Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process. Other data systems can be easily plugged in using the existing framework. Our vision is to explore Earth degassing related datasets over previously unexplored spatial or temporal ranges.
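
    The standards-based services mentioned above can be queried with plain HTTP; a sketch of building an OGC WFS GetFeature request (the endpoint and layer name are hypothetical):

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, bbox):
    """Build a WFS 1.1.0 GetFeature request URL for one feature type,
    restricted to a bounding box (minx, miny, maxx, maxy)."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": type_name,
        "bbox": ",".join(str(c) for c in bbox),
    }
    return base_url + "?" + urlencode(params)

# hypothetical MaGa endpoint and layer; bbox roughly covers Etna
url = wfs_getfeature_url("https://example.org/maga/wfs",
                         "maga:gas_emission_site",
                         (14.8, 37.6, 15.2, 37.9))
```

Because each member database only has to expose such standard requests, the portal can federate results without touching any database's local schema.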

  14. Interactive, Automated Management of Icing Data

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.

    2009-01-01

    IceVal DatAssistant is software (see figure) that provides an automated, interactive solution for the management of data from research on aircraft icing. This software consists primarily of (1) a relational database component used to store ice shape and airfoil coordinates and associated data on operational and environmental test conditions and (2) a graphically oriented database access utility, used to upload, download, process, and/or display data selected by the user. The relational database component consists of a Microsoft Access 2003 database file with nine tables containing data of different types. Included in the database are the data for all publicly releasable ice tracings with complete and verifiable test conditions from experiments conducted to date in the Glenn Research Center Icing Research Tunnel. Ice shapes from computational simulations with the corresponding conditions, performed utilizing the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the equivalent experimental runs. The database access component includes ten Microsoft Visual Basic 6.0 (VB) form modules and three VB support modules. Together, these modules enable uploading, downloading, processing, and display of all data contained in the database. This component also affords the capability to perform various database maintenance functions, for example compacting the database or creating a new, fully initialized but empty database file.
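
    The experiment-to-simulation linkage described above is a classic foreign-key relationship; a minimal sketch in SQLite (the table and column names are illustrative, not IceVal's actual Access schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE experiment_run (
    run_id        INTEGER PRIMARY KEY,
    airfoil       TEXT,
    temperature_c REAL,
    lwc_g_m3      REAL   -- liquid water content
);
CREATE TABLE lewice_run (
    sim_id            INTEGER PRIMARY KEY,
    experiment_run_id INTEGER REFERENCES experiment_run(run_id),
    code_version      TEXT
);
""")
conn.execute("INSERT INTO experiment_run VALUES (1, 'NACA 0012', -10.0, 0.5)")
conn.execute("INSERT INTO lewice_run VALUES (100, 1, '3.2')")

# join experimental runs to their linked LEWICE simulations
pairs = conn.execute("""
    SELECT e.airfoil, s.code_version
    FROM experiment_run e
    JOIN lewice_run s ON s.experiment_run_id = e.run_id
""").fetchall()
# pairs -> [('NACA 0012', '3.2')]
```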

  15. The NASA ASTP Combined-Cycle Propulsion Database Project

    NASA Technical Reports Server (NTRS)

    Hyde, Eric H.; Escher, Daric W.; Heck, Mary T.; Roddy, Jordan E.; Lyles, Garry (Technical Monitor)

    2000-01-01

    The National Aeronautics and Space Administration (NASA) communicated its long-term R&D goals for aeronautics and space transportation technologies in its 1997-98 annual progress report (Reference 1). Under "Pillar 3, Goal 9" a 25-year-horizon set of objectives has been stated for the Generation 3 Reusable Launch Vehicle ("Gen 3 RLV") class of space transportation systems. An initiative referred to as "Spaceliner 100" is being conducted to identify technology roadmaps in support of these objectives. Responsibility for running "Spaceliner 100" technology development and demonstration activities has been assigned to NASA's agency-wide Advanced Space Transportation Program (ASTP) office located at the Marshall Space Flight Center. A key technology area in which advances will be required in order to meet these objectives is propulsion. In 1996, in order to expand their focus beyond "all-rocket" propulsion systems and technologies (see Appendix A for further discussion), ASTP initiated technology development and demonstration work on combined-cycle airbreathing/rocket propulsion systems (ARTT Contracts NAS8-40890 through 40894). Combined-cycle propulsion (CCP) activities (see Appendix B for definitions) have been pursued in the U.S. for over four decades, resulting in a large documented knowledge base on this subject (see Reference 2). In the fall of 1999 the Combined-Cycle Propulsion Database (CCPD) project was established with the primary purpose of collecting and consolidating CCP-related technical information in support of the ASTP's ongoing technology development and demonstration program. Science Applications International Corporation (SAIC) was selected to perform the initial development of the Database under its existing support contract with MSFC (Contract NAS8-99060) because of the company's unique combination of capabilities in database development, information technology (IT) and CCP knowledge. 
The CCPD is summarized in the descriptive 2-page flyer appended to this paper as Appendix C. The purpose of this paper is to provide the reader with an understanding of the objectives of the CCPD and relate the progress that has been made toward meeting those objectives.

  16. Waste-to-Energy biofuel production potential for selected feedstocks in the conterminous United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skaggs, Richard L.; Coleman, Andre M.; Seiple, Timothy E.

    Here, Waste-to-Energy (WtE) technologies offer the promise of diverting organic wastes, including wastewater sludge, livestock waste, and food waste, for beneficial energy use while reducing the quantities of waste that are disposed or released to the environment. To ensure economic and environmental viability of WtE feedstocks, it is critical to gain an understanding of the spatial and temporal variability of waste production. Detailed information about waste characteristics, capture/diversion, transport requirements, available conversion technologies, and overall energy conversion efficiency is also required. Building on the development of a comprehensive WtE feedstock database that includes municipal wastewater sludge; animal manure; food processing waste; and fats, oils, and grease for the conterminous United States, we conducted a detailed analysis of the wastes' potential for biofuel production on a site-specific basis. Our analysis indicates that with conversion by hydrothermal liquefaction, these wastes have the potential to produce up to 22.3 GL/y (5.9 Bgal/y) of a biocrude oil intermediate that can be upgraded and refined into a variety of liquid fuels, in particular renewable diesel and aviation kerosene. Conversion to aviation kerosene can potentially meet 23.9% of current U.S. demand.
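
    The quoted volumes can be cross-checked with a simple unit conversion (22.3 GL/y to billions of US gallons per year):

```python
US_GALLON_L = 3.785411784  # liters per US gallon (exact by definition)

biocrude_gl_per_year = 22.3            # gigaliters per year, from the abstract
liters_per_year = biocrude_gl_per_year * 1e9
bgal_per_year = liters_per_year / US_GALLON_L / 1e9
# bgal_per_year -> about 5.89, matching the 5.9 Bgal/y quoted above
```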

  17. Waste-to-Energy biofuel production potential for selected feedstocks in the conterminous United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skaggs, Richard L.; Coleman, André M.; Seiple, Timothy E.

    Waste-to-Energy (WtE) technologies offer the promise of diverting organic wastes, including wastewater sludge, livestock waste, and food waste, for beneficial energy use while reducing the quantities of waste that are disposed or released to the environment. To ensure economic and environmental viability of WtE feedstocks, it is critical to gain an understanding of the spatial and temporal variability of waste production. Detailed information about waste characteristics, capture/diversion, transport requirements, available conversion technologies, and overall energy conversion efficiency is also required. Building on the development of a comprehensive WtE feedstock database that includes municipal wastewater sludge; animal manure; food processing waste; and fats, oils, and grease for the conterminous United States, we conducted a detailed analysis of the wastes’ potential for biofuel production on a site-specific basis. Our analysis indicates that with conversion by hydrothermal liquefaction, these wastes have the potential to produce up to 22.3 GL/y (5.9 Bgal/y) of a biocrude oil intermediate that can be upgraded and refined into a variety of liquid fuels, in particular renewable diesel and aviation kerosene. Conversion to aviation kerosene can potentially meet 23.9% of current U.S. demand.

  18. Waste-to-Energy biofuel production potential for selected feedstocks in the conterminous United States

    DOE PAGES

    Skaggs, Richard L.; Coleman, André M.; Seiple, Timothy E.; ...

    2017-10-18

    Waste-to-Energy (WtE) technologies offer the promise of diverting organic wastes, including wastewater sludge, livestock waste, and food waste, for beneficial energy use while reducing the quantities of waste that are disposed or released to the environment. To ensure economic and environmental viability of WtE feedstocks, it is critical to gain an understanding of the spatial and temporal variability of waste production. Detailed information about waste characteristics, capture/diversion, transport requirements, available conversion technologies, and overall energy conversion efficiency is also required. Building on the development of a comprehensive WtE feedstock database that includes municipal wastewater sludge; animal manure; food processing waste; and fats, oils, and grease for the conterminous United States, we conducted a detailed analysis of the wastes’ potential for biofuel production on a site-specific basis. Our analysis indicates that with conversion by hydrothermal liquefaction, these wastes have the potential to produce up to 22.3 GL/y (5.9 Bgal/y) of a biocrude oil intermediate that can be upgraded and refined into a variety of liquid fuels, in particular renewable diesel and aviation kerosene. Conversion to aviation kerosene can potentially meet 23.9% of current U.S. demand.

  19. Waste-to-Energy biofuel production potential for selected feedstocks in the conterminous United States

    DOE PAGES

    Skaggs, Richard L.; Coleman, Andre M.; Seiple, Timothy E.; ...

    2017-10-18

    Here, Waste-to-Energy (WtE) technologies offer the promise of diverting organic wastes, including wastewater sludge, livestock waste, and food waste, for beneficial energy use while reducing the quantities of waste that are disposed or released to the environment. To ensure economic and environmental viability of WtE feedstocks, it is critical to gain an understanding of the spatial and temporal variability of waste production. Detailed information about waste characteristics, capture/diversion, transport requirements, available conversion technologies, and overall energy conversion efficiency is also required. Building on the development of a comprehensive WtE feedstock database that includes municipal wastewater sludge; animal manure; food processing waste; and fats, oils, and grease for the conterminous United States, we conducted a detailed analysis of the wastes' potential for biofuel production on a site-specific basis. Our analysis indicates that with conversion by hydrothermal liquefaction, these wastes have the potential to produce up to 22.3 GL/y (5.9 Bgal/y) of a biocrude oil intermediate that can be upgraded and refined into a variety of liquid fuels, in particular renewable diesel and aviation kerosene. Conversion to aviation kerosene can potentially meet 23.9% of current U.S. demand.

  20. GOME and Sciamachy data access using the Netherlands Sciamachy Data Center

    NASA Astrophysics Data System (ADS)

    Som de Cerff, Wim; de Vreede, Ernst; van de Vegte, John; van Hees, Ricard; van der Neut, Ian; Stammes, Piet; Pieters, Ankie; van der A, Ronald

    2010-05-01

    The Netherlands Sciamachy Data Center (NL-SCIA-DC) has provided access to satellite data from the GOME and Sciamachy instruments for over 10 years. GOME and Sciamachy both measure trace gases such as ozone, methane, NO2, and aerosols, which are important for climate and air-quality monitoring. A new release of the NL-SCIA-DC (February 2010) provides an improved processing and archiving structure and an improved user interface. This Java Webstart application allows the user to browse, query, and download GOME and Sciamachy data products, including KNMI and SRON GOME and Sciamachy products (cloud products, CH4, NO2, CO). Data can be searched at file and pixel level and can be graphically displayed. The huge database containing all pixel information of GOME and Sciamachy is unique and allows specific selections, e.g., selecting cloud-free pixels. Ordered data is delivered by FTP or email. The data available span the mission times of GOME and Sciamachy and are constantly updated as new data become available. Future upgrades to the data service include additional functionality for end-users of Sciamachy data. One of the functionalities provided will be the possibility to select and process Sciamachy products using different data processors, using Grid technology. This technology was successfully researched and will be made operationally available in the near future.

  1. Computerized Design Synthesis (CDS), A database-driven multidisciplinary design tool

    NASA Technical Reports Server (NTRS)

    Anderson, D. M.; Bolukbasi, A. O.

    1989-01-01

    The Computerized Design Synthesis (CDS) system under development at McDonnell Douglas Helicopter Company (MDHC) is targeted to make revolutionary improvements in both response time and resource efficiency in the conceptual and preliminary design of rotorcraft systems. It makes the accumulated design database and supporting technology analysis results readily available to designers and analysts of technology, systems, and production, and makes powerful design synthesis software available in a user-friendly format.

  2. Extending the data dictionary for data/knowledge management

    NASA Technical Reports Server (NTRS)

    Hydrick, Cecile L.; Graves, Sara J.

    1988-01-01

    Current relational database technology provides the means for efficiently storing and retrieving large amounts of data. By combining techniques learned from the field of artificial intelligence with this technology, it is possible to expand the capabilities of such systems. This paper suggests using the expanded domain concept, an object-oriented organization, and the storing of knowledge rules within the relational database as a solution to the unique problems associated with CAD/CAM and engineering data.

  3. Design and implementation of website information disclosure assessment system.

    PubMed

    Cho, Ying-Chiang; Pan, Jen-Yi

    2015-01-01

    Internet application technologies, such as cloud computing and cloud storage, have increasingly changed people's lives. Websites contain vast amounts of personal privacy information. In order to protect this information, network security technologies, such as database protection and data encryption, attract many researchers. The most serious problems concerning web vulnerability are e-mail address and network database leakages. These leakages have many causes. For example, malicious users can steal database contents, taking advantage of mistakes made by programmers and administrators. In order to mitigate this type of abuse, a website information disclosure assessment system is proposed in this study. This system utilizes a series of technologies, such as web crawler algorithms, SQL injection attack detection, and web vulnerability mining, to assess a website's information disclosure. Thirty websites, randomly sampled from the top 50 world colleges, were used to collect leakage information. This testing showed the importance of increasing the security and privacy of website information for academic websites.
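
    Signature matching is one simple ingredient of the kind of assessment described above; a toy check for obvious SQL-injection probes in a parameter value (illustrative patterns only, not the study's actual detector, which combines crawling, injection testing, and vulnerability mining):

```python
import re

# A few classic injection signatures. Real scanners use far richer
# detection (error-based, boolean-based, and time-based probing).
SQLI_PATTERNS = [
    r"(?i)\bunion\b.+\bselect\b",   # UNION-based data extraction
    r"(?i)\bor\b\s+1\s*=\s*1",      # tautology bypass
    r"--",                          # SQL comment to truncate the query
    r"(?i)\bdrop\b\s+\btable\b",    # destructive statement
]

def looks_like_sqli(value):
    """True if the parameter value matches any known injection signature."""
    return any(re.search(p, value) for p in SQLI_PATTERNS)

looks_like_sqli("id=5")                     # False: benign value
looks_like_sqli("id=5 OR 1=1 --")           # True: tautology + comment
looks_like_sqli("name=1' UNION SELECT pw")  # True: UNION extraction probe
```

Signature checks like this produce false positives and negatives; they only flag candidates for the deeper verification steps the study describes.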

  4. Privacy Technology to Support Data Sharing for Comparative Effectiveness Research: A SYSTEMATIC REVIEW

    PubMed Central

    Jiang, Xiaoqian; Sarwate, Anand D.; Ohno-Machado, Lucila

    2013-01-01

    Objective Effective data sharing is critical for comparative effectiveness research (CER), but there are significant concerns about inappropriate disclosure of patient data. These concerns have spurred the development of new technologies for privacy-preserving data sharing and data mining. Our goal is to review existing and emerging techniques that may be appropriate for data sharing related to CER. Material and methods We adapted a systematic review methodology to comprehensively search the research literature. We searched 7 databases and applied three stages of filtering based on titles, abstracts, and full text to identify those works most relevant to CER. Results Based on agreement, and using the arbitration of a third-party expert, we selected 97 articles for meta-analysis. Our findings are organized along major types of data sharing in CER applications (i.e., institution-to-institution, institution-hosted, and public release). We made recommendations based on specific scenarios. Limitation We limited the scope of our study to methods that demonstrated practical impact, eliminating many theoretical studies of privacy that have been surveyed elsewhere. We further limited our study to data sharing for data tables, rather than complex genomic, set-valued, time series, text, image, or network data. Conclusion State-of-the-art privacy-preserving technologies can guide the development of practical tools that will scale up the CER studies of the future. However, many challenges remain in this fast-moving field in terms of practical evaluations as well as applications to a wider range of data types. PMID:23774511
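
    One widely used criterion for the public-release scenario is k-anonymity; a minimal sketch of checking it over tabular records (the field names and rows are illustrative):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values
    appears in at least k records, so no individual can be
    singled out by those attributes alone."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(count >= k for count in groups.values())

rows = [
    {"age_band": "30-39", "zip3": "981", "dx": "flu"},
    {"age_band": "30-39", "zip3": "981", "dx": "asthma"},
    {"age_band": "40-49", "zip3": "981", "dx": "flu"},
]
# False: the ("40-49", "981") group contains only one record
is_k_anonymous(rows, ["age_band", "zip3"], 2)
```

In practice, rows that violate the criterion are generalized (e.g., widening age bands) or suppressed until every group reaches size k.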

  5. Abuse-deterrent features of an extended-release morphine drug product developed using a novel injection-molding technology for oral drug delivery.

    PubMed

    Skak, Nikolaj; Elhauge, Torben; Dayno, Jeffrey M; Lindhardt, Karsten

    A novel technology platform (Guardian™ Technology, Egalet Corporation, Wayne, PA) was used to manufacture morphine abuse-deterrent (AD), extended-release (ER), injection-molded tablets (morphine-ADER-IMT; ARYMO® ER [morphine sulfate] ER tablets; Egalet Corporation), a recently approved morphine product with AD labeling. The aim of this article is to highlight how the features of Guardian™ Technology are linked to the ER profile and AD characteristics of morphine-ADER-IMT. The ER profile of morphine-ADER-IMT is attributed to the precise release of morphine from the polymer matrix. The approved dosage strengths of morphine-ADER-IMT are bioequivalent to the corresponding dosage strengths of morphine ER (MS Contin®; Purdue Pharma LP, Stamford, CT). Morphine-ADER-IMT was very resistant to physical manipulations intended to reduce particle size, with <10 percent of particles being reduced to <500 µm, regarded by the US Food and Drug Administration as a relevant cutoff for potential insufflation in its generic solid oral AD opioid guidance. Furthermore, morphine was not readily extracted from the polymer matrix of morphine-ADER-IMT in small- or large-volume solvent extraction studies that evaluated the potential for intravenous and oral abuse. The ER profile and AD characteristics of morphine-ADER-IMT are a result of Guardian™ Technology. The combination of the polyethylene oxide matrix and the use of injection molding differentiates morphine-ADER-IMT from other approved AD opioids that deter abuse using physical and chemical barriers. The high degree of flexibility of Guardian™ Technology enables the development of products that can be tailored to almost any desired release profile; as such, it is a technology platform that may be useful for the development of a wide range of pharmaceutical products.
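
    The particle-size criterion mentioned above reduces to a simple fraction-below-cutoff computation; a sketch with illustrative (not measured) sizes:

```python
def fraction_below_cutoff(particle_sizes_um, cutoff_um=500):
    """Fraction of particles smaller than the cutoff. The 500 um
    cutoff reflects the insufflation-relevant size discussed above;
    the sample sizes used here are illustrative."""
    below = sum(1 for s in particle_sizes_um if s < cutoff_um)
    return below / len(particle_sizes_um)

sizes = [1200, 950, 800, 640, 610, 560, 520, 480, 450, 300]
frac = fraction_below_cutoff(sizes)  # 0.3 for this illustrative sample
```

A manipulated product would be reported against this metric; the abstract's result corresponds to a value below 0.10.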

  6. iPhone App for Cassini's Magnetospheric Imaging Instrument (MIMI) Browse Products

    NASA Astrophysics Data System (ADS)

    Myers, H. Y.; Kusterer, M. B.; Mitchell, D. G.; Steele, R. J.; Vandegriff, J. D.

    2016-12-01

    We have created a mobile app on the iOS platform to view the years of browse plots from data collected by the MIMI instruments on Cassini. The focus of the app is to bring the browsing capabilities of the MIMI database to the touchscreen technologies of mobile devices such as smartphones and tablets. The data products within the MIMI suite that are viewable through the app include the Energetic Neutral Atom (ENA) images and movies of Saturn taken with the Ion and Neutral Camera (INCA), and spectrograms and line plots from the LEMMS and CHEMS particle detectors. The release of this app also coincides with access to a number of MIMI data products previously not available to the public. We will unveil the features of the app and provide a working demo. The CassiniMIMI app will be available for free from Apple's iTunes Store. A sneak preview of some selection screens and a representative plot are shown in the separate image file.

  7. Proceedings of the Monterey Containment Symposium, Monterey, California, August 26-28, 1981. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, B.C.; Jones, E.M.; Keller, C.E.

    1983-02-01

    Since the Atmospheric Test Ban Treaty was signed in 1963, the United States has conducted all nuclear weapons tests underground. To meet US treaty responsibilities and to ensure public safety, the containment community must prevent any release of radioactive gases to the atmosphere. In the past two decades we have gained considerable insight into the scientific and engineering requirements for complete containment, but the papers and discussions at the Monterey Symposium indicate that a great deal remains to be done. Among the papers included here, those dealing with mature topics will serve as reviews and introductions for new workers in the field. Others, representing first looks at new areas, contain more speculative material. Active research topics include propagation of stress waves in rocks, formation and decay of residual hoop stresses around a cavity, hydrofracture out of a cavity, formation of chimneys, and geologic and geophysical investigations of the Nevada Test Site. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  8. Version 2.0 of the International Bathymetric Chart of the Arctic Ocean: A new Database for Oceanographers and Mapmakers

    NASA Astrophysics Data System (ADS)

    Jakobsson, M.; Macnab, R.; Edwards, M.; Schenke, H.; Hatzky, J.

    2007-12-01

The International Bathymetric Chart of the Arctic Ocean (IBCAO) was first released to the public after its introduction at the American Geophysical Union (AGU) Fall Meeting in 1999 (Jakobsson et al., 2000). This first release consisted of a Digital Bathymetric Model (DBM) on a polar stereographic projection with a grid cell spacing of 2.5 x 2.5 km, derived from an accumulated database of all bathymetric data available at the time of compilation. The IBCAO bathymetric database included soundings collected during past and modern expeditions as well as digitized isobaths and depth soundings from published maps. Compared to previous bathymetric maps of the Arctic Ocean, the first released IBCAO compilation was based upon a significantly enhanced database, particularly in the high Arctic. For example, declassified echo soundings acquired during US and British submarine cruises between 1958 and 1988 were included, as well as soundings from icebreaker cruises conducted by Sweden and Germany at the end of the last century. Despite the newly available data in 1999, there were still large areas of the Arctic Ocean where publicly available data were completely absent. Some of these areas had been mapped by Russian agencies, and since these observations were not available to IBCAO, depth contours from the bathymetric contour map published by the Head Department of Navigation and Oceanography (HDNO) (Naryshkin, 1999) were digitized and incorporated in the database. The new IBCAO Version 2.0 comprises the largest update since the first release; moreover, the grid spacing has been decreased to 2 x 2 km. Numerous multibeam data sets collected by icebreakers, e.g. the USCGC Healy, RRS James Clark Ross, R/V Polarstern, and IB Oden, now form part of the database, as do the swath bathymetric observations acquired during the 1999 SCICEX expedition. The portrayal of the Eastern Arctic Basin is vastly improved thanks to, for example, the Arctic Mid-Ocean Ridge Expedition 2001 (AMORE) and the Arctic Gakkel Vents 2007 (AGAVE) expedition, while mapping missions aboard the USCGC Healy have revealed the "real" shape of the sea floor of the central Lomonosov Ridge and in areas off Northern Alaska in the Western Arctic. This paper presents an overview of the new data included in Version 2.0 as well as a brief discussion of the improvements and their possible implications for IBCAO users. Jakobsson, M., Cherkis, N., Woodward, J., Macnab, R. and Coakley, B., 2000. New grid of Arctic bathymetry aids scientists and mapmakers. EOS, Transactions American Geophysical Union, 81: 89, 93, 96. Naryshkin, G., 1999. Bottom relief of the Arctic Ocean (bathymetric contour map). Head Department of Navigation and Oceanography / All-Russia Research Institute for Geology and Mineral Resources of the World Ocean, Russian Academy of Sciences.

  9. IRIS Toxicological Review of Benzo[a]pyrene (Public Comment Draft)

    EPA Science Inventory

    EPA is developing an Integrated Risk Information System (IRIS) assessment of benzo[a]pyrene and has released the draft assessment for public comment and external peer review. When final, the assessment will appear on the IRIS database.

  10. IRIS Toxicological Review of Naphthalene (1998 Final)

    EPA Science Inventory

    EPA announced the release of the final report, Toxicological Review of Naphthalene: in support of the Integrated Risk Information System (IRIS). The updated Summary for Naphthalene and accompanying toxicological review have been added to the IRIS Database.

  11. IRIS Toxicological Review of Phosgene (Final Report)

    EPA Science Inventory

    EPA announced the release of the final report, Toxicological Review of Phosgene: in support of the Integrated Risk Information System (IRIS). The updated Summary for Phosgene and accompanying toxicological review have been added to the IRIS Database.

  12. USDA dietary supplement ingredient database, release 2

    USDA-ARS?s Scientific Manuscript database

    The Nutrient Data Laboratory (NDL), Beltsville Human Nutrition Research Center (BHNRC), Agricultural Research Service (ARS), USDA, in collaboration with the Office of Dietary Supplements, National Institutes of Health (ODS/NIH) and other federal agencies has developed a Dietary Supplement Ingredient ...

  13. IRIS Toxicological Review of Acrolein (2003 Final)

    EPA Science Inventory

    EPA announced the release of the final report, Toxicological Review of Acrolein: in support of the Integrated Risk Information System (IRIS). The updated Summary for Acrolein and accompanying toxicological review have been added to the IRIS Database.

  14. IRIS Toxicological Review of Chloroform (Final Report)

    EPA Science Inventory

    EPA is announcing the release of the final report, Toxicological Review of Chloroform: in support of the Integrated Risk Information System (IRIS). The updated Summary for Chloroform and accompanying Quickview have also been added to the IRIS Database.

  15. Toxics Release Inventory Chemical Hazard Information Profiles (TRI-CHIP) Dataset

    EPA Pesticide Factsheets

    The Toxics Release Inventory (TRI) Chemical Hazard Information Profiles (TRI-CHIP) dataset contains hazard information about the chemicals reported in TRI. Users can use this XML-format dataset to create their own databases and hazard analyses of TRI chemicals. The hazard information is compiled from a series of authoritative sources including the Integrated Risk Information System (IRIS). The dataset is provided as a downloadable .zip file that when extracted provides XML files and schemas for the hazard information tables.
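As a rough illustration of what "create their own databases" from the XML files might look like, the sketch below loads hazard records into SQLite. The element and attribute names (`chemical`, `casNumber`, `hazard`, `source`) are hypothetical placeholders, not the actual TRI-CHIP layout; the real table structures are defined by the schemas shipped in the .zip file.

```python
# Sketch: loading TRI-CHIP-style hazard records from XML into SQLite.
# Element/attribute names here are hypothetical -- consult the schemas
# in the downloadable .zip for the actual hazard-table layout.
import sqlite3
import xml.etree.ElementTree as ET

SAMPLE = """<chemicals>
  <chemical casNumber="50-00-0" name="Formaldehyde">
    <hazard source="IRIS">Carcinogenicity</hazard>
  </chemical>
</chemicals>"""

def load_hazards(xml_text, conn):
    """Parse chemical/hazard elements and insert one row per hazard."""
    conn.execute("CREATE TABLE IF NOT EXISTS hazard "
                 "(cas TEXT, name TEXT, source TEXT, effect TEXT)")
    root = ET.fromstring(xml_text)
    for chem in root.iter("chemical"):
        for hz in chem.iter("hazard"):
            conn.execute("INSERT INTO hazard VALUES (?, ?, ?, ?)",
                         (chem.get("casNumber"), chem.get("name"),
                          hz.get("source"), hz.text))
    return conn.execute("SELECT COUNT(*) FROM hazard").fetchone()[0]

conn = sqlite3.connect(":memory:")
print(load_hazards(SAMPLE, conn))  # 1
```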

  16. Distributed Database Control and Allocation. Volume 2. Performance Analysis of Concurrency Control Algorithms.

    DTIC Science & Technology

    1983-10-01

    Concurrency Control Algorithms. Computer Corporation of America. Wente K. Lin, Philip A. Bernstein, Nathan Goodman and Jerry Nolte. Approved for public release. This report has been reviewed by the RADC Public Affairs Office (PA) and is releasable to the National Technical Information Service (NTIS). At NTIS it will be releasable to the general public, including foreign nations. RADC-TR-83-226, Vol II (of three) has been reviewed and is

  17. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    PubMed

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. In addition, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
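The hub-join idea can be sketched without any RDF tooling: below, triples are plain Python tuples, and shared ortholog-group membership links genes of different organisms so that an annotation on one gene can be reached from its ortholog. The predicate names (`orth:hasMember`, `go:annotatedWith`, `taxon:inOrganism`) are illustrative stand-ins, not actual OrthO terms; in the real system this join would be a SPARQL query against the MBGD endpoint.

```python
# Minimal triple store: (subject, predicate, object) tuples.
# Predicate names are hypothetical stand-ins for OrthO/MBGD terms.
triples = {
    ("orth:grp1", "orth:hasMember", "eco:geneA"),
    ("orth:grp1", "orth:hasMember", "bsu:geneB"),
    ("eco:geneA", "go:annotatedWith", "GO:0006260"),   # DNA replication
    ("bsu:geneB", "taxon:inOrganism", "Bacillus subtilis"),
}

def objects(s, p):
    """All objects reachable from subject s via predicate p."""
    return {o for (s2, p2, o) in triples if s2 == s and p2 == p}

def ortholog_partners(gene):
    """Genes sharing an ortholog group with `gene` -- the 'hub' join."""
    partners = set()
    for (grp, p, member) in triples:
        if p == "orth:hasMember" and member == gene:
            partners |= objects(grp, "orth:hasMember") - {gene}
    return partners

# Reach a GO annotation from another organism's gene via the ortholog hub:
for partner in ortholog_partners("eco:geneA"):
    print(partner, objects("eco:geneA", "go:annotatedWith"))
```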

  18. Surgical research using national databases

    PubMed Central

    Leland, Hyuma; Heckmann, Nathanael

    2016-01-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research. PMID:27867945

  19. Surgical research using national databases.

    PubMed

    Alluri, Ram K; Leland, Hyuma; Heckmann, Nathanael

    2016-10-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research.

  20. The Design and Product of National 1:1000000 Cartographic Data of Topographic Map

    NASA Astrophysics Data System (ADS)

    Wang, Guizhi

    2016-06-01

    The National Administration of Surveying, Mapping and Geoinformation launched the project of dynamically updating the national fundamental geographic information database in 2012. Within this project, the 1:50000 database is updated once a year, and the 1:250000 database is generalized from it and updated in linkage. In 2014, using the latest achievements of the 1:250000 database, the 1:1000000 digital line graph database was comprehensively updated; at the same time, cartographic data of the topographic map and digital elevation model data were generated. This article mainly introduces the national 1:1000000 cartographic data of the topographic map, including feature content, database structure, database-driven mapping technology, workflow and so on.

  1. Novel slow release nanocomposite nitrogen fertilizers: the impact of polymers on nanocomposite properties and function

    USDA-ARS?s Scientific Manuscript database

    Efficient use of fertilizers, especially nitrogen, is essential and strategic to agricultural production. Among the technologies that can contribute to efficient use of fertilizers are slow or controlled release products. This paper describes the impact on structure, urea release rate and function i...

  2. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    PubMed

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  3. Transterm—extended search facilities and improved integration with other databases

    PubMed Central

    Jacobs, Grant H.; Stockwell, Peter A.; Tate, Warren P.; Brown, Chris M.

    2006-01-01

    Transterm has now been publicly available for >10 years. Major changes have been made since its last description in this database issue in 2002. The current database provides data for key regions of mRNA sequences, a curated database of mRNA motifs and tools to allow users to investigate their own motifs or mRNA sequences. The key mRNA regions database is derived computationally from Genbank. It contains 3′ and 5′ flanking regions, the initiation and termination signal context and coding sequence for annotated CDS features from Genbank and RefSeq. The database is non-redundant, enabling summary files and statistics to be prepared for each species. Advances include providing extended search facilities, the database may now be searched by BLAST in addition to regular expressions (patterns) allowing users to search for motifs such as known miRNA sequences, and the inclusion of RefSeq data. The database contains >40 motifs or structural patterns important for translational control. In this release, patterns from UTRsite and Rfam are also incorporated with cross-referencing. Users may search their sequence data with Transterm or user-defined patterns. The system is accessible at . PMID:16381889
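A user-defined pattern search of the kind Transterm supports can be illustrated in a few lines of Python. The motif below is the textbook AAUAAA polyadenylation signal (allowing the common AUUAAA variant), chosen as a standard example rather than taken from the Transterm motif set.

```python
# Regex-based motif search over an mRNA region, in the spirit of
# Transterm's user-defined pattern search. The pattern is the canonical
# polyadenylation signal AAUAAA, with the common AUUAAA variant allowed.
import re

POLYA_SIGNAL = re.compile(r"A[AU]UAAA")

def find_motifs(utr3):
    """Return (start, match) pairs for each motif hit in a 3' UTR."""
    return [(m.start(), m.group()) for m in POLYA_SIGNAL.finditer(utr3)]

seq = "GCCAUUAAACGGAAUAAAGC"
print(find_motifs(seq))  # [(3, 'AUUAAA'), (12, 'AAUAAA')]
```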

  4. PIGD: a database for intronless genes in the Poaceae.

    PubMed

    Yan, Hanwei; Jiang, Cuiping; Li, Xiaoyu; Sheng, Lei; Dong, Qing; Peng, Xiaojian; Li, Qian; Zhao, Yang; Jiang, Haiyang; Cheng, Beijiu

    2014-10-01

    Intronless genes are a feature of prokaryotes; however, they are widespread and unequally distributed among eukaryotes and represent an important resource to study the evolution of gene architecture. Although many databases on exons and introns exist, there is currently no cohesive database that collects intronless genes in plants into a single database. In this study, we present the Poaceae Intronless Genes Database (PIGD), a user-friendly web interface to explore information on intronless genes from different plants. Five Poaceae species, Sorghum bicolor, Zea mays, Setaria italica, Panicum virgatum and Brachypodium distachyon, are included in the current release of PIGD. Gene annotations and sequence data were collected and integrated from different databases. The primary focus of this study was to provide gene descriptions and gene product records. In addition, functional annotations, subcellular localization prediction and taxonomic distribution are reported. PIGD allows users to readily browse, search and download data. BLAST and comparative analyses are also provided through this online database, which is available at http://pigd.ahau.edu.cn/. PIGD provides a solid platform for the collection, integration and analysis of intronless genes in the Poaceae. As such, this database will be useful for subsequent bio-computational analysis in comparative genomics and evolutionary studies.

  5. Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database

    NASA Technical Reports Server (NTRS)

    Hanke, Jeremy L.

    2011-01-01

    The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.
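The abstract names three uncertainty sources but not how they are combined. For independent sources, a common convention is root-sum-square (RSS), which the minimal sketch below assumes; this is an illustrative assumption, not the panel's documented procedure.

```python
# Illustrative RSS combination of three independent uncertainty sources
# (experimental, database modeling, query interpolation). The RSS form
# is an assumption for illustration, not the Ares panel's stated method.
import math

def combined_uncertainty(experimental, modeling, interpolation):
    """Root-sum-square of three independent uncertainty components."""
    return math.sqrt(experimental**2 + modeling**2 + interpolation**2)

print(round(combined_uncertainty(0.03, 0.04, 0.0), 6))  # 0.05
```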

  6. On the experimental approaches for the assessment of the release of engineered nanomaterials from nanocomposites by physical degradation processes

    NASA Astrophysics Data System (ADS)

    Blázquez, M.; Egizabal, A.; Unzueta, I.

    2014-08-01

    The LIFE+ project SIRENA, Simulation of the release of nanomaterials from consumer products for environmental exposure assessment (LIFE11 ENV/ES/596), has set up a Technological Surveillance System (TSS) to trace technical references worldwide related to nanocomposites and release from nanocomposites. So far, a total of seventy-three items of different kinds (from peer-reviewed articles to presentations and contributions to congresses) have been selected and classified as "nanomaterials release simulation technologies". In the present document, different approaches for simulating different life-cycle stages through the physical degradation of polymer nanocomposites at laboratory scale are assessed. In the absence of a reference methodology, comparing the different protocols used remains a challenge.

  7. CONFERENCE REPORT: Summary of the 8th IAEA Technical Meeting on Fusion Power Plant Safety

    NASA Astrophysics Data System (ADS)

    Girard, J. Ph.; Gulden, W.; Kolbasov, B.; Louzeiro-Malaquias, A.-J.; Petti, D.; Rodriguez-Rodrigo, L.

    2008-01-01

    Reports were presented covering a selection of topics on the safety of fusion power plants. These included a review of licensing studies developed for ITER site preparation, surveying common and site-dependent issues as lessons for a broader approach to fusion power plant safety. Several fusion power plant models, ranging from accessible technology to concepts based on more advanced materials, were discussed. On fusion-specific technology, safety studies were reported on different concepts of breeding blanket modules, tritium handling and auxiliary systems under normal and accident scenarios. The testing of power-plant-relevant technology in ITER was also assessed for normal operation and accident scenarios, and the occupational doses and radioactive releases associated with these tests were determined. Other safety issues specific to fusion were also discussed, such as the availability and reliability of fusion power plants, dust and tritium inventories, and component failure databases. The meeting showed that the environmental impact of fusion power plants can be minimized through a proper selection of low-activation materials and the use of recycling technology, helping to reduce waste volume and potentially opening the route to reuse of material within the nuclear sector or even its clearance into the commercial circuit. Computational codes for fusion safety were presented in support of the many studies reported. Ongoing work on establishing validation approaches aimed at improving the predictive capability of fusion codes has been supported by experimental results, and new directions for development have been identified. Fusion-specific standards are not available, and fission experience is mostly used as the framework for licensing and as the basis of targets for safe operation and for occupational and environmental constraints. 
It has been argued that fusion would benefit from a fusion-specific approach, in particular for materials selection, which will have a large impact on waste disposal and recycling, and for realistic limits on radiation releases indexed to their actual impact on individuals and the environment, given the differences between the radiation emitted by tritium and that of fission products. Round-table sessions resulted in some common recommendations. The discussions also created awareness of the need for larger involvement of the IAEA in supporting the development of fusion safety standards.

  8. Distributed Episodic Exploratory Planning (DEEP)

    DTIC Science & Technology

    2008-12-01

    API). For DEEP, Hibernate offered the following advantages: • Abstracts SQL by utilizing HQL, so any database with a Java Database Connectivity... Hibernate SQL; ICCRTS International Command and Control Research and Technology Symposium; JDB Java Distributed Blackboard; JDBC Java Database Connectivity... selected because of its opportunistic reasoning capabilities and implemented in Java for platform independence. Java was chosen for ease of

  9. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    ERIC Educational Resources Information Center

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  10. New data sources and derived products for the SRER digital spatial database

    Treesearch

    Craig Wissler; Deborah Angell

    2003-01-01

    The Santa Rita Experimental Range (SRER) digital database was developed to automate and preserve ecological data and increase their accessibility. The digital data holdings include a spatial database that is used to integrate ecological data in a known reference system and to support spatial analyses. Recently, the Advanced Resource Technology (ART) facility has added...

  11. Applying Cognitive Load Theory to the Redesign of a Conventional Database Systems Course

    ERIC Educational Resources Information Center

    Mason, Raina; Seton, Carolyn; Cooper, Graham

    2016-01-01

    Cognitive load theory (CLT) was used to redesign a Database Systems course for Information Technology students. The redesign was intended to address poor student performance and low satisfaction, and to provide a more relevant foundation in database design and use for subsequent studies and industry. The original course followed the conventional…

  12. Common Database Interface for Heterogeneous Software Engineering Tools.

    DTIC Science & Technology

    1987-12-01

    SUB-GROUP: Database Management Systems; Programming (Computers); Computer Files; Information Transfer; Interfaces. ABSTRACT: ... Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Information Systems ... Literature ... System 690 Configuration ... Database Functions ... Software Engineering Environments ... Data Manager

  13. Charting the Progress

    ERIC Educational Resources Information Center

    CURRENTS, 2010

    2010-01-01

    Advancement technology is reshaping the business of fundraising, alumni relations, communications, and marketing. Through all of these innovations, the backbone of advancement systems remains the constituent database. This article takes a look at advancement databases that track constituent data.

  14. Efficient data management tools for the heterogeneous big data warehouse

    NASA Astrophysics Data System (ADS)

    Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.

    2016-09-01

    The traditional RDBMS is well suited to normalized data structures and has served well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields such as social networks, the oil and gas industry, and experiments at the Large Hadron Collider. Several challenges have been raised recently concerning the scalability of data-warehouse-like workloads against a transactional schema, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates newer database technologies such as HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied, and large amounts of data. The evaluation considers the performance, throughput, and scalability of these technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, and describes the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the organization of the NoSQL database, and how it could be useful for further data analytics.
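The core of such an RDBMS-to-NoSQL migration is denormalization: child rows are embedded in their parent row to form the nested documents that stores such as MongoDB expect, instead of joining at read time. The sketch below shows that step with illustrative table and field names, which are not those of the paper's actual back-end application.

```python
# Sketch of the row-to-document step in an RDBMS -> NoSQL migration.
# Table/field names are illustrative, not the paper's actual schema.
from collections import defaultdict

runs = [
    {"run_id": 1, "detector": "ATLAS"},
    {"run_id": 2, "detector": "ATLAS"},
]
events = [
    {"run_id": 1, "event_id": 10, "energy_gev": 13.0},
    {"run_id": 1, "event_id": 11, "energy_gev": 12.4},
    {"run_id": 2, "event_id": 20, "energy_gev": 13.6},
]

def to_documents(runs, events):
    """Embed each child row under its parent, dropping the foreign key."""
    children = defaultdict(list)
    for ev in events:
        children[ev["run_id"]].append(
            {k: v for k, v in ev.items() if k != "run_id"})
    return [dict(run, events=children[run["run_id"]]) for run in runs]

docs = to_documents(runs, events)
print(len(docs), len(docs[0]["events"]))  # 2 2
```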

  15. Planned and ongoing projects (POP) database: development and results.

    PubMed

    Wild, Claudia; Erdös, Judit; Warmuth, Marisa; Hinterreiter, Gerda; Krämer, Peter; Chalon, Patrice

    2014-11-01

    The aim of this study was to present the development, structure and results of a database on planned and ongoing health technology assessment (HTA) projects (POP Database) in Europe. The POP Database (POP DB) was set up in an iterative process from a basic Excel sheet to a multifunctional electronic online database. The functionalities, such as the search terminology, the procedures to fill and update the database, the access rules to enter the database, as well as the maintenance roles, were defined in a multistep participatory feedback loop with EUnetHTA Partners. The POP Database has become an online database that hosts not only the titles and MeSH categorizations, but also some basic information on status and contact details about the listed projects of EUnetHTA Partners. Currently, it stores more than 1,200 planned, ongoing or recently published projects of forty-three EUnetHTA Partners from twenty-four countries. Because the POP Database aims to facilitate collaboration, it also provides a matching system to assist in identifying similar projects. Overall, more than 10 percent of the projects in the database are identical both in terms of pathology (indication or disease) and technology (drug, medical device, intervention). In addition, approximately 30 percent of the projects are similar, meaning that they have at least some overlap in content. Although the POP DB is successful concerning regular updates of most national HTA agencies within EUnetHTA, little is known about its actual effects on collaborations in Europe. Moreover, many non-nationally nominated HTA producing agencies neither have access to the POP DB nor can share their projects.

  16. Does filler database size influence identification accuracy?

    PubMed

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor-filler database size-as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provides support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Consulting report on the NASA technology utilization network system

    NASA Technical Reports Server (NTRS)

    Hlava, Marjorie M. K.

    1992-01-01

    The purposes of this consulting effort are: (1) to evaluate the existing management and production procedures and workflow as they each relate to the successful development, utilization, and implementation of the NASA Technology Utilization Network System (TUNS) database; (2) to identify, as requested by the NASA Project Monitor, the strengths, weaknesses, areas of bottlenecking, and previously unaddressed problem areas affecting TUNS; (3) to recommend changes or modifications of existing procedures as necessary in order to effect corrections for the overall benefit of NASA TUNS database production, implementation, and utilization; and (4) to recommend the addition of alternative procedures, routines, and activities that will consolidate and facilitate the production, implementation, and utilization of the NASA TUNS database.

  18. National health care providers' database (NHCPD) of Slovenia--information technology solution for health care planning and management.

    PubMed

    Albreht, T; Paulin, M

    1999-01-01

    The article describes the possibilities of planning of the health care providers' network enabled by the use of information technology. The cornerstone of such planning is the development and establishment of a quality database on health care providers, health care professionals and their employment statuses. Based on the analysis of information needs, a new database was developed for various users in health care delivery as well as for those in health insurance. The method of information engineering was used in the standard four steps of the information system construction, while the whole project was run in accordance with the principles of two internationally approved project management methods. Special attention was dedicated to a careful analysis of the users' requirements and we believe the latter to be fulfilled to a very large degree. The new NHCPD is a relational database which is set up in two important state institutions, the National Institute of Public Health and the Health Insurance Institute of Slovenia. The former is responsible for updating the database, while the latter is responsible for the technological side as well as for the implementation of data security and protection. NHCPD will be interlinked with several other existing applications in the area of health care, public health and health insurance. Several important state institutions and professional chambers are users of the database in question, thus integrating various aspects of the health care system in Slovenia. The setting up of a completely revised health care providers' database in Slovenia is an important step in the development of a uniform and integrated information system that would support top decision-making processes at the national level.

  19. 77 FR 52766 - Technology and Trading Roundtable

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-30

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-67725; File No. 4-652] Technology and Trading... ``Technology and Trading: Promoting Stability in Today's Markets'' to discuss ways to promote stability in..., implement, and manage complex and inter-connected trading technologies. The roundtable discussion will be...

  20. 77 FR 56697 - Technology and Trading Roundtable

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-13

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-67802; File No. 4-652] Technology and Trading.... SUMMARY: The Securities and Exchange Commission will host a one day roundtable entitled ``Technology and... rely on highly automated systems. The market technology roundtable, which was scheduled for September...

  1. Receptivity of Librarians to Optical Information Technologies and Products.

    ERIC Educational Resources Information Center

    Eaton, Nancy

    1986-01-01

    Examines factors which may affect the receptivity of librarians to the use of optical disk technologies, including hardware and software issues, the content of currently available databases, and the integration of optical technologies into existing library services. (CLB)

  2. Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases.

    PubMed

    Sanderson, Lacey-Anne; Ficklin, Stephen P; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A; Bett, Kirstin E; Main, Dorrie

    2013-01-01

    Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including 'Feature Map', 'Genetic', 'Publication', 'Project', 'Contact' and the 'Natural Diversity' modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. DATABASE URL: http://tripal.info/.

  3. Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases

    PubMed Central

    Sanderson, Lacey-Anne; Ficklin, Stephen P.; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A.; Bett, Kirstin E.; Main, Dorrie

    2013-01-01

    Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including ‘Feature Map’, ‘Genetic’, ‘Publication’, ‘Project’, ‘Contact’ and the ‘Natural Diversity’ modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. Database URL: http://tripal.info/ PMID:24163125

  4. Enabling heterogenous multi-scale database for emergency service functions through geoinformation technologies

    NASA Astrophysics Data System (ADS)

    Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges are playing an important role in dissemination of the emergency services to the decision makers. Spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in the form of raster and vector, linked with non-spatial information. Comprehensive data is required to handle emergency situations in different phases. These database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum required data, including base, thematic and infrastructure layers, to handle disasters. Disaster-specific information is required to handle a particular disaster situation like flood, cyclone, forest fire, earthquake, landslide or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with required-resolution satellite imagery for visualisation and analysis in disaster management. Therefore, the database is interconnected and comprehensive to meet the requirements of Emergency Management. This kind of integrated, comprehensive and structured database with appropriate information is required to obtain the right information at the right time for the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as availability of data, sharing policies, compatible geospatial standards, data interoperability etc. 
Therefore, to facilitate using, sharing, and integrating the spatial data, there is a need to define standards to build emergency database systems. These include aspects such as (i) data integration procedures, namely standard coding schemes, schemas, metadata formats and spatial formats; (ii) database organisation mechanisms covering data management, catalogues and data models; and (iii) database dissemination through a suitable environment as a standard service for effective service dissemination. National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising the multi-scale and multi-source data with effective emergency response using customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems for enabling emergency-specific functions through geospatial technologies.

  5. A Support Database System for Integrated System Health Management (ISHM)

    NASA Technical Reports Server (NTRS)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. 
The system hierarchy model replicates the physical relationships between system elements to provide the logical context for the database. The historical data archive provides a common repository for sensor data that can be shared between developers and applications. The firmware codebase is used by the developer to organize the intelligent element firmware into atomic units which can be assembled into complete firmware for specific elements.

  6. Blending Technology with Camp Tradition: Technology Can Simplify Camp Operations.

    ERIC Educational Resources Information Center

    Salzman, Jeff

    2000-01-01

    Discusses uses of technology appropriate for camps, which are service organizations based on building relationships. Describes relationship marketing and how it can be enhanced through use of Web sites, interactive brochures, and client databases. Outlines other technology uses at camp: automated dispensing of medications, satellite tracking of…

  7. The New Library, A Hybrid Organization.

    ERIC Educational Resources Information Center

    Waaijers, Leo

    This paper discusses changes in technology in libraries over the last decade, beginning with an overview of the impact of databases, the Internet, and the World Wide Web on libraries. The integration of technology at Delft University of Technology (Netherlands) is described, including use of scanning technology, fax, and e-mail for document…

  8. Reporting medical information: effects of press releases and newsworthiness on medical journal articles' visibility in the news media.

    PubMed

    Stryker, Jo Ellen

    2002-11-01

    Characteristics defining the newsworthiness of journal articles appearing in JAMA and NEJM were examined to determine whether they affect visibility in the news media. It was also hypothesized that press releases affect the amount of news coverage of a journal article because the most newsworthy journal articles are selected for press releases. Journal articles (N = 95) were coded for characteristics believed to describe their "newsworthiness". Quantity of news coverage of the journal articles was estimated using the LEXIS-NEXIS database. Bivariate associations were examined using one-way analysis of variance, and multivariate analyses utilized OLS regression. Characteristics of the newsworthiness of medical journal articles predicted their visibility in newspapers. The issuing of press releases also predicted newspaper coverage. However, press releases predicted newspaper coverage largely because more newsworthy journal articles had accompanying press releases, rather than because the press release itself was influential. Journalists report on medical information that is topical, stratifies risk based on demographic and lifestyle variables, and has lifestyle rather than medical implications. Medical journals issue press releases for articles that possess the characteristics journalists are looking for, thereby further highlighting their importance.

  9. Liz Torres | NREL

    Science.gov Websites

    Areas of Expertise: customer service; technically savvy; event planning; word processing/desktop publishing; database management. Research Interests: website design; database design; computational science. Technology Consulting, Westminster, CO (2007-2012); Administrative Assistant, Source One Management, Denver, CO (2005

  10. Evaluation of linking pavement related databases.

    DOT National Transportation Integrated Search

    2007-03-01

    In general, the objectives of this study were to identify and solve various issues in linking pavement performance-related databases. The detailed objectives were: to evaluate the state-of-the-art in information technology for data integration and dat...

  11. Microcomputers in Libraries.

    ERIC Educational Resources Information Center

    Ertel, Monica M.

    1984-01-01

    This discussion of current microcomputer technologies available to libraries focuses on software applications in four major classifications: communications (online database searching); word processing; administration; and database management systems. Specific examples of library applications are given and six references are cited. (EJS)

  12. The dispersion releaser technology is an effective method for testing drug release from nanosized drug carriers.

    PubMed

    Janas, Christine; Mast, Marc-Phillip; Kirsamer, Li; Angioni, Carlo; Gao, Fiona; Mäntele, Werner; Dressman, Jennifer; Wacker, Matthias G

    2017-06-01

    The dispersion releaser (DR) is a dialysis-based setup for the analysis of drug release from nanosized drug carriers. It is mounted into dissolution apparatus 2 of the United States Pharmacopoeia. The present study evaluated the DR technique by investigating the release of the model compound flurbiprofen from drug solution and from nanoformulations composed of the drug and the polymer materials poly(lactic acid), poly(lactic-co-glycolic acid) or Eudragit® RS PO. The drug-loaded nanocarriers ranged in size between 185.9 and 273.6 nm and were characterized by a monomodal size distribution (PDI < 0.1). The membrane permeability constants of flurbiprofen were calculated and mathematical modeling was applied to obtain the normalized drug release profiles. To compare the sensitivities of the DR and the dialysis bag technique, the differences in the membrane permeation rates were calculated. Finally, different formulation designs of flurbiprofen were sensitively discriminated using the DR technology. The mechanism of drug release from the nanosized carriers was analyzed by applying two mathematical models described previously, the reciprocal powered time model and the three-parameter model. Copyright © 2017 Elsevier B.V. All rights reserved.
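The reciprocal powered time and three-parameter models named in this abstract are specific to the cited literature. As a generic, hypothetical illustration of how a normalized release profile can be fitted to a kinetic model, here is a minimal stdlib-only sketch using the widely known Korsmeyer-Peppas power law Mt/Minf = k * t**n (all data synthetic, not from the study):

```python
import math

def fit_power_law(times, fractions):
    """Fit the Korsmeyer-Peppas power law  Mt/Minf = k * t**n  by linear
    least squares on the log-transformed data; returns (k, n)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)   # slope -> release exponent n
    k = math.exp((sy - n * sx) / m)                 # intercept -> rate constant k
    return k, n

# Synthetic release profile generated with k=0.1, n=0.5 (illustrative only):
times = [1, 2, 4, 8, 16]          # sampling times (h)
fracs = [0.1 * t ** 0.5 for t in times]  # released fraction Mt/Minf
k, n = fit_power_law(times, fracs)
```

Because the synthetic data lie exactly on the power law, the fit recovers k and n to floating-point precision; with real dialysis data the residuals of this log-linear fit indicate how well the assumed model describes the release mechanism.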

  13. Scalable privacy-preserving data sharing methodology for genome-wide association studies.

    PubMed

    Yu, Fei; Fienberg, Stephen E; Slavković, Aleksandra B; Uhler, Caroline

    2014-08-01

    The protection of privacy of individual-level information in genome-wide association study (GWAS) databases has been a major concern of researchers following the publication of "an attack" on GWAS data by Homer et al. (2008). Traditional statistical methods for confidentiality and privacy protection of statistical databases do not scale well to GWAS data, especially in terms of guarantees regarding protection from linkage to external information. The more recent concept of differential privacy, introduced by the cryptographic community, is an approach that provides a rigorous definition of privacy with meaningful privacy guarantees in the presence of arbitrary external information, although the guarantees may come at a serious price in terms of data utility. Building on such notions, Uhler et al. (2013) proposed new methods to release aggregate GWAS data without compromising an individual's privacy. We extend the methods developed in Uhler et al. (2013) for releasing differentially-private χ²-statistics by allowing for an arbitrary number of cases and controls, and for releasing differentially-private allelic test statistics. We also provide a new interpretation by assuming the controls' data are known, which is a realistic assumption because some GWAS use publicly available data as controls. We assess the performance of the proposed methods through a risk-utility analysis on a real data set consisting of DNA samples collected by the Wellcome Trust Case Control Consortium and compare the methods with the differentially-private release mechanism proposed by Johnson and Shmatikov (2013). Copyright © 2014 Elsevier Inc. All rights reserved.
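The Laplace mechanism that underlies this kind of differentially-private release can be sketched as follows. This is a generic illustration, not the calibration derived in the cited papers: the sensitivity value, epsilon, and statistics below are hypothetical inputs (deriving the actual sensitivity of χ² and allelic test statistics is the substance of that literature):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def release_chi2(chi2_values, sensitivity, epsilon, rng=None):
    """Release chi-squared statistics with epsilon-differential privacy
    by adding Laplace noise with scale = sensitivity / epsilon."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    return [x + laplace_noise(scale, rng) for x in chi2_values]

# Noisy release of three hypothetical per-SNP test statistics:
noisy = release_chi2([12.3, 0.8, 5.6], sensitivity=1.0, epsilon=0.5,
                     rng=random.Random(42))
```

The risk-utility trade-off the abstract mentions is visible here: a smaller epsilon (stronger privacy) inflates the noise scale, degrading the utility of the released statistics.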

  14. Rad-Release

    ScienceCinema

    None

    2017-12-09

    The R&D 100 Award winning Rad-Release Chemical Decontamination Technology is a highly effective (up to 99% removal rate), affordable, patented chemical-foam-clay decontamination process tailored to specific radiological and metal contaminants, which is applicable to a wide variety of substrates. For more information about this project, visit http://www.inl.gov/rd100/2011/rad-release/

  15. Applying World Wide Web technology to the study of patients with rare diseases.

    PubMed

    de Groen, P C; Barry, J A; Schaller, W J

    1998-07-15

    Randomized, controlled trials of sporadic diseases are rarely conducted. Recent developments in communication technology, particularly the World Wide Web, allow efficient dissemination and exchange of information. However, software for the identification of patients with a rare disease and subsequent data entry and analysis in a secure Web database are currently not available. To study cholangiocarcinoma, a rare cancer of the bile ducts, we developed a computerized disease tracing system coupled with a database accessible on the Web. The tracing system scans computerized information systems on a daily basis and forwards demographic information on patients with bile duct abnormalities to an electronic mailbox. If informed consent is given, the patient's demographic and preexisting medical information available in medical database servers are electronically forwarded to a UNIX research database. Information from further patient-physician interactions and procedures is also entered into this database. The database is equipped with a Web user interface that allows data entry from various platforms (PC-compatible, Macintosh, and UNIX workstations) anywhere inside or outside our institution. To ensure patient confidentiality and data security, the database includes all security measures required for electronic medical records. The combination of a Web-based disease tracing system and a database has broad applications, particularly for the integration of clinical research within clinical practice and for the coordination of multicenter trials.

  16. Nailing Digital Jelly to a Virtual Tree: Tracking Emerging Technologies for Learning

    ERIC Educational Resources Information Center

    Serim, Ferdi; Schrock, Kathy

    2008-01-01

    Reliable information on emerging technologies for learning is as vital as it is difficult to come by. To meet this need, the International Society for Technology in Education organized the Emerging Technologies Task Force. Its goal is to create a database of contributions from educators highlighting their use of emerging technologies to support…

  17. Italian Present-day Stress Indicators: IPSI Database

    NASA Astrophysics Data System (ADS)

    Mariucci, M. T.; Montone, P.

    2017-12-01

    In Italy, research concerning the contemporary stress field has been carried out at the Istituto Nazionale di Geofisica e Vulcanologia (INGV) since the 1990s through local- and regional-scale studies. Throughout the years many data have been analysed and collected: they are now organized and available for easy end-use online. The IPSI (Italian Present-day Stress Indicators) database is the first geo-referenced repository of information on the crustal present-day stress field maintained at INGV, with the web application, database and website developed by Gabriele Tarabusi. Data consist of horizontal stress orientations analysed and compiled in a standardized format and quality-ranked for reliability and comparability on a global scale with other databases. Our first database release includes 855 data records updated to December 2015. Here we present an updated version that will be released in 2018, after entry of new earthquake data up to December 2017. The IPSI web site (http://ipsi.rm.ingv.it/) allows users to access data on a standard map viewer and easily choose which data (category and/or quality) to plot. The main information for each element (type, quality, orientation) can be viewed simply by hovering over the related symbol; full information appears by clicking the element. At the same time, basic information on the different data types, tectonic regime assignment and quality ranking method is available through pop-up windows. Data records can be downloaded in several common formats; moreover, it is possible to download a file directly usable with SHINE, a web-based application to interpolate stress orientations (http://shine.rm.ingv.it). IPSI is mainly conceived for those interested in studying the characteristics of the Italian peninsula and its surroundings, although the Italian data are part of the World Stress Map (http://www.world-stress-map.org/), as evidenced by many links that redirect to that database for more details on standard practices in this field.

  18. Evolving the US Army Research Laboratory (ARL) Technical Communication Strategy

    DTIC Science & Technology

    2016-10-01

    of added value and enhanced tech transfer, and strengthened relationships with academic and industry collaborators. In support of increasing ARL's...communication skills; and Prong 3: Promote a Stakeholder Database to implement a stakeholder database (including names and preferences) and use a...Group, strategic planning, communications strategy, stakeholder database, workforce improvement, science and technology, S&T

  19. Mars Global Digital Dune Database; MC-1

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey (USGS) Open-File Reports. The first release (Hayward and others, 2007) included dune fields from 65 degrees N to 65 degrees S (http://pubs.usgs.gov/of/2007/1158/). The current release encompasses ~845,000 km2 of mapped dune fields from 65 degrees N to 90 degrees N latitude. Dune fields between 65 degrees S and 90 degrees S will be released in a future USGS Open-File Report. Although we have attempted to include all dune fields, some have likely been excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields, or (2) the resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher-resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore, the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where the availability and quality of THEMIS visible (VIS), Mars Orbiter Camera narrow angle (MOC NA), or Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification. 
It was also beyond the scope of this report to measure all slipfaces. We attempted to include enough slipface measurements to represent the general circulation (as implied by gross dune morphology) and to give a sense of the complex nature of aeolian activity on Mars. The absence of slipface measurements in a given direction should not be taken as evidence that winds in that direction did not occur. When a dune field was located within a crater, the azimuth from crater centroid to dune field centroid was calculated as another possible indicator of wind direction. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes the THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as an ArcReader project which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in an ArcMap project. The ArcMap project allows fuller use of the data, but requires ESRI ArcMap® software. A fuller description of the projects can be found in the NP_Dunes_ReadMe file (NP_Dunes_ReadMe folder) and the NP_Dunes_ReadMe_GIS file (NP_Documentation folder). For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file (NP_Documentation folder). Documentation files are available in PDF and ASCII (.txt) formats. Tables are available in both Excel and ASCII (.txt) formats.

  20. NIST Gas Hydrate Research Database and Web Dissemination Channel.

    PubMed

    Kroenlein, K; Muzny, C D; Kazakov, A; Diky, V V; Chirico, R D; Frenkel, M; Sloan, E D

    2010-01-01

    To facilitate advances in the application of technologies pertaining to gas hydrates, a freely available data resource containing experimentally derived information about those materials was developed. This work was performed by the Thermodynamic Research Center (TRC), paralleling a highly successful database of thermodynamic and transport properties of molecular pure compounds and their mixtures. Population of the gas-hydrates database required development of guided data capture (GDC) software designed to convert experimental data and metadata into a well-organized electronic format, as well as a relational database schema to accommodate all types of numerical data and metadata within the scope of the project. To guarantee utility for the broad gas hydrate research community, TRC worked closely with the Committee on Data for Science and Technology (CODATA) task group for Data on Natural Gas Hydrates, an international data-sharing effort, in developing a gas hydrate markup language (GHML). The fruits of these efforts are disseminated through the NIST Standard Reference Data Program [1] as the Clathrate Hydrate Physical Property Database (SRD #156). A web-based interface for this database, as well as scientific results from the Mallik 2002 Gas Hydrate Production Research Well Program [2], is deployed at http://gashydrates.nist.gov.

  1. Advances in Targeted Pesticides with Environmentally Responsive Controlled Release by Nanotechnology

    PubMed Central

    Huang, Bingna; Chen, Feifei; Shen, Yue; Wang, Yan; Sun, Changjiao; Zhao, Xiang; Cui, Bo; Gao, Fei; Zeng, Zhanghua; Cui, Haixin

    2018-01-01

    Pesticides are the basis for defending against major biological disasters and important for ensuring national food security. Biocompatible, biodegradable, intelligent, and responsive materials constitute an emerging area of interest in the field of efficient, safe, and green pesticide formulation. Using nanotechnology to design and prepare targeted pesticides with environmentally responsive controlled release via compound and chemical modifications has also shown great potential in creating novel formulations. In this review, special attention has been paid to intelligent pesticides with precise controlled-release modes that can respond to micro-ecological environment changes such as light sensitivity, thermo-sensitivity, humidity sensitivity, soil pH, and enzyme activity. Moreover, intelligent and controlled pesticide release technologies using nanomaterials are reported. These technologies could increase pesticide loading, improve the dispersibility and stability of active ingredients, and promote target ability. PMID:29439498

  2. Expanded national database collection and data coverage in the FINDbase worldwide database for clinically relevant genomic variation allele frequencies

    PubMed Central

    Viennas, Emmanouil; Komianou, Angeliki; Mizzi, Clint; Stojiljkovic, Maja; Mitropoulou, Christina; Muilu, Juha; Vihinen, Mauno; Grypioti, Panagiota; Papadaki, Styliani; Pavlidis, Cristiana; Zukic, Branka; Katsila, Theodora; van der Spek, Peter J.; Pavlovic, Sonja; Tzimas, Giannis; Patrinos, George P.

    2017-01-01

    FINDbase (http://www.findbase.org) is a comprehensive data repository that records the prevalence of clinically relevant genomic variants in various populations worldwide, such as pathogenic variants leading mostly to monogenic disorders and pharmacogenomics biomarkers. The database also records the incidence of rare genetic diseases in various populations, all in well-distinct data modules. Here, we report extensive data content updates in all data modules, with direct implications for clinical pharmacogenomics. Also, we report significant new developments in FINDbase, namely (i) the release of a new version of the ETHNOS software that catalyzes the development and curation of national/ethnic genetic databases, (ii) the migration of all FINDbase data content into 90 distinct national/ethnic mutation databases, all built around Microsoft's PivotViewer (http://www.getpivot.com) software, (iii) new data visualization tools and (iv) the interrelation of FINDbase with the DruGeVar database, with direct implications for clinical pharmacogenomics. The abovementioned updates further enhance the impact of FINDbase as a key resource for Genomic Medicine applications. PMID:27924022

  3. The database of the PREDICTS (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems) project.

    PubMed

    Hudson, Lawrence N; Newbold, Tim; Contu, Sara; Hill, Samantha L L; Lysenko, Igor; De Palma, Adriana; Phillips, Helen R P; Alhusseini, Tamera I; Bedford, Felicity E; Bennett, Dominic J; Booth, Hollie; Burton, Victoria J; Chng, Charlotte W T; Choimes, Argyrios; Correia, David L P; Day, Julie; Echeverría-Londoño, Susy; Emerson, Susan R; Gao, Di; Garon, Morgan; Harrison, Michelle L K; Ingram, Daniel J; Jung, Martin; Kemp, Victoria; Kirkpatrick, Lucinda; Martin, Callum D; Pan, Yuan; Pask-Hale, Gwilym D; Pynegar, Edwin L; Robinson, Alexandra N; Sanchez-Ortiz, Katia; Senior, Rebecca A; Simmons, Benno I; White, Hannah J; Zhang, Hanbin; Aben, Job; Abrahamczyk, Stefan; Adum, Gilbert B; Aguilar-Barquero, Virginia; Aizen, Marcelo A; Albertos, Belén; Alcala, E L; Del Mar Alguacil, Maria; Alignier, Audrey; Ancrenaz, Marc; Andersen, Alan N; Arbeláez-Cortés, Enrique; Armbrecht, Inge; Arroyo-Rodríguez, Víctor; Aumann, Tom; Axmacher, Jan C; Azhar, Badrul; Azpiroz, Adrián B; Baeten, Lander; Bakayoko, Adama; Báldi, András; Banks, John E; Baral, Sharad K; Barlow, Jos; Barratt, Barbara I P; Barrico, Lurdes; Bartolommei, Paola; Barton, Diane M; Basset, Yves; Batáry, Péter; Bates, Adam J; Baur, Bruno; Bayne, Erin M; Beja, Pedro; Benedick, Suzan; Berg, Åke; Bernard, Henry; Berry, Nicholas J; Bhatt, Dinesh; Bicknell, Jake E; Bihn, Jochen H; Blake, Robin J; Bobo, Kadiri S; Bóçon, Roberto; Boekhout, Teun; Böhning-Gaese, Katrin; Bonham, Kevin J; Borges, Paulo A V; Borges, Sérgio H; Boutin, Céline; Bouyer, Jérémy; Bragagnolo, Cibele; Brandt, Jodi S; Brearley, Francis Q; Brito, Isabel; Bros, Vicenç; Brunet, Jörg; Buczkowski, Grzegorz; Buddle, Christopher M; Bugter, Rob; Buscardo, Erika; Buse, Jörn; Cabra-García, Jimmy; Cáceres, Nilton C; Cagle, Nicolette L; Calviño-Cancela, María; Cameron, Sydney A; Cancello, Eliana M; Caparrós, Rut; Cardoso, Pedro; Carpenter, Dan; Carrijo, Tiago F; Carvalho, Anelena L; Cassano, Camila R; Castro, Helena; Castro-Luna, Alejandro A; Rolando, Cerda B; Cerezo, 
Alexis; Chapman, Kim Alan; Chauvat, Matthieu; Christensen, Morten; Clarke, Francis M; Cleary, Daniel F R; Colombo, Giorgio; Connop, Stuart P; Craig, Michael D; Cruz-López, Leopoldo; Cunningham, Saul A; D'Aniello, Biagio; D'Cruze, Neil; da Silva, Pedro Giovâni; Dallimer, Martin; Danquah, Emmanuel; Darvill, Ben; Dauber, Jens; Davis, Adrian L V; Dawson, Jeff; de Sassi, Claudio; de Thoisy, Benoit; Deheuvels, Olivier; Dejean, Alain; Devineau, Jean-Louis; Diekötter, Tim; Dolia, Jignasu V; Domínguez, Erwin; Dominguez-Haydar, Yamileth; Dorn, Silvia; Draper, Isabel; Dreber, Niels; Dumont, Bertrand; Dures, Simon G; Dynesius, Mats; Edenius, Lars; Eggleton, Paul; Eigenbrod, Felix; Elek, Zoltán; Entling, Martin H; Esler, Karen J; de Lima, Ricardo F; Faruk, Aisyah; Farwig, Nina; Fayle, Tom M; Felicioli, Antonio; Felton, Annika M; Fensham, Roderick J; Fernandez, Ignacio C; Ferreira, Catarina C; Ficetola, Gentile F; Fiera, Cristina; Filgueiras, Bruno K C; Fırıncıoğlu, Hüseyin K; Flaspohler, David; Floren, Andreas; Fonte, Steven J; Fournier, Anne; Fowler, Robert E; Franzén, Markus; Fraser, Lauchlan H; Fredriksson, Gabriella M; Freire, Geraldo B; Frizzo, Tiago L M; Fukuda, Daisuke; Furlani, Dario; Gaigher, René; Ganzhorn, Jörg U; García, Karla P; Garcia-R, Juan C; Garden, Jenni G; Garilleti, Ricardo; Ge, Bao-Ming; Gendreau-Berthiaume, Benoit; Gerard, Philippa J; Gheler-Costa, Carla; Gilbert, Benjamin; Giordani, Paolo; Giordano, Simonetta; Golodets, Carly; Gomes, Laurens G L; Gould, Rachelle K; Goulson, Dave; Gove, Aaron D; Granjon, Laurent; Grass, Ingo; Gray, Claudia L; Grogan, James; Gu, Weibin; Guardiola, Moisès; Gunawardene, Nihara R; Gutierrez, Alvaro G; Gutiérrez-Lamus, Doris L; Haarmeyer, Daniela H; Hanley, Mick E; Hanson, Thor; Hashim, Nor R; Hassan, Shombe N; Hatfield, Richard G; Hawes, Joseph E; Hayward, Matt W; Hébert, Christian; Helden, Alvin J; Henden, John-André; Henschel, Philipp; Hernández, Lionel; Herrera, James P; Herrmann, Farina; Herzog, Felix; Higuera-Diaz, 
Diego; Hilje, Branko; Höfer, Hubert; Hoffmann, Anke; Horgan, Finbarr G; Hornung, Elisabeth; Horváth, Roland; Hylander, Kristoffer; Isaacs-Cubides, Paola; Ishida, Hiroaki; Ishitani, Masahiro; Jacobs, Carmen T; Jaramillo, Víctor J; Jauker, Birgit; Hernández, F Jiménez; Johnson, McKenzie F; Jolli, Virat; Jonsell, Mats; Juliani, S Nur; Jung, Thomas S; Kapoor, Vena; Kappes, Heike; Kati, Vassiliki; Katovai, Eric; Kellner, Klaus; Kessler, Michael; Kirby, Kathryn R; Kittle, Andrew M; Knight, Mairi E; Knop, Eva; Kohler, Florian; Koivula, Matti; Kolb, Annette; Kone, Mouhamadou; Kőrösi, Ádám; Krauss, Jochen; Kumar, Ajith; Kumar, Raman; Kurz, David J; Kutt, Alex S; Lachat, Thibault; Lantschner, Victoria; Lara, Francisco; Lasky, Jesse R; Latta, Steven C; Laurance, William F; Lavelle, Patrick; Le Féon, Violette; LeBuhn, Gretchen; Légaré, Jean-Philippe; Lehouck, Valérie; Lencinas, María V; Lentini, Pia E; Letcher, Susan G; Li, Qi; Litchwark, Simon A; Littlewood, Nick A; Liu, Yunhui; Lo-Man-Hung, Nancy; López-Quintero, Carlos A; Louhaichi, Mounir; Lövei, Gabor L; Lucas-Borja, Manuel Esteban; Luja, Victor H; Luskin, Matthew S; MacSwiney G, M Cristina; Maeto, Kaoru; Magura, Tibor; Mallari, Neil Aldrin; Malone, Louise A; Malonza, Patrick K; Malumbres-Olarte, Jagoba; Mandujano, Salvador; Måren, Inger E; Marin-Spiotta, Erika; Marsh, Charles J; Marshall, E J P; Martínez, Eliana; Martínez Pastur, Guillermo; Moreno Mateos, David; Mayfield, Margaret M; Mazimpaka, Vicente; McCarthy, Jennifer L; McCarthy, Kyle P; McFrederick, Quinn S; McNamara, Sean; Medina, Nagore G; Medina, Rafael; Mena, Jose L; Mico, Estefania; Mikusinski, Grzegorz; Milder, Jeffrey C; Miller, James R; Miranda-Esquivel, Daniel R; Moir, Melinda L; Morales, Carolina L; Muchane, Mary N; Muchane, Muchai; Mudri-Stojnic, Sonja; Munira, A Nur; Muoñz-Alonso, Antonio; Munyekenye, B F; Naidoo, Robin; Naithani, A; Nakagawa, Michiko; Nakamura, Akihiro; Nakashima, Yoshihiro; Naoe, Shoji; Nates-Parra, Guiomar; Navarrete Gutierrez, Dario 
A; Navarro-Iriarte, Luis; Ndang'ang'a, Paul K; Neuschulz, Eike L; Ngai, Jacqueline T; Nicolas, Violaine; Nilsson, Sven G; Noreika, Norbertas; Norfolk, Olivia; Noriega, Jorge Ari; Norton, David A; Nöske, Nicole M; Nowakowski, A Justin; Numa, Catherine; O'Dea, Niall; O'Farrell, Patrick J; Oduro, William; Oertli, Sabine; Ofori-Boateng, Caleb; Oke, Christopher Omamoke; Oostra, Vicencio; Osgathorpe, Lynne M; Otavo, Samuel Eduardo; Page, Navendu V; Paritsis, Juan; Parra-H, Alejandro; Parry, Luke; Pe'er, Guy; Pearman, Peter B; Pelegrin, Nicolás; Pélissier, Raphaël; Peres, Carlos A; Peri, Pablo L; Persson, Anna S; Petanidou, Theodora; Peters, Marcell K; Pethiyagoda, Rohan S; Phalan, Ben; Philips, T Keith; Pillsbury, Finn C; Pincheira-Ulbrich, Jimmy; Pineda, Eduardo; Pino, Joan; Pizarro-Araya, Jaime; Plumptre, A J; Poggio, Santiago L; Politi, Natalia; Pons, Pere; Poveda, Katja; Power, Eileen F; Presley, Steven J; Proença, Vânia; Quaranta, Marino; Quintero, Carolina; Rader, Romina; Ramesh, B R; Ramirez-Pinilla, Martha P; Ranganathan, Jai; Rasmussen, Claus; Redpath-Downing, Nicola A; Reid, J Leighton; Reis, Yana T; Rey Benayas, José M; Rey-Velasco, Juan Carlos; Reynolds, Chevonne; Ribeiro, Danilo Bandini; Richards, Miriam H; Richardson, Barbara A; Richardson, Michael J; Ríos, Rodrigo Macip; Robinson, Richard; Robles, Carolina A; Römbke, Jörg; Romero-Duque, Luz Piedad; Rös, Matthias; Rosselli, Loreta; Rossiter, Stephen J; Roth, Dana S; Roulston, T'ai H; Rousseau, Laurent; Rubio, André V; Ruel, Jean-Claude; Sadler, Jonathan P; Sáfián, Szabolcs; Saldaña-Vázquez, Romeo A; Sam, Katerina; Samnegård, Ulrika; Santana, Joana; Santos, Xavier; Savage, Jade; Schellhorn, Nancy A; Schilthuizen, Menno; Schmiedel, Ute; Schmitt, Christine B; Schon, Nicole L; Schüepp, Christof; Schumann, Katharina; Schweiger, Oliver; Scott, Dawn M; Scott, Kenneth A; Sedlock, Jodi L; Seefeldt, Steven S; Shahabuddin, Ghazala; Shannon, Graeme; Sheil, Douglas; Sheldon, Frederick H; Shochat, Eyal; Siebert, Stefan 
J; Silva, Fernando A B; Simonetti, Javier A; Slade, Eleanor M; Smith, Jo; Smith-Pardo, Allan H; Sodhi, Navjot S; Somarriba, Eduardo J; Sosa, Ramón A; Soto Quiroga, Grimaldo; St-Laurent, Martin-Hugues; Starzomski, Brian M; Stefanescu, Constanti; Steffan-Dewenter, Ingolf; Stouffer, Philip C; Stout, Jane C; Strauch, Ayron M; Struebig, Matthew J; Su, Zhimin; Suarez-Rubio, Marcela; Sugiura, Shinji; Summerville, Keith S; Sung, Yik-Hei; Sutrisno, Hari; Svenning, Jens-Christian; Teder, Tiit; Threlfall, Caragh G; Tiitsaar, Anu; Todd, Jacqui H; Tonietto, Rebecca K; Torre, Ignasi; Tóthmérész, Béla; Tscharntke, Teja; Turner, Edgar C; Tylianakis, Jason M; Uehara-Prado, Marcio; Urbina-Cardona, Nicolas; Vallan, Denis; Vanbergen, Adam J; Vasconcelos, Heraldo L; Vassilev, Kiril; Verboven, Hans A F; Verdasca, Maria João; Verdú, José R; Vergara, Carlos H; Vergara, Pablo M; Verhulst, Jort; Virgilio, Massimiliano; Vu, Lien Van; Waite, Edward M; Walker, Tony R; Wang, Hua-Feng; Wang, Yanping; Watling, James I; Weller, Britta; Wells, Konstans; Westphal, Catrin; Wiafe, Edward D; Williams, Christopher D; Willig, Michael R; Woinarski, John C Z; Wolf, Jan H D; Wolters, Volkmar; Woodcock, Ben A; Wu, Jihua; Wunderle, Joseph M; Yamaura, Yuichi; Yoshikura, Satoko; Yu, Douglas W; Zaitsev, Andrey S; Zeidler, Juliane; Zou, Fasheng; Collen, Ben; Ewers, Rob M; Mace, Georgina M; Purves, Drew W; Scharlemann, Jörn P W; Purvis, Andy

    2017-01-01

    The PREDICTS project-Projecting Responses of Ecological Diversity In Changing Terrestrial Systems (www.predicts.org.uk)-has collated from published studies a large, reasonably representative database of comparable samples of biodiversity from multiple sites that differ in the nature or intensity of human impacts relating to land use. We have used this evidence base to develop global and regional statistical models of how local biodiversity responds to these measures. We describe and make freely available this 2016 release of the database, containing more than 3.2 million records sampled at over 26,000 locations and representing over 47,000 species. We outline how the database can help in answering a range of questions in ecology and conservation biology. To our knowledge, this is the largest and most geographically and taxonomically representative database of spatial comparisons of biodiversity that has been collated to date; it will be useful to researchers and international efforts wishing to model and understand the global status of biodiversity.
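
The site-level comparisons this abstract describes can be sketched with a toy table; the column names below are illustrative only, not the schema of the actual 2016 PREDICTS release:

```python
import pandas as pd

# Toy stand-in for biodiversity records: hypothetical column names,
# not the real PREDICTS schema.
records = pd.DataFrame({
    "site_id":   ["s1", "s1", "s2", "s2", "s2"],
    "species":   ["a",  "b",  "a",  "c",  "d"],
    "abundance": [3,    1,    2,    5,    1],
})

# Per-site species richness and total abundance -- the kind of
# site-level summary used when modelling biodiversity responses
# to land-use intensity.
summary = records.groupby("site_id").agg(
    richness=("species", "nunique"),
    total_abundance=("abundance", "sum"),
)
print(summary)
```

Summaries like this, one row per site, are what a statistical model of local biodiversity response would consume.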

  4. GenBank.

    PubMed

    Benson, Dennis A; Karsch-Mizrachi, Ilene; Lipman, David J; Ostell, James; Wheeler, David L

    2008-01-01

    GenBank® is a comprehensive database that contains publicly available nucleotide sequences for more than 260 000 named organisms, obtained primarily through submissions from individual laboratories and batch submissions from large-scale sequencing projects. Most submissions are made using the web-based BankIt or standalone Sequin programs and accession numbers are assigned by GenBank staff upon receipt. Daily data exchange with the European Molecular Biology Laboratory Nucleotide Sequence Database in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through NCBI's retrieval system, Entrez, which integrates data from the major DNA and protein sequence databases along with taxonomy, genome, mapping, protein structure and domain information, and the biomedical journal literature via PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bimonthly releases and daily updates of the GenBank database are available by FTP. To access GenBank and its related retrieval and analysis services, begin at the NCBI Homepage: www.ncbi.nlm.nih.gov.
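
The Entrez retrieval mentioned above is exposed programmatically through NCBI's E-utilities; as a minimal sketch, an `efetch` URL for a GenBank flat file can be composed with the standard library (the accession `U49845` is the sample accession used in NCBI's own documentation):

```python
from urllib.parse import urlencode

# Base endpoint of the NCBI E-utilities efetch service.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def efetch_url(accession: str, rettype: str = "gb") -> str:
    """Build an efetch URL that returns a GenBank flat file as text."""
    params = {
        "db": "nucleotide",   # nucleotide sequence database
        "id": accession,      # accession number to fetch
        "rettype": rettype,   # "gb" = GenBank flat-file format
        "retmode": "text",
    }
    return f"{BASE}?{urlencode(params)}"

print(efetch_url("U49845"))
```

Fetching the URL (e.g. with `urllib.request`) would return the record; only URL construction is shown here to keep the sketch offline.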

  5. GenBank

    PubMed Central

    Benson, Dennis A.; Karsch-Mizrachi, Ilene; Lipman, David J.; Ostell, James; Wheeler, David L.

    2008-01-01

    GenBank® is a comprehensive database that contains publicly available nucleotide sequences for more than 260 000 named organisms, obtained primarily through submissions from individual laboratories and batch submissions from large-scale sequencing projects. Most submissions are made using the web-based BankIt or standalone Sequin programs and accession numbers are assigned by GenBank staff upon receipt. Daily data exchange with the European Molecular Biology Laboratory Nucleotide Sequence Database in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through NCBI's retrieval system, Entrez, which integrates data from the major DNA and protein sequence databases along with taxonomy, genome, mapping, protein structure and domain information, and the biomedical journal literature via PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bimonthly releases and daily updates of the GenBank database are available by FTP. To access GenBank and its related retrieval and analysis services, begin at the NCBI Homepage: www.ncbi.nlm.nih.gov PMID:18073190

  6. Biomarkers of Fatigue: Metabolomics Profiles Predictive of Cognitive Performance

    DTIC Science & Technology

    2013-05-01

    metabolites. The latest version of the Human Metabolome Database (v. 2.5; released August 2009) includes approximately 8,000 identified mammalian...monoamine oxidase; COMT, catechol-O-methyl transferase. (Modified from Rubí and Maechler, 2010). Ovals indicate metabolites found to be significantly

  7. IRIS Toxicological Review of Vinyl Chloride (Final Report, 2000)

    EPA Science Inventory

    EPA is announcing the release of the final report, Toxicological Review of Vinyl Chloride: in support of the Integrated Risk Information System (IRIS). The updated Summary for Vinyl Chloride and accompanying Quickview have also been added to the IRIS Database.

  8. Draft secure medical database standard.

    PubMed

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though technical in nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements of medical database security, and addresses medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security, and examines the issue of medical database security guidelines in detail. The current national and international efforts in the area are studied, and an overview of research work in the area is given. The document also presents, in detail, what is to our knowledge the most complete set of security guidelines for the development and operation of medical database systems.

  9. Advanced technologies for scalable ATLAS conditions database access on the grid

    NASA Astrophysics Data System (ADS)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by disk I/O throughput. An unacceptable side effect of disk I/O saturation is degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using Oracle Streams technology. To avoid such bottlenecks we prototyped and tested a novel approach to database peak load avoidance in Grid computing. Our approach is based on the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.
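
The pilot-query idea described in the last sentence can be sketched as follows; the function names and the jittered-backoff policy are hypothetical illustrations, not the ATLAS utility library's actual API:

```python
import random
import time

def query_with_pilot(run_query, pilot, max_retries=5):
    """Send a cheap pilot probe before the expensive query; back off
    with random jitter while the server appears overloaded.

    run_query -- callable issuing the real (expensive) database query
    pilot     -- callable returning True if the server is responsive
    """
    for attempt in range(max_retries):
        if pilot():                # cheap probe succeeded...
            return run_query()     # ...so issue the real query now
        # exponential backoff with jitter spreads out retrying clients,
        # avoiding a synchronized thundering herd at peak load
        time.sleep((2 ** attempt) * random.random())
    raise RuntimeError("database overloaded; giving up")
```

Usage: `query_with_pilot(lambda: conn.execute(sql), lambda: ping(server))` would only run the expensive query once the probe succeeds.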

  10. DSSTox and Chemical Information Technologies in Support of PredictiveToxicology

    EPA Science Inventory

    The EPA NCCT Distributed Structure-Searchable Toxicity (DSSTox) Database project initially focused on the curation and publication of high-quality, standardized, chemical structure-annotated toxicity databases for use in structure-activity relationship (SAR) modeling. In recent y...

  11. Teaching Historians with Databases.

    ERIC Educational Resources Information Center

    Burton, Vernon

    1993-01-01

    Asserts that, although pressures to publish have detracted from the quality of teaching at the college level, recent innovations in educational technology have created opportunities for instructional improvement. Describes the use of computer-assisted instruction and databases in college-level history courses. (CFR)

  12. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    EPA Science Inventory

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  13. Evaluation of "shotgun" proteomics for identification of biological threat agents in complex environmental matrixes: experimental simulations.

    PubMed

    Verberkmoes, Nathan C; Hervey, W Judson; Shah, Manesh; Land, Miriam; Hauser, Loren; Larimer, Frank W; Van Berkel, Gary J; Goeringer, Douglas E

    2005-02-01

    There is currently a great need for rapid detection and positive identification of biological threat agents, as well as microbial species in general, directly from complex environmental samples. This need is most urgent in the area of homeland security, but also extends into the medical, environmental, and agricultural sciences. Mass-spectrometry-based analysis is one of the leading technologies in the field, with a diversity of methodologies for biothreat detection. Over the past few years, "shotgun" proteomics has become one method of choice for the rapid analysis of complex protein mixtures by mass spectrometry. Recently, it was demonstrated that this methodology is capable of distinguishing a target species against a large database of background species from a single-component sample or from dual-component mixtures at roughly equal concentrations. Here, we examine the potential of shotgun proteomics to analyze a target species in a background of four contaminant species. We tested the capability of a common commercial mass-spectrometry-based shotgun proteomics platform for detection of the target species (Escherichia coli) at four different concentrations and four different analysis times. We also tested the effect of database size on positive identification of the four microbes used in this study, using a small (13-species) database and a large (261-species) database. The results clearly indicated that this technology could easily identify the target species at 20% in the background mixture in a 60, 120, 180, or 240 min analysis with the small database. The target species could also be identified at 20% or 6%, but not at 0.6% or 0.06%, in either a 240 min or a 30 h analysis with the small database. With the large database the effect was severe: the target species could not be detected above the background at any concentration used in this study, although the three other microbes were clearly identified above the background. This study points to the potential application of this technology for biological threat agent detection but highlights many areas of needed research before the technology will be useful on real-world samples.

  14. An indoor positioning technology in the BLE mobile payment system

    NASA Astrophysics Data System (ADS)

    Han, Tiantian; Ding, Lei

    2017-05-01

    In a mobile payment system for large supermarkets, the core function, payment, is implemented with BLE (Bluetooth Low Energy) technology; indoor positioning can then provide value-added services. The technique collects Bluetooth RSSI values to build a fingerprint database of the corresponding sampling points. An AP obtains the RSSI of the customer's Bluetooth module, and a k-Nearest Neighbor match against the fingerprint database estimates the customer's position. This helps businesses locate customers within the mall and, combined with the settlement amount of the goods purchased, analyze customer behavior. When the system collects signal strength, the distribution of RSSI at the sampling points is analyzed and the values are filtered. A system deployed in the laboratory demonstrates the feasibility of the approach.
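
The fingerprint-plus-k-NN scheme the abstract describes can be sketched with toy data (sampling-point coordinates, RSSI values in dBm, and the choice k=2 are all illustrative):

```python
import math

# Toy fingerprint database: sampling point (x, y) -> RSSI vector
# measured from three Bluetooth APs (illustrative values, dBm).
fingerprints = {
    (0, 0): [-40, -70, -80],
    (0, 5): [-55, -60, -75],
    (5, 0): [-70, -45, -65],
    (5, 5): [-80, -55, -50],
}

def locate(rssi, k=2):
    """k-Nearest Neighbor match in signal space: find the k fingerprint
    points with the closest RSSI vectors and average their coordinates."""
    dists = sorted(
        (math.dist(rssi, vec), pt) for pt, vec in fingerprints.items()
    )
    nearest = [pt for _, pt in dists[:k]]
    x = sum(p[0] for p in nearest) / k
    y = sum(p[1] for p in nearest) / k
    return (x, y)

# A reading close to the (0, 0) fingerprint: the estimate averages
# the two nearest sampling points.
print(locate([-42, -68, -79]))
```

In a real deployment the raw RSSI stream would first be filtered (as the abstract notes) before being matched against the fingerprint database.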

  15. Experimental quantum private queries with linear optics

    NASA Astrophysics Data System (ADS)

    de Martini, Francesco; Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo; Nagali, Eleonora; Sansoni, Linda; Sciarrino, Fabio

    2009-07-01

    The quantum private query is a quantum cryptographic protocol to recover information from a database, preserving both user and data privacy: the user can test whether someone has retained information on which query was asked and the database provider can test the amount of information released. Here we discuss a variant of the quantum private query algorithm that admits a simple linear optical implementation: it employs the photon’s momentum (or time slot) as address qubits and its polarization as bus qubit. A proof-of-principle experimental realization is implemented.

  16. Release of Interim Policy on Federal Enforceability of Limitations on Potential to Emit

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  17. IRIS Toxicological Review of Tert-Butyl Alcohol (Tert-Butanol) ...

    EPA Pesticide Factsheets

    On April 29, 2016, the Toxicological Review of tert-Butyl Alcohol (tert-Butanol) (Public Comment Draft) was released for public comment. The draft Toxicological Review and charge were reviewed internally by EPA and by other federal agencies and the Executive Office of the President during Step 3 (Interagency Science Consultation) before public release. As part of the IRIS process, all written interagency comments on IRIS assessments will be made publicly available. Accordingly, interagency comments with EPA's response and the interagency science consultation drafts of the IRIS Toxicological Review of tert-Butanol and charge to external peer reviewers are posted on this site. EPA is undertaking a new health assessment for t-butyl alcohol (tert-butanol) for the Integrated Risk Information System (IRIS). The outcome of this project will be a Toxicological Review and IRIS Summary of TBA that will be entered into the IRIS database. IRIS is an EPA database containing Agency scientific positions on potential adverse human health effects that may result from chronic (or lifetime) exposure to chemicals in the environment. IRIS contains chemical-specific summaries of qualitative and quantitative health information to evaluate potential public health risks associated with environmental contaminants. The IRIS database is relied on for the development of risk assessments, site-specific environmental decisions, and rule making.

  18. IRIS Toxicological Review of Biphenyl (Interagency Science ...

    EPA Pesticide Factsheets

    On September 30, 2011, the draft Toxicological Review of Biphenyl and the charge to external peer reviewers were released for external peer review and public comment. The Toxicological Review and charge were reviewed internally by EPA and by other federal agencies and White House Offices before public release. In the new IRIS process (May 2009), introduced by the EPA Administrator, all written comments on IRIS assessments submitted by other federal agencies and White House Offices will be made publicly available. Accordingly, interagency comments and the interagency science consultation draft of the IRIS Toxicological Review of Biphenyl and the charge to external peer reviewers are posted on this site. EPA is undertaking a new health assessment for biphenyl for the Integrated Risk Information System (IRIS). The outcome of this project will be a Toxicological Review and IRIS Summary of biphenyl that will be entered into the IRIS database. IRIS is an EPA database containing Agency scientific positions on potential adverse human health effects that may result from chronic (or lifetime) exposure to chemicals in the environment. IRIS contains chemical-specific summaries of qualitative and quantitative health information to evaluate potential public health risks associated with environmental contaminants. The IRIS database is relied on for the development of risk assessments, site-specific environmental decisions, and rule making.

  19. ReMap 2018: an updated atlas of regulatory regions from an integrative analysis of DNA-binding ChIP-seq experiments

    PubMed Central

    Chèneby, Jeanne; Gheorghe, Marius; Artufel, Marie

    2018-01-01

    Abstract With this latest release of ReMap (http://remap.cisreg.eu), we present a unique collection of regulatory regions in human, as a result of a large-scale integrative analysis of ChIP-seq experiments for hundreds of transcriptional regulators (TRs) such as transcription factors, transcriptional co-activators and chromatin regulators. In 2015, we introduced the ReMap database to capture the genome regulatory space by integrating public ChIP-seq datasets, covering 237 TRs across 13 million (M) peaks. In this release, we have extended this catalog to constitute a unique collection of regulatory regions. Specifically, we have collected, analyzed and retained after quality control a total of 2829 ChIP-seq datasets available from public sources, covering a total of 485 TRs with a catalog of 80M peaks. Additionally, the updated database includes new search features for TR names as well as aliases, including cell line names, and the ability to navigate the data directly within genome browsers via public track hubs. Finally, full access to this catalog is available online together with a TR binding enrichment analysis tool. ReMap 2018 provides a significant update of the ReMap database, providing an in-depth view of the complexity of the regulatory landscape in human. PMID:29126285

  20. Critical review of controlled release packaging to improve food safety and quality.

    PubMed

    Chen, Xi; Chen, Mo; Xu, Chenyi; Yam, Kit L

    2018-03-19

    Controlled release packaging (CRP) is an innovative technology that uses the package to release active compounds in a controlled manner to improve safety and quality for a wide range of food products during storage. This paper provides a critical review of the uniqueness, design considerations, and research gaps of CRP, with a focus on the kinetics and mechanism of active compounds releasing from the package. Literature data and practical examples are presented to illustrate how CRP controls what active compounds to release, when and how to release, how much and how fast to release, in order to improve food safety and quality.
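
As an illustration of the release kinetics this review focuses on, a simple first-order model (one assumption among several kinetic models used in the controlled-release literature, not the paper's own formulation) gives the fraction of an active compound released from the package by time t:

```python
import math

def fraction_released(t_hours: float, k_per_hour: float) -> float:
    """First-order release model: fraction released = 1 - exp(-k*t).

    k_per_hour is the release rate constant; larger k means the
    package releases its active compound faster.
    """
    return 1.0 - math.exp(-k_per_hour * t_hours)

# Illustrative release profile over two days with k = 0.05 / hour.
for t in (0, 12, 24, 48):
    print(f"t = {t:2d} h  released = {fraction_released(t, 0.05):.3f}")
```

Tuning k (via film thickness, polymer choice, etc.) is how "how much and how fast to release" would be controlled under this model.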

  1. Petroleum and hazardous material releases from industrial facilities associated with Hurricane Katrina.

    PubMed

    Santella, Nicholas; Steinberg, Laura J; Sengul, Hatice

    2010-04-01

    Hurricane Katrina struck an area dense with industry, causing numerous releases of petroleum and hazardous materials. This study integrates information from a number of sources to describe the frequency, causes, and effects of these releases in order to inform analysis of risk from future hurricanes. Over 200 onshore releases of hazardous chemicals, petroleum, or natural gas were reported. Storm surge was responsible for the majority of petroleum releases, and failure of storage tanks was the most common mechanism of release. Of the smaller number of hazardous chemical releases reported, many were associated with flaring from plant startup, shutdown, or process upset. In areas impacted by storm surge, 10% of the facilities within the Risk Management Plan (RMP) and Toxic Release Inventory (TRI) databases and 28% of SIC 1311 facilities experienced accidental releases. In areas subject only to hurricane-strength winds, a lower fraction (1% of RMP and TRI and 10% of SIC 1311 facilities) experienced a release, while 1% of all facility types reported a release in areas that experienced tropical-storm-strength winds. Of the industrial facilities surveyed, more experienced indirect disruptions such as displacement of workers, loss of electricity and communication systems, and difficulty acquiring supplies and contractors for operations or reconstruction (55%) than experienced releases. To reduce the risk of hazardous material releases and speed the return to normal operations under these difficult conditions, greater attention should be devoted to risk-based facility design and improved prevention and response planning.

  2. Assistive technology for ultrasound-guided central venous catheter placement.

    PubMed

    Ikhsan, Mohammad; Tan, Kok Kiong; Putra, Andi Sudjana

    2018-01-01

    This study evaluated the existing technology used to improve the safety and ease of ultrasound-guided central venous catheterization. Electronic database searches were conducted in Scopus, IEEE, Google Patents, and relevant conference databases (SPIE, MICCAI, and IEEE conferences) for related articles on assistive technology for ultrasound-guided central venous catheterization. A total of 89 articles were examined and pointed to several fields that are currently the focus of improvements to ultrasound-guided procedures. These include improving needle visualization, needle guides and localization technology, image processing algorithms to enhance and segment important features within the ultrasound image, robotic assistance using probe-mounted manipulators, and improving procedure ergonomics through in situ projections of important information. Probe-mounted robotic manipulators provide a promising avenue for assistive technology developed for freehand ultrasound-guided percutaneous procedures. However, there is currently a lack of clinical trials to validate the effectiveness of these devices.

  3. Technology in the Public Library: Results from the 1992 PLDS Survey of Technology.

    ERIC Educational Resources Information Center

    Fidler, Linda M.; Johnson, Debra Wilcox

    1994-01-01

    Discusses and compares the incorporation of technology by larger public libraries in Canada and the United States. Technology mentioned includes online public access catalogs; remote and local online database searching; microcomputers and software for public use; and fax, voice mail, and Telecommunication Devices for the Deaf and Teletype writer…

  4. The ADAMS interactive interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rietscha, E.R.

    1990-12-17

    The ADAMS (Advanced DAta Management System) project is exploring next generation database technology. Database management does not follow the usual programming paradigm. Instead, the database dictionary provides an additional name space environment that should be interactively created and tested before writing application code. This document describes the implementation and operation of the ADAMS Interpreter, an interactive interface to the ADAMS data dictionary and runtime system. The Interpreter executes individual statements of the ADAMS Interface Language, providing a fast, interactive mechanism to define and access persistent databases. 5 refs.

  5. Bringing novel semiochemical formulations to the market

    USDA-ARS?s Scientific Manuscript database

    SPLAT® (Specialized Pheromone and Lure Application Technology) matrix is a unique controlled-release technology that can be adapted to dispense and protect a wide variety of compounds from degradation, including semiochemicals, pesticides, and phagostimulants, in diverse environments. ISCA Technolog...

  6. The IMGT/HLA database

    PubMed Central

    Robinson, James; Waller, Matthew J.; Fail, Sylvie C.; McWilliam, Hamish; Lopez, Rodrigo; Parham, Peter; Marsh, Steven G. E.

    2009-01-01

    It is 10 years since the IMGT/HLA database was released, providing the HLA community with a searchable repository of highly curated HLA sequences. The HLA complex is located within the 6p21.3 region of human chromosome 6 and contains more than 220 genes of diverse function. Many of the genes encode proteins of the immune system and are highly polymorphic. The naming of these HLA genes and alleles, and their quality control is the responsibility of the WHO Nomenclature Committee for Factors of the HLA System. Through the work of the HLA Informatics Group and in collaboration with the European Bioinformatics Institute, we are able to provide public access to this data through the website http://www.ebi.ac.uk/imgt/hla/. The first release contained 964 sequences; the most recent release contains 3300 sequences, with around 450 new sequences being added each year. The tools provided on the website have been updated to allow more complex alignments, which include genomic sequence data, as well as the development of tools for probe and primer design and the inclusion of data from the HLA Dictionary. Regular updates to the website ensure that new and confirmatory sequences are disseminated to the HLA community, and the wider research and clinical communities. PMID:18838392

  7. JASPAR 2018: update of the open-access database of transcription factor binding profiles and its web framework.

    PubMed

    Khan, Aziz; Fornes, Oriol; Stigliani, Arnaud; Gheorghe, Marius; Castro-Mondragon, Jaime A; van der Lee, Robin; Bessy, Adrien; Chèneby, Jeanne; Kulkarni, Shubhada R; Tan, Ge; Baranasic, Damir; Arenillas, David J; Sandelin, Albin; Vandepoele, Klaas; Lenhard, Boris; Ballester, Benoît; Wasserman, Wyeth W; Parcy, François; Mathelier, Anthony

    2018-01-04

    JASPAR (http://jaspar.genereg.net) is an open-access database of curated, non-redundant transcription factor (TF)-binding profiles stored as position frequency matrices (PFMs) and TF flexible models (TFFMs) for TFs across multiple species in six taxonomic groups. In the 2018 release of JASPAR, the CORE collection has been expanded with 322 new PFMs (60 for vertebrates and 262 for plants) and 33 PFMs were updated (24 for vertebrates, 8 for plants and 1 for insects). These new profiles represent a 30% expansion compared to the 2016 release. In addition, we have introduced 316 TFFMs (95 for vertebrates, 218 for plants and 3 for insects). This release incorporates clusters of similar PFMs in each taxon and each TF class per taxon. The JASPAR 2018 CORE vertebrate collection of PFMs was used to predict TF-binding sites in the human genome. The predictions are made available to the scientific community through a UCSC Genome Browser track data hub. Finally, this update comes with a new web framework with an interactive and responsive user-interface, along with new features. All the underlying data can be retrieved programmatically using a RESTful API and through the JASPAR 2018 R/Bioconductor package. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
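    The RESTful API mentioned in this record can be queried programmatically. The Python sketch below shows the general pattern only: the /api/v1/matrix/ endpoint path, the query parameters, and the "results"/"matrix_id"/"name" response fields are assumptions about the API's shape, not confirmed by this record; consult the JASPAR documentation before relying on them.

```python
import json
from urllib.parse import urlencode

# Base URL of the JASPAR RESTful API (assumed v1 path; check the
# current documentation at http://jaspar.genereg.net before use).
JASPAR_API = "http://jaspar.genereg.net/api/v1"

def matrix_query_url(collection="CORE", tax_group="vertebrates"):
    """Build a query URL for TF-binding profiles in a taxonomic group."""
    params = urlencode({"collection": collection, "tax_group": tax_group})
    return f"{JASPAR_API}/matrix/?{params}"

def parse_profiles(payload):
    """Extract (matrix_id, name) pairs from a JSON response body.

    The 'results' list and its field names are assumptions about the
    response shape, used here only to illustrate the parsing step.
    """
    data = json.loads(payload)
    return [(p["matrix_id"], p["name"]) for p in data.get("results", [])]

# Canned response so the sketch runs without network access; a real
# call would pass matrix_query_url() to an HTTP client instead.
sample = '{"results": [{"matrix_id": "MA0004.1", "name": "Arnt"}]}'
print(parse_profiles(sample))  # [('MA0004.1', 'Arnt')]
```

    The same data can also be retrieved through the JASPAR 2018 R/Bioconductor package noted in the record.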

  8. JASPAR 2018: update of the open-access database of transcription factor binding profiles and its web framework

    PubMed Central

    Fornes, Oriol; Stigliani, Arnaud; Gheorghe, Marius; Castro-Mondragon, Jaime A; Bessy, Adrien; Chèneby, Jeanne; Kulkarni, Shubhada R; Tan, Ge; Baranasic, Damir; Arenillas, David J; Vandepoele, Klaas; Parcy, François

    2018-01-01

    Abstract JASPAR (http://jaspar.genereg.net) is an open-access database of curated, non-redundant transcription factor (TF)-binding profiles stored as position frequency matrices (PFMs) and TF flexible models (TFFMs) for TFs across multiple species in six taxonomic groups. In the 2018 release of JASPAR, the CORE collection has been expanded with 322 new PFMs (60 for vertebrates and 262 for plants) and 33 PFMs were updated (24 for vertebrates, 8 for plants and 1 for insects). These new profiles represent a 30% expansion compared to the 2016 release. In addition, we have introduced 316 TFFMs (95 for vertebrates, 218 for plants and 3 for insects). This release incorporates clusters of similar PFMs in each taxon and each TF class per taxon. The JASPAR 2018 CORE vertebrate collection of PFMs was used to predict TF-binding sites in the human genome. The predictions are made available to the scientific community through a UCSC Genome Browser track data hub. Finally, this update comes with a new web framework with an interactive and responsive user-interface, along with new features. All the underlying data can be retrieved programmatically using a RESTful API and through the JASPAR 2018 R/Bioconductor package. PMID:29140473

  9. National security and national competitiveness: Open source solutions; NASA requirements and capabilities

    NASA Technical Reports Server (NTRS)

    Cotter, Gladys A.

    1993-01-01

    Foreign competitors are challenging the world leadership of the U.S. aerospace industry, and increasingly tight budgets everywhere make international cooperation in aerospace science necessary. The NASA STI Program has as part of its mission to support NASA R&D, and to that end has developed a knowledge base of aerospace-related information known as the NASA Aerospace Database. The NASA STI Program is already involved in international cooperation with NATO/AGARD/TIP, CENDI, ICSU/ICSTI, and the U.S. Japan Committee on STI. With the new more open political climate, the perceived dearth of foreign information in the NASA Aerospace Database, and the development of the ESA database and DELURA, the German databases, the NASA STI Program is responding by sponsoring workshops on foreign acquisitions and by increasing its cooperation with international partners and with other U.S. agencies. The STI Program looks to the future of improved database access through networking and a GUI; new media; optical disk, video, and full text; and a Technology Focus Group that will keep the NASA STI Program current with technology.

  10. ESO telbib: Linking In and Reaching Out

    NASA Astrophysics Data System (ADS)

    Grothkopf, U.; Meakins, S.

    2015-04-01

    Measuring an observatory's research output is an integral part of its science operations. Like many other observatories, ESO tracks scholarly papers that use observational data from ESO facilities and uses state-of-the-art tools to create, maintain, and further develop the Telescope Bibliography database (telbib). While telbib started out as a stand-alone tool mostly used to compile lists of papers, it has by now developed into a multi-faceted, interlinked system. The core of the telbib database is links between scientific papers and observational data generated by the La Silla Paranal Observatory residing in the ESO archive. This functionality has also been deployed for ALMA data. In addition, telbib reaches out to several other systems, including ESO press releases, the NASA ADS Abstract Service, databases at the CDS Strasbourg, and impact scores at Altmetric.com. We illustrate these features to show how the interconnected telbib system enhances the content of the database as well as the user experience.

  11. GenBank

    PubMed Central

    Benson, Dennis A.; Karsch-Mizrachi, Ilene; Lipman, David J.; Ostell, James; Wheeler, David L.

    2007-01-01

    GenBank® is a comprehensive database that contains publicly available nucleotide sequences for more than 240 000 named organisms, obtained primarily through submissions from individual laboratories and batch submissions from large-scale sequencing projects. Most submissions are made using the web-based BankIt or standalone Sequin programs and accession numbers are assigned by GenBank staff upon receipt. Daily data exchange with the EMBL Data Library in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through NCBI's retrieval system, Entrez, which integrates data from the major DNA and protein sequence databases along with taxonomy, genome, mapping, protein structure and domain information, and the biomedical journal literature via PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bimonthly releases and daily updates of the GenBank database are available by FTP. To access GenBank and its related retrieval and analysis services, begin at the NCBI Homepage. PMID:17202161
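    The Entrez access described above can also be scripted through NCBI's E-utilities. A minimal sketch, assuming the standard efetch URL pattern from NCBI's published E-utilities interface; the accession number is only an illustrative example commonly cited in NCBI documentation:

```python
from urllib.parse import urlencode

# NCBI Entrez E-utilities base URL (parameters below follow the
# published efetch interface; verify against NCBI's documentation).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def efetch_genbank_url(accession, rettype="gb", retmode="text"):
    """Build an efetch URL that retrieves a GenBank flat-file record
    for a nucleotide accession number."""
    params = urlencode({"db": "nucleotide",
                        "id": accession,
                        "rettype": rettype,
                        "retmode": retmode})
    return f"{EUTILS}/efetch.fcgi?{params}"

# 'U49845' is an example accession used in NCBI tutorials.
print(efetch_genbank_url("U49845"))
```

    Fetching the record then amounts to passing the built URL to any HTTP client (e.g. urllib.request.urlopen).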

  12. SEMICONDUCTOR TECHNOLOGY Supercritical carbon dioxide process for releasing stuck cantilever beams

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Chaoqun, Gao; Lei, Wang; Yupeng, Jing

    2010-10-01

    The multi-SCCO2 (supercritical carbon dioxide) release and dry process based on our specialized SCCO2 semiconductor process equipment is investigated and the releasing mechanism is discussed. The experimental results show that stuck cantilever beams were held up again after SCCO2 high-pressure treatment and that the repeatability of this process is nearly 100%.

  13. Ingestion Pathway Consequences of a Major Release from SRTC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, A.

    1999-06-08

    The food ingestion consequences due to radioactive particulates of an accidental release, scenario 1-RD-3, are evaluated for the Savannah River Technology Center. The sizes of land areas requiring the protective action of food interdiction are calculated. The consequences of the particulate portion of the release are evaluated with the HOTSPOT model and an EXCEL spreadsheet.

  14. TechTracS: NASA's commercial technology management system

    NASA Astrophysics Data System (ADS)

    Barquinero, Kevin; Cannon, Douglas

    1996-03-01

    The Commercial Technology Mission is a primary NASA mission, comparable in importance to those in aeronautics and space. This paper discusses TechTracS, NASA's Commercial Technology Management System, put into place in FY 1995 to implement this mission. The system is designed to identify and capture NASA technologies with commercial potential in an off-the-shelf database application, and then track each technology's progress in realizing that potential through collaborations with industry. The management system consists of four stages. The first is to develop an inventory database of the agency's entire technology portfolio and assess it for relevance to the commercial marketplace. In the second stage, technologies identified as having commercial potential are actively marketed to appropriate industries. The third stage is when a NASA-industry partnership is entered into for the purpose of commercializing the technology. The final stage is to track the technology's success or failure in the marketplace. The collection of this information in TechTracS enables metrics evaluation and can accelerate the establishment of direct contacts between a NASA technologist and an industry technologist. This connection is the beginning of the technology commercialization process.

  15. Thermal and Evolved Gas Analysis of "Nanophase" Carbonates: Implications for Thermal and Evolved Gas Analysis on Mars Missions

    NASA Technical Reports Server (NTRS)

    Lauer, Howard V., Jr.; Archer, P. D., Jr.; Sutter, B.; Niles, P. B.; Ming, Douglas W.

    2012-01-01

    Data collected by the Mars Phoenix Lander's Thermal and Evolved Gas Analyzer (TEGA) suggested the presence of calcium-rich carbonates, as indicated by a high-temperature CO2 release, while a low-temperature (approx. 400-680 C) CO2 release suggested possible Mg- and/or Fe-carbonates [1,2]. Interpretation of the data collected by Mars remote instruments is done by comparing the mission data to a database on the thermal properties of well-characterized Martian analog materials collected under reduced and Earth ambient pressures [3,4]. We propose that "nanophase" carbonates may also be contributing to the low-temperature CO2 release. The objectives of this paper are to (1) characterize the thermal and evolved gas properties of carbonates of varying particle size, (2) evaluate the CO2 releases from CO2-treated CaO samples, and (3) examine the secondary CO2 release from reheated calcite of varying particle size.

  16. ROBIN: a platform for evaluating automatic target recognition algorithms: I. Overview of the project and presentation of the SAGEM DS competition

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes their comparison through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions, and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand hand-annotated visible and infrared images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies in various application contexts. ROBIN's consortium includes major companies and research centres involved in computer vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, focuses on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range against a "simple" background and can be used to train algorithms. The second set contains many views of the same vehicle in different contexts and situations simulating operational scenarios.

  17. Heterogeneous Integration Technology

    DTIC Science & Technology

    2017-05-19

    Distribution A. Approved for public release; distribution unlimited. (APRS-RY-17-0383) Heterogeneous Integration Technology, Dr. Burhan... [Remainder of record is table-of-contents residue; recoverable figure captions: Figure 3: "3D integration of similar or diverse technology components follows More Moore and..."; Figure 4: "Many different technologies are used in the implementation of modern microelectronics systems can benefit from..."]

  18. Manipulation of insect behavior with Specialized Pheromone & Lure Application Technology (SPLAT®)

    Treesearch

    Agenor Mafra-Neto; Frédérique M. de Lame; Christopher J. Fettig; A. Steven Munson; Thomas M. Perring; Lukasz L. Stelinski; Lyndsie Stoltman; Leandro E.J. Mafra; Rafael Borges; Roger I. Vargas

    2013-01-01

    SPLAT® (Specialized Pheromone and Lure Application Technology) emulsion is a unique controlled-release technology that can be adapted to dispense and protect a wide variety of compounds from degradation, including semiochemicals, pesticides, and phagostimulants, in diverse environments. ISCA Technologies, Inc., in collaboration with colleagues in academia, government,...

  19. Finding Qualitative Research Evidence for Health Technology Assessment.

    PubMed

    DeJean, Deirdre; Giacomini, Mita; Simeonov, Dorina; Smith, Andrea

    2016-08-01

    Health technology assessment (HTA) agencies increasingly use reviews of qualitative research as evidence for evaluating social, experiential, and ethical aspects of health technologies. We systematically searched three bibliographic databases (MEDLINE, CINAHL, and Social Science Citation Index [SSCI]) using published search filters or "hedges" and our hybrid filter to identify qualitative research studies pertaining to chronic obstructive pulmonary disease and early breast cancer. The search filters were compared in terms of sensitivity, specificity, and precision. Our screening by title and abstract revealed that qualitative research constituted only slightly more than 1% of all published research on each health topic. The performance of the published search filters varied greatly across topics and databases. Compared with existing search filters, our hybrid filter demonstrated a consistently high sensitivity across databases and topics, and minimized the resource-intensive process of sifting through false positives. We identify opportunities for qualitative health researchers to improve the uptake of qualitative research into evidence-informed policy making. © The Author(s) 2016.
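    The sensitivity, specificity, and precision used above to compare search filters can be computed directly from screening counts. A minimal sketch; the counts in the usage example are invented for illustration, not taken from the study:

```python
def filter_performance(tp, fp, fn, tn):
    """Compute search-filter performance from screening counts:
    tp: relevant records retrieved, fp: irrelevant records retrieved,
    fn: relevant records missed, tn: irrelevant records excluded."""
    return {
        "sensitivity": tp / (tp + fn),  # share of relevant records found
        "specificity": tn / (tn + fp),  # share of irrelevant records excluded
        "precision": tp / (tp + fp),    # share of retrieved records that are relevant
    }

# Illustrative counts: a filter retrieves 90 of 100 relevant records
# plus 10 false positives out of 900 irrelevant ones.
scores = filter_performance(tp=90, fp=10, fn=10, tn=890)
print(scores)
```

    A high-sensitivity filter minimizes missed relevant studies; high precision minimizes the resource-intensive sifting through false positives that the record describes.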

  20. Information management systems for pharmacogenomics.

    PubMed

    Thallinger, Gerhard G; Trajanoski, Slave; Stocker, Gernot; Trajanoski, Zlatko

    2002-09-01

    The value of high-throughput genomic research is dramatically enhanced by association with key patient data. These data are generally available but of disparate quality and not typically directly associated. A system that could bring these disparate data sources into a common resource connected with functional genomic data would be tremendously advantageous. However, the integration of clinical data and the accurate interpretation of the generated functional genomic data require the development of information management systems capable of effectively capturing the data, as well as tools to make that data accessible to the laboratory scientist or to the clinician. In this review, these challenges and the current information technology solutions associated with the management, storage, and analysis of high-throughput data are highlighted. It is suggested that the development of a pharmacogenomic data management system which integrates public and proprietary databases, clinical datasets, and data mining tools embedded in a high-performance computing environment should include the following components: parallel processing systems, storage technologies, network technologies, databases and database management systems (DBMS), and application services.
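    The components listed above center on a DBMS that associates clinical datasets with functional genomic data. A minimal SQLite sketch of that association; the schema, table names, and values are invented for illustration and not drawn from any actual pharmacogenomic system:

```python
import sqlite3

# Toy relational schema linking clinical records to expression data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        diagnosis  TEXT NOT NULL
    );
    CREATE TABLE expression (
        patient_id INTEGER REFERENCES patient(patient_id),
        gene       TEXT NOT NULL,
        level      REAL NOT NULL
    );
""")
conn.execute("INSERT INTO patient VALUES (1, 'COPD')")
conn.executemany("INSERT INTO expression VALUES (?, ?, ?)",
                 [(1, 'TNF', 2.4), (1, 'IL6', 1.1)])

# The join is what ties clinical context to functional genomic data.
rows = conn.execute("""
    SELECT p.diagnosis, e.gene, e.level
    FROM patient p JOIN expression e USING (patient_id)
    ORDER BY e.gene
""").fetchall()
print(rows)  # [('COPD', 'IL6', 1.1), ('COPD', 'TNF', 2.4)]
```

    A production system would layer the review's other components (access control, data mining tools, high-performance storage) on top of this kind of relational core.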
