Sample records for relational database technology

  1. Relational Database Technology: An Overview.

    ERIC Educational Resources Information Center

    Melander, Nicole

    1987-01-01

    Describes the development of relational database technology as it applies to educational settings. Discusses some of the new tools and models being implemented in an effort to provide educators with technologically advanced ways of answering questions about education programs and data. (TW)

  2. Leveraging Relational Technology through Industry Partnerships.

    ERIC Educational Resources Information Center

    Brush, Leonard M.; Schaller, Anthony J.

    1988-01-01

    Carnegie Mellon University has leveraged its technological expertise with database management systems (DBMS) into joint technological and developmental partnerships with DBMS and application software vendors. Carnegie's relational database strategy, the strategy of partnerships and how they were formed, and how the partnerships are doing are…

  3. Evaluation of relational and NoSQL database architectures to manage genomic annotations.

    PubMed

    Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard

    2016-12-01

    While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
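The contrast the study draws between relational and document-oriented storage can be sketched in a few lines. This is an illustrative sketch only: the paper benchmarked MySQL, MongoDB, and PostgreSQL, whereas here Python's built-in SQLite stands in for both styles, and the annotation fields (rsid, chrom, pos, gene) are hypothetical stand-ins for dbSNP attributes.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Relational layout: one typed column per annotation field, with an
# index supporting positional lookups.
conn.execute("CREATE TABLE snp (rsid TEXT PRIMARY KEY, chrom TEXT, pos INTEGER, gene TEXT)")
conn.execute("CREATE INDEX idx_snp_loc ON snp (chrom, pos)")
conn.execute("INSERT INTO snp VALUES ('rs123', '1', 1014143, 'ISG15')")

# Document-style layout: the whole annotation stored as one JSON blob.
# The schema is flexible, but queries must parse each document.
conn.execute("CREATE TABLE snp_doc (doc TEXT)")
conn.execute(
    "INSERT INTO snp_doc VALUES (?)",
    (json.dumps({"rsid": "rs123", "chrom": "1", "pos": 1014143, "gene": "ISG15"}),),
)

# The same positional query against both layouts.
rel_gene = conn.execute(
    "SELECT gene FROM snp WHERE chrom = '1' AND pos = 1014143"
).fetchone()[0]
docs = [json.loads(d) for (d,) in conn.execute("SELECT doc FROM snp_doc")]
doc_gene = next(r["gene"] for r in docs if r["chrom"] == "1" and r["pos"] == 1014143)
print(rel_gene, doc_gene)  # prints: ISG15 ISG15
```

The trade-off the paper measures follows from this shape: the relational layout pays up front for schema and index definition but answers typed queries directly, while the document layout defers schema decisions and pays at query time.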

  4. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    ERIC Educational Resources Information Center

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  5. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in developing the databases. The results show that the NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
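The two non-relational models the paper compares can be illustrated with one hypothetical clinical record held both ways. This is a sketch using only the Python standard library; the record structure and element names (visit, lab, HbA1c) are invented for illustration and not taken from the study.

```python
import xml.etree.ElementTree as ET

# Document (NoSQL-style) representation: a schema-free nested structure,
# queried by plain key access.
record = {
    "patient": "P001",
    "visits": [
        {"date": "2012-05-01",
         "note": "routine check",
         "labs": [{"test": "HbA1c", "value": 6.1}]}
    ],
}
doc_value = record["visits"][0]["labs"][0]["value"]

# XML representation: the same hierarchy as markup, queried by navigating
# the tree with a path expression (ElementTree supports a limited XPath).
xml_doc = """
<record patient="P001">
  <visit date="2012-05-01">
    <note>routine check</note>
    <lab test="HbA1c" value="6.1"/>
  </visit>
</record>
"""
root = ET.fromstring(xml_doc)
xml_value = float(root.find("./visit/lab[@test='HbA1c']").get("value"))

print(doc_value, xml_value)  # prints: 6.1 6.1
```

Both forms capture the hierarchical, partly free-text character of clinical data that the abstract highlights; the difference lies in how queries are expressed and how readily the structure can evolve.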

  6. Relational Data Bases--Are You Ready?

    ERIC Educational Resources Information Center

    Marshall, Dorothy M.

    1989-01-01

    Migrating from a traditional to a relational database technology requires more than traditional project management techniques. An overview of what to consider before migrating to relational database technology is presented. Leadership, staffing, vendor support, hardware, software, and application development are discussed. (MLW)

  7. Automating Relational Database Design for Microcomputer Users.

    ERIC Educational Resources Information Center

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  8. XML technology planning database : lessons learned

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Neff, Jon M.

    2005-01-01

A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return on investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.

  9. The Effect of Relational Database Technology on Administrative Computing at Carnegie Mellon University.

    ERIC Educational Resources Information Center

    Golden, Cynthia; Eisenberger, Dorit

    1990-01-01

    Carnegie Mellon University's decision to standardize its administrative system development efforts on relational database technology and structured query language is discussed and its impact is examined in one of its larger, more widely used applications, the university information system. Advantages, new responsibilities, and challenges of the…

  10. Architecture Knowledge for Evaluating Scalable Databases

    DTIC Science & Technology

    2015-01-16

...problems arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. Architects must... No longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable "NoSQL" databases [11] have emerged... This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly...

  11. Technology in Science and Mathematics Education.

    ERIC Educational Resources Information Center

    Buccino, Alphonse

    Provided are several perspectives on technology, addressing changes in learners related to technology, changes in contemporary life related to technology, and changes in subject areas related to technology (indicating that technology has created such new tools for inquiry as computer programming, word processing, online database searches, and…

  12. An Animated Introduction to Relational Databases for Many Majors

    ERIC Educational Resources Information Center

    Dietrich, Suzanne W.; Goelman, Don; Borror, Connie M.; Crook, Sharon M.

    2015-01-01

    Database technology affects many disciplines beyond computer science and business. This paper describes two animations developed with images and color that visually and dynamically introduce fundamental relational database concepts and querying to students of many majors. The goal is for educators in diverse academic disciplines to incorporate the…

  13. SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.

    2014-12-01

Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, and significant maintenance and tuning requirements often make databases unattractive in settings characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, with multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
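The Upload-Query-Share idea described for SQLShare can be sketched compactly. This is not SQLShare's actual API: the sketch below uses Python's built-in `csv` and `sqlite3` modules, and the dataset, table name, and `upload` helper are all hypothetical, illustrating only the workflow shape of creating a queryable table directly from an uploaded file's header, with no schema-design step.

```python
import csv
import io
import sqlite3

# A hypothetical uploaded dataset, e.g. oceanographic temperature casts.
uploaded_csv = """station,temp_c
A1,11.5
A1,12.0
B2,9.8
"""

def upload(conn, name, text):
    """Create a table named after the dataset, with columns taken
    straight from the CSV header, then load all rows."""
    rows = list(csv.reader(io.StringIO(text)))
    header, data = rows[0], rows[1:]
    conn.execute(f"CREATE TABLE {name} ({', '.join(header)})")
    placeholders = ", ".join("?" * len(header))
    conn.executemany(f"INSERT INTO {name} VALUES ({placeholders})", data)

conn = sqlite3.connect(":memory:")
upload(conn, "casts", uploaded_csv)  # the "upload" step

# The "query" step: ordinary SQL over the freshly uploaded table.
result = conn.execute(
    "SELECT station, AVG(temp_c) FROM casts GROUP BY station ORDER BY station"
).fetchall()
print(result)
```

The "share" step in SQLShare then amounts to publishing such queries and their results, which the sketch omits.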

  14. A New Methodology for Systematic Exploitation of Technology Databases.

    ERIC Educational Resources Information Center

    Bedecarrax, Chantal; Huot, Charles

    1994-01-01

    Presents the theoretical aspects of a data analysis methodology that can help transform sequential raw data from a database into useful information, using the statistical analysis of patents as an example. Topics discussed include relational analysis and a technology watch approach. (Contains 17 references.) (LRW)

  15. Soil Organic Carbon for Global Benefits - assessing potential SOC increase under SLM technologies worldwide and evaluating tradeoffs and gains of upscaling SLM technologies

    NASA Astrophysics Data System (ADS)

    Wolfgramm, Bettina; Hurni, Hans; Liniger, Hanspeter; Ruppen, Sebastian; Milne, Eleanor; Bader, Hans-Peter; Scheidegger, Ruth; Amare, Tadele; Yitaferu, Birru; Nazarmavloev, Farrukh; Conder, Malgorzata; Ebneter, Laura; Qadamov, Aslam; Shokirov, Qobiljon; Hergarten, Christian; Schwilch, Gudrun

    2013-04-01

There is a fundamental mutual interest between enhancing soil organic carbon (SOC) in the world's soils and the objectives of the major global environmental conventions (UNFCCC, UNCBD, UNCCD). While there is evidence at the case study level that sustainable land management (SLM) technologies increase SOC stocks and SOC related benefits, no quantitative data are available on the potential SOC benefits of different SLM technologies, especially from case studies in developing countries, and a clear understanding of the trade-offs related to SLM up-scaling is missing. This study aims to assess the potential increase of SOC under SLM technologies worldwide and to evaluate trade-offs and gains in up-scaling SLM for case studies in Tajikistan, Ethiopia and Switzerland. It makes use of the SLM technologies documented in the online database of the World Overview of Conservation Approaches and Technologies (WOCAT). The study consists of three components: 1) Identifying SOC benefits contributing to the major global environmental issues for SLM technologies worldwide as documented in the WOCAT global database 2) Validation of SOC storage potentials and SOC benefit predictions for SLM technologies from the WOCAT database using results from existing comparative case studies at the plot level, using soil spectral libraries and standardized documentation of ecosystem services from the WOCAT database. 3) Understanding trade-offs and win-win scenarios of up-scaling SLM technologies from the plot to the household and landscape level using material flow analysis. This study builds on the premise that the most promising way to increase benefits from land management is to consider already existing sustainable strategies. Such SLM technologies, documented from all over the world, are accessible in a standardized way in the WOCAT online database.
The study thus evaluates SLM technologies from the WOCAT database by calculating the potential SOC storage increase and related benefits by comparing SOC estimates before-and-after establishment of the SLM technology. These results are validated using comparative case studies of plots with-and-without SLM technologies (existing SLM systems versus surrounding, degrading systems). In view of upscaling SLM technologies, it is crucial to understand tradeoffs and gains supporting or hindering the further spread. Systemic biomass management analysis using material flow analysis allows quantifying organic carbon flows and storages for different land management options at the household, but also at landscape level. The study shows results relevant for science, policy and practice for accounting, monitoring and evaluating SOC related ecosystem services: - A comprehensive methodology for SLM impact assessments allowing quantification of SOC storage and SOC related benefits under different SLM technologies, and - Improved understanding of upscaling options for SLM technologies and tradeoffs as well as win-win opportunities for biomass management, SOC content increase, and ecosystem services improvement at the plot and household level.

  16. Evaluation of linking pavement related databases.

    DOT National Transportation Integrated Search

    2007-03-01

    In general, the objectives of this study were to identify and solve various issues in linking pavement performance related database. The detailed objectives were: to evaluate the state-of-the-art in information technology for data integration and dat...

  17. Extending the data dictionary for data/knowledge management

    NASA Technical Reports Server (NTRS)

    Hydrick, Cecile L.; Graves, Sara J.

    1988-01-01

    Current relational database technology provides the means for efficiently storing and retrieving large amounts of data. By combining techniques learned from the field of artificial intelligence with this technology, it is possible to expand the capabilities of such systems. This paper suggests using the expanded domain concept, an object-oriented organization, and the storing of knowledge rules within the relational database as a solution to the unique problems associated with CAD/CAM and engineering data.

  18. Directory of On-Line Networks, Databases and Bulletin Boards on Assistive Technology. Second Edition. RESNA Technical Assistance Project.

    ERIC Educational Resources Information Center

    RESNA: Association for the Advancement of Rehabilitation Technology, Washington, DC.

    This resource directory provides a selective listing of electronic networks, online databases, and bulletin boards that highlight technology-related services and products. For each resource, the following information is provided: name, address, and telephone number; description; target audience; hardware/software needs to access the system;…

  19. Selecting a Relational Database Management System for Library Automation Systems.

    ERIC Educational Resources Information Center

    Shekhel, Alex; O'Brien, Mike

    1989-01-01

    Describes the evaluation of four relational database management systems (RDBMSs) (Informix Turbo, Oracle 6.0 TPS, Unify 2000 and Relational Technology's Ingres 5.0) to determine which is best suited for library automation. The evaluation criteria used to develop a benchmark specifically designed to test RDBMSs for libraries are discussed. (CLB)

  20. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  1. Use of Software Tools in Teaching Relational Database Design.

    ERIC Educational Resources Information Center

    McIntyre, D. R.; And Others

    1995-01-01

    Discusses the use of state-of-the-art software tools in teaching a graduate, advanced, relational database design course. Results indicated a positive student response to the prototype of expert systems software and a willingness to utilize this new technology both in their studies and in future work applications. (JKP)

  2. Market Pressure and Government Intervention in the Administration and Development of Molecular Databases.

    ERIC Educational Resources Information Center

    Sillince, J. A. A.; Sillince, M.

    1993-01-01

    Discusses molecular databases and the role that government and private companies play in their administration and development. Highlights include copyright and patent issues relating to public databases and the information contained in them; data quality; data structures and technological questions; the international organization of molecular…

  3. Initial experiences with building a health care infrastructure based on Java and object-oriented database technology.

    PubMed

    Dionisio, J D; Sinha, U; Dai, B; Johnson, D B; Taira, R K

    1999-01-01

    A multi-tiered telemedicine system based on Java and object-oriented database technology has yielded a number of practical insights and experiences on their effectiveness and suitability as implementation bases for a health care infrastructure. The advantages and drawbacks to their use, as seen within the context of the telemedicine system's development, are discussed. Overall, these technologies deliver on their early promise, with a few remaining issues that are due primarily to their relative newness.

  4. Charting the Progress

    ERIC Educational Resources Information Center

    CURRENTS, 2010

    2010-01-01

    Advancement technology is reshaping the business of fundraising, alumni relations, communications, and marketing. Through all of these innovations, the backbone of advancement systems remains the constituent database. This article takes a look at advancement databases that track constituent data.

  5. Critical Needs for Robust and Reliable Database for Design and Manufacturing of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Singh, M.

    1999-01-01

Ceramic matrix composite (CMC) components are being designed, fabricated, and tested for a number of high temperature, high performance applications in aerospace and ground based systems. The critical need for and the role of reliable and robust databases for the design and manufacturing of ceramic matrix composites are presented. A number of issues related to engineering design, manufacturing technologies, joining, and attachment technologies are also discussed. Examples are given of various ongoing activities in the areas of composite databases, designing to codes and standards, and design for manufacturing.

  6. Implementing a Dynamic Database-Driven Course Using LAMP

    ERIC Educational Resources Information Center

    Laverty, Joseph Packy; Wood, David; Turchek, John

    2011-01-01

    This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…

  7. An Experimental Investigation of Complexity in Database Query Formulation Tasks

    ERIC Educational Resources Information Center

    Casterella, Gretchen Irwin; Vijayasarathy, Leo

    2013-01-01

    Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…

  8. Integrating heterogeneous databases in clustered medical care environments using object-oriented technology

    NASA Astrophysics Data System (ADS)

    Thakore, Arun K.; Sauer, Frank

    1994-05-01

The organization of modern medical care environments into disease-related clusters, such as a cancer center, a diabetes clinic, etc., has the side-effect of introducing multiple heterogeneous databases, often containing similar information, within the same organization. This heterogeneity fosters incompatibility and prevents the effective sharing of data amongst applications at different sites. Although integration of heterogeneous databases is now feasible, in the medical arena this is often an ad hoc process, not founded on proven database technology or formal methods. In this paper we illustrate the use of a high-level object-oriented semantic association method to model information found in different databases into an integrated conceptual global model. We provide examples from the medical domain to illustrate an integration approach resulting in a consistent global view, without attacking the autonomy of the underlying databases.

  9. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  10. A novel approach: chemical relational databases, and the role of the ISSCAN database on assessing chemical carcinogenicity.

    PubMed

    Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae

    2008-01-01

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originated from the structure-activity relationships science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.

  11. New perspectives in toxicological information management, and the role of ISSTOX databases in assessing chemical mutagenicity and carcinogenicity.

    PubMed

    Benigni, Romualdo; Battistelli, Chiara Laura; Bossa, Cecilia; Tcheremenskaia, Olga; Crettaz, Pierre

    2013-07-01

Currently, the public has access to a variety of databases containing mutagenicity and carcinogenicity data. These resources are crucial for toxicologists and regulators involved in the risk assessment of chemicals, which necessitates access to all the relevant literature, and the capability to search across toxicity databases using both biological and chemical criteria. Towards the larger goal of screening chemicals for a wide range of toxicity end points of potential interest, publicly available resources across a large spectrum of biological and chemical data space must be effectively harnessed with current and evolving information technologies (i.e. systematised, integrated and mined), if long-term screening and prediction objectives are to be achieved. A key to rapid progress in the field of chemical toxicity databases is that of combining information technology with the chemical structure as identifier of the molecules. This permits an enormous range of operations (e.g. retrieving chemicals or chemical classes, describing the content of databases, finding similar chemicals, crossing biological and chemical interrogations, etc.) that other more classical databases cannot allow. This article describes the progress in the technology of toxicity databases, including the concepts of Chemical Relational Database and Toxicological Standardized Controlled Vocabularies (Ontology). Then it describes the ISSTOX cluster of toxicological databases at the Istituto Superiore di Sanità. It consists of freely available databases characterised by the use of modern information technologies and by curation of the quality of the biological data. Finally, this article provides examples of analyses and results made possible by ISSTOX.

  12. Solar Sail Propulsion Technology Readiness Level Database

    NASA Technical Reports Server (NTRS)

    Adams, Charles L.

    2004-01-01

    The NASA In-Space Propulsion Technology (ISPT) Projects Office has been sponsoring 2 solar sail system design and development hardware demonstration activities over the past 20 months. Able Engineering Company (AEC) of Goleta, CA is leading one team and L Garde, Inc. of Tustin, CA is leading the other team. Component, subsystem and system fabrication and testing has been completed successfully. The goal of these activities is to advance the technology readiness level (TRL) of solar sail propulsion from 3 towards 6 by 2006. These activities will culminate in the deployment and testing of 20-meter solar sail system ground demonstration hardware in the 30 meter diameter thermal-vacuum chamber at NASA Glenn Plum Brook in 2005. This paper will describe the features of a computer database system that documents the results of the solar sail development activities to-date. Illustrations of the hardware components and systems, test results, analytical models, relevant space environment definition and current TRL assessment, as stored and manipulated within the database are presented. This database could serve as a central repository for all data related to the advancement of solar sail technology sponsored by the ISPT, providing an up-to-date assessment of the TRL of this technology. Current plans are to eventually make the database available to the Solar Sail community through the Space Transportation Information Network (STIN).

  13. The data and system Nikkei Telecom "Industry/Technology Information Service"

    NASA Astrophysics Data System (ADS)

    Kurata, Shizuya; Sueyoshi, Yukio

Nihon Keizai Shimbun started supplying its "Industry/Technology Information Service" in July 1989 as part of the Nikkei Telecom package, an online information service using personal computers as its terminals. Previously, Nikkei's database service mainly covered such areas as the economy, corporations and markets. The new "Industry/Technology Information Service" (whose main data cover industry-by-industry, semi-macro information) is attracting a good deal of attention as the first to supply a science and technology related database, an area not covered before. It is also attracting attention technically, as it provides gateway access to JOIS, the leading science and technology file in Japan. This report briefly introduces the data and system of the "Industry/Technology Information Service".

  14. The Design and Implementation of a Relational to Network Query Translator for a Distributed Database Management System.

    DTIC Science & Technology

    1985-12-01

    RELATIONAL TO NETWORK QUERY TRANSLATOR FOR A DISTRIBUTED DATABASE MANAGEMENT SYSTEM. Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Computer Systems. Kevin H. Mahoney, Captain, USAF. AFIT/GCS/ENG/85D-7.

  15. A Knowledge Database on Thermal Control in Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Hirasawa, Shigeki; Satoh, Isao

    A prototype version of a knowledge database on thermal control in manufacturing processes, specifically molding, semiconductor manufacturing, and micro-scale manufacturing, has been developed. The knowledge database has search functions for technical data, evaluated benchmark data, academic papers, and patents. The database also displays trends and future roadmaps for research topics, and it has quick-calculation functions for basic design. This paper summarizes present and future research topics on thermal control in manufacturing engineering to collate the information into the knowledge database. In the molding process, the initial mold and melt temperatures are very important parameters. Thermal control is also related to many semiconductor processes, where the main parameter is temperature variation across wafers; accurate in-situ temperature measurement of wafers is important. In addition, many technologies are being developed to manufacture micro-structures. Accordingly, the knowledge database will help further advance these technologies.

  16. Technology and Microcomputers for an Information Centre/Special Library.

    ERIC Educational Resources Information Center

    Daehn, Ralph M.

    1984-01-01

    Discusses use of microcomputer hardware and software, telecommunications methods, and advanced library methods to create a specialized information center's database of literature relating to farm machinery and food processing. Systems and services (electronic messaging, serials control, database creation, cataloging, collections, circulation,…

  17. Genome-wide association as a means to understanding the mammary gland

    USDA-ARS?s Scientific Manuscript database

    Next-generation sequencing and related technologies have facilitated the creation of enormous public databases that catalogue genomic variation. These databases have facilitated a variety of approaches to discover new genes that regulate normal biology as well as disease. Genome wide association (...

  18. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... parties with a vested interest in the outcome of TRS-related numbering administration and activities. (4) None of the administrator of the TRS User Registration Database, the administrator of the VRS Access...

  19. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... parties with a vested interest in the outcome of TRS-related numbering administration and activities. (4) None of the administrator of the TRS User Registration Database, the administrator of the VRS Access...

  20. Mediagraphy: Print and Nonprint Resources.

    ERIC Educational Resources Information Center

    Educational Media and Technology Yearbook, 1996

    1996-01-01

    This annotated list includes media-related resources classified under the following headings: artificial intelligence and robotics, CD-ROM, computer-assisted instruction, databases and online searching, distance education, educational research, educational technology, electronic publishing, information science and technology, instructional design…

  1. Mediagraphy: Print and Nonprint Resources.

    ERIC Educational Resources Information Center

    Educational Media and Technology Yearbook, 1997

    1997-01-01

    This annotated list includes media-related resources classified under the following headings: artificial intelligence and robotics, CD-ROM, computer-assisted instruction, databases and online searching, distance education, educational research, educational technology, electronic publishing, information science and technology, instructional design…

  2. Evaluating Technology Integration in the Elementary School: A Site-Based Approach.

    ERIC Educational Resources Information Center

    Mowe, Richard

    This book enables educators at the elementary level to conduct formative evaluations of their technology programs in minimum time. Most of the technology is computer related, including word processing, graphics, desktop publishing, spreadsheets, databases, instructional software, programming, and telecommunications. The design of the book is aimed…

  3. In the Literature.

    ERIC Educational Resources Information Center

    Kilpatrick, Thomas L., Ed.

    1998-01-01

    Provides annotations of 29 journal articles and six book reviews on a variety of topics related to technology in libraries, including collection development, computer-assisted instruction, databases, distance education, ergonomics, hardware, information technology, interlibrary loan and document supply, Internet, online catalogs, preservation,…

  4. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E. F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) that have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated, and the performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than the relational model provided. Indeed, the complexity of the objects to be represented mandated a new approach to database technology. The Object-Oriented Model was the result.
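As a concrete illustration of the flat-table model described above, here is a minimal sketch using Python's built-in sqlite3 module. The tables, names, and values are invented for the example; the point is that the two relations are linked only by matching values, and the declarative join states *what* is wanted rather than how to navigate pointers.

```python
import sqlite3

# In-memory database; tables are flat relations with no physical links,
# related only by matching values (here, customer_id).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER, item TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ada"), (2, "Grace")])
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(10, 1, "disk"), (11, 1, "tape"), (12, 2, "card reader")])

# A declarative join: no pointer navigation, just value matching.
rows = con.execute(
    "SELECT c.name, o.item FROM customers c JOIN orders o "
    "ON c.customer_id = o.customer_id ORDER BY o.order_id"
).fetchall()
print(rows)  # [('Ada', 'disk'), ('Ada', 'tape'), ('Grace', 'card reader')]
```

The hierarchical and network models the abstract mentions would instead require the programmer to traverse explicit parent-child links to answer the same question.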

  5. Copyright in Context: The OCLC Database.

    ERIC Educational Resources Information Center

    Mason, Marilyn Gell

    1988-01-01

    Discusses topics related to OCLC adoption of guidelines for the use and transfer of OCLC-derived records, including the purpose of OCLC; the legal basis of copyrighting; technological change; compilation copyright; rationale for copyright of the OCLC database; impact on libraries; impact on networks; and relationships between OCLC and libraries. A…

  6. Components of spatial information management in wildlife ecology: Software for statistical and modeling analysis [Chapter 14

    Treesearch

    Hawthorne L. Beyer; Jeff Jenness; Samuel A. Cushman

    2010-01-01

    Spatial information systems (SIS) is a term that describes a wide diversity of concepts, techniques, and technologies related to the capture, management, display and analysis of spatial information. It encompasses technologies such as geographic information systems (GIS), global positioning systems (GPS), remote sensing, and relational database management systems (...

  7. The NASA ASTP Combined-Cycle Propulsion Database Project

    NASA Technical Reports Server (NTRS)

    Hyde, Eric H.; Escher, Daric W.; Heck, Mary T.; Roddy, Jordan E.; Lyles, Garry (Technical Monitor)

    2000-01-01

    The National Aeronautics and Space Administration (NASA) communicated its long-term R&D goals for aeronautics and space transportation technologies in its 1997-98 annual progress report (Reference 1). Under "Pillar 3, Goal 9," a 25-year-horizon set of objectives has been stated for the Generation 3 Reusable Launch Vehicle ("Gen 3 RLV") class of space transportation systems. An initiative referred to as "Spaceliner 100" is being conducted to identify technology roadmaps in support of these objectives. Responsibility for running "Spaceliner 100" technology development and demonstration activities has been assigned to NASA's agency-wide Advanced Space Transportation Program (ASTP) office, located at the Marshall Space Flight Center. A key technology area in which advances will be required in order to meet these objectives is propulsion. In 1996, in order to expand its focus beyond "all-rocket" propulsion systems and technologies (see Appendix A for further discussion), ASTP initiated technology development and demonstration work on combined-cycle airbreathing/rocket propulsion systems (ARTT Contracts NAS8-40890 through 40894). Combined-cycle propulsion (CCP) activities (see Appendix B for definitions) have been pursued in the U.S. for over four decades, resulting in a large documented knowledge base on this subject (see Reference 2). In the fall of 1999, the Combined-Cycle Propulsion Database (CCPD) project was established with the primary purpose of collecting and consolidating CCP-related technical information in support of the ASTP's ongoing technology development and demonstration program. Science Applications International Corporation (SAIC) was selected to perform the initial development of the Database under its existing support contract with MSFC (Contract NAS8-99060) because of the company's unique combination of capabilities in database development, information technology (IT), and CCP knowledge.
The CCPD is summarized in the descriptive 2-page flyer appended to this paper as Appendix C. The purpose of this paper is to provide the reader with an understanding of the objectives of the CCPD and relate the progress that has been made toward meeting those objectives.

  8. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    PubMed

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
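The vertical, object-attribute-value layout the abstract refers to can be sketched with Python's sqlite3 module. The patient attributes below are hypothetical; the self-join in the query illustrates the reassembly cost that motivates the authors' sparse column-store alternative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Entity-attribute-value (EAV) layout: one row per (entity, attribute) pair,
# so a new clinical attribute needs no ALTER TABLE, and absent values
# simply have no row (data sparsity without NULL padding).
con.execute("CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT)")
con.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "age", "54"), (1, "genotype", "AA"),
    (2, "age", "61"),  # patient 2 has no genotype recorded
])

# Reassembling one "virtual row" requires a self-join per attribute;
# with dozens of attributes this is the query cost EAV designs pay.
row = con.execute("""
    SELECT a.value AS age, g.value AS genotype
    FROM eav a LEFT JOIN eav g
      ON g.entity = a.entity AND g.attribute = 'genotype'
    WHERE a.entity = 1 AND a.attribute = 'age'
""").fetchone()
print(row)  # ('54', 'AA')
```

A conventional relational design would store the same data as one wide table, fast to query but requiring schema changes (and many NULLs) as attributes evolve.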

  9. Draft secure medical database standard.

    PubMed

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security. It addresses the problems of medical database security policies, secure design methodologies, and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security, and the issue of medical database security guidelines is examined in detail. The current national and international efforts in the area are studied, and an overview of research work in the area is given. The document also presents in detail the most complete set, to our knowledge, of security guidelines for the development and operation of medical database systems.

  10. Information technologies in public health management: a database on biocides to improve quality of life.

    PubMed

    Roman, C; Scripcariu, L; Diaconescu, Rm; Grigoriu, A

    2012-01-01

    Biocides for prolonging the shelf life of a large variety of materials have been used extensively over the last decades. Worldwide biocide consumption was estimated at about 12.4 billion dollars in 2011 and is expected to increase in 2012. As biocides are substances we come into contact with in our everyday lives, access to this type of information is of paramount importance in order to ensure an appropriate living environment. Consequently, a database where information may be quickly processed, sorted, and easily accessed according to different search criteria is the most desirable solution. The main aim of this work was to design and implement a relational database with complete information about biocides used in public health management to improve the quality of life. The database was designed and implemented using the software phpMyAdmin. It allows for the efficient collection, storage, and management of information, including the chemical properties and applications of a large number of biocides, as well as the adequate dissemination of this information into the public health environment. The information contained in the database herein presented promotes the adequate use of biocides by means of information technologies, which in consequence may help achieve important improvements in our quality of life.

  11. A COSTAR interface using WWW technology.

    PubMed Central

    Rabbani, U.; Morgan, M.; Barnett, O.

    1998-01-01

    The concentration of industry on modern relational databases has left many nonrelational and proprietary databases without support for integration with new technologies. Emerging interface tools and data-access methodologies can be applied only with difficulty to medical record systems that have proprietary data representations. Users of such medical record systems usually must access their clinical content through keyboard-intensive and time-consuming interfaces. COSTAR is a legacy ambulatory medical record system developed over 25 years ago that is still popular and extensively used at the Massachusetts General Hospital. We define a model for using middle-layer services to extract and cache data from non-relational databases, and present an intuitive World-Wide Web interface to COSTAR. This model has been implemented and successfully piloted in the Internal Medicine Associates at Massachusetts General Hospital. PMID:9929310

  12. Potential use of routine databases in health technology assessment.

    PubMed

    Raftery, J; Roderick, P; Stevens, A

    2005-05-01

    To develop criteria for classifying databases in relation to their potential use in health technology (HT) assessment and to apply them to a list of databases of relevance in the UK. To explore the extent to which prioritized databases could pick up those HTs being assessed by the National Coordinating Centre for Health Technology Assessment (NCCHTA) and the extent to which these databases have been used in HT assessment. To explore the validation of the databases and their cost. Data sources included electronic databases, key literature sources, and experienced users of routine databases. A 'first principles' examination of the data necessary for each type of HT assessment was carried out, supplemented by literature searches and a historical review. The principal investigators applied the criteria to the databases, and comments of the 'keepers' of the prioritized databases were incorporated. Details of 161 topics funded by the NHS R&D Health Technology Assessment (HTA) programme were reviewed iteratively by the principal investigators. Uses of databases in HTAs were identified by literature searches, which included the title of each prioritized database as a keyword. Annual reports of databases were examined and 'keepers' queried. The validity of each database was assessed using criteria based on a literature search and involvement by the authors in a national academic network. The costs of databases were established from annual reports, enquiries to 'keepers' of databases, and 'guesstimates' based on cost per record. For assessing effectiveness, equity and diffusion, routine databases were classified into three broad groups: (1) group I databases, identifying both HTs and health states; (2) group II databases, identifying HTs but not health states; and (3) group III databases, identifying health states but not HTs. Group I datasets were disaggregated into clinical registries, clinical administrative databases and population-oriented databases.
Group III were disaggregated into adverse event reporting, confidential enquiries, disease-only registers and health surveys. Databases in group I can be used not only to assess effectiveness but also to assess diffusion and equity. Databases in group II can only assess diffusion. Group III has restricted scope for assessing HTs, except for analysis of adverse events. For use in costing, databases need to include unit costs or prices. Some databases included unit cost as well as a specific HT. A list of around 270 databases was identified at the level of UK, England and Wales or England (over 1000 including Scotland, Wales and Northern Ireland). Allocation of these to the above groups identified around 60 databases with some potential for HT assessment, roughly half to group I. Eighteen clinical registers were identified as having the greatest potential although the clinical administrative datasets had potential mainly owing to their inclusion of a wide range of technologies. Only two databases were identified that could directly be used in costing. The review of the potential capture of HTs prioritized by the UK's NHS R&D HTA programme showed that only 10% would be captured in these databases, mainly drugs prescribed in primary care. The review of the use of routine databases in any form of HT assessment indicated that clinical registers were mainly used for national comparative audit. Some databases have only been used in annual reports, usually time trend analysis. A few peer-reviewed papers used a clinical register to assess the effectiveness of a technology. Accessibility is suggested as a barrier to using most databases. Clinical administrative databases (group Ib) have mainly been used to build population needs indices and performance indicators. A review of the validity of used databases showed that although internal consistency checks were common, relatively few had any form of external audit. 
Some comparative audit databases have data scrutinised by participating units. Issues around coverage and coding have, in general, received little attention. NHS funding of databases has been mainly for 'Central Returns' for management purposes, which excludes those databases with the greatest potential for HT assessment. Funding for databases varied, and some are unfunded, relying on goodwill. The estimated total cost of the databases in group I plus selected databases from groups II and III is around £50 million, or about 0.1% of annual NHS spend. A few databases with limited potential for HT assessment account for the bulk of this spending. Suggestions for policy include clarification of responsibility for the strategic development of databases, improved resourcing, and attention to issues around coding, confidentiality, ownership and access, maintenance of clinical support, optimal use of information technology, filling gaps and remedying deficiencies. Recommendations for researchers include closer policy links between routine data and R&D, and selective investment in the more promising databases. Recommended research topics include optimal capture and coding of the range of HTs, international comparisons of the role, funding and use of routine data in healthcare systems, and the use of routine databases in trials and in modelling. Independent evaluations are recommended for information strategies (such as those around the National Service Frameworks and various collaborations) and for electronic patient and health records.

  13. Consulting report on the NASA technology utilization network system

    NASA Technical Reports Server (NTRS)

    Hlava, Marjorie M. K.

    1992-01-01

    The purposes of this consulting effort are: (1) to evaluate the existing management and production procedures and workflow as they each relate to the successful development, utilization, and implementation of the NASA Technology Utilization Network System (TUNS) database; (2) to identify, as requested by the NASA Project Monitor, the strengths, weaknesses, areas of bottlenecking, and previously unaddressed problem areas affecting TUNS; (3) to recommend changes or modifications of existing procedures as necessary in order to effect corrections for the overall benefit of NASA TUNS database production, implementation, and utilization; and (4) to recommend the addition of alternative procedures, routines, and activities that will consolidate and facilitate the production, implementation, and utilization of the NASA TUNS database.

  14. Collaboration spotting for dental science.

    PubMed

    Leonardi, E; Agocs, A; Fragkiskos, S; Kasfikis, N; Le Goff, J M; Cristalli, M P; Luzzi, V; Polimeni, A

    2014-10-06

    The goal of the Collaboration Spotting project is to create an automatic system to collect information about publications and patents related to a given technology, to identify the key players involved, and to highlight collaborations and related technologies. The collected information can be visualized in a web browser as interactive graphical maps showing in an intuitive way the players and their collaborations (Sociogram) and the relations among the technologies (Technogram). We propose to use the system to study technologies related to Dental Science. In order to create a Sociogram, we create a logical filter based on a set of keywords related to the technology under study. This filter is used to extract a list of publications from the Web of Science™ database. The list is validated by an expert in the technology and sent to CERN where it is inserted in the Collaboration Spotting database. Here, an automatic software system uses the data to generate the final maps. We studied a set of recent technologies related to bone regeneration procedures of oro-maxillo-facial critical size defects, namely the use of porous hydroxyapatite (HA) as a bone substitute alone (bone graft) or as a tridimensional support (scaffold) for insemination and differentiation ex vivo of mesenchymal stem cells. We produced the Sociograms for these technologies and the resulting maps are now accessible online. The Collaboration Spotting system allows the automatic creation of interactive maps to show the current and historical state of research on a specific technology. These maps are an ideal tool both for researchers who want to assess the state-of-the-art in a given technology, and for research organizations who want to evaluate their contribution to the technological development in a given field. We demonstrated that the system can be used for Dental Science and produced the maps for an initial set of technologies in this field.
We now plan to enlarge the set of mapped technologies in order to make the Collaboration Spotting system a useful reference tool for Dental Science research.

  15. Collaboration Spotting for oral medicine.

    PubMed

    Leonardi, E; Agocs, A; Fragkiskos, S; Kasfikis, N; Le Goff, J M; Cristalli, M P; Luzzi, V; Polimeni, A

    2014-09-01

    The goal of the Collaboration Spotting project is to create an automatic system to collect information about publications and patents related to a given technology, to identify the key players involved, and to highlight collaborations and related technologies. The collected information can be visualized in a web browser as interactive graphical maps showing in an intuitive way the players and their collaborations (Sociogram) and the relations among the technologies (Technogram). We propose to use the system to study technologies related to oral medicine. In order to create a sociogram, we create a logical filter based on a set of keywords related to the technology under study. This filter is used to extract a list of publications from the Web of Science™ database. The list is validated by an expert in the technology and sent to CERN where it is inserted in the Collaboration Spotting database. Here, an automatic software system uses the data to generate the final maps. We studied a set of recent technologies related to bone regeneration procedures of oro-maxillo-facial critical size defects, namely the use of porous hydroxyapatite (HA) as a bone substitute alone (bone graft) or as a tridimensional support (scaffold) for insemination and differentiation ex vivo of mesenchymal stem cells. We produced the sociograms for these technologies and the resulting maps are now accessible on-line. The Collaboration Spotting system allows the automatic creation of interactive maps to show the current and historical state of research on a specific technology. These maps are an ideal tool both for researchers who want to assess the state-of-the-art in a given technology, and for research organizations who want to evaluate their contribution to the technological development in a given field. We demonstrated that the system can be used in oral medicine and produced the maps for an initial set of technologies in this field.
We now plan to enlarge the set of mapped technologies in order to make the Collaboration Spotting system a useful reference tool for oral medicine research.

  16. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    PubMed

    Owens, John

    2009-01-01

    Technological advances in the acquisition of DNA and protein sequence information, and the resulting onrush of data, can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is no greater than that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, and interactive HTML pages, or are exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of the Microsoft Windows, Macintosh, and UNIX operating systems.
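A sketch of the kind of project database the abstract describes, using Python's built-in sqlite3 module. The schema, clone names, and sequences here are hypothetical, invented only to show sequence records joined with free-text annotation in the way a report query might.

```python
import sqlite3

# Hypothetical minimal schema: sequence records plus free-text annotation,
# unified by a shared clone identifier.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sequences (clone TEXT PRIMARY KEY, chain TEXT, seq TEXT)")
con.execute("CREATE TABLE annotations (clone TEXT, note TEXT)")
con.executemany("INSERT INTO sequences VALUES (?, ?, ?)", [
    ("mAb-01", "heavy", "EVQLVESGGG"),
    ("mAb-02", "heavy", "QVQLQESGPG"),
])
con.execute("INSERT INTO annotations VALUES ('mAb-01', 'binds antigen X')")

# Join sequences with their annotation; unannotated clones still appear.
rows = con.execute(
    "SELECT s.clone, s.seq, a.note FROM sequences s "
    "LEFT JOIN annotations a ON a.clone = s.clone ORDER BY s.clone"
).fetchall()
print(rows[0])  # ('mAb-01', 'EVQLVESGGG', 'binds antigen X')
```

Independent project databases structured this way can be queried together simply by attaching or unioning them, which is the "selective unification" the abstract describes.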

  17. Annual patents review, January-December 2004

    Treesearch

    Roland Gleisner; Karen Scallon; Michael Fleischmann; Julie Blankenburg; Marguerite Sykes

    2005-01-01

    This review summarizes patents related to paper recycling that first appeared in patent databases during 2004. Two online databases, Claims/U.S. Patents Abstracts and Derwent World Patents Index, were searched for this review. This feature is intended to inform readers about recent developments in equipment design, chemicals, and process technologies for recycling...

  18. Reactome graph database: Efficient access to complex pathway data

    PubMed Central

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902

  19. Reactome graph database: Efficient access to complex pathway data.

    PubMed

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high-quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data through object-oriented queries and also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high-performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  20. A Blind Reversible Robust Watermarking Scheme for Relational Databases

    PubMed Central

    Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen

    2013-01-01

    Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as “histogram shifting of adjacent pixel difference” (APD), is used to obtain reversibility. The proposed scheme can successfully detect 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks. PMID:24223033
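    The reversibility mechanism named above is based on histogram shifting. The sketch below shows generic histogram shifting on a list of integers (it is an illustration of the idea, not the authors' exact APD variant, and the sample values are made up): values next to the histogram peak are shifted by one to free a bin, and each occurrence of the peak value then carries one watermark bit.

    ```python
    from collections import Counter

    def embed(values, bits):
        """Histogram-shifting embed: capacity = number of peak occurrences.
        Missing bits are padded with 0 (so extraction may return extra 0s)."""
        hist = Counter(values)
        peak = hist.most_common(1)[0][0]
        zero = peak + 1
        while hist.get(zero, 0) > 0:   # nearest larger empty bin
            zero += 1
        out, it = [], iter(bits)
        for v in values:
            if peak < v < zero:
                out.append(v + 1)      # shift to free the bin next to the peak
            elif v == peak:
                out.append(v + next(it, 0))  # peak -> peak (bit 0) or peak+1 (bit 1)
            else:
                out.append(v)
        return out, peak, zero

    def extract(marked, peak, zero):
        """Recover both the watermark bits and the original values exactly."""
        bits, orig = [], []
        for v in marked:
            if v == peak:
                bits.append(0); orig.append(peak)
            elif v == peak + 1:
                bits.append(1); orig.append(peak)
            elif peak + 1 < v <= zero:
                orig.append(v - 1)     # undo the shift
            else:
                orig.append(v)
        return bits, orig

    marked, peak, zero = embed([3, 1, 3, 2, 3, 5], [1, 0, 1])
    print(extract(marked, peak, zero))  # ([1, 0, 1], [3, 1, 3, 2, 3, 5])
    ```

    Distortion-free recovery follows because every operation (shift by one, peak-or-peak-plus-one) is invertible given the stored peak and zero points.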

  1. A blind reversible robust watermarking scheme for relational databases.

    PubMed

    Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen

    2013-01-01

    Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new, blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm, which is referred to as "histogram shifting of adjacent pixel difference" (APD), is used to obtain reversibility. The proposed scheme can successfully detect 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks.

  2. Data Architecture in an Open Systems Environment.

    ERIC Educational Resources Information Center

    Bernbom, Gerald; Cromwell, Dennis

    1993-01-01

    The conceptual basis for structured data architecture, and its integration with open systems technology at Indiana University, are described. Key strategic goals guiding these efforts are discussed: commitment to improved data access; migration to relational database technology, and deployment of a high-speed, multiprotocol network; and…

  3. Solutions for medical databases optimal exploitation.

    PubMed

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

    The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage an existing performance-enhancing technique, known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistic support for data warehousing techniques. The transformations have low computational complexity in practice and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.
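    Practical pre-aggregation, as described above, can be implemented on a standard relational engine by materializing a summary table once and answering roll-up queries from it. A minimal sketch using SQLite (table and column names are hypothetical, not from the paper):

    ```python
    import sqlite3

    # Raw fact table (hypothetical medical visit data).
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE visits (ward TEXT, year INT, cost REAL)")
    con.executemany("INSERT INTO visits VALUES (?, ?, ?)",
                    [("cardiology", 2013, 120.0), ("cardiology", 2013, 80.0),
                     ("surgery", 2013, 300.0), ("surgery", 2014, 150.0)])

    # Pre-aggregate once at (ward, year) granularity.
    con.execute("""CREATE TABLE visits_agg AS
                   SELECT ward, year, SUM(cost) AS cost, COUNT(*) AS n
                   FROM visits GROUP BY ward, year""")

    # OLAP-style roll-up answered from the small aggregate,
    # never rescanning the raw fact table.
    total_2013 = con.execute(
        "SELECT SUM(cost) FROM visits_agg WHERE year = 2013").fetchone()[0]
    print(total_2013)  # 500.0
    ```

    The saving comes from the aggregate holding one row per group rather than one per fact, so higher-level roll-ups touch far fewer rows.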

  4. DOE technology information management system database study report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  5. Economic evaluations in gastroenterology in Brazil: A systematic review.

    PubMed

    de Paiva Haddad, Luciana Bertocco; Decimoni, Tassia Cristina; Turri, Jose Antonio; Leandro, Roseli; de Soárez, Patrícia Coelho

    2016-02-06

    To systematically review economic evaluations in gastroenterology, relating to Brazil, published between 1980 and 2013. We selected full and partial economic evaluations from among those retrieved by searching the following databases: MEDLINE (PubMed); Excerpta Medica; the Latin American and Caribbean Health Sciences Literature database; the Scientific Electronic Library Online; the database of the Centre for Reviews and Dissemination; the National Health Service (NHS) Economic Evaluation Database; the NHS Health Technology Assessment database; the Health Economics database of the Brazilian Virtual Library of Health; Scopus; Web of Science; and the Brazilian Network for the Evaluation of Health Technologies. Two researchers, working independently, selected the studies and extracted the data. We identified 535 health economic evaluations relating to Brazil and published in the 1980-2013 period. Of those 535 articles, only 40 dealt with gastroenterology. Full and partial economic evaluations respectively accounted for 23 (57.5%) and 17 (42.5%) of the 40 studies included. Among the 23 full economic evaluations, there were 11 cost-utility analyses, seven cost-effectiveness analyses, four cost-consequence analyses, and one cost-minimization analysis. Of the 40 studies, 25 (62.5%) evaluated medications; 7 (17.5%) evaluated procedures; and 3 (7.5%) evaluated equipment. Most (55%) of the studies were related to viral hepatitis, and most (63.4%) were published after 2010. Other topics included gastrointestinal cancer, liver transplantation, digestive diseases and hernias. Over the 33-year period examined, the number of such economic evaluations relating to Brazil, especially of those evaluating medications for the treatment of hepatitis, increased considerably. Further studies are needed in order to ensure that expenditures on health care in Brazil are made as fairly and efficiently as possible.

  6. Economic evaluations in gastroenterology in Brazil: A systematic review

    PubMed Central

    de Paiva Haddad, Luciana Bertocco; Decimoni, Tassia Cristina; Turri, Jose Antonio; Leandro, Roseli; de Soárez, Patrícia Coelho

    2016-01-01

    AIM: To systematically review economic evaluations in gastroenterology, relating to Brazil, published between 1980 and 2013. METHODS: We selected full and partial economic evaluations from among those retrieved by searching the following databases: MEDLINE (PubMed); Excerpta Medica; the Latin American and Caribbean Health Sciences Literature database; the Scientific Electronic Library Online; the database of the Centre for Reviews and Dissemination; the National Health Service (NHS) Economic Evaluation Database; the NHS Health Technology Assessment database; the Health Economics database of the Brazilian Virtual Library of Health; Scopus; Web of Science; and the Brazilian Network for the Evaluation of Health Technologies. Two researchers, working independently, selected the studies and extracted the data. RESULTS: We identified 535 health economic evaluations relating to Brazil and published in the 1980-2013 period. Of those 535 articles, only 40 dealt with gastroenterology. Full and partial economic evaluations respectively accounted for 23 (57.5%) and 17 (42.5%) of the 40 studies included. Among the 23 full economic evaluations, there were 11 cost-utility analyses, seven cost-effectiveness analyses, four cost-consequence analyses, and one cost-minimization analysis. Of the 40 studies, 25 (62.5%) evaluated medications; 7 (17.5%) evaluated procedures; and 3 (7.5%) evaluated equipment. Most (55%) of the studies were related to viral hepatitis, and most (63.4%) were published after 2010. Other topics included gastrointestinal cancer, liver transplantation, digestive diseases and hernias. Over the 33-year period examined, the number of such economic evaluations relating to Brazil, especially of those evaluating medications for the treatment of hepatitis, increased considerably. CONCLUSION: Further studies are needed in order to ensure that expenditures on health care in Brazil are made as fairly and efficiently as possible. PMID:26855823

  7. The construction of the spatio-temporal database of the ancient Silk Road within Xinjiang province during the Han and Tang dynasties

    NASA Astrophysics Data System (ADS)

    Bi, Jiantao; Luo, Guilin; Wang, Xingxing; Zhu, Zuojia

    2014-03-01

    As a bridge between Chinese and Western civilization, the ancient Silk Road made a huge contribution to cultural, economic, and political exchanges between China and Western countries. In this paper, we treated the historical period of the Western Han, Eastern Han, and Tang dynasties as the research time domain, and the Western Regions' states that existed along the Silk Road during that period as the research spatial domain. We imported these data into the SQL Server database we constructed, from which one can query, by entering the name of a Western Regions state, attribute information such as population, military force, the era of the Central Plains empire, the significant events taking place in that state and related attributes of those events (such as the calendar year in which they happened), as well as related spatial information such as the present-day location, the coordinates of the capital, and the territory. Likewise, by entering a calendar year, one can query the significant events, the government institutions of the Central Plains, and the Western Regions states in existence at that time. Based on the database, and drawing on GIS, RS, Flex, C#, and other related information and network technologies, we can not only browse, search, and edit information on the ancient Silk Road in Xinjiang Province during the Han and Tang dynasties, but also perform preliminary analysis. This combination of archaeology and modern information technology can serve as a reference for further study, research, and practice in related fields.
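    The two query patterns described above (by state name, and by calendar year) can be sketched against a relational store; the schema and example rows below are hypothetical illustrations, not the paper's actual SQL Server design:

    ```python
    import sqlite3

    # Hypothetical schema for Western Regions states (negative years = BCE).
    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE states
                   (name TEXT, dynasty TEXT, start_year INT, end_year INT,
                    capital_lon REAL, capital_lat REAL)""")
    con.executemany("INSERT INTO states VALUES (?, ?, ?, ?, ?, ?)",
                    [("Loulan", "Western Han", -77, 448, 89.8, 40.5),
                     ("Qiuci", "Tang", -100, 790, 82.9, 41.7)])

    # Query by state name: return its spatial attributes.
    row = con.execute("SELECT capital_lon, capital_lat FROM states "
                      "WHERE name = ?", ("Loulan",)).fetchone()

    # Query by calendar year: which states existed in 630 CE?
    names = [r[0] for r in con.execute(
        "SELECT name FROM states WHERE start_year <= 630 AND end_year >= 630")]
    print(row, names)
    ```

    The temporal query reduces to an interval-containment predicate on the state's existence span, which is the core of the spatio-temporal indexing the database supports.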

  8. Performance Evaluation of NoSQL Databases: A Case Study

    DTIC Science & Technology

    2015-02-01

    ...a centralized relational database. The customer decided to consider NoSQL technologies for two specific uses, namely: the primary data store for... The choice of a particular NoSQL database imposes a specific distributed software architecture and data model, and is a major determinant of the...

  9. Scale out databases for CERN use cases

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Lanza Garcia, Daniel; Surdy, Kacper

    2015-12-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database.

  10. An Updating System for the Gridded Population Database of China Based on Remote Sensing, GIS and Spatial Database Technologies.

    PubMed

    Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui

    2009-01-01

    The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
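    The core idea of the Population Spatialization Model described above is to redistribute a region's census total onto grid cells in proportion to weights derived from land use and land cover. A minimal sketch of that redistribution step (the weights and totals are made-up illustrative numbers, not the SPUS model's actual parameters):

    ```python
    # Redistribute a regional census total onto grid cells in proportion
    # to per-cell land-use weights (hypothetical values).
    def spatialize(total_pop, cell_weights):
        s = sum(cell_weights.values())
        return {cell: total_pop * w / s for cell, w in cell_weights.items()}

    # Higher weights for built-up land, lower for cropland, zero for water.
    weights = {"(0,0)": 0.6, "(0,1)": 0.3, "(1,0)": 0.1, "(1,1)": 0.0}
    grid = spatialize(10000, weights)
    print(grid["(0,0)"])  # 6000.0
    ```

    Because the weights are normalized, the gridded values always sum back to the census total, which is what makes validation against finer-level census data meaningful.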

  11. Publication of nuclear magnetic resonance experimental data with semantic web technology and the application thereof to biomedical research of proteins.

    PubMed

    Yokochi, Masashi; Kobayashi, Naohiro; Ulrich, Eldon L; Kinjo, Akira R; Iwata, Takeshi; Ioannidis, Yannis E; Livny, Miron; Markley, John L; Nakamura, Haruki; Kojima, Chojiro; Fujiwara, Toshimichi

    2016-05-05

    The nuclear magnetic resonance (NMR) spectroscopic data for biological macromolecules archived at the BioMagResBank (BMRB) provide a rich resource of biophysical information at atomic resolution. The NMR data archived in NMR-STAR ASCII format have been implemented in a relational database. However, it is still fairly difficult for users to retrieve data from the NMR-STAR files or the relational database in association with data from other biological databases. To enhance the interoperability of the BMRB database, we present a full conversion of BMRB entries to two standard structured data formats, XML and RDF, as common open representations of the NMR-STAR data. Moreover, a SPARQL endpoint has been deployed. The described case study demonstrates that a simple query of the SPARQL endpoints of the BMRB, UniProt, and Online Mendelian Inheritance in Man (OMIM), can be used in NMR and structure-based analysis of proteins combined with information of single nucleotide polymorphisms (SNPs) and their phenotypes. We have developed BMRB/XML and BMRB/RDF and demonstrate their use in performing a federated SPARQL query linking the BMRB to other databases through standard semantic web technologies. This will facilitate data exchange across diverse information resources.
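    The federated SPARQL query described above joins RDF triples from several endpoints on shared identifiers. A minimal in-memory analogue of one federation hop, joining two hypothetical triple sets on a common UniProt accession (the identifiers are illustrative, not real BMRB/OMIM records):

    ```python
    # Two tiny "endpoints" as triple lists: (subject, predicate, object).
    bmrb = [("BMRB:4023", "describesProtein", "UniProt:P04637")]
    omim = [("UniProt:P04637", "associatedPhenotype", "OMIM:151623")]

    def join(left, right):
        """Match left objects against right subjects: one federation hop."""
        return [(s1, o1, p2, o2)
                for s1, p1, o1 in left
                for s2, p2, o2 in right
                if o1 == s2]

    print(join(bmrb, omim))
    ```

    A real federated query does the same matching with `SERVICE` clauses across SPARQL endpoints; the shared accession is what lets NMR entries be linked to SNP and phenotype records without any bespoke import code.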

  12. ERIC/IT Update, 2001.

    ERIC Educational Resources Information Center

    ERIC/IT Update, 2001

    2001-01-01

    The majority of this publication is comprised of 13 feature articles covering a wide range of topics in the areas of educational technology and library and information sciences. Also offered are related abstracts found in the ERIC Database and the latest news at the ERIC Clearinghouse on Information & Technology, including the publication of…

  13. The Computer Catalog: A Democratic or Authoritarian Technology?

    ERIC Educational Resources Information Center

    Adams, Judith A.

    1988-01-01

    Discussion of consequences of library automation argues that technology should be used to augment access to information. Online public access catalogs are considered in this context, along with several related issues such as system incompatibility, invasion of privacy, barriers to database access and manipulation, and user fees, which contribute…

  14. Towards G2G: Systems of Technology Database Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Bell, David

    2005-01-01

    We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data for internal use in analysis and planning, as well as for external communications. G2G enables NASA managers, engineers, operational teams, and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing, and modifying components of technology database systems. G2G will allow information and knowledge distributed across the organizational entities involved to interoperate, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include an integrated approach to sustaining effective management of technology investments while preserving the ability of the various technology database systems to be independently managed. The integration technology will comply with emerging open standards. Applications can thus be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will build on NASA's breakthroughs in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database systems.

  15. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

    The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.

  16. Current situation and future usage of anticancer drug databases.

    PubMed

    Wang, Hongzhi; Yin, Yuanyuan; Wang, Peiqi; Xiong, Chenyu; Huang, Lingyu; Li, Sijia; Li, Xinyi; Fu, Leilei

    2016-07-01

    Cancer is a deadly disease with increasing incidence and mortality rates that affects the quality of life of millions of people per year. The past 15 years have witnessed the rapid development of targeted therapy for cancer treatment, with numerous anticancer drugs, drug targets, and related gene mutations having been identified. The demand for better anticancer drugs and the advances in database technologies have propelled the development of databases related to anticancer drugs. These databases provide systematic collections of integrative information either directly on anticancer drugs or on a specific type of anticancer drugs, with their own emphases on different aspects, such as drug-target interactions, the relationship between mutations in drug targets and drug resistance/sensitivity, drug-drug interactions, natural products with anticancer activity, anticancer peptides, synthetic lethality pairs and histone deacetylase inhibitors. We focus on a holistic view of the current situation and future usage of databases related to anticancer drugs and further discuss their strengths and weaknesses, in the hope of facilitating the discovery of new anticancer drugs with better clinical outcomes.

  17. The research of network database security technology based on web service

    NASA Astrophysics Data System (ADS)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies security technologies for network databases, analyzes a sub-key encryption algorithm in detail, and successfully applies this algorithm to a campus one-card system. The realization process of the encryption algorithm is discussed; the method can serve as a reference in many fields, particularly in management information system security and e-commerce.
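    The paper does not spell out its sub-key algorithm here, but the general idea of field-level sub-key encryption can be sketched: derive one sub-key per (table, column) from a master key, then encrypt each field under its sub-key. The sketch below uses an HMAC-based derivation and a simple hash keystream purely for illustration; it is an assumed generic construction, not the paper's scheme, and not production-grade cryptography:

    ```python
    import hashlib
    import hmac

    def subkey(master: bytes, table: str, column: str) -> bytes:
        """Derive a per-field sub-key from the master key (illustrative)."""
        return hmac.new(master, f"{table}.{column}".encode(),
                        hashlib.sha256).digest()

    def xor_crypt(key: bytes, data: bytes) -> bytes:
        """XOR with a hash-chain keystream; XOR is its own inverse."""
        stream, block = b"", key
        while len(stream) < len(data):
            block = hashlib.sha256(block).digest()
            stream += block
        return bytes(a ^ b for a, b in zip(data, stream))

    master = b"campus-card-master-key"          # hypothetical key
    k = subkey(master, "students", "card_no")   # hypothetical table/column
    ct = xor_crypt(k, b"20130017")
    assert xor_crypt(k, ct) == b"20130017"      # decryption reverses encryption
    ```

    The appeal of sub-keys is containment: compromising one column's key does not expose other columns, since each sub-key is derived independently from the master.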

  18. Assistive technology for ultrasound-guided central venous catheter placement.

    PubMed

    Ikhsan, Mohammad; Tan, Kok Kiong; Putra, Andi Sudjana

    2018-01-01

    This study evaluated the existing technology used to improve the safety and ease of ultrasound-guided central venous catheterization. Electronic database searches were conducted in Scopus, IEEE, Google Patents, and relevant conference databases (SPIE, MICCAI, and IEEE conferences) for related articles on assistive technology for ultrasound-guided central venous catheterization. A total of 89 articles were examined and pointed to several fields that are currently the focus of improvements to ultrasound-guided procedures. These include improving needle visualization, needle guides and localization technology, image processing algorithms to enhance and segment important features within the ultrasound image, robotic assistance using probe-mounted manipulators, and improving procedure ergonomics through in situ projections of important information. Probe-mounted robotic manipulators provide a promising avenue for assistive technology developed for freehand ultrasound-guided percutaneous procedures. However, there is currently a lack of clinical trials to validate the effectiveness of these devices.

  19. Object-oriented structures supporting remote sensing databases

    NASA Technical Reports Server (NTRS)

    Wichmann, Keith; Cromp, Robert F.

    1995-01-01

    Object-oriented databases show promise for modeling the complex interrelationships pervasive in scientific domains. To examine the utility of this approach, we have developed an Intelligent Information Fusion System based on this technology, and applied it to the problem of managing an active repository of remotely-sensed satellite scenes. The design and implementation of the system is compared and contrasted with conventional relational database techniques, followed by a presentation of the underlying object-oriented data structures used to enable fast indexing into the data holdings.

  20. NNDC Stand: Activities and Services of the National Nuclear Data Center

    NASA Astrophysics Data System (ADS)

    Pritychenko, B.; Arcilla, R.; Burrows, T. W.; Dunford, C. L.; Herman, M. W.; McLane, V.; Obložinský, P.; Sonzogni, A. A.; Tuli, J. K.; Winchell, D. F.

    2005-05-01

    The National Nuclear Data Center (NNDC) collects, evaluates, and disseminates nuclear physics data for basic nuclear research and applied nuclear technologies, including energy, shielding, medicine, and homeland security. In 2004, to answer the needs of the nuclear data user community, the NNDC completed a project to modernize the data storage and management of its databases and began offering new nuclear data Web services. The principles of database and Web application development, as well as related nuclear reaction and structure database services, are briefly described.

  1. Backend Control Processor for a Multi-Processor Relational Database Computer System.

    DTIC Science & Technology

    1984-12-01

    AFIT/GCS/ENG/84D-22. Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the... development of a Backend Multi-Processor Relational Database Computer System. This thesis addresses a single component of this system, the Backend Control...

  2. Develop a Prototype Personal Health Record Application (PHR-A) that Captures Information About Daily Living Important for Diabetes and Provides Decision Support with Actionable Advice for Diabetes Self Care

    DTIC Science & Technology

    2012-10-01

    higher  Java v5Apache Struts v2  Hibernate v2  C3PO  SQL*Net client / JDBC Database Server  Oracle 10.0.2 Desktop Client  Internet Explorer...for mobile Smartphones - A Java -based framework utilizing Apache Struts on the server - Relational database to handle data storage requirements B...technologies are as follows: Technology Use Requirements Java Application Provides the backend application software to drive the PHR-A 7 BEA Web

  3. MTO-like reference mask modeling for advanced inverse lithography technology patterns

    NASA Astrophysics Data System (ADS)

    Park, Jongju; Moon, Jongin; Son, Suein; Chung, Donghoon; Kim, Byung-Gook; Jeon, Chan-Uk; LoPresti, Patrick; Xue, Shan; Wang, Sonny; Broadbent, Bill; Kim, Soonho; Hur, Jiuk; Choo, Min

    2017-07-01

    Advanced Inverse Lithography Technology (ILT) can result in mask post-OPC databases with very small address units, all-angle figures, and very high vertex counts. This creates mask inspection issues for existing mask inspection database rendering. These issues include: large data volumes, low transfer rate, long data preparation times, slow inspection throughput, and marginal rendering accuracy leading to high false detections. This paper demonstrates the application of a new rendering method including a new OASIS-like mask inspection format, new high-speed rendering algorithms, and related hardware to meet the inspection challenges posed by Advanced ILT masks.

  4. Solutions for medical databases optimal exploitation

    PubMed Central

    Branescu, I; Purcarea, VL; Dobrescu, R

    2014-01-01

    The paper discusses the methods to apply OLAP techniques for multidimensional databases that leverage the existing, performance-enhancing technique, known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as a logistic support to the data warehousing techniques. The transformations have practically low computational complexity and they may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies in current OLAP systems, transparently to the user and proposes a flexible, “multimodel" federated system for extending OLAP querying to external object databases. PMID:24653769

  5. MIPS: analysis and annotation of proteins from whole genomes in 2005

    PubMed Central

    Mewes, H. W.; Frishman, D.; Mayer, K. F. X.; Münsterkötter, M.; Noubibou, O.; Pagel, P.; Rattei, T.; Oesterheld, M.; Ruepp, A.; Stümpflen, V.

    2006-01-01

    The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system are maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information enabling the representation of complex information such as functional modules or networks Genome Research Environment System, (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix and the CABiNet network analysis framework and (iii) the compilation and manual annotation of information related to interactions such as protein–protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de). PMID:16381839

  6. MIPS: analysis and annotation of proteins from whole genomes in 2005.

    PubMed

    Mewes, H W; Frishman, D; Mayer, K F X; Münsterkötter, M; Noubibou, O; Pagel, P; Rattei, T; Oesterheld, M; Ruepp, A; Stümpflen, V

    2006-01-01

    The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system are maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information enabling the representation of complex information such as functional modules or networks Genome Research Environment System, (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix and the CABiNet network analysis framework and (iii) the compilation and manual annotation of information related to interactions such as protein-protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de).

  7. The human role in space (THURIS) applications study. Final briefing

    NASA Technical Reports Server (NTRS)

    Maybee, George W.

    1987-01-01

THURIS (The Human Role in Space) is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost, and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data, converting them into a form which can be analyzed using the THURIS cost model. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.

  8. Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics

    PubMed Central

    Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.

    2012-01-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
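CouchDB itself stores JSON documents and answers queries through map/reduce views written in JavaScript and served over HTTP; the pure-Python sketch below (all document fields and function names invented) only illustrates that data model, not the CouchDB API: schema-free documents whose fields may differ per record, indexed by a map function that emits key/value pairs.

```python
# Illustrative only: mimics a CouchDB-style map view over schema-free,
# JSON-like documents. Real CouchDB views run server-side in JavaScript.
from collections import defaultdict

docs = [  # schema-free documents: fields may differ per record
    {"_id": "g1", "type": "gene", "symbol": "TP53", "chrom": "17"},
    {"_id": "g2", "type": "gene", "symbol": "EGFR", "chrom": "7"},
    {"_id": "d1", "type": "drug", "name": "erlotinib", "targets": ["EGFR"]},
]

def map_genes_by_chrom(doc):
    """Map step: emit (key, value) pairs, like a view's emit() call."""
    if doc.get("type") == "gene":
        yield doc["chrom"], doc["symbol"]

def query_view(docs, map_fn):
    """Group emitted values by key, as the materialized view index would."""
    index = defaultdict(list)
    for doc in docs:
        for key, value in map_fn(doc):
            index[key].append(value)
    return dict(index)

print(query_view(docs, map_genes_by_chrom))  # {'17': ['TP53'], '7': ['EGFR']}
```

The drug document needs no placeholder columns for gene fields, which is the "non-rigid design schema" the abstract highlights.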

  9. Videotelephony and Disability: A Bibliography. Technology, Communication and Disability, Report No. 5.

    ERIC Educational Resources Information Center

    Brodin, Jane; Magnusson, Magnus

    This annotated bibliography on videotelephony and disability is based on a literature search in nine databases, as well as information collected from literature lists in published reports. The bibliography's scope includes telephony and related fields such as telematics, the use of different kinds of picture telephones, information technology,…

  10. Database constraints applied to metabolic pathway reconstruction tools.

    PubMed

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database and, based on this study, adjusted and tuned the configurable parameters of the database server to obtain the best performance of the communication link to/from the database system. Different database technologies were analyzed: we started the study with a public relational SQL database, MySQL, and then reimplemented the same database in HBase, a MapReduce-based database. The results indicated that the standard configuration of MySQL gives acceptable performance for small or medium-sized databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
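The study's core idea, classifying the frequent access patterns and then adjusting the database to match them, can be sketched with SQLite from the standard library as a stand-in for MySQL (the table, columns, and data below are invented for illustration):

```python
# A minimal stand-in for access-pattern-driven tuning: once the frequent
# access (lookups by organism) is identified, an index replaces the full
# table scan. SQLite substitutes for MySQL; all names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE genes (id INTEGER PRIMARY KEY, organism TEXT, symbol TEXT)")
rows = [(i, f"org{i % 50}", f"gene{i}") for i in range(10_000)]
conn.executemany("INSERT INTO genes VALUES (?, ?, ?)", rows)

# Without an index, this predicate forces a scan of all 10,000 rows.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT symbol FROM genes WHERE organism = 'org7'"
).fetchall()
print(plan)

# Tuning step: an index on the hot column turns the scan into a search.
conn.execute("CREATE INDEX idx_organism ON genes (organism)")
hits = conn.execute(
    "SELECT COUNT(*) FROM genes WHERE organism = 'org7'"
).fetchone()[0]
print(hits)  # 200
```

Re-running the `EXPLAIN QUERY PLAN` statement after index creation shows the planner switching to the index, which is the kind of configuration payoff the abstract reports.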

  11. Traffic accident in Cuiabá-MT: an analysis through the data mining technology.

    PubMed

    Galvão, Noemi Dreyer; de Fátima Marin, Heimar

    2010-01-01

Road traffic accidents (ATT) are non-intentional events of substantial magnitude worldwide, mainly in urban centers. This article analyzes data related to victims of ATT recorded by the Justice Secretariat and Public Security (SEJUSP) and in hospital morbidity and mortality records in the city of Cuiabá-MT during 2006, using data mining technology. An observational, retrospective and exploratory study of the secondary databases was carried out. The three selected databases were linked using the probabilistic method, through the free software RecLink, yielding one hundred and thirty-nine (139) real pairs of ATT victims. Data mining technology was then applied to this linked database with the WEKA software, using the Apriori algorithm. The procedure generated the 10 best rules, six of which met the established parameters and indicated useful and comprehensible knowledge for characterizing the accident victims in Cuiabá. Finally, the association rules revealed peculiarities of the road traffic accident victims in Cuiabá and highlight the need for preventive measures against collision accidents involving males.
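The Apriori algorithm the study ran in WEKA finds itemsets whose support clears a threshold, then derives association rules whose confidence clears a second threshold. A compact sketch (the toy transactions below are invented and only mirror the kind of victim attributes mined):

```python
# Compact Apriori sketch: level-wise frequent-itemset search, then
# rule derivation. Toy transactions are invented victim-attribute sets.
from itertools import combinations

transactions = [
    {"male", "collision", "hospitalized"},
    {"male", "collision"},
    {"female", "fall"},
    {"male", "collision", "night"},
]

def apriori(transactions, min_support):
    """Level-wise search for itemsets meeting the minimum support."""
    n = len(transactions)
    support = lambda items: sum(items <= t for t in transactions) / n
    singletons = {frozenset([i]) for t in transactions for i in t}
    frequent, level, k = {}, {s for s in singletons if support(s) >= min_support}, 1
    while level:
        frequent.update({s: support(s) for s in level})
        k += 1
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = {c for c in candidates if support(c) >= min_support}
    return frequent

def rules(frequent, min_conf):
    """Derive rules X -> Y with confidence support(X ∪ Y) / support(X)."""
    out = []
    for itemset, sup in frequent.items():
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                conf = sup / frequent[lhs]
                if conf >= min_conf:
                    out.append((set(lhs), set(itemset - lhs), conf))
    return out

freq = apriori(transactions, min_support=0.5)
for lhs, rhs, conf in rules(freq, min_conf=0.8):
    print(lhs, "->", rhs, f"conf={conf:.2f}")
```

On this toy data only {male} and {collision} and their pair are frequent at support 0.5, so the two surviving rules are male → collision and collision → male, each with confidence 1.0.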

  12. Aviation Trends Related to Atmospheric Environment Safety Technologies Project Technical Challenges

    NASA Technical Reports Server (NTRS)

    Reveley, Mary S.; Withrow, Colleen A.; Barr, Lawrence C.; Evans, Joni K.; Leone, Karen M.; Jones, Sharon M.

    2014-01-01

    Current and future aviation safety trends related to the National Aeronautics and Space Administration's Atmospheric Environment Safety Technologies Project's three technical challenges (engine icing characterization and simulation capability; airframe icing simulation and engineering tool capability; and atmospheric hazard sensing and mitigation technology capability) were assessed by examining the National Transportation Safety Board (NTSB) accident database (1989 to 2008), incidents from the Federal Aviation Administration (FAA) accident/incident database (1989 to 2006), and literature from various industry and government sources. The accident and incident data were examined for events involving fixed-wing airplanes operating under Federal Aviation Regulation (FAR) Parts 121, 135, and 91 for atmospheric conditions related to airframe icing, ice-crystal engine icing, turbulence, clear air turbulence, wake vortex, lightning, and low visibility (fog, low ceiling, clouds, precipitation, and low lighting). Five future aviation safety risk areas associated with the three AEST technical challenges were identified after an exhaustive survey of a variety of sources and include: approach and landing accident reduction, icing/ice detection, loss of control in flight, super density operations, and runway safety.

  13. National security and national competitiveness: Open source solutions; NASA requirements and capabilities

    NASA Technical Reports Server (NTRS)

    Cotter, Gladys A.

    1993-01-01

    Foreign competitors are challenging the world leadership of the U.S. aerospace industry, and increasingly tight budgets everywhere make international cooperation in aerospace science necessary. The NASA STI Program has as part of its mission to support NASA R&D, and to that end has developed a knowledge base of aerospace-related information known as the NASA Aerospace Database. The NASA STI Program is already involved in international cooperation with NATO/AGARD/TIP, CENDI, ICSU/ICSTI, and the U.S. Japan Committee on STI. With the new more open political climate, the perceived dearth of foreign information in the NASA Aerospace Database, and the development of the ESA database and DELURA, the German databases, the NASA STI Program is responding by sponsoring workshops on foreign acquisitions and by increasing its cooperation with international partners and with other U.S. agencies. The STI Program looks to the future of improved database access through networking and a GUI; new media; optical disk, video, and full text; and a Technology Focus Group that will keep the NASA STI Program current with technology.

  14. NIST Gas Hydrate Research Database and Web Dissemination Channel.

    PubMed

    Kroenlein, K; Muzny, C D; Kazakov, A; Diky, V V; Chirico, R D; Frenkel, M; Sloan, E D

    2010-01-01

To facilitate advances in the application of technologies pertaining to gas hydrates, a freely available data resource containing experimentally derived information about those materials was developed. This work was performed by the Thermodynamic Research Center (TRC), paralleling its highly successful database of thermodynamic and transport properties of pure molecular compounds and their mixtures. Populating the gas-hydrates database required development of guided data capture (GDC) software designed to convert experimental data and metadata into a well-organized electronic format, as well as a relational database schema to accommodate all types of numerical data and metadata within the scope of the project. To guarantee utility for the broad gas hydrate research community, TRC worked closely with the Committee on Data for Science and Technology (CODATA) task group for Data on Natural Gas Hydrates, an international data-sharing effort, in developing a gas hydrate markup language (GHML). The fruits of these efforts are disseminated through the NIST Standard Reference Data Program [1] as the Clathrate Hydrate Physical Property Database (SRD #156). A web-based interface for this database, as well as scientific results from the Mallik 2002 Gas Hydrate Production Research Well Program [2], is deployed at http://gashydrates.nist.gov.
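A relational schema of the kind described, numerical measurements keyed to compounds and to their literature sources, might look like the following miniature (SQLite stands in for the actual database engine; every table name, column, and value is invented for illustration):

```python
# Hypothetical miniature of a property-data schema: measurements are
# normalized into their own table and joined back to compounds, with the
# originating source carried as metadata. All names/values are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE compound (
    id INTEGER PRIMARY KEY,
    formula TEXT NOT NULL
);
CREATE TABLE measurement (
    id INTEGER PRIMARY KEY,
    compound_id INTEGER REFERENCES compound(id),
    property TEXT,          -- e.g. a dissociation pressure
    value REAL,
    unit TEXT,
    source TEXT             -- metadata: the originating publication
);
""")
db.execute("INSERT INTO compound VALUES (1, 'CH4 hydrate')")
db.execute(
    "INSERT INTO measurement VALUES (1, 1, 'dissociation pressure', 2.9, 'MPa', 'example source')"
)

row = db.execute("""
    SELECT c.formula, m.property, m.value, m.unit
    FROM measurement m JOIN compound c ON m.compound_id = c.id
""").fetchone()
print(row)  # ('CH4 hydrate', 'dissociation pressure', 2.9, 'MPa')
```

Keeping units and provenance as explicit columns is what lets such a schema "accommodate all types of numerical data and metadata" rather than hard-coding one property per table.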

  15. NCBI GEO: mining millions of expression profiles--database and tools.

    PubMed

    Barrett, Tanya; Suzek, Tugba O; Troup, Dennis B; Wilhite, Stephen E; Ngau, Wing-Chi; Ledoux, Pierre; Rudnev, Dmitry; Lash, Alex E; Fujibuchi, Wataru; Edgar, Ron

    2005-01-01

    The Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) is the largest fully public repository for high-throughput molecular abundance data, primarily gene expression data. The database has a flexible and open design that allows the submission, storage and retrieval of many data types. These data include microarray-based experiments measuring the abundance of mRNA, genomic DNA and protein molecules, as well as non-array-based technologies such as serial analysis of gene expression (SAGE) and mass spectrometry proteomic technology. GEO currently holds over 30,000 submissions representing approximately half a billion individual molecular abundance measurements, for over 100 organisms. Here, we describe recent database developments that facilitate effective mining and visualization of these data. Features are provided to examine data from both experiment- and gene-centric perspectives using user-friendly Web-based interfaces accessible to those without computational or microarray-related analytical expertise. The GEO database is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.

  16. Image Format Conversion to DICOM and Lookup Table Conversion to Presentation Value of the Japanese Society of Radiological Technology (JSRT) Standard Digital Image Database.

    PubMed

    Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki

    2016-01-01

The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in much state-of-the-art research. However, the pixel values of all the images were simply digitized as relative density values using a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display system input value of digital imaging and communications in medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images in the JSRT standard digital image database to DICOM format, followed by conversion of the pixel values to P-values using an original program we developed. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of its images is maintained across displays of different luminance.
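The remapping step is a lookup-table conversion: every possible input pixel value is precomputed to an output value, then applied per pixel. The real P-value mapping follows DICOM's Grayscale Standard Display Function; the deliberately simplified linear table below (with assumed 12-bit input and 8-bit output depths) only illustrates the mechanics:

```python
# Hypothetical, simplified LUT conversion. The actual DICOM P-value
# mapping uses the Grayscale Standard Display Function, not a linear
# ramp; this sketch only shows the precompute-then-apply pattern.

def build_lut(in_max, out_max):
    """Precompute an output value for every possible input pixel value."""
    return [round(v * out_max / in_max) for v in range(in_max + 1)]

def apply_lut(pixels, lut):
    """Remap each pixel through the table -- one list lookup per pixel."""
    return [lut[p] for p in pixels]

# Assumed depths: 12-bit relative density in, 8-bit presentation value out.
lut = build_lut(in_max=4095, out_max=255)
print(apply_lut([0, 2048, 4095], lut))  # [0, 128, 255]
```

Precomputing the table once makes the per-image conversion a cheap array lookup, which matters when reprocessing an entire database of scans.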

  17. [The Development and Application of the Orthopaedics Implants Failure Database Software Based on WEB].

    PubMed

    Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao

    2015-09-01

This article describes the development of new WEB-based failure database software for orthopaedic implants. The software is based on the B/S mode; ASP dynamic web technology is used as the main development technique to achieve data interactivity, and Microsoft Access is used to create the database. These mature technologies make the software easy to extend and upgrade. The design and development ideas behind the software, its working process and functions, and its relevant technical features are presented. With this software, many different types of failure events of orthopaedic implants can be stored and the failure data can be statistically analyzed; at the macroscopic level, it can be used to evaluate the reliability of orthopaedic implants and operations, and it can ultimately guide doctors in improving the level of clinical treatment.

  18. JICST Factual Database(1)

    NASA Astrophysics Data System (ADS)

    Kurosawa, Shinji

This paper outlines the JICST factual database service (JOIS-F), which JICST started in January 1988, and its online service. First, the author reviews the circumstances from 1973, when planning began, to the present, and the project's relation to the "Project by Special Coordination Funds for Promoting Science and Technology". Secondly, databases now under development, aiming to start service in fiscal 1988 or fiscal 1989 and covering DNA, metallic material strength, crystal structures, chemical substance regulations, and so forth, are described. Lastly, the online service is briefly explained.

  19. Data, Metadata - Who Cares?

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

There is a traditional saying that metadata are understandable, semantically rich, and searchable, whereas data are big, carry no accessible semantics, and are merely downloadable. Not only has this led to an imbalance of search support from a user perspective, but also, underneath, to a deep technology divide, often with relational databases for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome, and data and metadata will become equally searchable, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision, ad-hoc processing and filtering will no longer be distinguished, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present the approach taken, how it can leverage OGC standards, the benefits envisaged, and first results.

  20. Alternative treatment technology information center computer database system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.

    1995-10-01

The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  1. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    NASA Astrophysics Data System (ADS)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.
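The offloading pattern described, bulk-copy data out of the production database, then run the heavy reports against the offline copy, can be sketched in miniature (SQLite stands in for both the Oracle source and the Hadoop-side store; the table and data are invented):

```python
# Toy illustration of query offloading: the GROUP BY report at the end
# runs only against the offline copy, never the production connection.
# SQLite stands in for both stores; all names and values are invented.
import sqlite3

production = sqlite3.connect(":memory:")
production.execute("CREATE TABLE controls_log (ts INTEGER, device TEXT, value REAL)")
production.executemany(
    "INSERT INTO controls_log VALUES (?, ?, ?)",
    [(t, "magnet" if t % 2 else "rf", float(t)) for t in range(100)],
)

# Offload step: bulk-copy the rows into a separate analytics store.
offline = sqlite3.connect(":memory:")
offline.execute("CREATE TABLE controls_log (ts INTEGER, device TEXT, value REAL)")
offline.executemany(
    "INSERT INTO controls_log VALUES (?, ?, ?)",
    production.execute("SELECT ts, device, value FROM controls_log"),
)

# Reporting step: aggregation touches only the offline copy.
report = offline.execute(
    "SELECT device, COUNT(*), AVG(value) FROM controls_log GROUP BY device"
).fetchall()
print(report)
```

The design point is isolation: however expensive the analytics query, the critical production store only ever served a plain sequential export.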

  2. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Barberis, D.

    2016-09-01

The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, and run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years has allowed an extended and complementary usage of traditional relational databases and new structured storage tools, in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by modern storage systems. The trend is towards using the best tool for each kind of data: separating, for example, the intrinsically relational metadata from payload storage, and frequently updated records that benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of the data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  3. Driver acceptance of commercial vehicle operations (CVO) technology in the motor carrier environment. Executive summary, Critical issues relating to acceptance of technology by interstate truck and bus drivers

    DOT National Transportation Integrated Search

    2000-05-01

    The California database incorporated in the Highway Safety Information System (HSIS) is derived from the California TASAS (Traffic Accident Surveillance and Analysis System). The system, maintained by the Traffic Operations Office of Caltrans, is a m...

  4. Data-Base Software For Tracking Technological Developments

    NASA Technical Reports Server (NTRS)

    Aliberti, James A.; Wright, Simon; Monteith, Steve K.

    1996-01-01

Technology Tracking System (TechTracS) computer program developed for use in storing and retrieving information on technology and related patent information developed under auspices of NASA Headquarters and NASA's field centers. Contents of database include multiple scanned still images and QuickTime movies as well as text. TechTracS includes word-processing, report-editing, chart-and-graph-editing, and search-editing subprograms. Extensive keyword searching capabilities enable rapid location of technologies, innovators, and companies. System performs routine functions automatically and serves multiple users.

  5. The ability of older adults to use customized online medical databases to improve their health-related knowledge.

    PubMed

    Freund, Ophir; Reychav, Iris; McHaney, Roger; Goland, Ella; Azuri, Joseph

    2017-06-01

Patient compliance with medical advice and recommended treatment depends on perception of health condition, medical knowledge, attitude, and self-efficacy. This study investigated how use of customized online medical databases, intended to improve knowledge in a variety of relevant medical topics, influenced senior adults' perceptions. Seventy-nine older adults in residence homes completed a computerized, tablet-based questionnaire with medical scenarios and related questions. Following an intervention, control group participants answered questions without online help, while an experimental group received internet links that directed them to customized online medical databases. Medical knowledge and test scores in the experimental group improved significantly from pre- to post-intervention (p<0.0001) and were higher than in the control group (p<0.0001). No significant change occurred in the control group. Older adults improved their knowledge in desired medical topic areas using customized online medical databases. The study demonstrated how such databases help solve health-related questions among older adults, and that older patients appear willing to consider technology usage in information acquisition. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Database security and encryption technology research and application

    NASA Astrophysics Data System (ADS)

    Zhu, Li-juan

    2013-03-01

The main purpose of this paper is to discuss the current problem of database information leakage and the important role played by message encryption techniques in database security, as well as the principle of MD5 encryption technology and its use in websites and applications. The article comprises an introduction, an overview of MD5 encryption technology, the use of MD5 encryption technology, and a final summary. With respect to requirements and applications, the paper gives readers a detailed and clear understanding of the principle of MD5, its importance in database security, and its use.
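The usage the paper discusses, storing a digest instead of the plaintext so that a leaked table does not directly expose secrets, is a one-liner with Python's standard hashlib. Note that MD5 is cryptographically broken; it appears below only because it is the paper's subject, and modern systems should prefer a salted, deliberately slow hash (e.g. `hashlib.scrypt`):

```python
# MD5 digest storage as the paper describes it. MD5 is broken for
# security purposes; this sketch only illustrates the mechanism.
import hashlib, os

def md5_digest(password: str) -> str:
    """Return the 32-hex-character MD5 digest of a UTF-8 string."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

print(md5_digest("password"))  # 5f4dcc3b5aa765d61d8327deb882cf99

# A per-user random salt defeats precomputed digest lookup tables:
salt = os.urandom(8)
salted = hashlib.md5(salt + b"password").hexdigest()
print(len(salted))  # 32 hex characters, always
```

Because an unsalted MD5 of a common password is a fixed, well-known value (as the first print shows), salting is the minimum hardening even within the paper's MD5-centric framing.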

  7. A comprehensive view of the web-resources related to sericulture

    PubMed Central

    Singh, Deepika; Chetia, Hasnahana; Kabiraj, Debajyoti; Sharma, Swagata; Kumar, Anil; Sharma, Pragya; Deka, Manab; Bora, Utpal

    2016-01-01

Recent progress in the field of sequencing and analysis has led to a tremendous spike in data and the development of data science tools. One outcome of this scientific progress is the development of numerous databases, which are gaining popularity in all disciplines of biology, including sericulture. As economically important organisms, silkworms are studied extensively for their numerous applications in the fields of textiles, biomaterials, biomimetics, etc. Similarly, host plants, pests, pathogens, etc. are also being probed to understand seri-resources more efficiently. These studies have led to the generation of numerous sericulture-related databases which are extremely helpful to the scientific community. In this article, we review all the available online resources on the silkworm and its related organisms, including databases as well as informative websites. We have studied their basic features and impact on research through citation count analysis, finally discussing the role of emerging sequencing and analysis technologies in the field of seri-data science. As an outcome of this review, a web portal named SeriPort has been created, which will act as an index for the various sericulture-related databases and web resources available in cyberspace. Database URL: http://www.seriport.in/ PMID:27307138

  8. Applications of Technology to CAS Data-Base Production.

    ERIC Educational Resources Information Center

    Weisgerber, David W.

    1984-01-01

    Reviews the economic importance of applying computer technology to Chemical Abstracts Service database production from 1973 to 1983. Database building, technological applications for editorial processing (online editing, Author Index Manufacturing System), and benefits (increased staff productivity, reduced rate of increase of cost of services,…

  9. Development of a database system for near-future climate change projections under the Japanese National Project SI-CAT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Y.; Kawahara, S.; Araki, F.; Matsuoka, D.; Ishikawa, Y.; Fujita, M.; Sugimoto, S.; Okada, Y.; Kawazoe, S.; Watanabe, S.; Ishii, M.; Mizuta, R.; Murata, A.; Kawase, H.

    2017-12-01

Analyses of large ensemble data are quite useful for producing probabilistic projections of climate change effects. Ensemble data of "+2K future climate simulations" are currently produced by the Japanese national project "Social Implementation Program on Climate Change Adaptation Technology (SI-CAT)" as part of the database for Policy Decision making for Future climate change (d4PDF; Mizuta et al. 2016) produced by the Program for Risk Information on Climate Change. Those data consist of global warming simulations and regional downscaling simulations. Considering that the data volumes are too large (a few petabytes) for users to download to a local computer, a user-friendly system is required to search and download the data that satisfy the users' requests. Under SI-CAT, we are developing a database system for near-future climate change projections that provides functions for finding the necessary data. The system mainly consists of a relational database, a data download function and a user interface, with the relational database, built on PostgreSQL, as the key component. Temporally and spatially compressed data are registered in the relational database. As a first step, we developed the relational database for precipitation, temperature and typhoon track data according to requests by SI-CAT members. The data download function, based on the Open-source Project for a Network Data Access Protocol (OPeNDAP), lets users download temporally and spatially extracted data based on search results obtained from the relational database. We also developed a web-based user interface for the relational database and the data download function. A prototype of the system is currently in operational testing on our local server. The system will be released on the Data Integration and Analysis System Program (DIAS) in fiscal year 2017, and its techniques might also prove quite useful for simulation and observational data in other research fields. We report the current status of development and some case studies of the system.
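The search-then-download flow described, a relational database of compact summaries from which only the rows matching a user's space/time request are selected, can be sketched in miniature (SQLite stands in for PostgreSQL; the schema, regions, and values are invented):

```python
# Toy version of the relational search step: compressed per-member
# summaries are indexed in SQL, and a user's space/time request selects
# only the matching subset for download. All names/values are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE precip_summary (
    member INTEGER, year INTEGER, region TEXT, mean_mm REAL)""")
db.executemany(
    "INSERT INTO precip_summary VALUES (?, ?, ?, ?)",
    [(m, y, r, 100.0 + m)
     for m in range(3) for y in (2030, 2031) for r in ("kanto", "kansai")],
)

# User request: all ensemble members' 2030 precipitation for one region.
hits = db.execute(
    "SELECT member, mean_mm FROM precip_summary "
    "WHERE year = ? AND region = ? ORDER BY member",
    (2030, "kanto"),
).fetchall()
print(hits)  # [(0, 100.0), (1, 101.0), (2, 102.0)]
```

Only after this cheap relational search would the (petabyte-scale) full fields for the matching members be fetched, which is the role OPeNDAP plays in the actual system.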

  10. GIDL: a rule based expert system for GenBank Intelligent Data Loading into the Molecular Biodiversity database

    PubMed Central

    2012-01-01

Background In the scientific biodiversity community, the need to build a bridge between molecular and traditional biodiversity studies is increasingly perceived. We believe that information technology could have a preeminent role in integrating the information generated by these studies with the large amount of molecular data found in public bioinformatics databases. This work is primarily aimed at building a bioinformatic infrastructure for the integration of public and private biodiversity data through the development of GIDL, an Intelligent Data Loader coupled with the Molecular Biodiversity Database. The system presented here organizes in an ontological way, and locally stores, the sequence and annotation data contained in the GenBank primary database. Methods The GIDL architecture consists of a relational database and intelligent data loader software. The relational database schema is designed to manage biodiversity information (Molecular Biodiversity Database) and is organized in four areas: MolecularData, Experiment, Collection and Taxonomy. The MolecularData area is inspired by an established standard in Generic Model Organism Databases, the Chado relational schema. The peculiarity of Chado, and also its strength, is the adoption of an ontological schema which makes use of the Sequence Ontology. The Intelligent Data Loader (IDL) component of GIDL is Extract, Transform and Load software able to parse data, to discover hidden information in the GenBank entries and to populate the Molecular Biodiversity Database. The IDL is composed of three main modules: the Parser, able to parse GenBank flat files; the Reasoner, which automatically builds CLIPS facts mapping the biological knowledge expressed by the Sequence Ontology; and the DBFiller, which translates the CLIPS facts into ordered SQL statements used to populate the database. In GIDL, Semantic Web technologies have been adopted due to their advantages in data representation, integration and processing. Results and conclusions Entries coming from the Virus (814,122), Plant (1,365,360) and Invertebrate (959,065) divisions of GenBank rel.180 have been loaded into the Molecular Biodiversity Database by GIDL. Our system, combining the Sequence Ontology and the Chado schema, allows more powerful query expressiveness compared with the most commonly used sequence retrieval systems like Entrez or SRS. PMID:22536971
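The three-module pipeline (Parser, Reasoner, DBFiller) can be illustrated with a toy Extract-Transform-Load sketch; the two-field flat-file format, the single derivation "rule", and the schema below are all invented stand-ins for GenBank records and the CLIPS rule engine:

```python
# Toy ETL mirroring GIDL's Parser -> Reasoner -> DBFiller structure.
# Flat-file format, rule, and schema are invented for illustration.
import sqlite3

FLATFILE = """\
ID seq1
ORG Bombyx mori
ID seq2
ORG Apis mellifera
"""

def parse(text):                       # Parser: flat file -> record dicts
    recs, cur = [], {}
    for line in text.splitlines():
        tag, value = line.split(" ", 1)
        if tag == "ID" and cur:        # a new ID starts the next record
            recs.append(cur)
            cur = {}
        cur[tag] = value
    if cur:
        recs.append(cur)
    return recs

def reason(rec):                       # Reasoner: derive hidden facts
    rec["kingdom"] = "Metazoa"         # trivial rule standing in for CLIPS
    return rec

def fill(db, recs):                    # DBFiller: ordered SQL statements
    db.execute("CREATE TABLE sequence (id TEXT, organism TEXT, kingdom TEXT)")
    db.executemany("INSERT INTO sequence VALUES (?, ?, ?)",
                   [(r["ID"], r["ORG"], r["kingdom"]) for r in recs])

db = sqlite3.connect(":memory:")
fill(db, [reason(r) for r in parse(FLATFILE)])
print(db.execute("SELECT id, organism FROM sequence").fetchall())
```

Separating the three stages is the design point: the parser knows only the file format, the reasoner only the enrichment rules, and the filler only the target schema, so any one can change independently.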

  11. Military Personnel: DOD Has Processes for Operating and Managing Its Sexual Assault Incident Database

    DTIC Science & Technology

    2017-01-01

    each change and its implementation status as well as supporting the audit of products to verify conformance to requirements. Through these change...management process for modifying DSAID aligns with information technology and project management industry standards. GAO reviewed DOD documents, and...Acknowledgments 32 Related GAO Products 33 Tables Table 1: Roles and Access Rights for Users of the Defense Sexual Assault Incident Database (DSAID)

  12. Potentials of Advanced Database Technology for Military Information Systems

    DTIC Science & Technology

    2001-04-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010866 TITLE: Potentials of Advanced Database Technology for Military...Technology for Military Information Systems Sunil Choenni, Ben Bruggeman, National Aerospace Laboratory, NLR, P.O. Box 90502, 1006 BM Amsterdam...application of advanced information technology, including database technology, as underpinning is...actions X and Y as dangerous or not?

  13. Implications of Multilingual Interoperability of Speech Technology for Military Use (Les implications de l’interoperabilite multilingue des technologies vocales pour applications militaires)

    DTIC Science & Technology

    2004-09-01

    Databases 2-2 2.3.1 Translanguage English Database 2-2 2.3.2 Australian National Database of Spoken Language 2-3 2.3.3 Strange Corpus 2-3 2.3.4...some relevance to speech technology research. 2.3.1 Translanguage English Database In a daring plan Joseph Mariani, then at LIMSI-CNRS, proposed to...native speakers. The database is known as the ‘Translanguage English Database’ but is often referred to as the ‘terrible English database.’ About 28

  14. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    PubMed Central

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, which access the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we adjusted and tuned the configurable parameters of the database server to maximize the performance of the data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. The same database was then reimplemented using HBase, a MapReduce-based database. The results indicated that the standard configuration of MySQL gives acceptable performance for small or medium-sized databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes. PMID:25202745
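    The kind of access-pattern timing study the abstract describes can be sketched with a tiny harness. SQLite stands in here for MySQL and HBase (both require running servers); the table, workload and row count are invented for the example, and only the measurement pattern (load, then time a batch of point lookups) reflects the study's approach.

```python
import sqlite3
import time

def time_point_lookups(conn, n_rows=10_000, step=100):
    """Load n_rows rows, then time a batch of primary-key point lookups."""
    conn.execute("CREATE TABLE gene (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO gene VALUES (?, ?)",
                     ((i, f"gene{i}") for i in range(n_rows)))
    start = time.perf_counter()
    for i in range(0, n_rows, step):
        conn.execute("SELECT name FROM gene WHERE id = ?", (i,)).fetchone()
    return time.perf_counter() - start

elapsed = time_point_lookups(sqlite3.connect(":memory:"))
print(f"{10_000 // 100} point lookups took {elapsed:.4f}s")
```

    Running the same workload under different server configurations (buffer sizes, index settings) is the comparison the paper performs at full scale.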

  15. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies

    PubMed Central

    Cserhati, Matyas F.; Pandey, Sanjit; Beaudoin, James J.; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S.

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33 017 407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. Database URL: http://nntc-dcc.unmc.edu PMID:26228431

  16. Designing Corporate Databases to Support Technology Innovation

    ERIC Educational Resources Information Center

    Gultz, Michael Jarett

    2012-01-01

    Based on a review of the existing literature on database design, this study proposed a unified database model to support corporate technology innovation. This study assessed potential support for the model based on the opinions of 200 technology industry executives, including Chief Information Officers, Chief Knowledge Officers and Chief Learning…

  17. National health care providers' database (NHCPD) of Slovenia--information technology solution for health care planning and management.

    PubMed

    Albreht, T; Paulin, M

    1999-01-01

    The article describes the possibilities for planning the health care providers' network enabled by the use of information technology. The cornerstone of such planning is the development and establishment of a quality database on health care providers, health care professionals and their employment statuses. Based on an analysis of information needs, a new database was developed for various users in health care delivery as well as for those in health insurance. The method of information engineering was used in the standard four steps of information system construction, while the whole project was run in accordance with the principles of two internationally approved project management methods. Special attention was dedicated to a careful analysis of the users' requirements, which we believe have been fulfilled to a very large degree. The new NHCPD is a relational database set up in two important state institutions, the National Institute of Public Health and the Health Insurance Institute of Slovenia. The former is responsible for updating the database, while the latter is responsible for the technological side as well as for the implementation of data security and protection. NHCPD will be interlinked with several other existing applications in the areas of health care, public health and health insurance. Several important state institutions and professional chambers are users of the database in question, thus integrating various aspects of the health care system in Slovenia. The setting up of a completely revised health care providers' database in Slovenia is an important step in the development of a uniform and integrated information system that would support top decision-making processes at the national level.

  18. Relax with CouchDB--into the non-relational DBMS era of bioinformatics.

    PubMed

    Manyam, Ganiraju; Payton, Michelle A; Roth, Jack A; Abruzzo, Lynne V; Coombes, Kevin R

    2012-07-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. Copyright © 2012 Elsevier Inc. All rights reserved.
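    The document-oriented modelling style the abstract applies with CouchDB can be illustrated in a few lines: a gene-centric record with nested annotations is stored and retrieved as one JSON document, where a relational schema would split the aliases and transcripts into separate joined tables. The field names below are invented for the sketch, not geneSmash's actual schema; only the `_id` key reflects CouchDB's convention of addressing documents by identifier.

```python
import json

# One gene-centric record as a single nested document (document-store style).
gene_doc = {
    "_id": "TP53",                      # CouchDB keys every document by _id
    "symbol": "TP53",
    "aliases": ["p53", "LFS1"],         # would be a child table relationally
    "transcripts": [                    # likewise a separate joined table
        {"accession": "NM_000546", "exons": 11},
    ],
}

# The whole annotation round-trips as one unit, no joins required.
serialized = json.dumps(gene_doc)
restored = json.loads(serialized)
print(restored["aliases"])
```

    In a live CouchDB deployment the serialized document would be sent to the server's HTTP API rather than kept in memory, but the one-record-per-document modelling decision is the same.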

  19. Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.

  20. 77 FR 66617 - HIT Policy and Standards Committees; Workgroup Application Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... Database AGENCY: Office of the National Coordinator for Health Information Technology, HHS. ACTION: Notice of New ONC HIT FACA Workgroup Application Database. The Office of the National Coordinator (ONC) has launched a new Health Information Technology Federal Advisory Committee Workgroup Application Database...

  1. Prisoners' expectations of the national forensic DNA database: surveillance and reconfiguration of individual rights.

    PubMed

    Machado, Helena; Santos, Filipe; Silva, Susana

    2011-07-15

    In this paper we aim to discuss what Portuguese prisoners know, and how they feel, about the surveillance mechanisms related to the inclusion and deletion of the DNA profiles of convicted criminals in the national forensic database. Through a set of interviews with individuals currently imprisoned, we focus on the ways this group perceives forensic DNA technologies. While institutional and political discourses maintain that the restricted use and application of DNA profiles within the national forensic database protects individuals' rights, the prisoners claim that police misuse of such technologies potentially makes it difficult to escape from surveillance and acts as a means of reinforcing the stigma of delinquency. The prisoners also argue that more intensive and extensive use of surveillance devices might be more protective of their own individual rights and might increase the potential for exoneration. Crown Copyright © 2011. Published by Elsevier Ireland Ltd. All rights reserved.

  2. Identifying and Synchronizing Health Information Technology (HIT) Events from FDA Medical Device Reports.

    PubMed

    Kang, Hong; Wang, Frank; Zhou, Sicheng; Miao, Qi; Gong, Yang

    2017-01-01

    Health information technology (HIT) events, a subtype of patient safety events, pose a major threat and barrier toward a safer healthcare system. It is crucial to gain a better understanding of the nature of the errors and adverse events caused by current HIT systems. The scarcity of HIT event-exclusive databases and event reporting systems indicates the challenge of identifying the HIT events from existing resources. FDA Manufacturer and User Facility Device Experience (MAUDE) database is a potential resource for HIT events. However, the low proportion and the rapid evolvement of HIT-related events present challenges for distinguishing them from other equipment failures and hazards. We proposed a strategy to identify and synchronize HIT events from MAUDE by using a filter based on structured features and classifiers based on unstructured features. The strategy will help us develop and grow an HIT event-exclusive database, keeping pace with updates to MAUDE toward shared learning.
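    The two-stage identification strategy described above (a filter on structured fields followed by a classifier on unstructured narrative text) can be sketched as follows. The device classes, keyword list, scoring rule and threshold are all invented for the illustration; the paper's actual classifiers are trained models, not keyword counts.

```python
# Stage 1: filter on structured report fields; Stage 2: score free text.
# Everything below (fields, keywords, threshold) is illustrative only.

HIT_KEYWORDS = {"ehr", "interface", "software", "alert", "order"}

def structured_filter(report):
    """Keep only reports whose device class could plausibly be HIT."""
    return report["device_class"] in {"data system", "software"}

def text_score(narrative):
    """Fraction of narrative words that hit the HIT keyword list."""
    words = [w.strip(".,") for w in narrative.lower().split()]
    return sum(w in HIT_KEYWORDS for w in words) / max(len(words), 1)

reports = [
    {"device_class": "software", "text": "EHR interface dropped the alert"},
    {"device_class": "infusion pump", "text": "battery failed during use"},
]

hit_events = [r for r in reports
              if structured_filter(r) and text_score(r["text"]) > 0.2]
print(len(hit_events))  # → 1
```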

  3. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies.

    PubMed

    Cserhati, Matyas F; Pandey, Sanjit; Beaudoin, James J; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33,017,407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. © The Author(s) 2015. Published by Oxford University Press.

  4. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Dezaki, Kyoko; Saeki, Makoto

    Rapid progress in advanced information use has increased the need to strengthen documentation activities in industry. In response, Tokin Corporation has been engaged in constructing databases for patent information, technical reports and other documents accumulated inside the company. Two systems have resulted: TOPICS, an in-house patent information management system, and TOMATIS, a management and technical information system built on personal computers and general-purpose relational database software. These systems aim at compiling databases of patent and technological management information generated internally and externally, at low labor effort and cost, and at providing comprehensive information company-wide. This paper introduces the outline of these systems and how they are actually used.

  5. Optoelectronics-related competence building in Japanese and Western firms

    NASA Astrophysics Data System (ADS)

    Miyazaki, Kumiko

    1992-05-01

    In this paper, an analysis is made of how different firms in Japan and the West have developed competence related to optoelectronics on the basis of their previous experience and corporate strategies. The sample consists of seven Japanese and four Western firms in the industrial, consumer electronics and materials sectors. Optoelectronics is divided into subfields including optical communications systems, optical fibers, optoelectronic key components, liquid crystal displays, optical disks, and others. The relative strengths and weaknesses of companies in the various subfields are determined using the INSPEC database, from 1976 to 1989. Parallel data are analyzed using OTAF U.S. patent statistics and the two sets of data are compared. The statistical analysis from the database is summarized for firms in each subfield in the form of an intra-firm technology index (IFTI), a new technique introduced to assess the revealed technology advantage of firms. The quantitative evaluation is complemented by results from intensive interviews with the management and scientists of the firms involved. The findings show a marked variation in the way firms' technological trajectories have evolved, giving rise to strengths in some subfields and weaknesses in others; these differences are related to the firms' accumulated core competencies, previous core business activities, and organizational, marketing, and competitive factors.

  6. Learning Asset Technology Integration Support Tool Design Document

    DTIC Science & Technology

    2010-05-11

    language known as Hypertext Preprocessor (PHP) and by MySQL – a relational database management system that can also be used for content management. It...Requirements The LATIST tool will be implemented utilizing a WordPress platform with MySQL as the database. Also the LATIST system must effectively work...MySQL. When designing the LATIST system there are several considerations which must be accounted for in the working prototype. These include: • DAU

  7. Cloud-Based Distributed Control of Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    during mission execution. At best, the data is saved onto hard-drives and is accessible only by the local team. Data history in a form available and...following open source technologies: GeoServer, OpenLayers, PostgreSQL, and PostGIS are chosen to implement the back-end database and server. A brief...geospatial map data. 3. PostgreSQL: An SQL-compliant object-relational database that easily scales to accommodate large amounts of data – upwards to

  8. Partial automation of database processing of simulation outputs from L-systems models of plant morphogenesis.

    PubMed

    Chen, Yi-Ping Phoebe; Hanan, Jim

    2002-01-01

    Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database, simplifying query development for the recursively structured branching relationship. Use of biological terminology in an interactive query builder helps make the system biologist-friendly.
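    The recursively structured branching relationship mentioned above is exactly what plain per-row SQL handles awkwardly. A minimal sketch, with an invented `segment` table of parent-child links, shows how a recursive common table expression retrieves every descendant of one branch in SQLite:

```python
import sqlite3

# Toy branching structure: each segment points at its parent internode.
# Table and column names are invented; real L-system output is far richer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE segment (id INTEGER PRIMARY KEY, parent INTEGER)")
conn.executemany("INSERT INTO segment VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 1), (4, 2), (5, 4)])

# Recursive CTE: start at segment 2, repeatedly join children onto the set.
descendants = conn.execute("""
    WITH RECURSIVE sub(id) AS (
        SELECT id FROM segment WHERE id = 2
        UNION ALL
        SELECT s.id FROM segment s JOIN sub ON s.parent = sub.id
    )
    SELECT id FROM sub ORDER BY id
""").fetchall()
print([row[0] for row in descendants])  # → [2, 4, 5]
```

    Pre-computing such derived attributes as extra fields, as the paper's compiler-technology approach does, trades storage for much simpler queries.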

  9. [Current status of DNA databases in the forensic field: new progress, new legal needs].

    PubMed

    Baeta, Miriam; Martínez-Jarreta, Begoña

    2009-01-01

    One of the most polemic issues regarding the use of deoxyribonucleic acid (DNA) in the legal sphere, refers to the creation of DNA databases. Until relatively recently, Spain did not have a law to support the establishment of a national DNA profile bank for forensic purposes, and preserve the fundamental rights of subjects whose data are archived therein. The regulatory law of police databases regarding identifiers obtained from DNA approved in 2007, covers this void in the Spanish legislation and responds to the incessant need to adapt the laws to continuous scientific and technological progress.

  10. A high performance, ad-hoc, fuzzy query processing system for relational databases

    NASA Technical Reports Server (NTRS)

    Mansfield, William H., Jr.; Fleischman, Robert M.

    1992-01-01

    Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
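    The exhaustive-scan evaluation of fuzzy predicates described above can be illustrated in a few lines: every record is scored by an ad-hoc membership function and kept if its degree exceeds a cut-off, with no index involved. The data, the triangular membership function and the threshold are all invented for the sketch.

```python
# Fuzzy predicate "height is tall" as a membership function returning 0..1.
# Parameters and records below are illustrative only.

def tall_membership(height_cm, low=160, peak=190):
    """Degree (0..1) to which a height counts as 'tall'."""
    if height_cm <= low:
        return 0.0
    if height_cm >= peak:
        return 1.0
    return (height_cm - low) / (peak - low)

records = [("ann", 155), ("bob", 175), ("eve", 192)]

# Exhaustive scan: score every record, keep those above an alpha-cut,
# and rank the survivors by membership degree.
results = sorted(((name, tall_membership(h)) for name, h in records
                  if tall_membership(h) > 0.3),
                 key=lambda r: -r[1])
print(results)  # → [('eve', 1.0), ('bob', 0.5)]
```

    Because the scan touches every record regardless of the predicate, response time depends on database size rather than query complexity, which is the property the Datacycle filtering hardware exploits.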

  11. Fifty years of Brazilian Dental Materials Group: scientific contributions of dental materials field evaluated by systematic review

    PubMed Central

    ROSA, Wellington Luiz de Oliveira; SILVA, Tiago Machado; LIMA, Giana da Silveira; SILVA, Adriana Fernandes; PIVA, Evandro

    2016-01-01

    ABSTRACT Objective A systematic review was conducted to analyze Brazilian scientific and technological production related to the dental materials field over the past 50 years. Material and Methods This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Searches were performed until December 2014 in six databases: MedLine (PubMed), Scopus, LILACS, IBECS, BBO, and the Cochrane Library. Additionally, the Brazilian patent database (INPI - Instituto Nacional de Propriedade Industrial) was screened in order to get an overview of Brazilian technological development in the dental materials field. Two reviewers independently analyzed the documents. Only studies and patents related to dental materials were included in this review. Data regarding the material category, dental specialty, number of documents and patents, affiliation countries, and the number of citations were tabulated and analyzed in Microsoft Office Excel (Microsoft Corporation, Redmond, Washington, United States). Results A total of 115,806 studies and 53 patents were related to dental materials and were included in this review. Brazil had 8% affiliation in studies related to dental materials, and the majority of the papers published were related to dental implants (1,137 papers), synthetic resins (681 papers), dental cements (440 papers), dental alloys (392 papers) and dental adhesives (361 papers). The Brazilian technological development with patented dental materials was smaller than the scientific production. The most patented type of material was dental alloys (11 patents), followed by dental implants (8 patents) and composite resins (7 patents). Conclusions Dental materials science has produced a substantial number of records, demonstrating an important presence in the scientific and technological development of dentistry. In addition, it is important to strengthen the relationship between academia and industry to expand technological development in countries such as Brazil. PMID:27383712

  12. Fifty years of Brazilian Dental Materials Group: scientific contributions of dental materials field evaluated by systematic review.

    PubMed

    Rosa, Wellington Luiz de Oliveira; Silva, Tiago Machado; Lima, Giana da Silveira; Silva, Adriana Fernandes; Piva, Evandro

    2016-01-01

    A systematic review was conducted to analyze Brazilian scientific and technological production related to the dental materials field over the past 50 years. This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Searches were performed until December 2014 in six databases: MedLine (PubMed), Scopus, LILACS, IBECS, BBO, and the Cochrane Library. Additionally, the Brazilian patent database (INPI - Instituto Nacional de Propriedade Industrial) was screened in order to get an overview of Brazilian technological development in the dental materials field. Two reviewers independently analyzed the documents. Only studies and patents related to dental materials were included in this review. Data regarding the material category, dental specialty, number of documents and patents, affiliation countries, and the number of citations were tabulated and analyzed in Microsoft Office Excel (Microsoft Corporation, Redmond, Washington, United States). A total of 115,806 studies and 53 patents were related to dental materials and were included in this review. Brazil had 8% affiliation in studies related to dental materials, and the majority of the papers published were related to dental implants (1,137 papers), synthetic resins (681 papers), dental cements (440 papers), dental alloys (392 papers) and dental adhesives (361 papers). The Brazilian technological development with patented dental materials was smaller than the scientific production. The most patented type of material was dental alloys (11 patents), followed by dental implants (8 patents) and composite resins (7 patents). Dental materials science has produced a substantial number of records, demonstrating an important presence in the scientific and technological development of dentistry. In addition, it is important to strengthen the relationship between academia and industry to expand technological development in countries such as Brazil.

  13. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    PubMed

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
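    The core idea behind the automatic SPARQL generation described above can be sketched simply: a relational column annotated with an ontology property becomes a triple pattern in a generated query. The mapping, prefixes and column names below are invented for the illustration and do not reflect BioSemantic's actual annotation format.

```python
# Hypothetical annotation: relational column -> ontology property.
MAPPING = {
    "gene.name": "obo:gene_symbol",
    "gene.chromosome": "obo:located_on",
}

def build_sparql(columns):
    """Generate a SPARQL query whose triple patterns mirror the annotations."""
    patterns = "\n  ".join(
        f"?s {MAPPING[col]} ?{col.split('.')[1]} ." for col in columns)
    return f"SELECT * WHERE {{\n  {patterns}\n}}"

print(build_sparql(["gene.name", "gene.chromosome"]))
```

    A wrapper built this way lets a client ask for columns by name while the service translates the request into SPARQL against the RDF view, which is the interoperability gain the paper pursues.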

  14. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  15. NASA Aerospace Flight Battery Systems Program Update

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle; ODonnell, Patricia

    1997-01-01

    The objectives of NASA's Aerospace Flight Battery Systems Program are to: develop, maintain and provide tools for the validation and assessment of aerospace battery technologies; accelerate the readiness of technology advances and provide infusion paths for emerging technologies; provide NASA projects with the required database and validation guidelines for technology selection of hardware and processes relating to aerospace batteries; disseminate validation and assessment tools, quality assurance, reliability, and availability information to the NASA and aerospace battery communities; and ensure that safe, reliable batteries are available for NASA's future missions.

  16. The Technology Education Graduate Research Database, 1892-2000. CTTE Monograph.

    ERIC Educational Resources Information Center

    Reed, Philip A., Ed.

    The Technology Education Graduate Research Database (TEGRD) was designed in two parts. The first part was a 384-page bibliography of theses and dissertations from 1892-2000. The second part was an online, searchable database of graduate research completed within technology education from 1892 to the present. The primary goals of the project were:…

  17. International energy: Research organizations, 1986--1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, P.; Jordan, S.

    The International Energy: Research Organizations publication contains the standardized names of energy research organizations used in energy information databases. Involved in this cooperative task are (1) the technical staff of the USDOE Office of Scientific and Technical Information (OSTI) in cooperation with the member countries of the Energy Technology Data Exchange (ETDE) and (2) the International Nuclear Information System (INIS). This publication identifies current organizations doing research in all energy fields, standardizes the format for recording these organization names in bibliographic citations, assigns a numeric code to facilitate data entry, and identifies report number prefixes assigned by these organizations. These research organization names may be used in searching the databases "Energy Science Technology" on DIALOG and "Energy" on STN International. These organization names are also used in USDOE databases on the Integrated Technical Information System. Research organizations active in the past five years, as indicated by database records, were identified to form this publication. This directory includes approximately 34,000 organizations that reported energy-related literature from 1986 to 1990 and updates the DOE Energy Data Base: Corporate Author Entries.

  18. The Social and Organizational Life Data Archive (SOLDA).

    ERIC Educational Resources Information Center

    Reed, Ken; Blunsdon, Betsy; Rimme, Malcolm

    2000-01-01

    Outlines the rationale and design of the Social and Organizational Life Data Archive (SOLDA), an on-line collection of survey and other statistical data relevant to research in the fields of management, organizational studies, industrial relations, marketing, and related social sciences. The database uses CD-ROM technology and the World Wide Web…

  19. Learners' Reflections in Technological Learning Environments: Why To Promote and How To Evaluate.

    ERIC Educational Resources Information Center

    Rimor, Rikki; Kozminsky, Ely

    In this study, 24 ninth-grade students investigated several issues related to modern Israeli society. In their investigation, students were engaged in activities such as data search, data sorting, making inquiries, project writing, and construction of a new computerized database related to the subjects of their investigations. Students were…

  20. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  1. Visibility of medical informatics regarding bibliometric indices and databases

    PubMed Central

    2011-01-01

    Background The quantitative study of the publication output (bibliometrics) deeply influences how scientific work is perceived (bibliometric visibility). Recently, new bibliometric indices and databases have been established, which may change the visibility of disciplines, institutions and individuals. This study examines the effects of the new indices on the visibility of Medical Informatics. Methods By objective criteria, three sets of journals are chosen, two representing Medical Informatics and a third addressing Internal Medicine as a benchmark. The availability of index data (index coverage) and the aggregate scores of these corpora are compared for journal-related (Journal impact factor, Eigenfactor metrics, SCImago journal rank) and author-related indices (Hirsch-index, Egghe's G-index). Correlation analysis compares the dependence of author-related indices. Results The bibliometric visibility depended on the research focus and the citation database: Scopus covers more journals relevant for Medical Informatics than ISI/Thomson Reuters. Journals focused on Medical Informatics' methodology were negatively affected by the Eigenfactor metrics, while the visibility profited from an interdisciplinary research focus. The correlation between Hirsch-indices computed on citation databases and the Internet was strong. Conclusions The visibility of smaller technology-oriented disciplines like Medical Informatics is changed by the new bibliometric indices and databases possibly leading to suitably changed publication strategies. Freely accessible author-related indices enable an easy and adequate individual assessment. PMID:21496230

  2. [Integrated DNA barcoding database for identifying Chinese animal medicine].

    PubMed

    Shi, Lin-Chun; Yao, Hui; Xie, Li-Fang; Zhu, Ying-Jie; Song, Jing-Yuan; Zhang, Hui; Chen, Shi-Lin

    2014-06-01

    In order to construct an integrated DNA barcoding database for identifying Chinese animal medicine, the authors and their cooperators have completed a series of studies on identifying Chinese animal medicines using DNA barcoding technology. Sequences from GenBank have been analyzed simultaneously. Three different methods, BLAST, barcoding gap and tree building, have been used to confirm the reliability of the barcode records in the database. The integrated DNA barcoding database for identifying Chinese animal medicine has been constructed from three different parts: specimen, sequence and literature information. This database contains about 800 animal medicines and their adulterants and closely related species. Unknown specimens can be identified by pasting their sequence record into the window on the ID page of the species identification system for traditional Chinese medicine (www.tcmbarcode.cn). The integrated DNA barcoding database for identifying Chinese animal medicine is significantly important for animal species identification, rare and endangered species conservation and sustainable utilization of animal resources.

  3. Computer Literacy for Teachers.

    ERIC Educational Resources Information Center

    Sarapin, Marvin I.; Post, Paul E.

    Basic concepts of computer literacy are discussed as they relate to industrial arts/technology education. Computer hardware development is briefly examined, and major software categories are defined, including database management, computer graphics, spreadsheet programs, telecommunications and networking, word processing, and computer assisted and…

  4. Database Technology Activities and Assessment for Defense Modeling and Simulation Office (DMSO) (August 1991-November 1992). A Documented Briefing

    DTIC Science & Technology

    1994-01-01

    …databases and identifying new data entities, data elements, and relationships. … Standard data naming conventions, schema, and definition processes … management system. The use of such a tool could offer: (1) structured support for representation of objects and their relationships to each other, and … their relationships to related multimedia objects such as an engineering drawing of the tank object or a satellite image that contains the installation …

  5. Semantic encoding of relational databases in wireless networks

    NASA Astrophysics Data System (ADS)

    Benjamin, David P.; Walker, Adrian

    2005-03-01

    Semantic Encoding is a new, patented technology that greatly increases the speed of transmission of distributed databases over networks, especially over ad hoc wireless networks, while providing a novel method of data security. It reduces bandwidth consumption and storage requirements, while speeding up query processing, encryption and computation of digital signatures. We describe the application of Semantic Encoding in a wireless setting and provide an example of its operation in which a compression of 290:1 would be achieved.
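    The patented Semantic Encoding scheme itself is not detailed in the abstract; the broad family of techniques it belongs to replaces repeated relational values with compact dictionary codes before transmission, so that bandwidth scales with distinct values rather than rows. A hypothetical, much-simplified sketch of that general idea (not the patented method):

```python
def encode_column(values):
    """Dictionary-encode one relational column: repeated values become
    small integer codes plus a one-off dictionary. This is a generic
    compression sketch, not the patented Semantic Encoding scheme."""
    dictionary = {}
    codes = []
    for v in values:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
        codes.append(dictionary[v])
    return dictionary, codes

# A column with heavy repetition compresses to a tiny dictionary + codes.
city = ["Boston", "Boston", "Chicago", "Boston", "Chicago"]
d, codes = encode_column(city)
print(d)      # → {'Boston': 0, 'Chicago': 1}
print(codes)  # → [0, 0, 1, 0, 1]
```

    Real systems layer further encoding and encryption on top; the point here is only why repeated relational values make ratios such as the reported 290:1 plausible for highly redundant tables.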

  6. DRUMS: Disk Repository with Update Management and Select option for high throughput sequencing data.

    PubMed

    Nettling, Martin; Thieme, Nils; Both, Andreas; Grosse, Ivo

    2014-02-04

    New technologies for analyzing biological samples, like next generation sequencing, are producing a growing amount of data together with quality scores. Moreover, software tools (e.g., for mapping sequence reads, calculating transcription factor binding probabilities, estimating epigenetic modification enriched regions, or determining single nucleotide polymorphisms) increase this amount of position-specific DNA-related data even further. Hence, requesting data becomes challenging and expensive and is often implemented using specialised hardware. In addition, picking specific data as fast as possible becomes increasingly important in many fields of science. The general problem of handling big data sets was addressed by developing specialized databases like HBase, HyperTable or Cassandra. However, these database solutions also require specialized or distributed hardware, leading to expensive investments. To the best of our knowledge, there is no database capable of (i) storing billions of position-specific DNA-related records, (ii) performing fast and resource-saving requests, and (iii) running on single standard computer hardware. Here, we present DRUMS (Disk Repository with Update Management and Select option), satisfying demands (i)-(iii). It tackles the weaknesses of traditional databases while handling position-specific DNA-related data in an efficient manner. DRUMS is capable of storing up to billions of records. Moreover, it focuses on optimizing related single lookups as range requests, which are needed constantly for computations in bioinformatics. To validate the power of DRUMS, we compare it to the widely used MySQL database. The test setting considers two biological data sets. We use standard desktop hardware as the test environment. DRUMS outperforms MySQL in writing and reading records by a factor of two up to a factor of 10,000. Furthermore, it can work with significantly larger data sets. Our work focuses on mid-sized data sets of up to several billion records without requiring cluster technology. Storing position-specific data is a general problem, and the concept we present here is a generalized approach. Hence, it can be easily applied to other fields of bioinformatics.
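    The access pattern DRUMS optimizes, collapsing many related position-specific lookups into one range request over sorted records, can be illustrated with a toy in-memory sketch (this is not the DRUMS implementation, which is disk-based; names and data are hypothetical):

```python
import bisect

# Position-specific records kept sorted by (chromosome, position);
# a range request then reduces to two binary searches plus a scan,
# instead of one index lookup per position.
records = sorted([
    ("chr1", 100, "A/G"),
    ("chr1", 250, "C/T"),
    ("chr1", 900, "G/T"),
    ("chr2", 50,  "A/C"),
])
keys = [(chrom, pos) for chrom, pos, _ in records]

def range_request(chrom, start, end):
    """Return all records on `chrom` with start <= position <= end."""
    lo = bisect.bisect_left(keys, (chrom, start))
    hi = bisect.bisect_right(keys, (chrom, end))
    return records[lo:hi]

print(range_request("chr1", 200, 1000))  # the two chr1 records in [200, 1000]
```

    Keeping records physically ordered by genomic coordinate is what lets sequential disk reads serve such ranges cheaply, which is the weakness of row-at-a-time lookups in a conventional relational store.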

  7. DRUMS: a human disease related unique gene mutation search engine.

    PubMed

    Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan

    2011-10-01

    With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and its phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data by a search engine approach. In this work, we have developed the disease related unique gene mutation search engine (DRUMS) as a convenient tool for biologists or physicians to retrieve gene variant and related phenotype information. Gene variant and phenotype information are stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases are indexed by the uniform resource identifier from the LSDB, or another central database. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS can be treated as a domain-specific search engine. By using web crawling, indexing, and searching technologies, it provides a competitively efficient interface for searching and retrieving mutation data and their relationships to diseases. The present system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.

  8. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience on database replication technologies used at WLCG and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvement in this area in recent past has been the introduction of Oracle GoldenGate as a replacement of Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard which has been adopted in several of the mission critical use cases for database replication between online and offline databases for the LHC experiments.

  9. Technology-based management of environmental organizations using an Environmental Management Information System (EMIS): Design and development

    NASA Astrophysics Data System (ADS)

    Kouziokas, Georgios N.

    2016-01-01

    The adoption of Information and Communication Technologies (ICT) in environmental management has become a significant demand nowadays with the rapid growth of environmental information. This paper presents a prototype Environmental Management Information System (EMIS) that was developed to provide a systematic way of managing environmental data and human resources of an environmental organization. The system was designed using programming languages, a Database Management System (DBMS) and other technologies and programming tools and combines information from the relational database in order to achieve the principal goals of the environmental organization. The developed application can be used to store and elaborate information regarding: human resources data, environmental projects, observations, reports, data about the protected species, environmental measurements of pollutant factors or other kinds of analytical measurements and also the financial data of the organization. Furthermore, the system supports the visualization of spatial data structures by using geographic information systems (GIS) and web mapping technologies. This paper describes this prototype software application, its structure, its functions and how this system can be utilized to facilitate technology-based environmental management and decision-making process.

  10. High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan

    2010-10-04

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors.
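    One small example of the semantic-structure analysis described above is factoring a triple collection by namespace, i.e. tallying which vocabularies its URIs come from. A toy sketch of that idea (illustrative only; the paper's system runs on a Cray XMT over billions of triples, and the triples below are invented):

```python
from collections import Counter

def namespace(uri):
    """Crude namespace split: everything up to the last '#' or '/'."""
    for sep in ("#", "/"):
        if sep in uri:
            return uri.rsplit(sep, 1)[0] + sep
    return uri

triples = [
    ("http://example.org/a",
     "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
     "http://xmlns.com/foaf/0.1/Person"),
    ("http://example.org/a",
     "http://xmlns.com/foaf/0.1/name",
     "Alice"),  # literal values carry no namespace
]
# Count namespace occurrences over every URI position of every triple.
counts = Counter(namespace(t)
                 for triple in triples
                 for t in triple if t.startswith("http"))
print(counts.most_common())
```

    At scale this kind of factoring reveals which vocabularies dominate a dataset, which is one input to the "semantic factors" analysis the abstract refers to.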

  11. Major technology issues in surgical data collection.

    PubMed

    Kirschenbaum, I H

    1995-10-01

    Surgical scheduling and data collection is a field that has a long history as well as a bright future. Historically, surgical cases have always involved some amount of data collection. Surgical cases are scheduled and then reviewed. The classic method, that large black surgical log, actually still exists in many hospitals. In fact, there is nothing new about the recording or reporting of surgical cases. If we only needed to record the information and produce a variety of reports on the data, then modern electronic technology would function as a glorified fast index card box--or, in computer database terms, a simple flat file database. But, this is not the future of technology in surgical case management. This article makes the general case for integrating surgical data systems. Instead of reviewing specific software, it essentially addresses the issues of strategic planning related to this important aspect of medical information systems.
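    The flat-file-versus-integrated-system distinction drawn above is easy to make concrete: once cases are linked to surgeons and procedures in a relational schema, reports the "black log" could not easily answer become a single query. A minimal sketch using SQLite (the schema and data are hypothetical, not any specific product):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A relational schema: cases reference surgeons, rather than one flat log.
con.executescript("""
    CREATE TABLE surgeon (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE surgical_case (
        id INTEGER PRIMARY KEY,
        surgeon_id INTEGER REFERENCES surgeon(id),
        procedure TEXT,
        case_date TEXT
    );
""")
con.execute("INSERT INTO surgeon VALUES (1, 'Dr. Gray')")
con.executemany(
    "INSERT INTO surgical_case VALUES (?, ?, ?, ?)",
    [(1, 1, 'appendectomy',    '1995-03-01'),
     (2, 1, 'cholecystectomy', '1995-03-02')],
)
# A report the flat log could not easily produce: case counts per surgeon.
for row in con.execute("""
        SELECT s.name, COUNT(*) FROM surgical_case c
        JOIN surgeon s ON s.id = c.surgeon_id GROUP BY s.name"""):
    print(row)  # → ('Dr. Gray', 2)
```

    The strategic-planning point of the article is exactly this step: moving from recording and reporting to an integrated model that other hospital systems can query.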

  12. Using bibliographic databases in technology transfer

    NASA Technical Reports Server (NTRS)

    Huffman, G. David

    1987-01-01

    When technology developed for a specific purpose is used in another application, the process is called technology transfer--the application of an existing technology to a new use or user for purposes other than those for which the technology was originally intended. Using Bibliographical Databases in Technology Transfer deals with demand-pull transfer, technology transfer that arises from need recognition, and is a guide for conducting demand-pull technology transfer studies. It can be used by a researcher as a self-teaching manual or by an instructor as a classroom text. A major problem of technology transfer is finding applicable technology to transfer. Described in detail is the solution to this problem, the use of computerized, bibliographic databases, which currently contain virtually all documented technology of the past 15 years. A general framework for locating technology is described. NASA technology organizations and private technology transfer firms are listed for consultation.

  13. The comparative effectiveness of conventional and digital image libraries.

    PubMed

    McColl, R I; Johnson, A

    2001-03-01

    Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.

  14. Craniofacial imaging informatics and technology development.

    PubMed

    Vannier, M W

    2003-01-01

    'Craniofacial imaging informatics' refers to image and related scientific data from the dentomaxillofacial complex, and application of 'informatics techniques' (derived from disciplines such as applied mathematics, computer science and statistics) to understand and organize the information associated with the data. Major trends in information technology determine the progress made in craniofacial imaging and informatics. These trends include industry consolidation, disruptive technologies, Moore's law, electronic atlases and on-line databases. Each of these trends is explained and documented, relative to their influence on craniofacial imaging. Craniofacial imaging is influenced by major trends that affect all medical imaging and related informatics applications. The introduction of cone beam craniofacial computed tomography scanners is an example of a disruptive technology entering the field. An important opportunity lies in the integration of biologic knowledge repositories with craniofacial images. The progress of craniofacial imaging will continue subject to limitations imposed by the underlying technologies, especially imaging informatics. Disruptive technologies will play a major role in the evolution of this field.

  15. Epistemonikos: a free, relational, collaborative, multilingual database of health evidence.

    PubMed

    Rada, Gabriel; Pérez, Daniel; Capurro, Daniel

    2013-01-01

    Epistemonikos (www.epistemonikos.org) is a free, multilingual database of the best available health evidence. This paper describes the design, development and implementation of the Epistemonikos project. Using several web technologies to store systematic reviews, their included articles, overviews of reviews and structured summaries, Epistemonikos is able to provide a simple and powerful search tool to access health evidence for sound decision making. Currently, Epistemonikos stores more than 115,000 unique documents and more than 100,000 relationships between documents. In addition, since its database is translated into 9 different languages, Epistemonikos ensures that non-English speaking decision-makers can access the best available evidence without language barriers.

  16. Usage of the Jess Engine, Rules and Ontology to Query a Relational Database

    NASA Astrophysics Data System (ADS)

    Bak, Jaroslaw; Jedrzejek, Czeslaw; Falkowski, Maciej

    We present a prototypical implementation of a library tool, the Semantic Data Library (SDL), which integrates the Jess (Java Expert System Shell) engine, rules and ontology to query a relational database. The tool extends the functionalities of previous OWL2Jess with SWRL implementations and takes full advantage of the Jess engine by separating forward and backward reasoning. The optimization of the integration of all these technologies is an advancement over previous tools. We discuss the complexity of the query algorithm. As a demonstration of the capability of the SDL library, we execute queries using a crime ontology being developed in the Polish PPBW project.
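    The forward reasoning the SDL separates out is forward chaining: rules fire on known facts (here, think of rows fetched from the relational database) and derive new facts until a fixed point is reached. A toy illustration of that mechanism, not the Jess API or the SDL itself (the facts and rule are invented):

```python
# Facts as tagged tuples, e.g. rows pulled from a relational store.
facts = {("parent", "anna", "bea"), ("parent", "bea", "carl")}

def grandparent_rule(facts):
    """IF parent(a, b) AND parent(b, d) THEN grandparent(a, d)."""
    derived = set()
    for (p1, a, b) in facts:
        for (p2, c, d) in facts:
            if p1 == p2 == "parent" and b == c:
                derived.add(("grandparent", a, d))
    return derived

# Forward chaining: apply rules until no new facts appear (fixed point).
new = grandparent_rule(facts)
while not new <= facts:
    facts |= new
    new = grandparent_rule(facts)
print(("grandparent", "anna", "carl") in facts)  # → True
```

    A production engine like Jess does this with the Rete algorithm rather than naive re-matching, but the derive-until-fixed-point semantics is the same.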

  17. Evaluation of "shotgun" proteomics for identification of biological threat agents in complex environmental matrixes: experimental simulations.

    PubMed

    Verberkmoes, Nathan C; Hervey, W Judson; Shah, Manesh; Land, Miriam; Hauser, Loren; Larimer, Frank W; Van Berkel, Gary J; Goeringer, Douglas E

    2005-02-01

    There is currently a great need for rapid detection and positive identification of biological threat agents, as well as microbial species in general, directly from complex environmental samples. This need is most urgent in the area of homeland security, but also extends into medical, environmental, and agricultural sciences. Mass-spectrometry-based analysis is one of the leading technologies in the field, with a diversity of different methodologies for biothreat detection. Over the past few years, "shotgun" proteomics has become one method of choice for the rapid analysis of complex protein mixtures by mass spectrometry. Recently, it was demonstrated that this methodology is capable of distinguishing a target species against a large database of background species from a single-component sample or from dual-component mixtures of roughly equal concentration. Here, we examine the potential of shotgun proteomics to analyze a target species in a background of four contaminant species. We tested the capability of a common commercial mass-spectrometry-based shotgun proteomics platform for the detection of the target species (Escherichia coli) at four different concentrations and four different time points of analysis. We also tested the effect of database size on positive identification of the four microbes used in this study by testing a small (13-species) database and a large (261-species) database. The results clearly indicated that this technology could easily identify the target species at 20% in the background mixture at a 60, 120, 180, or 240 min analysis time with the small database. The results also indicated that the target species could easily be identified at 20% or 6%, but could not be identified at 0.6% or 0.06%, in either a 240 min analysis or a 30 h analysis with the small database. With the large database, the effect was severe: the target species could not be detected above the background at any concentration used in this study, though the three other microbes were clearly identified above the background. This study points to the potential application of this technology for biological threat agent detection but highlights many areas of needed research before the technology will be useful in real world samples.

  18. A New Approach To Secure Federated Information Bases Using Agent Technology.

    ERIC Educational Resources Information Center

    Weippi, Edgar; Klug, Ludwig; Essmayr, Wolfgang

    2003-01-01

    Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…

  19. Geo-spatial Service and Application based on National E-government Network Platform and Cloud

    NASA Astrophysics Data System (ADS)

    Meng, X.; Deng, Y.; Li, H.; Yao, L.; Shi, J.

    2014-04-01

    With the acceleration of China's informatization process, the Chinese government has made substantive strides in advancing the development and application of digital technology, which promotes the evolution of e-government and its informatization. Meanwhile, as a service mode based on innovative resources, cloud computing can connect huge resource pools to provide a variety of IT services, and has become a relatively mature technical pattern backed by further studies and massive practical applications. Based on cloud computing technology and the national e-government network platform, the "National Natural Resources and Geospatial Database (NRGD)" project integrated and transformed natural resources and geospatial information dispersed across various sectors and regions, established a logically unified and physically dispersed fundamental database, and developed a national integrated information database system supporting the main e-government applications. Cross-sector e-government applications and services are realized to provide long-term, stable and standardized natural resources and geospatial fundamental information products and services for national e-government and public users.

  20. Information persistence using XML database technology

    NASA Astrophysics Data System (ADS)

    Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.

    2005-05-01

    The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network centric core enterprise services and when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way. 
This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and improved retrieval techniques.
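    The three database categories compared in the paper differ mainly in how a document is decomposed for storage. A toy contrast between the "native XML" approach (persist documents whole, parse on retrieval) and relational shredding (decompose elements into rows, query with plain SQL), using SQLite and a made-up document purely for illustration:

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = "<msg id='42'><to>ops</to><body>status green</body></msg>"

con = sqlite3.connect(":memory:")
# Native-XML style: persist the document whole, parse on retrieval.
con.execute("CREATE TABLE xml_store (id TEXT PRIMARY KEY, doc TEXT)")
# Relational ("shredded") style: decompose child elements into rows.
con.execute("CREATE TABLE msg_rows (id TEXT, tag TEXT, text TEXT)")

root = ET.fromstring(doc)
con.execute("INSERT INTO xml_store VALUES (?, ?)", (root.get("id"), doc))
for child in root:
    con.execute("INSERT INTO msg_rows VALUES (?, ?, ?)",
                (root.get("id"), child.tag, child.text))

# Retrieval differs accordingly: re-parse the stored document vs plain SQL.
stored = con.execute("SELECT doc FROM xml_store WHERE id='42'").fetchone()[0]
print(ET.fromstring(stored).find("to").text)                                    # → ops
print(con.execute("SELECT text FROM msg_rows WHERE tag='body'").fetchone()[0])  # → status green
```

    The trade-off sketched here, document fidelity versus SQL queryability and indexing, is exactly where XML-enabled products sit between the two pure approaches.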

  1. EST databases and web tools for EST projects.

    PubMed

    Shen, Yao-Qing; O'Brien, Emmet; Koski, Liisa; Lang, B Franz; Burger, Gertraud

    2009-01-01

    This chapter outlines key considerations for constructing and implementing an EST database. Instead of showing the technological details step by step, emphasis is put on the design of an EST database suited to the specific needs of EST projects and how to choose the most suitable tools. Using TBestDB as an example, we illustrate the essential factors to be considered for database construction and the steps for data population and annotation. This process employs technologies such as PostgreSQL, Perl, and PHP to build the database and interface, and tools such as AutoFACT for data processing and annotation. We discuss these in comparison to other available technologies and tools, and explain the reasons for our choices.

  2. Copyright, Licensing Agreements and Gateways.

    ERIC Educational Resources Information Center

    Elias, Arthur W.

    1990-01-01

    Discusses technological developments in information distribution and management in relation to concepts of ownership. A historical overview of the concept of copyright is presented; licensing elements for databases are examined; and implications for gateway systems are explored, including ownership, identification of users, and allowable uses of…

  3. FINDING COMMON GROUND IN MANAGING DATA USED IN REGIONAL ENVIRONMENTAL ASSESSMENTS

    EPA Science Inventory

    Evaluating the overall environmental health of a region invariably involves using databases from multiple organizations. Several approaches to deal with the related technological and sociological issues have been used by various programs. Flexible data systems are required to de...

  4. Writing Apprehension, Computer Anxiety and Telecomputing: A Pilot Study.

    ERIC Educational Resources Information Center

    Harris, Judith; Grandgenett, Neal

    1992-01-01

    A study measured graduate students' writing apprehension and computer anxiety levels before and after using electronic mail, computer conferencing, and remote database searching facilities during an educational technology course. Results indicated that postcourse computer anxiety levels were significantly related to usage statistics. Precourse writing…

  5. High performance semantic factoring of giga-scale semantic graph databases.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    al-Saffar, Sinan; Adolf, Bob; Haglin, David

    2010-10-01

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.

  6. BIRS - Bioterrorism Information Retrieval System.

    PubMed

    Tewari, Ashish Kumar; Rashi; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Jain, Chakresh Kumar

    2013-01-01

Bioterrorism is the intentional use of pathogenic strains of microbes to spread terror in a population. There is a definite need to promote research for the development of vaccines, therapeutics and diagnostic methods as part of preparedness for any future bioterror attack. BIRS is an open-access database of collective information on the organisms related to bioterrorism. The architecture of the database utilizes current open-source technologies, viz. PHP ver. 5.3.19, MySQL and IIS server under the Windows platform, for database design. The database stores information on literature, generic information and unique pathways of about 10 microorganisms involved in bioterrorism. This may serve as a collective repository to accelerate drug discovery and vaccine design against such bioterrorist agents (microbes). The available data have been validated from various online resources and by literature mining in order to provide the user with a comprehensive information system. The database is freely available at http://www.bioterrorism.biowaves.org.

  7. The development of digital library system for drug research information.

    PubMed

    Kim, H J; Kim, S R; Yoo, D S; Lee, S H; Suh, O K; Cho, J H; Shin, H T; Yoon, J P

    1998-01-01

The sophistication of computer technology and information transmission on the internet has made various cyber information repositories available to information consumers. In the era of the information super-highway, the digital library, which can be accessed from remote sites at any time, is considered the prototype of the information repository. Using an object-oriented DBMS, the very first model of a digital library for pharmaceutical researchers and related professionals in Korea has been developed. Published research papers and researchers' personal information were included in the database. For the database of research papers, 13 domestic journals were abstracted and scanned into full-text image files which can be viewed by Internet web browsers. The database of researchers' personal information was also developed and interlinked to the database of research papers. These databases will be continuously updated and will be combined with worldwide information as a unique digital library in the field of pharmacy.

  8. Military, Charter, Unreported Domestic Traffic and General Aviation 1976, 1984, 1992, and 2015 Emission Scenarios

    NASA Technical Reports Server (NTRS)

    Mortlock, Alan; VanAlstyne, Richard

    1998-01-01

The report describes the development of databases estimating aircraft engine exhaust emissions for the years 1976 and 1984 from global operations of Military, Charter, historic Soviet and Chinese, Unreported Domestic traffic, and General Aviation (GA). These databases were developed under the National Aeronautics and Space Administration's (NASA) Advanced Subsonic Assessment (AST). McDonnell Douglas Corporation (MDC), now part of the Boeing Company, had previously estimated engine exhaust emissions databases for the baseline year of 1992 and a 2015 forecast year scenario. Since their original creation (Ward, 1994 and Metwally, 1995), revised technology algorithms have been developed. Additionally, GA databases have been created and all past MDC emission inventories have been updated to reflect the new technology algorithms. Revised data (Baughcum, 1996 and Baughcum, 1997) for the scheduled inventories have been used in this report to provide a comparison of the total aviation emission forecasts from various components. Global results of two historic years (1976 and 1984), a baseline year (1992) and a forecast year (2015) are presented. Since engine emissions are directly related to fuel usage, an overview of individual aviation annual global fuel use for each inventory component is also given in this report.

  9. Information technologies for astrophysics circa 2001

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large datasets. Three limiting paradigms are saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage and retrieval off the shelf; and the linear mode of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.

  10. Technology and its role in rehabilitation for people with cognitive-communication disability following a traumatic brain injury (TBI).

    PubMed

    Brunner, Melissa; Hemsley, Bronwyn; Togher, Leanne; Palmer, Stuart

    2017-01-01

To review the literature on communication technologies in rehabilitation for people with a traumatic brain injury (TBI), and to: (a) determine its application to cognitive-communicative rehabilitation, and (b) develop a model to guide communication technology use with people after TBI. This integrative literature review of communication technology in TBI rehabilitation and cognitive-communication involved searching nine scientific databases and included 95 studies. Three major types of communication technologies (assistive technology, augmentative and alternative communication technology, and information communication technology) and multiple factors relating to the use of technology by or with people after TBI were categorized according to: (i) individual needs, motivations and goals; (ii) individual impairments, activities, participation and environmental factors; and (iii) technologies. While there is substantial research relating to communication technologies and cognitive rehabilitation after TBI, little relates specifically to cognitive-communication rehabilitation. Further investigation is needed into the experiences and views of people with TBI who use communication technologies, to provide the 'user' perspective and influence user-centred design. Research is necessary to investigate the training interventions that address factors fundamental for success, and any impact on communication. The proposed model provides an evidence-based framework for incorporating technology into speech pathology clinical practice and research.

  11. The design and implementation of image query system based on color feature

    NASA Astrophysics Data System (ADS)

    Yao, Xu-Dong; Jia, Da-Chun; Li, Lin

    2013-07-01

ASP.NET technology was used to construct a B/S-mode image query system. The theory and technology of database design, color-feature extraction from images, and indexing and retrieval were researched in constructing the image repository. The system was tested in campus LAN and WAN environments; the test results show that the system architecture design meets users' needs for querying related resources.
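The color-feature step described above can be sketched as follows. The coarse RGB histogram, the histogram-intersection similarity, and the synthetic pixel data below are illustrative assumptions, not the system's actual implementation (which is ASP.NET-based):

```python
# Sketch: rank images against a query by comparing coarse color histograms.

def rgb_histogram(pixels, bins=4):
    """Coarse color feature: each channel quantized into `bins` buckets."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Synthetic "images": two mostly-red pixel lists and one mostly-blue one.
red_query = [(250, 10, 10)] * 90 + [(10, 10, 250)] * 10
red_match = [(240, 20, 20)] * 85 + [(10, 250, 10)] * 15
blue_img = [(10, 10, 250)] * 100

q = rgb_histogram(red_query)
print(round(intersection(q, rgb_histogram(red_match)), 2))  # 0.85
print(round(intersection(q, rgb_histogram(blue_img)), 2))   # 0.1
```

The mostly-red image scores far higher against the red query than the blue one, which is the ranking behavior a color-feature index relies on.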

  12. 34 CFR 361.23 - Requirements related to the statewide workforce investment system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... technology for individuals with disabilities; (ii) The use of information and financial management systems... statistics, job vacancies, career planning, and workforce investment activities; (iii) The use of customer service features such as common intake and referral procedures, customer databases, resource information...

  13. Implementation of a Computerized Maintenance Management System

    NASA Technical Reports Server (NTRS)

    Shen, Yong-Hong; Askari, Bruce

    1994-01-01

A Computerized Maintenance Management System (CMMS) has been established for the NASA Ames pressure component certification program. The CMMS takes full advantage of the latest computer technology and an SQL relational database to perform periodic services for vital pressure components. The Ames certification program is briefly described, and aspects of the CMMS implementation are discussed as they relate to the certification objectives.
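How a CMMS might use an SQL relational database to flag components due for periodic service can be sketched as below; the schema, component names, dates, and intervals are hypothetical, not the Ames system's:

```python
# Sketch: a minimal CMMS query that finds components overdue for service.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE components (
    id INTEGER PRIMARY KEY,
    name TEXT,
    last_service TEXT,      -- ISO date of last certification service
    interval_days INTEGER   -- required periodic service interval
)""")
conn.executemany(
    "INSERT INTO components VALUES (?, ?, ?, ?)",
    [(1, "relief valve A", "2024-01-15", 180),
     (2, "pressure vessel B", "2025-06-01", 365)],
)

# A component is due if last_service + interval_days falls on or before today.
today = "2025-09-01"
due = conn.execute(
    "SELECT name FROM components "
    "WHERE date(last_service, '+' || interval_days || ' days') <= date(?)",
    (today,),
).fetchall()
print(due)  # [('relief valve A',)]
```

Keeping the interval in the same table lets one query cover components with different certification periods, which is the core convenience a relational CMMS provides.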

  14. Accuracy of LightCycler(R) SeptiFast for the detection and identification of pathogens in the blood of patients with suspected sepsis: a systematic review protocol.

    PubMed

    Dark, Paul; Wilson, Claire; Blackwood, Bronagh; McAuley, Danny F; Perkins, Gavin D; McMullan, Ronan; Gates, Simon; Warhurst, Geoffrey

    2012-01-01

Background: There is growing interest in the potential utility of molecular diagnostics in improving the detection of life-threatening infection (sepsis). LightCycler® SeptiFast is a multipathogen probe-based real-time PCR system targeting DNA sequences of bacteria and fungi present in blood samples within a few hours. We report here the protocol of the first systematic review of published clinical diagnostic accuracy studies of this technology when compared with blood culture in the setting of suspected sepsis. Methods/design: Data sources: the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA), the NHS Economic Evaluation Database (NHSEED), The Cochrane Library, MEDLINE, EMBASE, ISI Web of Science, BIOSIS Previews, MEDION and the Aggressive Research Intelligence Facility Database (ARIF). Study selection: diagnostic accuracy studies that compare the real-time PCR technology with standard culture results performed on a patient's blood sample during the management of sepsis. Data extraction: three reviewers, working independently, will determine the level of evidence, methodological quality and a standard data set relating to demographics and diagnostic accuracy metrics for each study. Statistical analysis/data synthesis: heterogeneity of studies will be investigated using a coupled forest plot of sensitivity and specificity and a scatter plot in Receiver Operator Characteristic (ROC) space. The bivariate model method will be used to estimate summary sensitivity and specificity. The authors will investigate reporting biases using funnel plots based on effective sample size and regression tests of asymmetry. Subgroup analyses are planned for adults, children and infection setting (hospital vs community) if sufficient data are uncovered. Dissemination: Recommendations will be made to the Department of Health (as part of an open-access HTA report) as to whether the real-time PCR technology has sufficient clinical diagnostic accuracy potential to move forward to efficacy testing during the provision of routine clinical care. Registration: PROSPERO-NIHR Prospective Register of Systematic Reviews (CRD42011001289).
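The per-study accuracy metrics that such a review pools, sensitivity and specificity, come from 2x2 tables of the index test against the reference standard (blood culture). A minimal sketch with made-up counts:

```python
# Sketch: sensitivity and specificity from a 2x2 diagnostic table.
# tp/fp/fn/tn are illustrative counts, not data from any included study.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Return (sensitivity, specificity) for one study's 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

sens, spec = diagnostic_accuracy(tp=45, fp=10, fn=5, tn=140)
print(round(sens, 2), round(spec, 2))  # 0.9 0.93
```

The bivariate model mentioned in the protocol then pools these paired (sensitivity, specificity) estimates across studies while modeling their correlation.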

  15. [Effect of 3D printing technology on pelvic fractures:a Meta-analysis].

    PubMed

    Zhang, Yu-Dong; Wu, Ren-Yuan; Xie, Ding-Ding; Zhang, Lei; He, Yi; Zhang, Hong

    2018-05-25

To evaluate the effect of 3D printing technology applied in the surgical treatment of pelvic fractures through the published literature by Meta-analysis. The PubMed database, EMCC database, CBM database, CNKI database, VIP database and Wanfang database were searched from the date of database foundation to August 2017 to collect controlled clinical trials in which 3D printing technology was applied in the preoperative planning of pelvic fracture surgery. The retrieved literature was screened according to predefined inclusion and exclusion criteria, and quality evaluation was performed. Then, the available data were extracted and analyzed with the RevMan5.3 software. Totally 9 controlled clinical trials including 638 cases were chosen. Among them, 279 cases were assigned to the 3D printing technology group and 359 cases to the conventional group. The Meta-analysis results showed that the operative time [SMD=-2.81, 95%CI(-3.76, -1.85)], intraoperative blood loss [SMD=-3.28, 95%CI(-4.72, -1.85)] and the rate of complications [OR=0.47, 95%CI(0.25, 0.87)] in the 3D printing technology group were all lower than those in the conventional group; the excellent and good rate of pelvic fracture reduction [OR=2.09, 95%CI(1.32, 3.30)] and postoperative pelvic functional restoration [OR=1.94, 95%CI(1.15, 3.28)] in the 3D printing technology group were all superior to those in the conventional group. 3D printing technology applied in the surgical treatment of pelvic fractures has the advantages of shorter operative time, less intraoperative blood loss and a lower rate of complications, and can improve the quality of pelvic fracture reduction and the recovery of postoperative pelvic function. Copyright© 2018 by the China Journal of Orthopaedics and Traumatology Press.
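How per-study effect sizes like the odds ratios above get pooled can be sketched with an inverse-variance fixed-effect model, a standard meta-analysis technique; the study data below are invented, and RevMan's own model choices (fixed vs random effects) may differ:

```python
# Sketch: pool per-study odds ratios reported as OR with a 95% CI.
# The SE of log(OR) is recovered from the CI width: (ln hi - ln lo) / (2*1.96).
import math

def pooled_or(studies):
    """studies: list of (odds_ratio, ci_low, ci_high) at the 95% level."""
    num = den = 0.0
    for or_, ci_low, ci_high in studies:
        log_or = math.log(or_)
        se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
        weight = 1.0 / se ** 2            # inverse-variance weight
        num += weight * log_or
        den += weight
    mean_log = num / den
    se_pooled = math.sqrt(1.0 / den)
    ci = (math.exp(mean_log - 1.96 * se_pooled),
          math.exp(mean_log + 1.96 * se_pooled))
    return math.exp(mean_log), ci

pooled, (ci_lo, ci_hi) = pooled_or([(0.50, 0.30, 0.85), (0.45, 0.20, 0.90)])
print(round(pooled, 2))  # 0.48
```

Pooling on the log scale is what makes the inverse-variance weighting valid, since log odds ratios are approximately normally distributed.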

  16. Optical components damage parameters database system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong

    2012-10-01

Optical components are key to large-scale laser devices: their load capacity is directly related to the device's output capacity, and that load capacity depends on many factors. By digitizing the various factors affecting load capacity into an optical-component damage parameter database, the system provides scientific data support for assessing the load capacity of optical components. Using business-process and model-driven approaches, a component damage parameter information model and database system were established. Application results show that the system meets the business-process and data-management requirements of optical component damage testing; its parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical component damage tests.

  17. Advanced Satellite Research Project: SCAR Research Database. Bibliographic analysis

    NASA Technical Reports Server (NTRS)

    Pelton, Joseph N.

    1991-01-01

A literature search was conducted to locate and analyze the most recent literature relevant to the research. This was done by cross-relating books, articles, monographs, and journals on the following topics: (1) experimental systems: the Advanced Communications Technology Satellite (ACTS); and (2) Integrated Services Digital Network (ISDN) and advanced communication techniques (ISDN and satellites, ISDN standards, broadband ISDN, frame relay and switching, computer networks and satellites, satellite orbits and technology, satellite transmission quality, and network configuration). A bibliographic essay on the literature citations and articles reviewed during the literature search task is provided.

  18. Analysis of Aviation Safety Reporting System Incident Data Associated With the Technical Challenges of the Vehicle Systems Safety Technology Project

    NASA Technical Reports Server (NTRS)

    Withrow, Colleen A.; Reveley, Mary S.

    2014-01-01

This analysis was conducted to support the Vehicle Systems Safety Technology (VSST) Project of the Aviation Safety Program (AvSP) milestone VSST4.2.1.01, "Identification of VSST-Related Trends." In particular, this is a review of incident data from the NASA Aviation Safety Reporting System (ASRS). The following three VSST-related technical challenges (TCs) were the focus of the incidents searched in the ASRS database: (1) vehicle health assurance; (2) effective crew-system interactions and decisions in all conditions; and (3) aircraft loss of control prevention, mitigation, and recovery.

  19. Agribusiness and space: No limits to growth

    NASA Technical Reports Server (NTRS)

    Montgomery, O. L.; Paludan, C. T. N.

    1984-01-01

    Technological developments responding to world food needs are examined. It is noted that agribusiness technology has become more space-related in recent years. Although crops forecasting and improvements in yield (the green revolution) were developed prior to the space era, it would be unthinkable today to ignore the contributions of operational meteorological and communications satellites and experimental Earth observation satellites in agribusiness. Space-driven communications now permit national agribusiness database management networks, with a significant portion of the data being space-derived. In demonstration experiments, space communications were shown to improve those aspects of the food problem related to education and communications.

  20. FORENSIC DNA BANKING LEGISLATION IN DEVELOPING COUNTRIES: PRIVACY AND CONFIDENTIALITY CONCERNS REGARDING A DRAFT FROM TURKISH LEGISLATION.

    PubMed

    Ilgili, Önder; Arda, Berna

This paper presents and analyses, in terms of privacy and confidentiality, the Turkish Draft Law on the National DNA Database prepared in 2004, concerning the use of DNA analysis for forensic objectives and identity verification in Turkey. After a short introduction covering related concepts, we evaluate the draft law and its articles concerning confidentiality. The evaluation highlighted several topics of international importance for developing countries. The need for sophisticated legislation on DNA databases, for solutions to issues related to the education of employees, and for addressing technological dependency on other countries emerged as the main confidentiality challenges for developing countries. As seen in the Turkish Draft Law on the National DNA Database, the protection of fundamental rights and freedoms requires particular care during legislative efforts.

  1. Where can cone penetrometer technology be applied? Development of a map of Europe regarding the soil penetrability.

    PubMed

    Fleischer, Matthias; van Ree, Derk; Leven, Carsten

    2014-01-01

Over the past decades, significant efforts have been invested in the development of push-in technology for site characterization and monitoring for geotechnical and environmental purposes, especially in the Netherlands and Germany. These technologies enable faster, cheaper collection of more reliable subsurface data. However, to maximize the technology from both a development and an implementation point of view, it is necessary to have an overview of the areas suitable for the application of this type of technology. Such an overview is missing and cannot simply be read from existing maps and material. This paper describes the development of a map showing the feasibility, or applicability, of Direct Push/Cone Penetrometer Technology (DPT/CPT) in Europe, which depends on the subsurface and its widely varying properties across Europe. Subsurface penetrability, of which the maximum reachable depth is of particular interest, depends on a range of factors that have not been mapped directly and cannot easily be inferred from existing databases: among others, the geology, the soil mechanical properties, the type of equipment used, and soil-forming processes. This study starts by looking at the different geological databases available at the European scale. Next, a scheme was developed linking mapped geological properties to geotechnical properties to determine basic penetrability categories. From this, a map of soil penetrability is developed and presented. Validating the output by performing field tests was beyond the scope of this study, but for the Netherlands this map has been compared against a database containing actual cone penetrometer depth data to look for contradictory results that would negate the approach.
The map for the largest part of Europe clearly shows that there is a much wider potential for the application of Direct Push Technology than is currently seen. The study also shows that there is a lack of large-scale databases that contain depth-resolved data as well as soil mechanical and physical properties that can be used for engineering purposes in relation to the subsurface.
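The linking step the study describes, from mapped geological units to coarse penetrability categories, can be sketched as a simple lookup; the unit names and ratings below are invented for illustration and are not the study's actual classification scheme:

```python
# Sketch: map geological units to coarse CPT penetrability categories.
# Real schemes would also weigh equipment type and soil-forming processes.

PENETRABILITY = {
    "clay": "good",
    "silt": "good",
    "sand": "moderate",
    "gravel": "poor",
    "bedrock": "not penetrable",
}

def classify(units):
    """Rate each mapped geological unit; unlisted units stay 'unknown'."""
    return {u: PENETRABILITY.get(u, "unknown") for u in units}

print(classify(["clay", "gravel", "basalt"]))
# {'clay': 'good', 'gravel': 'poor', 'basalt': 'unknown'}
```

The 'unknown' fallback matters at continental scale, where many mapped units lack the soil-mechanical data needed for a confident rating.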

  2. ECLSS evolution: Advanced instrumentation interface requirements. Volume 3: Appendix C

    NASA Technical Reports Server (NTRS)

    1991-01-01

An Advanced ECLSS (Environmental Control and Life Support System) Technology Interfaces Database was developed primarily to provide ECLSS analysts with a centralized and portable source of ECLSS technology interface requirements data. The database contains 20 technologies which were previously identified in the MDSSC ECLSS Technologies database. The primary interfaces of interest in this database are fluid, electrical, and data/control interfaces, and resupply requirements. Each record contains fields describing the function and operation of the technology. Fields include an interface diagram, a description of applicable design points and operating ranges, and an explanation of data, as required. A complete set of data was entered for six of the twenty components, including Solid Amine Water Desorbed (SAWD), Thermoelectric Integrated Membrane Evaporation System (TIMES), Electrochemical Carbon Dioxide Concentrator (EDC), Solid Polymer Electrolysis (SPE), Static Feed Electrolysis (SFE), and BOSCH. Additional data were collected for Reverse Osmosis Water Reclamation - Potable (ROWRP), Reverse Osmosis Water Reclamation - Hygiene (ROWRH), Static Feed Solid Polymer Electrolyte (SFSPE), Trace Contaminant Control System (TCCS), and Multifiltration Water Reclamation - Hygiene (MFWRH). A summary of the database contents is presented in this report.

  3. Keyless Entry: Building a Text Database Using OCR Technology.

    ERIC Educational Resources Information Center

    Grotophorst, Clyde W.

    1989-01-01

    Discusses the use of optical character recognition (OCR) technology to produce an ASCII text database. A tutorial on digital scanning and OCR is provided, and a systems integration project which used the Calera CDP-3000XF scanner and text retrieval software to construct a database of dissertations at George Mason University is described. (four…

  4. A Database Practicum for Teaching Database Administration and Software Development at Regis University

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2013-01-01

    This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers and can inspire students to…

  5. Current Abstracts Nuclear Reactors and Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bales, J.D.; Hicks, S.C.

    1993-01-01

This publication Nuclear Reactors and Technology (NRT) announces on a monthly basis the current worldwide information available from the open literature on nuclear reactors and technology, including all aspects of power reactors, components and accessories, fuel elements, control systems, and materials. This publication contains the abstracts of DOE reports, journal articles, conference papers, patents, theses, and monographs added to the Energy Science and Technology Database during the past month. Also included are US information obtained through acquisition programs or interagency agreements and international information obtained through the International Energy Agency's Energy Technology Data Exchange or government-to-government agreements. The digests in NRT and other citations to information on nuclear reactors back to 1948 are available for online searching and retrieval on the Energy Science and Technology Database and Nuclear Science Abstracts (NSA) database. Current information, added daily to the Energy Science and Technology Database, is available to DOE and its contractors through the DOE Integrated Technical Information System. Customized profiles can be developed to provide current information to meet each user's needs.

  6. Nuclear Reactors and Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cason, D.L.; Hicks, S.C.

    1992-01-01

This publication Nuclear Reactors and Technology (NRT) announces on a monthly basis the current worldwide information available from the open literature on nuclear reactors and technology, including all aspects of power reactors, components and accessories, fuel elements, control systems, and materials. This publication contains the abstracts of DOE reports, journal articles, conference papers, patents, theses, and monographs added to the Energy Science and Technology Database during the past month. Also included are US information obtained through acquisition programs or interagency agreements and international information obtained through the International Energy Agency's Energy Technology Data Exchange or government-to-government agreements. The digests in NRT and other citations to information on nuclear reactors back to 1948 are available for online searching and retrieval on the Energy Science and Technology Database and Nuclear Science Abstracts (NSA) database. Current information, added daily to the Energy Science and Technology Database, is available to DOE and its contractors through the DOE Integrated Technical Information System. Customized profiles can be developed to provide current information to meet each user's needs.

  7. Using assistive technology outcomes research to inform policy related to the employment of individuals with disabilities.

    PubMed

    Mendelsohn, Steven; Edyburn, Dave L; Rust, Kathy L; Schwanke, Todd D; Smith, Roger O

    2008-01-01

Work is recognized as a central component of life for individuals with and without disabilities. It yields many physical and psychological benefits to the individual while simultaneously contributing numerous benefits to society. Lawmakers have enacted a plethora of laws designed to prevent discrimination, provide incentives for employers to hire individuals with disabilities, and facilitate job training and career preparation. Assistive technology figures prominently in disability employment law as a critical strategy for gaining access and supporting employment and upward mobility in the workplace. However, little systematic effort has been devoted to examining assistive technology use and outcomes as they relate to the employment of individuals with disabilities. The purpose of this article is to articulate a series of issues that permeate assistive technology outcome measurement in employment settings and subsequently affect the use of research knowledge by federal and state policy makers. For each issue, the authors pose three questions for critical analysis: Does the law compel the provision of assistive technology? Do outcome data play any part in the operation of the law? When they do, what kind of data would be useful to collect and where could it be found? Finally, the authors provide a brief glimpse of current and future research efforts concerning the RSA-911 database. The recent database summaries exemplify the importance of such a national data collection system for informing federal policy, particularly concerning the contributions of assistive technology device use and services to improving the employment of individuals with disabilities.

  8. LIRIS flight database and its use toward noncooperative rendezvous

    NASA Astrophysics Data System (ADS)

    Mongrard, O.; Ankersen, F.; Casiez, P.; Cavrois, B.; Donnard, A.; Vergnol, A.; Southivong, U.

    2018-06-01

ESA's fifth and last Automated Transfer Vehicle, ATV Georges Lemaître, tested new rendezvous technology before docking with the International Space Station (ISS) in August 2014. The technology demonstration, called Laser Infrared Imaging Sensors (LIRIS), provides a previously unseen view of the ISS. During Georges Lemaître's rendezvous, the LIRIS sensors, composed of two infrared cameras, one visible camera, and a scanning LIDAR (Light Detection and Ranging), were turned on two and a half hours before docking, 3500 m from the Space Station. All sensors worked as expected, and a large amount of data was recorded and stored within ATV-5's cargo hold before being returned to Earth on Soyuz flight 38S in September 2014. As part of the LIRIS postflight activities, the information gathered by all sensors was collected in a flight database together with the reference ATV trajectory and attitude estimated by the ATV main navigation sensors. Although decoupled from the ATV main computer, the LIRIS data were carefully synchronized with the ATV guidance, navigation, and control (GNC) data. Hence, the LIRIS database can be used to assess the performance of various image processing algorithms providing range and line-of-sight (LoS) navigation at long/medium range as well as 6 degree-of-freedom (DoF) navigation at short range. The database also contains information on the overall ATV position with respect to Earth and the Sun direction in the ATV frame, so that the effect of the environment on the sensors can also be investigated. This paper introduces the structure of the LIRIS database and provides some examples of applications to increase the technology readiness level of noncooperative rendezvous.

  9. International energy: Research organizations, 1988--1992. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, P.; Jordan, S.

This publication contains the standardized names of energy research organizations used in energy information databases. Involved in this cooperative task are (1) the technical staff of the US DOE Office of Scientific and Technical Information (OSTI) in cooperation with the member countries of the Energy Technology Data Exchange (ETDE) and (2) the International Nuclear Information System (INIS). ETDE member countries are also members of the International Nuclear Information System (INIS). Nuclear organization names recorded for INIS by these ETDE member countries are also included in the ETDE Energy Database. Therefore, these organization names are cooperatively standardized for use in both information systems. This publication identifies current organizations doing research in all energy fields, standardizes the format for recording these organization names in bibliographic citations, assigns a numeric code to facilitate data entry, and identifies report number prefixes assigned by these organizations. These research organization names may be used in searching the databases "Energy Science & Technology" on DIALOG and "Energy" on STN International. These organization names are also used in USDOE databases on the Integrated Technical Information System. Research organizations active in the past five years, as indicated by database records, were identified to form this publication. This directory includes approximately 31,000 organizations that reported energy-related literature from 1988 to 1992 and updates the DOE Energy Data Base: Corporate Author Entries.

  10. Development of expert systems for analyzing electronic documents

    NASA Astrophysics Data System (ADS)

    Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.

    2018-05-01

The paper analyses a Database Management System (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.

  11. Aircraft Operations Classification System

    NASA Technical Reports Server (NTRS)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports, including determining the type of aircraft, such as jet, helicopter, single-engine, and multi-engine propeller. Key issues in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field-portable and acceptable at airports. A comparison of technologies was conducted, and it was decided that an aircraft monitoring system should be based on acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained; we achieved classification accuracy of over 90 percent in both training and testing for takeoff events.
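    The pipeline this abstract describes (features from the time signal plus a multi-layer feed-forward classifier) can be sketched as follows. The features, layer sizes, and weights below are illustrative stand-ins, not the ones used in the study:

```python
import math
import random

def extract_features(signal, rate):
    """Toy time-domain features of the kind the study describes:
    mean signal energy and zero-crossing rate (a crude pitch proxy)."""
    energy = sum(s * s for s in signal) / len(signal)
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    zcr = crossings * rate / len(signal)
    return [energy, zcr]

def mlp_forward(features, w_hidden, w_out):
    """One hidden layer with tanh activation and a softmax output --
    the multi-layer feed-forward structure named in the abstract."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)))
              for row in w_hidden]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Synthetic 100 Hz tone sampled at 8 kHz, standing in for an acoustic record.
rate = 8000
signal = [math.sin(2 * math.pi * 100 * t / rate) for t in range(rate)]
feats = extract_features(signal, rate)

random.seed(0)  # random (untrained) weights, purely for illustration
w_hidden = [[random.uniform(-0.1, 0.1) for _ in feats] for _ in range(4)]
w_out = [[random.uniform(-0.1, 0.1) for _ in range(4)] for _ in range(3)]  # 3 classes

probs = mlp_forward(feats, w_hidden, w_out)
print([round(p, 3) for p in probs])
```

    In practice the weights would be learned by backpropagation on labeled takeoff and landing recordings rather than drawn at random.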

  12. Food traceability systems in China: The current status of and future perspectives on food supply chain databases, legal support, and technological research and support for food safety regulation.

    PubMed

    Tang, Qi; Li, Jiajia; Sun, Mei; Lv, Jun; Gai, Ruoyan; Mei, Lin; Xu, Lingzhong

    2015-02-01

    Over the past few decades, the field of food safety has witnessed numerous problems and incidents that have garnered public attention. Given this serious situation, the food traceability system (FTS) has become part of the expanding food safety continuum to reduce the risk of food safety problems. This article reviews the related literature and results from previous studies of FTS. It describes the development and benefits of FTS in developed countries like the United States of America (USA), Japan, and some European countries. Problems with existing FTS in China are noted, including the lack of a complete database, inadequate laws and regulations, and lagging technological research into FTS. This article puts forward several suggestions for the future, including improvement of information websites, clarification of regulatory responsibilities, and promotion of technological research.

  13. An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data

    DTIC Science & Technology

    2016-01-01

    these technologies. 4.1 Backend Technologies: Java 1.8, mysql-connector-java-5.0.8.jar, Tomcat, VirtualBox, Kali MANET Virtual Machine. 4.2 Frontend Technologies: LAMPP. 4.3 Database: MySQL Server. 5. Database: The SEDAP database settings and structure are described in this section. ... contains all the backend Java functionality, including the web services, and should be placed in the webapps directory inside the Tomcat installation

  14. The OAuth 2.0 Web Authorization Protocol for the Internet Addiction Bioinformatics (IABio) Database.

    PubMed

    Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin; Choi, In Young

    2016-03-01

    Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for individuals and nations alike. Although the prevention and treatment of IA are becoming more important, the diagnosis of IA remains problematic. Understanding the neurobiological mechanisms of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not yet been developed. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication in line with medical information standards and uses OAuth 2.0 as its access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to establish the security of personal medical information and to be applied to genomic research on IA.
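    As a rough illustration of the access-control approach named in the abstract, the snippet below builds the two client-side messages of the OAuth 2.0 authorization-code grant (RFC 6749, section 4.1). The endpoints, client ID, and scope are hypothetical; the abstract does not publish IABio's actual OAuth configuration:

```python
from urllib.parse import urlencode

# Hypothetical endpoints and client credentials, for illustration only.
AUTH_ENDPOINT = "https://iabio.example.org/oauth/authorize"
CLIENT_ID = "iabio-research-client"
REDIRECT_URI = "https://app.example.org/callback"

def authorization_url(scope, state):
    """Step 1 of the authorization-code grant: redirect the researcher's
    browser to the authorization server with these query parameters."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": scope,
        "state": state,  # opaque value echoed back, for CSRF protection
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

def token_request_body(code):
    """Step 2: the POST body sent to the token endpoint to exchange the
    returned authorization code for an access token."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
    })

url = authorization_url(scope="genomic.read", state="xyz123")
print(url)
```

    The access token returned in step 2 would then accompany each database query, letting the server enforce role-based access without handling the user's password.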

  15. DataHub: Knowledge-based data management for data discovery

    NASA Astrophysics Data System (ADS)

    Handley, Thomas H.; Li, Y. Philip

    1993-08-01

    Currently available database technology is largely designed for business data-processing applications and seems inadequate for scientific applications. The research described in this paper, the DataHub, addresses the issues associated with this shortfall in technology utilization and development. The DataHub development addresses the key issues in scientific data management: scientific database models and resource sharing in a geographically distributed, multi-disciplinary science research environment. Thus, the DataHub will be a server between data suppliers and data consumers to facilitate data exchanges, to assist science data analysis, and to provide a systematic approach to science data management. More specifically, the DataHub's objectives are to provide support for (1) exploratory data analysis (i.e., data-driven analysis); (2) data transformations; (3) data semantics capture and usage; (4) analysis-related knowledge capture and usage; and (5) data discovery, ingestion, and extraction. Applying technologies that range from deductive databases, semantic data models, data discovery, knowledge representation and inferencing, and exploratory data analysis techniques to modern man-machine interfaces, DataHub will provide a prototype integrated environment to support research scientists' needs in multiple disciplines (e.g., oceanography, geology, and atmospheric science) while addressing the more general science data management issues. Additionally, the DataHub will provide data management services to exploratory data analysis applications such as LinkWinds and NCSA's XIMAGE.

  16. DRUMS: Disk Repository with Update Management and Select option for high throughput sequencing data

    PubMed Central

    2014-01-01

    Background New technologies for analyzing biological samples, like next generation sequencing, are producing a growing amount of data together with quality scores. Moreover, software tools (e.g., for mapping sequence reads), calculating transcription factor binding probabilities, estimating epigenetic modification enriched regions or determining single nucleotide polymorphisms increase this amount of position-specific DNA-related data even further. Hence, querying data becomes challenging and expensive and is often implemented using specialised hardware. In addition, retrieving specific data as fast as possible is increasingly important in many fields of science. The general problem of handling big data sets was addressed by the development of specialized databases like HBase, HyperTable or Cassandra. However, these database solutions also require specialized or distributed hardware, leading to expensive investments. To the best of our knowledge, there is no database capable of (i) storing billions of position-specific DNA-related records, (ii) performing fast and resource-saving requests, and (iii) running on single standard computer hardware. Results Here, we present DRUMS (Disk Repository with Update Management and Select option), satisfying demands (i)-(iii). It tackles the weaknesses of traditional databases while handling position-specific DNA-related data in an efficient manner. DRUMS is capable of storing up to billions of records. Moreover, it focuses on optimizing related single lookups into range requests, which are needed constantly for computations in bioinformatics. To validate the power of DRUMS, we compare it to the widely used MySQL database. The test setting considers two biological data sets. We use standard desktop hardware as the test environment. Conclusions DRUMS outperforms MySQL in writing and reading records by a factor of two up to a factor of 10,000. Furthermore, it can work with significantly larger data sets. Our work focuses on mid-sized data sets of up to several billion records without requiring cluster technology. Storing position-specific data is a general problem, and the concept we present here is a generalized approach. Hence, it can easily be applied to other fields of bioinformatics. PMID:24495746
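    The core access pattern DRUMS optimizes, related single lookups served as one range request over position-sorted records, can be sketched in memory as follows. The record layout and values are illustrative; DRUMS itself defines its own on-disk format:

```python
import bisect
from collections import namedtuple

# Illustrative position-specific record: chromosome, position, score.
Record = namedtuple("Record", "chrom pos value")

class PositionStore:
    """Keeps records sorted by (chrom, pos) so a range request is two
    binary searches plus one contiguous scan -- the disk-friendly access
    pattern DRUMS exploits, sketched here in memory."""

    def __init__(self, records):
        self._records = sorted(records, key=lambda r: (r.chrom, r.pos))
        self._keys = [(r.chrom, r.pos) for r in self._records]

    def range(self, chrom, start, end):
        """Return all records on chrom with start <= pos <= end."""
        lo = bisect.bisect_left(self._keys, (chrom, start))
        hi = bisect.bisect_right(self._keys, (chrom, end))
        return self._records[lo:hi]

store = PositionStore([
    Record("chr1", 150, 0.9),
    Record("chr1", 42, 0.1),
    Record("chr2", 7, 0.5),
    Record("chr1", 99, 0.3),
])
hits = store.range("chr1", 50, 200)
print([(r.pos, r.value) for r in hits])  # positions 99 and 150
```

    Answering many neighboring lookups through one sorted scan like this, instead of issuing independent point queries, is what lets such a layout beat a general-purpose relational engine on this workload.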

  17. A Look Under the Hood: How the JPL Tropical Cyclone Information System Uses Database Technologies to Present Big Data to Users

    NASA Astrophysics Data System (ADS)

    Knosp, B.; Gangl, M.; Hristova-Veleva, S. M.; Kim, R. M.; Li, P.; Turk, J.; Vu, Q. A.

    2015-12-01

    The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data and model forecasts related to tropical cyclones. Since 2010, the TCIS has run a near-real-time (NRT) data portal during the North Atlantic hurricane season, which typically lasts from June through October each year. Data collected by the TCIS vary by type, format, contents, and frequency and are served to the user in two ways: (1) as image overlays on a virtual globe and (2) as derived output from a suite of analysis tools. In order to support these two functions, the data must be collected and then made searchable by criteria such as date, mission, product, pressure level, and geospatial region. Creating a database architecture that is flexible enough to manage, intelligently interrogate, and ultimately present this disparate data to the user in a meaningful way has been the primary challenge. The database solution for the TCIS has been a hybrid MySQL + Solr implementation. After testing other relational database and NoSQL solutions, such as PostgreSQL and MongoDB respectively, this solution has given the TCIS the best combination of query speed and result reliability. This database solution also supports the challenging (and memory-intensive) geospatial queries that are necessary to support the analysis tools requested by users. Though hardly new technologies on their own, our implementation of MySQL + Solr had to be customized and tuned to accurately store, index, and search the TCIS data holdings. In this presentation, we will discuss how we arrived at our MySQL + Solr database architecture, why it offers us the most consistently fast and reliable results, and how it supports our front end so that we can offer users a look into our "big data" holdings.
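    A minimal sketch of the kind of date-window plus bounding-box query such a portal must answer, using SQLite as a stand-in for MySQL. The schema and values are invented, not the actual TCIS tables, and the abstract does not spell out the MySQL/Solr division of labor, so only the relational half is sketched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE granules (
    mission TEXT, product TEXT, obs_date TEXT,
    lat REAL, lon REAL)""")
conn.executemany(
    "INSERT INTO granules VALUES (?, ?, ?, ?, ?)",
    [("GPM", "rain_rate", "2015-08-28", 25.1, -77.3),
     ("GPM", "rain_rate", "2015-09-02", 31.4, -60.0),
     ("HS3", "dropsonde", "2015-08-29", 24.8, -76.9)])

# Search by date window and geospatial bounding box -- two of the
# criteria the abstract lists (date, mission, product, region).
rows = conn.execute("""
    SELECT mission, product, obs_date FROM granules
    WHERE obs_date BETWEEN '2015-08-27' AND '2015-08-31'
      AND lat BETWEEN 20 AND 30 AND lon BETWEEN -80 AND -70
    ORDER BY obs_date""").fetchall()
print(rows)
```

    At TCIS scale, naive `BETWEEN` predicates over latitude and longitude are exactly the memory-hungry geospatial queries the abstract mentions, which is why the production system needed careful indexing and tuning.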

  18. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have been actively developed, building on advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system that can effectively perform cooperative retrieval among distributed databases. The proposed system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, the system defines the concept of a 'domain' as a managing unit of retrieval; retrieval can be performed effectively by cooperative processing among multiple domains. A communication language and protocols are also defined in the system and are used in every communication action within it. A language interpreter in each machine translates the communication language into that machine's internal language. Using the language interpreter, internal processing modules, such as the DBMS and user interface modules, can be freely selected. The concept of a 'content-set' is also introduced: a content-set is defined as a package of mutually related contents, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content-set. To verify the function of the proposed system, a networked electronic museum was built experimentally. The results of this experiment indicate that the proposed system can effectively retrieve the desired contents under the control of a number of distributed domains, and that the system continues to work effectively as it grows large.
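    The retrieval-manager idea described above, one logical database whose queries are fanned out to multiple domains and merged, can be sketched as a toy. The domain names and contents are invented for illustration:

```python
class Domain:
    """One managing unit of retrieval holding its own contents."""

    def __init__(self, name, contents):
        self.name = name
        self.contents = contents  # content id -> description

    def search(self, keyword):
        return [(cid, desc) for cid, desc in self.contents.items()
                if keyword in desc]

class RetrievalManager:
    """Presents a set of distributed domains as one logical database:
    fans a query out to every domain and merges the results."""

    def __init__(self, domains):
        self.domains = domains

    def search(self, keyword):
        results = {}
        for domain in self.domains:  # cooperative retrieval across domains
            for cid, desc in domain.search(keyword):
                results[f"{domain.name}:{cid}"] = desc
        return results

museum_a = Domain("paintings", {"p1": "ukiyo-e woodblock print",
                                "p2": "oil portrait"})
museum_b = Domain("ceramics", {"c1": "Edo-period ceramic print motif"})
manager = RetrievalManager([museum_a, museum_b])
print(manager.search("print"))
```

    The real system adds what the toy omits: a directory-driven choice of retrieval parameters per domain, and a language interpreter per machine so each domain can keep its own internal DBMS.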

  19. Information technologies for astrophysics circa 2001

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1991-01-01

    It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is less easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large data sets. Three limiting paradigms are as follows: saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage, and retrieval off the shelf; and the linear model of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.

  20. [Health-related scientific and technological capabilities and university-industry research collaboration].

    PubMed

    Britto, Jorge; Vargas, Marco Antônio; Gadelha, Carlos Augusto Grabois; Costa, Laís Silveira

    2012-12-01

    To examine recent developments in health-related scientific capabilities, the impact of lines of incentives on reducing regional scientific imbalances, and university-industry research collaboration in Brazil. Data were obtained from the Conselho Nacional de Desenvolvimento Científico e Tecnológico (Brazilian National Council for Scientific and Technological Development) databases for the years 2000 to 2010. Indicators of resource mobilization, research network structuring, and knowledge transfer between science and industry initiatives were assessed. Based on the regional distribution map of health-related scientific and technological capabilities, patterns of scientific capability and science-industry collaboration were identified. There was relative spatial deconcentration of health research groups, and more than 6% of them worked in six knowledge areas: medicine, collective health, dentistry, veterinary medicine, ecology, and physical education. Lines of incentives adopted from 2000 to 2009 contributed to reducing regional scientific imbalances and to improving preexisting capabilities or, alternatively, encouraging spatial decentralization of these capabilities. Health-related scientific and technological capabilities remain highly spatially concentrated in Brazil, and incentive policies have contributed to reducing these imbalances to some extent.

  1. CurrMIT: A Tool for Managing Medical School Curricula.

    ERIC Educational Resources Information Center

    Salas, Albert A.; Anderson, M. Brownell; LaCourse, Lisa; Allen, Robert; Candler, Chris S.; Cameron, Terri; Lafferty, Debra

    2003-01-01

    The Association of American Medical Colleges (AAMC) Curriculum Management & Information Tool (CurrMIT) is a relational database containing curriculum information from medical schools throughout the United States and Canada. This article gives an overview of the technology upon which the system is built and the training materials and workshops…

  2. Scrapping Patched Computer Systems: Integrated Data Processing for Information Management.

    ERIC Educational Resources Information Center

    Martinson, Linda

    1991-01-01

    Colleges and universities must find a way to streamline and integrate information management processes across the organization. The Georgia Institute of Technology responded to an acute problem of dissimilar operating systems with a campus-wide integrated administrative system using a machine independent relational database management system. (MSE)

  3. Next Generation Clustered Heat Maps | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Next-Generation (Clustered) Heat Maps are interactive heat maps that enable the user to zoom and pan across the heatmap, alter its color scheme, generate production quality PDFs, and link out from rows, columns, and individual heatmap entries to related statistics, databases and other information.

  4. STEM Education Related Dissertation Abstracts: A Bounded Qualitative Meta-Study

    ERIC Educational Resources Information Center

    Banning, James; Folkestad, James E.

    2012-01-01

    This article utilizes a bounded qualitative meta-study framework to examine the 101 dissertation abstracts found by searching the ProQuest Dissertation and Theses[TM] digital database for dissertations abstracts from 1990 through 2010 using the search terms education, science, technology, engineer, and STEM/SMET. Professional search librarians…

  5. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    PubMed

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA and have seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, the application and understanding of healthcare databases for HTA are rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g. name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases were potentially usable for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available in public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA across the Asia-Pacific region is needed.

  6. [Mobile phone-computer wireless interactive graphics transmission technology and its medical application].

    PubMed

    Huang, Shuo; Liu, Jing

    2010-05-01

    The application of clinical digital medical imaging has raised many tough issues to tackle, such as data storage, management, and information sharing. Here we investigated a mobile phone based medical image management system capable of personal medical imaging information storage, management, and comprehensive health information analysis. The technologies related to the management system are discussed, spanning wireless transmission, the phone's technical capabilities in mobile health care, and management of the mobile medical database. Taking the transmission of medical infrared images between phone and computer as an example, the working principle of the present system was demonstrated.

  7. [Good practices and techniques for prevention of accidents at work and occupational diseases. New database of Inail].

    PubMed

    Bindi, L; Ossicini, A

    2007-01-01

    The project "The publication of good practices and good techniques for prevention" is one of the priorities of Inail. This computerized system for the collection of good practices (BP) and good techniques (BT) is aimed at the health and safety of workers. The basic objective of the database is to provide a valuable, usable, dynamic, and continuously updated tool to facilitate and direct access to BP and BT by the people responsible for occupational health and safety (SSL). At the same time, it constitutes a strategically important tool for enterprises (especially SMEs) in terms of technological innovation and competitiveness related to prevention and to the safety and health of workers. The realization of this project has involved many professionals (chemists, engineers, doctors, biologists, geologists, etc.), each contributing qualified professional competence.

  8. ECLSS Integration Analysis: Advanced ECLSS Subsystem and Instrumentation Technology Study for the Space Exploration Initiative

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In his July 1989 space policy speech, President Bush proposed a long range continuing commitment to space exploration and development. Included in his goals were the establishment of permanent lunar and Mars habitats and the development of extended duration space transportation. In both cases, a major issue is the availability of qualified sensor technologies for use in real-time monitoring and control of integrated physical/chemical/biological (p/c/b) Environmental Control and Life Support Systems (ECLSS). The purpose of this study is to determine the most promising instrumentation technologies for future ECLSS applications. The study approach is as follows: 1. Precursor ECLSS Subsystem Technology Trade Study - A database of existing and advanced Atmosphere Revitalization (AR) and Water Recovery and Management (WRM) ECLSS subsystem technologies was created. A trade study was performed to recommend AR and WRM subsystem technologies for future lunar and Mars mission scenarios. The purpose of this trade study was to begin defining future ECLSS instrumentation requirements as a precursor to determining the instrumentation technologies that will be applicable to future ECLS systems. 2. Instrumentation Survey - An instrumentation database of Chemical, Microbial, Conductivity, Humidity, Flowrate, Pressure, and Temperature sensors was created. Each page of the sensor database report contains information for one type of sensor, including a description of the operating principles, specifications, and the reference(s) from which the information was obtained. This section includes a cursory look at the history of instrumentation on U.S. spacecraft. 3. Results and Recommendations - Instrumentation technologies were recommended for further research and optimization based on a consideration of both of the above sections. 
A sensor or monitor technology was recommended based on its applicability to future ECLS systems, as defined by the ECLSS Trade Study (1), and on whether its characteristics were considered favorable relative to similar instrumentation technologies (competitors), as determined from the Instrumentation Survey (2). The instrumentation technologies recommended by this study show considerable potential for development and promise significant returns if research efforts are invested.

  9. Development, deployment and operations of ATLAS databases

    NASA Astrophysics Data System (ADS)

    Vaniachine, A. V.; Schmitt, J. G. v. d.

    2008-07-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.

  10. [Conceptual foundations of creation of branch database of technology and intellectual property rights owned by scientific institutions, organizations, higher medical educational institutions and enterprises of healthcare sphere of Ukraine].

    PubMed

    Horban', A Ie

    2013-09-01

    This paper considers the implementation of state policy in the field of technology transfer in the medical branch, pursuant to the law of Ukraine of 02.10.2012 No 5407-VI "On Amendments to the Law of Ukraine 'On State Regulation of Activity in the Field of Technology Transfer'", namely ensuring the formation of a branch database of technology and intellectual property rights owned by scientific institutions, organizations, higher medical educational institutions, and enterprises of the healthcare sphere of Ukraine that are established from the budget. International and domestic experience in processing information about intellectual property rights and in implementing systems that support the transfer of new technologies is analysed. The main conceptual principles for creating this branch database of technology transfer and a branch technology transfer network are defined.

  11. Combining new technologies for effective collection development: a bibliometric study using CD-ROM and a database management program.

    PubMed Central

    Burnham, J F; Shearer, B S; Wall, J C

    1992-01-01

    Librarians have used bibliometrics for many years to assess collections and to provide data for making selection and deselection decisions. With the advent of new technology--specifically, CD-ROM databases and reprint file database management programs--new cost-effective procedures can be developed. This paper describes a recent multidisciplinary study conducted by two library faculty members and one allied health faculty member to test a bibliometric method that used the MEDLINE and CINAHL databases on CD-ROM and the Papyrus database management program to produce a new collection development methodology. PMID:1600424

  12. Conversion of a traditional image archive into an image resource on compact disc.

    PubMed Central

    Andrew, S M; Benbow, E W

    1997-01-01

    A traditional archive of pathology images organised on 35 mm slides was converted into a database of images stored on compact disc (CD-ROM), with textual descriptions added to each image record. Students on a didactic pathology course found this resource useful as an aid to revision, despite relative computer illiteracy, and it is anticipated that students on a new problem-based learning course, which incorporates experience with information technology, will benefit even more readily when they use the database as an educational resource. A text and image database on CD-ROM can be updated repeatedly, and its content manipulated to reflect the content and style of the courses it supports. PMID:9306931

  13. SmallSat Database

    NASA Technical Reports Server (NTRS)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has unrealized potential in both private industry and the federal government. Currently over 70 companies, 50 universities, and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then, smart phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites, because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller, and lighter. In addition, the growth of small technology obtained from private industries has led to more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently, and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks, now and in future years? In this project, we are developing the SmallSat Relational Database, which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database can model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions, one being that the SmallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data required. When completed, it will interface with the SCENIC environment to allow modeling of smallSats. The SmallSat Relational Database can also be integrated with the SCENIC simulation modeling system that is currently in development. The simulation will be of great significance in helping the NASA SCaN group understand the impact of the smallSats that now populate low Earth orbit. What I created and worked on during the summer 2015 session is the basis for a tool that will be of value to the NASA SCaN SCENIC Simulation Environment for years to come.

  14. Introducing information technologies into medical education: activities of the AAMC.

    PubMed

    Salas, A A; Anderson, M B

    1997-03-01

    Previous articles in this column have discussed how new information technologies are revolutionizing medical education. In this article, two staff members from the Association of American Medical College's Division of Medical Education discuss how the Association (the AAMC) is working both to support the introduction of new technologies into medical education and to facilitate dialogue on information technology and curriculum issues among AAMC constituents and staff. The authors describe six AAMC initiatives related to computing in medical education: the Medical School Objectives Project, the National Curriculum Database Project, the Information Technology and Medical Education Project, a professional development program for chief information officers, the AAMC ACCESS Data Collection and Dissemination System, and the internal Staff Interest Group on Medical Informatics and Medical Education.

  15. Fine-grained policy control in U.S. Army Research Laboratory (ARL) multimodal signatures database

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly; Grueneberg, Keith; Wood, David; Calo, Seraphin

    2014-06-01

    The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) consists of a number of colocated relational databases representing a collection of data from various sensors. Role-based access to this data is granted to external organizations such as DoD contractors and other government agencies through a client Web portal. In the current MMSDB system, access control is only at the database and firewall level. In order to offer finer grained security, changes to existing user profile schemas and authentication mechanisms are usually needed. In this paper, we describe a software middleware architecture and implementation that allows fine-grained access control to the MMSDB at a dataset, table, and row level. Result sets from MMSDB queries issued in the client portal are filtered with the use of a policy enforcement proxy, with minimal changes to the existing client software and database. Before resulting data is returned to the client, policies are evaluated to determine if the user or role is authorized to access the data. Policies can be authored to filter data at the row, table or column level of a result set. The system uses various technologies developed in the International Technology Alliance in Network and Information Science (ITA) for policy-controlled information sharing and dissemination1. Use of the Policy Management Library provides a mechanism for the management and evaluation of policies to support finer grained access to the data in the MMSDB system. The GaianDB is a policy-enabled, federated database that acts as a proxy between the client application and the MMSDB system.
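
    The row- and column-level filtering performed by the policy enforcement proxy can be sketched as a post-query filter over a result set. The policy structure and names below are invented for illustration; they do not reflect the ITA Policy Management Library API.

```python
# Illustrative sketch of result-set filtering by a policy enforcement proxy:
# rows are dropped when a role's row predicate denies them, and columns the
# role may not see are redacted. The policy format is hypothetical.
def enforce(rows, role, policy):
    allowed_cols = policy["columns"].get(role, [])
    row_ok = policy["rows"].get(role, lambda r: True)  # default: allow row
    return [{k: v for k, v in row.items() if k in allowed_cols}
            for row in rows if row_ok(row)]

policy = {
    # Column-level: contractors may not see the 'location' column.
    "columns": {"contractor": ["sensor", "reading"],
                "analyst": ["sensor", "reading", "location"]},
    # Row-level: contractors only see unclassified rows.
    "rows": {"contractor": lambda r: r["classification"] == "U"},
}

result_set = [
    {"sensor": "acoustic", "reading": 4.2, "location": "site-A", "classification": "U"},
    {"sensor": "seismic",  "reading": 9.1, "location": "site-B", "classification": "S"},
]

print(enforce(result_set, "contractor", policy))
# [{'sensor': 'acoustic', 'reading': 4.2}]
```

    Placing this logic in a proxy, as the paper describes, is what lets the client portal and the underlying databases stay essentially unchanged.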

  16. BIRS – Bioterrorism Information Retrieval System

    PubMed Central

    Tewari, Ashish Kumar; Rashi; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Jain, Chakresh Kumar

    2013-01-01

    Bioterrorism is the intentional use of pathogenic strains of microbes to spread terror in a population. There is a definite need to promote research on vaccines, therapeutics and diagnostic methods as part of preparedness for any future bioterror attack. BIRS is an open-access database of collective information on the organisms related to bioterrorism. The database is built on current open-source technology, namely PHP 5.3.19 and MySQL running on an IIS server under the Windows platform. It stores information on the literature, generic information and unique pathways of about 10 microorganisms involved in bioterrorism, and may serve as a collective repository to accelerate drug discovery and vaccine design against such bioterrorist agents (microbes). The available data have been validated against various online resources and by literature mining in order to provide the user with a comprehensive information system. Availability: the database is freely available at http://www.bioterrorism.biowaves.org PMID:23390356

  17. Integrated Approaches to Drug Discovery for Oxidative Stress-Related Retinal Diseases.

    PubMed

    Nishimura, Yuhei; Hara, Hideaki

    2016-01-01

    Excessive oxidative stress induces dysregulation of functional networks in the retina, resulting in retinal diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. Although various therapies have been developed to reduce oxidative stress in retinal diseases, most have failed to show efficacy in clinical trials. This may be due to oversimplification of target selection for such a complex network as oxidative stress. Recent advances in high-throughput technologies have facilitated the collection of multilevel omics data, which has driven growth in public databases and in the development of bioinformatics tools. Integration of the knowledge gained from omics databases can be used to generate disease-related biological networks and to identify potential therapeutic targets within the networks. Here, we provide an overview of integrative approaches in the drug discovery process and provide simple examples of how the approaches can be exploited to identify oxidative stress-related targets for retinal diseases.

  18. Integrated Approaches to Drug Discovery for Oxidative Stress-Related Retinal Diseases

    PubMed Central

    Hara, Hideaki

    2016-01-01

    Excessive oxidative stress induces dysregulation of functional networks in the retina, resulting in retinal diseases such as glaucoma, age-related macular degeneration, and diabetic retinopathy. Although various therapies have been developed to reduce oxidative stress in retinal diseases, most have failed to show efficacy in clinical trials. This may be due to oversimplification of target selection for such a complex network as oxidative stress. Recent advances in high-throughput technologies have facilitated the collection of multilevel omics data, which has driven growth in public databases and in the development of bioinformatics tools. Integration of the knowledge gained from omics databases can be used to generate disease-related biological networks and to identify potential therapeutic targets within the networks. Here, we provide an overview of integrative approaches in the drug discovery process and provide simple examples of how the approaches can be exploited to identify oxidative stress-related targets for retinal diseases. PMID:28053689

  19. A Toolkit for Active Object-Oriented Databases with Application to Interoperability

    NASA Technical Reports Server (NTRS)

    King, Roger

    1996-01-01

    In our original proposal we stated that our research would 'develop a novel technology that provides a foundation for collaborative information processing.' The essential ingredient of this technology is the notion of 'deltas,' which are first-class values representing collections of proposed updates to a database. The Heraclitus framework provides a variety of algebraic operators for building up, combining, inspecting, and comparing deltas. Deltas can be directly applied to the database to yield a new state, or used 'hypothetically' in queries against the state that would arise if the delta were applied. The central point here is that the step of elevating deltas to 'first-class' citizens in database programming languages will yield tremendous leverage on the problem of supporting updates in collaborative information processing. In short, our original intention was to develop the theoretical and practical foundation for a technology based on deltas in an object-oriented database context, develop a toolkit for active object-oriented databases, and apply this toward collaborative information processing.
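
    The delta idea can be illustrated in a few lines: a delta is a first-class value of proposed updates that can be combined with other deltas, applied to a state, or queried against hypothetically without committing anything. This is a toy sketch of the concept, not the actual Heraclitus algebra.

```python
# Toy sketch of Heraclitus-style deltas: a delta is a first-class value
# (here, a dict of proposed key -> new-value updates) that can be combined,
# applied to a database state, or used hypothetically. Illustration only;
# these are not the actual Heraclitus operators.
def merge(d1, d2):
    """Combine two deltas; later proposals win on conflict."""
    return {**d1, **d2}

def apply_delta(state, delta):
    """Produce the new state a delta would yield (old state untouched)."""
    return {**state, **delta}

def hypothetical_query(state, delta, query):
    """Run a query against the state *as if* the delta were applied."""
    return query(apply_delta(state, delta))

db = {"budget": 100, "owner": "alice"}
d1 = {"budget": 120}
d2 = {"owner": "bob"}

combined = merge(d1, d2)
# Ask a question about the would-be state without committing anything:
print(hypothetical_query(db, combined, lambda s: s["budget"] > 110))  # True
print(db)  # unchanged: {'budget': 100, 'owner': 'alice'}
```

    In a collaborative setting, each participant's proposed edits live as a delta that others can inspect and merge before anything is applied to the shared database.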

  1. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research

    PubMed Central

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Background: Health technology assessment (HTA) has been used continuously for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA and have seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is about to be officially introduced. Method: Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics (e.g., name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables) were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Results: Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available from public sources. Conclusion: Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA across the Asia-Pacific region is needed. PMID:26560127

  2. Video Discs in Libraries.

    ERIC Educational Resources Information Center

    Barker, Philip

    1986-01-01

    Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…

  3. Towards the Interoperability of Web, Database, and Mass Storage Technologies for Petabyte Archives

    NASA Technical Reports Server (NTRS)

    Moore, Reagan; Marciano, Richard; Wan, Michael; Sherwin, Tom; Frost, Richard

    1996-01-01

    At the San Diego Supercomputer Center, a massive data analysis system (MDAS) is being developed to support data-intensive applications that manipulate terabyte-sized data sets. The objective is to support scientific application access to data whether it is located at a Web site, stored as an object in a database, and/or stored in an archival storage system. We are developing a suite of demonstration programs which illustrate how Web, database (DBMS), and archival storage (mass storage) technologies can be integrated. An application presentation interface is being designed that integrates data access to all of these sources. We have developed a data movement interface between the Illustra object-relational database and the NSL UniTree archival storage system running in production mode at the San Diego Supercomputer Center. With this interface, an Illustra client can transparently access data on UniTree under the control of the Illustra DBMS server. The current implementation is based on the creation of a new DBMS storage manager class and a set of library functions that allow the manipulation and migration of data stored as Illustra 'large objects'. We have extended this interface to allow a Web client application to control data movement between its local disk, the Web server, the Illustra DBMS server, and the UniTree mass storage environment. This paper describes some of the current approaches to successfully integrating these technologies. The framework is measured against a representative sample of environmental data extracted from the San Diego Bay Environmental Data Repository. Practical lessons are drawn and critical research areas are highlighted.

  4. The use of intelligent database systems in acute pancreatitis--a systematic review.

    PubMed

    van den Heever, Marc; Mittal, Anubhav; Haydock, Matthew; Windsor, John

    2014-01-01

    Acute pancreatitis (AP) is a complex disease with multiple aetiological factors, wide-ranging severity, and multiple challenges to effective triage and management. Databases, data mining and machine learning algorithms (MLAs), including artificial neural networks (ANNs), may assist by storing and interpreting data from multiple sources, potentially improving clinical decision-making. The aims were to: 1) identify database technologies used to store AP data; 2) collate and categorise variables stored in AP databases; 3) identify the MLA technologies, including ANNs, used to analyse AP data; and 4) identify clinical and non-clinical benefits of, and obstacles to, establishing a national or international AP database. A comprehensive systematic search of online reference databases was conducted. The predetermined inclusion criteria were all papers discussing 1) databases, 2) data mining or 3) MLAs pertaining to AP, independently assessed by two reviewers with conflicts resolved by a third author. Forty-three papers were included. Three data mining technologies and five ANN methodologies were reported in the literature, and 187 collected variables were identified. ANNs increase the accuracy of severity prediction: one study showed ANNs had a sensitivity of 0.89 and a specificity of 0.96 six hours after admission, compared with 0.80 and 0.85, respectively, for APACHE II (cutoff score ≥8). Reported problems with databases were incomplete data, missing clinical data, and diagnostic reliability. This is the first systematic review examining the use of databases, MLAs and ANNs in the management of AP. The clinical benefits these technologies have over current systems, and other advantages to adopting them, are identified. Copyright © 2013 IAP and EPC. Published by Elsevier B.V. All rights reserved.
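
    The two metrics quoted for the ANN severity predictor, sensitivity and specificity, derive from a 2x2 confusion table. A minimal sketch of the arithmetic, with made-up counts chosen only to reproduce the reported 0.89 and 0.96, is:

```python
# Sensitivity and specificity from confusion-table counts. The counts below
# are hypothetical, chosen to illustrate the arithmetic behind the figures
# quoted in the review; they are not data from any study.
def sensitivity(tp, fn):
    return tp / (tp + fn)        # true-positive rate: severe cases caught

def specificity(tn, fp):
    return tn / (tn + fp)        # true-negative rate: mild cases cleared

# e.g. 89 of 100 severe cases flagged, 96 of 100 mild cases cleared:
print(sensitivity(tp=89, fn=11))   # 0.89
print(specificity(tn=96, fp=4))    # 0.96
```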

  5. HRGFish: A database of hypoxia responsive genes in fishes

    NASA Astrophysics Data System (ADS)

    Rashid, Iliyas; Nagpure, Naresh Sahebrao; Srivastava, Prachi; Kumar, Ravindra; Pathak, Ajey Kumar; Singh, Mahender; Kushwaha, Basdeo

    2017-02-01

    Several studies have highlighted changes in gene expression due to the hypoxia response in fishes, but a systematic organization of the information and an analytical platform for such genes have been lacking. In the present study, an attempt was made to develop a database of hypoxia responsive genes in fishes (HRGFish), integrated with analytical tools, using LAMPP technology. Genes reported in the hypoxia response of fishes were compiled through a literature survey, and the database presently covers 818 gene sequences and 35 gene types from 38 fishes. The upstream fragments (3,000 bp) covered in this database enable computation of CG dinucleotide frequencies, motif finding for the hypoxia response element, identification of CpG islands and mapping against the zebrafish reference promoter. The database also includes functional annotation of genes and provides tools for analyzing sequences and designing primers for selected gene fragments. This may be the first database on hypoxia response genes in fishes that provides a workbench to the scientific community involved in studying the evolution and ecological adaptation of fish species in relation to hypoxia.
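
    The upstream-fragment analyses described above can be sketched as follows: CG-dinucleotide counting plus a simple CpG-island screen (GC fraction > 0.5 and an observed/expected CpG ratio > 0.6, the commonly used Gardiner-Garden and Frommer thresholds). The sequence is a toy example, not data from HRGFish.

```python
# Sketch of CG-dinucleotide frequency and a simple CpG-island criterion
# (GC fraction > 0.5, observed/expected CpG ratio > 0.6). Toy sequence only.
def cpg_stats(seq):
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    cg = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")  # CpG count
    gc_frac = (g + c) / n
    # Expected CpG count if C and G were paired at random:
    expected = (c * g) / n if c and g else 0
    obs_exp = cg / expected if expected else 0.0
    return gc_frac, obs_exp

seq = "CGCGGCGCATCGCGGGCCGC"   # 20 bp toy fragment
gc, ratio = cpg_stats(seq)
print(round(gc, 2), round(ratio, 2))   # 0.9 1.48
print(gc > 0.5 and ratio > 0.6)        # CpG-island-like on these two tests
```

    A real screen would also require a minimum window length (commonly 200 bp) and slide the window along the 3,000 bp upstream fragment.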

  6. The development of a prototype intelligent user interface subsystem for NASA's scientific database systems

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Roelofs, Larry H.; Short, Nicholas M., Jr.

    1987-01-01

    The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort, one component of which is the development of an Intelligent User Interface (IUI). The intent of the latter is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose is to support the large number of potential scientific and engineering users who presently need space- and land-related research and technical data but who have little or no experience with query languages or understanding of the information content or architecture of the databases involved. This technical memorandum presents a prototype Intelligent User Interface Subsystem (IUIS) that uses the Crustal Dynamics Project Database as a test bed for the implementation of CRUDDES (the Crustal Dynamics Expert System). The knowledge base has more than 200 rules and represents a single application view and the architectural view. Operational performance using CRUDDES has allowed nondatabase users to obtain useful information from the database previously accessible only to an expert database user or the database designer.

  7. A practical approach for inexpensive searches of radiology report databases.

    PubMed

    Desjardins, Benoit; Hamilton, R Curtis

    2007-06-01

    We present a method to perform full-text searches of radiology reports for the large number of departments that do not have this ability as part of their radiology or hospital information system. A tool written in Microsoft Access (front end) has been designed to search a server (back end) containing an indexed weekly backup copy of the full relational database extracted from a radiology information system (RIS). This front-end/back-end approach has been implemented in a large academic radiology department, where it is used for teaching, research and administrative purposes. Creating the weekly backup copy of the 80-GB, 4-million-record RIS database takes 2 hours; further indexing of the exported radiology reports takes 6 hours. Individual searches typically take less than 1 minute on the indexed database, versus 30-60 minutes on the nonindexed database. Guidelines to properly address privacy and institutional review board issues are closely followed by all users. This method has the potential to improve teaching, research, and administrative programs within radiology departments that cannot afford more expensive technology.
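
    The speedup the authors report (under a minute on the indexed copy versus 30-60 minutes without an index) comes from avoiding a scan of every report. A minimal inverted index illustrates the principle; this is a pure-Python sketch, not the actual Access/RIS tooling.

```python
from collections import defaultdict
import re

# Minimal inverted index over report text: a query becomes a lookup in a
# word -> report-id map instead of a scan of every report. Toy stand-in for
# the indexed back-end described in the abstract; report text is invented.
def build_index(reports):
    index = defaultdict(set)
    for rid, text in reports.items():
        for word in set(re.findall(r"[a-z]+", text.lower())):
            index[word].add(rid)
    return index

def search(index, *words):
    """Report IDs containing all query words (AND semantics)."""
    sets = [index.get(w.lower(), set()) for w in words]
    return sorted(set.intersection(*sets)) if sets else []

reports = {
    101: "CT chest: pulmonary embolism in right lower lobe.",
    102: "Chest radiograph: no acute cardiopulmonary abnormality.",
    103: "CT abdomen: no embolism; incidental renal cyst.",
}
index = build_index(reports)
print(search(index, "embolism"))           # [101, 103]
print(search(index, "chest", "embolism"))  # [101]
```

    Building the index is the expensive step (the 6 hours reported above); once built, each query touches only the postings for its terms.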

  8. Cadastral Database Positional Accuracy Improvement

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions, both the absolute position in a specific coordinate system and the relation to neighbouring features. With the growth of spatially based technology, especially Geographic Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset, such as GNSS observations, is a potential solution for improving the legacy dataset. However, merely merging the two datasets will distort the relative geometry; the improved dataset must be further treated to minimize inherent errors and to fit it to the new, more accurate dataset. The main focus of this study is to describe a method of angular-based least squares adjustment (LSA) for the PAI of a legacy dataset. An existing high-accuracy dataset, the National Digital Cadastral Database (NDCDB), is used as a benchmark to validate the results. It was found that the proposed technique is well suited to the positional accuracy improvement of legacy spatial datasets.
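
    For the simplest case of a rigid shift between legacy and reference coordinates, the least-squares solution is just the mean coordinate difference. The toy sketch below shows only that step, with invented coordinates, and omits the angular observations and geometric constraints used in the actual study.

```python
# Toy positional-accuracy-improvement step: estimate the translation that
# best fits legacy points onto reference (e.g. GNSS-derived) points in the
# least-squares sense. For a pure translation the LSA solution is the mean
# coordinate difference. Coordinates below are invented for illustration.
def fit_translation(legacy, reference):
    n = len(legacy)
    dx = sum(rx - lx for (lx, _), (rx, _) in zip(legacy, reference)) / n
    dy = sum(ry - ly for (_, ly), (_, ry) in zip(legacy, reference)) / n
    return dx, dy

def apply_translation(points, dx, dy):
    return [(x + dx, y + dy) for x, y in points]

legacy    = [(100.0, 200.0), (150.0, 260.0), (130.0, 240.0)]
reference = [(102.1, 198.5), (152.0, 258.4), (131.9, 238.6)]  # benchmarks

dx, dy = fit_translation(legacy, reference)
print(round(dx, 2), round(dy, 2))  # 2.0 -1.5
```

    The residuals left after applying such a shift are what the full angular-based LSA then distributes, so that relative geometry (bearings and distances between parcels) is preserved.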

  9. Background qualitative analysis of the European Reference Life Cycle Database (ELCD) energy datasets - part I: fuel datasets.

    PubMed

    Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice

    2015-01-01

    The aim of this study is to identify areas of potential improvement in the European Reference Life Cycle Database (ELCD) fuel datasets. The revision is based on the data quality indicators described by the ILCD Handbook, applied on a sectoral basis. These indicators evaluate the technological, geographical and time-related representativeness of each dataset and its appropriateness in terms of completeness, precision and methodology. Results show that the ELCD fuel datasets are of very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of Life-Cycle Inventories were derived. Moreover, these results attest to the quality of the fuel-related datasets for any LCA practitioner and provide insights into the limitations and assumptions underlying the dataset modelling. Given this information, an LCA practitioner will be able to decide whether the use of the ELCD fuel datasets is appropriate to the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers seeking to improve the overall DQR of databases.

  10. 'RetinoGenetics': a comprehensive mutation database for genes related to inherited retinal degeneration.

    PubMed

    Ran, Xia; Cai, Wei-Jun; Huang, Xiu-Feng; Liu, Qi; Lu, Fan; Qu, Jia; Wu, Jinyu; Jin, Zi-Bing

    2014-01-01

    Inherited retinal degeneration (IRD), a leading cause of human blindness worldwide, is exceptionally heterogeneous, both clinically and genetically. Over the past decades, tremendous efforts have been made to explore this complex heterogeneity, and, with significant advances in sequencing technology, a massive number of mutations have been identified in different genes underlying IRD. In this study, we developed a comprehensive database, 'RetinoGenetics', which contains informative knowledge about all known IRD-related genes and mutations. 'RetinoGenetics' currently contains 4270 mutations in 186 genes, with detailed information associated with 164 phenotypes from 934 publications, and various types of functional annotations. Extensive annotations were then performed for each gene using various resources, including Gene Ontology, KEGG pathways, protein-protein interactions, mutational annotations and a gene-disease network. With its search functions, convenient browsing and intuitive graphical displays, 'RetinoGenetics' can serve as a valuable resource for unveiling the genetic basis of IRD. Taken together, 'RetinoGenetics' is an integrative, informative and updatable resource for IRD-related genetic predispositions. Database URL: http://www.retinogenetics.org/. © The Author(s) 2014. Published by Oxford University Press.

  11. Regional early flood warning system: design and implementation

    NASA Astrophysics Data System (ADS)

    Chang, L. C.; Yang, S. N.; Kuo, C. L.; Wang, Y. F.

    2017-12-01

    This study proposes a prototype regional early flood inundation warning system for Tainan City, Taiwan. AI technology is used to forecast multi-step-ahead regional flood inundation maps during storm events; the computing time is only a few seconds, which enables real-time regional flood inundation forecasting. A database is built to organize data and information for building the real-time forecasting models, maintaining the relations among forecasted points, and displaying forecasted results. Real-time data acquisition is another key task, as the model must access rain gauge information immediately to provide forecast services. All database-related programs are built on Microsoft SQL Server, using Visual C# to extract real-time hydrological data, manage data, store forecasted data and supply information to the visual map-based display. The system uses up-to-date Web technologies, driven by the database and real-time data acquisition, to display online forecasts of flood inundation depths in the study area. The friendly interface sequentially shows the inundated area on Google Maps along with the maximum inundation depth and its location, and provides a KMZ download of the results for viewing in Google Earth. The developed system can provide all the relevant information and online forecast results, helping city authorities make decisions during typhoon events and take action to mitigate losses.

  12. Assessment of COPD-related outcomes via a national electronic medical record database.

    PubMed

    Asche, Carl; Said, Quayyim; Joish, Vijay; Hall, Charles Oaxaca; Brixner, Diana

    2008-01-01

    The technology and sophistication of healthcare utilization databases have expanded over the last decade to include results of lab tests, vital signs, and other clinical information. This review provides an assessment of the methodological and analytical challenges of conducting chronic obstructive pulmonary disease (COPD) outcomes research in a national electronic medical record (EMR) dataset, its potential application to the assessment of national health policy issues, and a description of the challenges and limitations. An EMR database and its application to measuring outcomes for COPD are described. The ability to measure, in this database, adherence to the evidence-based COPD practice guidelines generated by the NIH and to HEDIS quality indicators was examined. Case studies, before and after publication of the guidelines, were used to assess adherence and gauge conformity to the quality indicators. The EMR was the only source of information for pulmonary function tests, but the low frequency with which primary care ordered them was an issue. EMR data can be used to explore the impact of variation in healthcare provision on clinical outcomes, and the EMR database permits access to specific lab data and biometric information. The richness and depth of information on "real world" use of health services for large population-based analytical studies, at relatively low cost, render such databases an attractive resource for outcomes research. Various sources of information exist for outcomes research; it is important to understand the desired endpoints of such research and choose the appropriate database source.

  13. What have we learned in minimally invasive colorectal surgery from NSQIP and NIS large databases? A systematic review.

    PubMed

    Batista Rodríguez, Gabriela; Balla, Andrea; Corradetti, Santiago; Martinez, Carmen; Hernández, Pilar; Bollo, Jesús; Targarona, Eduard M

    2018-06-01

    "Big data" refers to large amount of dataset. Those large databases are useful in many areas, including healthcare. The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) and the National Inpatient Sample (NIS) are big databases that were developed in the USA in order to record surgical outcomes. The aim of the present systematic review is to evaluate the type and clinical impact of the information retrieved through NISQP and NIS big database articles focused on laparoscopic colorectal surgery. A systematic review was conducted using The Meta-Analysis Of Observational Studies in Epidemiology (MOOSE) guidelines. The research was carried out on PubMed database and revealed 350 published papers. Outcomes of articles in which laparoscopic colorectal surgery was the primary aim were analyzed. Fifty-five studies, published between 2007 and February 2017, were included. Articles included were categorized in groups according to the main topic as: outcomes related to surgical technique comparisons, morbidity and perioperatory results, specific disease-related outcomes, sociodemographic disparities, and academic training impact. NSQIP and NIS databases are just the tip of the iceberg for the potential application of Big Data technology and analysis in MIS. Information obtained through big data is useful and could be considered as external validation in those situations where a significant evidence-based medicine exists; also, those databases establish benchmarks to measure the quality of patient care. Data retrieved helps to inform decision-making and improve healthcare delivery.

  14. A systematic review of patient acceptance of consumer health information technology.

    PubMed

    Or, Calvin K L; Karsh, Ben-Tzion

    2009-01-01

    A systematic literature review was performed to identify variables promoting consumer health information technology (CHIT) acceptance among patients. The electronic bibliographic databases Web of Science, Business Source Elite, CINAHL, Communication and Mass Media Complete, MEDLINE, PsycArticles, and PsycInfo were searched. A cited reference search of articles meeting the inclusion criteria was also conducted to reduce misses. Fifty-two articles met the selection criteria. Among them, 94 different variables were tested for associations with acceptance. Most of those tested (71%) were patient factors, including sociodemographic characteristics, health- and treatment-related variables, and prior experience or exposure to computer/health technology. Only ten variables were related to human-technology interaction; 16 were organizational factors; and one was related to the environment. In total, 62 (66%) were found to predict acceptance in at least one study. Existing literature focused largely on patient-related factors. No studies examined the impact of social and task factors on acceptance, and few tested the effects of organizational or environmental factors on acceptance. Future research guided by technology acceptance theories should fill those gaps to improve our understanding of patient CHIT acceptance, which in turn could lead to better CHIT design and implementation.

  15. DIY Analytics for Postsecondary Students

    ERIC Educational Resources Information Center

    Arndt, Timothy; Guercio, Angela

    2014-01-01

    Recently organizations have begun to realize the potential value in the huge amounts of raw, constantly fluctuating data sets that they generate and, with the help of advances in storage and processing technologies, collect. This leads to the phenomenon of big data. This data may be stored in structured format in relational database systems, but…

  16. 21 CFR 1300.03 - Definitions relating to electronic orders for controlled substances and electronic prescriptions...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... records on its servers. Audit trail means a record showing who has accessed an information technology... identity of the user as a prerequisite to allowing access to the information application. Authentication... information in a database. (4) Comparing the biometric data with data contained in one or more reference...

  17. Semiannual patents review, January — June 2001.

    Treesearch

    Marguerite S. Sykes; Julie Blankenburg

    2001-01-01

    This review summarizes patents related to paper recycling that were issued during the first 6 months of 2001. Two online databases, Claims/U.S. Patents Abstracts and Derwent World Patents Index, were searched for this review. This semiannual feature is intended to inform readers about recent developments in equipment design, chemicals, and process technology for...

  18. Semiannual patents review, July 2001-December 2001

    Treesearch

    Roland Gleisner; Marguerite Sykes; Julie Blankenburg

    2002-01-01

    This review summarizes patents related to paper recycling that were issued during the last six months of 2001. Two on-line databases, Claims/U.S. Patents Abstracts and Derwent World Patents Index, were searched for this review. This semiannual feature is intended to inform readers about recent developments in equipment design, chemicals and process technology for...

  19. Spreadsheets for Analyzing and Optimizing Space Missions

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Agrawal, Anil K.; Czikmantory, Akos J.; Weisbin, Charles R.; Hua, Hook; Neff, Jon M.; Cowdin, Mark A.; Lewis, Brian S.; Iroz, Juana; Ross, Rick

    2009-01-01

    XCALIBR (XML Capability Analysis LIBRary) is a set of Extensible Markup Language (XML) database and spreadsheet-based analysis software tools designed to assist in technology-return-on-investment analysis and optimization of technology portfolios pertaining to outer-space missions. XCALIBR is also being examined for use in planning, tracking, and documentation of projects. An XCALIBR database contains information on mission requirements and technological capabilities, which are related by use of an XML taxonomy. XCALIBR incorporates a standardized interface for exporting data and analysis templates to an Excel spreadsheet. Unique features of XCALIBR include the following: It is inherently hierarchical by virtue of its XML basis. The XML taxonomy codifies a comprehensive data structure and data dictionary that includes performance metrics for spacecraft, sensors, and spacecraft systems other than sensors. The taxonomy contains >700 nodes representing all levels, from system through subsystem to individual parts. All entries are searchable and machine readable. There is an intuitive Web-based user interface. The software automatically matches technologies to mission requirements. The software automatically generates, and makes the required entries in, an Excel return-on-investment analysis software tool. The results of an analysis are presented in both tabular and graphical displays.

  20. Graphical tool for navigation within the semantic network of the UMLS metathesaurus on a locally installed database.

    PubMed

    Frankewitsch, T; Prokosch, H U

    2000-01-01

    Knowledge in information-technology environments is bound to structured vocabularies. Medical data dictionaries are necessary for uniquely describing findings such as diagnoses, procedures, or functions. We therefore decided to install a local version of the Unified Medical Language System (UMLS) of the U.S. National Library of Medicine as a repository for defining entries of a medical multimedia database. Because the vocabulary must be extensible, both in concepts and in relations between existing concepts, a graphical tool for appending new items to the database has been developed. Although the database is an instance of a semantic network, focusing on a single entry offers the opportunity of reducing the net, locally, to a tree. Based on graph theory, nodes of concepts and nodes of knowledge are defined. The UMLS additionally offers the specification of sub-relations, which can be represented as well. Using this view, it is possible to manage these 1:n relations in a simple tree view. On this basis, an explorer-like graphical user interface has been realised for adding new concepts and defining new relationships between them and existing entries, adapting the UMLS for specific purposes such as describing medical multimedia objects.
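
    The graph-to-tree reduction described above can be sketched as a breadth-first traversal that keeps only the first path found to each concept. The triples and relation names below are invented for illustration, not actual UMLS content.

```python
from collections import deque

def tree_view(edges, focus, max_depth=2):
    """Reduce a directed relation graph to a tree rooted at `focus` by
    breadth-first traversal, keeping only the first path to each node."""
    children = {}
    for src, rel, dst in edges:
        children.setdefault(src, []).append((rel, dst))
    tree = {focus: []}
    seen = {focus}
    queue = deque([(focus, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for rel, dst in children.get(node, []):
            if dst not in seen:  # first visit wins: a tree, not a graph
                seen.add(dst)
                tree.setdefault(node, []).append((rel, dst))
                queue.append((dst, depth + 1))
    return tree

# Invented UMLS-like triples (concept, relation, concept)
edges = [
    ("Pneumonia", "is_a", "Lung Disease"),
    ("Pneumonia", "has_finding", "Fever"),
    ("Lung Disease", "is_a", "Disease"),
    ("Fever", "is_a", "Sign"),
]
print(tree_view(edges, "Pneumonia"))
```

    Each concept's sub-relations become an ordered child list, which is exactly the 1:n shape an explorer-style tree widget can display.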

  1. USDA food and nutrient databases provide the infrastructure for food and nutrition research, policy, and practice.

    PubMed

    Ahuja, Jaspreet K C; Moshfegh, Alanna J; Holden, Joanne M; Harris, Ellen

    2013-02-01

    The USDA food and nutrient databases provide the basic infrastructure for food and nutrition research, nutrition monitoring, policy, and dietary practice. They have had a long history that goes back to 1892 and are unique, as they are the only databases available in the public domain that perform these functions. There are 4 major food and nutrient databases released by the Beltsville Human Nutrition Research Center (BHNRC), part of the USDA's Agricultural Research Service. These include the USDA National Nutrient Database for Standard Reference, the Dietary Supplement Ingredient Database, the Food and Nutrient Database for Dietary Studies, and the USDA Food Patterns Equivalents Database. The users of the databases are diverse and include federal agencies, the food industry, health professionals, restaurants, software application developers, academia and research organizations, international organizations, and foreign governments, among others. Many of these users have partnered with BHNRC to leverage funds and/or scientific expertise to work toward common goals. The use of the databases has increased tremendously in the past few years, especially the breadth of uses. These new uses of the data are bound to increase with the increased availability of technology and public health emphasis on diet-related measures such as sodium and energy reduction. Hence, continued improvement of the databases is important, so that they can better address these challenges and provide reliable and accurate data.

  2. Technology and the Modern Library.

    ERIC Educational Resources Information Center

    Boss, Richard W.

    1984-01-01

    Overview of the impact of information technology on libraries highlights turnkey vendors, bibliographic utilities, commercial suppliers of records, state and regional networks, computer-to-computer linkages, remote database searching, terminals and microcomputers, building local databases, delivery of information, digital telefacsimile,…

  3. [Construction and application of a spatial analysis database of geoherbs based on 3S technology].

    PubMed

    Guo, Lan-ping; Huang, Lu-qi; Lv, Dong-mei; Shao, Ai-juan; Wang, Jian

    2007-09-01

    In this paper, the structure, data sources, and data codes of "the spatial analysis database of geoherbs," based on 3S technology, are introduced, and the essential functions of the database, such as data management, remote sensing, spatial interpolation, spatial statistics, spatial analysis, and development, are described. Finally, two examples of database usage are given: the first is the classification and NDVI-index calculation of remote sensing images in the geoherbal area of Atractylodes lancea; the second is an adaptation analysis of A. lancea. These examples indicate that "the spatial analysis database of geoherbs" has bright prospects for the spatial analysis of geoherbs.

  4. Mobile and Web 2.0 interventions for weight management: an overview of review evidence and its methodological quality

    PubMed Central

    Smith, Jane R.; Samaha, Laya; Abraham, Charles

    2016-01-01

    Background: The use of the Internet and related technologies for promoting weight management (WM), physical activity (PA), or dietary-related behaviours has been examined in many articles and systematic reviews. This overview aims to summarize and assess the quality of the review evidence, focusing specifically on mobile and Web 2.0 technologies, which are the most utilized currently available technologies. Methods: Following a registered protocol (CRD42014010323), we searched 16 databases for articles published in English until 31 December 2014 discussing the use of either mobile or Web 2.0 technologies to promote WM or related behaviours, i.e. diet and PA. Two reviewers independently selected reviews and assessed their methodological quality using the AMSTAR checklist. Citation matrices were used to determine the overlap among reviews. Results: Forty-four eligible reviews were identified, 39 of which evaluated the effects of interventions using mobile or Web 2.0 technologies. Methodological quality was generally low, with only 7 reviews (16%) meeting the highest standards. Suggestive evidence exists for positive effects of mobile technologies on weight-related outcomes and, to a lesser extent, PA. Evidence is inconclusive regarding Web 2.0 technologies. Conclusions: Reviews on mobile and Web 2.0 interventions for WM and related behaviours suggest that these technologies can, under certain circumstances, be effective, but conclusions are limited by poor review quality based on a heterogeneous evidence base. PMID:27335330

  5. NADM Conceptual Model 1.0 -- A Conceptual Model for Geologic Map Information

    USGS Publications Warehouse

    ,

    2004-01-01

    Executive Summary -- The NADM Data Model Design Team was established in 1999 by the North American Geologic Map Data Model Steering Committee (NADMSC) with the purpose of drafting a geologic map data model for consideration as a standard for developing interoperable geologic map-centered databases by state, provincial, and federal geological surveys. The model is designed to be a technology-neutral conceptual model that can form the basis for a web-based interchange format using evolving information technology (e.g., XML, RDF, OWL), and guide implementation of geoscience databases in a common conceptual framework. The intended purpose is to allow geologic information sharing between geologic map data providers and users, independent of local information system implementation. The model emphasizes geoscience concepts and relationships related to information presented on geologic maps. Design has been guided by an informal requirements analysis, documentation of existing databases, technology developments, and other standardization efforts in the geoscience and computer-science communities. A key aspect of the model is the notion that representation of the conceptual framework (ontology) that underlies geologic map data must be part of the model, because this framework changes with time and understanding, and varies between information providers. The top level of the model distinguishes geologic concepts, geologic representation concepts, and metadata. The geologic representation part of the model provides a framework for representing the ontology that underlies geologic map data through a controlled vocabulary, and for establishing the relationships between this vocabulary and a geologic map visualization or portrayal. Top-level geologic classes in the model are Earth material (substance), geologic unit (parts of the Earth), geologic age, geologic structure, fossil, geologic process, geologic relation, and geologic event.

  6. Generalized Database Management System Support for Numeric Database Environments.

    ERIC Educational Resources Information Center

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  7. [A Terahertz Spectral Database Based on Browser/Server Technique].

    PubMed

    Zhang, Zhuo-yong; Song, Yue

    2015-09-01

    With the solution of key scientific and technical problems and the development of instrumentation, the application of terahertz technology in various fields has received more and more attention. Owing to its unique advantages, terahertz technology shows broad promise for fast, non-damaging detection, as well as in many other fields. Terahertz technology combined with complementary methods can be used to tackle difficult practical problems that could not be solved before. One of the critical points for the further development of practical terahertz detection methods is a good and reliable terahertz spectral database. We recently developed a browser/server (BS)-based terahertz spectral database, designing its main structure and functions to meet practical requirements. The database now includes more than 240 items, with spectral information collected from three sources: (1) collection and citation from other terahertz spectral databases abroad; (2) published literature; and (3) spectral data measured in our laboratory. This paper introduces the basic structure and fundamental functions of the database. One of its key functions is the calculation of optical parameters: the absorption coefficient, refractive index, and other optical parameters can be calculated from input THz time-domain spectra. The other main functions and search methods of the browser/server-based database are also discussed. The search system provides registered users with convenient functions including registration, inquiry, display of spectral figures and molecular structures, and spectral matching, and offers an on-line search function: registered users can compare an input THz spectrum with the spectra in the database and, according to the resulting correlation coefficients, complete a search quickly and conveniently. The database can be accessed at http://www.teralibrary.com. It is based on spectral information so far and will be improved in the future. We hope this terahertz spectral database can provide users with powerful, convenient, and highly efficient functions, and promote the broader application of terahertz technology.
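
    As a rough illustration of the optical-parameter calculation mentioned above, the refractive index and absorption coefficient can be estimated per spectral point from the amplitude ratio and unwrapped phase difference between sample and reference THz pulses. This is a generic thick-sample approximation with invented numbers, not the database's actual algorithm.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def optical_params(freq_hz, amp_ratio, phase_diff, d):
    """Estimate refractive index n and absorption coefficient alpha (1/m)
    at one spectral point from THz time-domain spectroscopy data.
    freq_hz    : frequency of the spectral point (Hz)
    amp_ratio  : |E_sample| / |E_reference|
    phase_diff : unwrapped phase difference, sample minus reference (rad)
    d          : sample thickness (m)
    """
    omega = 2 * math.pi * freq_hz
    n = 1.0 + C * phase_diff / (omega * d)          # refractive index
    fresnel = 4 * n / (n + 1) ** 2                  # loss at the two sample faces
    alpha = -(2.0 / d) * math.log(amp_ratio / fresnel)
    return n, alpha

# Illustrative numbers only (not measured data): 1 THz, 1 mm thick sample
n, alpha = optical_params(1e12, 0.5, 35.0, 1e-3)
```

    A real pipeline would apply this point-by-point across the Fourier transform of the recorded time-domain waveforms.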

  8. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle that tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes.
Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. Modeling and managing pathology image analysis results in a database provide immediate benefits on the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. 
    Standardized, semantically annotated data representations and interfaces also make it possible to share image data and analysis results more efficiently.

  9. Drug-Path: a database for drug-induced pathways

    PubMed Central

    Zeng, Hui; Cui, Qinghua

    2015-01-01

    Several databases of drug-associated pathways have been built and are publicly available. However, the pathways curated in most of these databases are drug-action or drug-metabolism pathways. In recent years, high-throughput technologies such as microarrays and RNA-sequencing have produced large numbers of drug-induced gene expression profiles. Interestingly, drug-induced gene expression profiles frequently show distinct patterns, indicating that drugs normally induce the activation or repression of distinct pathways. These pathways therefore contribute to the study of drug mechanisms and to drug repurposing. Here, we present Drug-Path, a database of drug-induced pathways generated by KEGG pathway enrichment analysis of drug-induced upregulated and downregulated genes, based on the drug-induced gene expression datasets in Connectivity Map. Drug-Path provides user-friendly interfaces to retrieve, visualize and download the drug-induced pathway data in the database. In addition, the genes deregulated by a given drug are highlighted in the pathways. All data were organized using SQLite, and the web site was implemented using Django, a Python web framework. We believe this database will be useful for related research. Database URL: http://www.cuilab.cn/drugpath PMID:26130661
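
    The pathway enrichment step described above typically reduces to a hypergeometric test per pathway: how surprising is the overlap between a drug's deregulated gene set and the pathway's gene set? A minimal sketch with toy gene counts (not Connectivity Map data):

```python
from math import comb

def enrichment_p(n_genome, n_pathway, n_hits, n_overlap):
    """Hypergeometric P(X >= n_overlap): the chance that a random draw of
    `n_hits` genes from a genome of `n_genome` genes contains at least
    `n_overlap` members of a pathway of size `n_pathway`."""
    total = comb(n_genome, n_hits)
    upper = min(n_pathway, n_hits)
    p = 0.0
    for k in range(n_overlap, upper + 1):
        p += comb(n_pathway, k) * comb(n_genome - n_pathway, n_hits - k) / total
    return p

# Toy numbers: 20000-gene genome, 100-gene pathway,
# 200 drug-upregulated genes, 10 of which fall in the pathway
p = enrichment_p(20000, 100, 200, 10)
```

    With an expected overlap of only one gene, an observed overlap of ten yields a very small p-value, flagging the pathway as drug-induced; a real analysis would also correct for testing many pathways.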

  10. Lessons Learned and Technical Standards: A Logical Marriage for Future Space Systems Design

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Garcia, Danny; Vaughan, William W.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    A comprehensive database of engineering lessons learned that corresponds with relevant technical standards will be a valuable asset to those engaged in studies on future space vehicle developments, especially for structures, materials, propulsion, control, operations and associated elements. In addition, this will enable the capturing of technology developments applicable to the design, development, and operation of future space vehicles as planned in the Space Launch Initiative. Using the time-honored tradition of passing on lessons learned while utilizing the newest information technology, NASA has launched an intensive effort to link lessons learned acquired through various Internet databases with applicable technical standards. This paper will discuss the importance of lessons learned, the difficulty in finding relevant lessons learned while engaged in a space vehicle development, and the new NASA effort to relate them to technical standards that can help alleviate this difficulty.

  11. Intelligent data management

    NASA Technical Reports Server (NTRS)

    Campbell, William J.

    1985-01-01

    Intelligent data management is the concept of interfacing a user to a database management system through a value-added service that allows a full range of data management operations at a high level of abstraction, using written human language. The development of such a system will be based on expert systems and related artificial intelligence technologies, and will allow the capture of procedural and relational knowledge about data management operations and the support of a user with such knowledge in an on-line, interactive manner. Such a system will have the following capabilities: (1) the ability to construct a model of the user's view of the database, based on query syntax; (2) the ability to transform English queries and commands into database instructions and processes; (3) the ability to use heuristic knowledge to rapidly prune the data space in search processes; and (4) the ability to use an on-line explanation system to allow the user to understand what the system is doing and why. Additional information is given in outline form.
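
    Capability (2), translating English commands into database instructions, can be caricatured with a few pattern-to-SQL rules; the system envisioned above would use expert-system knowledge bases rather than regular expressions, and the table names below are invented.

```python
import re

# Toy illustration of mapping a written-English request onto a database
# operation. Each rule pairs a pattern with a SQL builder.
PATTERNS = [
    (re.compile(r"show (?:me )?all (\w+)"),
     lambda m: f"SELECT * FROM {m.group(1)}"),
    (re.compile(r"how many (\w+)"),
     lambda m: f"SELECT COUNT(*) FROM {m.group(1)}"),
]

def to_instruction(query):
    """Return a SQL instruction for a recognized request, else None."""
    q = query.lower().rstrip("?")
    for pattern, build in PATTERNS:
        m = pattern.search(q)
        if m:
            return build(m)
    return None

cmd = to_instruction("How many observations?")
```

    An unrecognized request returns None, which is where the proposed explanation subsystem would step in to tell the user why the query could not be interpreted.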

  12. Incremental Query Rewriting with Resolution

    NASA Astrophysics Data System (ADS)

    Riazanov, Alexandre; Aragão, Marcelo A. T.

    We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a resolution-based first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent translation of these schematic answers to SQL queries which are evaluated using a conventional relational DBMS. We call our method incremental query rewriting, because an original semantic query is rewritten into a (potentially infinite) series of SQL queries. In this chapter, we outline the main idea of our technique - using abstractions of databases and constrained clauses for deriving schematic answers, and provide completeness and soundness proofs to justify the applicability of this technique to the case of resolution for FOL without equality. The proposed method can be directly used with regular RDBs, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
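
    The final translation step, from a schematic answer (a conjunction of relational atoms) to an SQL query, might look like the following sketch: shared variables become join conditions and constants become selections. The atom representation and the schema are invented for illustration, not the authors' actual encoding.

```python
def atoms_to_sql(atoms, answer_vars):
    """Translate a conjunction of relational atoms into a SELECT statement.
    Each atom is (table, {column: term}); a term starting with '?' is a
    variable, anything else a constant. Repeated variables become joins."""
    tables, where, bindings = [], [], {}
    for i, (table, cols) in enumerate(atoms):
        alias = f"t{i}"
        tables.append(f"{table} {alias}")
        for col, term in cols.items():
            ref = f"{alias}.{col}"
            if term.startswith("?"):
                if term in bindings:          # repeated variable -> join
                    where.append(f"{ref} = {bindings[term]}")
                else:
                    bindings[term] = ref
            else:                             # constant -> selection
                where.append(f"{ref} = '{term}'")
    select = ", ".join(f"{bindings[v]} AS {v[1:]}" for v in answer_vars)
    sql = f"SELECT {select} FROM {', '.join(tables)}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql

# Hypothetical schematic answer: names of employees whose department is in Paris
sql = atoms_to_sql(
    [("employee", {"name": "?x", "dept": "?d"}),
     ("department", {"id": "?d", "city": "Paris"})],
    ["?x"],
)
```

    Each newly derived schematic answer in the incremental series would be compiled this way and handed to the conventional RDBMS for evaluation.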

  13. FJET Database Project: Extract, Transform, and Load

    NASA Technical Reports Server (NTRS)

    Samms, Kevin O.

    2015-01-01

    The Data Mining & Knowledge Management team at Kennedy Space Center is providing data management services to the Frangible Joint Empirical Test (FJET) project at Langley Research Center (LARC). FJET is a project under the NASA Engineering and Safety Center (NESC). The purpose of FJET is to conduct an assessment of mild detonating fuse (MDF) frangible joints (FJs) for human spacecraft separation tasks in support of the NASA Commercial Crew Program. The Data Mining & Knowledge Management team has been tasked with creating and managing a database for the efficient storage and retrieval of FJET test data. This paper details the Extract, Transform, and Load (ETL) process as it is related to gathering FJET test data into a Microsoft SQL relational database, and making that data available to the data users. Lessons learned, procedures implemented, and programming code samples are discussed to help detail the learning experienced as the Data Mining & Knowledge Management team adapted to changing requirements and new technology while maintaining flexibility of design in various aspects of the data management project.
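
    A minimal sketch of the Extract-Transform-Load pattern described above, using Python's built-in sqlite3 as a stand-in for the project's SQL database; the table and field names are invented, not FJET's actual schema.

```python
import sqlite3

# Extract: rows as they might arrive from raw test files (invented fields)
raw_rows = [
    {"test_id": "FJ-001", "pressure_psi": "153.2", "result": "PASS"},
    {"test_id": "FJ-002", "pressure_psi": "n/a",   "result": "FAIL"},
]

def transform(row):
    """Coerce types and normalize missing values before loading."""
    p = row["pressure_psi"]
    return (row["test_id"],
            float(p) if p not in ("", "n/a") else None,
            row["result"] == "PASS")

# Load: parameterized inserts into a relational table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fj_test (test_id TEXT PRIMARY KEY,"
             " pressure_psi REAL, passed INTEGER)")
conn.executemany("INSERT INTO fj_test VALUES (?, ?, ?)",
                 (transform(r) for r in raw_rows))
passed = conn.execute("SELECT COUNT(*) FROM fj_test WHERE passed").fetchone()[0]
```

    Keeping the transform step as a separate function is what gives the pipeline its flexibility: when source-file formats change, only that function needs to follow.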

  14. Accuracy of LightCycler® SeptiFast for the detection and identification of pathogens in the blood of patients with suspected sepsis: a systematic review protocol

    PubMed Central

    Wilson, Claire; Blackwood, Bronagh; McAuley, Danny F; Perkins, Gavin D; McMullan, Ronan; Gates, Simon; Warhurst, Geoffrey

    2012-01-01

    Background: There is growing interest in the potential utility of molecular diagnostics in improving the detection of life-threatening infection (sepsis). LightCycler® SeptiFast is a multipathogen probe-based real-time PCR system targeting DNA sequences of bacteria and fungi present in blood samples, with results available within a few hours. We report here the protocol of the first systematic review of published clinical diagnostic accuracy studies of this technology when compared with blood culture in the setting of suspected sepsis. Methods/design: Data sources: the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment Database (HTA), the NHS Economic Evaluation Database (NHSEED), The Cochrane Library, MEDLINE, EMBASE, ISI Web of Science, BIOSIS Previews, MEDION and the Aggressive Research Intelligence Facility Database (ARIF). Study selection: diagnostic accuracy studies that compare the real-time PCR technology with standard culture results performed on a patient's blood sample during the management of sepsis. Data extraction: three reviewers, working independently, will determine the level of evidence, methodological quality and a standard data set relating to demographics and diagnostic accuracy metrics for each study. Statistical analysis/data synthesis: heterogeneity of studies will be investigated using a coupled forest plot of sensitivity and specificity and a scatter plot in Receiver Operator Characteristic (ROC) space. The bivariate model method will be used to estimate summary sensitivity and specificity. The authors will investigate reporting biases using funnel plots based on effective sample size and regression tests of asymmetry. Subgroup analyses are planned for adults, children and infection setting (hospital vs community) if sufficient data are uncovered. Dissemination: Recommendations will be made to the Department of Health (as part of an open-access HTA report) as to whether the real-time PCR technology has sufficient clinical diagnostic accuracy potential to move forward to efficacy testing during the provision of routine clinical care. Registration: PROSPERO, the NIHR Prospective Register of Systematic Reviews (CRD42011001289). PMID:22240646
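
    The summary statistics pooled by the bivariate model derive from each study's 2x2 table; per study, sensitivity and specificity are computed as below (the counts are illustrative only, not data from any SeptiFast study):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity of an index test against a reference
    standard (here, blood culture) from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)   # true positives among the diseased
    specificity = tn / (tn + fp)   # true negatives among the healthy
    return sensitivity, specificity

# Illustrative counts: 80 true positives, 10 false positives,
# 20 false negatives, 190 true negatives
sens, spec = diagnostic_accuracy(tp=80, fp=10, fn=20, tn=190)
```

    Plotting each study's (1 - specificity, sensitivity) pair gives the ROC-space scatter plot the protocol describes.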

  15. A Database for Decision-Making in Training and Distributed Learning Technology

    DTIC Science & Technology

    1998-04-01

    developer must answer these questions: ♦ Who will develop the courseware? Should we outsource? ♦ What media should we use? How much will it cost? ♦ What...to develop, the database can be useful for answering staffing questions and planning transitions to technology-assisted courses. The database...of distributed learning curricula in comparison to traditional methods. To develop a military-wide distributed learning plan, the existing course

  16. WEB-BASED DATABASE ON RENEWAL TECHNOLOGIES ...

    EPA Pesticide Factsheets

    As U.S. utilities continue to shore up their aging infrastructure, renewal needs now represent over 43% of annual expenditures compared to new construction for drinking water distribution and wastewater collection systems (Underground Construction [UC], 2016). An increased understanding of renewal options will ultimately assist drinking water utilities in reducing water loss and help wastewater utilities to address infiltration and inflow issues in a cost-effective manner. It will also help to extend the service lives of both drinking water and wastewater mains. This research effort involved collecting case studies on the use of various trenchless pipeline renewal methods and providing the information in an online searchable database. The overall objective was to further support technology transfer and information sharing regarding emerging and innovative renewal technologies for water and wastewater mains. The result of this research is a Web-based, searchable database that utility personnel can use to obtain technology performance and cost data, as well as case study references. The renewal case studies include: technologies used; the conditions under which the technology was implemented; costs; lessons learned; and utility contact information. The online database also features a data mining tool for automated review of the technologies selected and cost data. Based on a review of the case study results and industry data, several findings are presented on tren

  17. Nuclear science abstracts (NSA) database 1948--1974 (on the Internet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Nuclear Science Abstracts (NSA) is a comprehensive abstract and index collection of the International Nuclear Science and Technology literature for the period 1948 through 1976. Included are scientific and technical reports of the US Atomic Energy Commission, US Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Coverage of the literature since 1976 is provided by Energy Science and Technology Database. Approximately 25% of the records in the file contain abstracts. These are from the following volumes of the print Nuclear Science Abstracts: Volumes 12--18, Volume 29, and Volume 33. The database contains over 900,000 bibliographic records. All aspects of nuclear science and technology are covered, including: Biomedical Sciences; Metals, Ceramics, and Other Materials; Chemistry; Nuclear Materials and Waste Management; Environmental and Earth Sciences; Particle Accelerators; Engineering; Physics; Fusion Energy; Radiation Effects; Instrumentation; Reactor Technology; Isotope and Radiation Source Technology. The database includes all records contained in Volume 1 (1948) through Volume 33 (1976) of the printed version of Nuclear Science Abstracts (NSA). This worldwide coverage includes books, conference proceedings, papers, patents, dissertations, engineering drawings, and journal literature. This database is now available for searching through the GOV. Research Center (GRC) service. GRC is a single online web-based search service to well known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.

  18. Technological innovation in neurosurgery: a quantitative study.

    PubMed

    Marcus, Hani J; Hughes-Hallett, Archie; Kwasnicki, Richard M; Darzi, Ara; Yang, Guang-Zhong; Nandi, Dipankar

    2015-07-01

    Technological innovation within health care may be defined as the introduction of a new technology that initiates a change in clinical practice. Neurosurgery is a particularly technology-intensive surgical discipline, and new technologies have preceded many of the major advances in operative neurosurgical techniques. The aim of the present study was to quantitatively evaluate technological innovation in neurosurgery using patents and peer-reviewed publications as metrics of technology development and clinical translation, respectively. The authors searched a patent database for articles published between 1960 and 2010 using the Boolean search term "neurosurgeon OR neurosurgical OR neurosurgery." The top 50 performing patent codes were then grouped into technology clusters. Patent and publication growth curves were then generated for these technology clusters. A top-performing technology cluster was then selected as an exemplar for a more detailed analysis of individual patents. In all, 11,672 patents and 208,203 publications related to neurosurgery were identified. The top-performing technology clusters during these 50 years were image-guidance devices, clinical neurophysiology devices, neuromodulation devices, operating microscopes, and endoscopes. In relation to image-guidance and neuromodulation devices, the authors found a highly correlated rapid rise in the numbers of patents and publications, which suggests that these are areas of technology expansion. An in-depth analysis of neuromodulation-device patents revealed that the majority of well-performing patents were related to deep brain stimulation. Patent and publication data may be used to quantitatively evaluate technological innovation in neurosurgery.

  19. CSPMS supported by information technology

    NASA Astrophysics Data System (ADS)

    Zhang, Hudan; Wu, Heng

    This paper proposes a whole new viewpoint on building a CSPMS (Coal-mine Safety Production Management System) by means of information technology. The system, whose core is a four-grade automatically triggered warning system, ensures that information transmission is smooth, lossless, and timely. At the same time, the system provides a comprehensive, collective technology platform for various public management organizations and coal-mine production units to handle safety management, advance warning, unexpected incidents, preplan implementation, and resource deployment at different levels. The system's database will also effectively support the related national industry's resource control, planning, statistics, taxation, and the development of laws and regulations.

  20. Cutaneous lichen planus: A systematic review of treatments.

    PubMed

    Fazel, Nasim

    2015-06-01

    Various treatment modalities are available for cutaneous lichen planus. Pubmed, EMBASE, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Database of Abstracts of Reviews of Effects, and Health Technology Assessment Database were searched for all the systematic reviews and randomized controlled trials related to cutaneous lichen planus. Two systematic reviews and nine relevant randomized controlled trials were identified. Acitretin, griseofulvin, hydroxychloroquine and narrow band ultraviolet B are demonstrated to be effective in the treatment of cutaneous lichen planus. Sulfasalazine is effective, but has an unfavorable safety profile. KH1060, a vitamin D analogue, is not beneficial in the management of cutaneous lichen planus. Evidence from large scale randomized trials demonstrating the safety and efficacy for many other treatment modalities used to treat cutaneous lichen planus is simply not available.

  1. What Is eHealth (4): A Scoping Exercise to Map the Field

    PubMed Central

    Sloan, David; Gregor, Peter; Sullivan, Frank; Detmer, Don; Kahan, James P; Oortwijn, Wija; MacGillivray, Steve

    2005-01-01

    Background Lack of consensus on the meaning of eHealth has led to uncertainty among academics, policymakers, providers and consumers. This project was commissioned in light of the rising profile of eHealth on the international policy agenda and the emerging UK National Programme for Information Technology (now called Connecting for Health) and related developments in the UK National Health Service. Objectives To map the emergence and scope of eHealth as a topic and to identify its place within the wider health informatics field, as part of a larger review of research and expert analysis pertaining to current evidence, best practice and future trends. Methods Multiple databases of scientific abstracts were explored in a nonsystematic fashion to assess the presence of eHealth or conceptually related terms within their taxonomies, to identify journals in which articles explicitly referring to eHealth are contained and the topics covered, and to identify published definitions of the concept. The databases were Medline (PubMed), the Cumulative Index of Nursing and Allied Health Literature (CINAHL), the Science Citation Index (SCI), the Social Science Citation Index (SSCI), the Cochrane Database (including Dare, Central, NHS Economic Evaluation Database [NHS EED], Health Technology Assessment [HTA] database, NHS EED bibliographic) and ISTP (now known as ISI proceedings). We used the search query, “Ehealth OR e-health OR e*health”. The timeframe searched was 1997-2003, although some analyses contain data emerging subsequent to this period. This was supplemented by iterative searches of Web-based sources, such as commercial and policy reports, research commissioning programmes and electronic news pages. Definitions extracted from both searches were thematically analyzed and compared in order to assess conceptual heterogeneity. Results The term eHealth only came into use in the year 2000, but has since become widely prevalent. 
The scope of the topic was not immediately discernible from that of the wider health informatics field, for which over 320,000 publications are listed in Medline alone, and it is not explicitly represented within the existing Medical Subject Headings (MeSH) taxonomy. Applying eHealth as a narrative search term to multiple databases yielded 387 relevant articles, distributed across 154 different journals, most commonly related to information technology and telemedicine, but extending to such areas as law. Most eHealth articles are represented on Medline. Definitions of eHealth vary with respect to the functions, stakeholders, contexts and theoretical issues targeted. Most encompass a broad range of medical informatics applications either specified (eg, decision support, consumer health information) or presented in more general terms (eg, to manage, arrange or deliver health care). However, the majority emphasize the communicative functions of eHealth and specify the use of networked digital technologies, primarily the Internet, thus differentiating eHealth from the field of medical informatics. While some definitions explicitly target health professionals or patients, most encompass applications for all stakeholder groups. The nature of the scientific and broader literature pertaining to eHealth closely reflects these conceptualizations. Conclusions We surmise that the field, as it stands today, may be characterized by the global definitions suggested by Eysenbach and Eng. PMID:15829481

  2. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Senoo, Tetsuo

    As computer and communication technologies have progressed, many corporations are placing the construction and use of their own databases at the center of their information activities and are seeking to develop those activities in new directions. This paper considers how information management in a corporation is affected by changing management and technology environments, and clarifies and generalizes what in-house databases should be constructed and utilized, from the viewpoints of the requirements to be met, the types and forms of information handled, indexing, use type and frequency, evaluation method, and so on. The author outlines Matsushita's information system, MATIS (Matsushita Technical Information System), as an actual example, and describes the present status of, and some points to keep in mind when, constructing and utilizing the REP, BOOK, and SYMP databases.

  3. 77 FR 71089 - Pilot Loading of Aeronautical Database Updates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-29

    ... the use of newer systems and data-transfer mechanisms such as those employing wireless technology. In... which enables wireless updating of systems and databases. The current regulation does not accommodate... maintenance); Recordkeeping requirements; Training for pilots; Technological advancements in data-transfer...

  4. The contributions of digital technologies in the teaching of nursing skills: an integrative review.

    PubMed

    Silveira, Maurício de Souza; Cogo, Ana Luísa Petersen

    2017-07-13

    To analyze the contributions of digital educational technologies used in teaching nursing skills. Integrative literature review; five databases were searched for the period 2006 to 2015, combining the descriptors 'education, nursing', 'educational technology', and 'computer-assisted instruction' or related terms in English. The sample of 30 articles was grouped into the thematic categories 'technology in simulation with manikins', 'incentive to learning' and 'teaching of nursing skills'. Different formats of digital educational technology used in teaching nursing skills were identified, such as videos, learning management systems, applications, hypertext, games, and virtual reality simulators. These digital materials supported the acquisition of the theoretical foundations that underpin practice, enhanced teaching, and enabled the use of active learning methods, breaking with the traditional approach of demonstrating and repeating procedures.

  5. JICST Factual Database JICST DNA Database

    NASA Astrophysics Data System (ADS)

    Shirokizawa, Yoshiko; Abe, Atsushi

    Japan Information Center of Science and Technology (JICST) has started the on-line service of DNA database in October 1988. This database is composed of EMBL Nucleotide Sequence Library and Genetic Sequence Data Bank. The authors outline the database system, data items and search commands. Examples of retrieval session are presented.

  6. DFACS - DATABASE, FORMS AND APPLICATIONS FOR CABLING AND SYSTEMS, VERSION 3.30

    NASA Technical Reports Server (NTRS)

    Billitti, J. W.

    1994-01-01

    DFACS is an interactive multi-user computer-aided engineering tool for system level electrical integration and cabling engineering. The purpose of the program is to provide the engineering community with a centralized database for entering and accessing system functional definitions, subsystem and instrument-end circuit pinout details, and harnessing data. The primary objective is to provide an instantaneous single point of information interchange, thus avoiding error-prone, time-consuming, and costly multiple-path data shuttling. The DFACS program, which is centered around a single database, has built-in menus that provide easy data input and access for all involved system, subsystem, and cabling personnel. The DFACS program allows parallel design of circuit data sheets and harness drawings. It also recombines raw information to automatically generate various project documents and drawings including the Circuit Data Sheet Index, the Electrical Interface Circuits List, Assembly and Equipment Lists, Electrical Ground Tree, Connector List, Cable Tree, Cabling Electrical Interface and Harness Drawings, Circuit Data Sheets, and ECR List of Affected Interfaces/Assemblies. Real time automatic production of harness drawings and circuit data sheets from the same data reservoir ensures instant system and cabling engineering design harmony. DFACS also contains automatic wire routing procedures and extensive error checking routines designed to minimize the possibility of engineering error. DFACS is designed to run on DEC VAX series computers under VMS using Version 6.3/01 of INGRES QUEL/OSL, a relational database system which is available through Relational Technology, Inc. The program is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. DFACS was developed in 1987 and last updated in 1990. DFACS is a copyrighted work with all copyright vested in NASA. 
DEC, VAX and VMS are trademarks of Digital Equipment Corporation. INGRES QUEL/OSL is a trademark of Relational Technology, Inc.

  7. CHERNOLITTM. Chernobyl Bibliographic Search System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, F., Jr.; Kennedy, R.A.; Mahaffey, J.A.

    1992-03-02

    The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.

  8. Chernobyl Bibliographic Search System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, Jr, F.; Kennedy, R. A.; Mahaffey, J. A.

    1992-05-11

    The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.

  9. Applications of Information Technology in Nursing During 2005-15: Evidence from Iran.

    PubMed

    Meraji, Marziye; Ramazan Ghorbani, Nahid; Mahmoodian, Sanaz; Samadbeik, Mahnaz

    2016-01-01

    In this ever-changing health care environment, nurses employ technologies and information systems to accomplish the intentions of the practice of nursing. Information technology supports basic and advanced nursing practice in all settings. This review provides evidence about applications of information technology in Iranian nursing. We systematically searched all papers about applications of information technology in nursing in Iran that were indexed in the SID, Magiran, IranMedex, PubMed and Scopus databases. This study indicated that 12 (52%) studies used information technologies in the nursing education domain. Also, in 6 (26%) studies telenursing was used for patient care. 3 (13%) of the articles were related to the impact of the use of computer-based information systems on nursing practice. In 2 (9%) papers the researchers developed computerized software for nursing processes. The results of this study indicate the use of information technology in nearly every aspect of nursing in Iran.

  10. Surface Transportation Security Priority Assessment

    DTIC Science & Technology

    2010-03-01

    intercity buses), and pipelines, and related infrastructure (including roads and highways), that are within the territory of the United States...Modernizing the information technology infrastructure used to vet the identity of travelers and transportation workers  Using terrorist databases to...examination of persons travelling , surface transportation modes tend to operate in a much more open environment, making it difficult to screen workers

  11. Semiannual patents review July 2002–December 2002

    Treesearch

    Roland Gleisner; Julie Blankenburg

    2003-01-01

    This review summarizes patents related to paper recycling that were issued during the last six months of 2002. Two on-line databases, Claims/U.S. Patents Abstracts and Derwent World Patents Index, were searched for this review. This semiannual feature is intended to inform readers about recent developments in equipment design, chemicals, and process technology for...

  12. Semiannual patents review, January-June 1999

    Treesearch

    Marguerite Sykes; Julie Blankenburg

    1999-01-01

    This review summarizes patents related to paper recycling that were issued during the first 6 months of 1999. The two on-line databases used for this search were Claims/U.S. Patents Abstracts and Derwent World Patents Index. This semiannual feature is intended to inform readers about the latest developments in equipment design, chemicals, and process technology for...

  13. Semiannual patents review, July-December 1998

    Treesearch

    Matthew Stroika; Marguerite Sykes; Julie Blankenburg

    1999-01-01

    This review summarizes patents related to paper recycling issued during the last 6 months of 1998. The two online databases used for this search were Claims/U.S. Patents Abstracts and Derwent World Patents Index. This semiannual feature is intended to inform readers about the latest developments in equipment, chemicals, and technology in the field of paper recycling. This...

  14. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  15. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  16. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  17. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...
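    The CFR excerpts above describe ODBC as a vendor-neutral interface that lets one database package import data created by a different package. ODBC itself is a C API and needs platform drivers, so as a minimal illustrative analogue this sketch uses Python's standard DB-API with SQLite standing in for the two different packages: the "source" package writes a table to a file, and the "destination" package reads it back through the same standard interface without knowing the source's internals. The table and column names are invented for the example.

    ```python
    # Illustrative analogue of ODBC-style data interchange (not ODBC itself):
    # two "database packages" exchange a table through a standard interface.
    import os
    import sqlite3
    import tempfile

    # "Package A" creates and populates a database file.
    path = os.path.join(tempfile.mkdtemp(), "filings.db")
    src = sqlite3.connect(path)
    src.execute("CREATE TABLE evidence (id INTEGER PRIMARY KEY, item TEXT)")
    src.executemany("INSERT INTO evidence (id, item) VALUES (?, ?)",
                    [(1, "spreadsheet"), (2, "exhibit")])
    src.commit()

    # "Package B" imports the same data through the standard interface,
    # without depending on how package A stored it internally.
    dst = sqlite3.connect(":memory:")
    dst.execute("CREATE TABLE evidence (id INTEGER PRIMARY KEY, item TEXT)")
    rows = src.execute("SELECT id, item FROM evidence ORDER BY id").fetchall()
    dst.executemany("INSERT INTO evidence VALUES (?, ?)", rows)
    dst.commit()

    print(dst.execute("SELECT item FROM evidence WHERE id = 2").fetchone()[0])
    ```

    With real ODBC the same pattern holds: the destination package talks to a driver manager, and the driver translates the standard calls into the source database's native format.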

  18. Advanced life support study

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Summary reports on each of the eight tasks undertaken by this contract are given. Discussed here is an evaluation of a Closed Ecological Life Support System (CELSS), including modeling and analysis of Physical/Chemical Closed Loop Life Support (P/C CLLS); the Environmental Control and Life Support Systems (ECLSS) evolution - Intermodule Ventilation study; advanced technologies interface requirements relative to ECLSS; an ECLSS resupply analysis; the ECLSS module addition relocation systems engineering analysis; an ECLSS cost/benefit analysis to identify rack-level interface requirements of the alternate technologies evaluated in the ventilation study, with a comparison of these with the rack level interface requirements for the baseline technologies; advanced instrumentation - technology database enhancement; and a clean room survey and assessment of various ECLSS evaluation options for different growth scenarios.

  19. Psychology, technology, and diabetes management.

    PubMed

    Gonder-Frederick, Linda A; Shepard, Jaclyn A; Grabman, Jesse H; Ritterband, Lee M

    2016-10-01

    Use of technology in diabetes management is rapidly advancing and has the potential to help individuals with diabetes achieve optimal glycemic control. Over the past 40 years, several devices have been developed and refined, including the blood glucose meter, insulin pump, and continuous glucose monitor. When used in tandem, the insulin pump and continuous glucose monitor have prompted the Artificial Pancreas initiative, aimed at developing a control system for fully automating glucose monitoring and insulin delivery. In addition to devices, modern technology, such as the Internet and mobile phone applications, has been used to promote patient education, support, and intervention to address the behavioral and emotional challenges of diabetes management. These state-of-the-art technologies not only have the potential to improve clinical outcomes, but there are possible psychological benefits, such as improved quality of life, as well. However, practical and psychosocial limitations related to advanced technology exist and, in the context of several technology-related theoretical frameworks, can influence patient adoption and continued use. It is essential for future diabetes technology research to address these barriers given that the clinical benefits appear to largely depend on patient engagement and consistency of technology use. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Human spaceflight technology needs-a foundation for JSC's technology strategy

    NASA Astrophysics Data System (ADS)

    Stecklein, J. M.

    Human space exploration has always been heavily influenced by goals to achieve a specific mission on a specific schedule. This approach drove rapid technology development, the rapidity of which added risks and became a major driver for costs and cost uncertainty. The National Aeronautics and Space Administration (NASA) is now approaching the extension of human presence throughout the solar system by balancing a proactive yet less schedule-driven development of technology with opportunistic scheduling of missions as the needed technologies are realized. This approach should provide cost effective, low risk technology development that will enable efficient and effective manned spaceflight missions. As a first step, the NASA Human Spaceflight Architecture Team (HAT) has identified a suite of critical technologies needed to support future manned missions across a range of destinations, including in cis-lunar space, near earth asteroid visits, lunar exploration, Mars moons, and Mars exploration. The challenge now is to develop a strategy and plan for technology development that efficiently enables these missions over a reasonable time period, without increasing technology development costs unnecessarily due to schedule pressure, and subsequently mitigating development and mission risks. NASA's Johnson Space Center (JSC), as the nation's primary center for human exploration, is addressing this challenge through an innovative approach in allocating Internal Research and Development funding to projects. The HAT Technology Needs (Tech Needs) Database has been developed to correlate across critical technologies and the NASA Office of Chief Technologist Technology Area Breakdown Structure (TABS). The TechNeeds Database illuminates that many critical technologies may support a single technical capability gap, that many HAT technology needs may map to a single TABS technology discipline, and that a single HAT technology need may map to multiple TABS technology disciplines. 
The TechNeeds Database greatly clarifies understanding of the complex relationships of critical technologies to mission and architecture element needs. Extensions to the core TechNeeds Database allow JSC to factor in and appropriately weight JSC core technology competencies, and considerations of commercialization potential and partnership potential. The inherent coupling among these, along with an appropriate importance weighting, has provided an initial prioritization for allocation of technology development research funding at JSC. The HAT Technology Needs Database, with a core of built-in reports, clarifies and communicates complex technology needs for cost effective human space exploration so that an organization seeking to assure that research prioritization supports human spaceflight of the future can be successful.

  1. Human Spaceflight Technology Needs - A Foundation for JSC's Technology Strategy

    NASA Technical Reports Server (NTRS)

    Stecklein, Jonette M.

    2013-01-01

    Human space exploration has always been heavily influenced by goals to achieve a specific mission on a specific schedule. This approach drove rapid technology development, the rapidity of which adds risks as well as provides a major driver for costs and cost uncertainty. The National Aeronautics and Space Administration (NASA) is now approaching the extension of human presence throughout the solar system by balancing a proactive yet less schedule-driven development of technology with opportunistic scheduling of missions as the needed technologies are realized. This approach should provide cost effective, low risk technology development that will enable efficient and effective manned spaceflight missions. As a first step, the NASA Human Spaceflight Architecture Team (HAT) has identified a suite of critical technologies needed to support future manned missions across a range of destinations, including in cis-lunar space, near earth asteroid visits, lunar exploration, Mars moons, and Mars exploration. The challenge now is to develop a strategy and plan for technology development that efficiently enables these missions over a reasonable time period, without increasing technology development costs unnecessarily due to schedule pressure, and subsequently mitigating development and mission risks. NASA's Johnson Space Center (JSC), as the nation's primary center for human exploration, is addressing this challenge through an innovative approach in allocating Internal Research and Development funding to projects. The HAT Technology Needs (TechNeeds) Database has been developed to correlate across critical technologies and the NASA Office of Chief Technologist Technology Area Breakdown Structure (TABS). 
The TechNeeds Database illuminates that many critical technologies may support a single technical capability gap, that many HAT technology needs may map to a single TABS technology discipline, and that a single HAT technology need may map to multiple TABS technology disciplines. The TechNeeds Database greatly clarifies understanding of the complex relationships of critical technologies to mission and architecture element needs. Extensions to the core TechNeeds Database allow JSC to factor in and appropriately weight JSC Center Core Technology Competencies, and considerations of Commercialization Potential and Partnership Potential. The inherent coupling among these, along with an appropriate importance weighting, has provided an initial prioritization for allocation of technology development research funding for JSC. The HAT Technology Needs Database, with a core of built-in reports, clarifies and communicates complex technology needs for cost effective human space exploration such that an organization seeking to assure that research prioritization supports human spaceflight of the future can be successful.

  2. Rasdaman for Big Spatial Raster Data

    NASA Astrophysics Data System (ADS)

    Hu, F.; Huang, Q.; Scheele, C. J.; Yang, C. P.; Yu, M.; Liu, K.

    2015-12-01

    Spatial raster data have grown exponentially over the past decade. Recent advancements in data acquisition technology, such as remote sensing, have allowed us to collect massive observation data of various spatial resolution and domain coverage. The volume, velocity, and variety of such spatial data, along with the computationally intensive nature of spatial queries, pose a grand challenge to the storage technologies for effective big data management. While high performance computing platforms (e.g., cloud computing) can be used to solve the computing-intensive issues in big data analysis, data has to be managed in a way that is suitable for distributed parallel processing. Recently, rasdaman (raster data manager) has emerged as a scalable and cost-effective database solution to store and retrieve massive multi-dimensional arrays, such as sensor, image, and statistics data. Within this paper, the pros and cons of using rasdaman to manage and query spatial raster data will be examined and compared with other common approaches, including file-based systems, relational databases (e.g., PostgreSQL/PostGIS), and NoSQL databases (e.g., MongoDB and Hive). Earth Observing System (EOS) data collected from NASA's Atmospheric Scientific Data Center (ASDC) will be used and stored in these selected database systems, and a set of spatial and non-spatial queries will be designed to benchmark their performance on retrieving large-scale, multi-dimensional arrays of EOS data. Lessons learnt from using rasdaman will be discussed as well.
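    The abstract above contrasts array databases such as rasdaman with relational and NoSQL stores for multi-dimensional data. A minimal sketch of the underlying storage trade-off (not rasdaman or its rasql language, which are not shown here): array databases pack arrays into binary tiles rather than storing one relational row per cell. This toy example, with invented table names, stores a 4x4 grid both ways in SQLite and reads back the same cell from each representation.

    ```python
    # Toy comparison of two layouts for a 2-D array in a database:
    # one relational row per cell vs. one packed binary tile per array.
    import array
    import sqlite3

    db = sqlite3.connect(":memory:")
    grid = [[float(4 * y + x) for x in range(4)] for y in range(4)]

    # Relational layout: one row per cell (simple, but many rows per array).
    db.execute("CREATE TABLE cells (x INTEGER, y INTEGER, v REAL)")
    db.executemany("INSERT INTO cells VALUES (?, ?, ?)",
                   [(x, y, grid[y][x]) for y in range(4) for x in range(4)])

    # Tiled layout: the whole 4x4 tile packed into a single BLOB.
    tile = array.array("d", (v for row in grid for v in row))
    db.execute("CREATE TABLE tiles (tile_id INTEGER PRIMARY KEY, data BLOB)")
    db.execute("INSERT INTO tiles VALUES (1, ?)", (tile.tobytes(),))

    # Read cell (x=2, y=3) from each representation.
    rel_v = db.execute("SELECT v FROM cells WHERE x=2 AND y=3").fetchone()[0]
    blob = db.execute("SELECT data FROM tiles WHERE tile_id=1").fetchone()[0]
    unpacked = array.array("d")
    unpacked.frombytes(blob)
    tile_v = unpacked[3 * 4 + 2]  # row-major offset: y * width + x

    assert rel_v == tile_v
    ```

    At scale the tiled layout is what makes subarray queries over massive rasters tractable, which is the design point the paper benchmarks against row-per-cell relational storage.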

  3. Empirical study on neural network based predictive techniques for automatic number plate recognition

    NASA Astrophysics Data System (ADS)

    Shashidhara, M. S.; Indrakumar, S. S.

    2011-10-01

    The objective of this study is to provide an easy, accurate, and effective technology for traffic control in Bangalore city, based on image processing and laser beam technology. The core concept is automatic number plate recognition (ANPR). First, the number plate is recognized when a vehicle breaks a traffic rule at a signal, and the registration number is fetched automatically from the RTO office database. The system then sends notice and penalty information to the vehicle owner's e-mail address, and an SMS is sent to the owner. In this paper, we use cameras with zooming options and laser beams to obtain accurate pictures, then apply image processing techniques such as edge detection to detect the vehicle, locate the number plate, and read the plate in its several forms: plain plates, plates with additional information, and plates in different fonts. The vehicle registration office database is accessed to identify the name, address, and other information for the vehicle number, and the database is updated to record the violation and penalty. A feed-forward artificial neural network is used for OCR. This procedure is particularly important for glyphs that are visually similar, such as '8' and '9', and results in training sets of between 25,000 and 40,000 samples. Overtraining of the neural network is prevented by Bayesian regularization. The neural network output value is set to 0.05 when the input is not the desired glyph, and 0.95 for the correct input.
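    The 0.05/0.95 target encoding described for the OCR network can be sketched as follows; `make_targets` is a hypothetical helper for illustration, not code from the paper.

```python
# Target vector for one glyph class: 0.95 for the correct class, 0.05
# elsewhere. Soft targets like these keep sigmoid outputs away from
# saturation during backpropagation training.
GLYPHS = list("0123456789")

def make_targets(label, classes=GLYPHS, on=0.95, off=0.05):
    return [on if c == label else off for c in classes]
```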

  4. A Solution on Identification and Rearing Files in Smallhold Pig Farming

    NASA Astrophysics Data System (ADS)

    Xiong, Benhai; Fu, Runting; Lin, Zhaohui; Luo, Qingyao; Yang, Liang

    In order to meet government supervision of pork production safety, as well as consumers' right to know what they buy, this study adopts animal identification, mobile PDA readers, GPRS, and other information technologies. It puts forward a data collection method for setting up pig rearing files in smallhold pig farming, designs the related metadata structures and a mobile database, develops an embedded mobile PDA system to collect individual pig information and upload it into a remote central database, and finally realizes mobile links to a specific website. The embedded PDA can read, via its mobile reader, both the special pig bar ear tag appointed by the Ministry of Agriculture and a general Data Matrix bar ear tag designed in this study, and can record all kinds of input data, including bacterins, feed additives, animal drugs, and even some forbidden medicines, and submit them to the central database through GPRS. At the same time, the remote central database can be maintained by mobile PDA and GPRS, finally achieving pork tracking from origin to consumption and tracing in the reverse direction. This study suggests a feasible technology solution for setting up networked electronic pig rearing files in farmer-based smallhold pig farming, and the solution is proved practical through its application in the construction of Tianjin's pork quality traceability system. Although some individual techniques, such as current GPRS transmission speeds, have some adverse effects on system operation, these will be resolved with the development of communication technology. Full implementation of the solution around China will supply technical support for supervising the quality and safety of pork production and for meeting consumer demand.

  5. Databases in the Central Government: State-of-the-art and the Future

    NASA Astrophysics Data System (ADS)

    Ohashi, Tomohiro

    The Management and Coordination Agency of the Prime Minister's Office conducted a questionnaire survey of all Japanese Ministries and Agencies in November 1985 on the present status of databases produced, or planned to be produced, by the central government. According to the results, 132 databases had been produced in 19 Ministries and Agencies. Many of these databases were held by the Defence Agency, the Ministry of Construction, the Ministry of Agriculture, Forestry & Fisheries, and the Ministry of International Trade & Industry, in the fields of architecture & civil engineering, science & technology, R&D, agriculture, forestry, and fishery. However, only 39 percent of the produced databases were available to other Ministries and Agencies, while 60 percent were unavailable to them, largely because they were in-house databases. The results of the survey are summarized, and the databases produced by the central government are introduced under the headings of (1) databases commonly used by all Ministries and Agencies, (2) integrated databases, (3) statistical databases, and (4) bibliographic databases. Future problems are also described from the viewpoints of technology development and the mutual use of databases.

  6. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...-compatible format. All databases must be supported with adequate documentation on data attributes, SQL...

  7. Curriculum Connection. Take Technology Outdoors.

    ERIC Educational Resources Information Center

    Dean, Bruce Robert

    1992-01-01

    Technology can support hands-on science as elementary students use computers to formulate field guides to nature surrounding their school. Students examine other field guides; open databases for recording information; collect, draw, and identify plants, insects, and animals; enter data into the database; then generate a computerized field guide.…

  8. Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses

    PubMed Central

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics, the technology for comprehensive detection of small molecules in an organism, lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata, together with feedback from readers, also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/. PMID:25905099

  9. Metabolonote: a wiki-based database for managing hierarchical metadata of metabolome analyses.

    PubMed

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics, the technology for comprehensive detection of small molecules in an organism, lags behind the other "omics" in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called "Togo Metabolome Data" (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitates the construction of novel databases by database developers. A permission system that allows publication of immature metadata, together with feedback from readers, also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.
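    The tree-structured, level-wise metadata IDs described above might be navigated as in this sketch; the ID format and level names are illustrative, not the actual TogoMD scheme.

```python
# Toy tree of hierarchical metadata, one node per level, each with a
# unique ID that encodes its path (illustrative format only).
metadata = {
    "id": "S1", "level": "study",
    "children": [
        {"id": "S1.M1", "level": "sample",
         "children": [
             {"id": "S1.M1.A1", "level": "analytical_method", "children": []},
         ]},
    ],
}

def find(node, target_id):
    """Depth-first lookup of a node by its unique hierarchical ID."""
    if node["id"] == target_id:
        return node
    for child in node["children"]:
        hit = find(child, target_id)
        if hit is not None:
            return hit
    return None
```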

  10. An international aerospace information system - A cooperative opportunity

    NASA Technical Reports Server (NTRS)

    Blados, Walter R.; Cotter, Gladys A.

    1992-01-01

    This paper presents for consideration new possibilities for uniting the various aerospace database efforts into a cooperative international aerospace database initiative that can optimize the cost-benefit equation for all members. The development of astronautics and aeronautics in individual nations has led to initiatives for national aerospace databases. Technological developments in information technology and science, as well as the reality of scarce resources, make it necessary to reconsider the mutually beneficial possibilities offered by cooperation and international resource sharing.

  11. Quality Attribute-Guided Evaluation of NoSQL Databases: A Case Study

    DTIC Science & Technology

    2015-01-16

    evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study. Keywords—NoSQL, distributed...technology, namely that of big data, software systems [1]. At the heart of big data systems are a collection of database technologies that are more...born organizations such as Google and Amazon [3][4], along with those of numerous other big data innovators, have created a variety of open source and

  12. Information for Developing Countries: Definitions, Institutions and Issues. A Contribution towards Forming an Understanding of the Potential for Consultancy, Marketing and Training Related Activities. Kingston Polytechnic School of Information Systems Research Report 87-3.

    ERIC Educational Resources Information Center

    Lindsay, John

    This paper reports on the emerging market in information on development-related activities in terms of the European capacity in databases and information networking. The first of its two parts addresses issues that are emerging consequent to the introduction of information technology in developing countries. Problems of definition and interest in…

  13. MIMIC II: a massive temporal ICU patient database to support research in intelligent patient monitoring

    NASA Technical Reports Server (NTRS)

    Saeed, M.; Lieu, C.; Raber, G.; Mark, R. G.

    2002-01-01

    Development and evaluation of Intensive Care Unit (ICU) decision-support systems would be greatly facilitated by the availability of a large-scale ICU patient database. Following our previous efforts with the MIMIC (Multi-parameter Intelligent Monitoring for Intensive Care) Database, we have leveraged advances in networking and storage technologies to develop a far more massive temporal database, MIMIC II. MIMIC II is an ongoing effort: data are continuously and prospectively archived from all ICU patients in our hospital. MIMIC II now consists of over 800 ICU patient records, comprising over 120 gigabytes of data, and is growing. A customized archiving system was used to continuously store up to four waveforms and 30 different parameters from ICU patient monitors. An integrated, user-friendly relational database was developed for browsing patients' clinical information (lab results, fluid balance, medications, nurses' progress notes). Based upon its unprecedented size and scope, MIMIC II will prove to be an important resource for intelligent patient monitoring research, and will support efforts in medical data mining and knowledge discovery.
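    Browsing clinical information in a relational store, as described above, might look like this minimal sqlite3 sketch; the table, column names, and values are invented for illustration and are not MIMIC II's actual schema.

```python
import sqlite3

# In-memory stand-in for a relational clinical-information table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE labs (patient_id INTEGER, test TEXT, value REAL, charted_at TEXT)"
)
conn.executemany(
    "INSERT INTO labs VALUES (?, ?, ?, ?)",
    [(1, "lactate", 2.1, "2002-01-01T08:00"),
     (1, "lactate", 3.4, "2002-01-01T14:00"),
     (2, "creatinine", 1.0, "2002-01-01T09:00")],
)
# Peak value of each lab test for one patient.
rows = conn.execute(
    "SELECT test, MAX(value) FROM labs WHERE patient_id = ? GROUP BY test", (1,)
).fetchall()
```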

  14. Second NASA Technical Interchange Meeting (TIM): Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    ONeil, D. A.; Mankins, J. C.; Christensen, C. B.; Gresham, E. C.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS), a spreadsheet analysis tool suite, applies parametric equations for sizing and lifecycle cost estimation. Performance, operation, and programmatic data used by the equations come from a Technology Tool Box (TTB) database. In this second TTB Technical Interchange Meeting (TIM), technologists, system model developers, and architecture analysts discussed methods for modeling technology decisions in spreadsheet models, identified specific technology parameters, and defined detailed development requirements. This Conference Publication captures the consensus of the discussions and provides narrative explanations of the tool suite, the database, and applications of ATLAS within NASA's changing environment.

  15. Health technology management: a database analysis as support of technology managers in hospitals.

    PubMed

    Miniati, Roberto; Dori, Fabrizio; Iadanza, Ernesto; Fregonara, Mario M; Gentili, Guido Biffi

    2011-01-01

    Technology management in healthcare must continually respond and adapt to new improvements in medical equipment. Multidisciplinary approaches that consider the interaction of different technologies, their use, and user skills are necessary in order to improve safety and quality. An easy and sustainable methodology is vital to Clinical Engineering (CE) services in healthcare organizations in order to define criteria for technology acquisition and replacement. This article underlines the critical aspects of technology management in hospitals by providing appropriate indicators for benchmarking CE services, based exclusively on the maintenance database of the CE department at the Careggi Hospital in Florence, Italy.

  16. TOPSAN: a dynamic web database for structural genomics.

    PubMed

    Ellrott, Kyle; Zmasek, Christian M; Weekes, Dana; Sri Krishna, S; Bakolitsa, Constantina; Godzik, Adam; Wooley, John

    2011-01-01

    The Open Protein Structure Annotation Network (TOPSAN) is a web-based collaboration platform for exploring and annotating structures determined by structural genomics efforts. Characterization of those structures presents a challenge since the majority of the proteins themselves have not yet been characterized. Responding to this challenge, the TOPSAN platform facilitates collaborative annotation and investigation via a user-friendly web-based interface pre-populated with automatically generated information. Semantic web technologies expand and enrich TOPSAN's content through links to larger sets of related databases, and thus, enable data integration from disparate sources and data mining via conventional query languages. TOPSAN can be found at http://www.topsan.org.

  17. Drug-Path: a database for drug-induced pathways.

    PubMed

    Zeng, Hui; Qiu, Chengxiang; Cui, Qinghua

    2015-01-01

    Some databases of drug-associated pathways have been built and are publicly available. However, the pathways curated in most of these databases are drug-action or drug-metabolism pathways. In recent years, high-throughput technologies such as microarrays and RNA-sequencing have produced large numbers of drug-induced gene expression profiles. Interestingly, drug-induced gene expression profiles frequently show distinct patterns, indicating that drugs normally induce the activation or repression of distinct pathways. These pathways therefore help in studying the mechanisms of drugs and in drug repurposing. Here, we present Drug-Path, a database of drug-induced pathways, generated by KEGG pathway enrichment analysis of drug-induced upregulated and downregulated genes, based on the drug-induced gene expression datasets in Connectivity Map. Drug-Path provides user-friendly interfaces to retrieve, visualize, and download the drug-induced pathway data in the database. In addition, the genes deregulated by a given drug are highlighted in the pathways. All data were organized using SQLite, and the web site was implemented using Django, a Python web framework. We believe this database will be useful for related research. © The Author(s) 2015. Published by Oxford University Press.
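    Pathway enrichment of a drug's up- or downregulated gene list is commonly computed as a one-sided hypergeometric test; this stdlib-only sketch is a generic illustration of that test, not the Drug-Path authors' code.

```python
from math import comb

def enrichment_pvalue(deg_in_path, deg_total, path_size, universe):
    """One-sided hypergeometric test: P(X >= deg_in_path) for the overlap
    between a drug-induced gene list (deg_total genes) and a pathway's
    gene set (path_size genes) drawn from `universe` genes in total."""
    p = 0.0
    for k in range(deg_in_path, min(deg_total, path_size) + 1):
        p += (comb(path_size, k)
              * comb(universe - path_size, deg_total - k)
              / comb(universe, deg_total))
    return p
```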

  18. Fish Karyome: A karyological information network database of Indian Fishes.

    PubMed

    Nagpure, Naresh Sahebrao; Pathak, Ajey Kumar; Pati, Rameshwar; Singh, Shri Prakash; Singh, Mahender; Sarkar, Uttam Kumar; Kushwaha, Basdeo; Kumar, Ravindra

    2012-01-01

    'Fish Karyome', a database of karyological information on Indian fishes, has been developed; it serves as a central source of karyotype data about Indian fishes compiled from the published literature. Fish Karyome is intended to serve as a liaison tool for researchers: it contains karyological information on 171 of the 2,438 finfish species reported in India and is publicly available via the World Wide Web. The database provides information on chromosome number, morphology, sex chromosomes, karyotype formula, and cytogenetic markers. Additionally, it provides phenotypic information that includes the species name, its classification, locality of sample collection, common name, local name, sex, geographical distribution, and IUCN Red List status. Fish and karyotype images and references for the 171 finfish species have also been included in the database. Fish Karyome has been developed using SQL Server 2008, a relational database management system, Microsoft's ASP.NET-2008, and Macromedia's FLASH Technology under the Windows 7 operating environment. The system also enables users to input new information and images into the database, and to search and view information and images of interest using various search options. Fish Karyome has a wide range of applications in species characterization and identification, sex determination, chromosomal mapping, karyo-evolution, and the systematics of fishes.

  19. Recognition of edible oil by using BP neural network and laser induced fluorescence spectrum

    NASA Astrophysics Data System (ADS)

    Mu, Tao-tao; Chen, Si-ying; Zhang, Yin-chao; Guo, Pan; Chen, He; Zhang, Hong-yan; Liu, Xiao-hua; Wang, Yuan; Bu, Zhi-chao

    2013-09-01

    In order to recognize different edible oils, we set up a laser-induced fluorescence (LIF) spectrum system in the laboratory and used it to collect the fluorescence spectra of different edible oils. On this basis, we built a fluorescence spectrum database of different cooking oils. The fluorescence spectra show three main peak positions for each oil. Although the peak positions of all the cooking oils were almost the same, their relative intensities were quite different, so oil recognition can exploit these differences in relative intensity. Before distinguishing the different cooking oils, feature invariants were extracted from spectrum data chosen randomly from the fluorescence spectrum database. A back-propagation (BP) neural network was then established and trained on the chosen data. On that basis, real experimental data were identified by the BP neural network, and the overall recognition rate reached as high as 83.2%. The experiments showed that the laser-induced fluorescence spectra of different cooking oils differ substantially from each other, which can be used to accomplish oil recognition. LIF spectroscopy combined with a BP neural network is fast, highly sensitive, non-contact, and achieves a high recognition rate. It could become a new technique for edible oil recognition and quality detection.
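    Classification from relative peak intensities can be illustrated with a toy example; a nearest-centroid rule stands in here for the paper's BP neural network, and the oil names and intensity triples are invented.

```python
# Each sample: relative fluorescence intensities at the three main peak
# positions, normalized so the first peak is 1.0 (values are invented).
TRAIN = {
    "sunflower": [(1.00, 0.62, 0.31), (1.00, 0.60, 0.29)],
    "olive":     [(1.00, 0.35, 0.70), (1.00, 0.33, 0.72)],
}

def centroid(samples):
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def classify(spectrum):
    """Assign a spectrum to the class with the nearest centroid."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cents = {label: centroid(s) for label, s in TRAIN.items()}
    return min(cents, key=lambda label: sqdist(cents[label], spectrum))
```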

  20. Directory of Assistive Technology: Data Sources.

    ERIC Educational Resources Information Center

    Council for Exceptional Children, Reston, VA. Center for Special Education Technology.

    The annotated directory describes in detail both on-line and print databases in the area of assistive technology for individuals with disabilities. For each database, the directory provides the name, address, and telephone number of the sponsoring organization; disability areas served; number of hardware and software products; types of information…

  1. Utility-Scale Energy Technology Capacity Factors | Energy Analysis | NREL

    Science.gov Websites

    This chart indicates the range of recent capacity factor estimates for utility-scale technologies. For technology cost and performance estimates, please visit the Transparent Cost Database website for NREL's information regarding vehicles, biofuels, and electricity generation.

  2. Understanding transit accidents using the National Transit Database and the role of Transit Intelligent Vehicle Initiative Technology in reducing accidents

    DOT National Transportation Integrated Search

    2004-06-01

    This report documents the results of bus accident data analysis using the 2002 National Transit Database (NTD) and discusses the potential of using advanced technology being studied and developed under the U.S. Department of Transportation's (U.S. ...

  3. Genetic testing in the European Union: does economic evaluation matter?

    PubMed

    Antoñanzas, Fernando; Rodríguez-Ibeas, R; Hutter, M F; Lorente, R; Juárez, C; Pinillos, M

    2012-10-01

    We review the published economic evaluation studies of genetic technologies in the EU to identify the main diseases addressed by these studies and how the studies were conducted, and to assess the efficiency of these new technologies. The final aim of this review was to understand the extent to which the economic evaluations performed to date can serve as a tool to support decision making in this area. We reviewed a set of articles found in several databases up to March 2010. Literature searches were made in the following databases: PubMed; Euronheed; the Centre for Reviews and Dissemination of the University of York (Health Technology Assessment, Database of Abstracts of Reviews of Effects, NHS Economic Evaluation Database); and Scopus. The search algorithm was "(screening or diagnosis) and genetic and (cost or economic) and (country EU27)". We included studies if they met the following criteria: (1) a genetic technology was analysed; (2) human DNA was tested for; (3) the analysis was a full economic evaluation or a cost study; and (4) the article was related to an EU Member State. We initially found 3,559 papers on genetic testing, but only 92 economic analyses, covering a wide range of genetic diseases, matched the inclusion criteria. The most studied diseases were cystic fibrosis (12), breast and ovarian cancer (8), hereditary hemochromatosis (6), Down's syndrome (7), colorectal cancer (5), familial hypercholesterolaemia (5), prostate cancer (4), and thrombophilia (4). Genetic tests were mostly used for screening purposes, and cost-effectiveness analysis was the most common type of economic study. The analysed gene technologies are deemed efficient for some specific population groups and screening algorithms, with cost-effectiveness ratios below the commonly accepted threshold of €30,000. Economic evaluation of genetic technologies matters, but the number of published studies is still too low for them to be widely used in most decisions across the EU. Furthermore, the decision bodies across the EU27 are fragmented, with responsibilities located at different levels of the decision process, which makes it difficult to determine whether a given decision on genetic tests was supported by economic evaluation results.

  4. Use of speech-to-text technology for documentation by healthcare providers.

    PubMed

    Ajami, Sima

    2016-01-01

    Medical records are a critical component of a patient's treatment. However, documentation of patient-related information is considered a secondary activity in the provision of healthcare services, often leading to incomplete medical records and patient data of low quality. Advances in information technology (IT) in the health system and the registration of information in electronic health records (EHR) using speech-to-text conversion software have facilitated service delivery. This narrative review is based on a literature search of libraries, books, conference proceedings, the Science Direct, PubMed, ProQuest, Springer, and SID (Scientific Information Database) databases, and search engines such as Yahoo and Google. I used the following keywords and their combinations: speech recognition, automatic report documentation, voice to text software, healthcare, information, and voice recognition. Due to lack of knowledge of other languages, I searched all texts in English or Persian, with no time limits. Of a total of 70 articles, only 42 were selected. Speech-to-text conversion technology offers opportunities to improve the documentation process of medical records, reduce the cost and time of recording information, enhance the quality of documentation, improve the quality of services provided to patients, and support healthcare providers in legal matters. Healthcare providers should recognize the impact of this technology on service delivery.

  5. [Clinical research evolution. In parallel with the current changes in welfare expectations and information technology incorporation, study designs and data collection and analysis are quickly changing as well].

    PubMed

    Tavazzi, Luigi

    2015-10-01

    The development of technology and of biological and clinical knowledge is leading to remarkable changes in scientific research methodology, including clinical research. Major changes include the pragmatic approach of trial designs, an explosive diffusion of observational research, which is becoming a usual component of clinical practice, and the active modelling of new research designs. Moreover, a new healthcare landscape could be generated from the information technology routinely used to collect clinical data in huge databases, the management and analytic methodology of big data, and the development of biological sensors compatible with daily life that deliver signals which can be forwarded remotely to central databases. Precision medicine and individualized medicine seem to be the big novelties of the coming years, leading to a shared pattern of patient-physician relationship. In healthcare, a huge business related mainly, but not exclusively, to the implementation of information technology is growing. This development will favor radical changes in health systems, also reshaping clinical activity. A new governance of research strategies is needed, and the application of the results should be based on shared ethical foundations. This new evolving profile of medical research and practice is discussed in this paper.

  6. Background qualitative analysis of the European reference life cycle database (ELCD) energy datasets - part II: electricity datasets.

    PubMed

    Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice

    2015-01-01

    The aim of this paper is to identify areas of potential improvement in the European Reference Life Cycle Database (ELCD) electricity datasets. The revision is based on the data quality indicators described by the International Life Cycle Data system (ILCD) Handbook, applied on a sectorial basis. These indicators evaluate the technological, geographical, and time-related representativeness of the dataset and its appropriateness in terms of completeness, precision, and methodology. Results show that the ELCD electricity datasets are of very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of Life Cycle Inventories have been derived. Moreover, these results confirm the quality of the electricity-related datasets for any LCA practitioner and provide insights into the limitations and assumptions underlying the dataset modelling. Given this information, the LCA practitioner will be able to decide whether use of the ELCD electricity datasets is appropriate, based on the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers seeking to improve the overall data quality requirements of databases.

  7. TECHNOLOGICAL INNOVATION IN NEUROSURGERY: A QUANTITATIVE STUDY

    PubMed Central

    Marcus, Hani J; Hughes-Hallett, Archie; Kwasnicki, Richard M; Darzi, Ara; Yang, Guang-Zhong; Nandi, Dipankar

    2015-01-01

    Object Technological innovation within healthcare may be defined as the introduction of a new technology that initiates a change in clinical practice. Neurosurgery is a particularly technologically intensive surgical discipline, and new technologies have preceded many of the major advances in operative neurosurgical technique. The aim of the present study was to quantitatively evaluate technological innovation in neurosurgery using patents and peer-reviewed publications as metrics of technology development and clinical translation respectively. Methods A patent database was searched between 1960 and 2010 using the search terms “neurosurgeon” OR “neurosurgical” OR “neurosurgery”. The top 50 performing patent codes were then grouped into technology clusters. Patent and publication growth curves were then generated for these technology clusters. A top performing technology cluster was then selected as an exemplar for more detailed analysis of individual patents. Results In all, 11,672 patents and 208,203 publications relating to neurosurgery were identified. The top performing technology clusters over the 50 years were: image guidance devices, clinical neurophysiology devices, neuromodulation devices, operating microscopes and endoscopes. Image guidance and neuromodulation devices demonstrated a highly correlated rapid rise in patents and publications, suggesting they are areas of technology expansion. In-depth analysis of neuromodulation patents revealed that the majority of high performing patents were related to Deep Brain Stimulation (DBS). Conclusions Patent and publication data may be used to quantitatively evaluate technological innovation in neurosurgery. PMID:25699414

  8. High dimensional biological data retrieval optimization with NoSQL technology.

    PubMed

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene-expression records perform poorly. Non-relational data models, such as the key-value model implemented in NoSQL databases, promise more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase compared to the model implemented on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. 
We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data.
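
    The key-value layout the abstract describes can be sketched with a plain dictionary standing in for an HBase table; the composite row key used here (trial|patient|gene) is an illustrative assumption, not the published tranSMART/HBase schema.

```python
# Sketch of a key-value layout for gene-expression data, with a plain
# dict standing in for an HBase-style store. The composite row key
# (trial|patient|gene) is an illustrative assumption, not the actual
# schema from the paper.

store = {}

def put_expression(trial, patient, gene, value):
    """Store one expression value under a composite row key."""
    store[f"{trial}|{patient}|{gene}"] = value

def scan_patient(trial, patient):
    """Range-scan analogue: fetch all genes for one patient by key prefix."""
    prefix = f"{trial}|{patient}|"
    return {k.split("|")[2]: v for k, v in store.items() if k.startswith(prefix)}

put_expression("GSE2658", "P001", "TP53", 7.2)
put_expression("GSE2658", "P001", "BRCA1", 5.9)
put_expression("GSE2658", "P002", "TP53", 6.1)

print(scan_patient("GSE2658", "P001"))  # both genes for P001
```

    Prefix scans like this are the key-value analogue of the per-patient queries the paper benchmarks: one contiguous key range replaces a relational join-and-filter.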

  9. High dimensional biological data retrieval optimization with NoSQL technology

    PubMed Central

    2014-01-01

    Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene-expression records perform poorly. Non-relational data models, such as the key-value model implemented in NoSQL databases, promise more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase compared to the model implemented on MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. 
We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data. PMID:25435347

  10. Development of Human Face Literature Database Using Text Mining Approach: Phase I.

    PubMed

    Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K

    2018-06-01

    The face is an important part of the human body, by which an individual communicates in society. Its importance is underscored by the fact that a person cannot function in the social world without a face. The number of experiments being performed and research papers being published in the domain of the human face has surged in the past few decades. Several scientific disciplines conduct research on the human face, including Medical Science, Anthropology, Information Technology (Biometrics, Robotics, Artificial Intelligence, etc.), Psychology, Forensic Science, and Neuroscience. This highlights the need to collect and manage data concerning the human face so that free public access can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers are stored in the form of a database, which will benefit the research community by gathering comprehensive information dedicated to the human face in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS). The back end has been developed using PHP (hypertext preprocessor), with JavaScript as the scripting language. MySQL is used for database development, as it is the most widely used Relational Database Management System. 
    XAMPP (X (cross platform), Apache, MySQL, PHP, Perl) open-source web application software has been used as the server. The database is still in the developmental phase; the current paper describes the initial steps of its creation and the work done to date.
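
    The relational design described (papers searchable by keyword, journal, date, or author) can be sketched with SQLite standing in for MySQL; the table and column names below are assumptions for illustration, not the published schema.

```python
import sqlite3

# Minimal sketch of a literature-database schema of the kind described
# above, using SQLite in place of MySQL. Table and column names are
# illustrative assumptions, not the authors' actual design.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE papers (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        journal TEXT,
        pub_date TEXT,
        authors TEXT,
        keywords TEXT
    )""")
con.execute(
    "INSERT INTO papers (title, journal, pub_date, authors, keywords) "
    "VALUES (?, ?, ?, ?, ?)",
    ("Facial asymmetry in identification", "J Example Sci",
     "2017-03-01", "Doe J; Roe A", "face; asymmetry; forensics"))

# Keyword search, one of the access paths the abstract mentions.
rows = con.execute(
    "SELECT title FROM papers WHERE keywords LIKE ?",
    ("%asymmetry%",)).fetchall()
print(rows[0][0])
```

    The same `LIKE`-based filter generalizes to the journal-name, date, and author access paths the abstract lists.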

  11. Alternatives to relational databases in precision medicine: Comparison of NoSQL approaches for big data storage using supercomputers

    NASA Astrophysics Data System (ADS)

    Velazquez, Enrique Israel

    Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternate database approaches (NoSQL) may soon be required for efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field, since alternate database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine using patients' clinical and genomic information from the cancer genome atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which appear to be the most suitable database management systems for our precision medicine queries in terms of performance and scalability. We present NoSQL approaches and show how they can be used to manage clinical and genomic big data. 
    Our research is relevant to public health because we are focusing on one of the main challenges to the development of Precision Medicine and, consequently, investigating a potential solution to the progressively increasing demands on health care.
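
    The schema-modification experiment rests on a general contrast that can be sketched as follows; the table, fields, and values are illustrative assumptions, not TCGA's actual layout.

```python
import sqlite3

# Contrast behind the third experiment: a relational table must change
# its schema (ALTER TABLE) before a new genomic attribute can be stored,
# while a document-style record (a dict here, standing in for a NoSQL
# document) simply gains the field. Field names are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patients (id TEXT PRIMARY KEY, age INTEGER, diagnosis TEXT)")
con.execute("INSERT INTO patients VALUES ('TCGA-01', 61, 'GBM')")

# Relational route: the schema itself must change before the data can.
con.execute("ALTER TABLE patients ADD COLUMN tp53_variant TEXT")
con.execute("UPDATE patients SET tp53_variant = 'missense' WHERE id = 'TCGA-01'")

# Document route: new fields need no migration.
doc = {"id": "TCGA-01", "age": 61, "diagnosis": "GBM"}
doc["tp53_variant"] = "missense"

print(con.execute("SELECT tp53_variant FROM patients").fetchone()[0],
      doc["tp53_variant"])  # missense missense
```

    On a large deployed table, the `ALTER TABLE` step is what drives the cost the dissertation measures; the schema-less write has no such global step.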

  12. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.

  13. Crowdsourcing-Assisted Radio Environment Database for V2V Communication.

    PubMed

    Katagiri, Keita; Sato, Koya; Fujii, Takeo

    2018-04-12

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation.
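
    The model-based estimation that the proposed database is compared against is commonly a log-distance path-loss model; a minimal sketch, with an assumed reference power and path-loss exponent rather than the paper's measured parameters:

```python
import math

def path_loss_rssi(d, rssi_d0=-40.0, d0=1.0, n=3.0):
    """Log-distance path-loss estimate of RSSI (dBm) at distance d (m).
    rssi_d0 (power at reference distance d0) and exponent n are assumed
    illustrative values, not the paper's measurements."""
    return rssi_d0 - 10.0 * n * math.log10(d / d0)

# Measurement-based alternative, as in the proposed database: average
# the observed RSSI samples stored per (transmitter, receiver) location bin.
observations = {(10, 20): [-72.0, -70.0, -74.0]}  # (tx_bin, rx_bin) -> samples

def measured_rssi(tx_bin, rx_bin):
    samples = observations[(tx_bin, rx_bin)]
    return sum(samples) / len(samples)

print(round(path_loss_rssi(100.0), 1))  # -100.0 with the assumed parameters
print(measured_rssi(10, 20))            # -72.0
```

    The measurement average captures the local shadowing from buildings and terrain that a single global path-loss curve cannot, which is the gap the abstract's results quantify.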

  14. StreptomycesInforSys: A web-enabled information repository

    PubMed Central

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach, is available for the classification of Streptomyces. This information, based on phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information, with combinations of search options to aid in the efficient screening of new isolates and their preliminary categorization into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to serve user queries effectively. PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitate dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is expected to reduce running and maintenance costs. Availability www.sis.biowaves.org PMID:23275736

  15. Crowdsourcing-Assisted Radio Environment Database for V2V Communication †

    PubMed Central

    Katagiri, Keita; Fujii, Takeo

    2018-01-01

    In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation. PMID:29649174

  16. StreptomycesInforSys: A web-enabled information repository.

    PubMed

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach, is available for the classification of Streptomyces. This information, based on phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, but it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information, with combinations of search options to aid in the efficient screening of new isolates and their preliminary categorization into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to serve user queries effectively. PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitate dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is expected to reduce running and maintenance costs. www.sis.biowaves.org.

  17. Using Large Diabetes Databases for Research.

    PubMed

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population limiting comparisons with the population of people with diabetes. However comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  18. The Design and Implement of Tourism Information System Based on GIS

    NASA Astrophysics Data System (ADS)

    Chunchang, Fu; Nan, Zhang

    Starting from the concept of the geographic information system (GIS), this paper discusses the main components of GIS and the key technologies of current tourism information systems, describes the specific requirements and goals for applying a tourism information system, and analyzes methods of realizing a relational database model for a tourism information system within a GIS application.
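
    A relational model of the kind the paper analyzes can be sketched as a table of attractions with coordinates, queried by bounding box; the schema and sample data are illustrative assumptions, and SQLite stands in for the production DBMS.

```python
import sqlite3

# Sketch of a relational model for a GIS-backed tourism information
# system: attractions stored with coordinates and filtered by bounding
# box, the basic spatial query a GIS front end would issue.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE attraction (
        id INTEGER PRIMARY KEY,
        name TEXT,
        category TEXT,
        lat REAL,
        lon REAL
    )""")
con.executemany(
    "INSERT INTO attraction (name, category, lat, lon) VALUES (?, ?, ?, ?)",
    [("West Lake", "scenic", 30.24, 120.15),
     ("City Museum", "culture", 30.27, 120.16),
     ("Hot Spring", "leisure", 29.90, 121.50)])

# Bounding-box query over the current map viewport.
rows = con.execute(
    "SELECT name FROM attraction WHERE lat BETWEEN 30.2 AND 30.3 "
    "AND lon BETWEEN 120.1 AND 120.2 ORDER BY name").fetchall()
print([r[0] for r in rows])  # ['City Museum', 'West Lake']
```

    Real GIS deployments replace the plain BETWEEN filter with a spatial index, but the relational modelling of features as attributed rows is the same.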

  19. Dynamic Analytics-Driven Assessment of Vulnerabilities and Exploitation

    DTIC Science & Technology

    2016-07-15

    integration with big data technologies such as Hadoop, nor does it natively support exporting of events to external relational databases. OSSIM supports...power of big data analytics to determine correlations and temporal causality among vulnerabilities and cyber events. The vulnerability dependencies...via the SCAPE (formerly known as LLCySA [6]). This is illustrated as a big data cyber analytic system architecture in

  20. A Novel Method for Constructing a WIFI Positioning System with Efficient Manpower

    PubMed Central

    Du, Yuanfeng; Yang, Dongkai; Xiu, Chundi

    2015-01-01

    With the rapid development of WIFI technology, WIFI-based indoor positioning has been widely studied for location-based services. To solve the problems related to the signal-strength database adopted in the widely used fingerprint positioning technology, we first introduce a new system framework in this paper, which includes modified AP firmware and inexpensive self-made WIFI sensor anchors. Periodically scanned reports on the neighboring APs and sensor anchors are sent to the positioning server and serve as calibration points. Besides calculating correlations between target points and neighboring calibration points, the regression algorithm takes full advantage of the important but easily overlooked fact that the signal attenuation model varies across regions, yielding more accurate results. Thus, a novel method called RSSI Geography Weighted Regression (RGWR) is proposed to solve the fingerprint database construction problem. The average error of all the calibration points’ self-localization results helps to decide automatically whether the database is up to date or has to be updated. The effects of anchors on system performance are further studied, leading to the conclusion that anchors should be deployed at locations representative of the RSSI distributions. The proposed system is convenient for establishing a practical positioning system, and extensive experiments validate that the proposed method is robust and manpower-efficient. PMID:25868078

  1. A novel method for constructing a WIFI positioning system with efficient manpower.

    PubMed

    Du, Yuanfeng; Yang, Dongkai; Xiu, Chundi

    2015-04-10

    With the rapid development of WIFI technology, WIFI-based indoor positioning has been widely studied for location-based services. To solve the problems related to the signal-strength database adopted in the widely used fingerprint positioning technology, we first introduce a new system framework in this paper, which includes modified AP firmware and inexpensive self-made WIFI sensor anchors. Periodically scanned reports on the neighboring APs and sensor anchors are sent to the positioning server and serve as calibration points. Besides calculating correlations between target points and neighboring calibration points, the regression algorithm takes full advantage of the important but easily overlooked fact that the signal attenuation model varies across regions, yielding more accurate results. Thus, a novel method called RSSI Geography Weighted Regression (RGWR) is proposed to solve the fingerprint database construction problem. The average error of all the calibration points' self-localization results helps to decide automatically whether the database is up to date or has to be updated. The effects of anchors on system performance are further studied, leading to the conclusion that anchors should be deployed at locations representative of the RSSI distributions. The proposed system is convenient for establishing a practical positioning system, and extensive experiments validate that the proposed method is robust and manpower-efficient.

  2. Technology utilization office data base analysis and design

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen A.

    1993-01-01

    NASA Headquarters is placing a high priority on the transfer of NASA and NASA-contractor developed technologies and expertise to the private sector and to other federal, state and local government organizations. The ultimate objective of these efforts is positive economic impact, an improved quality of life, and a more competitive U.S. posture in international markets. The Technology Utilization Office (TUO) currently serves seven states with its technology transfer efforts. Since 1989, the TUO has handled over one thousand formal requests for NASA-related technology assistance. The technology transfer process requires promoting public awareness of NASA-related technologies, soliciting requests for assistance, matching technologies to specific needs, assuring appropriate technology transfer, and monitoring and evaluating the process. Each of these activities has one very important aspect in common: the success of each depends on the dissemination of appropriate, high quality information. The purpose of the research was to establish the requirements and develop a preliminary design for a database system to increase the effectiveness and efficiency of the TUO's technology transfer function. The research was conducted following the traditional systems development life cycle methodology and was supported by modern structured analysis techniques. The next section describes the research and findings as conducted under the life cycle approach.

  3. Realization of Real-Time Clinical Data Integration Using Advanced Database Technology

    PubMed Central

    Yoo, Sooyoung; Kim, Boyoung; Park, Heekyong; Choi, Jinwook; Chun, Jonghoon

    2003-01-01

    As information & communication technologies have advanced, interest in mobile health care systems has grown. In order to obtain information seamlessly from distributed and fragmented clinical data from heterogeneous institutions, we need solutions that integrate data. In this article, we introduce a method for information integration based on real-time message communication using trigger and advanced database technologies. Messages were devised to conform to HL7, a standard for electronic data exchange in healthcare environments. The HL7 based system provides us with an integrated environment in which we are able to manage the complexities of medical data. We developed this message communication interface to generate and parse HL7 messages automatically from the database point of view. We discuss how easily real time data exchange is performed in the clinical information system, given the requirement for minimum loading of the database system. PMID:14728271
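
    The HL7-style exchange described can be sketched as building and parsing a minimal pipe-delimited v2 message; the segments and field values below are illustrative, not the authors' actual interface.

```python
# Minimal sketch of building and parsing a pipe-delimited HL7 v2-style
# message, the kind of exchange the abstract describes. Segment content
# (a lab observation) is illustrative only.

def build_oru(patient_id, test, value, unit):
    """Assemble a tiny ORU-style message: header, patient, observation."""
    segments = [
        "MSH|^~\\&|LAB|HOSP|EMR|HOSP|202301011200||ORU^R01|MSG0001|P|2.3",
        f"PID|1||{patient_id}",
        f"OBX|1|NM|{test}||{value}|{unit}",
    ]
    return "\r".join(segments)  # HL7 v2 separates segments with CR

def parse(message):
    """Split a message into {segment_id: field_list} (first occurrence)."""
    parsed = {}
    for seg in message.split("\r"):
        fields = seg.split("|")
        parsed.setdefault(fields[0], fields)
    return parsed

msg = build_oru("123456", "GLU", "5.4", "mmol/L")
obx = parse(msg)["OBX"]
print(obx[3], obx[5], obx[6])  # GLU 5.4 mmol/L
```

    In the system the abstract describes, messages like this are generated and parsed automatically from database triggers, so each clinical write produces a standard message for the other institutions.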

  4. Marine and Hydrokinetic Data | Geospatial Data Science | NREL

    Science.gov Websites

    ...wave energy resource using a 51-month Wavewatch III hindcast database developed by the National...Database: The U.S. Department of Energy's Marine and Hydrokinetic Technology Database provides information...database includes wave, tidal, current, and ocean thermal energy and contains information about energy

  5. Energy science and technology database (on the internet). Online data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The Energy Science and Technology Database (EDB) is a multidisciplinary file containing worldwide references to basic and applied scientific and technical research literature. The information is collected for use by government managers, researchers at the national laboratories, and other research efforts sponsored by the U.S. Department of Energy, and the results of this research are transferred to the public. Abstracts are included for records from 1976 to the present. The EDB also contains the Nuclear Science Abstracts, a comprehensive abstract and index collection to the international nuclear science and technology literature for the period 1948 through 1976. Included are scientific and technical reports of the U.S. Atomic Energy Commission, U.S. Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Approximately 25% of the records in the file contain abstracts. Nuclear Science Abstracts contains over 900,000 bibliographic records. The entire Energy Science and Technology Database contains over 3 million bibliographic records. This database is now available for searching through the GOV.Research-Center (GRC) service. GRC is a single online web-based search service to well known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.

  6. Reflections on CD-ROM: Bridging the Gap between Technology and Purpose.

    ERIC Educational Resources Information Center

    Saviers, Shannon Smith

    1987-01-01

    Provides a technological overview of CD-ROM (Compact Disc-Read Only Memory), an optically-based medium for data storage offering large storage capacity, computer-based delivery system, read-only medium, and economic mass production. CD-ROM database attributes appropriate for information delivery are also reviewed, including large database size,…

  7. Proposal for Implementing Multi-User Database (MUD) Technology in an Academic Library.

    ERIC Educational Resources Information Center

    Filby, A. M. Iliana

    1996-01-01

    Explores the use of MOO (multi-user object oriented) virtual environments in academic libraries to enhance reference services. Highlights include the development of multi-user database (MUD) technology from gaming to non-recreational settings; programming issues; collaborative MOOs; MOOs as distinguished from other types of virtual reality; audio…

  8. Over 20 years of reaction access systems from MDL: a novel reaction substructure search algorithm.

    PubMed

    Chen, Lingran; Nourse, James G; Christie, Bradley D; Leland, Burton A; Grier, David L

    2002-01-01

    From REACCS, to MDL ISIS/Host Reaction Gateway, and most recently to MDL Relational Chemistry Server, a new product based on Oracle data cartridge technology, MDL's reaction database management and retrieval systems have undergone great changes. The evolution of the system architecture is briefly discussed. The evolution of MDL reaction substructure search (RSS) algorithms is detailed. This article mainly describes a novel RSS algorithm. This algorithm is based on a depth-first search approach and is able to fully and prospectively use reaction specific information, such as reacting center and atom-atom mapping (AAM) information. The new algorithm has been used in the recently released MDL Relational Chemistry Server and allows the user to precisely find reaction instances in databases while minimizing unrelated hits. Finally, the existing and new RSS algorithms are compared with several examples.
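
    The depth-first matching at the core of a reaction substructure search can be sketched as backtracking subgraph matching on tiny labeled graphs; a real RSS algorithm additionally exploits reacting-center and atom-atom mapping constraints, which are omitted from this illustration.

```python
# Illustrative sketch of depth-first substructure matching on small
# labeled graphs, the general idea behind a reaction substructure
# search. Atoms are label dicts; bonds are sets of frozenset pairs.
# Brute-force backtracking, suitable for tiny graphs only.

def dfs_match(q_atoms, q_bonds, t_atoms, t_bonds):
    """Return a query-atom -> target-atom mapping preserving labels and
    bonds, or None if no embedding exists."""
    order = list(q_atoms)

    def extend(mapping):
        if len(mapping) == len(order):
            return dict(mapping)
        qa = order[len(mapping)]
        for ta, label in t_atoms.items():
            if label != q_atoms[qa] or ta in mapping.values():
                continue
            # Every already-mapped query neighbor of qa must map to a
            # target atom bonded to ta.
            ok = all(frozenset({mapping[qb], ta}) in t_bonds
                     for qb in mapping if frozenset({qb, qa}) in q_bonds)
            if ok:
                mapping[qa] = ta
                result = extend(mapping)
                if result:
                    return result
                del mapping[qa]
        return None

    return extend({})

# Query: a C-O fragment; target: an ethanol-like C-C-O chain.
match = dfs_match(
    {"q1": "C", "q2": "O"}, {frozenset({"q1", "q2"})},
    {"a1": "C", "a2": "C", "a3": "O"},
    {frozenset({"a1", "a2"}), frozenset({"a2", "a3"})})
print(match)  # {'q1': 'a2', 'q2': 'a3'}
```

    The novelty the abstract claims lies in pruning this search prospectively with reacting-center and AAM data, so that only reaction-relevant embeddings are explored.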

  9. Research on Ajax and Hibernate technology in the development of E-shop system

    NASA Astrophysics Data System (ADS)

    Yin, Luo

    2011-12-01

    Hibernate is an open-source object-relational mapping framework that provides lightweight object encapsulation of JDBC, letting Java programmers manipulate the database freely using object-oriented concepts. The emergence of Ajax (asynchronous JavaScript and XML) opened the era of partial page refresh, so that developers can build web applications with richer interaction. The paper illustrates the concrete application of Ajax and Hibernate to the development of an e-shop in detail, using them to divide the entire program into relatively independent parts that cooperate with one another. In this way, the program becomes easier to maintain and extend.
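
    The object-relational mapping idea behind Hibernate can be sketched in Python (used here for consistency with the other examples, with SQLite in place of the e-shop's DBMS); the Product class and table are illustrative assumptions, not the paper's code.

```python
import sqlite3

# ORM sketch: a plain object is mapped to a table row, so application
# code works with objects rather than hand-written SQL at every access.
# This is the concept Hibernate automates for Java; names are illustrative.

class Product:
    def __init__(self, pid, name, price):
        self.pid, self.name, self.price = pid, name, price

class ProductMapper:
    """Persists Product objects to a 'product' table and loads them back."""
    def __init__(self, con):
        self.con = con
        con.execute("CREATE TABLE IF NOT EXISTS product "
                    "(pid INTEGER PRIMARY KEY, name TEXT, price REAL)")

    def save(self, p):
        self.con.execute("INSERT OR REPLACE INTO product VALUES (?, ?, ?)",
                         (p.pid, p.name, p.price))

    def load(self, pid):
        row = self.con.execute(
            "SELECT pid, name, price FROM product WHERE pid = ?",
            (pid,)).fetchone()
        return Product(*row) if row else None

con = sqlite3.connect(":memory:")
mapper = ProductMapper(con)
mapper.save(Product(1, "keyboard", 19.9))
p = mapper.load(1)
print(p.name, p.price)  # keyboard 19.9
```

    Hibernate generates the mapper layer from configuration instead of requiring it to be hand-written, which is what makes the division into independent parts described above practical.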

  10. Social media based NPL system to find and retrieve ARM data: Concept paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet; Giansiracusa, Michael T.; Kumar, Jitendra

    Information connectivity and retrieval has a role in our daily lives. The most pervasive source of online information is databases. The amount of data is growing at a rapid rate, and database technology is improving and having a profound effect. Almost all online applications store and retrieve information from databases. One challenge in supplying the public with wider access to informational databases is the need for knowledge of database languages like Structured Query Language (SQL). Although the SQL language has been published in many forms, not everybody is able to write SQL queries. Another challenge is that it may not be practical to make the public aware of the structure of the database. There is a need for novice users to query relational databases using their natural language. To solve this problem, many natural language interfaces to structured databases have been developed. The goal is to provide a more intuitive method for generating database queries and delivering responses. Social media makes it possible to interact with a wide section of the population. Through this medium, and with the help of Natural Language Processing (NLP), we can make the data of the Atmospheric Radiation Measurement Data Center (ADC) more accessible to the public. We propose an architecture using Apache Lucene/Solr [1], OpenML [2,3], and Kafka [4] to generate an automated query/response system with inputs from Twitter [5], our Cassandra DB, and our log database. Using the Twitter API and NLP, we can give the public the ability to ask questions of our database and get automated responses.
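
    The natural-language-to-query idea can be sketched as a toy mapping from recognized keywords to parameterized SQL; the table, data, and keyword lexicon are assumptions for illustration, not the proposed Solr/Kafka architecture.

```python
import sqlite3

# Toy sketch of translating a user question into a parameterized SQL
# query via keyword spotting. The measurements table and the keyword
# lexicon are illustrative assumptions, not the ADC's actual data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurements (site TEXT, instrument TEXT, value REAL)")
con.executemany("INSERT INTO measurements VALUES (?, ?, ?)",
                [("SGP", "radiometer", 1.2), ("NSA", "lidar", 0.7)])

# Maps words a user might type to site codes in the database.
KEYWORD_TO_SITE = {"oklahoma": "SGP", "alaska": "NSA"}

def answer(question):
    """Return measurement rows for the first recognized site keyword."""
    q = question.lower()
    for keyword, site in KEYWORD_TO_SITE.items():
        if keyword in q:
            return con.execute(
                "SELECT instrument, value FROM measurements WHERE site = ?",
                (site,)).fetchall()
    return []

print(answer("What did the Alaska site measure?"))  # [('lidar', 0.7)]
```

    A production system replaces the keyword table with an NLP pipeline, but the end product is the same: a safe, parameterized query issued on the user's behalf, so no SQL knowledge is required.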

  12. The current status of usability studies of information technologies in China: a systematic study.

    PubMed

    Lei, Jianbo; Xu, Lufei; Meng, Qun; Zhang, Jiajie; Gong, Yang

    2014-01-01

    To systematically review and analyze the current status and characteristics of usability studies in China in the field of information technology in general and in the field of healthcare in particular. We performed a quantitative literature analysis in three major Chinese academic databases and one English-language database using Chinese search terms equivalent to the concept of usability. Six hundred forty-seven publications were selected for analysis. We found that in China the literature on usability in the field of information technology began in 1994 and increased thereafter. The usability definitions from ISO 9241-11:1998 and Nielsen (1993) have been widely recognized and cited. Few authors have published more than one study on the topic. Fourteen journals have a publishing rate over 1%. Only nine publications about health information technology (HIT) were identified. China's usability research started relatively late. There is a lack of organized research teams and dedicated usability journals. High-impact theoretical studies are scarce. On the application side, no original and systematic research frameworks have been developed. The understanding and definition of usability are not well synchronized with international norms. Moreover, usability research in HIT is rare. More human and material resources need to be invested in China's usability research, particularly in HIT.

  13. A Community Data Model for Hydrologic Observations

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.

    2006-12-01

    The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database, designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single-observation level is most effective for providing querying capability and cross-dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database and the development of web services for the retrieval of data from, and ingestion of data into, the database. These web services, hosted by the San Diego Supercomputer Center, make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Simple Object Access Protocol (SOAP) capability.
    This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; and (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis across different observation types.
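
    The premise of a relational database at the single-observation level can be made concrete with a toy schema. The sketch below, in Python with stdlib sqlite3, is loosely inspired by the abstract's description; the table and column names are illustrative, not the CUAHSI model's actual design. The join shows the cross-dimension retrieval referred to above: each value comes back with its site and units, so every observation is interpretable on its own.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sites     (site_id INTEGER PRIMARY KEY, name TEXT, lat REAL, lon REAL);
CREATE TABLE variables (var_id  INTEGER PRIMARY KEY, name TEXT, units TEXT);
CREATE TABLE observations (
    obs_id  INTEGER PRIMARY KEY,
    site_id INTEGER REFERENCES sites(site_id),
    var_id  INTEGER REFERENCES variables(var_id),
    ts      TEXT,      -- ISO 8601 timestamp
    value   REAL);
""")
conn.execute("INSERT INTO sites VALUES (1, 'Logan River', 41.7, -111.8)")
conn.execute("INSERT INTO variables VALUES (1, 'discharge', 'm^3/s')")
conn.executemany("INSERT INTO observations VALUES (?, 1, 1, ?, ?)",
                 [(1, "2006-10-01T00:00", 1.2),
                  (2, "2006-10-01T01:00", 1.4)])

# Cross-dimension retrieval: each value is joined with its site and units,
# so a single observation carries the metadata needed to interpret it.
rows = conn.execute("""
    SELECT s.name, v.name, v.units, o.ts, o.value
    FROM observations o
    JOIN sites s     ON s.site_id = o.site_id
    JOIN variables v ON v.var_id  = o.var_id
    ORDER BY o.ts""").fetchall()
print(rows[0])  # ('Logan River', 'discharge', 'm^3/s', '2006-10-01T00:00', 1.2)
```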

  14. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the supporting infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, a virtual manufacturing database, a virtual manufacturing paradigm, an implementation/integration procedure, and testable verification models must be constructed. The common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include material thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, the virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and the troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position to become a clearinghouse for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.

  15. COINS: A composites information database system

    NASA Technical Reports Server (NTRS)

    Siddiqi, Shahid; Vosteen, Louis F.; Edlow, Ralph; Kwa, Teck-Seng

    1992-01-01

    An automated data abstraction form (ADAF) was developed to collect information on advanced fabrication processes and their related costs. The information will be collected for all components being fabricated as part of the ACT program and included in the COmposites INformation System (COINS) database. The aim of the COINS development effort is to provide future airframe preliminary design and fabrication teams with a tool through which production cost can become a deterministic variable in the design optimization process. The effort was initiated by the Structures Technology Program Office (STPO) of NASA LaRC to implement the recommendations of a working group comprised of representatives from the commercial airframe companies. The principal working group recommendation was to re-institute the collection of composite part fabrication data in a format similar to the DOD/NASA Structural Composites Fabrication Guide. The fabrication information collection form was automated with current user-friendly computer technology. This work-in-progress paper describes the new automated form and the features that make it easy to use by an aircraft structural design-manufacturing team.

  16. A dedicated database system for handling multi-level data in systems biology.

    PubMed

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a single database. In addition, a yeast data repository was implemented as an integrated database environment operated by the database system. Two applications were implemented to demonstrate the extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific to two sample cases: 1) detecting the pheromone pathway in protein interaction networks; and 2) finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a single database environment for systems biology research.
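
    The first sample case, detecting a pathway in a protein interaction network, boils down to a graph search. A minimal sketch in Python: the edge list below is a hand-picked toy fragment (the protein names echo the yeast pheromone pathway, but the network is invented for the example and is not taken from the database described here).

```python
from collections import deque

# Toy undirected protein-interaction network as an edge list.
edges = [("Ste2", "Gpa1"), ("Gpa1", "Ste4"), ("Ste4", "Ste5"),
         ("Ste5", "Fus3"), ("Snf1", "Mig1")]

graph = {}
for a, b in edges:                     # build adjacency lists
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def find_path(start, goal):
    """Breadth-first search returning one shortest interaction path,
    or None if the two proteins are not connected."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("Ste2", "Fus3"))  # ['Ste2', 'Gpa1', 'Ste4', 'Ste5', 'Fus3']
```

    In an integrated database environment, the edge list would come from a query against the stored interaction data rather than a literal in the script.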

  17. The National Deep-Sea Coral and Sponge Database: A Comprehensive Resource for United States Deep-Sea Coral and Sponge Records

    NASA Astrophysics Data System (ADS)

    Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.

    2014-12-01

    Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and from journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution, from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them to the NOAA National Data Centers, to build a robust metadata record, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database. Records can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.
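
    A hypothetical miniature of such a point-record table, in Python with stdlib sqlite3, shows how the portal-style refinement by taxon, depth and time reduces to a parameterised query. The schema and occurrence rows below are invented for illustration; the real National Database carries far richer fields (provenance, accuracy, imagery).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE occurrences
    (taxon TEXT, lat REAL, lon REAL, depth_m REAL, year INTEGER)""")
conn.executemany("INSERT INTO occurrences VALUES (?, ?, ?, ?, ?)", [
    ("Lophelia pertusa",   29.1, -88.3, 450.0, 2009),
    ("Paragorgia arborea", 44.6, -67.1, 210.0, 1998),
    ("Lophelia pertusa",   27.8, -91.5, 520.0, 2014)])

# Map-portal style refinement: filter the point records by taxon,
# depth range and time with a single parameterised query.
rows = conn.execute("""
    SELECT taxon, lat, lon, depth_m FROM occurrences
    WHERE taxon = ? AND depth_m BETWEEN ? AND ? AND year >= ?
    ORDER BY depth_m""",
    ("Lophelia pertusa", 400, 600, 2000)).fetchall()
print(rows)
```

    The same WHERE clause, with a bounding box added for the map extent, also drives the subset-download feature described above.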

  18. An Examination of Selected Software Testing Tools: 1992

    DTIC Science & Technology

    1992-12-01

    Figure 27-17. Metrics Manager Database Full Report ... historical test database, the test management and problem reporting tools were examined using the sample test database provided by each supplier. ... track the impact of new methods, organizational structures, and technologies. Metrics Manager is supported by an industry database that allows ...

  19. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  20. Examining the Factors That Contribute to Successful Database Application Implementation Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Nworji, Alexander O.

    2013-01-01

    Most organizations spend millions of dollars due to the impact of improperly implemented database application systems as evidenced by poor data quality problems. The purpose of this quantitative study was to use, and extend, the technology acceptance model (TAM) to assess the impact of information quality and technical quality factors on database…

  1. An Overview to Research on Education Technology Based on Constructivist Learning Approach

    ERIC Educational Resources Information Center

    Asiksoy, Gulsum; Ozdamli, Fezile

    2017-01-01

    The aim of this research is to determine the trends of education technology research on the Constructivist Learning Approach published in the ScienceDirect database between 2010 and 2016. It also aims to guide researchers who will do studies in this field. After scanning the database, 81 articles published on ScienceDirect's data base…

  2. The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity

    NASA Astrophysics Data System (ADS)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo

    2015-05-01

    The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department over the last 5 years has resulted in a reliable, high-performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDAV endpoints. Besides these services, an Oracle database facility is in production, characterized by an effective level of parallelism, redundancy and availability. This facility runs databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, such as Real Application Clusters (RAC), Automatic Storage Management (ASM) and Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state of the art of the INFN-CNAF Tier1 Storage department infrastructure and software services, and to give a brief outlook on forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.

  3. Integrating diverse databases into an unified analysis framework: a Galaxy approach

    PubMed Central

    Blankenberg, Daniel; Coraor, Nathan; Von Kuster, Gregory; Taylor, James; Nekrutenko, Anton

    2011-01-01

    Recent technological advances have led to the ability to generate large amounts of data for model and non-model organisms. Whereas in the past a relatively small number of central repositories served genomic data, an increasing number of distinct specialized data repositories and resources have been established. Here, we describe a generic approach that provides for the integration of a diverse spectrum of data resources into a unified analysis framework, Galaxy (http://usegalaxy.org). This approach allows the simplified coupling of external data resources with the data analysis tools available to Galaxy users, while leveraging the native data mining facilities of the external data resources. Database URL: http://usegalaxy.org PMID:21531983

  4. An overview on integrated data system for archiving and sharing marine geology and geophysical data in Korea Institute of Ocean Science & Technology (KIOST)

    NASA Astrophysics Data System (ADS)

    Choi, Sang-Hwa; Kim, Sung Dae; Park, Hyuk Min; Lee, SeungHa

    2016-04-01

    We have established and operated an integrated data system for managing, archiving and sharing marine geology and geophysical data around Korea, produced from various research projects and programs at the Korea Institute of Ocean Science & Technology (KIOST). First of all, to keep the data system consistent through continuous data updates, we set up standard operating procedures (SOPs) for data archiving, data processing and conversion, data quality control, data uploading, database maintenance, etc. The system comprises two databases: an ARCHIVE DB and a GIS DB. The ARCHIVE DB stores archived data in the original forms and formats supplied by data providers, while the GIS DB manages all other compiled, processed and reproduced data and information for data services and GIS applications. The relational database management system Oracle 11g was adopted as the DBMS, and open-source GIS technologies were applied for the GIS services: OpenLayers for the user interface, GeoServer as the application server, and PostGIS with PostgreSQL for the GIS database. For convenient use of geophysical data in SEG Y format, a viewer program was developed and embedded in the system. Users can search data through the GIS user interface and save the results as a report.

  5. The Top 50 Articles on Minimally Invasive Spine Surgery.

    PubMed

    Virk, Sohrab S; Yu, Elizabeth

    2017-04-01

    Bibliometric study of current literature. To catalog the most important minimally invasive spine (MIS) surgery articles, using citation count as a marker of relevance. MIS surgery is a relatively new tool used by spinal surgeons. There is a dynamic and evolving field of research related to MIS techniques, clinical outcomes, and basic science research. To date, there is no comprehensive review of the most cited articles related to MIS surgery. A systematic search was performed over three widely used literature databases: Web of Science, Scopus, and Google Scholar. Four searches were performed using the terms "minimally invasive spine surgery," "endoscopic spine surgery," "percutaneous spinal surgery," and "lateral interbody surgery." The citation counts were averaged across the three databases to rank each article. The query of the three databases was performed in November 2015. Fifty articles were selected based upon the average number of citations across the three databases. The most cited article, titled "Extreme Lateral Interbody Fusion (XLIF): a novel surgical technique for anterior lumbar interbody fusion" by Ozgur et al, was credited with 447, 239, and 279 citations in Google Scholar, Web of Science, and Scopus, respectively. Citations ranged from 27 to 239 for Web of Science, 60 to 279 for Scopus, and 104 to 462 for Google Scholar. There was a large variety of articles spanning 14 different topics, with the majority dealing with clinical outcomes related to MIS surgery. The majority of the most cited articles were level III and level IV studies, likely due to the relatively recent nature of technological advances in the field; level I and level II studies in MIS surgery are needed in the years ahead. Level of Evidence: 5.
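
    The ranking method, averaging each article's citation counts across the three databases, is simple enough to state in a few lines of Python. The Ozgur et al. figures (239, 279, 447) are those quoted in the abstract; the other two entries are invented for the example.

```python
# Citation counts per paper as (Web of Science, Scopus, Google Scholar).
papers = {
    "Ozgur et al., XLIF":   (239, 279, 447),
    "Hypothetical paper A": (120, 150, 300),
    "Hypothetical paper B": (27,  60,  104),
}

# Rank papers by their mean citation count across the three databases.
ranked = sorted(papers.items(),
                key=lambda kv: sum(kv[1]) / len(kv[1]),
                reverse=True)
for title, counts in ranked:
    print(f"{title}: mean citations {sum(counts) / len(counts):.1f}")
```

    Averaging smooths over the systematic differences between the databases (Google Scholar typically indexes more citing sources than Web of Science or Scopus).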

  6. ALDB: a domestic-animal long noncoding RNA database.

    PubMed

    Li, Aimin; Zhang, Junying; Zhou, Zhongyin; Wang, Lei; Liu, Yujuan; Liu, Yajun

    2015-01-01

    Long noncoding RNAs (lncRNAs) have attracted significant attention in recent years due to their important roles in many biological processes. Domestic animals constitute a unique resource for understanding the genetic basis of phenotypic variation and are ideal models relevant to diverse areas of biomedical research. With improving sequencing technologies, numerous domestic-animal lncRNAs are now available. Thus, there is an immediate need for a database resource that can assist researchers to store, organize, analyze and visualize domestic-animal lncRNAs. The domestic-animal lncRNA database, named ALDB, is the first comprehensive database with a focus on the domestic-animal lncRNAs. It currently archives 12,103 pig intergenic lncRNAs (lincRNAs), 8,923 chicken lincRNAs and 8,250 cow lincRNAs. In addition to the annotations of lincRNAs, it offers related data that is not available yet in existing lncRNA databases (lncRNAdb and NONCODE), such as genome-wide expression profiles and animal quantitative trait loci (QTLs) of domestic animals. Moreover, a collection of interfaces and applications, such as the Basic Local Alignment Search Tool (BLAST), the Generic Genome Browser (GBrowse) and flexible search functionalities, are available to help users effectively explore, analyze and download data related to domestic-animal lncRNAs. ALDB enables the exploration and comparative analysis of lncRNAs in domestic animals. A user-friendly web interface, integrated information and tools make it valuable to researchers in their studies. ALDB is freely available from http://res.xaut.edu.cn/aldb/index.jsp.

  7. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement, owing to the contradictions between large-scale spatial data and limited network bandwidth, and between short-lived sessions and long transaction processing. The differences among, and trends of, CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server and a spatial data server. Moreover, the design and implementation of the components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans). In addition, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  8. Thailand mutation and variation database (ThaiMUT).

    PubMed

    Ruangrit, Uttapong; Srikummool, Metawee; Assawamakin, Anunchai; Ngamphiw, Chumpol; Chuechote, Suparat; Thaiprasarnsup, Vilasinee; Agavatpanitch, Gallissara; Pasomsab, Ekawat; Yenchitsomanus, Pa-Thai; Mahasirimongkol, Surakameth; Chantratita, Wasun; Palittapongarnpim, Prasit; Uyyanonvara, Bunyarit; Limwongse, Chanin; Tongsima, Sissades

    2008-08-01

    With the completion of the Human Genome Project, novel sequencing and genotyping technologies have been utilized to detect mutations, which are continually being reported at an exponential rate by researchers in various communities. Because mutation spectra differ between populations, the occurrence of Mendelian diseases differs across ethnic groups, and a proportion of Mendelian diseases can be observed in some countries at higher rates than in others. Recognizing the importance of mutation effects in Thailand, we established a National and Ethnic Mutation Database (NEMDB) for Thai people. This database, named the Thailand Mutation and Variation database (ThaiMUT), offers web-based access to genetic mutation and variation information in the Thai population. This NEMDB initiative is an important informatics tool for both research and clinical purposes to retrieve and deposit human variation data. The mutation data cataloged in the ThaiMUT database were derived from journal articles available in PubMed and local publications. In addition to the collected mutation data, ThaiMUT also records genetic polymorphisms located in drug-related genes. ThaiMUT can thus provide useful information for clinical mutation screening services for Mendelian diseases and for pharmacogenomic research. ThaiMUT can be publicly accessed from http://gi.biotec.or.th/thaimut.

  9. Nuclear Energy Infrastructure Database Description and User’s Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE’s infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  10. Opportunities for engaging low-income, vulnerable populations in health care: a systematic review of homeless persons' access to and use of information technologies.

    PubMed

    McInnes, D Keith; Li, Alice E; Hogan, Timothy P

    2013-12-01

    We systematically reviewed the health and social science literature on access to and use of information technologies by homeless persons by searching 5 bibliographic databases. Articles were included if they were in English, represented original research, appeared in peer-reviewed publications, and addressed our research questions. Sixteen articles met our inclusion criteria. We found that mobile phone ownership ranged from 44% to 62%; computer ownership, from 24% to 40%; computer access and use, from 47% to 55%; and Internet use, from 19% to 84%. Homeless persons used technologies for a range of purposes, some of which were health related. Many homeless persons had access to information technologies, suggesting possible health benefits to developing programs that link homeless persons to health care through mobile phones and the Internet.

  11. Towards the ophthalmology patentome: a comprehensive patent database of ocular drugs and biomarkers.

    PubMed

    Mucke, Hermann A M; Mucke, Eva; Mucke, Peter M

    2013-01-01

    We are currently building a database of all patent documents that contain substantial information related to pharmacology, drug delivery, tissue technology, and molecular diagnostics in ophthalmology. The goal is to establish a 'patentome', a body of cleaned and annotated data where all text-based, chemistry and pharmacology information can be accessed and mined in its context. We provide metrics on Patent Cooperation Treaty (PCT) documents, which demonstrate that ocular-related patenting has shown stronger growth than general PCT patenting during the past 25 years, and, while the majority of applications of this type have always provided substantial biological data, both data support and objections by patent examiners have been increasing since 2006-2007. Separately, we present a case study of chemistry information extraction from patents published during the 1950s and 1970s, which reveals compounds with corneal anesthesia potential that were never published in the peer-reviewed literature.

  12. UTILIZATION OF GEOGRAPHIC INFORMATION SYSTEMS TECHNOLOGY IN THE ASSESSMENT OF REGIONAL GROUND-WATER QUALITY.

    USGS Publications Warehouse

    Nebert, Douglas; Anderson, Dean

    1987-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Environmental Protection Agency Office of Pesticide Programs and several State agencies in Oregon, has prepared a digital spatial database at 1:500,000 scale to be used as a basis for evaluating the potential for ground-water contamination by pesticides and other agricultural chemicals. Geographic information system (GIS) software was used to assemble, analyze, and manage spatial and tabular environmental data in support of this project. Physical processes were interpreted relative to published spatial data, and an integrated database to support the appraisal of regional ground-water contamination was constructed. Ground-water sampling results were reviewed relative to the environmental factors present in several agricultural areas to develop an empirical knowledge base that could be used to assist in the selection of future sampling or study areas.

  13. Sequential data access with Oracle and Hadoop: a performance comparison

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Canali, Luca; Grancher, Eric

    2014-06-01

    The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions such as MapReduce and HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.
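    The sequential-scan workload at the heart of such a comparison can be sketched in a few lines. The snippet below is only an illustrative micro-benchmark, using Python's built-in SQLite as a stand-in for the relational engine; the table name, row count, and payload size are invented and are not taken from the paper's setup.

```python
import sqlite3
import time

# Build a small table in an in-memory SQLite database (stand-in engine).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 (("x" * 100,) for _ in range(100_000)))
conn.commit()

# Time a full sequential scan, the access pattern the comparison focuses on.
start = time.perf_counter()
rows = 0
for _ in conn.execute("SELECT payload FROM events"):
    rows += 1
elapsed = time.perf_counter() - start
print(f"scanned {rows} rows in {elapsed:.3f}s")
```

A real benchmark in this spirit would vary the data volume, storage layout, and degree of parallelism, which is where the "shared nothing" architecture is expected to pay off.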

  14. Assistive technology in developing countries: a review from the perspective of the Convention on the Rights of Persons with Disabilities.

    PubMed

    Borg, Johan; Lindström, Anna; Larsson, Stig

    2011-03-01

    The 'Convention on the Rights of Persons with Disabilities' (CRPD) requires governments to meet the assistive technology needs of their citizens. However, access to assistive technology in developing countries is severely limited, a situation aggravated by a lack of related services. To summarize current knowledge on assistive technology for low- and lower-middle-income countries published in 1995 or later, and to provide recommendations that facilitate implementation of the CRPD. Literature review. Literature was searched in web-based databases and reference lists. Studies carried out in low- and lower-middle-income countries, or addressing assistive technology for such countries, were included. The 52 included articles are dominated by product-oriented research on leg prostheses and manual wheelchairs. Less has been published on hearing aids and virtually nothing on the broad range of other types of assistive technology. To support effective implementation of the CRPD in these countries, there is a need for actions and research related particularly to policies, service delivery, outcomes, and international cooperation, but also to product development and production. The article has the potential to contribute to CRPD-compliant developments in the provision of assistive technology in developing countries by providing practitioners with an overview of published knowledge and researchers with identified research needs.

  15. Microlithography and resist technology information at your fingertips via SciFinder

    NASA Astrophysics Data System (ADS)

    Konuk, Rengin; Macko, John R.; Staggenborg, Lisa

    1997-07-01

    Finding and retrieving the information you need about microlithography and resist technology in a timely fashion can make or break your competitive edge in today's business environment. Chemical Abstracts Service (CAS) provides the most complete and comprehensive database of the chemical literature in the CAplus, REGISTRY, and CASREACT files, including 13 million document references, 15 million substance records, and over 1.2 million reactions. This includes comprehensive coverage of positive and negative resist formulations and processing, photoacid generation, silylation, single and multilayer resist systems, photomasks, dry and wet etching, photolithography, electron-beam, ion-beam and x-ray lithography technologies and process control, optical tools, exposure systems, radiation sources, and steppers. Journal articles, conference proceedings, and patents related to microlithography and resist technology are analyzed and indexed by scientific information analysts with strong technical backgrounds in these areas. The full CAS database, which is updated weekly with new information, is now available at your desktop via a convenient, user-friendly tool called 'SciFinder.' Author, subject, and chemical substance searching is simplified by SciFinder's smart search features. Chemical substances can be searched by chemical structure, chemical name, CAS registry number, or molecular formula. Drawing chemical structures in SciFinder is easy and does not require compliance with CA conventions. The built-in intelligence of SciFinder enables users to retrieve substances with multiple components, tautomeric forms, and salts.

  16. Modern information technologies in environmental health surveillance. An overview and analysis.

    PubMed

    Bédard, Yvan; Henriques, William D

    2002-01-01

    In recent years we have witnessed the massive introduction of new information technologies that are drastically changing the face of our society. These technologies are being implemented en masse in developed countries, but also in some pockets of developing nations as well. They rely on the convergence of several technologies such as powerful and affordable computers, real-time electronic measurement and monitoring devices, massive production of digital information in different formats, and faster, wireless communication media. Such technologies are having significant impacts on every domain of application, including environmental health surveillance. The current paper provides an overview of those technologies that are having or will likely have the most significant impacts on environmental health. They include World Wide Web-based systems and applications, Database Management Systems and Universal Servers, and GIS and related technologies. The usefulness of these technologies as well as the desire to use them further in the future in the context of environmental health are discussed. Expanding the development and use of these technologies to obtain support for global environmental health will require major efforts in the areas of data access, training and support.

  17. Effectiveness and safety of moxibustion treatment for non-specific lower back pain: protocol for a systematic review.

    PubMed

    Leem, Jungtae; Lee, Seunghoon; Park, Yeoncheol; Seo, Byung-Kwan; Cho, Yeeun; Kang, Jung Won; Lee, Yoon Jae; Ha, In-Hyuk; Lee, Hyun-Jong; Kim, Eun-Jung; Lee, Sanghoon; Nam, Dongwoo

    2017-06-23

    Many patients experience acute lower back pain that becomes chronic pain. The proportion of patients using complementary and alternative medicine to treat lower back pain is increasing. Even though several moxibustion clinical trials for lower back pain have been conducted, the effectiveness and safety of moxibustion intervention remain controversial. The purpose of this study protocol for a systematic review is to evaluate the effectiveness and safety of moxibustion treatment for non-specific lower back pain patients. We will conduct an electronic search of several databases from their inception to May 2017, including Embase, PubMed, Cochrane Central Register of Controlled Trials, Allied and Complementary Medicine Database, Wanfang Database, Chongqing VIP Chinese Science and Technology Periodical Database, China National Knowledge Infrastructure Database, Korean Medical Database, Korean Studies Information Service System, National Discovery for Science Leaders, Oriental Medicine Advanced Searching Integrated System, the Korea Institute of Science and Technology, and KoreaMed. Randomised controlled trials investigating any type of moxibustion treatment will be included. The primary outcomes will be pain intensity and functional status/disability due to lower back pain. The secondary outcomes will be a global measurement of recovery or improvement, work-related outcomes, radiographic improvement of structure, quality of life, and adverse events (presence or absence). Risk ratios or mean differences with 95% confidence intervals will be used to show the effect of moxibustion therapy when it is possible to conduct a meta-analysis. This review will be published in a peer-reviewed journal and will be presented at an international academic conference for dissemination. 
Our results will provide current evidence of the effectiveness and safety of moxibustion treatment in non-specific lower back pain patients, and thus will be beneficial to patients, practitioners, and policymakers. CRD42016047468 in PROSPERO 2016. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. TECHNOLOGICAL INFORMATION REGARDING PREBIOTICS AND PROBIOTICS NUTRITION VERSUS THE PATENT REGISTERS: WHAT IS NEW?

    PubMed Central

    dos REIS, José Maciel Caldas; PINHEIRO, Maurício Fortuna; OTI, André Takashi; FEITOSA-JUNIOR, Denilson José Silva; PANTOJA, Mauro de Souza; BARROS, Rui Sérgio Monteiro

    2016-01-01

    ABSTRACT Introduction: Food is a key factor both in preventing disease and in promoting human health. Among functional foods, probiotics and prebiotics are highlighted. Patent databases are the main source of technological information about innovation worldwide, providing an extensive library for the research sector. Objective: To perform a mapping of the main patent databases regarding pre- and probiotics, seeking relevant information on the use of biotechnology, nanotechnology, and genetic engineering in the production of these foods. Method: An electronic (online) consultation was conducted in the main public patent databases of Brazil (INPI), the United States (USPTO), and the European Patent Office (EPO). The research covered the period from January 2014 to July 2015, using the following descriptors in the title and abstract fields of patents: in INPI, "prebiotic", "prebiotic", "probiotics", "probiotic"; and in the USPTO and EPO, "prebiotic", "prebiotics", "probiotic", "probiotics". Results: The search found no deposits at the Brazilian patent website (INPI) in this period; the US Patent & Trademark Office had registered 60 patent titles, and the European Patent Office (EPO) showed 10 documents on the issue. Conclusion: Information technology offered by genetic engineering, biotechnology, and nanotechnology, deposited in the form of patent titles and abstracts relating to early nutritional intervention with functional foods, has been increasingly required to decrease risks and control the progression of health problems. However, the existing summaries, although attractive and promising in this sense, are still too incipient to recommend these foods safely as a therapeutic tool. Therefore, they should be seen more as elements of diet and healthy lifestyles. PMID:28076487

  19. CD-ROM in the age of internet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conrad, B.; Depp, D.

    1994-12-31

    Networks are hot and CD-ROM is also hot, but how do they mix? CD-ROM is a relatively inexpensive medium for storing and delivering information, and increasingly, users are connected to networks. But the technologies have developed separately, and there are obstacles to their integration. Drawing on their experience networking CD-ROMs at Oak Ridge National Laboratory, the authors discuss CD-ROM's strengths and weaknesses as a technology for delivering information to the desktop. CD-ROM networking solutions are LAN-based, not "open systems." Despite this limitation, due to the large number of information resources available on CD-ROM and the relative ease of installing and maintaining databases on CD-ROM, CD-ROMs remain an essential piece of the electronic information puzzle.

  20. Cryogenic hydrogen-induced air liquefaction technologies

    NASA Technical Reports Server (NTRS)

    Escher, William J. D.

    1990-01-01

    Extensively utilizing a special advanced airbreathing propulsion archives database, as well as direct contacts with individuals who were active in the field in previous years, a technical assessment of cryogenic hydrogen-induced air liquefaction, as a prospective onboard aerospace vehicle process, was performed and documented. The resulting assessment report is summarized. Technical findings are presented on the status of air liquefaction technology, both as a singular technical area and as a cluster of collateral technical areas, including: compact lightweight cryogenic heat exchangers; alleviation of heat exchanger fouling by atmospheric constituents; para/ortho hydrogen shift conversion catalysts; hydrogen turbine expanders, cryogenic air compressors, and liquid air pumps; hydrogen recycling using slush hydrogen as a heat sink; liquid hydrogen/liquid air rocket-type combustion devices; air collection and enrichment systems (ACES); and technically related engine concepts.

  1. Hybrid Wing Body Aircraft System Noise Assessment with Propulsion Airframe Aeroacoustic Experiments

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Burley, Casey L.; Olson, Erik D.

    2010-01-01

    A system noise assessment of a hybrid wing body configuration was performed using NASA's best available aircraft models, engine model, and system noise assessment method. A propulsion airframe aeroacoustic effects experimental database for key noise sources and interaction effects was used to provide data directly in the noise assessment where prediction methods are inadequate. NASA engine and aircraft system models were created to define the hybrid wing body aircraft concept as a twin engine aircraft with a 7500 nautical mile mission. The engines were modeled as existing technology high bypass ratio turbofans. The baseline hybrid wing body aircraft was assessed at 22 dB cumulative below the FAA Stage 4 certification level. To determine the potential for noise reduction with relatively near term technologies, seven other configurations were assessed, beginning with moving the engines two fan nozzle diameters upstream of the trailing edge and then adding technologies for reduction of the highest noise sources. Aft radiated noise was expected to be the most challenging to reduce and, therefore, the experimental database focused on jet nozzle and pylon configurations that could reduce jet noise through a combination of source reduction and shielding effectiveness. The best configuration for reduction of jet noise used state-of-the-art technology chevrons with a pylon above the engine in the crown position. This configuration resulted in jet source noise reduction, favorable azimuthal directivity, noise source relocation upstream where it is more effectively shielded by the limited airframe surface, and additional fan noise attenuation from acoustic liner on the crown pylon internal surfaces. Vertical and elevon surfaces were also assessed to add shielding area. The elevon deflection above the trailing edge showed some small additional noise reduction, whereas vertical surfaces resulted in a slight noise increase. 
With the effects of the configurations from the database included, the best available noise reduction was 40 dB cumulative. Projected effects from additional technologies were assessed for an advanced noise reduction configuration including landing gear fairings and advanced pylon and chevron nozzles. Incorporating the three additional technology improvements, aircraft noise is projected at 42.4 dB cumulative below the Stage 4 level.

  2. Investigation of an artificial intelligence technology--Model trees. Novel applications for an immediate release tablet formulation database.

    PubMed

    Shao, Q; Rowe, R C; York, P

    2007-06-01

    This study investigated an artificial intelligence technology - model trees - as a modelling tool applied to an immediate release tablet formulation database. The modelling performance was compared with that of artificial neural networks, which are well established and widely applied in pharmaceutical product formulation. The predictability of the generated models was validated on unseen data and judged by the correlation coefficient R². Output from the model tree analyses produced multivariate linear equations that predicted tablet tensile strength, disintegration time, and drug dissolution profiles of similar quality to neural network models. However, additional and valuable knowledge hidden in the formulation database was extracted from these equations. It is concluded that, as a transparent technology, model trees are useful tools for formulators.

  3. Economic evaluation of manual therapy for musculoskeletal diseases: a protocol for a systematic review and narrative synthesis of evidence.

    PubMed

    Kim, Chang-Gon; Mun, Su-Jeong; Kim, Ka-Na; Shin, Byung-Cheul; Kim, Nam-Kwen; Lee, Dong-Hyo; Lee, Jung-Han

    2016-05-13

    Manual therapy is the non-surgical conservative management of musculoskeletal disorders using the practitioner's hands on the patient's body for diagnosing and treating disease. The aim of this study is to systematically review trial-based economic evaluations of manual therapy relative to other interventions used for the management of musculoskeletal diseases. Randomised clinical trials (RCTs) on the economic evaluation of manual therapy for musculoskeletal diseases will be included in the review. The following databases will be searched from their inception: Medline, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Econlit, Mantis, Index to Chiropractic Literature, Science Citation Index, Social Science Citation Index, Allied and Complementary Medicine Database (AMED), Cochrane Database of Systematic Reviews (CDSR), National Health Service Database of Abstracts of Reviews of Effects (NHS DARE), National Health Service Health Technology Assessment Database (NHS HTA), National Health Service Economic Evaluation Database (NHS EED), CENTRAL, five Korean medical databases (Oriental Medicine Advanced Searching Integrated System (OASIS), Research Information Service System (RISS), DBPIA, Korean Traditional Knowledge Portal (KTKP) and KoreaMed) and three Chinese databases (China National Knowledge Infrastructure (CNKI), VIP and Wanfang). The evidence for the cost-effectiveness, cost-utility and cost-benefit of manual therapy for musculoskeletal diseases will be assessed as the primary outcome. Health-related quality of life and adverse effects will be assessed as secondary outcomes. We will critically appraise the included studies using the Cochrane risk of bias tool and the Drummond checklist. Results will be summarised using Slavin's qualitative best-evidence synthesis approach. 
The results of the study will be disseminated via a peer-reviewed journal and/or conference presentations. PROSPERO CRD42015026757. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  4. The collation of forensic DNA case data into a multi-dimensional intelligence database.

    PubMed

    Walsh, S J; Moss, D S; Kliem, C; Vintiner, G M

    2002-01-01

    The primary aim of any DNA database is to link individuals to unsolved offenses and unsolved offenses to each other via DNA profiling. This aim has been successfully realised during the operation of the New Zealand (NZ) DNA Databank over the past five years. The DNA Intelligence Project (DIP), a collaborative project involving NZ forensic and law enforcement agencies, interrogated the forensic case data held on the NZ DNA Databank and collated it into a functional intelligence database. This database has been used to identify significant trends that direct Police and forensic personnel towards the most appropriate use of DNA technology. Intelligence is being provided in areas such as the level of usage of DNA techniques in criminal investigation, the relative success of crime scene samples, and the geographical distribution of crimes. The DIP has broadened the dimensions of the information offered through the NZ DNA Databank and has furthered the understanding and investigative capability of both Police and forensic scientists. The outcomes of this research fit soundly with the current policies of 'intelligence led policing' being adopted by Police jurisdictions locally and overseas.

  5. Causal Factors and Adverse Conditions of Aviation Accidents and Incidents Related to Integrated Resilient Aircraft Control

    NASA Technical Reports Server (NTRS)

    Reveley, Mary S.; Briggs, Jeffrey L.; Evans, Joni K.; Sandifer, Carl E.; Jones, Sharon Monica

    2010-01-01

    The causal factors of accidents from the National Transportation Safety Board (NTSB) database and incidents from the Federal Aviation Administration (FAA) database associated with loss of control (LOC) were examined for four types of operations (i.e., Federal Aviation Regulation Part 121, Part 135 Scheduled, Part 135 Nonscheduled, and Part 91) for the years 1988 to 2004. In-flight LOC is a serious aviation problem. Well over half of the LOC accidents included at least one fatality (80 percent in Part 121), and roughly half of all aviation fatalities in the studied time period occurred in conjunction with LOC. An adverse events table was updated to provide focus to the technology validation strategy of the Integrated Resilient Aircraft Control (IRAC) Project. The table contains three types of adverse conditions: failure, damage, and upset. Thirteen different adverse condition subtypes were gleaned from the Aviation Safety Reporting System (ASRS), the FAA Accident and Incident database, and the NTSB database. The severity and frequency of the damage conditions, initial test conditions, and milestone references are also provided.

  6. The crustal dynamics intelligent user interface anthology

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Campbell, William J.; Roelofs, Larry H.; Wattawa, Scott L.

    1987-01-01

    The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose of such a service is to support the large number of potential scientific and engineering users that have need of space and land-related research and technical data, but have little or no experience in query languages or understanding of the information content or architecture of the databases of interest. This document presents the design concepts, development approach and evaluation of the performance of a prototype IUI system for the Crustal Dynamics Project Database, which was developed using a microcomputer-based expert system tool (M.1), the natural language query processor THEMIS, and the graphics software system GSS. The IUI design is based on a multiple view representation of a database from both the user and database perspective, with intelligent processes to translate between the views.

  7. Construction of databases: advances and significance in clinical research.

    PubMed

    Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian

    2015-12-01

    Widely used in clinical research, the database is a new type of data management automation technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.
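    The establish, input, verify, and manage workflow described above can be made concrete with a toy relational example. The snippet below uses Python's built-in sqlite3; the table, fields, and range check are hypothetical illustrations, not taken from the article.

```python
import sqlite3

# Establish: create a schema with a constraint that enforces data validity.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE patients (
    id INTEGER PRIMARY KEY,
    age INTEGER NOT NULL CHECK (age BETWEEN 0 AND 120),
    outcome TEXT NOT NULL
)""")

# Input: valid records are accepted.
db.executemany("INSERT INTO patients (age, outcome) VALUES (?, ?)",
               [(34, "improved"), (57, "stable")])

# Verify: the CHECK constraint rejects implausible data at entry time.
try:
    db.execute("INSERT INTO patients (age, outcome) VALUES (?, ?)",
               (150, "improved"))
except sqlite3.IntegrityError:
    print("rejected out-of-range age")

# Manage: summary queries over the stored records.
count, = db.execute("SELECT COUNT(*) FROM patients").fetchone()
print(count)
```

In practice, clinical databases layer further verification (double entry, audit trails) on top of such schema-level constraints.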

  8. Inorganic Crystal Structure Database (ICSD)

    National Institute of Standards and Technology Data Gateway

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe(FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  9. Satellite Communications Technology Database. Part 2

    NASA Technical Reports Server (NTRS)

    2001-01-01

    The Satellite Communications Technology Database is a compilation of data on state-of-the-art Ka-band technologies current as of January 2000. Most U.S. organizations have not published much of their Ka-band technology data, and so the great majority of this data is drawn largely from Japanese, European, and Canadian publications and Web sites. The data covers antennas, high power amplifiers, low noise amplifiers, MMIC devices, microwave/IF switch matrices, SAW devices, ASIC devices, power and data storage. The data herein is raw, and is often presented simply as the download of a table or figure from a site, showing specified technical characteristics, with no further explanation.

  10. New generic indexing technology

    NASA Technical Reports Server (NTRS)

    Freeston, Michael

    1996-01-01

    There has been no fundamental change in the dynamic indexing methods supporting database systems since the invention of the B-tree twenty-five years ago. And yet the whole classical approach to dynamic database indexing has long since become inappropriate and increasingly inadequate. We are moving rapidly from the conventional one-dimensional world of fixed-structure text and numbers to a multi-dimensional world of variable structures, objects and images, in space and time. But, even before leaving the confines of conventional database indexing, the situation is highly unsatisfactory. In fact, our research has led us to question the basic assumptions of conventional database indexing. We have spent the past ten years studying the properties of multi-dimensional indexing methods, and in this paper we draw the strands of a number of developments together - some quite old, some very new, to show how we now have the basis for a new generic indexing technology for the next generation of database systems.
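    To make the contrast with one-dimensional B-tree indexing concrete, here is a minimal sketch of a multi-dimensional index in the simplest possible form, a fixed grid of buckets supporting a 2-D range query. The cell size and points are invented for illustration, and this is not the generic indexing method the paper itself proposes.

```python
from collections import defaultdict

CELL = 10  # grid cell width in both dimensions (illustrative choice)

def make_index(points):
    """Bucket each 2-D point by its grid cell."""
    index = defaultdict(list)
    for x, y in points:
        index[(x // CELL, y // CELL)].append((x, y))
    return index

def range_query(index, xlo, xhi, ylo, yhi):
    """Visit only the cells overlapping the query box, then filter exactly."""
    hits = []
    for cx in range(xlo // CELL, xhi // CELL + 1):
        for cy in range(ylo // CELL, yhi // CELL + 1):
            for x, y in index.get((cx, cy), []):
                if xlo <= x <= xhi and ylo <= y <= yhi:
                    hits.append((x, y))
    return hits

points = [(3, 4), (12, 7), (25, 30), (11, 9)]
idx = make_index(points)
print(sorted(range_query(idx, 10, 20, 0, 10)))  # [(11, 9), (12, 7)]
```

A B-tree answers such box queries poorly because it orders keys along a single dimension; dynamic multi-dimensional methods refine this grid idea with adaptive, balanced partitioning.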

  11. An Object-Oriented View of Backend Databases in a Mobile Environment for Navy and Marine Corps Applications

    DTIC Science & Technology

    2006-09-01

    Each of these layers will be described in more detail to include relevant technologies (Java, PDA, Hibernate, and PostgreSQL) used to implement... Logic Layer - Object-Relational Mapper (Hibernate); Data... capable in order to interface with Java applications. Based on meeting the selection... further discussed. Query List; Application Logic Layer; Hibernate; Apache - Java Servlet - Hibernate Interface; OR Mapper; RDBMS Interface

  12. The Geant4 physics validation repository

    NASA Astrophysics Data System (ADS)

    Wenzel, H.; Yarba, J.; Dotti, A.

    2015-12-01

    The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository which consists of a relational database storing experimental data and Geant4 test results, a java API and a web application. The functionality of these components and the technology choices we made are also described.

  13. Application of Optical Disc Databases and Related Technology to Public Access Settings

    DTIC Science & Technology

    1992-03-01

    users to download and retain data. A Video Graphics Adapter (VGA) monitor was included. No printer was provided. 2. CD-ROM Product: Computer Select, a... download facilities, without printer support, satisfy user needs? A secondary, but significant, objective was avoidance of unnecessary Reader... design of User Log sheets and mitigated against attachment of a printer to the workstation. F. DATA COLLECTION: This section describes the methodology

  14. DEXTER: Disease-Expression Relation Extraction from Text.

    PubMed

    Gupta, Samir; Dingerdissen, Hayley; Ross, Karen E; Hu, Yu; Wu, Cathy H; Mazumder, Raja; Vijay-Shanker, K

    2018-01-01

    Gene expression levels affect biological processes and play a key role in many diseases. Characterizing expression profiles is useful for clinical research, diagnostics, and prognostics of diseases. There are currently several high-quality databases that capture gene expression information, obtained mostly from large-scale studies, such as microarray and next-generation sequencing technologies, in the context of disease. The scientific literature is another rich source of information on gene expression-disease relationships that have been captured not only from large-scale studies but also from thousands of small-scale studies. Expression information obtained from the literature through manual curation can extend expression databases. While many of the existing databases include information from the literature, they are limited by the time-consuming nature of manual curation and have difficulty keeping up with the explosion of publications in the biomedical field. In this work, we describe an automated text-mining tool, Disease-Expression Relation Extraction from Text (DEXTER), to extract information from the literature on gene and microRNA expression in the context of disease. One of the motivations in developing DEXTER was to extend the BioXpress database, a cancer-focused gene expression database that includes data derived from large-scale experiments and manual curation of publications. The literature-based portion of BioXpress lags significantly behind the expression information obtained from large-scale studies and can benefit from our text-mined results. We have conducted two different evaluations to measure the accuracy of our text-mining tool and achieved average F-scores of 88.51% and 81.81% for the two evaluations, respectively. 
Also, to demonstrate the ability to extract rich expression information in different disease-related scenarios, we used DEXTER to extract information on differential expression for 2024 genes in lung cancer, 115 glycosyltransferases in 62 cancers, and 826 microRNAs in 171 cancers. All extractions using DEXTER are integrated in the literature-based portion of BioXpress. Database URL: http://biotm.cis.udel.edu/DEXTER.
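    For readers unfamiliar with the metric, the F-scores quoted above are harmonic means of precision and recall. The quick check below uses made-up precision/recall values for illustration; these are not the actual DEXTER evaluation numbers.

```python
def f1(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical values: 90% precision and 87% recall give an F1 near 0.88,
# i.e. the same ballpark as the scores reported for DEXTER.
print(round(f1(0.90, 0.87), 4))
```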

  15. Home care technology through an ability expectation lens.

    PubMed

    Wolbring, Gregor; Lashewicz, Bonnie

    2014-06-20

    Home care is on the rise, and its delivery is increasingly reliant on an expanding variety of health technologies ranging from computers to telephone "health apps" to social robots. These technologies are most often predicated on expectations that people in their homes (1) can actively interact with these technologies and (2) are willing to submit to the action of the technology in their home. Our purpose is to use an "ability expectations" lens to bring together, and provide some synthesis of, the types of utility and disadvantages that can arise for people with disabilities in relation to home care technology development and use. We searched the academic databases Scopus, Web of Science, EBSCO ALL, IEEE Xplore, and Compendex to collect articles that had the term "home care technology" in the abstract or as a topic (in the case of Web of Science). We also used our background knowledge and related academic literature pertaining to self-diagnosis, health monitoring, companionship, health information gathering, and care. We examined background articles and articles collected through our home care technology search in terms of ability expectations assumed in the presentation of home care technologies, or discussed in relation to home care technologies. While advances in health care support are made possible through emerging technologies, we urge critical examination of such technologies in terms of implications for the rights and dignity of people with diverse abilities. Specifically, we see potential for technologies to result in new forms of exclusion and powerlessness. Ableism influences choices made by funders, policy makers, and the public in the development and use of home health technologies and impacts how people with disabilities are served and how useful health support technologies will be for them. 
We urge continued critical examination of technology development and use according to ability expectations, and we recommend increasing incorporation of participatory design processes to counteract potential for health support technology to render people with disabilities technologically excluded and powerless.

  16. THz Spectroscopy and Spectroscopic Database for Astrophysics

    NASA Technical Reports Server (NTRS)

    Pearson, John C.; Drouin, Brian J.

    2006-01-01

    Molecule-specific astronomical observations rely on precisely determined laboratory molecular data for interpretation. The Herschel Heterodyne Instrument for the Far Infrared, a suite of SOFIA instruments, and ALMA are each well placed to expose the limitations of available molecular physics data and spectral line catalogs. Herschel and SOFIA will observe in high spectral resolution over the entire far infrared range. Accurate data to previously unimagined frequencies, including infrared ro-vibrational and ro-torsional bands, will be required for interpretation of the observations. Planned ALMA observations with a very small beam will reveal weaker emission features requiring accurate knowledge of higher quantum numbers and additional vibrational states. Historically, laboratory spectroscopy has been at the forefront of submillimeter technology development, but now astronomical receivers have an enormous capability advantage. Additionally, rotational spectroscopy is a relatively mature field attracting little interest from students and funding agencies. Molecular database maintenance is tedious and difficult to justify as research. This severely limits funding opportunities even though databases require the same level of expertise as research. We report the application of some relatively new receiver technology in a simple solid-state THz spectrometer that has the performance required to collect the laboratory data required by astronomical observations. Further detail on the lack of preparation for upcoming missions by the JPL spectral line catalog is given.

  17. Specific energy yield comparison between crystalline silicon and amorphous silicon based PV modules

    NASA Astrophysics Data System (ADS)

    Ferenczi, Toby; Stern, Omar; Hartung, Marianne; Mueggenburg, Eike; Lynass, Mark; Bernal, Eva; Mayer, Oliver; Zettl, Marcus

    2009-08-01

    As emerging thin-film PV technologies continue to penetrate the market and the number of utility-scale installations substantially increases, a detailed understanding of the performance of the various PV technologies becomes more important. An accurate database for each technology is essential for precise project planning, energy yield prediction, and project financing. However, recent publications have shown that it is very difficult to get accurate and reliable performance data for these technologies. This paper evaluates previously reported claims that amorphous silicon based PV modules have a higher annual energy yield, relative to their rated performance, than crystalline silicon modules. To acquire a detailed understanding of this effect, outdoor module tests were performed at the GE Global Research Center in Munich. In this study we examine closely two of the five reported factors that contribute to the enhanced energy yield of amorphous silicon modules. We find evidence to support each of these factors and evaluate their relative significance. We discuss aspects for improvement in how PV modules are sold and identify areas for further study.
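
    The comparison above hinges on specific energy yield, i.e. energy delivered per unit of rated power. A minimal sketch of that normalization follows; the measurement values are made up for illustration and are not data from the study.

```python
# Illustrative specific-yield calculation; the energy and power values
# below are hypothetical, not measurements from the study.

def specific_yield(energy_kwh, rated_power_wp):
    """Annual energy yield normalized by rated module power, in kWh/kWp."""
    return energy_kwh / (rated_power_wp / 1000.0)

# One hypothetical module of each technology.
c_si = specific_yield(energy_kwh=230.0, rated_power_wp=220.0)  # crystalline Si
a_si = specific_yield(energy_kwh=105.0, rated_power_wp=95.0)   # amorphous Si

# Fractional yield advantage of the thin-film module on this made-up data.
advantage = (a_si - c_si) / c_si
```

    Because the yield is divided by rated power (kWh/kWp), modules of different technologies and sizes can be compared on an equal footing.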

  18. Analysis Report for Exascale Storage Requirements for Scientific Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruwart, Thomas M.

    Over the next 10 years, the Department of Energy will be transitioning from Petascale to Exascale Computing, causing data storage, networking, and infrastructure requirements to increase by three orders of magnitude. The technologies and best practices used today are the result of a relatively slow evolution of ancestral technologies developed in the 1950s and 1960s. These include magnetic tape, magnetic disk, networking, databases, file systems, and operating systems. These technologies will continue to evolve over the next 10 to 15 years on a reasonably predictable path. Experience with the challenges involved in transitioning these fundamental technologies from Terascale to Petascale computing systems has raised questions about how they will scale another 3 or 4 orders of magnitude to meet the requirements imposed by Exascale computing systems. This report is focused on the most concerning scaling issues with data storage systems as they relate to High Performance Computing, and presents options for a path forward. Given the ability to store exponentially increasing amounts of data, far more advanced concepts and use of metadata will be critical to managing data in Exascale computing systems.

  19. Secondary Analysis and Integration of Existing Data to Elucidate the Genetic Architecture of Cancer Risk and Related Outcomes, R21 | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    This funding opportunity announcement (FOA) encourages applications that propose to conduct secondary data analysis and integration of existing datasets and database resources, with the ultimate aim to elucidate the genetic architecture of cancer risk and related outcomes. The goal of this initiative is to address key scientific questions relevant to cancer epidemiology by supporting the analysis of existing genetic or genomic datasets, possibly in combination with environmental, outcomes, behavioral, lifestyle, and molecular profiles data.

  20. Secondary Analysis and Integration of Existing Data to Elucidate the Genetic Architecture of Cancer Risk and Related Outcomes, R01 | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    This funding opportunity announcement (FOA) encourages applications that propose to conduct secondary data analysis and integration of existing datasets and database resources, with the ultimate aim to elucidate the genetic architecture of cancer risk and related outcomes. The goal of this initiative is to address key scientific questions relevant to cancer epidemiology by supporting the analysis of existing genetic or genomic datasets, possibly in combination with environmental, outcomes, behavioral, lifestyle, and molecular profiles data.

  1. Some Reliability Issues in Very Large Databases.

    ERIC Educational Resources Information Center

    Lynch, Clifford A.

    1988-01-01

    Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…

  2. Library Instruction and Online Database Searching.

    ERIC Educational Resources Information Center

    Mercado, Heidi

    1999-01-01

    Reviews changes in online database searching in academic libraries. Topics include librarians conducting all searches; the advent of end-user searching and the need for user instruction; compact disk technology; online public catalogs; the Internet; full text databases; electronic information literacy; user education and the remote library user;…

  3. Content Independence in Multimedia Databases.

    ERIC Educational Resources Information Center

    de Vries, Arjen P.

    2001-01-01

    Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…

  4. EPAUS9R - An Energy Systems Database for use with the Market Allocation (MARKAL) Model

    EPA Pesticide Factsheets

    EPA’s MARKAL energy system databases estimate future-year technology dispersals and associated emissions. These databases are valuable tools for exploring a variety of future scenarios for the U.S. energy-production systems that can impact climate change c

  5. Managing Content in a Matter of Minutes

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA software created to help scientists expeditiously search and organize their research documents is now aiding compliance personnel, law enforcement investigators, and the general public in their efforts to search, store, manage, and retrieve documents more efficiently. Developed at Ames Research Center, NETMARK software was designed to manipulate vast amounts of unstructured and semi-structured NASA documents. NETMARK is both a relational and object-oriented technology built on an Oracle enterprise-wide database. To ensure easy user access, Ames constructed NETMARK as a Web-enabled platform utilizing the latest in Internet technology. One of the significant benefits of the program was its ability to store and manage mission-critical data.

  6. The use of inexpensive computer-based scanning survey technology to perform medical practice satisfaction surveys.

    PubMed

    Shumaker, L; Fetterolf, D E; Suhrie, J

    1998-01-01

    The recent availability of inexpensive document scanners and optical character recognition technology has made it possible to process surveys in large numbers with a minimum of operator time. Programs that allow computer entry of such scanned questionnaire results directly into PC-based relational databases have further made it possible to quickly collect and analyze significant amounts of information. We have created an internal capability to easily generate survey data and conduct surveillance across a number of medical practice sites within a managed care/practice management organization. Patient satisfaction surveys, referring physician surveys, and a variety of other evidence-gathering tools have been deployed.
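
    The workflow described, scanned responses flowing into a relational database for analysis, can be sketched with Python's built-in sqlite3 module; the table layout, site names, and scores below are hypothetical.

```python
import sqlite3

# Minimal sketch: load scanned-survey results into a relational table
# and summarize satisfaction per practice site. Schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE survey_response (
    site TEXT, question TEXT, score INTEGER)""")

# In practice these rows would come from the OCR/scanning software.
rows = [("clinic_a", "overall", 5), ("clinic_a", "overall", 4),
        ("clinic_b", "overall", 3), ("clinic_b", "overall", 4)]
conn.executemany("INSERT INTO survey_response VALUES (?, ?, ?)", rows)

# Average satisfaction score per site.
site_avgs = dict(conn.execute(
    "SELECT site, AVG(score) FROM survey_response "
    "WHERE question = 'overall' GROUP BY site"))
```

    Once the responses live in a relational store, cross-site surveillance reduces to ordinary SQL aggregation.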

  7. Clay and Polymer-Based Composites Applied to Drug Release: A Scientific and Technological Prospection.

    PubMed

    Meirelles, Lyghia Maria Araújo; Raffin, Fernanda Nervo

    2017-01-01

    There has been a growing trend in recent years toward the development of hybrid materials, called composites, based on clay and polymers, whose innovative properties render them attractive for drug release. The objective of this manuscript was to review original articles on this topic published over the last decade and the body of patents related to these carriers. A scientific prospection was carried out spanning the period from 2005 to 2015 on the Web of Science database. The technological prospection encompassed the United States Patent and Trademark Office, the European Patent Office, the World International Patent Office, and the National Institute of Industrial Property databases, filtering patents with the code A61K. The survey revealed a rise in the number of publications over the past decade, confirming the potential of these hybrids for use in pharmaceutical technology. Through interaction between polymer and clay, the mechanical and thermal properties of composites are enhanced, promoting stable, controlled drug release in biological media. The most cited clay in the articles analyzed was montmorillonite, owing to its high surface area and capacity for ion exchange. The polymeric part is commonly obtained by copolymerization, particularly using acrylate derivatives. The hybrid materials are obtained mainly in particulate form on a nanometric scale, attaining a modified release profile often sensitive to stimuli in the media. A low number of patents related to the topic were found. The World International Patent Office had the highest number of lodged patents, while Japan was the country that published the most patents. A need to broaden the application of this technology to include more therapeutic classes was identified. Moreover, the absence of regulation of nanomaterials might explain the disparity between scientific and technological output.

  8. Home Care Technology Through an Ability Expectation Lens

    PubMed Central

    2014-01-01

    Home care is on the rise, and its delivery is increasingly reliant on an expanding variety of health technologies ranging from computers to telephone “health apps” to social robots. These technologies are most often predicated on expectations that people in their homes (1) can actively interact with these technologies and (2) are willing to submit to the action of the technology in their home. Our purpose is to use an “ability expectations” lens to bring together, and provide some synthesis of, the types of utility and disadvantages that can arise for people with disabilities in relation to home care technology development and use. We searched the academic databases Scopus, Web of Science, EBSCO ALL, IEEE Xplore, and Compendex to collect articles that had the term “home care technology” in the abstract or as a topic (in the case of Web of Science). We also used our background knowledge and related academic literature pertaining to self-diagnosis, health monitoring, companionship, health information gathering, and care. We examined background articles and articles collected through our home care technology search in terms of ability expectations assumed in the presentation of home care technologies, or discussed in relation to home care technologies. While advances in health care support are made possible through emerging technologies, we urge critical examination of such technologies in terms of implications for the rights and dignity of people with diverse abilities. Specifically, we see potential for technologies to result in new forms of exclusion and powerlessness. Ableism influences choices made by funders, policy makers, and the public in the development and use of home health technologies and impacts how people with disabilities are served and how useful health support technologies will be for them. 
We urge continued critical examination of technology development and use according to ability expectations, and we recommend increasing incorporation of participatory design processes to counteract potential for health support technology to render people with disabilities technologically excluded and powerless. PMID:24956581

  9. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    NASA Astrophysics Data System (ADS)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.

  10. Publishing Linked Open Data for Physical Samples - Lessons Learned

    NASA Astrophysics Data System (ADS)

    Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.

    2016-12-01

    Most data and information about physical samples and associated sampling features currently reside in relational databases. Integrating common concepts from various databases has motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), RDF Query Language (SPARQL), and Web Ontology Language (OWL). The goal of our work is threefold: To evaluate and select ontologies in different granularities for common concepts; to establish best practices and develop a generic methodology for publishing physical sample data stored in relational database as Linked Open Data; and to reuse standard community vocabularies from the International Commission on Stratigraphy (ICS), Global Volcanism Program (GVP), General Bathymetric Chart of the Oceans (GEBCO), and others. Our work leverages developments in the EarthCube GeoLink project and the Interdisciplinary Earth Data Alliance (IEDA) facility for modeling and extracting physical sample data stored in relational databases. Reusing ontologies developed by GeoLink and IEDA has facilitated discovery and integration of data and information across multiple collections including the USGS National Geochemical Database (NGDB), System for Earth Sample Registration (SESAR), and Index to Marine & Lacustrine Geological Samples (IMLGS). We have evaluated, tested, and deployed Linked Open Data tools including Morph, Virtuoso Server, LodView, LodLive, and YASGUI for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Using persistent identifiers such as Open Researcher & Contributor IDs (ORCIDs) and International Geo Sample Numbers (IGSNs) at the record level makes it possible for other repositories to link related resources such as persons, datasets, documents, expeditions, awards, etc. to samples, features, and collections. 
This work is supported by the EarthCube "GeoLink" project (NSF# ICER14-40221 and others) and the "USGS-IEDA Partnership to Support a Data Lifecycle Framework and Tools" project (USGS# G13AC00381).
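
    Publishing sample records as Linked Open Data ultimately means emitting RDF statements that use persistent identifiers such as IGSNs and ORCIDs. A minimal sketch of hand-formatting such statements as N-Triples follows; the URIs and predicate choices are illustrative assumptions, not the ontologies the project actually uses.

```python
# Format physical-sample metadata as RDF N-Triples, linking a sample
# (identified by a hypothetical IGSN) to a contributor (hypothetical
# ORCID). Predicates borrowed from Dublin Core for illustration only.

def triple(s, p, o):
    """Format one N-Triples statement; URIs bracketed, literals quoted."""
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

sample = "http://igsn.org/IGSN:EXAMPLE01"         # hypothetical IGSN URI
person = "https://orcid.org/0000-0000-0000-0000"  # hypothetical ORCID

ntriples = [
    triple(sample, "http://purl.org/dc/terms/title", "Basalt dredge sample"),
    triple(sample, "http://purl.org/dc/terms/contributor", person),
]
```

    Because every resource carries a resolvable URI, other repositories can link their own records (persons, datasets, expeditions) to the same sample without any shared database.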

  11. Strategies for medical data extraction and presentation part 2: creating a customizable context and user-specific patient reference database.

    PubMed

    Reiner, Bruce

    2015-06-01

    One of the greatest challenges facing healthcare professionals is the ability to directly and efficiently access relevant data from the patient's healthcare record at the point of care, specific both to the context of the task being performed and to the needs and preferences of the individual end-user. In radiology practice, the relative inefficiency of imaging data organization and manual workflow requirements serves as an impediment to historical imaging data review. At the same time, clinical data retrieval is even more problematic due to the quality and quantity of data recorded at the time of order entry, along with the relative lack of information system integration. One approach to addressing these data deficiencies is to create a multi-disciplinary patient referenceable database consisting of high-priority, actionable data within the cumulative patient healthcare record, in which predefined criteria are used to categorize and classify imaging and clinical data in accordance with anatomy, technology, pathology, and time. The population of this referenceable database can be performed through a combination of manual and automated methods, with an additional step of data verification introduced for quality control. Once created, these referenceable databases can be filtered at the point of care to provide context- and user-specific data for the task being performed and the individual end-user's requirements.
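
    The categorization scheme described (anatomy, technology, pathology, time) lends itself to simple point-of-care filtering. A toy sketch follows; the record fields and values are hypothetical, not the paper's actual schema.

```python
from datetime import date

# Hypothetical pre-categorized records in a "patient referenceable
# database"; each record is classified by anatomy/technology/pathology/time.
records = [
    {"anatomy": "chest", "technology": "CT",  "pathology": "nodule",
     "date": date(2014, 3, 1)},
    {"anatomy": "chest", "technology": "XR",  "pathology": "normal",
     "date": date(2013, 7, 15)},
    {"anatomy": "head",  "technology": "MRI", "pathology": "infarct",
     "date": date(2014, 5, 2)},
]

def filter_records(db, **criteria):
    """Return records matching every supplied category value."""
    return [r for r in db
            if all(r.get(k) == v for k, v in criteria.items())]

# Point-of-care view: prior chest CTs only.
chest_ct = filter_records(records, anatomy="chest", technology="CT")
```

    The point is that once data are classified against predefined criteria up front, context-specific retrieval becomes a cheap filter rather than a manual chart review.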

  12. The HARPS-N archive through a Cassandra, NoSQL database suite?

    NASA Astrophysics Data System (ADS)

    Molinari, Emilio; Guerra, Jose; Harutyunyan, Avet; Lodi, Marcello; Martin, Adrian

    2016-07-01

    The TNG-INAF is developing the science archive for the WEAVE instrument. The underlying architecture of the archive is based on a non-relational database, more precisely on an Apache Cassandra cluster, which uses NoSQL technology. In order to test and validate the use of this architecture, we created a local archive which we populated with all the HARPS-N spectra collected at the TNG since the instrument's start of operations in mid-2012, and developed tools for the analysis of this data set. The HARPS-N data set is two orders of magnitude smaller than WEAVE, but we want to demonstrate the ability to walk through a complete data set and produce scientific output as valuable as that produced by an ordinary pipeline, though without directly accessing the FITS files. The analytics is done with Apache Solr and Spark, and on a relational PostgreSQL database. As an example, we produce observables like metallicity indexes for the targets in the archive and compare the results with those coming from the HARPS-N regular data reduction software. The aim of this experiment is to explore the viability of a high-availability cluster and distributed NoSQL database as a platform for complex scientific analytics on a large data set, which will then be ported to the WEAVE Archive System (WAS) that we are developing for the WEAVE multi-object fiber spectrograph.
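
    The kind of archive-wide analytics described, deriving an observable for every target without opening FITS files, can be caricatured in a few lines. The spectra and the line-depth "index" below are toy stand-ins, not the HARPS-N pipeline's actual metallicity indexes.

```python
# Toy "archive": normalized flux samples per target, as if extracted
# from a database rather than from FITS files. Values are invented.
archive = {
    "star_1": [1.0, 0.95, 0.60, 0.94, 1.0],
    "star_2": [1.0, 0.97, 0.85, 0.96, 1.0],
}

def line_depth_index(flux, core=2):
    """Depth of the line core relative to a continuum normalized to ~1.0."""
    return 1.0 - flux[core]

# Walk the complete data set and compute the observable for each target.
indexes = {target: line_depth_index(flux) for target, flux in archive.items()}
```

    The design point being tested in the abstract is exactly this pattern at scale: a full pass over every record in the archive, executed on the database cluster instead of on files.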

  13. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes.

    PubMed

    Santos, Alberto; Wernersson, Rasmus; Jensen, Lars Juhl

    2015-01-01

    The eukaryotic cell division cycle is a highly regulated process that consists of a complex series of events and involves thousands of proteins. Researchers have studied the regulation of the cell cycle in several organisms, employing a wide range of high-throughput technologies, such as microarray-based mRNA expression profiling and quantitative proteomics. Due to its complexity, the cell cycle can also fail or otherwise change in many different ways if important genes are knocked out, which has been studied in several microscopy-based knockdown screens. The data from these many large-scale efforts are not easily accessed, analyzed and combined due to their inherent heterogeneity. To address this, we have created Cyclebase--available at http://www.cyclebase.org--an online database that allows users to easily visualize and download results from genome-wide cell-cycle-related experiments. In Cyclebase version 3.0, we have updated the content of the database to reflect changes to genome annotation, added new mRNA and protein expression data, and integrated cell-cycle phenotype information from high-content screens and model-organism databases. The new version of Cyclebase also features a new web interface, designed around an overview figure that summarizes all the cell-cycle-related data for a gene. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Expression and Organization of Geographic Spatial Relations Based on Topic Maps

    NASA Astrophysics Data System (ADS)

    Liang, H. J.; Wang, H.; Cui, T. J.; Guo, J. F.

    2017-09-01

    Spatial relations are an important component of Geographic Information Science and spatial databases. There has been a great deal of research on spatial relations, and many different spatial relations have been proposed. The relationships among these spatial relations, such as hierarchy, are complex, which brings difficulties to their application and teaching. This paper summarizes some common spatial relations; extracts the topic types, association types, and resource types of these spatial relations using Topic Maps technology; and builds the various relationships among them. Finally, this paper uses Java and Ontopia to build a topic map of these common spatial relations, forming a complex knowledge network of spatial relations and realizing effective management and retrieval of spatial relations.
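
    The core Topic Maps idea here, spatial relations as topics connected by typed associations, can be sketched without Ontopia. The topic and association names below are illustrative, not the paper's actual ontology.

```python
# Minimal in-memory stand-in for a topic map: spatial relations are
# topics, and typed associations link them (here, a subtype hierarchy).
topics = {"spatial_relation", "topological_relation", "metric_relation",
          "direction_relation"}

# Each association: (association_type, source_topic, target_topic).
associations = [
    ("subtype_of", "topological_relation", "spatial_relation"),
    ("subtype_of", "metric_relation", "spatial_relation"),
    ("subtype_of", "direction_relation", "spatial_relation"),
]

def subtypes_of(topic):
    """Retrieve direct subtypes of a topic from the association list."""
    return {src for (atype, src, dst) in associations
            if atype == "subtype_of" and dst == topic}
```

    Retrieval over the knowledge network then becomes traversal of typed associations, which is what makes the hierarchy of relations manageable and teachable.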

  15. Computer Security Products Technology Overview

    DTIC Science & Technology

    1988-10-01

    3. DATABASE MANAGEMENT SYSTEMS ... Definition ... this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, guards and gateways, and ... provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a ...

  16. Database Software Selection for the Egyptian National STI Network.

    ERIC Educational Resources Information Center

    Slamecka, Vladimir

    The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…

  17. 76 FR 6789 - Unlicensed Operation in the TV Broadcast Bands

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-08

    ...., Spectrum Bridge Inc., Telcordia Technologies, and WSdb LLC--as TV bands device database administrators. The TV bands databases will be used by fixed and personal portable unlicensed devices to identify unused... administrators to develop the databases that are necessary to enable the introduction of this new class of...

  18. Evolution of grid-wide access to database resident information in ATLAS using Frontier

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Bujor, F.; de Stefano, J.; Dewhurst, A. L.; Dykstra, D.; Front, D.; Gallas, E.; Gamboa, C. F.; Luehring, F.; Walker, R.

    2012-12-01

    The ATLAS experiment deployed Frontier technology worldwide during the initial year of LHC collision data taking to enable user analysis jobs running on the Worldwide LHC Computing Grid to access database resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has allowed us a deeper understanding of problematic queries and understanding of use cases. Use of the system has grown beyond user analysis and subsystem specific tasks such as calibration and alignment, extending into production processing areas, such as initial reconstruction and trigger reprocessing. With a more robust and tuned system, we are better equipped to satisfy the still growing number of diverse clients and the demands of increasingly sophisticated processing and analysis.
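
    Frontier's central trick is interposing HTTP caches between grid jobs and the database, so repeated identical queries never reach the back-end. A greatly simplified sketch of that caching idea follows; the query text and payload are invented.

```python
# Toy back-end "database" keyed by query text; values are invented.
db = {"SELECT run_number FROM conditions": [177531, 177682]}
cache = {}
db_hits = 0

def query(sql):
    """Serve a query from cache when possible, else from the database."""
    global db_hits
    if sql not in cache:
        db_hits += 1              # only a cache miss reaches the database
        cache[sql] = db[sql]
    return cache[sql]

first = query("SELECT run_number FROM conditions")
second = query("SELECT run_number FROM conditions")  # served from cache
```

    In the real deployment the cache is an HTTP proxy (squid) shared by many jobs, which is why the placement of caches and launchpads discussed above matters so much for performance.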

  19. Geodemographic segmentation systems for screening health data.

    PubMed Central

    Openshaw, S; Blake, M

    1995-01-01

    AIM--To describe how geodemographic segmentation systems might be useful as a quick and easy way of exploring postcoded health databases for potentially interesting patterns related to deprivation and other socioeconomic characteristics. DESIGN AND SETTING--This is demonstrated using GB Profiles, a freely available geodemographic classification system developed at Leeds University. It is used here to screen a database of colorectal cancer registrations as a first step in the analysis of those data. RESULTS AND CONCLUSION--Conventional geodemographics is a fairly simple technology, and a number of outstanding methodological problems are identified. A solution to some of these problems is illustrated, first by using neural net based classifiers and then by reference to a more sophisticated geodemographic approach via a data-optimal segmentation technique. PMID:8594132
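
    The screening step described, comparing observed cases in each geodemographic cluster with the count its population share predicts, can be sketched as follows; all cluster names and numbers are fabricated for illustration.

```python
# Hypothetical population and case counts per geodemographic cluster.
population = {"affluent_suburbs": 40000, "inner_city": 25000,
              "rural": 35000}
cases = {"affluent_suburbs": 30, "inner_city": 45, "rural": 25}

total_pop = sum(population.values())
total_cases = sum(cases.values())

def relative_incidence(cluster):
    """Observed/expected case ratio for one geodemographic cluster."""
    expected = total_cases * population[cluster] / total_pop
    return cases[cluster] / expected

# Flag clusters whose incidence exceeds expectation by >20% for follow-up.
flagged = {c for c in cases if relative_incidence(c) > 1.2}
```

    As the abstract notes, this is deliberately crude screening, a first step to surface clusters worth proper epidemiological analysis, not a substitute for it.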

  20. MySQL/PHP web database applications for IPAC proposal submission

    NASA Astrophysics Data System (ADS)

    Crane, Megan K.; Storrie-Lombardi, Lisa J.; Silbermann, Nancy A.; Rebull, Luisa M.

    2008-07-01

    The Infrared Processing and Analysis Center (IPAC) is NASA's multi-mission center of expertise for long-wavelength astrophysics. Proposals for various IPAC missions and programs are ingested via MySQL/PHP web database applications. Proposers use web forms to enter coversheet information and upload PDF files related to the proposal. Upon proposal submission, a unique directory is created on the webserver into which all of the uploaded files are placed. The coversheet information is converted into a PDF file using a PHP extension called FPDF. The files are concatenated into one PDF file using the command-line tool pdftk and then forwarded to the review committee. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
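
    The ingestion flow above (a unique per-submission directory, uploaded files, plus a generated coversheet) can be sketched with the Python standard library; real PDF generation with FPDF and concatenation with pdftk are replaced here by placeholder writes, and all names are hypothetical.

```python
import tempfile
import uuid
from pathlib import Path

def ingest_proposal(base_dir, coversheet_text, uploads):
    """Create a unique submission directory and populate it with files."""
    submission_dir = Path(base_dir) / uuid.uuid4().hex
    submission_dir.mkdir()
    # Placeholder for FPDF coversheet generation.
    (submission_dir / "coversheet.pdf").write_text(coversheet_text)
    # Place each uploaded file into the submission directory.
    for name, content in uploads.items():
        (submission_dir / name).write_text(content)
    return submission_dir

with tempfile.TemporaryDirectory() as tmp:
    sub = ingest_proposal(tmp, "PI: Example", {"science.pdf": "..."})
    files = sorted(p.name for p in sub.iterdir())
```

    Keeping each submission in its own uniquely named directory is what lets the server forward a single concatenated package to the review committee without collisions between proposers.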

  1. MOPED 2.5—An Integrated Multi-Omics Resource: Multi-Omics Profiling Expression Database Now Includes Transcriptomics Data

    PubMed Central

    Montague, Elizabeth; Stanberry, Larissa; Higdon, Roger; Janko, Imre; Lee, Elaine; Anderson, Nathaniel; Choiniere, John; Stewart, Elizabeth; Yandl, Gregory; Broomall, William; Kolker, Natali

    2014-01-01

    Multi-omics data-driven scientific discovery crucially rests on high-throughput technologies and data sharing. Currently, data are scattered across single omics repositories, stored in varying raw and processed formats, and are often accompanied by limited or no metadata. The Multi-Omics Profiling Expression Database (MOPED, http://moped.proteinspire.org) version 2.5 is a freely accessible multi-omics expression database. Continual improvement and expansion of MOPED is driven by feedback from the Life Sciences Community. In order to meet the emergent need for an integrated multi-omics data resource, MOPED 2.5 now includes gene relative expression data in addition to protein absolute and relative expression data from over 250 large-scale experiments. To facilitate accurate integration of experiments and increase reproducibility, MOPED provides extensive metadata through the Data-Enabled Life Sciences Alliance (DELSA Global, http://delsaglobal.org) metadata checklist. MOPED 2.5 has greatly increased the number of proteomics absolute and relative expression records to over 500,000, in addition to adding more than four million transcriptomics relative expression records. MOPED has an intuitive user interface with tabs for querying different types of omics expression data and new tools for data visualization. Summary information including expression data, pathway mappings, and direct connection between proteins and genes can be viewed on Protein and Gene Details pages. These connections in MOPED provide a context for multi-omics expression data exploration. Researchers are encouraged to submit omics data which will be consistently processed into expression summaries. MOPED as a multi-omics data resource is a pivotal public database, interdisciplinary knowledge resource, and platform for multi-omics understanding. PMID:24910945

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trentadue, R.; Clemencic, M.; Dykstra, D.

    The LCG Persistency Framework consists of three software packages (CORAL, COOL and POOL) that address the data access requirements of the LHC experiments in several different areas. The project is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that are using some or all of the Persistency Framework components to access their data. POOL is a hybrid technology store for C++ objects, using a mixture of streaming and relational technologies to implement both object persistency and object metadata catalogs and collections. CORAL is an abstraction layer with an SQL-free API for accessing data stored using relational database technologies. COOL provides specific software components and tools for the handling of the time variation and versioning of the experiment conditions data. This presentation reports on the status and outlook in each of the three sub-projects at the time of the CHEP2012 conference, reviewing the usage of each package in the three LHC experiments.
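
    The idea behind an SQL-free abstraction layer such as CORAL can be sketched as follows. This is a hypothetical Python/sqlite3 illustration of the pattern only (CORAL itself is a C++ library, and the class, table, and method names here are invented): callers compose queries through methods rather than writing backend-specific SQL.

```python
import sqlite3

class Table:
    """Minimal SQL-free query interface over a relational backend.

    Hypothetical sketch of the abstraction-layer pattern: the caller
    never writes SQL; the layer generates it for the chosen backend.
    """
    def __init__(self, conn, name):
        self.conn, self.name = conn, name
        self._where, self._params = [], []

    def filter(self, column, value):
        # Accumulate parameterized conditions; chainable.
        self._where.append(f"{column} = ?")
        self._params.append(value)
        return self

    def fetch(self, *columns):
        # Generate and run the SELECT on the caller's behalf.
        sql = f"SELECT {', '.join(columns)} FROM {self.name}"
        if self._where:
            sql += " WHERE " + " AND ".join(self._where)
        return self.conn.execute(sql, self._params).fetchall()

# Invented example data: per-run experiment conditions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conditions (run INTEGER, tag TEXT, value REAL)")
conn.executemany("INSERT INTO conditions VALUES (?, ?, ?)",
                 [(1, "hv", 1.2), (2, "hv", 1.3), (2, "temp", 293.0)])

rows = Table(conn, "conditions").filter("run", 2).fetch("tag", "value")
print(rows)  # conditions recorded for run 2
```

    Because the caller only sees `filter`/`fetch`, the same code could in principle run against any relational backend the layer supports.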

  3. JPEG2000 and dissemination of cultural heritage over the Internet.

    PubMed

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

    By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases and by exploiting the universality of the Internet we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard JPEG2000 in managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000 compressed image databases, suitable for the effective dissemination of cultural heritage.

  4. Trends in IT Innovation to Build a Next Generation Bioinformatics Solution to Manage and Analyse Biological Big Data Produced by NGS Technologies.

    PubMed

    de Brevern, Alexandre G; Meyniel, Jean-Philippe; Fairhead, Cécile; Neuvéglise, Cécile; Malpertuy, Alain

    2015-01-01

    Sequencing the human genome began in 1994, and 10 years of work were necessary in order to provide a nearly complete sequence. Nowadays, NGS technologies allow sequencing of a whole human genome in a few days. This deluge of data challenges scientists in many ways, as they are faced with data management issues and analysis and visualization drawbacks due to the limitations of current bioinformatics tools. In this paper, we describe how the NGS Big Data revolution changes the way of managing and analysing data. We present how biologists are confronted with an abundance of methods, tools, and data formats. To overcome these problems, we focus on Big Data Information Technology innovations from the web and business intelligence. We underline the interest of NoSQL databases, which are much more efficient than relational databases. Since Big Data leads to the loss of interactivity with data during analysis due to high processing time, we describe solutions from Business Intelligence that allow one to regain interactivity whatever the volume of data. We illustrate this point with a focus on the Amadea platform. Finally, we discuss visualization challenges posed by Big Data and present the latest innovations with JavaScript graphic libraries.

  5. Trends in IT Innovation to Build a Next Generation Bioinformatics Solution to Manage and Analyse Biological Big Data Produced by NGS Technologies

    PubMed Central

    de Brevern, Alexandre G.; Meyniel, Jean-Philippe; Fairhead, Cécile; Neuvéglise, Cécile; Malpertuy, Alain

    2015-01-01

    Sequencing the human genome began in 1994, and 10 years of work were necessary in order to provide a nearly complete sequence. Nowadays, NGS technologies allow sequencing of a whole human genome in a few days. This deluge of data challenges scientists in many ways, as they are faced with data management issues and analysis and visualization drawbacks due to the limitations of current bioinformatics tools. In this paper, we describe how the NGS Big Data revolution changes the way of managing and analysing data. We present how biologists are confronted with an abundance of methods, tools, and data formats. To overcome these problems, we focus on Big Data Information Technology innovations from the web and business intelligence. We underline the interest of NoSQL databases, which are much more efficient than relational databases. Since Big Data leads to the loss of interactivity with data during analysis due to high processing time, we describe solutions from Business Intelligence that allow one to regain interactivity whatever the volume of data. We illustrate this point with a focus on the Amadea platform. Finally, we discuss visualization challenges posed by Big Data and present the latest innovations with JavaScript graphic libraries. PMID:26125026
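
    The relational-versus-document trade-off these abstracts touch on can be illustrated with a toy comparison: a normalized relational layout needs a join to reassemble an entity, while a document layout (as in MongoDB) returns it in one lookup. The schema and data below are hypothetical, with sqlite3 standing in for the relational side and a plain dict for the document side:

```python
import sqlite3

# Relational layout: variant annotations normalized into two tables,
# so retrieving a variant with its annotations requires a join.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variants (id INTEGER PRIMARY KEY, rsid TEXT)")
conn.execute("CREATE TABLE annotations (variant_id INTEGER, source TEXT, note TEXT)")
conn.execute("INSERT INTO variants VALUES (1, 'rs123')")
conn.executemany("INSERT INTO annotations VALUES (?, ?, ?)",
                 [(1, "dbSNP", "intronic"), (1, "ClinVar", "benign")])
joined = conn.execute(
    "SELECT v.rsid, a.source, a.note FROM variants v "
    "JOIN annotations a ON a.variant_id = v.id").fetchall()

# Document layout: the same variant is one nested record,
# read back in a single key lookup with no join.
documents = {
    "rs123": {"annotations": [{"source": "dbSNP", "note": "intronic"},
                              {"source": "ClinVar", "note": "benign"}]}
}
doc = documents["rs123"]

# Both layouts recover the same two annotations.
assert len(joined) == len(doc["annotations"]) == 2
```

    Avoiding the join is one reason document stores can be faster for read patterns that always fetch a whole entity; the relational layout wins when the same annotations must be queried across many entities.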

  6. The Current Status of Usability Studies of Information Technologies in China: A Systematic Study

    PubMed Central

    Lei, Jianbo; Xu, Lufei; Meng, Qun; Zhang, Jiajie; Gong, Yang

    2014-01-01

    Objectives. To systematically review and analyze the current status and characteristics of usability studies in China in the field of information technology in general and in the field of healthcare in particular. Methods. We performed a quantitative literature analysis in three major Chinese academic databases and one English language database using Chinese search terms equivalent to the concept of usability. Results. Six hundred forty-seven publications were selected for analysis. We found that in China the literature on usability in the field of information technology began in 1994 and increased thereafter. The usability definitions from ISO 9241-11:1998 and Nielsen (1993) have been widely recognized and cited. Authors who have published several publications are rare. Fourteen journals have a publishing rate over 1%. Only nine publications about HIT were identified. Discussions. China's usability research started relatively late. There is a lack of organized research teams and dedicated usability journals. High-impact theoretical studies are scarce. On the application side, no original and systematic research frameworks have been developed. The understanding and definition of usability is not well synchronized with international norms. Besides, usability research in HIT is rare. Conclusions. More human and material resources need to be invested in China's usability research, particularly in HIT. PMID:25050362

  7. An innovative approach to capability-based emergency operations planning

    PubMed Central

    Keim, Mark E

    2013-01-01

    This paper describes the innovative use of information technology for assisting disaster planners with an easily accessible method for writing and improving evidence-based emergency operations plans. This process is used to identify all key objectives of the emergency response according to capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow for ease of access and enhanced functionality to search, sort and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology. PMID:28228987

  8. An innovative approach to capability-based emergency operations planning.

    PubMed

    Keim, Mark E

    2013-01-01

    This paper describes the innovative use of information technology for assisting disaster planners with an easily accessible method for writing and improving evidence-based emergency operations plans. This process is used to identify all key objectives of the emergency response according to capabilities of the institution, community or society. The approach then uses a standardized, objective-based format, along with a consensus-based method for drafting capability-based operational-level plans. This information is then integrated within a relational database to allow for ease of access and enhanced functionality to search, sort and filter an emergency operations plan according to user need and technological capacity. This integrated approach is offered as an effective option for integrating best practices of planning with the efficiency, scalability and flexibility of modern information and communication technology.
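
    Storing objective-based plan elements relationally, as the paper describes, might look like the following minimal sketch. The table, columns, and sample objectives are all hypothetical, not the paper's actual schema:

```python
import sqlite3

# Hypothetical schema: one row per plan objective, keyed by the
# response capability it supports, so plans can be searched,
# sorted, and filtered.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE plan_objectives (
    id INTEGER PRIMARY KEY,
    capability TEXT,     -- e.g. 'evacuation', 'mass care'
    objective TEXT,
    priority INTEGER)""")
conn.executemany("INSERT INTO plan_objectives VALUES (?, ?, ?, ?)", [
    (1, "evacuation", "Identify transport resources", 1),
    (2, "mass care", "Open shelters within 6 hours", 2),
    (3, "evacuation", "Notify affected population", 1)])

# Filter by capability, sort by priority: the search/sort/filter
# functionality the relational layout enables.
rows = conn.execute(
    "SELECT objective FROM plan_objectives "
    "WHERE capability = ? ORDER BY priority, id", ("evacuation",)).fetchall()
print([r[0] for r in rows])
```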

  9. Fifth NASA Goddard Conference on Mass Storage Systems and Technologies, Volume 1

    NASA Technical Reports Server (NTRS)

    Kobler, Benjamin (Editor); Hariharan, P. C. (Editor)

    1996-01-01

    This document contains copies of those technical papers received in time for publication prior to the Fifth Goddard Conference on Mass Storage Systems and Technologies. As one of an ongoing series, this conference continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems involved. This year's discussion topics include storage architecture, database management, data distribution, file system performance and modeling, and optical recording technology. There will also be a paper on Application Programming Interfaces (API) for a Physical Volume Repository (PVR) defined in Version 5 of the Institute of Electrical and Electronics Engineers (IEEE) Reference Model (RM). In addition, there are papers on specific archives and storage products.

  10. [Technology to improve adherence in community pharmacy: a literature review].

    PubMed

    Staessen, J

    2015-03-01

    Drug-related problems are very common and they need specific attention. Improper use of medication as well as poor adherence leads to side effects, interactions, increased healthcare costs, and more. What technologies can be used in community pharmacies to improve drug adherence? Articles were found in the scientific databases PubMed, Embase and CINAHL using a fixed search strategy. In this review 21 studies were included. The different technologies were compared with each other. Reminders using SMS or smartphone were the most effective. There are already plenty of reminder systems (SMS, email, internet, smartphone) and practical tools (medication dispensers, MEMS) available to community pharmacies. A major hurdle is the lack of infrastructure. Investment is needed in systems in which patients are confronted with their own drug use.

  11. Opportunities for Engaging Low-Income, Vulnerable Populations in Health Care: A Systematic Review of Homeless Persons’ Access to and Use of Information Technologies

    PubMed Central

    Li, Alice E.; Hogan, Timothy P.

    2013-01-01

    We systematically reviewed the health and social science literature on access to and use of information technologies by homeless persons by searching 5 bibliographic databases. Articles were included if they were in English, represented original research, appeared in peer-reviewed publications, and addressed our research questions. Sixteen articles met our inclusion criteria. We found that mobile phone ownership ranged from 44% to 62%; computer ownership, from 24% to 40%; computer access and use, from 47% to 55%; and Internet use, from 19% to 84%. Homeless persons used technologies for a range of purposes, some of which were health related. Many homeless persons had access to information technologies, suggesting possible health benefits to developing programs that link homeless persons to health care through mobile phones and the Internet. PMID:24148036

  12. Development blocks in innovation networks: The Swedish manufacturing industry, 1970-2007.

    PubMed

    Taalbi, Josef

    2017-01-01

    The notion of development blocks (Dahmén, 1950, 1991) suggests the co-evolution of technologies and industries through complementarities and the overcoming of imbalances. This study proposes and applies a methodology to analyse development blocks empirically. To assess the extent and character of innovational interdependencies between industries the study combines analysis of innovation biographies and statistical network analysis. This is made possible by using data from a newly constructed innovation output database for Sweden. The study finds ten communities of closely related industries in which innovation activity has been prompted by the emergence of technological imbalances or by the exploitation of new technological opportunities. The communities found in the Swedish network of innovation are shown to be stable over time and often characterized by strong user-supplier interdependencies. These findings serve to stress how historical imbalances and opportunities are key to understanding the dynamics of the long-run development of industries and new technologies.

  13. JNDMS Task Authorization 2 Report

    DTIC Science & Technology

    2013-10-01

    ...uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take... [Acronym glossary excerpt:] ITI: Information Technology Infrastructure. J2EE: Java 2 Enterprise Edition. JAR: Java Archive, an archive file format defined by Java ... standards. JDBC: Java Database Connectivity. JDW: JNDMS Data Warehouse. JNDMS: Joint Network and Defence Management System; Joint Network Defence and...

  14. Novel strategies to mine alcoholism-related haplotypes and genes by combining existing knowledge framework.

    PubMed

    Zhang, RuiJie; Li, Xia; Jiang, YongShuai; Liu, GuiYou; Li, ChuanXing; Zhang, Fan; Xiao, Yun; Gong, BinSheng

    2009-02-01

    High-throughput single nucleotide polymorphism detection technology and the existing knowledge provide strong support for mining disease-related haplotypes and genes. In this study, first, we apply four kinds of haplotype identification methods (Confidence Intervals, Four Gamete Tests, Solid Spine of LD and a fusing method of haplotype blocks) to high-throughput SNP genotype data to identify blocks, then use cluster analysis to verify the effectiveness of the four methods, and select the alcoholism-related SNP haplotypes through risk analysis. Second, we establish a mapping from haplotypes to alcoholism-related genes. Third, we query the NCBI SNP and gene databases to locate the blocks and identify the candidate genes. In the end, we make gene function annotations using the KEGG, Biocarta, and GO databases. We find 159 haplotype blocks on chromosomes 1-22 that most probably relate to alcoholism, including 227 haplotypes, of which 102 SNP haplotypes may increase the risk of alcoholism. We get 121 alcoholism-related genes and verify their reliability by functional annotation. In a word, we not only can handle the SNP data easily, but also can locate the disease-related genes precisely by combining our novel strategies of mining alcoholism-related haplotypes and genes with the existing knowledge framework.

  15. Virtual Interactive Musculoskeletal System (VIMS) in orthopaedic research, education and clinical patient care.

    PubMed

    Chao, Edmund Y S; Armiger, Robert S; Yoshida, Hiroaki; Lim, Jonathan; Haraguchi, Naoki

    2007-03-08

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the "Virtual Human" reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system, model library and database will impact on orthopaedic education, basic research, device development and application, and clinical patient care related to musculoskeletal joint system reconstruction, trauma management, and rehabilitation.

  16. Virtual interactive musculoskeletal system (VIMS) in orthopaedic research, education and clinical patient care

    PubMed Central

    Chao, Edmund YS; Armiger, Robert S; Yoshida, Hiroaki; Lim, Jonathan; Haraguchi, Naoki

    2007-01-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the "Virtual Human" reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system, model library and database will impact on orthopaedic education, basic research, device development and application, and clinical patient care related to musculoskeletal joint system reconstruction, trauma management, and rehabilitation. PMID:17343764

  17. Construction of Pará rubber tree genome and multi-transcriptome database accelerates rubber researches.

    PubMed

    Makita, Yuko; Kawashima, Mika; Lau, Nyok Sean; Othman, Ahmad Sofiman; Matsui, Minami

    2018-01-19

    Natural rubber is an economically important material. Currently the Pará rubber tree, Hevea brasiliensis, is the main commercial source. Little is known about rubber biosynthesis at the molecular level. Next-generation sequencing (NGS) technologies brought draft genomes of three rubber cultivars and a variety of RNA sequencing (RNA-seq) data. However, no current genome or transcriptome databases (DB) are organized by gene. A gene-oriented database is a valuable support for rubber research. Based on our original draft genome sequence of H. brasiliensis RRIM600, we constructed a rubber tree genome and transcriptome DB. Our DB provides genome information including gene functional annotations and multi-transcriptome data of RNA-seq, full-length cDNAs including PacBio Isoform sequencing (Iso-Seq), ESTs and genome wide transcription start sites (TSSs) derived from CAGE technology. Using our original and publicly available RNA-seq data, we calculated co-expressed genes for identifying functionally related gene sets and/or genes regulated by the same transcription factor (TF). Users can access multi-transcriptome data through both a gene-oriented web page and a genome browser. For the gene searching system, we provide keyword search, sequence homology search and gene expression search; users can also select their expression threshold easily. The rubber genome and transcriptome DB provides the rubber tree genome sequence and multi-transcriptomics data. This DB is useful for comprehensive understanding of the rubber transcriptome. This will assist both industrial and academic researchers studying rubber and economically important close relatives such as R. communis, M. esculenta and J. curcas. The Rubber Transcriptome DB release 2017.03 is accessible at http://matsui-lab.riken.jp/rubber/.
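
    Co-expression calculations of the kind mentioned above are typically pairwise correlations over expression profiles. A minimal sketch using Pearson correlation follows; the gene names, expression values, and 0.9 threshold are invented for illustration, and the database's actual pipeline is not specified in the abstract:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression profiles across four RNA-seq samples.
profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [2.1, 4.2, 5.9, 8.1],   # tracks geneA closely
    "geneC": [5.0, 1.0, 4.0, 2.0],   # unrelated
}

# Genes co-expressed with geneA above a chosen threshold (0.9 here).
coexpressed = [g for g in ("geneB", "geneC")
               if pearson(profiles["geneA"], profiles[g]) > 0.9]
print(coexpressed)  # -> ['geneB']
```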

  18. The Role of Information Provision in Economic Evaluations of Newborn Bloodspot Screening: A Systematic Review.

    PubMed

    Wright, Stuart J; Jones, Cheryl; Payne, Katherine; Dharni, Nimarta; Ulph, Fiona

    2015-12-01

    The extent to which economic evaluations have included the healthcare resource and outcome-related implications of information provision in national newborn bloodspot screening programmes (NBSPs) is not currently known. To identify if, and how, information provision has been incorporated into published economic evaluations of NBSPs. A systematic review of economic evaluations of NBSPs (up to November 2014) was conducted. Three electronic databases were searched (Ovid: Medline, Embase, CINAHL) using an electronic search strategy combining a published economic search filter with terms related to national NBSPs and screening-related technologies. These electronic searches were supplemented by searching the NHS Economic Evaluations Database (NHS EED) and hand-searching identified study reference lists. The results were tabulated and summarised as part of a narrative synthesis. A total of 27 economic evaluations [screening-related technologies (n = 11) and NBSPs (n = 16)] were identified. The majority of economic evaluations did not quantify the impact of information provision in terms of healthcare costs or outcomes. Five studies did include an estimate of the time cost associated with information provision. Four studies included a value to reflect the disutility associated with parental anxiety caused by false-positive results, which was used as a proxy for the impact of imperfect information. A limited evidence base currently quantifies the impact of information provision on healthcare costs and on the users of NBSPs: the parents of newborns. We suggest that economic evaluations of expanded NBSPs need to take account of information provision, otherwise the impact on healthcare costs and the outcomes for newborns and their parents may be underestimated.

  19. Bibliometric analysis of nutrition and dietetics research activity in Arab countries using ISI Web of Science database.

    PubMed

    Sweileh, Waleed M; Al-Jabi, Samah W; Sawalha, Ansam F; Zyoud, Sa'ed H

    2014-01-01

    Reducing nutrition-related health problems in Arab countries requires an understanding of the performance of Arab countries in the field of nutrition and dietetics research. Assessment of research activity from a particular country or region could be achieved through bibliometric analysis. This study was carried out to investigate research activity in "nutrition and dietetics" in Arab countries. Original and review articles published from Arab countries in the "nutrition and dietetics" Web of Science category up until 2012 were retrieved and analyzed using the ISI Web of Science database. The total number of documents published in the "nutrition and dietetics" category from Arab countries was 2062. This constitutes 1% of worldwide research activity in the field. Annual research productivity showed a significant increase after 2005. Approximately 60% of published documents originated from three Arab countries, particularly Egypt, Kingdom of Saudi Arabia, and Tunisia. However, Kuwait has the highest research productivity per million inhabitants. Main research areas of published documents were in "Food Science/Technology" and "Chemistry", which constituted 75% of published documents compared with 25% for worldwide documents in nutrition and dietetics. A total of 329 (15.96%) nutrition-related diabetes or obesity or cancer documents were published from Arab countries compared with 21% for worldwide published documents. Interest in nutrition and dietetics research is relatively recent in Arab countries. Focus of nutrition research is mainly toward food technology and chemistry with lesser activity toward nutrition-related health research. International cooperation in nutrition research will definitely help Arab researchers in implementing nutrition research that will lead to better national policies regarding nutrition.

  20. Discovering Knowledge from AIS Database for Application in VTS

    NASA Astrophysics Data System (ADS)

    Tsou, Ming-Cheng

    The widespread use of the Automatic Identification System (AIS) has had a significant impact on maritime technology. AIS enables the Vessel Traffic Service (VTS) not only to offer commonly known functions such as identification, tracking and monitoring of vessels, but also to provide rich real-time information that is useful for marine traffic investigation, statistical analysis and theoretical research. However, due to the rapid accumulation of AIS observation data, the VTS platform is often unable quickly and effectively to absorb and analyze it. Traditional observation and analysis methods are becoming less suitable for the modern AIS generation of VTS. In view of this, we applied the same data mining technique used for business intelligence discovery (in Customer Relation Management (CRM) business marketing) to the analysis of AIS observation data. This recasts the marine traffic problem as a business-marketing problem and integrates technologies such as Geographic Information Systems (GIS), database management systems, data warehousing and data mining to facilitate the discovery of hidden and valuable information in a huge amount of observation data. Consequently, this provides the marine traffic managers with a useful strategic planning resource.
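
    One elementary step in mining accumulated AIS observation data is aggregating position reports into traffic statistics before any deeper analysis. A toy sketch follows, with invented vessel identifiers and grid cells (not the paper's actual method or data):

```python
from collections import Counter

# Hypothetical, simplified AIS position reports: (vessel id, grid cell).
reports = [
    ("MMSI-1", "A1"), ("MMSI-1", "A2"), ("MMSI-2", "A1"),
    ("MMSI-3", "A1"), ("MMSI-2", "A2"), ("MMSI-3", "B1"),
]

# Traffic density per grid cell: the kind of aggregate a VTS
# analysis or data-warehouse rollup would start from.
density = Counter(cell for _, cell in reports)
busiest = density.most_common(1)[0]
print(busiest)  # -> ('A1', 3)
```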

  1. Bibliography on propulsion airframe integration technologies for high-speed civil transport applications, 1980-1991

    NASA Technical Reports Server (NTRS)

    Anderson, David J.; Mizukami, Masashi

    1993-01-01

    NASA has initiated the High Speed Research (HSR) program with the goal to develop technologies for a new generation, economically viable, environmentally acceptable, supersonic transport (SST) called the High Speed Civil Transport (HSCT). A significant part of this effort is expected to be in multidisciplinary systems integration, such as in propulsion airframe integration (PAI). In order to assimilate the knowledge database on PAI for SST type aircraft, a bibliography on this subject was compiled. The bibliography contains over 1200 entries, full abstracts, and indexes. Related topics are also covered, such as the following: engine inlets, engine cycles, nozzles, existing supersonic cruise aircraft, noise issues, computational fluid dynamics, aerodynamics, and external interference. All identified documents from 1980 through early 1991 are included; this covers the latter part of the NASA Supersonic Cruise Research (SCR) program and the beginnings of the HSR program. In addition, some pre-1980 documents of significant merit or reference value are also included. The references were retrieved via a computerized literature search using the NASA RECON database system.

  2. The Geant4 physics validation repository

    DOE PAGES

    Wenzel, H.; Yarba, J.; Dotti, A.

    2015-12-23

    The Geant4 collaboration regularly performs validation and regression tests. The results are stored in a central repository and can be easily accessed via a web application. In this article we describe the Geant4 physics validation repository, which consists of a relational database storing experimental data and Geant4 test results, a Java API and a web application. Lastly, the functionality of these components and the technology choices we made are also described.
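
    A repository that pairs experimental reference data with simulation results can be sketched as a single relational table queried per observable. The schema, version labels, and values below are hypothetical, not the repository's actual design:

```python
import sqlite3

# Hypothetical schema: each row is one (x, y) data point, tagged with
# the observable it measures and its source ('experiment' or a
# simulation version), so regressions can be queried per observable.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    observable TEXT, source TEXT, x REAL, y REAL)""")
conn.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", [
    ("cross_section", "experiment", 1.0, 0.52),
    ("cross_section", "geant4-10.1", 1.0, 0.50),
    ("cross_section", "geant4-10.2", 1.0, 0.51)])

# Compare every simulated value against the experimental reference
# at the same x: a self-join, the basic validation query.
rows = conn.execute("""
    SELECT s.source, s.y - e.y
    FROM results s JOIN results e
      ON s.observable = e.observable AND s.x = e.x
    WHERE e.source = 'experiment' AND s.source != 'experiment'
    ORDER BY s.source""").fetchall()
print(rows)  # per-version deviation from experiment
```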

  3. Using Eye Trackers for Usability Evaluation of Health Information Technology: A Systematic Literature Review

    PubMed Central

    Yang, Yushi

    2015-01-01

    Background Eye-tracking technology has been used to measure human cognitive processes and has the potential to improve the usability of health information technology (HIT). However, it is still unclear how the eye-tracking method can be integrated with other traditional usability methodologies to achieve its full potential. Objective The objective of this study was to report on HIT evaluation studies that have used eye-tracker technology, and to envision the potential use of eye-tracking technology in future research. Methods We used four reference databases to initially identify 5248 related papers, which resulted in only 9 articles that met our inclusion criteria. Results Eye-tracking technology was useful in finding usability problems in many ways, but is still in its infancy for HIT usability evaluation. Limited types of HITs have been evaluated by eye trackers, and there has been a lack of evaluation research in natural settings. Conclusions More research should be done in natural settings to discover the real contextual-based usability problems of clinical and mobile HITs using eye-tracking technology with more standardized methodologies and guidance. PMID:27026079

  4. Real-Time Payload Control and Monitoring on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)

    1998-01-01

    World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on Java Development Toolkit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS) compatible inference engine provides the back-end intelligent data processing capability, while Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefit.
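
    The event-driven "publish and subscribe" approach described above can be sketched as a minimal topic-based event bus. This is an illustrative Python sketch of the pattern only; the actual system was built in Java (JDK 1.1), and the class names and topic strings here are invented:

```python
class EventBus:
    """Minimal publish/subscribe dispatcher keyed by topic string."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        # Register a callback to be invoked on every publish to topic.
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Deliver payload to all subscribers of this topic, if any.
        for callback in self._subscribers.get(topic, []):
            callback(payload)

bus = EventBus()
received = []
# A display widget subscribes to payload telemetry updates...
bus.subscribe("telemetry/temperature", received.append)
# ...and the downlink processor publishes samples as they arrive.
bus.publish("telemetry/temperature", 21.5)
bus.publish("telemetry/pressure", 101.3)   # no subscriber; dropped
print(received)  # -> [21.5]
```

    Decoupling publishers from subscribers this way lets monitoring widgets and data processors evolve independently, which is the appeal of the pattern for distributed control and monitoring GUIs.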

  5. Organizational issues in the implementation and adoption of health information technology innovations: an interpretative review.

    PubMed

    Cresswell, Kathrin; Sheikh, Aziz

    2013-05-01

    Implementations of health information technologies are notoriously difficult, which is due to a range of inter-related technical, social and organizational factors that need to be considered. In the light of an apparent lack of empirically based integrated accounts surrounding these issues, this interpretative review aims to provide an overview and extract potentially generalizable findings across settings. We conducted a systematic search and critique of the empirical literature published between 1997 and 2010. In doing so, we searched a range of medical databases to identify review papers that related to the implementation and adoption of eHealth applications in organizational settings. We qualitatively synthesized this literature extracting data relating to technologies, contexts, stakeholders, and their inter-relationships. From a total body of 121 systematic reviews, we identified 13 systematic reviews encompassing organizational issues surrounding health information technology implementations. By and large, the evidence indicates that there are a range of technical, social and organizational considerations that need to be deliberated when attempting to ensure that technological innovations are useful for both individuals and organizational processes. However, these dimensions are inter-related, requiring a careful balancing act of strategic implementation decisions in order to ensure that unintended consequences resulting from technology introduction do not pose a threat to patients. Organizational issues surrounding technology implementations in healthcare settings are crucially important, but have as yet not received adequate research attention. This may in part be due to the subjective nature of factors, but also due to a lack of coordinated efforts toward more theoretically-informed work. Our findings may be used as the basis for the development of best practice guidelines in this area. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  6. Global Collaboration Enhances Technology Literacy

    ERIC Educational Resources Information Center

    Cook, Linda A.; Bell, Meredith L.; Nugent, Jill; Smith, Walter S.

    2016-01-01

    Today's learners routinely use technology outside of school to communicate, collaborate, and gather information about the world around them. Classroom learning experiences are relevant when they include communication technologies such as social networking, blogging, and video conferencing, and information technologies such as databases, browsers,…

  7. BNDB - the Biochemical Network Database.

    PubMed

    Küntzer, Jan; Backes, Christina; Blum, Torsten; Gerasch, Andreas; Kaufmann, Michael; Kohlbacher, Oliver; Lenhof, Hans-Peter

    2007-10-02

    Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. We present the Biochemical Network Database (BNDB), a powerful relational database platform, allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.
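    The abstract's central claim is that a relational platform with a common object model lets heterogeneous biochemical facts be combined by ordinary joins. A minimal sketch of that idea follows; the tables and pathway data are invented for illustration and are not BNDB's actual BioCore schema.

```python
# Illustrative sketch (NOT the BioCore schema): store a biochemical
# network as entities plus typed links, so that "one step downstream"
# becomes a join instead of a per-database parser.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE compound (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE reaction (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE participant (      -- links compounds to reactions with a role
    reaction_id INTEGER REFERENCES reaction(id),
    compound_id INTEGER REFERENCES compound(id),
    role TEXT CHECK (role IN ('substrate', 'product'))
);
""")
db.executemany("INSERT INTO compound VALUES (?, ?)",
               [(1, "glucose"), (2, "G6P"), (3, "F6P")])
db.executemany("INSERT INTO reaction VALUES (?, ?)",
               [(10, "hexokinase"), (11, "phosphoglucose isomerase")])
db.executemany("INSERT INTO participant VALUES (?, ?, ?)",
               [(10, 1, "substrate"), (10, 2, "product"),
                (11, 2, "substrate"), (11, 3, "product")])

# Which compounds are one reaction step downstream of glucose?
rows = db.execute("""
SELECT DISTINCT c2.name
FROM participant s JOIN participant p ON s.reaction_id = p.reaction_id
JOIN compound c1 ON c1.id = s.compound_id
JOIN compound c2 ON c2.id = p.compound_id
WHERE c1.name = 'glucose' AND s.role = 'substrate' AND p.role = 'product'
""").fetchall()
print(rows)  # [('G6P',)]
```

    Once many source databases map onto one such model, a single query spans all of them — the "unified access" the abstract describes.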

  8. Database of Industrial Technological Information in Kanagawa : Networks for Technology Activities

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Shindo, Tadashi

    This system is a member-participation database whose premise is that all data in it are open. Aiming at free technological cooperation and exchange among industries, it was constructed by Kanagawa Prefecture in collaboration with enterprises located there. The input data comprise 36 items that technologically characterize each enterprise, such as major products, special and advantageous technologies, technologies sought for cooperation, and facilities and equipment. They are expressed in up to 2,000 characters and written in natural language, including Kanji, except for some coded items. Twenty-four search items can be queried in natural language, so that in addition to interactive search procedures, including menu-driven ones, extensive searching is possible. The information service started in October 1986, covering data from 2,000 enterprises.

  9. Facilitating Collaboration, Knowledge Construction and Communication with Web-Enabled Databases.

    ERIC Educational Resources Information Center

    McNeil, Sara G.; Robin, Bernard R.

    This paper presents an overview of World Wide Web-enabled databases that dynamically generate Web materials and focuses on the use of this technology to support collaboration, knowledge construction, and communication. Database applications have been used in classrooms to support learning activities for over a decade, but, although business and…

  10. A UNIMARC Bibliographic Format Database for ABCD

    ERIC Educational Resources Information Center

    Megnigbeto, Eustache

    2012-01-01

    Purpose: ABCD is a web-based open and free software suite for library management derived from the UNESCO CDS/ISIS software technology. The first version was launched officially in December 2009 with a MARC 21 bibliographic format database. This paper aims to detail the building of the UNIMARC bibliographic format database for ABCD.…

  11. Generation of an Aerothermal Data Base for the X33 Spacecraft

    NASA Technical Reports Server (NTRS)

    Roberts, Cathy; Huynh, Loc

    1998-01-01

    The X-33 experimental program is a cooperative program between industry and NASA, managed by Lockheed-Martin Skunk Works to develop an experimental vehicle to demonstrate new technologies for a single-stage-to-orbit, fully reusable launch vehicle (RLV). One of the new technologies to be demonstrated is an advanced Thermal Protection System (TPS) being designed by BF Goodrich (formerly Rohr, Inc.) with support from NASA. The calculation of an aerothermal database is crucial to identifying the critical design environment data for the TPS. The NASA Ames X-33 team has generated such a database using Computational Fluid Dynamics (CFD) analyses, engineering analysis methods and various programs to compare and interpolate the results from the CFD and the engineering analyses. This database, along with a program used to query the database, is used extensively by several X-33 team members to help them in designing the X-33. This paper will describe the methods used to generate this database, the program used to query the database, and will show some of the aerothermal analysis results for the X-33 aircraft.
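    The database combines CFD and engineering results and is accessed through a query program that interpolates between computed conditions. As a hedged sketch of that kind of lookup, the following interpolates a tabulated quantity on a hypothetical Mach / angle-of-attack grid; all axis and heating values are invented.

```python
# Illustrative sketch: bilinear interpolation of a tabulated aerothermal
# quantity (e.g., a surface heating rate) between database entries on a
# Mach / angle-of-attack grid. Grid values are hypothetical.
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation on a rectilinear grid.
    xs, ys: sorted axis values; table[i][j] is the value at (xs[i], ys[j])."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

mach = [6.0, 8.0]              # hypothetical Mach axis
alpha = [20.0, 40.0]           # hypothetical angle-of-attack axis (deg)
heating = [[10.0, 14.0],       # hypothetical heating rates (W/cm^2)
           [18.0, 26.0]]

print(bilinear(mach, alpha, heating, 7.0, 30.0))  # midpoint -> 17.0
```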

  12. Technology, Educator Intention, and Relationships in Virtual Learning Spaces: A Qualitative Metasynthesis.

    PubMed

    Gdanetz, Lorraine M; Hamer, Mika K; Thomas, Eileen; Tarasenko, Lindsey M; Horton-Deutsch, Sara; Jones, Jacqueline

    2018-04-01

    A main concern that remains with the continued growth of online nursing education programs is the way educator and student relationships can be affected by new technologies. This interpretive study aims to gain an understanding of how technology influences the development of interpersonal relationships between the student and faculty in a virtual learning environment. Using an established structured approach to qualitative metasynthesis, a search was conducted using PubMed, EBSCO, CINAHL, Medline, ProQuest, Ovid Nursing databases, and Google Scholar, focused on caring and relational aspects of online nursing education. Technology alters communication, thereby positioning the intentionality of the educator at the heart of interpersonal relationship development in virtual learning spaces. This interpretive synthesis of prior qualitative research supports the development of a framework for online nursing courses, the need for continuing education of nursing faculty, the value of caring intentions, and enhancement of the educator's technological proficiency. [J Nurs Educ. 2018;57(4):197-202.]. Copyright 2018, SLACK Incorporated.

  13. Internet-based profiler system as integrative framework to support translational research

    PubMed Central

    Kim, Robert; Demichelis, Francesca; Tang, Jeffery; Riva, Alberto; Shen, Ronglai; Gibbs, Doug F; Mahavishno, Vasudeva; Chinnaiyan, Arul M; Rubin, Mark A

    2005-01-01

    Background Translational research requires taking basic science observations and developing them into clinically useful tests and therapeutics. We have developed a process to develop molecular biomarkers for diagnosis and prognosis by integrating tissue microarray (TMA) technology and an internet-database tool, Profiler. TMA technology allows investigators to study hundreds of patient samples on a single glass slide, resulting in the conservation of tissue and the reduction of inter-experimental variability. The Profiler system allows investigators to reliably track, store, and evaluate TMA experiments. Herein we describe the process that has evolved empirically over the past 5 years at two academic institutions. Results The generic design of this system makes it compatible with multiple organ systems (e.g., prostate, breast, lung, renal, and hematopoietic). Studies and folders are restricted to authorized users as required. Over the past 5 years, investigators at 2 academic institutions have scanned 656 TMA experiments and collected 63,311 digital images of these tissue samples. 68 pathologists from 12 major user groups have accessed the system. Two groups directly link clinical data from over 500 patients for immediate access, and the remaining groups choose to maintain clinical and pathology data on separate systems. Profiler currently has 170 K data points such as staining intensity, tumor grade, and nuclear size. Due to the relational database structure, analysis can easily be performed on single or multiple TMA experimental results. The TMA module of Profiler can maintain images acquired from multiple systems. Conclusion We have developed a robust process to develop molecular biomarkers using TMA technology and an internet-based database system to track all steps of this process. This system is extendable to other types of molecular data as separate modules and is freely available to academic institutions for licensing.
PMID:16364175

  14. Internet-based Profiler system as integrative framework to support translational research.

    PubMed

    Kim, Robert; Demichelis, Francesca; Tang, Jeffery; Riva, Alberto; Shen, Ronglai; Gibbs, Doug F; Mahavishno, Vasudeva; Chinnaiyan, Arul M; Rubin, Mark A

    2005-12-19

    Translational research requires taking basic science observations and developing them into clinically useful tests and therapeutics. We have developed a process to develop molecular biomarkers for diagnosis and prognosis by integrating tissue microarray (TMA) technology and an internet-database tool, Profiler. TMA technology allows investigators to study hundreds of patient samples on a single glass slide, resulting in the conservation of tissue and the reduction of inter-experimental variability. The Profiler system allows investigators to reliably track, store, and evaluate TMA experiments. Herein we describe the process that has evolved empirically over the past 5 years at two academic institutions. The generic design of this system makes it compatible with multiple organ systems (e.g., prostate, breast, lung, renal, and hematopoietic). Studies and folders are restricted to authorized users as required. Over the past 5 years, investigators at 2 academic institutions have scanned 656 TMA experiments and collected 63,311 digital images of these tissue samples. 68 pathologists from 12 major user groups have accessed the system. Two groups directly link clinical data from over 500 patients for immediate access, and the remaining groups choose to maintain clinical and pathology data on separate systems. Profiler currently has 170 K data points such as staining intensity, tumor grade, and nuclear size. Due to the relational database structure, analysis can easily be performed on single or multiple TMA experimental results. The TMA module of Profiler can maintain images acquired from multiple systems. We have developed a robust process to develop molecular biomarkers using TMA technology and an internet-based database system to track all steps of this process. This system is extendable to other types of molecular data as separate modules and is freely available to academic institutions for licensing.

  15. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth- and ground-based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system, including the data, logic, and presentation tiers. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets to enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial database extension is employed by the PostgreSQL database. This object-oriented database was chosen over a relational database to tag spatial objects to tabular data, improving the retrieval of census and observational data at regional, provincial, and local levels. Because the spatial database is less suited to processing raster data, a work-around was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptions (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO19115 standard. XML-structured information for the SLD and metadata is stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, census and document data.
The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
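    The key retrieval idea above is semantic tagging: datasets are linked to spatial, thematic, and temporal reference objects so that a request like "all data for one administrative unit under one theme" becomes a simple join. A minimal sketch under invented table and tag names (not the actual WISDOM schema):

```python
# Minimal sketch of tag-based retrieval: datasets carry typed tags
# (spatial unit, theme, ...), and retrieval joins on those tags.
# Schema and sample data are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dataset (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE tag (dataset_id INTEGER, kind TEXT, value TEXT);
""")
db.executemany("INSERT INTO dataset VALUES (?, ?)",
               [(1, "census 2009"),
                (2, "water quality 2009"),
                (3, "census 2009 (city)")])
db.executemany("INSERT INTO tag VALUES (?, ?, ?)",
               [(1, "admin_unit", "Can Tho"), (1, "theme", "demography"),
                (2, "admin_unit", "Can Tho"), (2, "theme", "water"),
                (3, "admin_unit", "Hanoi"),   (3, "theme", "demography")])

# All datasets for one administrative unit belonging to one theme:
rows = db.execute("""
SELECT d.title FROM dataset d
JOIN tag a ON a.dataset_id = d.id AND a.kind = 'admin_unit' AND a.value = ?
JOIN tag t ON t.dataset_id = d.id AND t.kind = 'theme' AND t.value = ?
""", ("Can Tho", "demography")).fetchall()
print(rows)  # [('census 2009',)]
```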

  16. Coastal Prairie Restoration Information System: Version 1 (Louisiana)

    USGS Publications Warehouse

    Allain, Larry

    2007-01-01

    The Coastal Prairie Restoration Information System (CPR) is a Microsoft Access database that allows users to query and view data about Louisiana coastal prairie species. Less than 0.1% of Louisiana's coastal prairie vegetation remains in a relatively undisturbed condition. Encompassing as much as 1 million hectares of land, coastal prairie is a hybrid of coastal wetlands and tall grass prairie. Over 550 plant species have been identified in Louisiana's coastal prairies to date. Efforts to conserve and restore this endangered ecosystem are limited by the ability of workers to identify and access knowledge about this diverse group of plants. In this database, a variety of data are provided for each of 650 coastal prairie species in Louisiana. The database was developed at the U.S. Geological Survey National Wetlands Research Center by Larry Allain, with software development by Myra Silva. Additional funding was provided by the biology department of the University of Louisiana at Lafayette (ULL), the ULL Center for Environmental and Ecological Technology, and the National Science Foundation.

  17. An experiment in big data: storage, querying and visualisation of data taken from the Liverpool Telescope's wide field cameras

    NASA Astrophysics Data System (ADS)

    Barnsley, R. M.; Steele, Iain A.; Smith, R. J.; Mawson, Neil R.

    2014-07-01

    The Small Telescopes Installed at the Liverpool Telescope (STILT) project has been in operation since March 2009, collecting data with three wide field unfiltered cameras: SkycamA, SkycamT and SkycamZ. To process the data, a pipeline was developed to automate source extraction, catalogue cross-matching, photometric calibration and database storage. In this paper, modifications and further developments to this pipeline will be discussed, including a complete refactor of the pipeline's codebase into Python, migration of the back-end database technology from MySQL to PostgreSQL, and changing the catalogue used for source cross-matching from USNO-B1 to APASS. In addition to this, details will be given relating to the development of a preliminary front-end to the source extracted database which will allow a user to perform common queries such as cone searches and light curve comparisons of catalogue and non-catalogue matched objects. Some next steps and future ideas for the project will also be presented.
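    A cone search, as mentioned for the front-end, returns all sources within a given angular radius of a sky position. A small self-contained sketch follows; the source list and field names are invented, and a production system would push this into the database with an indexed spatial query rather than scanning in Python.

```python
# Hedged sketch of a cone search: keep sources whose great-circle
# separation from (ra0, dec0) is within radius_deg. Coordinates in degrees.
from math import radians, degrees, sin, cos, acos

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation between two sky positions, in degrees."""
    ra1, dec1, ra2, dec2 = map(radians, (ra1, dec1, ra2, dec2))
    c = (sin(dec1) * sin(dec2)
         + cos(dec1) * cos(dec2) * cos(ra1 - ra2))
    return degrees(acos(max(-1.0, min(1.0, c))))  # clamp for rounding

def cone_search(sources, ra0, dec0, radius_deg):
    return [s for s in sources
            if angular_sep_deg(s["ra"], s["dec"], ra0, dec0) <= radius_deg]

sources = [{"id": "a", "ra": 10.00, "dec": 20.00},   # invented catalogue
           {"id": "b", "ra": 10.05, "dec": 20.02},
           {"id": "c", "ra": 50.00, "dec": -5.00}]

print([s["id"] for s in cone_search(sources, 10.0, 20.0, 0.1)])  # ['a', 'b']
```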

  18. Scientific Use Cases for the Virtual Atomic and Molecular Data Center

    NASA Astrophysics Data System (ADS)

    Dubernet, M. L.; Aboudarham, J.; Ba, Y. A.; Boiziot, M.; Bottinelli, S.; Caux, E.; Endres, C.; Glorian, J. M.; Henry, F.; Lamy, L.; Le Sidaner, P.; Møller, T.; Moreau, N.; Rénié, C.; Roueff, E.; Schilke, P.; Vastel, C.; Zwoelf, C. M.

    2014-12-01

    The VAMDC Consortium is a worldwide consortium that federates interoperable atomic and molecular databases through an e-science infrastructure. The contained data are of the highest scientific quality and are crucial for many applications: astrophysics, atmospheric physics, fusion, plasma and lighting technologies, health, etc. In this paper we present astrophysical scientific use cases for the VAMDC e-infrastructure. These cover very different applications, such as: (i) modeling the spectra of interstellar objects using the myXCLASS software tool implemented in the Common Astronomy Software Applications package (CASA), or using the CASSIS software tool, in its stand-alone version or implemented in the Herschel Interactive Processing Environment (HIPE); (ii) the use of Virtual Observatory tools accessing VAMDC databases; (iii) access to VAMDC from the Paris solar BASS2000 portal; (iv) the combination of tools and databases from the APIS service (Auroral Planetary Imaging and Spectroscopy); and (v) the combination of heterogeneous data for application to the interstellar medium from the SPECTCOL tool.

  19. Significance of genome-wide association studies in molecular anthropology.

    PubMed

    Gupta, Vipin; Khadgawat, Rajesh; Sachdeva, Mohinder Pal

    2009-12-01

    The successful advent of a genome-wide approach in association studies raises the hopes of human geneticists for solving a genetic maze of complex traits especially the disorders. This approach, which is replete with the application of cutting-edge technology and supported by big science projects (like Human Genome Project; and even more importantly the International HapMap Project) and various important databases (SNP database, CNV database, etc.), has had unprecedented success in rapidly uncovering many of the genetic determinants of complex disorders. The magnitude of this approach in the genetics of classical anthropological variables like height, skin color, eye color, and other genome diversity projects has certainly expanded the horizons of molecular anthropology. Therefore, in this article we have proposed a genome-wide association approach in molecular anthropological studies by providing lessons from the exemplary study of the Wellcome Trust Case Control Consortium. We have also highlighted the importance and uniqueness of Indian population groups in facilitating the design and finding optimum solutions for other genome-wide association-related challenges.

  20. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, presently the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.

  1. Identifying genetic relatives without compromising privacy

    PubMed Central

    He, Dan; Furlotte, Nicholas A.; Hormozdiari, Farhad; Joo, Jong Wha J.; Wadia, Akshay; Ostrovsky, Rafail; Sahai, Amit; Eskin, Eleazar

    2014-01-01

    The development of high-throughput genomic technologies has impacted many areas of genetic research. While many applications of these technologies focus on the discovery of genes involved in disease from population samples, applications of genomic technologies to an individual’s genome or personal genomics have recently gained much interest. One such application is the identification of relatives from genetic data. In this application, genetic information from a set of individuals is collected in a database, and each pair of individuals is compared in order to identify genetic relatives. An inherent issue that arises in the identification of relatives is privacy. In this article, we propose a method for identifying genetic relatives without compromising privacy by taking advantage of novel cryptographic techniques customized for secure and private comparison of genetic information. We demonstrate the utility of these techniques by allowing a pair of individuals to discover whether or not they are related without compromising their genetic information or revealing it to a third party. The idea is that individuals only share enough special-purpose cryptographically protected information with each other to identify whether or not they are relatives, but not enough to expose any information about their genomes. We show in HapMap and 1000 Genomes data that our method can recover first- and second-order genetic relationships and, through simulations, show that our method can identify relationships as distant as third cousins while preserving privacy. PMID:24614977

  2. Identifying genetic relatives without compromising privacy.

    PubMed

    He, Dan; Furlotte, Nicholas A; Hormozdiari, Farhad; Joo, Jong Wha J; Wadia, Akshay; Ostrovsky, Rafail; Sahai, Amit; Eskin, Eleazar

    2014-04-01

    The development of high-throughput genomic technologies has impacted many areas of genetic research. While many applications of these technologies focus on the discovery of genes involved in disease from population samples, applications of genomic technologies to an individual's genome or personal genomics have recently gained much interest. One such application is the identification of relatives from genetic data. In this application, genetic information from a set of individuals is collected in a database, and each pair of individuals is compared in order to identify genetic relatives. An inherent issue that arises in the identification of relatives is privacy. In this article, we propose a method for identifying genetic relatives without compromising privacy by taking advantage of novel cryptographic techniques customized for secure and private comparison of genetic information. We demonstrate the utility of these techniques by allowing a pair of individuals to discover whether or not they are related without compromising their genetic information or revealing it to a third party. The idea is that individuals only share enough special-purpose cryptographically protected information with each other to identify whether or not they are relatives, but not enough to expose any information about their genomes. We show in HapMap and 1000 Genomes data that our method can recover first- and second-order genetic relationships and, through simulations, show that our method can identify relationships as distant as third cousins while preserving privacy.
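    For illustration only, the flavor of "compare without revealing" can be conveyed with salted hashes: both parties mask their (site, genotype) pairs with a shared salt and compare the masks, learning only the overlap count. This toy is NOT the authors' protocol and is not secure for low-entropy inputs such as genotypes (the masks can be brute-forced), which is precisely why the paper develops purpose-built cryptographic techniques.

```python
# TOY illustration only -- not the paper's secure-comparison protocol.
# Each party publishes salted hashes of (site, genotype) pairs; the
# intersection size reveals how many sites match, not the raw genotypes.
# Insecure against brute force over small genotype spaces.
import hashlib

def masked(genotypes, salt):
    return {hashlib.sha256(f"{salt}:{site}:{gt}".encode()).hexdigest()
            for site, gt in genotypes.items()}

alice = {"rs123": "AA", "rs456": "AG", "rs789": "GG"}  # invented data
bob   = {"rs123": "AA", "rs456": "GG", "rs789": "GG"}

salt = "shared-session-salt"            # agreed upon out of band
overlap = masked(alice, salt) & masked(bob, salt)
print(len(overlap), "of", len(alice), "sites match")  # 2 of 3 sites match
```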

  3. Computerized Design Synthesis (CDS), A database-driven multidisciplinary design tool

    NASA Technical Reports Server (NTRS)

    Anderson, D. M.; Bolukbasi, A. O.

    1989-01-01

    The Computerized Design Synthesis (CDS) system under development at McDonnell Douglas Helicopter Company (MDHC) is targeted to make revolutionary improvements in both response time and resource efficiency in the conceptual and preliminary design of rotorcraft systems. It makes the accumulated design database and supporting technology analysis results readily available to designers and analysts of technology, systems, and production, and makes powerful design synthesis software available in a user friendly format.

  4. Design and implementation of website information disclosure assessment system.

    PubMed

    Cho, Ying-Chiang; Pan, Jen-Yi

    2015-01-01

    Internet application technologies, such as cloud computing and cloud storage, have increasingly changed people's lives. Websites contain vast amounts of personal privacy information. In order to protect this information, network security technologies, such as database protection and data encryption, attract many researchers. The most serious problems concerning web vulnerability are e-mail address and network database leakages. These leakages have many causes. For example, malicious users can steal database contents, taking advantage of mistakes made by programmers and administrators. In order to mitigate this type of abuse, a website information disclosure assessment system is proposed in this study. This system utilizes a series of technologies, such as web crawler algorithms, SQL injection attack detection, and web vulnerability mining, to assess a website's information disclosure. Thirty websites, randomly sampled from the top 50 world colleges, were used to collect leakage information. This testing showed the importance of increasing the security and privacy of website information for academic websites.
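    One concrete assessment step described above is detecting e-mail address disclosure. A hedged sketch of that single step follows; the page content is inline here, whereas a real system would first crawl the site and fetch pages.

```python
# Sketch of one leakage check: scan page text for e-mail addresses with a
# regular expression. The HTML sample is invented for illustration.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def find_emails(html):
    """Return the distinct e-mail addresses found in a page, sorted."""
    return sorted(set(EMAIL_RE.findall(html)))

page = """<html><body>
  Contact: <a href="mailto:admin@example.edu">admin@example.edu</a>
  Webmaster: webmaster@cs.example.edu
</body></html>"""

print(find_emails(page))  # ['admin@example.edu', 'webmaster@cs.example.edu']
```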

  5. Nuclear Energy Infrastructure Database Fitness and Suitability Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE-supported or related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc., in order to best understand the utility of NE’s infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation, and utilization of the NEID. The panel is comprised of five members with expertise in nuclear energy-associated research. It was intended that they represent the major constituencies associated with nuclear energy research: academia, industry, research reactors, national laboratories, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must still be gathered from the associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.

  6. System-of-Systems Technology-Portfolio-Analysis Tool

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel; Mankins, John; Feingold, Harvey; Johnson, Wayne

    2012-01-01

    Advanced Technology Life-cycle Analysis System (ATLAS) is a system-of-systems technology-portfolio-analysis software tool. ATLAS affords capabilities to (1) compare estimates of the mass and cost of an engineering system based on competing technological concepts; (2) estimate life-cycle costs of an outer-space-exploration architecture for a specified technology portfolio; (3) collect data on state-of-the-art and forecasted technology performance, and on operations and programs; and (4) calculate an index of the relative programmatic value of a technology portfolio. ATLAS facilitates analysis by providing a library of analytical spreadsheet models for a variety of systems. A single analyst can assemble a representation of a system of systems from the models and build a technology portfolio. Each system model estimates mass, and life-cycle costs are estimated by a common set of cost models. Other components of ATLAS include graphical-user-interface (GUI) software, algorithms for calculating the aforementioned index, a technology database, a report generator, and a form generator for creating the GUI for the system models. At the time of this reporting, ATLAS is a prototype, embodied in Microsoft Excel and several thousand lines of Visual Basic for Applications that run on both Windows and Macintosh computers.
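    The fourth capability, an index of the relative programmatic value of a technology portfolio, can be illustrated as a weighted sum of normalized per-technology scores. The criteria, weights, and scores below are invented for illustration; ATLAS's actual figure of merit is not given in this summary.

```python
# Toy portfolio index (illustrative, not the ATLAS formula): each
# technology is scored 0..1 against a set of criteria, scores are
# weighted, and the portfolio index is the sum over all technologies.
def portfolio_index(technologies, weights):
    """Weighted sum of per-technology criterion scores (all on 0..1 scales)."""
    return sum(weights[c] * tech[c]
               for tech in technologies for c in weights)

weights = {"mass_savings": 0.5, "cost_savings": 0.3, "readiness": 0.2}
portfolio = [
    {"mass_savings": 0.8, "cost_savings": 0.4, "readiness": 0.6},
    {"mass_savings": 0.2, "cost_savings": 0.9, "readiness": 0.3},
]
print(round(portfolio_index(portfolio, weights), 2))  # 1.07
```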

  7. An Analysis of NASA Technology Transfer. Degree awarded by Pennsylvania State Univ.

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.

    1996-01-01

    A review of previous technology transfer metrics, recommendations, and measurements is presented within the paper. A quantitative and qualitative analysis of NASA's technology transfer efforts is performed. As a relative indicator, NASA's intellectual property performance is benchmarked against a database of over 100 universities. Successful technology transfer (commercial sales, production savings, etc.) cases were tracked backwards through their history to identify the key critical elements that lead to success. Results of this research indicate that although NASA's performance is not measured well by quantitative values (intellectual property stream data), it has a net positive impact on the private sector economy. Policy recommendations are made regarding technology transfer within the context of the documented technology transfer policies since the framing of the Constitution. In the second thrust of this study, researchers at NASA Langley Research Center were surveyed to determine their awareness of, attitude toward, and perception about technology transfer. Results indicate that although researchers believe technology transfer to be a mission of the Agency, they should not be held accountable or responsible for its performance. In addition, the researchers are not well educated about the mechanisms to perform, or policies regarding, technology transfer.

  8. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    PubMed

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of the Resource Description Framework (RDF) and made it available through a SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. In addition, the ortholog information from different data sources can be compared using OrthO as a shared ontology. Here we show some examples demonstrating that ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, an ortholog database using Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
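    The hub pattern described above can be illustrated with plain triples; the predicate names below are invented stand-ins, not the actual OrthO vocabulary. An ortholog group links genes across organisms, so an annotation attached to one member is reachable from any other member via the group.

```python
# Toy triple store illustrating the ortholog-as-hub pattern. In the real
# system these would be RDF triples queried through SPARQL; here a set of
# (subject, predicate, object) tuples and a pattern-match stand in.
triples = {
    ("group:1", "hasMember", "ecoli:geneA"),
    ("group:1", "hasMember", "bsub:geneB"),
    ("ecoli:geneA", "goAnnotation", "GO:0006096"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [(S, P, O) for (S, P, O) in triples
            if s in (None, S) and p in (None, P) and o in (None, O)]

# Annotations reachable from bsub:geneB through its ortholog group:
groups = [S for (S, _, _) in query(p="hasMember", o="bsub:geneB")]
members = [O for g in groups for (_, _, O) in query(s=g, p="hasMember")]
annotations = sorted({O for m in members
                      for (_, _, O) in query(s=m, p="goAnnotation")})
print(annotations)  # ['GO:0006096']
```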

  9. Surgical research using national databases

    PubMed Central

    Leland, Hyuma; Heckmann, Nathanael

    2016-01-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research. PMID:27867945

  10. Surgical research using national databases.

    PubMed

    Alluri, Ram K; Leland, Hyuma; Heckmann, Nathanael

    2016-10-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research.

  11. The Design and Product of National 1:1000000 Cartographic Data of Topographic Map

    NASA Astrophysics Data System (ADS)

    Wang, Guizhi

    2016-06-01

    The National Administration of Surveying, Mapping and Geoinformation launched the project of dynamic updating of the national fundamental geographic information database in 2012. Within this project, the 1:50000 database is updated once a year, and the 1:250000 database is downsized and linkage-updated on that basis. In 2014, using the latest achievements of the 1:250000 database, the 1:1000000 digital line graph database was comprehensively updated; at the same time, cartographic data of the topographic map and digital elevation model data were generated. This article mainly introduces the national 1:1000000 cartographic data of the topographic map, including feature content, database structure, database-driven mapping technology, and workflow.

  12. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar appropriate results available in the literature were also considered. Relational and non-relational NoSQL database systems both show almost linear algorithmic complexity in query execution; however, their linear slopes differ greatly, with the relational system much steeper than the two NoSQL systems. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases in general perform better than native XML NoSQL databases. EHR extract visualization and editing are also document-oriented tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
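    The two persistence styles being compared can be sketched with Python's standard library: the same (invented) EHR extract stored as normalized rows in SQLite versus as one self-contained document, the way a document store such as MongoDB would hold it. Table and field names are illustrative, not the study's actual schema.

```python
import sqlite3

# Relational style: the extract is normalized across tables.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE extract (id INTEGER, patient TEXT)")
db.execute("CREATE TABLE observation (extract_id INTEGER, code TEXT, value REAL)")
db.execute("INSERT INTO extract VALUES (1, 'P001')")
db.executemany("INSERT INTO observation VALUES (?, ?, ?)",
               [(1, "heart-rate", 72.0), (1, "temp", 36.8)])

# Retrieving the extract requires a join.
rows = db.execute("""SELECT o.code, o.value FROM observation o
                     JOIN extract e ON e.id = o.extract_id
                     WHERE e.patient = 'P001'""").fetchall()

# Document style: the whole extract is one nested document,
# retrieved in a single lookup with no joins.
doc = {"id": 1, "patient": "P001",
       "observations": [{"code": "heart-rate", "value": 72.0},
                        {"code": "temp", "value": 36.8}]}
obs = [(o["code"], o["value"]) for o in doc["observations"]]

assert sorted(rows) == sorted(obs)  # same information, two layouts
```

    The join-per-retrieval cost on the relational side is one plausible source of the steeper query-time slope the study reports, though the actual slopes depend on indexing and engine internals.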

  13. Bibliometrics of NIHR HTA monographs and their related journal articles

    PubMed Central

    Royle, Pamela

    2015-01-01

    Objectives A bibliometric analysis of the UK National Institute for Health Research (NIHR) Health Technology Assessment (HTA) monographs and their related journal articles by: (1) exploring the differences in citations to the HTA monographs in Google Scholar (GS), Scopus and Web of Science (WoS), and (2) comparing Scopus citations to the monographs with their related journal articles. Setting A study of 111 HTA monographs published in 2010 and 2011, and their external journal articles. Main outcome measures Citations to the monographs in GS, Scopus and WoS, and to their external journal articles in Scopus. Results The number of citations varied among the three databases, with GS having the highest and WoS the lowest; however, the citation-based rankings among the databases were highly correlated. Overall, 56% of monographs had a related publication, with the highest proportion for primary research (76%) and lowest for evidence syntheses (43%). There was large variation in how the monographs were cited compared with journal articles, resulting in more frequent problems with unlinked citations in Scopus and WoS. When comparing the number of citations between monographs and their related journal articles from the same project, we found that monographs received more citations than their journal articles for evidence syntheses and methodology projects; by contrast, journal articles related to primary research monographs were more highly cited than their monograph. Conclusions The numbers of citations to the HTA monographs differed considerably between the databases, but were highly correlated. When an HTA monograph had a journal article from the same study, there were more citations to the journal article for primary research, but more to the monograph for evidence syntheses. Citations to the related journal articles were more reliably recorded than citations to the HTA monographs. PMID:25694457

  14. The Magnetics Information Consortium (MagIC)

    NASA Astrophysics Data System (ADS)

    Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.

    2003-12-01

    The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. Finally, the contents of these template files will be automatically parsed into the online relational database.
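    The specimen-sample-site hierarchy and the recalculation of derived properties can be sketched as follows; the values are invented, and a simple arithmetic mean of declinations stands in for the Fisher statistics a real paleomagnetic database would use.

```python
# Measurements live at the specimen level; sample- and site-level values
# are derived products, recomputable whenever new specimen data arrive.
site_data = {
    "site-1": {
        "sample-A": {"spec-1": 12.0, "spec-2": 14.0},  # declination, degrees
        "sample-B": {"spec-3": 10.0},
    }
}

def sample_mean(specimens):
    """Derived sample-level value from specimen measurements."""
    return sum(specimens.values()) / len(specimens)

def site_mean(samples):
    """Derived site-level value from sample-level means."""
    means = [sample_mean(s) for s in samples.values()]
    return sum(means) / len(means)

print(site_mean(site_data["site-1"]))  # -> 11.5, recomputed on demand
```

    Because only the specimen-level numbers are stored as originals, adding a new specimen simply changes what the derived functions return, which is the traceability property the design aims for.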

  15. U.S. Geological Survey national computer technology meeting; program and abstracts, New Orleans, Louisiana, April 10-15, 1994

    USGS Publications Warehouse

    Balthrop, B. H.; Baker, E.G.

    1994-01-01

    This report contains some of the abstracts of papers that were presented at the National Computer Technology Meeting that was held in April 1994. This meeting was sponsored by the Water Resources Division of the U.S. Geological Survey, and was attended by more than 200 technical and managerial personnel representing all the Divisions of the U.S. Geological Survey. Computer-related information from all Divisions of the U.S. Geological Survey is discussed in this compilation of abstracts. Some of the topics addressed are data transfer, data-base management, hydrologic applications, national water information systems, and geographic information systems applications and techniques.

  16. Pacific Northwest Laboratory Annual Report for 1992 to the DOE Office of Energy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreml, S.A.; Park, J.F.

    1993-06-01

    This report summarizes progress in OHER biological research and general life sciences research programs conducted at PNL in FY 1992. The research develops the knowledge and fundamental principles necessary to identify, understand, and anticipate the long-term health consequences of energy-related radiation and chemicals. Our continuing emphasis is to decrease the uncertainty of health risk estimates from energy-related technologies through an increased understanding of the ways in which radiation and chemicals cause biological damage. Descriptions of the individual research projects detailed in this report are separately abstracted and indexed for the database.

  17. Distributed Episodic Exploratory Planning (DEEP)

    DTIC Science & Technology

    2008-12-01

    API). For DEEP, Hibernate offered the following advantages: • Abstracts SQL by utilizing HQL so any database with a Java Database Connectivity... Hibernate SQL ICCRTS International Command and Control Research and Technology Symposium JDB Java Distributed Blackboard JDBC Java Database Connectivity...selected because of its opportunistic reasoning capabilities and implemented in Java for platform independence. Java was chosen for ease of

  18. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    ERIC Educational Resources Information Center

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  19. New data sources and derived products for the SRER digital spatial database

    Treesearch

    Craig Wissler; Deborah Angell

    2003-01-01

    The Santa Rita Experimental Range (SRER) digital database was developed to automate and preserve ecological data and increase their accessibility. The digital data holdings include a spatial database that is used to integrate ecological data in a known reference system and to support spatial analyses. Recently, the Advanced Resource Technology (ART) facility has added...

  20. Applying Cognitive Load Theory to the Redesign of a Conventional Database Systems Course

    ERIC Educational Resources Information Center

    Mason, Raina; Seton, Carolyn; Cooper, Graham

    2016-01-01

    Cognitive load theory (CLT) was used to redesign a Database Systems course for Information Technology students. The redesign was intended to address poor student performance and low satisfaction, and to provide a more relevant foundation in database design and use for subsequent studies and industry. The original course followed the conventional…

  1. Common Database Interface for Heterogeneous Software Engineering Tools.

    DTIC Science & Technology

    1987-12-01

    SUB-GROUP Database Management Systems ;Programming(Comuters); 1e 05 Computer Files;Information Transfer;Interfaces; 19. ABSTRACT (Continue on reverse...Air Force Institute of Technology Air University In Partial Fulfillment of the Requirements for the Degree of Master of Science in Information Systems ...Literature ..... 8 System 690 Configuration ......... 8 Database Functionis ............ 14 Software Engineering Environments ... 14 Data Manager

  2. Efficient data management tools for the heterogeneous big data warehouse

    NASA Astrophysics Data System (ADS)

    Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.

    2016-09-01

    Traditional RDBMS technology is well suited to normalized data structures and has served well for decades, but it is not optimal for data processing and analysis in data-intensive fields like social networks, the oil and gas industry, experiments at the Large Hadron Collider, etc. Several challenges have recently been raised concerning the scalability of data warehouses under workloads run against the transactional schema, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied and large amounts of data. The evaluation considers the performance, throughput and scalability of the above technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as describing the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the NoSQL database organization, and how it could be useful for further data analytics.
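    The kind of RDBMS-to-NoSQL migration such a back-end performs can be sketched with the standard library: normalized rows are denormalized into self-contained documents, with plain dicts standing in for MongoDB documents. Table and field names are invented for illustration.

```python
import sqlite3

# Source: a normalized relational schema (two tables, joined by job_id).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE jobs (id INTEGER, site TEXT)")
src.execute("CREATE TABLE events (job_id INTEGER, kind TEXT)")
src.executemany("INSERT INTO jobs VALUES (?, ?)", [(1, "CERN"), (2, "BNL")])
src.executemany("INSERT INTO events VALUES (?, ?)",
                [(1, "start"), (1, "finish"), (2, "start")])

def migrate(conn):
    """Denormalize each job plus its events into one document."""
    docs = []
    for job_id, site in conn.execute("SELECT id, site FROM jobs"):
        events = [kind for (kind,) in conn.execute(
            "SELECT kind FROM events WHERE job_id = ?", (job_id,))]
        docs.append({"_id": job_id, "site": site, "events": events})
    return docs

documents = migrate(src)
print(documents[0])  # {'_id': 1, 'site': 'CERN', 'events': ['start', 'finish']}
```

    In a real migration the dicts would be bulk-inserted into the NoSQL store; the essential step is the same denormalization, trading storage redundancy for join-free reads.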

  3. Planned and ongoing projects (pop) database: development and results.

    PubMed

    Wild, Claudia; Erdös, Judit; Warmuth, Marisa; Hinterreiter, Gerda; Krämer, Peter; Chalon, Patrice

    2014-11-01

    The aim of this study was to present the development, structure and results of a database on planned and ongoing health technology assessment (HTA) projects (POP Database) in Europe. The POP Database (POP DB) was set up in an iterative process from a basic Excel sheet to a multifunctional electronic online database. The functionalities, such as the search terminology, the procedures to fill and update the database, the access rules to enter the database, as well as the maintenance roles, were defined in a multistep participatory feedback loop with EUnetHTA Partners. The POP Database has become an online database that hosts not only the titles and MeSH categorizations, but also some basic information on status and contact details about the listed projects of EUnetHTA Partners. Currently, it stores more than 1,200 planned, ongoing or recently published projects of forty-three EUnetHTA Partners from twenty-four countries. Because the POP Database aims to facilitate collaboration, it also provides a matching system to assist in identifying similar projects. Overall, more than 10 percent of the projects in the database are identical both in terms of pathology (indication or disease) and technology (drug, medical device, intervention). In addition, approximately 30 percent of the projects are similar, meaning that they have at least some overlap in content. Although the POP DB is successful concerning regular updates of most national HTA agencies within EUnetHTA, little is known about its actual effects on collaborations in Europe. Moreover, many non-nationally nominated HTA producing agencies neither have access to the POP DB nor can share their projects.
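    The matching logic the abstract describes can be sketched directly: two projects count as identical when both pathology and technology coincide, and as similar when they overlap on at least one of the two. The entries and field names below are invented.

```python
# Hypothetical POP-DB-style project records.
projects = [
    {"title": "A", "pathology": "diabetes", "technology": "drug-X"},
    {"title": "B", "pathology": "diabetes", "technology": "drug-X"},
    {"title": "C", "pathology": "diabetes", "technology": "device-Y"},
    {"title": "D", "pathology": "asthma",   "technology": "drug-Z"},
]

def match(a, b):
    """Classify the overlap between two projects."""
    same = (a["pathology"] == b["pathology"],
            a["technology"] == b["technology"])
    if all(same):
        return "identical"   # both dimensions coincide
    if any(same):
        return "similar"     # at least some overlap in content
    return "unrelated"

print(match(projects[0], projects[1]))  # identical
print(match(projects[0], projects[2]))  # similar
print(match(projects[0], projects[3]))  # unrelated
```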

  4. Does filler database size influence identification accuracy?

    PubMed

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor-filler database size-as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provides support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
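    The filler-database-size effect can be illustrated with a toy computation: the larger the candidate pool, the closer the top-ranked fillers can sit to the suspect. The similarity scores below are an evenly spaced stand-in for the output of a facial matching algorithm, not real data.

```python
def best_fillers(pool_size, k=6):
    """Top-k similarity scores from a pool of candidate fillers.

    Hypothetical scores, uniformly spread over [0, 1); a real system
    would rank algorithmic suspect-filler similarity instead.
    """
    similarities = [i / pool_size for i in range(pool_size)]
    return sorted(similarities, reverse=True)[:k]

small = best_fillers(5_000)     # pool sizes matching the study's conditions
large = best_fillers(125_000)

# Larger pool -> the selected fillers are more suspect-like.
assert min(large) > min(small)
```

    This monotonic relationship is the mechanism behind the study's finding: very high filler-suspect similarity can make the lineup harder for witnesses, not easier.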

  5. WikiPEATia - a web based platform for assembling peatland data through ‘crowd sourcing’

    NASA Astrophysics Data System (ADS)

    Wisser, D.; Glidden, S.; Fieseher, C.; Treat, C. C.; Routhier, M.; Frolking, S. E.

    2009-12-01

    The Earth System Science community is realizing that peatlands are an important and unique terrestrial ecosystem that has not yet been well-integrated into large-scale earth system analyses. A major hurdle is the lack of accessible, geospatial data of peatland distribution, coupled with data on peatland properties (e.g., vegetation composition, peat depth, basal dates, soil chemistry, peatland class) at the global scale. These data, however, are available at the local scale. Although a comprehensive global database on peatlands probably lags similar data on more economically important ecosystems such as forests, grasslands, and croplands, a large amount of field data have been collected over the past several decades. A few efforts have been made to map peatlands at large scales, but existing data have not been assembled into a single geospatial database that is publicly accessible, or do not depict data with the level of detail needed in the Earth System Science community. A global peatland database would contribute to advances in a number of research fields such as hydrology, vegetation and ecosystem modeling, permafrost modeling, and earth system modeling. We present a Web 2.0 approach that uses state-of-the-art web server and innovative online mapping technologies and is designed to create such a global database through 'crowd-sourcing'. Primary functions of the online system include form-driven textual user input of peatland research metadata, spatial data input of peatland areas via a mapping interface, database editing and querying capabilities, as well as advanced visualization and data analysis tools. WikiPEATia provides an integrated information technology platform for assembling, integrating, and posting peatland-related geospatial datasets, and it facilitates and encourages research community involvement. A successful effort will make existing peatland data much more useful to the research community, and will help to identify significant data gaps.

  6. UniPROBE, update 2015: new tools and content for the online database of protein-binding microarray data on protein-DNA interactions.

    PubMed

    Hume, Maxwell A; Barrera, Luis A; Gisselbrecht, Stephen S; Bulyk, Martha L

    2015-01-01

    The Universal PBM Resource for Oligonucleotide Binding Evaluation (UniPROBE) serves as a convenient source of information on published data generated using universal protein-binding microarray (PBM) technology, which provides in vitro data about the relative DNA-binding preferences of transcription factors for all possible sequence variants of a length k ('k-mers'). The database displays important information about the proteins and displays their DNA-binding specificity data in terms of k-mers, position weight matrices and graphical sequence logos. This update to the database documents the growth of UniPROBE since the last update 4 years ago, and introduces a variety of new features and tools, including a new streamlined pipeline that facilitates data deposition by universal PBM data generators in the research community, a tool that generates putative nonbinding (i.e. negative control) DNA sequences for one or more proteins and novel motifs obtained by analyzing the PBM data using the BEEML-PBM algorithm for motif inference. The UniPROBE database is available at http://uniprobe.org. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
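    The "universal" coverage property the abstract refers to — binding preferences for all possible sequence variants of length k — is simply the enumeration of all 4**k DNA k-mers, which is a one-liner:

```python
from itertools import product

def kmers(k):
    """All possible DNA k-mers of length k (4**k variants)."""
    return ["".join(p) for p in product("ACGT", repeat=k)]

print(len(kmers(3)))  # 64 = 4**3; a universal PBM scores every one
```

    In UniPROBE, each protein's data assigns a relative binding preference to every such k-mer, which is what makes the k-mer tables, position weight matrices, and sequence logos derivable from the same underlying measurements.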

  7. HUNT: launch of a full-length cDNA database from the Helix Research Institute.

    PubMed

    Yudate, H T; Suwa, M; Irie, R; Matsui, H; Nishikawa, T; Nakamura, Y; Yamaguchi, D; Peng, Z Z; Yamamoto, T; Nagai, K; Hayashi, K; Otsuki, T; Sugiyama, T; Ota, T; Suzuki, Y; Sugano, S; Isogai, T; Masuho, Y

    2001-01-01

    The Helix Research Institute (HRI) in Japan is releasing 4356 HUman Novel Transcripts and related information in the newly established HUNT database. The institute is a joint research project principally funded by the Japanese Ministry of International Trade and Industry, and the clones were sequenced in the governmental New Energy and Industrial Technology Development Organization (NEDO) Human cDNA Sequencing Project. The HUNT database contains an extensive amount of annotation from advanced analysis and represents an essential bioinformatics contribution towards understanding of the gene function. The HRI human cDNA clones were obtained from full-length enriched cDNA libraries constructed with the oligo-capping method and have resulted in novel full-length cDNA sequences. A large fraction has little similarity to any proteins of known function and to obtain clues about possible function we have developed original analysis procedures. Any putative function deduced here can be validated or refuted by complementary analysis results. The user can also extract information from specific categories like PROSITE patterns, PFAM domains, PSORT localization, transmembrane helices and clones with GENIUS structure assignments. The HUNT database can be accessed at http://www.hri.co.jp/HUNT.

  8. Magnetic Resonance Imaging as an Adjunct to Mammography for Breast Cancer Screening in Women at Less Than High Risk for Breast Cancer: A Health Technology Assessment

    PubMed Central

    Nikitovic-Jokic, Milica; Holubowich, Corinne

    2016-01-01

    Background Screening with mammography can detect breast cancer early, before clinical symptoms appear. Some cancers, however, are not captured with mammography screening alone. Among women at high risk for breast cancer, magnetic resonance imaging (MRI) has been suggested as a safe adjunct (supplemental) screening tool that can detect breast cancers missed on screening mammography, potentially reducing the number of deaths associated with the disease. However, the use of adjunct screening tests may also increase the number of false-positive test results, which may lead to unnecessary follow-up testing, as well as patient stress and anxiety. We investigated the benefits and harms of MRI as an adjunct to mammography compared with mammography alone for screening women at less than high risk (average or higher than average risk) for breast cancer. Methods We searched Ovid MEDLINE, Ovid Embase, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects (DARE), Centre for Reviews and Dissemination (CRD) Health Technology Assessment Database, and National Health Service (NHS) Economic Evaluation Database, from January 2002 to January 2016, for evidence of effectiveness, harms, and diagnostic accuracy. Only studies evaluating the use of screening breast MRI as an adjunct to mammography in the specified populations were included. Results No studies in women at less than high risk for breast cancer met our inclusion criteria. Conclusions It remains uncertain if the use of adjunct screening breast MRI in women at less than high risk (average or higher than average risk) for breast cancer will reduce breast cancer–related mortality without significant increases in unnecessary follow-up testing and treatment. PMID:27990198

  9. Proteogenomic strategies for identification of aberrant cancer peptides using large-scale Next Generation Sequencing data

    DOE PAGES

    Woo, Sunghee; Cha, Seong Won; Na, Seungjin; ...

    2014-11-17

    Cancer is driven by the acquisition of somatic DNA lesions. Distinguishing the early driver mutations from subsequent passenger mutations is key to molecular sub-typing of cancers, and the discovery of novel biomarkers. The availability of genomics technologies (mainly whole-genome and exome sequencing, and transcript sampling via RNA-seq, collectively referred to as NGS) have fueled recent studies on somatic mutation discovery. However, the vision is challenged by the complexity, redundancy, and errors in genomic data, and the difficulty of investigating the proteome using only genomic approaches. Recently, combinations of proteomic and genomic technologies are increasingly employed. However, the complexity and redundancy of NGS data remains a challenge for proteogenomics, and various trade-offs must be made to allow for the searches to take place. This paper provides a discussion of two such trade-offs, relating to large database searches and FDR calculations, and their implications for cancer proteogenomics. Moreover, it extends and develops the idea of a unified genomic variant database that can be searched by any mass spectrometry sample. A total of 879 BAM files downloaded from the TCGA repository were used to create a 4.34 GB unified FASTA database which contained 2,787,062 novel splice junctions, 38,464 deletions, 1105 insertions, and 182,302 substitutions. Proteomic data from a single ovarian carcinoma sample (439,858 spectra) was searched against the database. By applying the most conservative FDR measure, we have identified 524 novel peptides and 65,578 known peptides at 1% FDR threshold. The novel peptides include interesting examples of doubly mutated peptides, frame-shifts, and non-sample-recruited mutations, which emphasize the strength of our approach.
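    A 1% FDR threshold of this kind is usually estimated with a target-decoy procedure; a minimal sketch with invented scores follows (the paper's exact FDR measure may differ, so this only illustrates the standard idea).

```python
# Peptide-spectrum matches as (score, is_decoy); scores are invented.
psms = [
    (9.1, False), (8.7, False), (8.5, False), (8.2, True),
    (7.9, False), (7.5, False), (7.1, True), (6.8, False),
]

def accept_at_fdr(psms, fdr=0.01):
    """Accept top-scoring PSMs until the decoy fraction exceeds the FDR cap."""
    accepted, targets, decoys = [], 0, 0
    for score, is_decoy in sorted(psms, reverse=True):
        decoys += is_decoy
        targets += not is_decoy
        # Estimated FDR among everything at or above this score.
        if targets and decoys / targets > fdr:
            break
        accepted.append(score)
    return accepted

print(accept_at_fdr(psms))  # [9.1, 8.7, 8.5]: accepted before FDR exceeds 1%
```

    The trade-off the paper discusses is visible even here: enlarging the search database (more candidate peptides) inflates the score distribution of false matches, so the same FDR cap admits fewer identifications.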

  10. Receptivity of Librarians to Optical Information Technologies and Products.

    ERIC Educational Resources Information Center

    Eaton, Nancy

    1986-01-01

    Examines factors which may affect the receptivity of librarians to the use of optical disk technologies, including hardware and software issues, the content of currently available databases, and the integration of optical technologies into existing library services. (CLB)

  11. Scalable global grid catalogue for Run3 and beyond

    NASA Astrophysics Data System (ADS)

    Martinez Pedreira, M.; Grigoras, C.; ALICE Collaboration

    2017-10-01

    The AliEn (ALICE Environment) file catalogue is a global unique namespace providing mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are integral parts of the system and are used by the Grid jobs as well as local users to store and access all files on the Grid storage elements. The catalogue has been in production since 2005 and over the past 11 years has grown to more than 2 billion logical file names. The backend is a set of distributed relational databases, ensuring smooth growth and fast access. Due to the anticipated fast future growth, we are looking for ways to enhance the performance and scalability by simplifying the catalogue schema while keeping the functionality intact. We investigated different backend solutions, such as distributed key value stores, as replacement for the relational database. This contribution covers the architectural changes in the system, together with the technology evaluation, benchmark results and conclusions.
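    The catalogue's core mapping — a UNIX-like logical namespace resolved to physical replicas — can be sketched with a plain dict standing in for the distributed key-value backend under evaluation. Paths and storage-element names below are invented.

```python
catalogue = {}  # logical file name (LFN) -> list of physical replicas

def register(lfn, storage_element, pfn):
    """Record one physical replica of a logical file."""
    catalogue.setdefault(lfn, []).append((storage_element, pfn))

def lookup(lfn):
    """Resolve a logical name to all of its physical replicas."""
    return catalogue.get(lfn, [])

def listdir(prefix):
    """Hierarchical listing over flat keys, as a KV backend must emulate."""
    return sorted(lfn for lfn in catalogue if lfn.startswith(prefix))

register("/alice/data/run1/file.root", "SE::CERN", "root://cern/f1")
register("/alice/data/run1/file.root", "SE::FZK", "root://fzk/f1")
register("/alice/data/run2/other.root", "SE::CERN", "root://cern/f2")

print(lookup("/alice/data/run1/file.root"))  # two replicas worldwide
print(listdir("/alice/data/"))
```

    The simplification the contribution describes amounts to keeping exactly this LFN-to-replicas mapping while dropping the relational schema around it; the catch, visible in `listdir`, is that hierarchical operations must then be rebuilt on top of flat keys.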

  12. Pathway — Using a State-of-the-Art Digital Video Database for Research and Development in Teacher Education

    NASA Astrophysics Data System (ADS)

    Adrian, Brian; Zollman, Dean; Stevens, Scott

    2006-02-01

    To demonstrate how state-of-the-art video databases can address issues related to the lack of preparation of many physics teachers, we have created the prototype Physics Teaching Web Advisory (Pathway). Pathway's Synthetic Interviews and related video materials are beginning to provide pre-service and out-of-field in-service teachers with much-needed professional development and well-prepared teachers with new perspectives on teaching physics. The prototype was limited to a demonstration of the systems. Now, with an additional grant we will extend the system and conduct research and evaluation on its effectiveness. This project will provide virtual expert help on issues of pedagogy and content. In particular, the system will convey, by example and explanation, contemporary ideas about the teaching of physics and applications of physics education research. The research effort will focus on the value of contemporary technology to address the continuing education of teachers who are teaching in a field in which they have not been trained.

  13. Keeping Track Every Step of the Way

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Knowledge Sharing Systems, Inc., a producer of intellectual assets management software systems for the federal government, universities, non-profit laboratories, and private companies, constructed and presently manages the NASA Technology Tracking System, also known as TechTracS. Under contract to Langley Research Center, TechTracS identifies and captures all NASA technologies, manages the patent prosecution process, and tracks each technology's progress en route to commercialization. The system supports all steps involved in various technology transfer activities and is considered the premier intellectual asset management system used in the federal government today. NASA TechTracS consists of multiple relational databases and web servers located at each of the 10 field centers as well as NASA Headquarters. The system is capable of supporting the following functions: planning commercial technologies; commercialization activities; reporting new technologies and inventions; and processing and tracking intellectual property rights, licensing, partnerships, awards, and success stories. NASA TechTracS is critical to the Agency's ongoing mission to commercialize its revolutionary technologies in a variety of sectors within private industry, both aerospace and non-aerospace.

  14. Technologies for developing an advanced intelligent ATM with self-defence capabilities

    NASA Astrophysics Data System (ADS)

    Sako, Hiroshi

    2010-01-01

    We have developed several technologies for protecting automated teller machines. These technologies are based mainly on pattern recognition and are used to implement various self-defence functions. They include (i) banknote recognition and information retrieval for preventing machines from accepting counterfeit and damaged banknotes and for retrieving information about detected counterfeits from a relational database, (ii) form processing and character recognition for preventing machines from accepting remittance forms without due dates and/or with insufficient payment, (iii) person identification for preventing machines from transacting with non-customers, and (iv) object recognition for guarding machines against foreign objects, such as spy cams, that might be surreptitiously attached to them, and for protecting users against someone attempting to peek at their user information, such as their personal identification number. The person identification technology has been implemented in most ATMs in Japan, and field tests have demonstrated that the banknote recognition technology can recognise more than 200 types of banknote from 30 different countries. We are developing an "advanced intelligent ATM" that incorporates all of these technologies.

  15. Identifying work-related motor vehicle crashes in multiple databases.

    PubMed

    Thomas, Andrea M; Thygerson, Steven M; Merrill, Ray M; Cook, Lawrence J

    2012-01-01

    To compare and estimate the magnitude of work-related motor vehicle crashes in Utah using 2 probabilistically linked statewide databases. Data from 2006 and 2007 motor vehicle crash and hospital databases were joined through probabilistic linkage. Summary statistics and capture-recapture were used to describe occupants injured in work-related motor vehicle crashes and to estimate the size of this population. There were 1597 occupants in the motor vehicle crash database and 1673 patients in the hospital database identified as being in a work-related motor vehicle crash. We identified 1443 occupants with at least one record from either the motor vehicle crash or hospital database indicating work-relatedness that linked to any record in the opposing database. We found that 38.7 percent of occupants injured in work-related motor vehicle crashes identified in the motor vehicle crash database did not have a primary payer code of workers' compensation in the hospital database, and 40.0 percent of patients injured in work-related motor vehicle crashes identified in the hospital database did not meet our definition of a work-related motor vehicle crash in the motor vehicle crash database. Depending on how occupants injured in work-related motor vehicle crashes are identified, we estimate the population to be between 1852 and 8492 in Utah for the years 2006 and 2007. Research on single databases may lead to biased interpretations of work-related motor vehicle crashes. Combining 2 population-based databases may still result in an underestimate of the magnitude of work-related motor vehicle crashes. Improved coding of work-related incidents is needed in current databases.
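
    The capture-recapture idea used above can be illustrated with the simplest two-source estimator (Lincoln-Petersen), applied to the counts reported in the abstract. This is a sketch of the general technique, not necessarily the exact model the authors fitted; notably, it reproduces the lower end of their reported range.

```python
def lincoln_petersen(n1, n2, m):
    """Two-source capture-recapture estimate of total population size:
    N = n1 * n2 / m, where n1 and n2 are the counts captured by each
    source and m is the number of records matched in both sources."""
    return n1 * n2 / m

# Counts from the abstract: 1597 crash-database occupants, 1673 hospital
# patients, and 1443 records linked across both databases.
estimate = lincoln_petersen(1597, 1673, 1443)
print(round(estimate))  # 1852, matching the lower bound reported above
```

    The wide reported range (1852 to 8492) reflects how sensitive the estimate is to which records are counted as "work-related" in each source.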

  16. High-Resolution Chromosome Ideogram Representation of Currently Recognized Genes for Autism Spectrum Disorders

    PubMed Central

    Butler, Merlin G.; Rafi, Syed K.; Manzardo, Ann M.

    2015-01-01

    Recently, autism-related research has focused on the identification of various genes and disturbed pathways causing the genetically heterogeneous group of autism spectrum disorders (ASD). The list of autism-related genes has significantly increased due to better awareness, advances in genetic technology and expanding searchable genomic databases. We compiled a master list of known and clinically relevant autism spectrum disorder genes identified with supporting evidence from peer-reviewed medical literature sources, by searching key words related to autism and genetics, and from authoritative autism-related public access websites, such as the Simons Foundation Autism Research Initiative genomic database dedicated to gene discovery and characterization. Our list consists of 792 genes arranged in alphabetical order in tabular form, with gene symbols placed on high-resolution human chromosome ideograms, thereby enabling clinical and laboratory geneticists and genetic counsellors to access convenient visual images of the location and distribution of ASD genes. Meaningful correlations of the observed phenotype in patients with suspected/confirmed ASD gene(s) at the chromosome region or breakpoint band site can be made to inform diagnosis and gene-based personalized care and to provide genetic counselling for families. PMID:25803107

  17. Assistive technology for communication of older adults: a systematic review.

    PubMed

    Pedrozo Campos Antunes, Thaiany; Souza Bulle de Oliveira, Acary; Hudec, Robert; Brusque Crocetta, Tania; Ferreira de Lima Antão, Jennifer Yohanna; de Almeida Barbosa, Renata Thais; Guarnieri, Regiani; Massetti, Thais; Garner, David M; de Abreu, Luiz Carlos

    2018-02-16

    To describe the use of assistive technology to enhance communication opportunities for older adults. A systematic review was conducted in two databases, PubMed and Web of Science, using two different searches in each. The search was limited to original articles, in the English language, including people aged 60 years and older who used any type of assistive technology for communication. The articles found in the initial search were filtered by title and abstract, and the remaining articles were read in full. Eighteen studies were included in this review after full-text reading. Most of the studies included apparently healthy participants with communication limitations due to aging-related changes; the others included people with some pathology that prevented normal communication. Four categories of assistive technology were identified: assistive technology for people with speech problems; robot or videoconferencing systems; Information and Communication Technologies; and other types of assistive technology for communication, such as hearing aids and scrapbooks. Assistive technology for communication is not used only by older adults with disabilities that prevent usual communication; it is mostly used by older adults without a pathological communication problem.

  18. Implementation of the CUAHSI information system for regional hydrological research and workflow

    NASA Astrophysics Data System (ADS)

    Bugaets, Andrey; Gartsman, Boris; Bugaets, Nadezhda; Krasnopeyev, Sergey; Krasnopeyeva, Tatyana; Sokolov, Oleg; Gonchukov, Leonid

    2013-04-01

    Environmental research and education have become increasingly data-intensive as a result of the proliferation of digital technologies, instrumentation, and pervasive networks through which data are collected, generated, shared, and analyzed. Over the next decade, it is likely that science and engineering research will produce more scientific data than has been created over the whole of human history (Cox et al., 2006). Successfully using these data to achieve new scientific breakthroughs depends on the ability to access, organize, integrate, and analyze large datasets. The new project of PGI FEB RAS (http://tig.dvo.ru), FERHRI (www.ferhri.org) and Primgidromet (www.primgidromet.ru) is focused on the creation of an open, unified hydrological information system, conforming to international standards, to support hydrological investigations, water management and forecast systems. Within the hydrologic science community, the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (http://his.cuahsi.org) has been developing a distributed network of data sources and functions that are integrated using web services and that provide access to data, tools, and models enabling synthesis, visualization, and evaluation of hydrologic system behavior. On top of the CUAHSI technologies, the first two template databases were developed for primary datasets of special observations on experimental basins in the Far East region of Russia. The first database contains data from the special observations performed at the former (1957-1994) Primorskaya Water-Balance Station (1500 km2). Measurements were carried out at 20 hydrological and 40 rain gauging stations and were published as a special series, but only as hardcopy books. The database provides raw logger data with hourly and daily time support. 
The second database, called «FarEastHydro», provides published standard daily measurements performed at the Roshydromet observation network (200 hydrological and meteorological stations) for the period from 1930 through 1990. Both data resources are maintained in test mode at the project site http://gis.dvo.ru:81/, which is permanently updated. After this first success, the decision was made to use the CUAHSI technology as the basis for a hydrological information system to support the data publishing and workflow of Primgidromet, the regional office of the Federal State Hydrometeorological Agency. At the moment, the Primgidromet observation network is equipped with 34 automatic SEBA pneumatic pressure-sensor gauges (PS-Light-2) and 36 automatic SEBA weather stations. The large datasets generated by the sensor networks are organized and stored within a central ODM database, which allows the data to be unambiguously interpreted with sufficient metadata and provides a traceable heritage from raw measurements to usable information. Organization of the data within a central CUAHSI ODM database was the most critical step, with several important implications. This technology is widespread and well documented, and it ensures that all datasets are publicly available and readily usable by other investigators and developers to support additional analyses and hydrological modeling. Implementation of ODM within a Relational Database Management System eliminates potential data-manipulation errors and intermediate data-processing steps. Wrapping the CUAHSI WaterOneFlow web service into an OpenMI 2.0 linkable component (www.openmi.org) allows seamless integration with well-known hydrological modeling systems.
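
    The ODM organization described above is, at its core, a relational observations schema. A minimal sketch follows, simplified from the CUAHSI ODM design: the real ODM defines many more tables and columns, the project used a full RDBMS rather than SQLite, and the site and variable values here are illustrative.

```python
import sqlite3

# Simplified ODM-style schema: sites, variables, and time-stamped values
# linked by foreign keys (the real ODM adds methods, sources, qualifiers...).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sites (
    SiteID    INTEGER PRIMARY KEY,
    SiteCode  TEXT UNIQUE,
    SiteName  TEXT,
    Latitude  REAL,
    Longitude REAL
);
CREATE TABLE Variables (
    VariableID   INTEGER PRIMARY KEY,
    VariableCode TEXT UNIQUE,
    VariableName TEXT,
    Units        TEXT
);
CREATE TABLE DataValues (
    ValueID       INTEGER PRIMARY KEY,
    DataValue     REAL,
    LocalDateTime TEXT,
    SiteID        INTEGER REFERENCES Sites(SiteID),
    VariableID    INTEGER REFERENCES Variables(VariableID)
);
""")
con.execute("INSERT INTO Sites VALUES (1, 'PWBS-01', 'Primorskaya station 1', 43.1, 131.9)")
con.execute("INSERT INTO Variables VALUES (1, 'Q', 'Discharge', 'm^3/s')")
con.execute("INSERT INTO DataValues VALUES (1, 12.4, '1960-06-01T00:00:00', 1, 1)")

# Every stored value carries its full context via joins:
row = con.execute("""
    SELECT s.SiteName, v.VariableName, d.DataValue
    FROM DataValues d
    JOIN Sites s ON s.SiteID = d.SiteID
    JOIN Variables v ON v.VariableID = d.VariableID
""").fetchone()
```

    This is what the abstract means by unambiguous interpretation: a bare number in DataValues is always joinable back to its site, variable and units.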

  19. Enabling heterogenous multi-scale database for emergency service functions through geoinformation technologies

    NASA Astrophysics Data System (ADS)

    Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in the form of raster and vector layers linked with non-spatial information. Comprehensive data are required to handle an emergency situation in its different phases. The database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core datasets are the minimum required data, including base, thematic and infrastructure layers, needed to handle disasters. Disaster-specific information is required to handle a particular disaster situation, such as a flood, cyclone, forest fire, earthquake, landslide or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of the required resolution for visualisation and analysis in disaster management. The database is therefore interconnected and comprehensive, meeting the requirements of Emergency Management. This kind of integrated, comprehensive and structured database with appropriate information is required to deliver the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as availability of data, sharing policies, compatible geospatial standards and data interoperability. 
Therefore, to facilitate the use, sharing, and integration of spatial data, standards need to be defined for building emergency database systems. These cover aspects such as i) data integration procedures, namely a standard coding scheme, schema, metadata format and spatial format; ii) a database organisation mechanism covering data management, catalogues and data models; and iii) database dissemination through a suitable environment, as a standard service, for effective service dissemination. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising the multi-scale, multi-source data of NDEM and for enabling effective emergency response through customized user interfaces. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.

  20. A Support Database System for Integrated System Health Management (ISHM)

    NASA Technical Reports Server (NTRS)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. 
The system hierarchy model replicates the physical relationships between system elements to provide the logical context for the database. The historical data archive provides a common repository for sensor data that can be shared between developers and applications. The firmware codebase is used by the developer to organize the intelligent element firmware into atomic units which can be assembled into complete firmware for specific elements.
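
    The system hierarchy model described above, in which physical relationships between system elements provide the logical context for the database, is commonly stored as a self-referencing table. The sketch below is an illustration of that general pattern, not the actual HADS schema, and all element names are invented.

```python
import sqlite3

# Illustrative self-referencing hierarchy table: each element points to its
# physical parent (NULL for the root).
con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE SystemElements (
    ElementID INTEGER PRIMARY KEY,
    Name      TEXT,
    ParentID  INTEGER REFERENCES SystemElements(ElementID)
)""")
con.executemany("INSERT INTO SystemElements VALUES (?, ?, ?)", [
    (1, "Test stand",         None),
    (2, "Propellant feed",    1),
    (3, "Pressure sensor P1", 2),
])

def path_to_root(con, element_id):
    """Walk upward from an element to the root, recovering the physical
    context in which, e.g., a sensor's historical data was recorded."""
    names = []
    while element_id is not None:
        name, element_id = con.execute(
            "SELECT Name, ParentID FROM SystemElements WHERE ElementID = ?",
            (element_id,)).fetchone()
        names.append(name)
    return " / ".join(reversed(names))

print(path_to_root(con, 3))  # Test stand / Propellant feed / Pressure sensor P1
```

    A hierarchy stored this way lets the historical data archive tag each measurement with a single element ID while still supporting queries over whole subsystems.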

  1. Blending Technology with Camp Tradition: Technology Can Simplify Camp Operations.

    ERIC Educational Resources Information Center

    Salzman, Jeff

    2000-01-01

    Discusses uses of technology appropriate for camps, which are service organizations based on building relationships. Describes relationship marketing and how it can be enhanced through use of Web sites, interactive brochures, and client databases. Outlines other technology uses at camp: automated dispensing of medications, satellite tracking of…

  2. The New Library, A Hybrid Organization.

    ERIC Educational Resources Information Center

    Waaijers, Leo

    This paper discusses changes in technology in libraries over the last decade, beginning with an overview of the impact of databases, the Internet, and the World Wide Web on libraries. The integration of technology at Delft University of Technology (Netherlands) is described, including use of scanning technology, fax, and e-mail for document…

  3. Performance assessment of EMR systems based on post-relational database.

    PubMed

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data anywhere and at any time, with a fast response time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results showed that the post-relational database Caché works faster than the relational database Oracle and performed well in the real-time EMR system.

  4. Wireless access to a pharmaceutical database: a demonstrator for data driven Wireless Application Protocol (WAP) applications in medical information processing.

    PubMed

    Schacht Hansen, M; Dørup, J

    2001-01-01

    The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. 
However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control.
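
    The brand-name search described above (entries 4 and 5) can be sketched as a simple parameterized query. SQLite stands in here for the MySQL/PHP3 stack the authors used, and the two-table layout and all names below are assumptions, not the catalogue's actual 35-table schema.

```python
import sqlite3

# Illustrative two-table fragment: brands linked to therapeutic groups.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE therapeutic_groups (
    group_id   INTEGER PRIMARY KEY,
    group_name TEXT
);
CREATE TABLE brands (
    brand_id   INTEGER PRIMARY KEY,
    brand_name TEXT,
    group_id   INTEGER REFERENCES therapeutic_groups(group_id)
);
""")
con.execute("INSERT INTO therapeutic_groups VALUES (1, 'Analgesics')")
con.execute("INSERT INTO brands VALUES (1, 'Panodil', 1)")

# Prefix search, as a WAP card listing matching brand names might issue it.
matches = con.execute(
    "SELECT brand_name FROM brands WHERE brand_name LIKE ? ORDER BY brand_name",
    ("Pano%",)).fetchall()
```

    The parameterized query (the `?` placeholder) matters on a server-side gateway like the one described: it keeps user input from being interpreted as SQL.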

  5. Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Protocol applications in medical information processing

    PubMed Central

    Hansen, Michael Schacht

    2001-01-01

    Background The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. Objectives To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. Methods We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. Results A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. Conclusions We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. 
An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control. PMID:11720946

  6. Information Technology Support in the 8000 Directorate

    NASA Technical Reports Server (NTRS)

    2004-01-01

    My summer internship was spent supporting various projects within the Environmental Management Office and Glenn Safety Office. Mentored by Eli Abumeri, I was trained in areas of Information Technology such as servers, printers, scanners, CAD systems, the Web, programming, database management, and ODIN (networking, computers, and phones). I worked closely with the Chemical Sampling and Analysis Team (CSAT) to redesign a database to more efficiently manage and maintain data collected for the Drinking Water Program. This program has been established for over fifteen years here at the Glenn Research Center. It involves the continued testing and retesting of all drinking water dispensers. The quality of the drinking water is of great importance and is determined by comparing the concentration of contaminants in the water with specifications set forth by the Environmental Protection Agency (EPA) in the Safe Drinking Water Act (SDWA) and its 1986 and 1991 amendments. The Drinking Water Program consists of periodic testing of all drinking water fountains and sinks. Each is tested at least once every 2 years for contaminants and naturally occurring species. The EPA's protocol is to collect an initial and a 5-minute draw from each dispenser. The 5-minute draw is what is used for the maximum contaminant level. However, the CSAT has added a 30-second draw, since most individuals do not run the water for 5 minutes prior to drinking. These data are then entered into a relational Microsoft Access database. The database allows for the quick retrieval of any tests done on any dispenser. The data can be queried by building number, date or test type, and test results are documented in an analytical report for employees to read. To aid with the tracking of recycled materials within the lab, my help was enlisted to create a database that could make this process less cumbersome and more efficient. The database records the date of pickup, the type of material, the weight received, and the unit cost per recyclable, from which it calculates the dollar amount generated by recycling each material. This database will ultimately prove useful in determining the amounts of materials consumed by the lab and will help serve as an indicator of potential overuse.
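
    The dollar-amount calculation the recycling database supports is a straightforward weight-times-unit-cost sum. A sketch, with purely illustrative field names and values:

```python
# Each pickup record: date, material, weight received, unit cost per pound.
# The materials and prices below are invented for illustration.
pickups = [
    {"date": "2004-06-01", "material": "aluminum", "weight_lb": 120.0, "unit_cost": 0.40},
    {"date": "2004-06-01", "material": "paper",    "weight_lb": 300.0, "unit_cost": 0.02},
]

def total_value(pickups):
    """Dollar amount generated across a set of recycling pickups."""
    return sum(p["weight_lb"] * p["unit_cost"] for p in pickups)

print(f"${total_value(pickups):.2f}")  # $54.00
```

    Grouping the same sum by material over time is what makes the database useful as an overuse indicator: rising weights per period flag rising consumption.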

  7. Digital divide, biometeorological data infrastructures and human vulnerability definition

    NASA Astrophysics Data System (ADS)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service nowadays implies bridging the digital divide, as it requires access to, and the ability to use, complex technological devices, massive meteorological data, the user's geographic location and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure: a complex platform formed by multiple components, including a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for defining biometeorological profiles of persons and for the future development of customized climate services.
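
    The abstract does not give Pronbiomet's exact formulation, but a standard way to compute the partial density of atmospheric oxygen is to apply the ideal gas law to the oxygen partial pressure. The sketch below uses that textbook approach, with dry-air composition assumed; it is not claimed to be the model's own method.

```python
R = 8.314       # J/(mol*K), universal gas constant
M_O2 = 0.032    # kg/mol, molar mass of O2
X_O2 = 0.2095   # molar fraction of O2 in dry air

def oxygen_partial_density(p_pa, t_kelvin, e_pa=0.0):
    """Partial density of atmospheric oxygen in kg/m^3.
    p_pa: total air pressure; e_pa: water-vapour pressure, subtracted
    so that the O2 fraction applies to the dry-air part only."""
    p_o2 = X_O2 * (p_pa - e_pa)          # partial pressure of O2
    return p_o2 * M_O2 / (R * t_kelvin)  # ideal gas: rho = p*M/(R*T)

# Standard sea-level conditions (101325 Pa, 15 C), dry air:
rho_g_m3 = oxygen_partial_density(101325, 288.15) * 1000
print(round(rho_g_m3, 1))  # ~283.5 g/m^3
```

    Maps of this quantity vary with pressure, temperature and humidity fields, which is why the system can produce four distinct world maps per day.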

  8. Digital divide, biometeorological data infrastructures and human vulnerability definition.

    PubMed

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service nowadays implies bridging the digital divide, as it requires access to, and the ability to use, complex technological devices, massive meteorological data, the user's geographic location and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure: a complex platform formed by multiple components, including a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for defining biometeorological profiles of persons and for the future development of customized climate services.

  9. Digital divide, biometeorological data infrastructures and human vulnerability definition

    NASA Astrophysics Data System (ADS)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2017-06-01

    The design and implementation of any climate-related health service nowadays implies bridging the digital divide, as it requires access to, and the ability to use, complex technological devices, massive meteorological data, the user's geographic location and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure: a complex platform formed by multiple components, including a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for defining biometeorological profiles of persons and for the future development of customized climate services.

  10. Liz Torres | NREL

    Science.gov Websites

    Areas of Expertise: customer service; technically savvy; event planning; word processing/desktop publishing; database management. Research Interests: website design; database design; computational science. Technology Consulting, Westminster, CO (2007-2012); Administrative Assistant, Source One Management, Denver, CO (2005

  11. Microcomputers in Libraries.

    ERIC Educational Resources Information Center

    Ertel, Monica M.

    1984-01-01

    This discussion of current microcomputer technologies available to libraries focuses on software applications in four major classifications: communications (online database searching); word processing; administration; and database management systems. Specific examples of library applications are given and six references are cited. (EJS)

  12. Towards linked open gene mutations data

    PubMed Central

    2012-01-01

    Background With the advent of high-throughput technologies, a great wealth of variation data is being produced. Such information may constitute the basis for correlation analyses between genotypes and phenotypes and, in the future, for personalized medicine. Several databases on gene variation exist, but this kind of information is still scarce in the Semantic Web framework. In this paper, we discuss issues related to the integration of mutation data in the Linked Open Data infrastructure, part of the Semantic Web framework. We present the development of a mapping from the IARC TP53 Mutation database to RDF and the implementation of servers publishing this data. Methods A version of the IARC TP53 Mutation database implemented in a relational database was used as the first test set. Automatic mappings to RDF were first created by using D2RQ and later manually refined by introducing concepts and properties from domain vocabularies and ontologies, as well as links to Linked Open Data implementations of various systems of biomedical interest. Since D2RQ query performance is lower than that achievable with an RDF archive, the generated data was also loaded into a dedicated system based on tools from the Jena software suite. Results We have implemented a D2RQ Server for TP53 mutation data, providing data on a subset of the IARC database, including gene variations, somatic mutations, and bibliographic references. The server allows users to browse the RDF graph via links both between classes and to external systems. An alternative interface offers improved performance for SPARQL queries. The resulting data can be explored by using any Semantic Web browser or application. Conclusions This has been the first case of a mutation database exposed as Linked Data. A revised version of our prototype, including further concepts and IARC TP53 Mutation database data sets, is under development. The publication of variation information as Linked Data opens new perspectives: the exploitation of SPARQL searches on mutation data and other biological databases may support data retrieval that is presently not possible. Moreover, reasoning on integrated variation data may support discoveries towards personalized medicine. PMID:22536974

  13. Towards linked open gene mutations data.

    PubMed

    Zappa, Achille; Splendiani, Andrea; Romano, Paolo

    2012-03-28

    With the advent of high-throughput technologies, a great wealth of variation data is being produced. Such information may constitute the basis for correlation analyses between genotypes and phenotypes and, in the future, for personalized medicine. Several databases on gene variation exist, but this kind of information is still scarce in the Semantic Web framework. In this paper, we discuss issues related to the integration of mutation data in the Linked Open Data infrastructure, part of the Semantic Web framework. We present the development of a mapping from the IARC TP53 Mutation database to RDF and the implementation of servers publishing this data. A version of the IARC TP53 Mutation database implemented in a relational database was used as the first test set. Automatic mappings to RDF were first created by using D2RQ and later manually refined by introducing concepts and properties from domain vocabularies and ontologies, as well as links to Linked Open Data implementations of various systems of biomedical interest. Since D2RQ query performance is lower than that achievable with an RDF archive, the generated data was also loaded into a dedicated system based on tools from the Jena software suite. We have implemented a D2RQ Server for TP53 mutation data, providing data on a subset of the IARC database, including gene variations, somatic mutations, and bibliographic references. The server allows users to browse the RDF graph via links both between classes and to external systems. An alternative interface offers improved performance for SPARQL queries. The resulting data can be explored by using any Semantic Web browser or application. This has been the first case of a mutation database exposed as Linked Data. A revised version of our prototype, including further concepts and IARC TP53 Mutation database data sets, is under development. The publication of variation information as Linked Data opens new perspectives: the exploitation of SPARQL searches on mutation data and other biological databases may support data retrieval that is presently not possible. Moreover, reasoning on integrated variation data may support discoveries towards personalized medicine.
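    The relational-to-RDF mapping described above can be illustrated with a minimal, self-contained sketch. The table schema, namespace and predicate names below are invented placeholders for illustration only, not the actual IARC TP53 schema or the D2RQ tool itself; the sketch just shows the general pattern of turning relational rows into N-Triples.

```python
import sqlite3

# Hypothetical miniature of a mutation table (illustrative schema only).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mutation (id INTEGER PRIMARY KEY, gene TEXT, "
    "codon INTEGER, effect TEXT)"
)
conn.executemany(
    "INSERT INTO mutation (gene, codon, effect) VALUES (?, ?, ?)",
    [("TP53", 175, "missense"), ("TP53", 248, "missense")],
)

BASE = "http://example.org/mutation/"  # placeholder namespace

def row_to_triples(row):
    """Map one relational row to RDF triples in N-Triples syntax."""
    rid, gene, codon, effect = row
    subject = f"<{BASE}{rid}>"
    return [
        f'{subject} <{BASE}gene> "{gene}" .',
        f'{subject} <{BASE}codon> "{codon}" .',
        f'{subject} <{BASE}effect> "{effect}" .',
    ]

triples = []
for row in conn.execute(
    "SELECT id, gene, codon, effect FROM mutation ORDER BY id"
):
    triples.extend(row_to_triples(row))

for t in triples:
    print(t)
```

    In a real deployment the generated triples would be refined against domain ontologies and loaded into a triplestore (as the authors did with the Jena suite) so that SPARQL queries can be answered efficiently.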

  14. Advances in Satellite Microwave Precipitation Retrieval Algorithms Over Land

    NASA Astrophysics Data System (ADS)

    Wang, N. Y.; You, Y.; Ferraro, R. R.

    2015-12-01

    Precipitation plays a key role in the earth's climate system, particularly in its water and energy balance. Satellite microwave (MW) observations of precipitation provide a viable means to achieve global measurement of precipitation with sufficient sampling density and accuracy. However, obtaining accurate precipitation information over land from satellite MW observations is a challenging problem. The Goddard Profiling Algorithm (GPROF) for the Global Precipitation Measurement (GPM) mission is built around the Bayesian formulation (Evans et al., 1995; Kummerow et al., 1996). GPROF uses the likelihood function and the prior probability distribution function to calculate the expected value of the precipitation rate, given the observed brightness temperatures. It is particularly convenient to draw samples for the prior PDF from a predefined database of observations or models. The GPROF algorithm does not search all database entries but only the subset thought to correspond to the actual observation. The GPM GPROF V1 database focuses on stratification by surface emissivity class, land surface temperature and total precipitable water. However, there is much uncertainty as to what information is optimal for subsetting the database under different conditions. To this end, we conduct a database stratification study using the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation, Special Sensor Microwave Imager/Sounder (SSMIS) and Advanced Technology Microwave Sounder (ATMS) data, together with reanalysis data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA). Our database study (You et al., 2015) shows that environmental factors such as surface elevation, relative humidity, storm vertical structure and height, and ice thickness can help stratify a single large database into smaller and more homogeneous subsets, in which the surface condition and precipitation vertical profiles are similar. It is found that the probability of detection (POD) increases by about 8% and 12% when using stratified databases for rainfall and snowfall detection, respectively. In addition, by considering the relative humidity in the lower troposphere and the vertical velocity at 700 hPa in the precipitation detection process, the POD for snowfall detection is further increased by 20.4%, from 56.0% to 76.4%.
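    The Bayesian database-retrieval idea described above can be sketched in a few lines. The brightness-temperature/rain-rate pairs, the Gaussian error model and the stratification threshold below are all made-up assumptions for illustration, not GPROF's actual database or error model; the sketch only shows the expected-value calculation and how stratification restricts the search to a more homogeneous subset.

```python
import math

# Toy a-priori database: (brightness temperature in K, rain rate in mm/h).
# Values are invented for illustration only.
database = [(270.0, 0.0), (260.0, 1.0), (250.0, 4.0), (240.0, 10.0)]

SIGMA = 5.0  # assumed observation-error standard deviation (K)

def expected_rain_rate(tb_obs, entries):
    """Bayesian expected rain rate given an observed Tb:
    E[R | Tb] = sum_i R_i * w_i / sum_i w_i,
    with Gaussian weights w_i = exp(-(Tb - Tb_i)^2 / (2 sigma^2))."""
    weights = [math.exp(-((tb_obs - tb) ** 2) / (2 * SIGMA ** 2))
               for tb, _ in entries]
    total = sum(weights)
    return sum(w * r for w, (_, r) in zip(weights, entries)) / total

# Stratification: search only a homogeneous subset (standing in for, e.g.,
# one surface-emissivity class) instead of the full database.
stratified = [e for e in database if e[0] <= 260.0]

rate_full = expected_rain_rate(255.0, database)
rate_strat = expected_rain_rate(255.0, stratified)
print(f"full database:  {rate_full:.2f} mm/h")
print(f"stratified set: {rate_strat:.2f} mm/h")
```

    Restricting the prior database to entries that match the observed scene is what the stratification study above evaluates: with more homogeneous subsets, the Bayesian average is taken over physically similar profiles.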

  15. Factors that influence the recognition, reporting and resolution of incidents related to medical devices and other healthcare technologies: a systematic review.

    PubMed

    Polisena, Julie; Gagliardi, Anna; Urbach, David; Clifford, Tammy; Fiander, Michelle

    2015-03-29

    Medical devices have improved the treatment of many medical conditions. Despite their benefit, the use of devices can lead to unintended incidents, potentially resulting in unnecessary harm, injury or complications to the patient, a complaint, loss or damage. Devices are used in hospitals on a routine basis. Research to date, however, has been primarily limited to describing incident rates, so the optimal design of a hospital-based surveillance system remains unclear. Our research objectives were twofold: i) to explore factors that influence device-related incident recognition, reporting and resolution and ii) to investigate interventions or strategies to improve the recognition, reporting and resolution of medical device-related incidents. We searched the bibliographic databases MEDLINE, Embase, the Cochrane Central Register of Controlled Trials and PsycINFO. Grey literature (literature that is not commercially available) was searched for studies, published from 2003 to 2014, on factors that influence incident recognition, reporting and resolution, and on interventions or strategies for their improvement. Although we focused on medical devices, other health technologies were eligible for inclusion. Thirty studies were included in our systematic review, but most studies were concentrated on other health technologies. The study findings indicate that fear of punishment, uncertainty about what should be reported and how incident reports will be used, and time constraints on incident reporting are common barriers to incident recognition and reporting. Relevant studies on the resolution of medical errors were not found. Strategies to improve error reporting include the use of an electronic error reporting system, increased training and feedback to frontline clinicians about the reported error. 
The available evidence on factors influencing medical device-related incident recognition, reporting and resolution by healthcare professionals can inform data collection and analysis in future studies. Since evidence gaps on medical device-related incidents exist, telephone interviews with frontline clinicians will be conducted to solicit information about their experiences with medical devices and suggested strategies for device surveillance improvement in a hospital context. Further research also should investigate the impact of human, system, organizational and education factors on the development and implementation of local medical device surveillance systems.

  16. Physical Samples Linked Data in Action

    NASA Astrophysics Data System (ADS)

    Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.

    2017-12-01

    Most data and metadata related to physical samples currently reside in isolated relational databases driven by diverse data models. The challenge of sharing, interchanging and integrating data from these different relational databases motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), the SPARQL query language, and the Web Ontology Language (OWL). In the last few years, we have released four knowledge graphs centered on physical samples, covering the System for Earth Sample Registration (SESAR), the USGS National Geochemical Database (NGDC), the Ocean Biogeographic Information System (OBIS), and the EarthChem Database. Currently the four knowledge graphs contain over 12 million facts (triples) about objects of interest to the geoscience domain. Choosing appropriate domain ontologies to represent the context of the data is at the core of this work. The GeoLink ontology, developed by the EarthCube GeoLink project, was used as the top level to represent common concepts such as person, organization and cruise. The physical sample ontology developed by the Interdisciplinary Earth Data Alliance (IEDA) and the Darwin Core vocabulary were used as the second level to describe details about geological samples and biological diversity. We also focused on finding and building the best tool chains to support the whole life cycle of publishing our linked data, including information retrieval, linked data browsing and data visualization. Currently, Morph, Virtuoso Server, LodView, LodLive, and YASGUI are employed for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Persistent digital identifiers are another main focus. Open Researcher & Contributor IDs (ORCIDs), International Geo Sample Numbers (IGSNs), the Global Research Identifier Database (GRID) and other persistent identifiers were used to link resources such as persons, samples, organizations and cruises across the various graphs. This work is supported by the EarthCube "GeoLink" project (NSF# ICER14-40221 and others) and the "USGS-IEDA Partnership to Support a Data Lifecycle Framework and Tools" project (USGS# G13AC00381).
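    The cross-graph linking via persistent identifiers described above can be sketched as a tiny in-memory triple store with wildcard pattern matching (the role SPARQL variables play in a real triplestore). The identifiers and predicate names below are invented placeholders, not real IGSN, ORCID or GRID records.

```python
# Toy knowledge graph linking samples, a person and a cruise through
# persistent identifiers (all values are hypothetical placeholders).
graph = {
    ("igsn:ABC123", "collectedBy", "orcid:0000-0001-0000-0000"),
    ("igsn:ABC123", "collectedOn", "cruise:EX1"),
    ("igsn:XYZ789", "collectedBy", "orcid:0000-0001-0000-0000"),
    ("orcid:0000-0001-0000-0000", "affiliatedWith", "grid:example.1"),
}

def match(triples, s=None, p=None, o=None):
    """Return triples matching a (subject, predicate, object) pattern;
    None acts as a wildcard, like a variable in a SPARQL query."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# "Which samples were collected by this researcher?" -- the kind of
# cross-graph join that shared persistent identifiers make possible.
samples = sorted(ts for ts, _, _ in
                 match(graph, p="collectedBy",
                       o="orcid:0000-0001-0000-0000"))
print(samples)
```

    Because the same ORCID appears in every graph that mentions the researcher, a single pattern joins records that originated in separate relational databases; that is the payoff of publishing the data as linked triples rather than as isolated tables.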

  17. Older adults' perceptions of technologies aimed at falls prevention, detection or monitoring: a systematic review.

    PubMed

    Hawley-Hague, Helen; Boulton, Elisabeth; Hall, Alex; Pfeiffer, Klaus; Todd, Chris

    2014-06-01

    Over recent years a number of Information and Communication Technologies (ICTs) have emerged aimed at falls prevention, falls detection and alarms for use in case of a fall. There is also a range of ICT interventions which have been created or adapted to be proactive in preventing falls, such as those which provide strength and balance training to older adults. However, there are issues related to the adoption and continued use of these technologies by older adults. This review provides an overview of older adults' perceptions of falls technologies. We undertook systematic searches of MEDLINE, EMBASE, CINAHL, PsycINFO, COMPENDEX and the Cochrane database. Key search terms included 'older adults', 'seniors', 'preference', 'attitudes' and a wide range of technologies, as well as the key word 'fall*'. We considered all studies that included older adults aged 50 and above. Studies had to include technologies related specifically to falls prevention, detection or monitoring. The Joanna Briggs Institute (JBI) tool and the Quality Assessment Tool for Quantitative Studies by the Effective Public Health Practice Project (EPHPP) were used. We identified 76 potentially relevant papers. Some 21 studies were considered for quality review. Twelve qualitative studies, three quantitative studies and six mixed-methods studies were included. The literature related to technologies aimed at predicting, monitoring and preventing falls suggests that intrinsic factors related to older adults' attitudes around control, independence and perceived need/requirements for safety are important for their motivation to use and continue using technologies. Extrinsic factors such as usability, feedback gained and costs are important elements which support these attitudes and perceptions. Positive messages about the benefits of falls technologies for promoting healthy active ageing and independence are critical, as is ensuring that the technologies are simple, reliable and effective and tailored to individual need. The technologies need to be clearly described in research, and older people's attitudes towards different sorts of technologies must be clarified if specific recommendations are to be made. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Early economic evaluation of emerging health technologies: protocol of a systematic review

    PubMed Central

    2014-01-01

    Background The concept of early health technology assessment, discussed for well over a decade, has now been collaboratively implemented by industry, government, and academia to select and expedite the development of emerging technologies that may address the needs of patients and health systems. Early economic evaluation is essential to assess the value of emerging technologies, but empirical data to inform the current practice of early evaluation is limited. We propose a systematic review of early economic evaluation studies in order to better understand the current practice. Methods/design This protocol describes a systematic review of economic evaluation studies of regulated health technologies in which the evaluation is conducted prior to regulatory approval and when the technology effectiveness is not well established. Included studies must report an economic evaluation, defined as the comparative analysis of alternatives with respect to their associated costs and health consequences, and must evaluate some regulated health technology such as pharmaceuticals, biologics, high-risk medical devices, or biomarkers. We will conduct the literature search on multiple databases, including MEDLINE, EMBASE, the Centre for Reviews and Dissemination Databases, and EconLit. Additional citations will be identified via scanning reference lists and author searching. We suspect that many early economic evaluation studies are unpublished, especially those conducted for internal use only. Additionally, we will use a chain-referral sampling approach to identify authors of unpublished studies who work in technology discovery and development, starting out with our contact lists and authors who published relevant studies. Citation screening and full-text review will be conducted by pairs of reviewers. 
Abstracted data will include those related to the decision context and decision problem of the early evaluation, evaluation methods (e.g., data sources, methods, and assumptions used to identify, measure, and value the likely effectiveness and the costs and consequences of the new technology, handling of uncertainty), and whether the study results adequately address the main study question or objective. Data will be summarized overall and stratified by publication status. Discussion This study is timely to inform early economic evaluation practice, given the international trend in early health technology assessment initiatives. PMID:25055987

  19. Early economic evaluation of emerging health technologies: protocol of a systematic review.

    PubMed

    Pham, Ba'; Tu, Hong Anh Thi; Han, Dolly; Pechlivanoglou, Petros; Miller, Fiona; Rac, Valeria; Chin, Warren; Tricco, Andrea C; Paulden, Mike; Bielecki, Joanna; Krahn, Murray

    2014-07-23

    The concept of early health technology assessment, discussed for well over a decade, has now been collaboratively implemented by industry, government, and academia to select and expedite the development of emerging technologies that may address the needs of patients and health systems. Early economic evaluation is essential to assess the value of emerging technologies, but empirical data to inform the current practice of early evaluation is limited. We propose a systematic review of early economic evaluation studies in order to better understand the current practice. This protocol describes a systematic review of economic evaluation studies of regulated health technologies in which the evaluation is conducted prior to regulatory approval and when the technology effectiveness is not well established. Included studies must report an economic evaluation, defined as the comparative analysis of alternatives with respect to their associated costs and health consequences, and must evaluate some regulated health technology such as pharmaceuticals, biologics, high-risk medical devices, or biomarkers. We will conduct the literature search on multiple databases, including MEDLINE, EMBASE, the Centre for Reviews and Dissemination Databases, and EconLit. Additional citations will be identified via scanning reference lists and author searching. We suspect that many early economic evaluation studies are unpublished, especially those conducted for internal use only. Additionally, we will use a chain-referral sampling approach to identify authors of unpublished studies who work in technology discovery and development, starting out with our contact lists and authors who published relevant studies. Citation screening and full-text review will be conducted by pairs of reviewers. 
Abstracted data will include those related to the decision context and decision problem of the early evaluation, evaluation methods (e.g., data sources, methods, and assumptions used to identify, measure, and value the likely effectiveness and the costs and consequences of the new technology, handling of uncertainty), and whether the study results adequately address the main study question or objective. Data will be summarized overall and stratified by publication status. This study is timely to inform early economic evaluation practice, given the international trend in early health technology assessment initiatives.

  20. Applying World Wide Web technology to the study of patients with rare diseases.

    PubMed

    de Groen, P C; Barry, J A; Schaller, W J

    1998-07-15

    Randomized, controlled trials of sporadic diseases are rarely conducted. Recent developments in communication technology, particularly the World Wide Web, allow efficient dissemination and exchange of information. However, software for the identification of patients with a rare disease and subsequent data entry and analysis in a secure Web database are currently not available. To study cholangiocarcinoma, a rare cancer of the bile ducts, we developed a computerized disease tracing system coupled with a database accessible on the Web. The tracing system scans computerized information systems on a daily basis and forwards demographic information on patients with bile duct abnormalities to an electronic mailbox. If informed consent is given, the patient's demographic and preexisting medical information available in medical database servers are electronically forwarded to a UNIX research database. Information from further patient-physician interactions and procedures is also entered into this database. The database is equipped with a Web user interface that allows data entry from various platforms (PC-compatible, Macintosh, and UNIX workstations) anywhere inside or outside our institution. To ensure patient confidentiality and data security, the database includes all security measures required for electronic medical records. The combination of a Web-based disease tracing system and a database has broad applications, particularly for the integration of clinical research within clinical practice and for the coordination of multicenter trials.
