Sample records for database management research

  1. Construction of databases: advances and significance in clinical research.

    PubMed

    Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian

    2015-12-01

    Widely used in clinical research, databases are a data-management automation technology and among the most efficient tools for managing data. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.

  2. Maintaining Research Documents with Database Management Software.

    ERIC Educational Resources Information Center

    Harrington, Stuart A.

    1999-01-01

    Discusses taking notes for research projects and organizing them into card files; reviews the literature on personal filing systems; introduces the basic process of database management; and offers a plan for managing research notes. Describes field groups and field definitions, data entry, and creating reports. (LRW)

  3. Generalized Database Management System Support for Numeric Database Environments.

    ERIC Educational Resources Information Center

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  4. National Library of Medicine

    MedlinePlus

    ... Disasters and Public Health Emergencies: The NLM Disaster Information Management Research Center has tools, guides, and databases to ...

  5. Design and deployment of a large brain-image database for clinical and nonclinical research

    NASA Astrophysics Data System (ADS)

    Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.

    2004-04-01

    An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development, and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading, and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The database has a multi-tier client-server architecture comprising a Relational Database Management System, a security layer, an application layer, and a user interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to handle. We have used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on large populations of cases. Medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. The prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
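
    The following is a minimal sketch, in Python with the standard-library sqlite3 module, of the kind of schema such a repository implies: identifying fields live only in the patient table, an anonymized code is what queries return, and images carry the clinically relevant metadata used for sorting. All table and column names are hypothetical, not the paper's actual schema (the deployed system sat behind a Java application layer on a full RDBMS).

    # Hypothetical schema illustrating anonymized retrieval from an image repository.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE patient (
        patient_id      INTEGER PRIMARY KEY,
        anonymized_code TEXT UNIQUE NOT NULL,  -- released instead of name/MRN
        name            TEXT,                  -- identifying; never exported
        birth_year      INTEGER
    );
    CREATE TABLE image (
        image_id   INTEGER PRIMARY KEY,
        patient_id INTEGER REFERENCES patient(patient_id),
        modality   TEXT,   -- e.g. 'MR', 'CT'
        pathology  TEXT,   -- clinically relevant sorting key
        file_path  TEXT
    );
    """)
    conn.execute("INSERT INTO patient (anonymized_code, name, birth_year) "
                 "VALUES ('BRN-0001', 'Jane Doe', 1960)")
    conn.execute("INSERT INTO image (patient_id, modality, pathology, file_path) "
                 "VALUES (1, 'MR', 'glioma', '/data/brn0001_t1.dcm')")

    # Query images by clinically relevant metadata without exposing identity.
    rows = conn.execute("""
        SELECT p.anonymized_code, i.modality, i.file_path
        FROM image i JOIN patient p ON p.patient_id = i.patient_id
        WHERE i.pathology = ?""", ("glioma",)).fetchall()
    print(rows)  # [('BRN-0001', 'MR', '/data/brn0001_t1.dcm')] -- no name exposed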

  6. How I do it: a practical database management system to assist clinical research teams with data collection, organization, and reporting.

    PubMed

    Lee, Howard; Chapiro, Julius; Schernthaner, Rüdiger; Duran, Rafael; Wang, Zhijun; Gorodetski, Boris; Geschwind, Jean-François; Lin, MingDe

    2015-04-01

    The objective of this study was to demonstrate that an intra-arterial liver therapy clinical research database system is a more workflow-efficient and robust tool for clinical research than a spreadsheet storage system. The database system could be used to generate clinical research study populations easily with custom search and retrieval criteria. A questionnaire was designed and distributed to 21 board-certified radiologists to assess current data storage problems and clinician reception to a database management system. Based on the questionnaire findings, a customized database and user interface system were created to perform automatic calculations of clinical scores, including staging systems such as the Child-Pugh and Barcelona Clinic Liver Cancer, and to facilitate data input and output. Questionnaire participants were favorable to a database system. The interface retrieved study-relevant data accurately and effectively. The database effectively produced easy-to-read study-specific patient populations with custom-defined inclusion/exclusion criteria. The database management system is workflow efficient and robust in retrieving, storing, and analyzing data. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
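
    As an illustration of the automatic score calculation mentioned above, here is a hedged Python sketch of Child-Pugh scoring using the standard published cut-offs; the function name and argument conventions are hypothetical and not taken from the authors' system.

    # Child-Pugh scoring with standard cut-offs; a sketch, not the authors' code.
    def child_pugh(bilirubin_mg_dl, albumin_g_dl, inr, ascites, encephalopathy):
        """ascites/encephalopathy: 0 = none, 1 = mild/grade I-II, 2 = severe/grade III-IV."""
        points = 0
        points += 1 if bilirubin_mg_dl < 2 else 2 if bilirubin_mg_dl <= 3 else 3
        points += 1 if albumin_g_dl > 3.5 else 2 if albumin_g_dl >= 2.8 else 3
        points += 1 if inr < 1.7 else 2 if inr <= 2.3 else 3
        points += ascites + 1
        points += encephalopathy + 1
        grade = "A" if points <= 6 else "B" if points <= 9 else "C"
        return points, grade

    print(child_pugh(1.5, 3.8, 1.2, 0, 0))  # -> (5, 'A')

    Storing such scores as calculated rather than hand-entered fields is what removes the transcription errors a spreadsheet workflow invites.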

  7. Database & information tools for transportation research management : Connecticut transportation research peer exchange report of a thematic peer exchange.

    DOT National Transportation Integrated Search

    2006-05-01

    Specific objectives of the Peer Exchange were to discuss and exchange information about databases and other software used to support the program cycles managed by state transportation research offices. Elements of the program cycle include: ...

  8. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods in which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
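
    A minimal sketch of the subset-generation idea: build a parameterized query from only the criteria the researcher supplies. The table, columns, and sample values below are hypothetical; the actual system was built on MySQL with a PHP front end, so this Python/sqlite3 version only illustrates the pattern.

    # Generate a study subset from centralized metadata with user-defined criteria.
    import sqlite3

    def find_cases(conn, modality=None, finding=None):
        """Build a parameterized query from only the criteria supplied."""
        clauses, params = [], []
        if modality is not None:
            clauses.append("modality = ?")
            params.append(modality)
        if finding is not None:
            clauses.append("finding = ?")
            params.append(finding)
        sql = "SELECT case_id FROM breast_case"
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return [r[0] for r in conn.execute(sql, params)]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE breast_case (case_id INTEGER, modality TEXT, finding TEXT)")
    conn.executemany("INSERT INTO breast_case VALUES (?, ?, ?)",
                     [(1, "US", "mass"), (2, "FFDM", "calcification"), (3, "US", "cyst")])
    print(find_cases(conn, modality="US"))  # -> [1, 3]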

  9. Databases for multilevel biophysiology research available at Physiome.jp.

    PubMed

    Asai, Yoshiyuki; Abe, Takeshi; Li, Li; Oka, Hideki; Nomura, Taishin; Kitano, Hiroaki

    2015-01-01

    Physiome.jp (http://physiome.jp) is a portal site inaugurated in 2007 to support model-based research in physiome and systems biology. At Physiome.jp, several tools and databases are available to support construction of physiological, multi-hierarchical, large-scale models. There are three databases in Physiome.jp, housing mathematical models, morphological data, and time-series data. In late 2013, the site was fully renovated, and in May 2015, new functions were implemented to provide information infrastructure to support collaborative activities for developing models and performing simulations within the database framework. This article describes updates to the databases implemented since 2013, including cooperation among the three databases, interactive model browsing, user management, version management of models, management of parameter sets, and interoperability with applications.

  10. Concierge: Personal Database Software for Managing Digital Research Resources

    PubMed Central

    Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro

    2007-01-01

    This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800

  11. Implementation of an open adoption research data management system for clinical studies.

    PubMed

    Müller, Jan; Heiss, Kirsten Ingmar; Oberhoffer, Renate

    2017-07-06

    Research institutions need to manage multiple studies with individual data sets, processing rules, and different permissions. So far, there is no standard technology that provides an easy-to-use environment to create databases and user interfaces for clinical trials or research studies. Therefore, various software solutions are being used, from custom software explicitly designed for a specific study, to cost-intensive commercial Clinical Trial Management Systems (CTMS), to very basic approaches with self-designed Microsoft® databases. The technology applied to conduct those studies varies tremendously from study to study, making it difficult to evaluate data across various studies (meta-analysis) and to keep a defined level of quality in database design, data processing, display, and export. Furthermore, the systems being used to collect study data are often operated redundantly to systems used in patient care. As a consequence, data collection in studies is inefficient, and data quality may suffer from unsynchronized datasets, non-normalized database scenarios, and manually executed data transfers. With OpenCampus Research we implemented an open adoption software (OAS) solution on an open source basis, which provides a standard environment for state-of-the-art research database management at low cost.

  12. Evidence generation from healthcare databases: recommendations for managing change.

    PubMed

    Bourke, Alison; Bate, Andrew; Sauer, Brian C; Brown, Jeffrey S; Hall, Gillian C

    2016-07-01

    There is an increasing reliance on databases of healthcare records for pharmacoepidemiology and other medical research, and such resources are often accessed over a long period of time, so it is vital to consider the impact of changes in data, access methodology, and the environment. The authors discuss change in communication and management, and provide a checklist of issues to consider for both database providers and users. The scope of the paper is database research, and changes are considered in relation to the three main components of database research: the data content itself, how it is accessed, and the support and tools needed to use the database. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Implementing a Microcomputer Database Management System.

    ERIC Educational Resources Information Center

    Manock, John J.; Crater, K. Lynne

    1985-01-01

    Current issues in selecting, structuring, and implementing microcomputer database management systems in research administration offices are discussed, and their capabilities are illustrated with the system used by the University of North Carolina at Wilmington. Trends in microcomputer technology and their likely impact on research administration…

  14. Building an Ontology-driven Database for Clinical Immune Research

    PubMed Central

    Ma, Jingming

    2006-01-01

    Clinical research on the immune response usually generates a huge amount of biomedical testing data over a certain period of time. User-friendly data management systems based on a relational database will help immunologists/clinicians fully manage the data. On the other hand, the same biological assays, such as ELISPOT and flow cytometric assays, are involved in immunological experiments regardless of study purpose. The reuse of biological knowledge is one of the driving forces behind ontology-driven data management. Therefore, an ontology-driven database will help to handle different clinical immune research studies and help immunologists/clinicians easily understand each other's immunological data. We will discuss outlines for building an ontology-driven data management system for clinical immune research (ODMim). PMID:17238637

  15. AGRICULTURAL BEST MANAGEMENT PRACTICE EFFECTIVENESS DATABASE

    EPA Science Inventory

    Resource Purpose: The Agricultural Best Management Practice Effectiveness Database contains the results of research projects which have collected water quality data for the purpose of determining the effectiveness of agricultural management practices in reducing pollutants ...

  16. A Conceptual Model and Database to Integrate Data and Project Management

    NASA Astrophysics Data System (ADS)

    Guarinello, M. L.; Edsall, R.; Helbling, J.; Evaldt, E.; Glenn, N. F.; Delparte, D.; Sheneman, L.; Schumaker, R.

    2015-12-01

    Data management is critically foundational to doing effective science in our data-intensive research era and, done well, can enhance collaboration, increase the value of research data, and support requirements by funding agencies to make scientific data and other research products available through publicly accessible online repositories. However, there are few examples (but see the Long-term Ecological Research Network Data Portal) of these data being provided in such a manner that allows exploration within the context of the research process: What specific research questions do these data seek to answer? What data were used to answer these questions? What data would have been helpful to answer these questions but were not available? We propose an agile conceptual model and database design, as well as example results, that integrate data management with project management not only to maximize the value of research data products but to enhance collaboration during the project and the process of project management itself. In our project, which we call 'Data Map,' we used agile principles by adopting a user-focused approach and by designing our database to be simple, responsive, and expandable. We initially designed Data Map for the Idaho EPSCoR project "Managing Idaho's Landscapes for Ecosystem Services (MILES)" (see https://www.idahoecosystems.org//) and will present example results for this work. We consulted with our primary users (project managers, data managers, and researchers) to design the Data Map. Results will be useful to project managers and to funding agencies reviewing progress because they will readily provide answers to the questions "For which research projects/questions are data available and/or being generated by MILES researchers?" and "Which research projects/questions are associated with each of the 3 primary questions from the MILES proposal?" To be responsive to the needs of the project, we chose to streamline our design for the prototype database and build it in a way that is modular and can be changed or expanded to meet user needs. Our hope is that others, especially those managing large collaborative research grants, will be able to use our project model and database design to enhance the value of their project and data management both during and following the active research period.
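
    A sketch of the kind of relational core such a conceptual model suggests, linking projects, research questions, and datasets so that progress questions like the two quoted above become simple joins. All names here are hypothetical rather than the actual Data Map schema.

    # Hypothetical project/question/dataset model; "which data answer which
    # question?" becomes a join rather than a spreadsheet hunt.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE project  (project_id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE question (question_id INTEGER PRIMARY KEY,
                           project_id INTEGER REFERENCES project(project_id),
                           text TEXT);
    CREATE TABLE dataset  (dataset_id INTEGER PRIMARY KEY, name TEXT, status TEXT);
    -- many-to-many: one dataset may serve several questions
    CREATE TABLE answers  (question_id INTEGER REFERENCES question(question_id),
                           dataset_id INTEGER REFERENCES dataset(dataset_id));
    INSERT INTO project  VALUES (1, 'MILES');
    INSERT INTO question VALUES (1, 1, 'How do land-use changes alter ecosystem services?');
    INSERT INTO dataset  VALUES (1, 'lidar_2014', 'available');
    INSERT INTO answers  VALUES (1, 1);
    """)
    # "For which research questions are data available?"
    rows = conn.execute("""
        SELECT q.text, d.name FROM question q
        JOIN answers a ON a.question_id = q.question_id
        JOIN dataset d ON d.dataset_id = a.dataset_id
        WHERE d.status = 'available'""").fetchall()
    print(rows)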

  17. Resident database interfaces to the DAVID system, a heterogeneous distributed database management system

    NASA Technical Reports Server (NTRS)

    Moroh, Marsha

    1988-01-01

    A methodology for building interfaces of resident database management systems to a heterogeneous distributed database management system under development at NASA, the DAVID system, was developed. The feasibility of that methodology was demonstrated by construction of the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented. The work performed and the results are summarized.

  18. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and a Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to the development of GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst, and the long-term goal is to expand this database to manage and study karst features at national and global scales.
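
    The abstract's points about SQL transactions and data logs can be illustrated with a short sketch: a feature edit and its log entry commit atomically, so the log never disagrees with the data. Table names are hypothetical; the Minnesota KFD itself ran on a full DBMS with DBA-managed permissions.

    # Atomic feature edit plus log entry: both rows commit, or neither does.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE karst_feature (feature_id INTEGER PRIMARY KEY, kind TEXT, depth_m REAL);
    CREATE TABLE data_log (feature_id INTEGER, action TEXT,
                           at TEXT DEFAULT CURRENT_TIMESTAMP);
    """)
    try:
        with conn:  # one transaction; rolls back on any exception
            cur = conn.execute(
                "INSERT INTO karst_feature (kind, depth_m) VALUES (?, ?)",
                ("sinkhole", 12.5))
            conn.execute(
                "INSERT INTO data_log (feature_id, action) VALUES (?, 'insert')",
                (cur.lastrowid,))
    except sqlite3.Error:
        pass  # transaction rolled back; log and table stay consistent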

  19. Insight: An ontology-based integrated database and analysis platform for epilepsy self-management research.

    PubMed

    Sahoo, Satya S; Ramesh, Priya; Welter, Elisabeth; Bukach, Ashley; Valdez, Joshua; Tatsuoka, Curtis; Bamps, Yvan; Stoll, Shelley; Jobst, Barbara C; Sajatovic, Martha

    2016-10-01

    We present Insight as an integrated database and analysis platform for epilepsy self-management research as part of the national Managing Epilepsy Well Network. Insight is the only available informatics platform for accessing and analyzing integrated data from multiple epilepsy self-management research studies with several new data management features and user-friendly functionalities. The features of Insight include: (1) use of Common Data Elements defined by members of the research community and an epilepsy domain ontology for data integration and querying, (2) visualization tools to support real-time exploration of data distribution across research studies, and (3) an interactive visual query interface for provenance-enabled research cohort identification. The Insight platform contains data from five completed epilepsy self-management research studies covering various categories of data, including depression, quality of life, seizure frequency, and socioeconomic information. The data represents over 400 participants with 7552 data points. The Insight data exploration and cohort identification query interface has been developed using Ruby on Rails Web technology and an open source Web Ontology Language Application Programming Interface to support ontology-based reasoning. We have developed an efficient ontology management module that automatically updates the ontology mappings each time a new version of the Epilepsy and Seizure Ontology is released. The Insight platform features a Role-based Access Control module to authenticate and effectively manage user access to different research studies. User access to Insight is managed by the Managing Epilepsy Well Network database steering committee consisting of representatives of all current collaborating centers of the Managing Epilepsy Well Network. New research studies are being continuously added to the Insight database, and the size as well as the unique coverage of the dataset allows investigators to conduct aggregate data analysis that will inform the next generation of epilepsy self-management studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
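
    A minimal sketch of the role-based access control idea described above: each role maps to the set of studies it may read, and a single check gates every query. The role names and study identifiers are hypothetical, not Insight's actual configuration.

    # Role-based access control: one check gates per-study data access.
    ROLE_STUDIES = {
        "mew_admin":   {"*"},                              # steering-committee level
        "site_a_user": {"depression_study", "qol_study"},  # hypothetical roles
    }

    def can_access(role: str, study: str) -> bool:
        allowed = ROLE_STUDIES.get(role, set())
        return "*" in allowed or study in allowed

    assert can_access("mew_admin", "seizure_freq_study")
    assert not can_access("site_a_user", "seizure_freq_study")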

  20. NBIC: National Ballast Information Clearinghouse

    Science.gov Websites

    Smithsonian Environmental Research Center / US Coast Guard. NBIC Database staff: Database Manager: Tami Huber; Senior Analyst/Ecologist: Mark Minton; Data Managers: Ashley Arnwine, Jessica Hardee, Amanda Reynolds; Database Design and Programming/Application Programming: Paul Winterbauer.

  1. Geoscience research databases for coastal Alabama ecosystem management

    USGS Publications Warehouse

    Hummell, Richard L.

    1995-01-01

    Effective management of complex coastal ecosystems necessitates access to scientific knowledge that can be acquired through a multidisciplinary approach involving Federal and State scientists, taking advantage of agency expertise and resources for the benefit of all participants working toward a set of common research and management goals. Cooperative geoscience investigations have led toward building databases of fundamental scientific knowledge that can be utilized to manage coastal Alabama's natural resources and future development. These databases have been used to assess the occurrence and economic potential of hard mineral resources in the Alabama EEZ, and to support oil spill contingency planning and environmental analysis for coastal Alabama.

  2. Kristin Munch | NREL

    Science.gov Websites

    ... Laboratory Information Management System, Materials Research Society Fall Meeting (2013). Photovoltaics informatics; scientific data management; database and data systems design; database clusters; storage systems integration; and distributed data analytics. She has used her experience in laboratory data management systems, lab ...

  3. Computer Science and Technology: Modeling and Measurement Techniques for Evaluation of Design Alternatives in the Implementation of Database Management Software. Final Report.

    ERIC Educational Resources Information Center

    Deutsch, Donald R.

    This report describes a research effort that was carried out over a period of several years to develop and demonstrate a methodology for evaluating proposed Database Management System designs. The major proposition addressed by this study is embodied in the thesis statement: Proposed database management system designs can be evaluated best through…

  4. Design and utilization of a Flight Test Engineering Database Management System at the NASA Dryden Flight Research Facility

    NASA Technical Reports Server (NTRS)

    Knighton, Donna L.

    1992-01-01

    A Flight Test Engineering Database Management System (FTE DBMS) was designed and implemented at the NASA Dryden Flight Research Facility. The X-29 Forward Swept Wing Advanced Technology Demonstrator flight research program was chosen for the initial system development and implementation. The FTE DBMS greatly assisted in planning and 'mass production' card preparation for an accelerated X-29 research program. Improved Test Plan tracking and maneuver management for a high flight-rate program were proven, and flight rates of up to three flights per day, two times per week were maintained.

  5. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel, and research theme, and diving information such as dive number, name of submersible, and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel® spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, due to manual metadata entry into individual websites by administrators. The other is differing data types and representations of metadata in each website. To solve those problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then daily differential uptake of metadata from the XML database to databases in several distribution websites is automatically processed using a convertor defined by the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.
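
    The "daily differential uptake" step can be sketched as a comparison of source and target metadata stores in which only new or changed records are pushed. Field names and values below are hypothetical; CMO itself does this through EAI software over an XML database.

    # Push only cruise-metadata records that are new or changed since last sync.
    def differential_uptake(source: dict, target: dict) -> dict:
        """source/target map cruise_id -> metadata dict; returns records to push."""
        changed = {}
        for cruise_id, meta in source.items():
            if target.get(cruise_id) != meta:
                changed[cruise_id] = meta
        return changed

    source = {"KR11-01": {"vessel": "Kairei", "theme": "hydrography"},
              "YK11-02": {"vessel": "Yokosuka", "theme": "geology"}}
    target = {"KR11-01": {"vessel": "Kairei", "theme": "hydrography"}}
    print(differential_uptake(source, target))  # only YK11-02 needs uptake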

  6. Research information needs on terrestrial vertebrate species of the interior Columbia basin and northern portions of the Klamath and Great Basins: a research, development, and application database.

    Treesearch

    Bruce G. Marcot

    1997-01-01

    Research information needs on selected invertebrates and all vertebrates of the interior Columbia River basin and adjacent areas in the United States were collected into a research, development, and application database as part of the Interior Columbia Basin Ecosystem Management Project. The database includes 482 potential research study topics on 232 individual...

  7. [Research and development of medical case database: a novel medical case information system integrating with biospecimen management].

    PubMed

    Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang

    2010-04-01

    To meet the need to manage medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covered basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis, and clinical treatment of patients; physicochemical properties, inventory management, and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed with Visual C++ 6.0 and based on a client/server model, was used to implement medical case and biospecimen management. This system can perform input, browsing, querying, and summarization of case and related biospecimen information, and can automatically synthesize case records based on the database. The system supports management not only of long-term follow-up of individuals, but also of grouped cases organized according to the aim of research. This system can improve the efficiency and quality of clinical research while biospecimens are used in a coordinated way. It realizes synthesized and dynamic management of medical cases and biospecimens, and may be considered a new management platform.

  8. EPA U.S. NATIONAL MARKAL DATABASE: DATABASE DOCUMENTATION

    EPA Science Inventory

    This document describes in detail the U.S. Energy System database developed by EPA's Integrated Strategic Assessment Work Group for use with the MARKAL model. The group is part of the Office of Research and Development and is located in the National Risk Management Research Labor...

  9. Customized laboratory information management system for a clinical and research leukemia cytogenetics laboratory.

    PubMed

    Bakshi, Sonal R; Shukla, Shilin N; Shah, Pankaj M

    2009-01-01

    We developed a Microsoft Access-based laboratory management system to facilitate database management of leukemia patients referred for cytogenetic tests, with regard to karyotyping and fluorescence in situ hybridization (FISH). The database is custom-made for entry of patient data, clinical details, sample details, and cytogenetics test results, and for data mining in various ongoing research areas. A number of clinical research laboratory-related tasks are carried out faster using specific "queries." The tasks include tracking the clinical progression of a particular patient over multiple visits, treatment response, morphological and cytogenetics response, survival time, automatic grouping of patients by research-project inclusion criteria, tracking various processing steps of samples, turn-around time, and revenue generated. Since 2005 we have collected over 5,000 samples. The database is easily updated and is being adapted for various data maintenance and mining needs.

  10. Liz Torres | NREL

    Science.gov Websites

    ... Areas of expertise: customer service; technical savvy; event planning; word processing/desktop publishing; database management. Research interests: website design; database design; computational science. Technology Consulting, Westminster, CO (2007-2012); Administrative Assistant, Source One Management, Denver, CO (2005 ...

  11. Research on computer virus database management system

    NASA Astrophysics Data System (ADS)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat to network information security and a focus of security research. New viruses keep emerging, the number of viruses is growing, and virus classification is increasingly complex. Virus naming cannot be unified because agencies capture viruses at different times. Although each agency has its own virus database, communication among agencies is lacking, virus information is incomplete, or only a small amount of sample information is available. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then gives a computer virus database design scheme addressing information integrity, storage security, and manageability.

  12. A Summary of the Naval Postgraduate School Research Program

    DTIC Science & Technology

    1989-08-30

    Topics include: Fundamental Theory for Automatically Combining Changes to Software Systems; Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; The Multi-lingual, Multi-Model, Multi-Backend Database ...

  13. Adopting a corporate perspective on databases. Improving support for research and decision making.

    PubMed

    Meistrell, M; Schlehuber, C

    1996-03-01

    The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.

  14. Clinical Databases for Chest Physicians.

    PubMed

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
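
    As a sketch of the "base unit of data collection" decision the article discusses: if the base unit is the encounter rather than the patient, repeat visits append rows instead of overwriting fields, which keeps longitudinal questions answerable. The schema below is a hypothetical illustration, not one of the article's example projects.

    # Encounter as the base unit: one row per visit, so trends remain queryable.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE patient   (patient_id INTEGER PRIMARY KEY, mrn TEXT UNIQUE);
    CREATE TABLE encounter (encounter_id INTEGER PRIMARY KEY,
                            patient_id INTEGER REFERENCES patient(patient_id),
                            visit_date TEXT,
                            fev1_pct REAL);  -- a consensus-defined variable
    """)
    conn.execute("INSERT INTO patient (mrn) VALUES ('A1234')")
    conn.executemany(
        "INSERT INTO encounter (patient_id, visit_date, fev1_pct) VALUES (1, ?, ?)",
        [("2018-01-05", 61.0), ("2018-04-02", 58.5)])
    # The FEV1 trend is a query over rows, not a pile of spreadsheet columns.
    print(conn.execute("""SELECT visit_date, fev1_pct FROM encounter
                          WHERE patient_id = 1 ORDER BY visit_date""").fetchall())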

  15. Advanced Traffic Management Systems (ATMS) research analysis database system

    DOT National Transportation Integrated Search

    2001-06-01

    The ATMS Research Analysis Database Systems (ARADS) consists of a Traffic Software Data Dictionary (TSDD) and a Traffic Software Object Model (TSOM) for application to microscopic traffic simulation and signal optimization domains. The purpose of thi...

  16. Use of a Relational Database to Support Clinical Research: Application in a Diabetes Program

    PubMed Central

    Lomatch, Diane; Truax, Terry; Savage, Peter

    1981-01-01

    A database has been established to support conduct of clinical research and monitor delivery of medical care for 1200 diabetic patients as part of the Michigan Diabetes Research and Training Center (MDRTC). Use of an intelligent microcomputer to enter and retrieve the data and use of a relational database management system (DBMS) to store and manage data have provided a flexible, efficient method of achieving both support of small projects and monitoring overall activity of the Diabetes Center Unit (DCU). Simplicity of access to data, efficiency in providing data for unanticipated requests, ease of manipulations of relations, security and “logical data independence” were important factors in choosing a relational DBMS. The ability to interface with an interactive statistical program and a graphics program is a major advantage of this system. Our database currently provides support for the operation and analysis of several ongoing research projects.

  17. A simple versatile solution for collecting multidimensional clinical data based on the CakePHP web application framework.

    PubMed

    Biermann, Martin

    2014-04-01

    Clinical trials aiming for regulatory approval of a therapeutic agent must be conducted according to Good Clinical Practice (GCP). Clinical Data Management Systems (CDMS) are specialized software solutions geared toward GCP trials. They are however less suited for data management in small non-GCP research projects. For use in researcher-initiated non-GCP studies, we developed a client-server database application based on the public domain CakePHP framework. The underlying MySQL database uses a simple data model based on only five data tables. The graphical user interface can be run in any web browser inside the hospital network. Data are validated upon entry. Data contained in external database systems can be imported interactively. Data are automatically anonymized on import, and the key lists identifying the subjects are logged to a restricted part of the database. Data analysis is performed by separate statistics and analysis software connecting to the database via a generic Open Database Connectivity (ODBC) interface. Since its first pilot implementation in 2011, the solution has been applied to seven different clinical research projects covering different clinical problems in different organ systems such as cancer of the thyroid and the prostate glands. This paper shows how the adoption of a generic web application framework is a feasible, flexible, low-cost, and user-friendly way of managing multidimensional research data in researcher-initiated non-GCP clinical projects. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
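
    The anonymize-on-import step described above can be sketched as follows: identifying fields are replaced with a derived subject code, and the code-to-identity key list is kept apart from the research data (the paper logs it to a restricted part of the database). The field names and salting scheme here are hypothetical, not the paper's implementation.

    # Anonymize on import: strip identifiers, keep a separate restricted key list.
    import hashlib

    def anonymize(record: dict, salt: str, key_list: dict) -> dict:
        code = hashlib.sha256((salt + record["national_id"]).encode()).hexdigest()[:10]
        key_list[code] = record["national_id"]  # stored in a restricted table only
        clean = {k: v for k, v in record.items() if k not in ("national_id", "name")}
        clean["subject_code"] = code
        return clean

    key_list = {}
    rec = {"national_id": "01017012345", "name": "Jane Doe", "suv_max": 7.4}
    print(anonymize(rec, salt="per-project-secret", key_list=key_list))
    # research data carry only subject_code; key_list stays behind access control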

  18. Tautomerism in chemical information management systems

    NASA Astrophysics Data System (ADS)

    Warr, Wendy A.

    2010-06-01

    Tautomerism has an impact on many of the processes in chemical information management systems including novelty checking during registration into chemical structure databases; storage of structures; exact and substructure searching in chemical structure databases; and depiction of structures retrieved by a search. The approaches taken by 27 different software vendors and database producers are compared. It is hoped that this comparison will act as a discussion document that could ultimately improve databases and software for researchers in the future.

  19. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    PubMed

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. This system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to an efficient validation of outcome assessment in drug safety database studies.

  20. Science Inventory Products About Land and Waste Management Research

    EPA Pesticide Factsheets

    Resources from the Science Inventory database of EPA's Office of Research and Development, as well as EPA's Science Matters journal, include research on managing contaminated sites and ground water modeling and decontamination technologies.

  1. Land and Waste Management Research Publications in the Science Inventory

    EPA Pesticide Factsheets

    Resources from the Science Inventory database of EPA's Office of Research and Development, as well as EPA's Science Matters journal, include research on managing contaminated sites and ground water modeling and decontamination technologies.

  2. The Use of a Relational Database in Qualitative Research on Educational Computing.

    ERIC Educational Resources Information Center

    Winer, Laura R.; Carriere, Mario

    1990-01-01

    Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.…

  3. A RESEARCH DATABASE FOR IMPROVED DATA MANAGEMENT AND ANALYSIS IN LONGITUDINAL STUDIES

    PubMed Central

    BIELEFELD, ROGER A.; YAMASHITA, TOYOKO S.; KEREKES, EDWARD F.; ERCANLI, EHAT; SINGER, LYNN T.

    2014-01-01

    We developed a research database for a five-year prospective investigation of the medical, social, and developmental correlates of chronic lung disease during the first three years of life. We used the Ingres database management system and the Statit statistical software package. The database includes records containing 1300 variables each, the results of 35 psychological tests, each repeated five times (providing longitudinal data on the child, the parents, and behavioral interactions), both raw and calculated variables, and both missing and deferred values. The four-layer menu-driven user interface incorporates automatic activation of complex functions to handle data verification, missing and deferred values, static and dynamic backup, determination of calculated values, display of database status, reports, bulk data extraction, and statistical analysis. PMID:7596250
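
    Handling both missing and deferred values, as this database does, requires keeping the two states distinct: one is permanently absent, the other is expected at a later study wave. A simple sketch uses sentinel objects that analysis code skips but backfill code can tell apart; this illustrates the idea only, as the original system implemented its own conventions on top of Ingres.

    # Distinguish "missing" (never coming) from "deferred" (coming later).
    MISSING = object()   # value will never be available
    DEFERRED = object()  # value expected at a later wave of the study

    def mean_ignoring(values):
        usable = [v for v in values if v is not MISSING and v is not DEFERRED]
        return sum(usable) / len(usable) if usable else None

    scores = [12.0, DEFERRED, 9.5, MISSING, 11.0]
    print(mean_ignoring(scores))  # 10.83...; deferred slots can be filled later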

  4. Migration from relational to NoSQL database

    NASA Astrophysics Data System (ADS)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by various real-time applications, social networking sites, and sensor devices is of very large volume and unstructured, which makes it difficult for relational database management systems to handle. Data is a very precious component of any application and needs to be analysed after arranging it in some structure. Relational databases are only able to deal with structured data, so there is a need for NoSQL database management systems, which can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow unstructured data to be stored in NoSQL databases. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers proposed mechanisms for the coexistence of NoSQL and relational databases. This paper provides a summary of mechanisms which can be used for mapping data stored in relational databases to NoSQL databases. Various techniques for data transformation and middle-layer solutions are summarised in the paper.
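
    A minimal sketch of the core mapping idea behind such migrations: rows joined across relational tables are denormalized into one nested document per parent row, ready for a document store. Table and field names are hypothetical.

    # Denormalize relational rows into nested documents (relational -> NoSQL).
    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE author (author_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post   (post_id INTEGER PRIMARY KEY,
                         author_id INTEGER REFERENCES author(author_id), body TEXT);
    INSERT INTO author VALUES (1, 'alice');
    INSERT INTO post   VALUES (10, 1, 'hello'), (11, 1, 'world');
    """)

    documents = []
    for author_id, name in conn.execute("SELECT author_id, name FROM author"):
        posts = [{"post_id": pid, "body": body} for pid, body in
                 conn.execute("SELECT post_id, body FROM post WHERE author_id = ?",
                              (author_id,))]
        documents.append({"_id": author_id, "name": name, "posts": posts})

    print(json.dumps(documents))  # ready for a document store such as MongoDB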

  5. Small Business Innovations (Integrated Database)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Because of the diversity of NASA's information systems, it was necessary to develop DAVID as a central database management system. Under a Small Business Innovation Research (SBIR) grant, Ken Wanderman and Associates, Inc. designed software tools enabling scientists to interface with DAVID and commercial database management systems, as well as artificial intelligence programs. The software has been installed at a number of data centers and is commercially available.

  6. Are Bibliographic Management Software Search Interfaces Reliable?: A Comparison between Search Results Obtained Using Database Interfaces and the EndNote Online Search Function

    ERIC Educational Resources Information Center

    Fitzgibbons, Megan; Meert, Deborah

    2010-01-01

    The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…

  7. Interactive, Automated Management of Icing Data

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.

    2009-01-01

    IceVal DatAssistant is software that provides an automated, interactive solution for the management of data from research on aircraft icing. This software consists primarily of (1) a relational database component used to store ice shape and airfoil coordinates and associated data on operational and environmental test conditions and (2) a graphically oriented database access utility, used to upload, download, process, and/or display data selected by the user. The relational database component consists of a Microsoft Access 2003 database file with nine tables containing data of different types. Included in the database are the data for all publicly releasable ice tracings with complete and verifiable test conditions from experiments conducted to date in the Glenn Research Center Icing Research Tunnel. Ice shapes from computational simulations with the corresponding conditions, performed utilizing the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the equivalent experimental runs. The database access component includes ten Microsoft Visual Basic 6.0 (VB) form modules and three VB support modules. Together, these modules enable uploading, downloading, processing, and display of all data contained in the database. This component also affords the capability to perform various database maintenance functions, for example, compacting the database or creating a new, fully initialized but empty database file.

  8. Land, Oil Spill, and Waste Management Research Publications in the Science Inventory

    EPA Pesticide Factsheets

    Resources from the Science Inventory database of EPA's Office of Research and Development, as well as EPA's Science Matters journal, include research on managing contaminated sites and ground water modeling and decontamination technologies.

  9. A DBMS architecture for global change research

    NASA Astrophysics Data System (ADS)

    Hachem, Nabil I.; Gennert, Michael A.; Ward, Matthew O.

    1993-08-01

    The goal of this research is the design and development of an integrated system for the management of very large scientific databases, cartographic/geographic information processing, and exploratory scientific data analysis for global change research. The system will represent both spatial and temporal knowledge about natural and man-made entities on the earth's surface, following an object-oriented paradigm. A user will be able to derive, modify, and apply procedures to perform operations on the data, including comparison, derivation, prediction, validation, and visualization. This work represents an effort to extend database technology with an intrinsic class of operators, which is extensible and responds to the growing needs of scientific research. Of significance is the integration of many diverse forms of data into the database, including cartography, geography, hydrography, hypsography, images, and urban planning data. Equally important is the maintenance of metadata, that is, data about the data, such as coordinate transformation parameters, map scales, and audit trails of previous processing operations. This project will impact the fields of geographical information systems and global change research as well as the database community. It will provide an integrated database management testbed for scientific research, and a testbed for the development of analysis tools to understand and predict global change.

  10. Video Games for Diabetes Self-Management: Examples and Design Strategies

    PubMed Central

    Lieberman, Debra A.

    2012-01-01

    The July 2012 issue of the Journal of Diabetes Science and Technology includes a special symposium called “Serious Games for Diabetes, Obesity, and Healthy Lifestyle.” As part of the symposium, this article focuses on health behavior change video games that are designed to improve and support players’ diabetes self-management. Other symposium articles include one that recommends theory-based approaches to the design of health games and identifies areas in which additional research is needed, followed by five research articles presenting studies of the design and effectiveness of games and game technologies that require physical activity in order to play. This article briefly describes 14 diabetes self-management video games, and, when available, cites research findings on their effectiveness. The games were found by searching the Health Games Research online searchable database, three bibliographic databases (ACM Digital Library, PubMed, and Social Sciences Databases of CSA Illumina), and the Google search engine, using the search terms “diabetes” and “game.” Games were selected if they addressed diabetes self-management skills. PMID:22920805

  11. Video games for diabetes self-management: examples and design strategies.

    PubMed

    Lieberman, Debra A

    2012-07-01

    The July 2012 issue of the Journal of Diabetes Science and Technology includes a special symposium called "Serious Games for Diabetes, Obesity, and Healthy Lifestyle." As part of the symposium, this article focuses on health behavior change video games that are designed to improve and support players' diabetes self-management. Other symposium articles include one that recommends theory-based approaches to the design of health games and identifies areas in which additional research is needed, followed by five research articles presenting studies of the design and effectiveness of games and game technologies that require physical activity in order to play. This article briefly describes 14 diabetes self-management video games, and, when available, cites research findings on their effectiveness. The games were found by searching the Health Games Research online searchable database, three bibliographic databases (ACM Digital Library, PubMed, and Social Sciences Databases of CSA Illumina), and the Google search engine, using the search terms "diabetes" and "game." Games were selected if they addressed diabetes self-management skills. © 2012 Diabetes Technology Society.

  12. A survey of commercial object-oriented database management systems

    NASA Technical Reports Server (NTRS)

    Atkins, John

    1992-01-01

    The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E.F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than that provided by the relational model. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.

  13. [Role and management of cancer clinical database in the application of gastric cancer precision medicine].

    PubMed

    Li, Yuanfang; Zhou, Zhiwei

    2016-02-01

    Precision medicine is a new medical concept and medical model based on personalized medicine, rapid progress in genome sequencing technology, and the cross-application of bioinformatics and big data science. Precision medicine improves the diagnosis and treatment of gastric cancer through deeper analyses of its characteristics, pathogenesis, and other core issues. Cancer clinical databases are important for promoting the development of precision medicine, so close attention must be paid to their construction and management. The clinical database of Sun Yat-sen University Cancer Center is composed of a medical record database, a blood specimen bank, a tissue bank, and a medical imaging database. To ensure good data quality, the design and management of the database should follow a strict standard operating procedure (SOP) model. Data sharing is an important way to improve medical research in the era of medical big data. The construction and management of clinical databases must also be strengthened and innovated.

  14. A spatial classification and database for management, research, and policy making: The Great Lakes aquatic habitat framework

    EPA Science Inventory

    Managing the world’s largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical, basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet this need, we developed a hierarchi...

  15. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    DTIC Science & Technology

    2002-01-01

    to the OODBMS approach. The ORDBMS approach produced such research prototypes as Postgres [155], and Starburst [67] and commercial products such as...Kemnitz. The POSTGRES Next-Generation Database Management System. Communications of the ACM, 34(10):78–92, 1991. [156] Michael Stonebraker and Dorothy

  16. An Improved Database System for Program Assessment

    ERIC Educational Resources Information Center

    Haga, Wayne; Morris, Gerard; Morrell, Joseph S.

    2011-01-01

    This research paper presents a database management system for tracking course assessment data and reporting related outcomes for program assessment. It improves on a database system previously presented by the authors and in use for two years. The database system presented is specific to assessment for ABET (Accreditation Board for Engineering and…

  17. Organizing a breast cancer database: data management.

    PubMed

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as a valuable research tool for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Ensuring that the data collection process does not introduce inaccuracies helps to assure the overall quality of subsequent analyses. Data management involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data, while protecting the data with appropriate security measures. A properly designed database provides access to up-to-date, accurate information. Database design is an important component of application design: if you take the time to design your databases properly, you will be rewarded with a solid application foundation on which you can build the rest of your application.
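
    Since the abstract stresses keeping inaccuracies out at the data collection stage, a brief illustration may help. The sketch below uses hypothetical fields, not the authors' schema, to show how declared constraints reject bad values at entry time.

    ```python
    # A minimal sketch (hypothetical fields, not the authors' schema) of how
    # database constraints can stop inaccurate values at entry time.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE patient (
            mrn        TEXT PRIMARY KEY,                    -- no duplicate records
            age        INTEGER CHECK (age BETWEEN 0 AND 120),
            er_status  TEXT CHECK (er_status IN ('positive', 'negative', 'unknown'))
        )
    """)
    db.execute("INSERT INTO patient VALUES ('A001', 54, 'positive')")   # accepted
    try:
        db.execute("INSERT INTO patient VALUES ('A002', 540, 'positive')")  # rejected
    except sqlite3.IntegrityError as err:
        print("entry error caught at the database layer:", err)
    ```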

  18. MANAGEMENT AND DISSEMINATION OF HUMAN EXPOSURE DATABASES AND OTHER DATABASES NEEDED FOR HUMAN EXPOSURE MODELING AND ANALYSIS

    EPA Science Inventory

    Researchers in the National Exposure Research Laboratory (NERL) have performed a number of large human exposure measurement studies during the past decade. It is the goal of the NERL to make the data available to other researchers for analysis in order to further the scientific ...

  19. The future application of GML database in GIS

    NASA Astrophysics Data System (ADS)

    Deng, Yuejin; Cheng, Yushu; Jing, Lianwen

    2006-10-01

    In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc., and more and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, applications face the problem of how to organize and access large volumes of GML data effectively, and research on GML databases focuses on this problem. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) deals mainly with the storage and management of GML data. Two types of XML database are commonly distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to the GML database. Finally, the future prospects of GML databases in GIS applications are presented.
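
    Because GML is an application of XML, ordinary XML tooling applies directly, which is the premise behind using XML databases for GML storage. The sketch below, with an invented feature document, parses a small GML fragment using only Python's standard library.

    ```python
    # A small sketch showing that GML is ordinary XML, so generic XML tooling
    # (and, by extension, XML databases) can store and query it. The feature
    # content here is invented for illustration.
    import xml.etree.ElementTree as ET

    GML = "{http://www.opengis.net/gml}"
    doc = """
    <city xmlns:gml="http://www.opengis.net/gml">
      <name>Beijing</name>
      <location>
        <gml:Point><gml:pos>39.9 116.4</gml:pos></gml:Point>
      </location>
    </city>
    """
    root = ET.fromstring(doc)
    lat, lon = root.find(f".//{GML}pos").text.split()
    print(root.findtext("name"), lat, lon)
    ```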

  20. Application of cloud database in the management of clinical data of patients with skin diseases.

    PubMed

    Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning

    2015-04-01

    To evaluate the needs and applications of a cloud database in the daily practice of a dermatology department, a cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data, including personal information, pictures, specimens, blood biochemical indicators, skin lesions, and scores on self-rating scales; the results were then input into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients was included and analyzed using the cloud database, and disease status, quality of life, and prognosis were obtained by statistical calculation. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patient information management, clinical decision-making, and scientific research.

  1. Questions to Ask Your Doctor

    MedlinePlus

    Project Research Online Database (PROD): searchable database of AHRQ ...

  2. Omics databases on kidney disease: where they can be found and how to benefit from them.

    PubMed

    Papadopoulos, Theofilos; Krochmal, Magdalena; Cisek, Katryna; Fernandes, Marco; Husi, Holger; Stevens, Robert; Bascands, Jean-Loup; Schanstra, Joost P; Klein, Julie

    2016-06-01

    In recent decades, the evolution of omics technologies has led to advances in all biological fields, creating a demand for the effective storage, management, and exchange of rapidly generated data and research discoveries. To address this need, the development of databases of experimental outputs has become a common part of scientific practice; these serve as knowledge sources and data-sharing platforms, providing information about genes, transcripts, proteins, or metabolites. In this review, we present currently available omics databases, with a special focus on their application in kidney research and, potentially, in clinical practice. Databases are divided into two categories: general databases with a broad information scope, and kidney-specific databases distinctly concentrated on kidney pathologies. In research, databases can be used as a rich source of information about pathophysiological mechanisms and molecular targets. In the future, databases will support clinicians in their decisions, providing better and faster diagnoses and setting the direction towards more preventive, personalized medicine. We also provide a test case demonstrating the potential of biological databases in comparing multi-omics datasets and generating new hypotheses to answer a critical and common diagnostic problem in nephrology practice. In the future, the use of databases combined with data integration and data mining should provide powerful insights into unlocking the mysteries of kidney disease, with a potential impact on pharmacological intervention and therapeutic disease management.

  3. The liver tissue bank and clinical database in China.

    PubMed

    Yang, Yuan; Liu, Yi-Min; Wei, Ming-Yue; Wu, Yi-Fei; Gao, Jun-Hui; Liu, Lei; Zhou, Wei-Ping; Wang, Hong-Yang; Wu, Meng-Chao

    2010-12-01

    To develop standardized, well-rounded material for hepatology research, the National Liver Tissue Bank (NLTB) Project began in 2008 in China to build a collection of well-characterized, optimally preserved liver tumor tissue and an associated clinical database. From Dec 2008 to Jun 2010, over 3000 individuals were enrolled as liver tumor donors to the NLTB, including 2317 cases of newly diagnosed hepatocellular carcinoma (HCC) and about 1000 cases of other diagnosed benign or malignant liver tumors. The clinical database and sample store can be managed easily and correctly with the data management platform used. We believe that these high-quality samples, backed by a detailed information database, will become a cornerstone of hepatology research, especially in studies exploring the diagnosis and new treatments for HCC and other liver diseases.

  4. Image Databases.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    Different kinds of pictorial databases are described with respect to aims, user groups, search possibilities, storage, and distribution. Some specific examples are given for databases used for the following purposes: (1) labor markets for artists; (2) document management; (3) telling a story; (4) preservation (archives and museums); (5) research;…

  5. Online Searching of Bibliographic Databases: Microcomputer Access to National Information Systems.

    ERIC Educational Resources Information Center

    Coons, Bill

    This paper describes the range and scope of various information databases available for technicians, researchers, and managers employed in forestry and the forest products industry. Availability of information on reports of field and laboratory research, business trends, product prices, and company profiles through national distributors of…

  6. Network Configuration of Oracle and Database Programming Using SQL

    NASA Technical Reports Server (NTRS)

    Davis, Melton; Abdurrashid, Jibril; Diaz, Philip; Harris, W. C.

    2000-01-01

    A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. The Oracle 8 Server is a state-of-the-art information management environment. It is a repository for very large amounts of data, and gives users rapid access to that data. The Oracle 8 Server allows for sharing of data between applications; the information is stored in one place and used by many systems. My research will focus primarily on SQL (Structured Query Language) programming. SQL is the way you define and manipulate data in Oracle's relational database. SQL is the industry standard adopted by all database vendors. When programming with SQL, you work on sets of data (i.e., information is not processed one record at a time).
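
    The set-oriented point in this abstract, that SQL operates on sets of rows rather than one record at a time, can be shown in a few lines. In the sketch below, sqlite3 stands in for an Oracle server and the table is invented; a single UPDATE touches every qualifying row with no explicit loop.

    ```python
    # A sketch of the set-oriented style the abstract describes: one UPDATE
    # modifies every qualifying row at once, with no record-at-a-time loop.
    # (sqlite3 stands in here for an Oracle server; the table is invented.)
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employee (name TEXT, dept TEXT, salary REAL)")
    db.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                   [("Lee", "ENG", 70000.0), ("Kim", "ENG", 72000.0), ("Ada", "HR", 65000.0)])

    db.execute("UPDATE employee SET salary = salary * 1.05 WHERE dept = 'ENG'")
    print(db.execute("SELECT name, salary FROM employee ORDER BY name").fetchall())
    ```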

  7. Decision Support Systems for Research and Management in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Rodriquez, Luis F.

    2004-01-01

    Decision support systems (DSS) have been implemented in many applications, including strategic planning for battlefield scenarios, corporate decision making for business planning, production planning and control systems, and recommendation generators like those on Amazon.com®. Such tools are reviewed here with a view to developing a similar tool for NASA's ALS Program. DSS are considered concurrently with the development of the OPIS system, a database designed for chronicling research and development in ALS. By utilizing the OPIS database, it is anticipated that decision support can be provided to increase the quality of decisions made by ALS managers and researchers.

  8. Transitioning Newborns from NICU to Home: Family Information Packet

    MedlinePlus

    Project Research Online Database (PROD): searchable database of AHRQ ...

  9. Next Steps After Your Diagnosis: Finding Information and Support

    MedlinePlus

    Project Research Online Database (PROD): searchable database of AHRQ ...

  10. Blood Thinner Pills: Your Guide to Using Them Safely

    MedlinePlus

    Project Research Online Database (PROD): searchable database of AHRQ ...

  11. Question Builder: Be Prepared for Your Next Medical Appointment

    MedlinePlus

    Project Research Online Database (PROD): searchable database of AHRQ ...

  12. Computer Science Research in Europe.

    DTIC Science & Technology

    1984-08-29

    ...most attention: multi-databases and their structure, and (3) the dependencies between distributed systems and multi-databases. Having completed a multi-database system for distributed data management at the University of Newcastle (UK)... INRIA is now working on a real... A project called SIRIUS was established in 1977 at INRIA, addressing the communications requirements of distributed database systems and protocols for checking the...

  13. Nuclear Energy Infrastructure Database Fitness and Suitability Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE-supported or related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc., in order to best understand the utility of NE’s infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation and utilization of the NEID. The panel comprises five members with expertise in nuclear energy-associated research, intended to represent the major constituencies associated with nuclear energy research: academia, industry, research reactors, national laboratories, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must still be gathered from the associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.

  14. Functions and Relations: Some Applications from Database Management for the Teaching of Classroom Mathematics.

    ERIC Educational Resources Information Center

    Hauge, Sharon K.

    While functions and relations are important concepts in the teaching of mathematics, research suggests that many students lack an understanding and appreciation of these concepts. The present paper discusses an approach for teaching functions and relations that draws on the use of illustrations from database management. This approach has the…

  15. Meta-analysis constrained by data: Recommendations to improve relevance of nutrient management research

    USDA-ARS?s Scientific Manuscript database

    Five research teams received funding through the North American 4R Research Fund to conduct meta-analyses of the air and water quality impacts of on-farm 4R nutrient management practices. In compiling or expanding databases for these analyses on environmental and crop production effects, researchers...

  16. The spectral database Specchio: Data management, data sharing and initial processing of field spectrometer data within the Dimensions of Biodiversity project

    NASA Astrophysics Data System (ADS)

    Hueni, A.; Schweiger, A. K.

    2015-12-01

    Field spectrometry has substantially gained importance in vegetation ecology due to the increasing knowledge about causal ties between vegetation spectra and biochemical and structural plant traits. Additionally, worldwide databases enable the exchange of spectral and plant trait data and promote global research cooperation. This can be expected to further enhance the use of field spectrometers in ecological studies. However, the large amount of data collected during spectral field campaigns poses major challenges regarding data management, archiving and processing. The spectral database Specchio is designed to organize, manage, process and share spectral data and metadata. We provide an example for using Specchio based on leaf level spectra of prairie plant species collected during the 2015 field campaign of the Dimensions of Biodiversity research project, conducted at the Cedar Creek Long-Term Ecological Research site, in central Minnesota. We show how spectral data collections can be efficiently administered, organized and shared between distinct research groups and explore the capabilities of Specchio for data quality checks and initial processing steps.

  17. Be More Involved in Your Health Care: Tips for Patients

    MedlinePlus

    Project Research Online Database (PROD): searchable database of AHRQ ...

  18. MPD3: a useful medicinal plants database for drug designing.

    PubMed

    Mumtaz, Arooj; Ashfaq, Usman Ali; Ul Qamar, Muhammad Tahir; Anwar, Farooq; Gulzar, Faisal; Ali, Muhammad Amjad; Saari, Nazamid; Pervez, Muhammad Tariq

    2017-06-01

    Medicinal plants are the main natural pools for the discovery and development of new drugs. In the modern era of computer-aided drug designing (CADD), prompt efforts are needed to design and construct a useful database management system that allows proper data storage, retrieval and management with a user-friendly interface. An inclusive database containing information about the classification and activity of medicinal plants' phytochemicals, together with a ready-to-dock library of those phytochemicals, is therefore required to assist researchers in the field of CADD. The present work was designed to merge the activities of phytochemicals from medicinal plants, their targets and literature references into a single comprehensive database named the Medicinal Plants Database for Drug Designing (MPD3). The newly designed online and downloadable MPD3 contains information about more than 5000 phytochemicals from around 1000 medicinal plants with 80 different activities, more than 900 literature references and over 200 targets. The database is deemed to be very useful for researchers engaged in medicinal plants research, CADD and drug discovery/development, offering ease of operation and increased efficiency. The designed MPD3 is a comprehensive database which provides most of the information related to medicinal plants on a single platform. MPD3 is freely available at: http://bioinform.info .

  19. MPD: a pathogen genome and metagenome database

    PubMed Central

    Zhang, Tingting; Miao, Jiaojiao; Han, Na; Qiang, Yujun; Zhang, Wen

    2018-01-01

    Advances in high-throughput sequencing have led to unprecedented growth in the amount of available genome sequencing data, especially for bacterial genomes, which has been accompanied by a challenge for the storage and management of such huge datasets. To facilitate bacterial research and related studies, we have developed the Mypathogen database (MPD), which allows users to search, download, store and share bacterial genomics data. The MPD represents the first pathogen-focused database of microbial genomes and metagenomes, and currently covers pathogenic microbial genomes (6604 genera, 11 071 species, 41 906 strains) and metagenomic data from host, air, water and other sources (28 816 samples). The MPD also functions as a management system for statistical and storage data that can be used by different organizations, thereby facilitating data sharing among different organizations and research groups. A user-friendly local client tool is provided to maintain the steady transmission of big sequencing data. The MPD is a useful tool for analysis and management in genomic research, especially for clinical Centers for Disease Control and epidemiological studies, and is expected to contribute to advancing knowledge of pathogenic bacterial genomes and metagenomes. Database URL: http://data.mypathogen.org PMID:29917040

  20. An Extensible "SCHEMA-LESS" Database Framework for Managing High-Throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid, cooperative approach combining the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
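
    The idea of keyword search spanning both context and content can be illustrated schematically. The sketch below is not NETMARK's implementation; it merely shreds a semi-structured document into (path, text) rows so that one query can match either the element path (context) or the text (content).

    ```python
    # Not NETMARK itself -- just a toy illustration of the idea the abstract
    # describes: shred a semi-structured document into (path, text) rows so a
    # single keyword query can match on context (element path) or content.
    import sqlite3
    import xml.etree.ElementTree as ET

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE node (path TEXT, content TEXT)")

    def shred(elem, prefix=""):
        """Recursively store each element's path and text as a row."""
        path = f"{prefix}/{elem.tag}"
        db.execute("INSERT INTO node VALUES (?, ?)", (path, (elem.text or "").strip()))
        for child in elem:
            shred(child, path)

    shred(ET.fromstring(
        "<report><title>Icing tests</title><body>wind tunnel results</body></report>"))

    keyword = "%title%"
    rows = db.execute(
        "SELECT path, content FROM node WHERE path LIKE ? OR content LIKE ?",
        (keyword, keyword)).fetchall()
    print(rows)   # matches on context: the /report/title path
    ```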

  1. An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)

    2002-01-01

    An object-relational database management system is an integrated, hybrid, cooperative approach combining the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  2. NETMARK: A Schema-less Extension for Relational Databases for Managing Semi-structured Data Dynamically

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid, cooperative approach combining the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  3. Managing Data, Provenance and Chaos through Standardization and Automation at the Georgia Coastal Ecosystems LTER Site

    NASA Astrophysics Data System (ADS)

    Sheldon, W.

    2013-12-01

    Managing data for a large, multidisciplinary research program such as a Long Term Ecological Research (LTER) site is a significant challenge, but also presents unique opportunities for data stewardship. LTER research is conducted within multiple organizational frameworks (i.e. a specific LTER site as well as the broader LTER network), and addresses both specific goals defined in an NSF proposal and broader goals of the network; therefore, every LTER dataset can be linked to rich contextual information to guide interpretation and comparison. The challenge is how to link the data to this wealth of contextual metadata. At the Georgia Coastal Ecosystems LTER we developed an integrated information management system (GCE-IMS) to manage, archive and distribute data, metadata and other research products, as well as to manage project logistics, administration and governance (figure 1). This system allows us to store all project information in one place and provide dynamic links through web applications and services, ensuring content is always up to date on the web as well as in data set metadata. The database model supports tracking changes over time in personnel roles, projects and governance decisions, allowing these databases to serve as canonical sources of project history. Storing project information in a central database has also allowed us to standardize both the formatting and content of critical project information, including personnel names, roles, keywords, place names, attribute names, units, and instrumentation, providing consistency and improving data and metadata comparability. Lookup services for these standard terms also simplify data entry in web and database interfaces. We have also coupled the GCE-IMS to our MATLAB- and Python-based data processing tools (i.e. through database connections) to automate metadata generation and the packaging of tabular and GIS data products for distribution. Data processing history is automatically tracked throughout the data lifecycle, from initial import through quality control, revision and integration by our data processing system (the GCE Data Toolbox for MATLAB), and included in the metadata for versioned data products. This high level of automation and system integration has proven very effective in managing the chaos and ensuring the scalability of our information management program.
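
    The pattern described, keeping canonical terms in one database and generating metadata from it, can be sketched briefly. The schema and terms below are hypothetical, not the GCE-IMS model.

    ```python
    # A hypothetical sketch (not the GCE-IMS schema) of the pattern described:
    # keep canonical personnel and unit terms in one database, then generate
    # dataset metadata from it so the web site and data files never diverge.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE personnel (name TEXT, role TEXT)")
    db.execute("CREATE TABLE unit (raw TEXT, standard TEXT)")
    db.execute("INSERT INTO personnel VALUES ('W. Sheldon', 'Information Manager')")
    db.execute("INSERT INTO unit VALUES ('deg C', 'degrees Celsius')")

    def build_metadata(columns):
        """Assemble metadata from the canonical tables for (name, raw_unit) pairs."""
        contact = db.execute("SELECT name, role FROM personnel").fetchone()
        attrs = []
        for name, raw in columns:
            std = db.execute("SELECT standard FROM unit WHERE raw = ?", (raw,)).fetchone()
            attrs.append((name, std[0] if std else raw))  # fall back to the raw unit
        return {"contact": contact, "attributes": attrs}

    print(build_metadata([("water_temp", "deg C")]))
    ```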

  4. Database for landscape-scale carbon monitoring sites

    Treesearch

    Jason A. Cole; Kristopher D. Johnson; Richard A. Birdsey; Yude Pan; Craig A. Wayson; Kevin McCullough; Coeli M. Hoover; David Y. Hollinger; John B. Bradford; Michael G. Ryan; Randall K. Kolka; Peter Wieshampel; Kenneth L. Clark; Nicholas S. Skowronski; John Hom; Scott V. Ollinger; Steven G. McNulty; Michael J. Gavazzi

    2013-01-01

    This report describes the database used to compile, store, and manage intensive ground-based biometric data collected at research sites in Colorado, Minnesota, New Hampshire, New Jersey, North Carolina, and Wyoming, supporting research activities of the U.S. North American Carbon Program (NACP). This report also provides details of each site, the sampling design and...

  5. Corridor incident management (CIM)

    DOT National Transportation Integrated Search

    2007-09-01

    The objective of the Corridor Incident Management (CIM) research project was to develop and demonstrate a set of multi-purpose methods, tools and databases to improve corridor incident management in Tennessee, relying primarily on resources already a...

  6. Djeen (Database for Joomla!'s Extensible Engine): a research information management system for flexible multi-technology project administration.

    PubMed

    Stahl, Olivier; Duvergey, Hugo; Guille, Arnaud; Blondin, Fanny; Vecchio, Alexandre Del; Finetti, Pascal; Granjeaud, Samuel; Vigy, Oana; Bidaut, Ghislain

    2013-06-06

    With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology is becoming more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly across several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. We developed Djeen (Database for Joomla!'s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application designed to streamline collaborative data storage and annotation. Its database model, kept simple, is compatible with most technologies and allows heterogeneous data to be stored and managed within the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy, and user permissions are finely grained for each project, user and group. The Djeen Component source code (version 1.5.1) and installation documentation are available under the CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material.

  7. Djeen (Database for Joomla!’s Extensible Engine): a research information management system for flexible multi-technology project administration

    PubMed Central

    2013-01-01

    Background With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology is becoming more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly across several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. Findings We developed Djeen (Database for Joomla!’s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application designed to streamline collaborative data storage and annotation. Its database model, kept simple, is compatible with most technologies and allows heterogeneous data to be stored and managed within the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Conclusion Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy, and user permissions are finely grained for each project, user and group. The Djeen Component source code (version 1.5.1) and installation documentation are available under the CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material. PMID:23742665

  8. Architecture for biomedical multimedia information delivery on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extensibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.

  9. Federated Web-accessible Clinical Data Management within an Extensible NeuroImaging Database

    PubMed Central

    Keator, David B.; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R.; Bockholt, Jeremy; Grethe, Jeffrey S.

    2010-01-01

    Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938

  10. Federated web-accessible clinical data management within an extensible neuroimaging database.

    PubMed

    Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S

    2010-12-01

    Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site.
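
    The multi-database parallel query and result-combiner idea mentioned in both versions of this abstract can be sketched in miniature. The code below is not the HID implementation; it simply runs one query against several stand-in databases concurrently and merges the rows.

    ```python
    # A simplified sketch (not the HID code) of a multi-database parallel query:
    # run the same query against several federated databases concurrently and
    # combine the row sets, as the abstract describes.
    import sqlite3
    from concurrent.futures import ThreadPoolExecutor

    def make_site(rows):
        """Create one stand-in 'site' database holding subject records."""
        db = sqlite3.connect(":memory:", check_same_thread=False)
        db.execute("CREATE TABLE subject (id TEXT, diagnosis TEXT)")
        db.executemany("INSERT INTO subject VALUES (?, ?)", rows)
        return db

    sites = [make_site([("s1", "control")]), make_site([("s2", "schizophrenia")])]

    def query(db):
        return db.execute("SELECT id, diagnosis FROM subject").fetchall()

    # Query every site in parallel, then flatten the per-site results.
    with ThreadPoolExecutor() as pool:
        combined = [row for rows in pool.map(query, sites) for row in rows]
    print(combined)
    ```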

  11. Toward unification of taxonomy databases in a distributed computer environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid biological research by computer. The taxonomy databases are, however, not consistently unified in a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful for comparing research results and for identifying future research directions from existing results. In particular, it will be useful for comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate the inconsistencies (errors) that are present in them. Inconsistencies arise particularly in the restructuring of the existing taxonomy databases, since the classification rules for constructing the taxonomy have changed rapidly with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.
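
    The mismatch detection at the heart of this methodology can be illustrated with a toy example. The sketch below uses invented names and schema, with sqlite3 instead of SYBASE: it joins two taxonomy tables on species and flags rows whose recorded lineages disagree.

    ```python
    # A toy sketch of the kind of mismatch detection the abstract describes:
    # compare the lineage recorded for the same species in two taxonomy tables
    # and flag disagreements. Names and schema are invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE bank_a (species TEXT, genus TEXT)")
    db.execute("CREATE TABLE bank_b (species TEXT, genus TEXT)")
    db.execute("INSERT INTO bank_a VALUES ('E. coli', 'Escherichia')")
    db.execute("INSERT INTO bank_b VALUES ('E. coli', 'Esherichia')")  # misspelled

    mismatches = db.execute("""
        SELECT a.species, a.genus, b.genus
        FROM bank_a a JOIN bank_b b ON a.species = b.species
        WHERE a.genus <> b.genus
    """).fetchall()
    print(mismatches)   # [('E. coli', 'Escherichia', 'Esherichia')]
    ```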

  12. Small Business Innovations (Automated Information)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Bruce G. Jackson & Associates Document Director is an automated tool that combines word processing and database management technologies to offer the flexibility and convenience of text processing with the linking capability of database management. Originally developed for NASA, it provides a means to collect and manage information associated with requirements development. The software system was used by NASA in the design of the Assured Crew Return Vehicle, as well as by other government and commercial organizations including the Southwest Research Institute.

  13. IceVal DatAssistant: An Interactive, Automated Icing Data Management System

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Wright, William B.

    2008-01-01

    As with any scientific endeavor, the foundation of icing research at the NASA Glenn Research Center (GRC) is the data acquired during experimental testing. In the case of the GRC Icing Branch, an important part of this data consists of ice tracings taken following tests carried out in the GRC Icing Research Tunnel (IRT), as well as the associated operational and environmental conditions documented during these tests. Over the years, the large number of experimental runs completed has served to emphasize the need for a consistent strategy for managing this data. To address the situation, the Icing Branch has recently elected to implement the IceVal DatAssistant automated data management system. With the release of this system, all publicly available IRT-generated experimental ice shapes with complete and verifiable conditions have now been compiled into one electronically-searchable database. Simulation software results for the equivalent conditions, generated using the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the corresponding experimental runs. In addition to this comprehensive database, the IceVal system also includes a graphically-oriented database access utility, which provides reliable and easy access to all data contained in the database. In this paper, the issues surrounding historical icing data management practices are discussed, as well as the anticipated benefits to be achieved as a result of migrating to the new system. A detailed description of the software system features and database content is also provided; and, finally, known issues and plans for future work are presented.

  14. IceVal DatAssistant: An Interactive, Automated Icing Data Management System

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Wright, William B.

    2008-01-01

    As with any scientific endeavor, the foundation of icing research at the NASA Glenn Research Center (GRC) is the data acquired during experimental testing. In the case of the GRC Icing Branch, an important part of this data consists of ice tracings taken following tests carried out in the GRC Icing Research Tunnel (IRT), as well as the associated operational and environmental conditions during those tests. Over the years, the large number of experimental runs completed has served to emphasize the need for a consistent strategy to manage the resulting data. To address this situation, the Icing Branch has recently elected to implement the IceVal DatAssistant automated data management system. With the release of this system, all publicly available IRT-generated experimental ice shapes with complete and verifiable conditions have now been compiled into one electronically-searchable database; and simulation software results for the equivalent conditions, generated using the latest version of the LEWICE ice shape prediction code, are likewise included and linked to the corresponding experimental runs. In addition to this comprehensive database, the IceVal system also includes a graphically-oriented database access utility, which provides reliable and easy access to all data contained in the database. In this paper, the issues surrounding historical icing data management practices are discussed, as well as the anticipated benefits to be achieved as a result of migrating to the new system. A detailed description of the software system features and database content is also provided; and, finally, known issues and plans for future work are presented.

  15. Alberta Carpenter | NREL

    Science.gov Websites

    Life cycle assessment in industrial by-product management, waste management, biofuels and manufacturing technologies; life cycle inventory database management. Research interests: life cycle assessment; life cycle inventory management; biofuels; advanced manufacturing; supply chain analysis. Education: Ph.D. in environmental...

  16. DataHub: Knowledge-based data management for data discovery

    NASA Astrophysics Data System (ADS)

    Handley, Thomas H.; Li, Y. Philip

    1993-08-01

    Currently available database technology is largely designed for business data-processing applications and is inadequate for scientific applications. The research described in this paper, the DataHub, addresses the issues associated with this shortfall in technology utilization and development. The DataHub development addresses the key issues in scientific data management: scientific database models and resource sharing in a geographically distributed, multi-disciplinary science research environment. Thus, the DataHub will be a server between data suppliers and data consumers to facilitate data exchange, to assist science data analysis, and to provide a systematic approach to science data management. More specifically, the DataHub's objectives are to provide support for (1) exploratory data analysis (i.e., data-driven analysis); (2) data transformations; (3) data semantics capture and usage; (4) analysis-related knowledge capture and usage; and (5) data discovery, ingestion, and extraction. Applying technologies ranging from deductive databases, semantic data models, data discovery, knowledge representation and inferencing to exploratory data analysis techniques and modern man-machine interfaces, DataHub will provide a prototype integrated environment to support research scientists' needs in multiple disciplines (i.e., oceanography, geology, and atmospheric science) while addressing the more general science data management issues. Additionally, the DataHub will provide data management services to exploratory data analysis applications such as LinkWinds and NCSA's XIMAGE.

  17. PACSY, a relational database management system for protein structure and chemical shift analysis.

    PubMed

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
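
    The abstract's description of relational tables linked to one another by key identification numbers invites a small example. The table and column names below are illustrative only, not PACSY's actual schema.

    ```python
    # A hedged sketch of the join-style query PACSY enables: tables linked by
    # key identification numbers, combined in one SELECT. The table and column
    # names here are illustrative, not PACSY's actual schema.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE coord (key_id INTEGER, atom TEXT, x REAL, y REAL, z REAL)")
    db.execute("CREATE TABLE shift (key_id INTEGER, atom TEXT, ppm REAL)")
    db.execute("INSERT INTO coord VALUES (7, 'CA', 12.1, 3.4, -8.2)")
    db.execute("INSERT INTO shift VALUES (7, 'CA', 56.3)")

    # Combine coordinates and chemical shifts through the shared key ID.
    rows = db.execute("""
        SELECT c.atom, c.x, c.y, c.z, s.ppm
        FROM coord c JOIN shift s ON s.key_id = c.key_id AND s.atom = c.atom
    """).fetchall()
    print(rows)
    ```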

  18. Development of Human Face Literature Database Using Text Mining Approach: Phase I.

    PubMed

    Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K

    2018-06-01

    The face is an important part of the human body through which an individual communicates in society; its importance is underscored by the fact that a person cannot function socially without a face. The number of experiments being performed and research papers being published in the domain of the human face has surged in the past few decades. Several scientific disciplines conduct research on the human face, including medical science, anthropology, information technology (biometrics, robotics, artificial intelligence, etc.), psychology, forensic science, and neuroscience. This highlights the need to collect and manage data concerning the human face so that free public access can be provided to the scientific community, which can be achieved by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of literature data on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, authors' names, etc., and the collected research papers are stored in the form of a database. Hence, the database will be beneficial to the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language and Cascading Style Sheets; the back end has been developed using the hypertext preprocessor (PHP), with JavaScript as the scripting language. MySQL is used for database development, as it is the most widely used relational database management system, and the XAMPP (cross-platform, Apache, MySQL, PHP, Perl) open-source web application stack is used as the server. The database is still under development, and the current paper discusses the initial steps of its creation and the work done to date.
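
    The retrieval described, lookup by keyword, journal, author, or date, can be condensed into a short sketch. The code below uses Python/sqlite3 rather than the paper's PHP/MySQL stack, and the schema and sample row are placeholders.

    ```python
    # A condensed sketch of the retrieval the abstract describes (keyword,
    # journal, author, date). Python/sqlite3 stands in for the paper's
    # PHP/MySQL stack; the schema and sample row are placeholders.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE paper (
        title TEXT, authors TEXT, journal TEXT, year INTEGER, keywords TEXT)""")
    db.execute("INSERT INTO paper VALUES (?, ?, ?, ?, ?)",
               ("A placeholder facial morphology study", "Doe J",
                "J Example", 2017, "facial asymmetry; morphology"))

    def search(keyword=None, journal=None, year=None):
        """Build a WHERE clause only from the filters the caller supplies."""
        clauses, args = [], []
        if keyword:
            clauses.append("keywords LIKE ?"); args.append(f"%{keyword}%")
        if journal:
            clauses.append("journal = ?"); args.append(journal)
        if year:
            clauses.append("year = ?"); args.append(year)
        where = " AND ".join(clauses) or "1=1"
        return db.execute(f"SELECT title FROM paper WHERE {where}", args).fetchall()

    print(search(keyword="asymmetry"))
    ```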

  19. IDA 2004 Cost Research Symposium: Investments in, Use of, and Management of Cost Research

    DTIC Science & Technology

    2004-09-01

    Database: None. Publication: Technical Report. Keywords: Government, Aircraft, SD&D, Production, Integration, Data Collection, Database, CER. ... "...Martin Plant in Marietta, Georgia," IDA Paper P-3590, July 2001; "Econometric Modeling of Acquisition Category I Systems at the Raytheon Plant in..." ... (NAVSEA) ... Naval Surface Warfare Center, Dahlgren Division (NSWCDD) ...

  20. A lake-centric geospatial database to guide research and inform management decisions in an Arctic watershed in northern Alaska experiencing climate and land-use changes

    USGS Publications Warehouse

    Jones, Benjamin M.; Arp, Christopher D.; Whitman, Matthew S.; Nigro, Debora A.; Nitze, Ingmar; Beaver, John; Gadeke, Anne; Zuck, Callie; Liljedahl, Anna K.; Daanen, Ronald; Torvinen, Eric; Fritz, Stacey; Grosse, Guido

    2017-01-01

    Lakes are dominant and diverse landscape features in the Arctic, but conventional land cover classification schemes typically map them as a single uniform class. Here, we present a detailed lake-centric geospatial database for an Arctic watershed in northern Alaska. We developed a GIS dataset consisting of 4362 lakes that provides information on lake morphometry, hydrologic connectivity, surface area dynamics, surrounding terrestrial ecotypes, and other important conditions describing Arctic lakes. Analyzing the geospatial database relative to fish and bird survey data shows relations to lake depth and hydrologic connectivity, which are being used to guide research and aid in the management of aquatic resources in the National Petroleum Reserve in Alaska. Further development of similar geospatial databases is needed to better understand and plan for the impacts of ongoing climate and land-use changes occurring across lake-rich landscapes in the Arctic.

  1. Data base management system for lymphatic filariasis--a neglected tropical disease.

    PubMed

    Upadhyayula, Suryanaryana Murty; Mutheneni, Srinivasa Rao; Kadiri, Madhusudhan Rao; Kumaraswamy, Sriram; Nelaturu, Sarat Chandra Babu

    2012-01-01

    Researchers working in the area of public health are confronted with large volumes of data on various aspects of entomology and epidemiology, and extracting the relevant information from these data requires a purpose-built database management system. In this paper, we describe the uses of the database we developed on lymphatic filariasis. This database application is built using the Model View Controller (MVC) architecture, with MySQL as the database and a web-based interface. We have collected and incorporated data on filariasis from the Karimnagar, Chittoor, and East and West Godavari districts of Andhra Pradesh, India. The purpose of this database is to store the collected data, retrieve information, and produce various combinational reports on aspects of filariasis, which in turn will help public health officials to understand the burden of the disease in a particular locality. This information is likely to play an important role in decision making for effective control of filarial disease and integrated vector management operations.
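
    The Model View Controller split named in the abstract can be reduced to a schematic example. The sketch below uses invented table and field names, sqlite3 in place of MySQL, and plain functions in place of a web framework, simply to show the separation of data access, presentation, and coordination.

    ```python
    # A schematic sketch of the MVC split the abstract names, reduced to
    # functions. The real system is a web application over MySQL; these
    # table and field names are invented.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE case_record (district TEXT, positive INTEGER)")
    db.executemany("INSERT INTO case_record VALUES (?, ?)",
                   [("Chittoor", 12), ("Karimnagar", 7)])

    def model_cases_by_district():            # Model: data access only
        return db.execute("SELECT district, SUM(positive) FROM case_record "
                          "GROUP BY district").fetchall()

    def view_render(rows):                    # View: presentation only
        return "\n".join(f"{d}: {n} positive" for d, n in rows)

    def controller_report():                  # Controller: wires model to view
        return view_render(model_cases_by_district())

    print(controller_report())
    ```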

  2. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory

    PubMed Central

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.

    2011-01-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443

  3. The Protein Information Management System (PiMS): a generic tool for any structural biology research laboratory.

    PubMed

    Morris, Chris; Pajon, Anne; Griffiths, Susanne L; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M; da Silva, Alan Wilter; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S; Stuart, David I; Henrick, Kim; Esnouf, Robert M

    2011-04-01

    The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service.

  4. An Evaluator's Guide to Using DB MASTER: A Microcomputer Based File Management Program. Research on Evaluation Program, Paper and Report Series No. 91.

    ERIC Educational Resources Information Center

    Gray, Peter J.

    Ways a microcomputer can be used to establish and maintain an evaluation database and types of data management features possible on a microcomputer are described in this report, which contains step-by-step procedures and numerous examples for establishing a database, manipulating data, and designing and printing reports. Following a brief…

  5. Functionally Graded Materials Database

    NASA Astrophysics Data System (ADS)

    Kisara, Katsuto; Konno, Tomomi; Niino, Masayuki

    2008-02-01

    The Functionally Graded Materials Database (hereinafter referred to as the FGMs Database) was opened to the public via the Internet in October 2002, and since then it has been managed by the Japan Aerospace Exploration Agency (JAXA). As of October 2006, the database includes 1,703 research information entries, with data on 2,429 researchers, 509 institutions, and so on. Reading materials such as "Applicability of FGMs Technology to Space Plane" and "FGMs Application to Space Solar Power System (SSPS)" were prepared in FY 2004 and 2005, respectively; the English version of "FGMs Application to Space Solar Power System (SSPS)" is now under preparation. The present paper explains the FGMs Database, describing the research information data, the sitemap, and how to use it. User access results and users' interests, drawn from the access analysis, are also discussed.

  6. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences.

    PubMed

    Stephens, Susie M; Chen, Jake Y; Davidson, Marcel G; Thomas, Shiby; Trute, Barry M

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html.
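
    The idea of running the analysis inside the database, rather than exporting rows and filtering them in application code, can be sketched in a self-contained way. Oracle 10g exposes regular-expression matching through SQL (e.g., REGEXP_LIKE); since an Oracle instance cannot be assumed here, the sketch below mimics the same pattern with Python's sqlite3 module and a user-registered REGEXP function. The table, sequences, and motif are illustrative assumptions.

        import re
        import sqlite3

        # Register a REGEXP function so pattern matching runs inside the database,
        # mirroring the "analyze in the database" idea (Oracle 10g uses REGEXP_LIKE).
        # SQLite rewrites "X REGEXP Y" as regexp(Y, X), i.e. (pattern, string).
        conn = sqlite3.connect(":memory:")
        conn.create_function("REGEXP", 2, lambda pat, s: re.search(pat, s) is not None)

        conn.execute("CREATE TABLE peptide (id INTEGER PRIMARY KEY, seq TEXT)")
        conn.executemany("INSERT INTO peptide (seq) VALUES (?)",
                         [("MKTAYIAKQR",), ("GGSGGS",), ("NKTAYQ",)])

        # An N-glycosylation-like motif N[^P][ST] as a toy pattern, matched in-database.
        rows = conn.execute("SELECT id, seq FROM peptide WHERE seq REGEXP 'N[^P][ST]'")
        print(rows.fetchall())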

  7. Software support for Huntington's disease research.

    PubMed

    Conneally, P M; Gersting, J M; Gray, J M; Beidleman, K; Wexler, N S; Smith, C L

    1991-01-01

    Huntington's disease (HD) is a hereditary disorder involving the central nervous system. Its effects are devastating to the affected person as well as his family. The Department of Medical and Molecular Genetics at Indiana University (IU) plays an integral part in Huntington's research by providing computerized repositories of HD family information for researchers and families. The National Huntington's Disease Research Roster, founded in 1979 at IU, and the Huntington's Disease in Venezuela Project database contain information that has proven to be invaluable in the worldwide field of HD research. This paper addresses the types of information stored in each database, the pedigree database program (MEGADATS) used to manage the data, and significant findings that have resulted from access to the data.

  8. Vehicle Thermal Management Publications | Transportation Research | NREL

    Science.gov Websites

    Explore NREL's recent publications about light- and heavy-duty vehicle thermal management. For the complete collection of NREL's vehicle thermal management publications, search the NREL Publications Database.

  9. JDD, Inc. Database

    NASA Technical Reports Server (NTRS)

    Miller, David A., Jr.

    2004-01-01

    JDD Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). The company provides facilities support at Fort Riley, Kansas and at the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division (LTID) at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and contract management of these various services for the center. As safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues involving job safety (Environmental Protection Agency issues, workers' compensation, and safety and health training). My summer assignment was not considered "groundbreaking research" like the work of many other summer interns, but it is just as important and beneficial to JDD, Inc. I initially created a database using Microsoft Excel to classify and categorize data pertaining to the numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data from the training certification courses (the training field index and the employees who were present at or absent from each course). Once I completed this phase of the database, I decided to expand it and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day-to-day operations and adding the information to the database. It now consists of seven categories of data (carpet cleaning, forms, NASA event schedules, training certifications, wall and vent cleaning, work schedules, and miscellaneous). I also did some field inspecting with the supervisors around the site and was present at all of the training certification courses that have been scheduled since June 2004. My future outlook for the JDD, Inc. database is to have all of the company's information, from future contract proposals and weekly inventory to employee timesheets, in this same database.

  10. Coordinated Research in Robotics and Integrated Manufacturing.

    DTIC Science & Technology

    1983-07-31

    of three research divisions: Robot Systems, Management Systems, and Integrated Design and Manufacturing, and involves about 40 faculty spanning the... keystone of their program. A relatively smaller level of effort is being supported within the Management Systems Division. This is the first annual... [Diagram residue; legible topics include systems management, design databases, robot-based manufacturing, human factors, CAD cells, production planning, and integration via local networks.]

  11. Chemical Inventory Management at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Kraft, Shirley S.; Homan, Joseph R.; Bajorek, Michael J.; Dominguez, Manuel B.; Smith, Vanessa L.

    1997-01-01

    The Chemical Management System (CMS) is a client/server application developed with PowerBuilder and Sybase for the Lewis Research Center (LeRC). PowerBuilder is a client/server application development tool; Sybase is a relational database management system. The entire LeRC community can access the CMS from any desktop environment. The multiple functions and benefits of the CMS are addressed.

  12. Development of tools for evaluating integrated municipal waste management using life-cycle management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorneloe, S.; Weitz, K.; Nishtala, S.

    1998-08-01

    Municipal solid waste (MSW) management is increasingly based on integrated systems. The US initiated research in 1994, through funding by the US Environmental Protection Agency and the US Department of Energy, to develop (1) a decision support tool; (2) a database; and (3) case studies. This paper provides an overview of the research in progress.

  13. PACSY, a relational database management system for protein structure and chemical shift analysis

    PubMed Central

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636
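
    The linking of table types by shared key identification numbers can be illustrated with a toy join. The tables and columns below are invented stand-ins, not PACSY's real schema, and sqlite3 substitutes for the MySQL or PostgreSQL server the abstract describes.

        import sqlite3

        # Toy version of PACSY's pattern: tables linked by shared key IDs so that
        # coordinates and chemical shifts can be combined in one query.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE seq   (key_id INTEGER, residue TEXT);
        CREATE TABLE coord (key_id INTEGER, atom TEXT, x REAL, y REAL, z REAL);
        CREATE TABLE shift (key_id INTEGER, atom TEXT, ppm REAL);
        """)
        conn.execute("INSERT INTO seq   VALUES (1, 'ALA')")
        conn.execute("INSERT INTO coord VALUES (1, 'CA', 10.1, 4.2, -3.3)")
        conn.execute("INSERT INTO shift VALUES (1, 'CA', 52.4)")

        # Combine structure and NMR information through the shared key_id/atom keys.
        query = """
        SELECT s.residue, c.atom, c.x, c.y, c.z, h.ppm
        FROM seq s
        JOIN coord c ON c.key_id = s.key_id
        JOIN shift h ON h.key_id = s.key_id AND h.atom = c.atom
        """
        print(conn.execute(query).fetchall())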

  14. Tripal: a construction toolkit for online genome databases.

    PubMed

    Ficklin, Stephen P; Sanderson, Lacey-Anne; Cheng, Chun-Huai; Staton, Margaret E; Lee, Taein; Cho, Il-Hyung; Jung, Sook; Bett, Kirstin E; Main, Doreen

    2011-01-01

    As the availability, affordability and magnitude of genomics and genetics research increases, so does the need to provide online access to resulting data and analyses. A tailored online database is desired by many investigators and research communities; however, managing the Information Technology infrastructure needed to create such a database can be an undesired distraction from primary research or potentially cost prohibitive. Tripal provides simplified site development by merging the power of Drupal, a popular web Content Management System, with that of Chado, a community-derived database schema for storage of genomic, genetic and other related biological data. Tripal provides an interface that extends the content management features of Drupal to the data housed in Chado. Furthermore, Tripal provides a web-based Chado installer, genomic data loaders, web-based editing of data for organisms, genomic features, biological libraries, controlled vocabularies and stock collections. Also available are Tripal extensions that support loading and visualizations of NCBI BLAST, InterPro, Kyoto Encyclopedia of Genes and Genomes and Gene Ontology analyses, as well as an extension that provides integration of Tripal with GBrowse, a popular GMOD tool. An Application Programming Interface is available to allow creation of custom extensions by site developers, and the look-and-feel of the site is completely customizable through Drupal-based PHP template files. Addition of non-biological content and user-management is afforded through Drupal. Tripal is an open source and freely available software package found at http://tripal.sourceforge.net.

  15. Tripal: a construction toolkit for online genome databases

    PubMed Central

    Sanderson, Lacey-Anne; Cheng, Chun-Huai; Staton, Margaret E.; Lee, Taein; Cho, Il-Hyung; Jung, Sook; Bett, Kirstin E.; Main, Doreen

    2011-01-01

    As the availability, affordability and magnitude of genomics and genetics research increases, so does the need to provide online access to resulting data and analyses. A tailored online database is desired by many investigators and research communities; however, managing the Information Technology infrastructure needed to create such a database can be an undesired distraction from primary research or potentially cost prohibitive. Tripal provides simplified site development by merging the power of Drupal, a popular web Content Management System, with that of Chado, a community-derived database schema for storage of genomic, genetic and other related biological data. Tripal provides an interface that extends the content management features of Drupal to the data housed in Chado. Furthermore, Tripal provides a web-based Chado installer, genomic data loaders, web-based editing of data for organisms, genomic features, biological libraries, controlled vocabularies and stock collections. Also available are Tripal extensions that support loading and visualizations of NCBI BLAST, InterPro, Kyoto Encyclopedia of Genes and Genomes and Gene Ontology analyses, as well as an extension that provides integration of Tripal with GBrowse, a popular GMOD tool. An Application Programming Interface is available to allow creation of custom extensions by site developers, and the look-and-feel of the site is completely customizable through Drupal-based PHP template files. Addition of non-biological content and user-management is afforded through Drupal. Tripal is an open source and freely available software package found at http://tripal.sourceforge.net PMID:21959868

  16. Alternatives to relational databases in precision medicine: Comparison of NoSQL approaches for big data storage using supercomputers

    NASA Astrophysics Data System (ADS)

    Velazquez, Enrique Israel

    Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternate database approaches (NoSQL) may soon be required for efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field since alternate database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine using patients' clinical and genomic information from the cancer genome atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be ideal database management systems for our precision medicine queries in terms of performance and scalability. We present NoSQL approaches and show how they can be used to manage clinical and genomic big data. Our research is relevant to public health since we are focusing on one of the main challenges to the development of Precision Medicine and, consequently, investigating a potential solution to the progressively increasing demands on health care.
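
    A minimal sketch of the schema-flexible, key-value style of storage evaluated in the dissertation, using the redis-py client; it assumes a Redis server is running on localhost:6379, and the record key and field names are invented for illustration.

        import redis  # pip install redis; assumes a Redis server on localhost:6379

        r = redis.Redis(host="localhost", port=6379, decode_responses=True)

        # Schema-less record: new clinical or genomic fields can be added later
        # without the schema migration an RDBMS would require.
        r.hset("patient:TCGA-0001", mapping={
            "diagnosis": "breast carcinoma",
            "age": 54,
        })
        # A later update simply adds a field -- no ALTER TABLE step.
        r.hset("patient:TCGA-0001", "brca1_variant", "c.68_69delAG")

        print(r.hgetall("patient:TCGA-0001"))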

  17. Ultra-Structure database design methodology for managing systems biology data and analyses

    PubMed Central

    Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C

    2009-01-01

    Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
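
    The central idea, behavior stored as rules in tables and interpreted by a small generic procedure, can be sketched briefly. The rule table, rule format, and lookup below are invented illustrations of that pattern, not the authors' actual ruleform design.

        import sqlite3

        # Rules stored as data: changing behavior means editing rows, not code.
        # The rule format (species -> genome build) is invented for illustration.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE ruleform (condition_key TEXT, action_value TEXT)")
        conn.executemany("INSERT INTO ruleform VALUES (?, ?)", [
            ("species=yeast", "use genome build sacCer3"),
            ("species=human", "use genome build GRCh38"),
        ])

        def apply_rules(fact):
            """Generic interpreter: looks up behavior in the database at run time."""
            cur = conn.execute(
                "SELECT action_value FROM ruleform WHERE condition_key = ?", (fact,))
            row = cur.fetchone()
            return row[0] if row else "no rule"

        print(apply_rules("species=yeast"))   # -> use genome build sacCer3
        # Supporting a new organism requires only a new row, no code change:
        conn.execute("INSERT INTO ruleform VALUES ('species=mouse', 'use genome build GRCm39')")
        print(apply_rules("species=mouse"))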

  18. The research of network database security technology based on web service

    NASA Astrophysics Data System (ADS)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies security technology for networked databases, and analyzes in detail a sub-key encryption algorithm, which was applied successfully in a campus one-card system. The implementation of the encryption algorithm is discussed; the method can serve as a reference in many fields, particularly in management information system security and e-commerce.
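
    The abstract does not give the sub-key algorithm itself, so the sketch below only illustrates the general idea behind sub-key schemes: deriving independent per-field keys from a single master key, so that no one stored key unlocks every column. The master key and the table/column names are invented, and this is not the paper's algorithm.

        import hmac
        import hashlib

        # Derive independent per-column subkeys from one master key via
        # HMAC-SHA256; compromise of one subkey does not expose the others.
        MASTER_KEY = b"campus-one-card-master-secret"   # illustrative value only

        def field_subkey(table: str, column: str) -> bytes:
            """Derive a deterministic per-column subkey."""
            label = f"{table}.{column}".encode()
            return hmac.new(MASTER_KEY, label, hashlib.sha256).digest()

        k_balance = field_subkey("card_account", "balance")
        k_pin     = field_subkey("card_account", "pin_hash")
        print(k_balance.hex() != k_pin.hex())  # True: distinct keys per column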

  19. Updated Palaeotsunami Database for Aotearoa/New Zealand

    NASA Astrophysics Data System (ADS)

    Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.

    2016-12-01

    The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) is near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the Pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a similar one currently under development in Japan. Expressions of interest in collaborating with the A/NZ team to expand the database are invited from other Pacific nations.

  20. Using databases in medical education research: AMEE Guide No. 77.

    PubMed

    Cleland, Jennifer; Scott, Neil; Harrild, Kirsten; Moffat, Mandy

    2013-05-01

    This AMEE Guide offers an introduction to the use of databases in medical education research. It is intended for those who are contemplating conducting research in medical education but are new to the field. The Guide is structured around the process of planning your research so that data collection, management and analysis are appropriate for the research question. Throughout we consider contextual possibilities and constraints to educational research using databases, such as the resources available, and provide concrete examples of medical education research to illustrate many points. The first section of the Guide explains the difference between different types of data and classifying data, and addresses the rationale for research using databases in medical education. We explain the difference between qualitative research and qualitative data, the difference between categorical and quantitative data, and the different types of data which fall into these categories. The Guide reviews the strengths and weaknesses of qualitative and quantitative research. The next section is structured around how to work with quantitative and qualitative databases and provides guidance on the many practicalities of setting up a database. This includes how to organise your database, including anonymising data and coding, as well as preparing and describing your data so it is ready for analysis. The critical matter of the ethics of using databases in medical educational research, including using routinely collected data versus data collected for research purposes, and issues of confidentiality, is discussed. Core to the Guide is drawing out the similarities and differences in working with different types of data and different types of databases. Future AMEE Guides in the research series will address statistical analysis of data in more detail.
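
    A minimal sketch of the anonymising-and-coding step the Guide describes, using only the Python standard library; the column names, salt, and coding scheme are invented for illustration.

        import csv
        import hashlib
        import io

        # Anonymise identifiers and code categorical answers before analysis.
        # Column names and the salt are illustrative assumptions.
        SALT = b"per-study-secret-salt"
        GRADE_CODES = {"fail": 0, "pass": 1, "merit": 2, "distinction": 3}

        raw = io.StringIO("student_id,grade\nS1001,pass\nS1002,merit\n")
        for row in csv.DictReader(raw):
            # One-way pseudonym: a salted hash replaces the identifying student ID.
            pseudonym = hashlib.sha256(SALT + row["student_id"].encode()).hexdigest()[:12]
            coded = GRADE_CODES[row["grade"]]   # categorical answer -> numeric code
            print(pseudonym, coded)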

  1. Oracle Database 10g: a platform for BLAST search and Regular Expression pattern matching in life sciences

    PubMed Central

    Stephens, Susie M.; Chen, Jake Y.; Davidson, Marcel G.; Thomas, Shiby; Trute, Barry M.

    2005-01-01

    As database management systems expand their array of analytical functionality, they become powerful research engines for biomedical data analysis and drug discovery. Databases can hold most of the data types commonly required in life sciences and consequently can be used as flexible platforms for the implementation of knowledgebases. Performing data analysis in the database simplifies data management by minimizing the movement of data from disks to memory, allowing pre-filtering and post-processing of datasets, and enabling data to remain in a secure, highly available environment. This article describes the Oracle Database 10g implementation of BLAST and Regular Expression Searches and provides case studies of their usage in bioinformatics. http://www.oracle.com/technology/software/index.html PMID:15608287

  2. STI Handbook: Guidelines for Producing, Using, and Managing Scientific and Technical Information in the Department of the Navy. A Handbook for Navy Scientists and Engineers on the Use of Scientific and Technical Information

    DTIC Science & Technology

    1992-02-01

    What information should be included in the TR database? What types of media can be used to submit information to the TR database? ...reports, contract administration documents, regulations, commercially published books. ...The WUIS database, used to control and report technical and management data, summarizes ongoing research and technology

  3. Software support for Huntington's disease research.

    PubMed Central

    Conneally, P. M.; Gersting, J. M.; Gray, J. M.; Beidleman, K.; Wexler, N. S.; Smith, C. L.

    1991-01-01

    Huntington's disease (HD) is a hereditary disorder involving the central nervous system. Its effects are devastating to the affected person as well as his family. The Department of Medical and Molecular Genetics at Indiana University (IU) plays an integral part in Huntington's research by providing computerized repositories of HD family information for researchers and families. The National Huntington's Disease Research Roster, founded in 1979 at IU, and the Huntington's Disease in Venezuela Project database contain information that has proven to be invaluable in the worldwide field of HD research. This paper addresses the types of information stored in each database, the pedigree database program (MEGADATS) used to manage the data, and significant findings that have resulted from access to the data. PMID:1839672

  4. Huntington's Disease Research Roster Support with a Microcomputer Database Management System

    PubMed Central

    Gersting, J. M.; Conneally, P. M.; Beidelman, K.

    1983-01-01

    This paper chronicles the MEGADATS (Medical Genetics Acquisition and DAta Transfer System) database development effort in collecting, storing, retrieving, and plotting human family pedigrees. The newest system, MEGADATS-3M, is detailed. Emphasis is on the microcomputer version of MEGADATS-3M and its use to support the Huntington's Disease research roster project. Examples of data input and pedigree plotting are included.
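
    The kind of pedigree storage and traversal MEGADATS supports can be sketched with a generic parent-link table and a recursive query; the schema below is an invented illustration, not MEGADATS's actual data model.

        import sqlite3

        # Minimal pedigree: parent links per person; a recursive query walks
        # the ancestors of a chosen individual.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE person (person_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE parent (child_id INTEGER, parent_id INTEGER);
        INSERT INTO person VALUES (1,'grandfather'),(2,'grandmother'),
                                  (3,'mother'),(4,'proband');
        INSERT INTO parent VALUES (3,1),(3,2),(4,3);
        """)

        rows = conn.execute("""
        WITH RECURSIVE anc(pid) AS (
            SELECT parent_id FROM parent WHERE child_id = 4
            UNION
            SELECT parent_id FROM parent JOIN anc ON child_id = anc.pid
        )
        SELECT name FROM person WHERE person_id IN (SELECT pid FROM anc)
        """).fetchall()
        print(rows)  # ancestors of the proband: mother and both grandparents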

  5. A Middle-Range Explanatory Theory of Self-Management Behavior for Collaborative Research and Practice.

    PubMed

    Blok, Amanda C

    2017-04-01

    To report an analysis of the concept of self-management behaviors. Self-management behaviors are typically associated with disease management, with frequent use by nurse researchers related to chronic illness management and by international health organizations for development of disease management interventions. A concept analysis was conducted within the context of Orem's self-care framework. Walker and Avant's eight-step concept analysis approach guided the analysis. Academic databases were searched for relevant literature including CINAHL, the Cochrane Database of Systematic Reviews and Central Register of Controlled Trials, MEDLINE, PsycARTICLES and PsycINFO, and SocINDEX. Literature using the term "self-management behavior" and published between April 2001 and March 2015 was analyzed for attributes, antecedents, and consequences. A total of 189 journal articles were reviewed. Self-management behaviors are defined as proactive actions related to lifestyle, a problem, planning, collaborating, and mental support, as well as reactive actions related to a circumstantial change, to achieve a goal influenced by the antecedents of physical, psychological, socioeconomic, and cultural characteristics, as well as collaborative and received support. The theoretical definition and middle-range explanatory theory of self-management behaviors will guide future collaborative research and clinical practice for disease management. © 2016 Wiley Periodicals, Inc.

  6. Integration of environmental simulation models with satellite remote sensing and geographic information systems technologies: case studies

    USGS Publications Warehouse

    Steyaert, Louis T.; Loveland, Thomas R.; Brown, Jesslyn F.; Reed, Bradley C.

    1993-01-01

    Environmental modelers are testing and evaluating a prototype land cover characteristics database for the conterminous United States developed by the EROS Data Center of the U.S. Geological Survey and the University of Nebraska Center for Advanced Land Management Information Technologies. This database was developed from multitemporal, 1-kilometer advanced very high resolution radiometer (AVHRR) data for 1990 and various ancillary data sets such as elevation, ecological regions, and selected climatic normals. Several case studies using this database were analyzed to illustrate the integration of satellite remote sensing and geographic information systems technologies with land-atmosphere interactions models at a variety of spatial and temporal scales. The case studies are representative of contemporary environmental simulation modeling at local to regional levels in global change research, land and water resource management, and environmental risk assessment. The case studies feature land surface parameterizations for atmospheric mesoscale and global climate models; biogenic-hydrocarbon emissions models; distributed parameter watershed and other hydrological models; and various ecological models such as ecosystem dynamics, biogeochemical cycles, ecotone variability, and equilibrium vegetation models. The case studies demonstrate the importance of multitemporal AVHRR data for developing and maintaining a flexible, near-realtime land cover characteristics database. Moreover, such a flexible database is needed to derive various vegetation classification schemes, to aggregate data for nested models, to develop remote sensing algorithms, and to provide data on dynamic landscape characteristics. The case studies illustrate how such a database supports research on spatial heterogeneity, land use, sensitivity analysis, and scaling issues involving regional extrapolations and parameterizations of dynamic land processes within simulation models.

  7. The Marshall Islands Data Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoker, A.C.; Conrado, C.L.

    1995-09-01

    This report is a resource document of the methods and procedures used currently in the Data Management Program of the Marshall Islands Dose Assessment and Radioecology Project. Since 1973, over 60,000 environmental samples have been collected. Our program includes relational database design, programming and maintenance; sample and information management; sample tracking; quality control; and data entry, evaluation and reduction. The usefulness of scientific databases involves careful planning in order to fulfill the requirements of any large research program. Compilation of scientific results requires consolidation of information from several databases, and incorporation of new information as it is generated. The success in combining and organizing all radionuclide analysis, sample information and statistical results into a readily accessible form is critical to our project.

  8. An optical scan/statistical package for clinical data management in C-L psychiatry.

    PubMed

    Hammer, J S; Strain, J J; Lyerly, M

    1993-03-01

    This paper explores aspects of the need for clinical database management systems that permit ongoing service management, measurement of the quality and appropriateness of care, database-driven administration of consultation-liaison (C-L) services, teaching/educational observations, and research. It describes an optical scan database management system that permits flexible form generation, desktop publishing, and linking of observations in multiple files. This enhanced MICRO-CARES software system--Medical Application Platform (MAP)--permits direct transfer of data to ASCII and SAS formats for mainframe manipulation of the clinical information. The director of a C-L service may now develop his or her own forms, incorporate structured instruments, or develop "branch chains" of essential data to add to the core data set without the effort and expense of reprinting forms or consulting commercial vendors.

  9. Southern African Treatment Resistance Network (SATuRN) RegaDB HIV drug resistance and clinical management database: supporting patient management, surveillance and research in southern Africa

    PubMed Central

    Manasa, Justen; Lessells, Richard; Rossouw, Theresa; Naidu, Kevindra; Van Vuuren, Cloete; Goedhals, Dominique; van Zyl, Gert; Bester, Armand; Skingsley, Andrew; Stott, Katharine; Danaviah, Siva; Chetty, Terusha; Singh, Lavanya; Moodley, Pravi; Iwuji, Collins; McGrath, Nuala; Seebregts, Christopher J.; de Oliveira, Tulio

    2014-01-01

    Substantial amounts of data have been generated from patient management and academic exercises designed to better understand the human immunodeficiency virus (HIV) epidemic and design interventions to control it. A number of specialized databases have been designed to manage huge data sets from HIV cohort, vaccine, host genomic and drug resistance studies. Besides databases from cohort studies, most of the online databases contain limited curated data and are thus sequence repositories. HIV drug resistance has been shown to have a great potential to derail the progress made thus far through antiretroviral therapy. Thus, a lot of resources have been invested in generating drug resistance data for patient management and surveillance purposes. Unfortunately, most of the data currently available relate to subtype B even though >60% of the epidemic is caused by HIV-1 subtype C. A consortium of clinicians, scientists, public health experts and policy makers working in southern Africa came together and formed a network, the Southern African Treatment and Resistance Network (SATuRN), with the aim of increasing curated HIV-1 subtype C and tuberculosis drug resistance data. This article describes the HIV-1 data curation process using the SATuRN Rega database. The data curation is a manual and time-consuming process done by clinical, laboratory and data curation specialists. Access to the highly curated data sets is through applications that are reviewed by the SATuRN executive committee. Examples of research outputs from the analysis of the curated data include trends in the level of transmitted drug resistance in South Africa, analysis of the levels of acquired resistance among patients failing therapy and factors associated with the absence of genotypic evidence of drug resistance among patients failing therapy. All these studies have been important for informing first- and second-line therapy. This database is a free password-protected open source database available on www.bioafrica.net. Database URL: http://www.bioafrica.net/regadb/ PMID:24504151

  10. [Research on Zhejiang blood information network and management system].

    PubMed

    Yan, Li-Xing; Xu, Yan; Meng, Zhong-Hua; Kong, Chang-Hong; Wang, Jian-Min; Jin, Zhen-Liang; Wu, Shi-Ding; Chen, Chang-Shui; Luo, Ling-Fei

    2007-02-01

    This research aimed to develop the first province-level centralized blood information database and real-time communication network in China. Multiple technologies were used, including separate operation of local area network databases, a real-time data concentration and distribution mechanism, off-site backup, and an optical fiber virtual private network (VPN). As a result, the centralized blood information database and management system were successfully constructed, covering all of Zhejiang province, and real-time exchange of blood data was realized. In conclusion, its implementation promotes volunteer blood donation and ensures blood safety in Zhejiang, and especially strengthens rapid response to public health emergencies. This project lays the foundation for centralized testing and allocation among blood banks in Zhejiang, and can serve as a reference for contemporary blood bank information systems in China.

  11. Software for pest-management science: computer models and databases from the United States Department of Agriculture-Agricultural Research Service.

    PubMed

    Wauchope, R Don; Ahuja, Lajpat R; Arnold, Jeffrey G; Bingner, Ron; Lowrance, Richard; van Genuchten, Martinus T; Adams, Larry D

    2003-01-01

    We present an overview of USDA Agricultural Research Service (ARS) computer models and databases related to pest-management science, emphasizing current developments in environmental risk assessment and management simulation models. The ARS has a unique national interdisciplinary team of researchers in surface and sub-surface hydrology, soil and plant science, systems analysis and pesticide science, who have networked to develop empirical and mechanistic computer models describing the behavior of pests, pest responses to controls and the environmental impact of pest-control methods. Historically, much of this work has been in support of production agriculture and of the conservation programs of our 'action agency' sister, the Natural Resources Conservation Service (formerly the Soil Conservation Service). Because we are a public agency, our software/database products are generally offered without cost, unless they are developed in cooperation with a private-sector cooperator. Because ARS is a basic and applied research organization, with development of new science as our highest priority, these products tend to be offered on an 'as-is' basis with limited user support, except within cooperative R&D relationships with other scientists. However, rapid changes in the technology for information analysis and communication continually challenge our way of doing business.

  12. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  13. Use of a secure Internet Web site for collaborative medical research.

    PubMed

    Marshall, W W; Haley, R W

    2000-10-11

    Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
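
    Two of the listed measures, user-name/password authentication and audit trails, can be sketched with the Python standard library. The table layout, iteration count, and account names below are assumptions for illustration, not the authors' implementation.

        import hashlib
        import hmac
        import os
        import sqlite3
        import time

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE account (username TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB);
        CREATE TABLE audit   (ts REAL, username TEXT, action TEXT);
        """)

        def register(user, password):
            # Store a salted, slow hash rather than the password itself.
            salt = os.urandom(16)
            pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            conn.execute("INSERT INTO account VALUES (?,?,?)", (user, salt, pw_hash))

        def login(user, password):
            row = conn.execute("SELECT salt, pw_hash FROM account WHERE username=?",
                               (user,)).fetchone()
            ok = row is not None and hmac.compare_digest(
                row[1], hashlib.pbkdf2_hmac("sha256", password.encode(), row[0], 100_000))
            # Audit trail: every access attempt is recorded, successful or not.
            conn.execute("INSERT INTO audit VALUES (?,?,?)",
                         (time.time(), user, "login-ok" if ok else "login-fail"))
            return ok

        register("collaborator1", "s3cret")
        print(login("collaborator1", "s3cret"), login("collaborator1", "wrong"))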

  14. Management of information in distributed biomedical collaboratories.

    PubMed

    Keator, David B

    2009-01-01

    Organizing and annotating biomedical data in structured ways has gained much interest and focus in the last 30 years. Driven by decreases in digital storage costs and advances in genetics sequencing, imaging, electronic data collection, and microarray technologies, data is being collected at an alarming rate. The specialization of fields in biology and medicine demonstrates the need for somewhat different structures for storage and retrieval of data. For biologists, the need for structured information and integration across a number of domains drives development. For clinical researchers and hospitals, the need for a structured medical record accessible to, ideally, any medical practitioner who might require it during the course of research or patient treatment, patient confidentiality, and security are the driving developmental factors. Scientific data management systems generally consist of a few core services: a backend database system, a front-end graphical user interface, and an export/import mechanism or data interchange format to both get data into and out of the database and share data with collaborators. The chapter introduces some existing databases, distributed file systems, and interchange languages used within the biomedical research and clinical communities for scientific data management and exchange.

  15. FJET Database Project: Extract, Transform, and Load

    NASA Technical Reports Server (NTRS)

    Samms, Kevin O.

    2015-01-01

    The Data Mining & Knowledge Management team at Kennedy Space Center is providing data management services to the Frangible Joint Empirical Test (FJET) project at Langley Research Center (LARC). FJET is a project under the NASA Engineering and Safety Center (NESC). The purpose of FJET is to conduct an assessment of mild detonating fuse (MDF) frangible joints (FJs) for human spacecraft separation tasks in support of the NASA Commercial Crew Program. The Data Mining & Knowledge Management team has been tasked with creating and managing a database for the efficient storage and retrieval of FJET test data. This paper details the Extract, Transform, and Load (ETL) process as it is related to gathering FJET test data into a Microsoft SQL relational database, and making that data available to the data users. Lessons learned, procedures implemented, and programming code samples are discussed to help detail the learning experienced as the Data Mining & Knowledge Management team adapted to changing requirements and new technology while maintaining flexibility of design in various aspects of the data management project.
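
    An ETL pass in miniature: the sketch below extracts rows from a CSV source, transforms them (type conversion, unit conversion, dropping incomplete records), and loads them into a relational table. The column names and data are invented, and sqlite3 stands in for the Microsoft SQL Server target described above.

        import csv
        import io
        import sqlite3

        # Extract: read rows from the source file (an in-memory stand-in here).
        raw = io.StringIO("test_id,pressure_psi,joint_type\nT01,1450,MDF\nT02,,MDF\n")
        rows = list(csv.DictReader(raw))

        # Transform: convert types/units and drop records missing measurements.
        clean = [(r["test_id"], float(r["pressure_psi"]) * 6.894757, r["joint_type"])
                 for r in rows if r["pressure_psi"]]

        # Load: insert into the target relational table (pressure stored in kPa).
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE fj_test (test_id TEXT, pressure_kpa REAL, joint_type TEXT)")
        conn.executemany("INSERT INTO fj_test VALUES (?,?,?)", clean)
        print(conn.execute("SELECT * FROM fj_test").fetchall())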

  16. The National Nonindigenous Aquatic Species Database

    USGS Publications Warehouse

    Neilson, Matthew E.; Fuller, Pamela L.

    2012-01-01

    The U.S. Geological Survey (USGS) Nonindigenous Aquatic Species (NAS) Program maintains a database that monitors, records, and analyzes sightings of nonindigenous aquatic plant and animal species throughout the United States. The program is based at the USGS Wetland and Aquatic Research Center in Gainesville, Florida.The initiative to maintain scientific information on nationwide occurrences of nonindigenous aquatic species began with the Aquatic Nuisance Species Task Force, created by Congress in 1990 to provide timely information to natural resource managers. Since then, the NAS database has been a clearinghouse of information for confirmed sightings of nonindigenous, also known as nonnative, aquatic species throughout the Nation. The database is used to produce email alerts, maps, summary graphs, publications, and other information products to support natural resource managers.

  17. Data bases for forest inventory in the North-Central Region.

    Treesearch

    Jerold T. Hahn; Mark H. Hansen

    1985-01-01

    Describes the data collected by the Forest Inventory and Analysis (FIA) Research Work Unit at the North Central Forest Experiment Station. Explains how interested parties may obtain information from the databases either through direct access or by special requests to the FIA database manager.

  18. TRANSPORTATION RESEARCH IMPLEMENTATION MANAGEMENT : DEVELOPMENT OF PERFORMANCE BASED PROCESSES, METRICS, AND TOOLS

    DOT National Transportation Integrated Search

    2018-02-02

    The objective of this study is to develop an evidence-based research implementation database and tool to support research implementation at the Georgia Department of Transportation (GDOT). A review was conducted drawing from the (1) implementati...

  19. Research and Design of Embedded Wireless Meal Ordering System Based on SQLite

    NASA Astrophysics Data System (ADS)

    Zhang, Jihong; Chen, Xiaoquan

    The paper describes the features, internal architecture, and development methods of SQLite, and then presents the design and implementation of a meal-ordering system. The system realizes information interaction between users and embedded devices with SQLite as the database system. The embedded SQLite database manages the data, and wireless communication is achieved using Bluetooth. A system program based on Qt/Embedded and Linux drivers realizes the local management of environmental data.
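
    The embedded-database side of such a system can be sketched with Python's sqlite3 module; the order table and status workflow below are invented illustrations, not the paper's actual design.

        import sqlite3

        # Embedded database: one local SQLite file holds the orders.
        # (Connecting creates the orders.db file if it does not exist.)
        conn = sqlite3.connect("orders.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS meal_order (
            order_id  INTEGER PRIMARY KEY AUTOINCREMENT,
            table_no  INTEGER,
            dish      TEXT,
            status    TEXT DEFAULT 'pending'   -- updated as the kitchen progresses
        )""")

        def place_order(table_no, dish):
            # In the described system this call would be triggered by a message
            # arriving over the Bluetooth link from a handheld ordering device.
            conn.execute("INSERT INTO meal_order (table_no, dish) VALUES (?, ?)",
                         (table_no, dish))
            conn.commit()

        place_order(7, "fried rice")
        print(conn.execute("SELECT * FROM meal_order WHERE status='pending'").fetchall())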

  20. Development of the Lymphoma Enterprise Architecture Database: A caBIG(tm) Silver level compliant System

    PubMed Central

    Huang, Taoying; Shenoy, Pareen J.; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W.; Flowers, Christopher R.

    2009-01-01

    Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid™ (caBIG™) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system™ (LEAD™), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute’s Center for Bioinformatics to establish the LEAD™ platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD™ could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG™ can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG™ to the management of clinical and biological data. PMID:19492074

  1. Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.

    PubMed

    Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R

    2009-04-03

    Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.

  2. A dedicated database system for handling multi-level data in systems biology.

    PubMed

    Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens

    2014-01-01

    Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can address the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a single database. In addition, a yeast data repository was implemented as an integrated database environment which is operated by the database system. Two applications were implemented to demonstrate the extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) Detecting the pheromone pathway in protein interaction networks; and 2) Finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a single database environment for systems biology research.
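
    Sample case 1 can be imagined as a scripted query against the integrated repository; the interaction table, gene names, and pathway labels below are invented illustrations, not the paper's yeast schema.

        import sqlite3

        # Toy interaction table standing in for the integrated yeast repository.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE interaction (gene_a TEXT, gene_b TEXT, pathway TEXT)")
        conn.executemany("INSERT INTO interaction VALUES (?,?,?)", [
            ("STE2", "STE4", "pheromone"),
            ("STE4", "STE5", "pheromone"),
            ("SNF1", "MIG1", "glucose repression"),
        ])

        # Sample case 1 in miniature: pull the protein-interaction edges of one pathway.
        edges = conn.execute(
            "SELECT gene_a, gene_b FROM interaction WHERE pathway = ?",
            ("pheromone",)).fetchall()
        print(edges)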

  3. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
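
    The query-feedback idea can be sketched in a few lines: observed selectivities from executed queries are fed to a least-squares curve fit, and the fitted curve replaces static statistics when estimating the selectivity of new predicates. The sketch assumes numpy is available; the feedback numbers and the quadratic model are illustrative choices.

        import numpy as np  # assumes numpy is available

        # Query feedback in miniature: each executed query reports
        # (predicate value, observed selectivity) back to the optimizer.
        feedback_values      = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
        observed_selectivity = np.array([0.02, 0.05, 0.11, 0.19, 0.33])

        # Least-squares fit of a quadratic selectivity curve to the feedback.
        coeffs = np.polyfit(feedback_values, observed_selectivity, deg=2)

        # The optimizer consults the fitted curve for an unseen predicate value,
        # instead of relying on statistics computed off-line.
        estimate = np.polyval(coeffs, 35.0)
        print(f"estimated selectivity at 35: {estimate:.3f}")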

  4. Development of a functional, internet-accessible department of surgery outcomes database.

    PubMed

    Newcomb, William L; Lincourt, Amy E; Gersin, Keith; Kercher, Kent; Iannitti, David; Kuwada, Tim; Lyons, Cynthia; Sing, Ronald F; Hadzikadic, Mirsad; Heniford, B Todd; Rucho, Susan

    2008-06-01

    The need for surgical outcomes data is increasing due to pressure from insurance companies, patients, and the need for surgeons to keep their own "report card". Current data management systems are limited by inability to stratify outcomes based on patients, surgeons, and differences in surgical technique. Surgeons along with research and informatics personnel from an academic, hospital-based Department of Surgery and a state university's Department of Information Technology formed a partnership to develop a dynamic, internet-based, clinical data warehouse. A five-component model was used: data dictionary development, web application creation, participating center education and management, statistics applications, and data interpretation. A data dictionary was developed from a list of data elements to address needs of research, quality assurance, industry, and centers of excellence. A user-friendly web interface was developed with menu-driven check boxes, multiple electronic data entry points, direct downloads from hospital billing information, and web-based patient portals. Data were collected on a Health Insurance Portability and Accountability Act-compliant server with a secure firewall. Protected health information was de-identified. Data management strategies included automated auditing, on-site training, a trouble-shooting hotline, and Institutional Review Board oversight. Real-time, daily, monthly, and quarterly data reports were generated. Fifty-eight publications and 109 abstracts have been generated from the database during its development and implementation. Seven national academic departments now use the database to track patient outcomes. The development of a robust surgical outcomes database requires a combination of clinical, informatics, and research expertise. Benefits of surgeon involvement in outcomes research include: tracking individual performance, patient safety, surgical research, legal defense, and the ability to provide accurate information to patient and payers.

  5. Federal Funding Opportunities - National Site for the Regional IPM Centers

    Science.gov Websites

    Lists federal funding opportunities relevant to integrated pest management, including Crop Protection and Pest Management (CPPM), Small Business Innovation Research (SBIR), Sustainable Agriculture Research and Education (SARE), the Specialty Crop Research Initiative / Citrus Disease Research and Extension program, and organic agriculture research, along with a funding opportunities database from the United States Department of Agriculture's National Institute of Food and Agriculture.

  6. A spatial classification and database for management, research, and policy making: The Great Lakes aquatic habitat framework

    USGS Publications Warehouse

    Wang, Lizhu; Riseng, Catherine M.; Mason, Lacey; Werhrly, Kevin; Rutherford, Edward; McKenna, James E.; Castiglione, Chris; Johnson, Lucinda B.; Infante, Dana M.; Sowa, Scott P.; Robertson, Mike; Schaeffer, Jeff; Khoury, Mary; Gaiot, John; Hollenhurst, Tom; Brooks, Colin N.; Coscarelli, Mark

    2015-01-01

    Managing the world's largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a spatial classification framework and database — Great Lakes Aquatic Habitat Framework (GLAHF). GLAHF consists of catchments, coastal terrestrial, coastal margin, nearshore, and offshore zones that encompass the entire Great Lakes Basin. The catchments captured in the database as river pour points or coastline segments are attributed with data known to influence physicochemical and biological characteristics of the lakes from the catchments. The coastal terrestrial zone consists of 30-m grid cells attributed with data from the terrestrial region that has direct connection with the lakes. The coastal margin and nearshore zones consist of 30-m grid cells attributed with data describing the coastline conditions, coastal human disturbances, and moderately to highly variable physicochemical and biological characteristics. The offshore zone consists of 1.8-km grid cells attributed with data that are spatially less variable compared with the other aquatic zones. These spatial classification zones and their associated data are nested within lake sub-basins and political boundaries and allow the synthesis of information from grid cells to classification zones, within and among political boundaries, lake sub-basins, Great Lakes, or within the entire Great Lakes Basin. This spatially structured database could help the development of basin-wide management plans, prioritize locations for funding and specific management actions, track protection and restoration progress, and conduct research for science-based decision making.
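
    The synthesis the framework enables, rolling attributes up from grid cells to classification zones and sub-basins, can be pictured with a small, hypothetical aggregation; GLAHF itself is a GIS database, and the field names below are invented:

        # Hedged sketch of grid-cell-to-zone synthesis (fields hypothetical).
        from collections import defaultdict

        cells = [
            {"zone": "nearshore", "sub_basin": "Lake Erie West", "turbidity": 4.2},
            {"zone": "nearshore", "sub_basin": "Lake Erie West", "turbidity": 3.8},
            {"zone": "offshore",  "sub_basin": "Lake Erie West", "turbidity": 1.1},
        ]

        totals = defaultdict(lambda: [0.0, 0])
        for c in cells:
            key = (c["sub_basin"], c["zone"])
            totals[key][0] += c["turbidity"]
            totals[key][1] += 1

        for (basin, zone), (s, n) in totals.items():
            print(f"{basin} / {zone}: mean turbidity {s / n:.2f} over {n} cells")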

  7. 75 FR 61761 - Renewal of Charter for the Chronic Fatigue Syndrome Advisory Committee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-06

    ... professionals, and the biomedical, academic, and research communities about chronic fatigue syndrome advances... accessing the FACA database that is maintained by the Committee Management Secretariat under the General Services Administration. The Web site address for the FACA database is http://fido.gov/facadatabase. Dated...

  8. Design and Performance of a Xenobiotic Metabolism Database Manager for Building Metabolic Pathway Databases

    EPA Science Inventory

    A major challenge for scientists and regulators is accounting for the metabolic activation of chemicals that may lead to increased toxicity. Reliable forecasting of chemical metabolism is a critical factor in estimating a chemical’s toxic potential. Research is underway to develo...

  9. PylotDB - A Database Management, Graphing, and Analysis Tool Written in Python

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    2012-01-04

    PylotDB, written completely in Python, provides a user interface (UI) for interacting with, analyzing, graphing data from, and managing open source databases such as MySQL. The UI spares the user from needing in-depth knowledge of the database application programming interface (API). PylotDB allows the user to generate various kinds of plots from user-selected data; generate statistical information on text as well as numerical fields; back up and restore databases; compare database tables across different databases as well as across different servers; extract information from any field to create new fields; generate, edit, and delete databases, tables, and fields; read CSV data into a table or generate CSV data from one; and perform similar operations. Since much of the database information is brought under control of the Python language, PylotDB is not intended for huge databases, for which MySQL and Oracle, for example, are better suited. PylotDB is better suited to the smaller databases typically needed by a small research group. PylotDB can also be used as a learning tool for database applications in general.
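
    A minimal sketch of the kind of operations PylotDB automates behind its UI, shown here against an in-memory sqlite3 stand-in so the example is self-contained (PylotDB itself targets MySQL):

        # Illustrative only: query a table, compute simple statistics, export CSV.
        import csv, sqlite3, statistics

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE runs (test TEXT, seconds REAL);
        INSERT INTO runs VALUES ('io', 1.2), ('io', 1.4), ('cpu', 0.7);
        """)

        values = [row[0] for row in con.execute("SELECT seconds FROM runs")]
        print("mean:", statistics.mean(values), "stdev:", statistics.stdev(values))

        with open("runs.csv", "w", newline="") as f:   # CSV export, as the tool offers
            writer = csv.writer(f)
            writer.writerow(["test", "seconds"])
            writer.writerows(con.execute("SELECT * FROM runs"))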

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kogalovskii, M.R.

    This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDBs) are databases designed to support statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present Database Management Systems (DBMSs) satisfy the SDB requirements. Some current research directions in SDB systems are considered.

  11. Optimization of the efficiency of search operations in the relational database of radio electronic systems

    NASA Astrophysics Data System (ADS)

    Wajszczyk, Bronisław; Biernacki, Konrad

    2018-04-01

    The increasing interoperability of radio electronic systems used in the Armed Forces requires the processing of very large amounts of data. Requirements for the integration of information from many systems and sensors, including radar, electronic, and optical reconnaissance, force designers to look for more efficient methods of supporting information retrieval in ever-larger database resources. This paper presents the results of research on methods of improving the efficiency of databases using various types of indexes. Indexing of data structures is a standard technique in relational database management systems (RDBMSs). However, analyses of index performance, descriptions of potential applications, and in particular presentations of the specific scale of performance gains for individual index types are limited to a few studies in this field. This paper analyzes methods affecting the efficiency of a relational database management system. As a result of the research, a significant increase in the efficiency of operations on data was achieved through a strategy of indexing data structures. The research presented in this paper mainly consists of testing the operation of various indexes against different queries and data structures. The conclusions from the experiments allow an assessment of the effectiveness of the solutions proposed and applied in the research. The results indicate a real increase in the performance of operations on data when data structures are indexed, and quantify that increase by index type.
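
    The effect the paper measures can be reproduced in miniature: the same selective query is planned before and after an index is created. The sketch below uses sqlite3 for self-containment (the study itself concerns a full RDBMS) and a hypothetical schema:

        # Hedged illustration: query plan before and after adding an index.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE tracks (sensor_id INTEGER, ts REAL, bearing REAL)")
        con.executemany("INSERT INTO tracks VALUES (?, ?, ?)",
                        [(i % 50, i * 0.1, i % 360) for i in range(10_000)])

        query = "SELECT * FROM tracks WHERE sensor_id = 7"
        print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # full table scan

        con.execute("CREATE INDEX idx_sensor ON tracks(sensor_id)")
        print(con.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # index search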

  12. Medical informatics in medical research - the Severe Malaria in African Children (SMAC) Network's experience.

    PubMed

    Olola, C H O; Missinou, M A; Issifou, S; Anane-Sarpong, E; Abubakar, I; Gandi, J N; Chagomerana, M; Pinder, M; Agbenyega, T; Kremsner, P G; Newton, C R J C; Wypij, D; Taylor, T E

    2006-01-01

    Computers are widely used for data management in clinical trials in developed countries, but much less so in developing countries. Dependable systems are vital for data management and medical decision making in clinical research, and monitoring and evaluation of data management are critical. In this paper we describe the database structures and procedures of the systems used to implement, coordinate, and sustain data management in Africa. We outline the major lessons, challenges, and successes, and offer recommendations for improving the application of medical informatics in biomedical research in sub-Saharan Africa. A consortium of research units at five African sites with experience in studying children with disease formed a new clinical trials network, Severe Malaria in African Children. In December 2000, the network introduced an observational study involving these hospital-based sites. After prototyping, relational database management systems were implemented for data entry and verification, data submission, and quality assurance monitoring. Between 2000 and 2005, 25,858 patients were enrolled. Failure to meet data submission deadlines and data entry errors correlated positively (correlation coefficient, r = 0.82), with more errors occurring when data were submitted late. Data submission lateness correlated inversely with hospital admissions (r = -0.62). Developing and sustaining dependable DBMSs, with ongoing modifications to optimize data management, is crucial for clinical studies. Monitoring and communication systems are vital for good data management in multi-center networks. Data timeliness is associated with data quality and hospital admissions.
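
    For reference, the reported correlations are plain Pearson coefficients; a small sketch with hypothetical numbers shows the computation:

        # Pearson's r between days late and data-entry errors (numbers invented).
        days_late = [0, 2, 5, 1, 7, 3]
        errors    = [3, 2, 9, 1, 10, 7]

        n = len(days_late)
        mx, my = sum(days_late) / n, sum(errors) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(days_late, errors))
        sx  = sum((x - mx) ** 2 for x in days_late) ** 0.5
        sy  = sum((y - my) ** 2 for y in errors) ** 0.5
        print(f"r = {cov / (sx * sy):.2f}")  # positive r: later submissions, more errors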

  13. The Computerized Reference Department: Buying the Future.

    ERIC Educational Resources Information Center

    Kriz, Harry M.; Kok, Victoria T.

    1985-01-01

    Basis for systematic computerization of academic research library's reference, collection development, and collection management functions emphasizes productivity enhancement for librarians and support staff. Use of microcomputer and university's mainframe computer to develop applications of database management systems, electronic spreadsheets,…

  14. Data, knowledge and method bases in chemical sciences. Part IV. Current status in databases.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Rao, Gollapalli Nagesvara; Ramam, Veluri Anantha; Rao, Sattiraju Veera Venkata Satyanarayana

    2002-01-01

    Computer-readable databases have become an integral part of chemical research, from planning data acquisition to interpreting the information generated. The databases available today are numerical, spectral, and bibliographic. Data representation by different schemes--relational, hierarchical, and object-based--is demonstrated. A quality index (QI) sheds light on the quality of the data. The objectives, prospects, and impact of database activity on expert systems are discussed. The number and size of corporate databases available on international networks have grown beyond what is manageable, leading to databases about their contents. Subsets of corporate or small databases have been developed by groups of chemists. The features and role of knowledge-based or intelligent databases are described.

  15. Generation of comprehensive thoracic oncology database--tool for translational research.

    PubMed

    Surati, Mosmi; Robinson, Matthew; Nandi, Suvobroto; Faoro, Leonardo; Demchuk, Carley; Kanteti, Rajani; Ferguson, Benjamin; Gangadhar, Tara; Hensing, Thomas; Hasina, Rifat; Husain, Aliya; Ferguson, Mark; Karrison, Theodore; Salgia, Ravi

    2011-01-22

    The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository of well-annotated cancer specimens and clinical data available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined, and their descriptions were written in a standard operating manual to ensure consistency of data annotation. Using one protocol for prospective tissue banking and another for retrospective banking, tumor and normal tissue samples were collected from patients who consented to these protocols. Clinical information, such as demographics, cancer characterization, and treatment plans for these patients, was abstracted and entered into a Microsoft Access database. Proteomic and genomic data have been included in the database and linked to the clinical information for the patients described within it. The data from each table were linked using the relationships function in Microsoft Access, allowing the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
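
    The table linkage described above corresponds to an ordinary relational join on a shared patient key. A hedged sketch with hypothetical table and column names, with sqlite3 standing in for Microsoft Access:

        # Join clinical and laboratory tables on a shared patient key (schema invented).
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE clinical  (patient_id INTEGER PRIMARY KEY, stage TEXT);
        CREATE TABLE proteomic (patient_id INTEGER, marker TEXT, level REAL);
        INSERT INTO clinical  VALUES (1, 'IIIA'), (2, 'IB');
        INSERT INTO proteomic VALUES (1, 'MET', 2.4), (2, 'MET', 0.9);
        """)
        for row in con.execute("""
            SELECT c.patient_id, c.stage, p.marker, p.level
            FROM clinical c JOIN proteomic p ON p.patient_id = c.patient_id
        """):
            print(row)   # the query result could be exported for statistical analysis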

  16. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle that limits wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a relational database implementation, designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). The contributions are: (1) development of a data model capable of efficiently representing and storing virtual-slide-related image, annotation, markup, and feature information; and (2) development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications, such as shape and texture, and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual-slide-related image, annotation, markup, and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. 
Modeling and managing pathology image analysis results in a database provides immediate benefits for the value and usability of data in a research study. The database provides powerful query capabilities that are otherwise difficult or cumbersome to support with other approaches, such as programming languages. Standardized, semantically annotated data representations and interfaces also make it possible to share image data and analysis results more efficiently.
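
    A toy version of one spatial query type mentioned above, finding segmented objects inside a region of interest; a real deployment would use the database's spatial extensions rather than Python loops, and the markups here are invented:

        # Hedged sketch of a containment query over segmented-object bounding boxes.
        def inside(box, roi):
            """box/roi are (xmin, ymin, xmax, ymax) in slide pixel coordinates."""
            return (box[0] >= roi[0] and box[1] >= roi[1]
                    and box[2] <= roi[2] and box[3] <= roi[3])

        nuclei = {"n1": (120, 80, 140, 95), "n2": (900, 700, 930, 760)}  # hypothetical
        roi = (100, 50, 500, 400)
        print([nid for nid, box in nuclei.items() if inside(box, roi)])  # -> ['n1']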

  17. HCE Research Coordination Directorate (ReCoorD Database)

    DTIC Science & Technology

    2016-04-27

    portfolio management is often hidden within broader mission scopes, and visibility into those portfolios is often limited at best. Current specialty...specific tracking databases do not exist. Current broad-sweeping portfolio management tools do not exist (not true--define terms?). The HCE receives...requests from a variety of oversight bodies for reports on the current state of project-through-portfolio efforts. Tools such as NIH's Reporter, while still in development, do not yet appear to meet HCE element requirements.

  18. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    PubMed

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.

  19. Information for Institutional Renewal.

    ERIC Educational Resources Information Center

    Spencer, Richard L.

    1979-01-01

    Discusses a planning, management, and evaluation system, an objective-based planning process, research databases, analytical reports, and transactional data as state-of-the-art tools available to generate data which link research directly to planning for institutional renewal. (RC)

  20. Information management systems for pharmacogenomics.

    PubMed

    Thallinger, Gerhard G; Trajanoski, Slave; Stocker, Gernot; Trajanoski, Zlatko

    2002-09-01

    The value of high-throughput genomic research is dramatically enhanced by association with key patient data. These data are generally available but of disparate quality and not typically directly associated. A system that could bring these disparate data sources into a common resource connected with functional genomic data would be tremendously advantageous. However, the integration of clinical data and the accurate interpretation of the generated functional genomic data require the development of information management systems capable of effectively capturing the data, as well as tools to make the data accessible to the laboratory scientist or the clinician. In this review, these challenges and the current information technology solutions associated with the management, storage, and analysis of high-throughput data are highlighted. It is suggested that the development of a pharmacogenomic data management system integrating public and proprietary databases, clinical datasets, and data mining tools embedded in a high-performance computing environment should include the following components: parallel processing systems, storage technologies, network technologies, databases and database management systems (DBMS), and application services.

  1. Designing and Developing a NASA Research Projects Knowledge Base and Implementing Knowledge Management and Discovery Techniques

    NASA Astrophysics Data System (ADS)

    Dabiru, L.; O'Hara, C. G.; Shaw, D.; Katragadda, S.; Anderson, D.; Kim, S.; Shrestha, B.; Aanstoos, J.; Frisbie, T.; Policelli, F.; Keblawi, N.

    2006-12-01

    The Research Project Knowledge Base (RPKB) is currently being designed and will be implemented in a manner that is fully compatible and interoperable with the enterprise architecture tools developed to support NASA's Applied Sciences Program. Through user needs assessments and collaboration with Stennis Space Center, Goddard Space Flight Center, and NASA's DEVELOP staff, insight into information needs for the RPKB was gathered from across NASA's scientific communities of practice. To enable efficient, consistent, standard, structured, and managed data entry and research results compilation, a prototype RPKB has been designed and fully integrated with the existing NASA Earth Science Systems Components database. The RPKB will compile research project and keyword information of relevance to the six major science focus areas, 12 national applications, and the Global Change Master Directory (GCMD). The RPKB will include information about projects awarded from NASA research solicitations, project investigator information, research publications, NASA data products employed, and model or decision support tools used or developed, as well as new data product information. The RPKB will be developed in a multi-tier architecture that will include a SQL Server relational database backend, middleware, and front-end client interfaces for data entry. The purpose of this project is to intelligently harvest the results of research sponsored by the NASA Applied Sciences Program and related research programs. We present various approaches for a wide spectrum of knowledge discovery of research results, publications, projects, etc., from the NASA Systems Components database and global information systems, and show how this is implemented in a SQL Server database. The application of knowledge discovery is useful for intelligent query answering and multiple-layered database construction. Using advanced EA tools such as the Earth Science Architecture Tool (ESAT), RPKB will enable NASA and partner agencies to efficiently identify significant results for new experiment directions and principal investigators to formulate experiment directions for new proposals.

  2. Following the Yellow Brick Road to Simplified Link Management

    ERIC Educational Resources Information Center

    Engard, Nicole C.

    2005-01-01

    Jenkins Law Library is the oldest law library in America, and has a reputation for offering great content not only to local attorneys, but also to the entire legal research community. In this article, the author, who is Web manager at Jenkins, describes the development of an automated process by which research links can be added to the database so…

  3. Microcomputer Software for Libraries: A Survey.

    ERIC Educational Resources Information Center

    Nolan, Jeanne M.

    1983-01-01

    Reports on findings of research done by Nolan Information Management Services concerning availability of microcomputer software for libraries. Highlights include software categories (specific, generic-database management programs, original); number of programs available in 1982 for 12 applications; projections for 1983; and future software…

  4. Utilizing LIDAR data to analyze access management criteria in Utah.

    DOT National Transportation Integrated Search

    2017-05-01

    The primary objective of this research was to increase understanding of the safety impacts across the state related to access management. This was accomplished by using the Light Detection and Ranging (LiDAR) database to evaluate driveway spacing and...

  5. Energy science and technology database (on the internet). Online data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The Energy Science and Technology Database (EDB) is a multidisciplinary file containing worldwide references to basic and applied scientific and technical research literature. The information is collected for use by government managers, researchers at the national laboratories, and other research efforts sponsored by the U.S. Department of Energy, and the results of this research are transferred to the public. Abstracts are included for records from 1976 to the present. The EDB also contains the Nuclear Science Abstracts, a comprehensive abstract and index collection for the international nuclear science and technology literature for the period 1948 through 1976. Included are scientific and technical reports of the U.S. Atomic Energy Commission, the U.S. Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Approximately 25% of the records in the file contain abstracts. Nuclear Science Abstracts contains over 900,000 bibliographic records. The entire Energy Science and Technology Database contains over 3 million bibliographic records. This database is now available for searching through the GOV.Research-Center (GRC) service. GRC is a single online web-based search service to well-known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.

  6. Creating Smarter Classrooms: Data-Based Decision Making for Effective Classroom Management

    ERIC Educational Resources Information Center

    Gage, Nicholas A.; McDaniel, Sara

    2012-01-01

    The term "data-based decision making" (DBDM) has become pervasive in education and typically refers to the use of data to make decisions in schools, from assessment of an individual student's academic progress to whole-school reform efforts. Research suggests that special education teachers who use progress monitoring data (a DBDM…

  7. Computer networks for financial activity management, control and statistics of databases of economic administration at the Joint Institute for Nuclear Research

    NASA Astrophysics Data System (ADS)

    Tyupikova, T. V.; Samoilov, V. N.

    2003-04-01

    Modern information technologies push the natural sciences toward further development, but this also requires the evolution of supporting infrastructures: favorable conditions for the development of science, and a financial base to substantiate and legally protect new research. Any scientific development entails accounting and legal protection. In this report, we consider a new direction in the software, organization, and control of shared databases, using as an example the electronic document handling system that functions in several departments of the Joint Institute for Nuclear Research.

  8. Integration of Web-based and PC-based clinical research databases.

    PubMed

    Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M

    2004-01-01

    We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.
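
    Schema mapping of the kind described, translating each source system's table and attribute names onto the common semantic model, can be sketched as follows; the names are hypothetical, not GRIL's actual schema:

        # Hedged sketch: resolve naming conflicts by mapping each source onto a
        # common semantic model before integration (all names invented).
        COMMON_MODEL = {
            "web_system": {"instr_name": "instrument.name", "kw": "instrument.keyword"},
            "pc_system":  {"InstrumentTitle": "instrument.name", "Keyword": "instrument.keyword"},
        }

        def to_common(source: str, record: dict) -> dict:
            """Translate a source record into the common semantic model."""
            mapping = COMMON_MODEL[source]
            return {mapping[k]: v for k, v in record.items() if k in mapping}

        print(to_common("pc_system", {"InstrumentTitle": "MMSE", "Keyword": "cognition"}))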

  9. Construction of a database for published phase II/III drug intervention clinical trials for the period 2009-2014 comprising 2,326 records, 90 disease categories, and 939 drug entities.

    PubMed

    Jeong, Sohyun; Han, Nayoung; Choi, Boyoon; Sohn, Minji; Song, Yun-Kyoung; Chung, Myeon-Woo; Na, Han-Sung; Ji, Eunhee; Kim, Hyunah; Rhew, Ki Yon; Kim, Therasa; Kim, In-Wha; Oh, Jung Mi

    2016-06-01

    To construct a database of published clinical drug trials suitable for use 1) as a research tool for accessing clinical trial information and 2) in evidence-based decision-making by regulatory professionals, clinical research investigators, and medical practitioners. Comprehensive information was obtained from a search of the design elements and results of clinical trials in peer-reviewed journals using PubMed (http://www.ncbi.nlm.nih.gov/pubmed). The methodology to develop a structured database was devised by a panel composed of experts in medicine, pharmacy, and information technology, together with members of the Ministry of Food and Drug Safety (MFDS), using a step-by-step approach. A double-sided system consisting of a user mode and a manager mode served as the framework for the database; elements of interest from each trial were entered via the secure manager mode, enabling the input information to be accessed in a user-friendly manner (user mode). Information regarding the methodology used and the results of drug treatment was extracted as detail elements of each data set and then entered into the web-based database system. Comprehensive information comprising 2,326 clinical trial records, 90 disease states, and 939 drug entities, covering study objectives, background, methods, results, and conclusions, could be extracted from published information on phase II/III drug intervention clinical trials appearing in SCI journals within the last 10 years. The extracted data were successfully assembled into a clinical drug trial database with easy access, suitable for use as a research tool. The clinically most important therapeutic categories, i.e., cancer, cardiovascular, respiratory, neurological, metabolic, urogenital, gastrointestinal, psychological, and infectious diseases, are covered by the database. Names of test and control drugs and details on primary and secondary outcomes, together with indexed keywords, can also be retrieved. The structure of the database enables the user to sort and download targeted information as a Microsoft Excel spreadsheet. Because of the comprehensive and standardized nature of the clinical drug trial database and its ease of access, it should serve as a valuable information repository and research tool for accessing clinical trial information and making evidence-based decisions by regulatory professionals, clinical research investigators, and medical practitioners.
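
    The sort-and-download capability can be pictured as filter-then-export; the sketch below writes a spreadsheet-compatible CSV with hypothetical fields (the actual system exports Microsoft Excel files):

        # Hedged sketch: filter trial records by therapeutic category, then export.
        import csv

        trials = [
            {"drug": "drug A", "category": "cardiovascular", "phase": "III"},
            {"drug": "drug B", "category": "infectious",     "phase": "II"},
        ]
        selected = [t for t in trials if t["category"] == "cardiovascular"]

        with open("trials.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["drug", "category", "phase"])
            writer.writeheader()
            writer.writerows(selected)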

  10. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.

  11. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  12. Development of a Web-Enabled Informatics Platform for Manipulation of Gene Expression Data

    DTIC Science & Technology

    2004-12-01

    genomic platforms such as metabolomics and proteomics, and to federated databases for knowledge management. A successful SBIR Phase I completed...measurements that require sophisticated bioinformatic platforms for data archival, management, integration, and analysis if researchers are to derive...web-enabled bioinformatic platform consisting of a Laboratory Information Management System (LIMS), an Analysis Information Management System (AIMS

  13. Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.

    PubMed

    Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar

    2017-01-01

    Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used in the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases. Data have been integrated using MySQL, a relational database management system. MySQL manages all data in the back-end and provides commands to retrieve and store the data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains oncogenomic information on lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.

  14. Wet weather highway accident analysis and skid resistance data management system (volume I).

    DOT National Transportation Integrated Search

    1992-06-01

    The objectives and scope of this research are to establish an effective methodology for wet weather accident analysis and to develop a database management system to facilitate information processing and storage for the accident analysis process, skid...

  15. An Integrated Korean Biodiversity and Genetic Information Retrieval System

    PubMed Central

    Lim, Jeongheui; Bhak, Jong; Oh, Hee-Mock; Kim, Chang-Bae; Park, Yong-Ha; Paek, Woon Kee

    2008-01-01

    Background On-line biodiversity information databases are growing quickly and being integrated into general bioinformatics systems, owing to advances in fast gene sequencing technologies and the Internet. These can reduce the cost and effort of performing biodiversity surveys and genetic searches, allowing scientists to spend more time on research and less on collecting and maintaining data. This will increase the rate of knowledge build-up and improve conservation. The biodiversity databases in Korea have been scattered among several institutes and local natural history museums with incompatible data types. Therefore, a comprehensive database and a nationwide web portal for biodiversity information are necessary in order to integrate diverse information resources, including molecular and genomic databases. Results The Korean Natural History Research Information System (NARIS) was built and put into service as the central biodiversity information system to collect and integrate the biodiversity data of various institutes and natural history museums in Korea. This database aims to be an integrated resource that contains additional biological information, such as genome sequences and molecular-level diversity. Currently, twelve institutes and museums in Korea are integrated via the DiGIR (Distributed Generic Information Retrieval) protocol, with the Darwin Core 2.0 format as its metadata standard for data exchange. Data quality control and statistical analysis functions have been implemented. In particular, integration of molecular and genetic information from the National Center for Biotechnology Information (NCBI) databases with NARIS was recently accomplished. NARIS can also be extended to accommodate other institutes abroad, and the whole system can be exported to establish local biodiversity management servers. Conclusion A Korean data portal, NARIS, has been developed to efficiently manage and utilize biodiversity data, including genetic resources. NARIS aims to be integral in maximizing bio-resource utilization for conservation, management, research, education, industrial applications, and integration with other bioinformation data resources. It can be found at . PMID:19091024

  16. Dynamic taxonomies applied to a web-based relational database for geo-hydrological risk mitigation

    NASA Astrophysics Data System (ADS)

    Sacco, G. M.; Nigrelli, G.; Bosio, A.; Chiarle, M.; Luino, F.

    2012-02-01

    In its 40 years of activity, the Research Institute for Geo-hydrological Protection of the Italian National Research Council has amassed a vast and varied collection of historical documentation on landslides, muddy-debris flows, and floods in northern Italy from 1600 to the present. Since 2008, the archive resources have been maintained through a relational database management system. The database is used for routine study and research purposes as well as for providing support during geo-hydrological emergencies, when data need to be quickly and accurately retrieved. Retrieval speed and accuracy are the main objectives of an implementation based on a dynamic taxonomies model. Dynamic taxonomies are a general knowledge management model for configuring complex, heterogeneous information bases that support exploratory searching. At each stage of the process, the user can explore or browse the database in a guided yet unconstrained way by selecting the alternatives suggested for further refining the search. Dynamic taxonomies have been successfully applied to such diverse and apparently unrelated domains as e-commerce and medical diagnosis. Here, we describe the application of dynamic taxonomies to our database and compare it to traditional relational database query methods. The dynamic taxonomy interface, essentially a point-and-click interface, is considerably faster and less error-prone than traditional form-based query interfaces that require the user to remember and type in the "right" search keywords. Finally, dynamic taxonomy users have confirmed that one of the principal benefits of this approach is the confidence of having considered all the relevant information. Dynamic taxonomies and relational databases work in synergy to provide fast and precise searching: one of the most important factors in timely response to emergencies.
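
    The guided-refinement behavior of dynamic taxonomies can be sketched in a few lines: after each facet selection, only values that still occur in the remaining records are offered for further refinement. Records and fields below are invented, not the Institute's archive:

        # Hedged sketch of faceted refinement over event records (data hypothetical).
        events = [
            {"type": "flood",     "region": "Piedmont", "century": "20th"},
            {"type": "landslide", "region": "Piedmont", "century": "19th"},
            {"type": "flood",     "region": "Liguria",  "century": "20th"},
        ]

        def refine(records, facet, value):
            return [r for r in records if r[facet] == value]

        def remaining_choices(records, facet):
            return sorted({r[facet] for r in records})

        floods = refine(events, "type", "flood")
        print(remaining_choices(floods, "region"))   # -> ['Liguria', 'Piedmont']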

  17. Toward public volume database management: a case study of NOVA, the National Online Volumetric Archive

    NASA Astrophysics Data System (ADS)

    Fletcher, Alex; Yoo, Terry S.

    2004-04-01

    Public databases today can be constructed with a wide variety of authoring and management structures. The widespread appeal of Internet search engines suggests that public information be made open and available to common search strategies, making accessible information that would otherwise be hidden by the infrastructure and software interfaces of a traditional database management system. We present the construction and organizational details for managing NOVA, the National Online Volumetric Archive. As an archival effort of the Visible Human Project for supporting medical visualization research, archiving 3D multimodal radiological teaching files, and enhancing medical education with volumetric data, our overall database structure is simplified; archives grow by accruing information, but seldom have to modify, delete, or overwrite stored records. NOVA is being constructed and populated so that it is transparent to the Internet; that is, much of its internal structure is mirrored in HTML allowing internet search engines to investigate, catalog, and link directly to the deep relational structure of the collection index. The key organizational concept for NOVA is the Image Content Group (ICG), an indexing strategy for cataloging incoming data as a set structure rather than by keyword management. These groups are managed through a series of XML files and authoring scripts. We cover the motivation for Image Content Groups, their overall construction, authorship, and management in XML, and the pilot results for creating public data repositories using this strategy.

  18. From a Viewpoint of Clinical Settings: Pharmacoepidemiology as Reverse Translational Research (rTR).

    PubMed

    Kawakami, Junichi

    2017-01-01

    Clinical pharmacology and pharmacoepidemiology research may converge in practice. Pharmacoepidemiology is the study of pharmacotherapy and risk management in patient groups. For many drugs, adverse reactions that were not seen and/or clarified during the research and development stages have been reported in the real world. Pharmacoepidemiology can detect and verify adverse drug reactions as reverse translational research. Recently, the development and effective use of medical information databases (MIDs) have been pursued in Japan and elsewhere for the purpose of post-marketing drug safety. The Ministry of Health, Labour and Welfare, Japan has been promoting the development of a 10-million-patient-scale database across 10 hospitals and hospital groups as "the infrastructure project of medical information database (MID-NET)". This project enables estimation of the frequency of adverse reactions, distinction between drug-induced reactions and changes in underlying health conditions, and verification of the usefulness of administrative drug-safety measures. However, because the database information differs from detailed medical records, methodologies for the detection and evaluation of adverse reactions must be constructed. We have been performing database research using the medical information systems of several hospitals to establish and demonstrate useful methods for post-marketing safety. In this symposium, we aim to discuss the possibility of reverse translational research from clinical settings and provide an introduction to our research.

  19. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takaya

    The author describes the progress and present status of the information management system at the research laboratories, the R&D component of a pharmaceutical company. The system deals with three fundamental types of information: graphic, numeric, and textual, the last of which can embed the former two. The author and colleagues have constructed a system that can process these kinds of information in an integrated way. The system is also notable in that text mixing Japanese (2-byte) and English (1-byte) characters, in the natural form familiar from personal computers and word processors, can be processed by large mainframe computers, since the Japanese text is handled as fully computer-processable data. The system is intended primarily for research administrators but can also be effective for researchers. At present, 7 databases are available, including external databases, and the system is always ready to accept new databases.

  20. The National Deep-Sea Coral and Sponge Database: A Comprehensive Resource for United States Deep-Sea Coral and Sponge Records

    NASA Astrophysics Data System (ADS)

    Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.

    2014-12-01

    Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and journal publications provide baseline information with relatively coarse spatial resolution, dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution, from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them to the NOAA National Data Centers, to build a robust metadata record, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database. Records can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision it as a model that can expand to accommodate data on a global scale.

  1. Unified Access Architecture for Large-Scale Scientific Datasets

    NASA Astrophysics Data System (ADS)

    Karna, Risav

    2014-05-01

    Data-intensive sciences have to deploy diverse large-scale database technologies for data analytics, as scientists now deal with much larger data volumes than ever before. While array databases have bridged many gaps between the needs of data-intensive research fields and DBMS technologies (Zhang 2011), invocation of the other big data tools accompanying these databases is still manual and separate from the database management interface. We identify this as an architectural challenge that will increasingly complicate the user's workflow, owing to the growing number of useful but isolated and niche database tools. Such use of data analysis tools in effect leaves the burden on the user to synchronize the results from these data manipulation and analysis tools with the database management system. To this end, we propose a unified access interface for using big data tools within a large-scale scientific array database, using the database queries themselves to embed foreign routines belonging to the big data tools. Such an invocation of foreign data manipulation routines inside a query can be made possible through a user-defined function (UDF). UDFs that allow such levels of freedom as to call modules from another language and interface back and forth between the query body and the side-loaded functions would be needed for this purpose. For this research we attempt the coupling of four widely used tools, Hadoop (hadoop1), Matlab (matlab1), R (r1), and ScaLAPACK (scalapack1), with the UDF feature of rasdaman (Baumann 98), an array-based data manager, to investigate this concept. The native array data model used by an array-based data manager provides compact data storage and high-performance operations on ordered data, such as spatial data, temporal data, and matrix-based data for linear algebra operations (scidbusr1). Performance issues arising from the coupling of tools with different paradigms, niche functionalities, separate processes, and output data formats were anticipated and considered during the design of the unified architecture. The research focuses on the feasibility of the designed coupling mechanism and the evaluation of the efficiency and benefits of our proposed unified access architecture. Zhang 2011: Zhang, Ying and Kersten, Martin and Ivanova, Milena and Nes, Niels, SciQL: Bridging the Gap Between Science and Relational DBMS, Proceedings of the 15th Symposium on International Database Engineering Applications, 2011. Baumann 98: Baumann, P., Dehmel, A., Furtado, P., Ritsch, R., Widmann, N., "The Multidimensional Database System RasDaMan", SIGMOD 1998, Proceedings ACM SIGMOD International Conference on Management of Data, June 2-4, 1998, Seattle, Washington, 1998. hadoop1: hadoop.apache.org, "Hadoop", http://hadoop.apache.org/, [Online; accessed 12-Jan-2014]. scalapack1: netlib.org/scalapack, "ScaLAPACK", http://www.netlib.org/scalapack, [Online; accessed 12-Jan-2014]. r1: r-project.org, "R", http://www.r-project.org/, [Online; accessed 12-Jan-2014]. matlab1: mathworks.com, "Matlab Documentation", http://www.mathworks.de/de/help/matlab/, [Online; accessed 12-Jan-2014]. scidbusr1: scidb.org, "SciDB User's Guide", http://scidb.org/HTMLmanual/13.6/scidb_ug, [Online; accessed 01-Dec-2013].
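
    As a stand-in for rasdaman's UDF mechanism, which the authors use to side-load foreign routines into queries, the same pattern can be shown with sqlite3, where a Python function is registered as a SQL-callable UDF:

        # Illustrative only: registering a foreign routine as a UDF (different engine,
        # same pattern as the embedding described in the abstract).
        import math, sqlite3

        con = sqlite3.connect(":memory:")
        con.create_function("log10", 1, math.log10)   # side-load a foreign routine

        con.execute("CREATE TABLE samples (value REAL)")
        con.executemany("INSERT INTO samples VALUES (?)", [(10.0,), (1000.0,)])
        print(con.execute("SELECT log10(value) FROM samples").fetchall())  # [(1.0,), (3.0,)]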

  2. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    NASA Astrophysics Data System (ADS)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in ColdFusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive the semantically enabled faceted search capabilities planned for the site. Incorporation of the semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public-domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal6 that are designed to support semantically enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.

  3. Wet weather highway accident analysis and skid resistance data management system (volume II : user's manual).

    DOT National Transportation Integrated Search

    1992-06-01

    The objectives and scope of this research are to establish an effective methodology for wet weather accident analysis and to develop a database management system to facilitate information processing and storage for the accident analysis process, skid...

  4. Protein Bioinformatics Databases and Resources

    PubMed Central

    Chen, Chuming; Huang, Hongzhan; Wu, Cathy H.

    2017-01-01

    Many publicly available data repositories and resources have been developed to support protein related information management, data-driven hypothesis generation and biological knowledge discovery. To help researchers quickly find the appropriate protein related informatics resources, we present a comprehensive review (with categorization and description) of major protein bioinformatics databases in this chapter. We also discuss the challenges and opportunities for developing next-generation protein bioinformatics databases and resources to support data integration and data analytics in the Big Data era. PMID:28150231

  5. Application GIS on university planning: building a spatial database aided spatial decision

    NASA Astrophysics Data System (ADS)

    Miao, Lei; Wu, Xiaofang; Wang, Kun; Nong, Yu

    2007-06-01

    As a university develops and grows in size, many kinds of resources urgently need effective management. A spatial database is the right tool to assist administrators' spatial decisions, and, by integrating with the existing OMS, it is ready for the digital campus. Campus planning is first examined in detail. Then, taking South China Agricultural University as an example, the paper demonstrates how to build a geographic database of campus buildings and housing to support university administrators' spatial decision making.

  6. Improving Care And Research Electronic Data Trust Antwerp (iCAREdata): a research database of linked data on out-of-hours primary care.

    PubMed

    Colliers, Annelies; Bartholomeeusen, Stefaan; Remmen, Roy; Coenen, Samuel; Michiels, Barbara; Bastiaens, Hilde; Van Royen, Paul; Verhoeven, Veronique; Holmgren, Philip; De Ruyck, Bernard; Philips, Hilde

    2016-05-04

    Primary out-of-hours care is developing throughout Europe. High-quality databases with linked data from primary health services can help to improve research and future health services. In 2014, a central clinical research database infrastructure was established (iCAREdata: Improving Care And Research Electronic Data Trust Antwerp, www.icaredata.eu) for primary and interdisciplinary health care at the University of Antwerp, linking data from General Practice Cooperatives, Emergency Departments, and Pharmacies during out-of-hours care. Medical data are pseudonymised using the services of a Trusted Third Party, which encodes private information about patients and physicians before data are sent to iCAREdata. iCAREdata provides many new research opportunities in the fields of clinical epidemiology, health care management, and quality of care. A key aspect will be to ensure the quality of data registration by all health care providers. This article describes the establishment of a research database and the possibilities of linking data from different primary out-of-hours care providers, with the potential to improve research and the quality of health care services.
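
    A hedged sketch of the pseudonymisation step: a keyed hash lets a Trusted Third Party replace identifiers so records remain linkable across providers without exposing identity. iCAREdata's actual scheme is not specified here; the key and field names below are invented:

        # Illustrative pseudonymisation via keyed hashing (one common approach,
        # not necessarily iCAREdata's method; key and fields hypothetical).
        import hashlib, hmac

        TTP_KEY = b"held-only-by-the-trusted-third-party"

        def pseudonym(national_id: str) -> str:
            return hmac.new(TTP_KEY, national_id.encode(), hashlib.sha256).hexdigest()[:16]

        visit = {"patient": pseudonym("85.07.30-123.45"), "contact": "GP cooperative",
                 "diagnosis": "acute otitis media"}
        print(visit)   # the same patient hashes to the same pseudonym at every provider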

  7. Enhancements to the Redmine Database Metrics Plug in

    DTIC Science & Technology

    2017-08-01

    The Redmine project management web application has been adopted within the US Army Research Laboratory's (ARL) Computational and Information Sciences Directorate as a database... This report, by Terry C Jameson of the Computational and Information Sciences Directorate, ARL, describes enhancements to the Redmine Database Metrics Plug-in. Approved for public release.

  8. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    PubMed

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  9. An Online Database for Informing Ecological Network Models: http://kelpforest.ucsc.edu

    PubMed Central

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H.; Tinker, Martin T.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui). PMID:25343723

  10. An online database for informing ecological network models: http://kelpforest.ucsc.edu

    USGS Publications Warehouse

    Beas-Luna, Rodrigo; Tinker, M. Tim; Novak, Mark; Carr, Mark H.; Black, August; Caselle, Jennifer E.; Hoban, Michael; Malone, Dan; Iles, Alison C.

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for education purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  11. National Databases for Neurosurgical Outcomes Research: Options, Strengths, and Limitations.

    PubMed

    Karhade, Aditya V; Larsen, Alexandra M G; Cote, David J; Dubois, Heloise M; Smith, Timothy R

    2017-08-05

    Quality improvement, value-based care delivery, and personalized patient care depend on robust clinical, financial, and demographic data streams of neurosurgical outcomes. The neurosurgical literature lacks a comprehensive review of large national databases. To assess the strengths and limitations of various resources for outcomes research in neurosurgery. A review of the literature was conducted to identify surgical outcomes studies using national data sets. The databases were assessed for the availability of patient demographics and clinical variables, longitudinal follow-up of patients, strengths, and limitations. The number of unique patients contained within each data set ranged from thousands (Quality Outcomes Database [QOD]) to hundreds of millions (MarketScan). Databases with both clinical and financial data included PearlDiver, Premier Healthcare Database, Vizient Clinical Data Base and Resource Manager, and the National Inpatient Sample. Outcomes collected by databases included patient-reported outcomes (QOD); 30-day morbidity, readmissions, and reoperations (National Surgical Quality Improvement Program); and disease incidence and disease-specific survival (Surveillance, Epidemiology, and End Results-Medicare). The strengths of large databases included large numbers of rare pathologies and multi-institutional nationally representative sampling; the limitations of these databases included variable data veracity, variable data completeness, and missing disease-specific variables. The improvement of existing large national databases and the establishment of new registries will be crucial to the future of neurosurgical outcomes research. Copyright © 2017 by the Congress of Neurological Surgeons

  12. Integrating GIS, Archeology, and the Internet.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sera White; Brenda Ringe Pace; Randy Lee

    2004-08-01

    At the Idaho National Engineering and Environmental Laboratory's (INEEL) Cultural Resource Management Office, a newly developed Data Management Tool (DMT) is improving management and long-term stewardship of cultural resources. The fully integrated system links an archaeological database, a historical database, and a research database to spatial data through a customized user interface using ArcIMS and Active Server Pages. Components of the new DMT are tailored specifically to the INEEL and include automated data entry forms for historic and prehistoric archaeological sites, specialized queries and reports that address both yearly and project-specific documentation requirements, and unique field recording forms. The predictive modeling component increases the DMT’s value for land use planning and long-term stewardship. The DMT enhances the efficiency of archive searches, improving customer service, oversight, and management of the large INEEL cultural resource inventory. In the future, the DMT will facilitate data sharing with regulatory agencies, tribal organizations, and the general public.

  13. Research of Manufacture Time Management System Based on PLM

    NASA Astrophysics Data System (ADS)

    Jing, Ni; Juan, Zhu; Liangwei, Zhong

    This system targets the machine shops of manufacturing enterprises. It analyzes their business needs and builds a plant-level management information system for manufacture-time data and manufacture-time information management in the manufacturing process. Combining web technology with Excel VBA development methods, it constructs a PLM-based hybrid framework for a workshop manufacture-time management information system, and discusses the functionality of the system architecture and the database structure.

  14. Challenges and Experiences of Building Multidisciplinary Datasets across Cultures

    NASA Astrophysics Data System (ADS)

    Jamiyansharav, K.; Laituri, M.; Fernandez-Gimenez, M.; Fassnacht, S. R.; Venable, N. B. H.; Allegretti, A. M.; Reid, R.; Baival, B.; Jamsranjav, C.; Ulambayar, T.; Linn, S.; Angerer, J.

    2017-12-01

    Efficient data sharing and management are key challenges to multidisciplinary scientific research. These challenges are further complicated by adding a multicultural component. We address the construction of a complex database for social-ecological analysis in Mongolia. Funded by the National Science Foundation (NSF) Dynamics of Coupled Natural and Human (CNH) Systems program, the Mongolian Rangelands and Resilience (MOR2) project focuses on the vulnerability of Mongolian pastoral systems to climate change and their adaptive capacity. The MOR2 study spans three years of fieldwork in 36 paired districts (Soum) from 18 provinces (Aimag) of Mongolia, covering the steppe, mountain forest steppe, desert steppe, and eastern steppe ecological zones. Our project team is composed of hydrologists, social scientists, geographers, and ecologists. The MOR2 database includes multiple ecological, social, meteorological, geospatial, and hydrological datasets, as well as archives of original data and surveys in multiple formats. Managing this complex database requires significant organizational skill, attention to detail, and the ability to communicate with team members from diverse disciplines and across multiple institutions in the US and Mongolia. We describe the database's rich content, organization, structure, and complexity. We discuss lessons learned, best practices, and recommendations for complex database management, sharing, and archiving in creating a cross-cultural and multidisciplinary database.

  15. "The Research Assistant."

    ERIC Educational Resources Information Center

    Schuch, Dan

    2001-01-01

    "The Research Assistant," was developed to help graduate students and faculty manage the quantity of available information, to be able to read it, synthesize it, and create new insights and knowledge. "The Research Assistant" was designed using the Filemaker Pro relational database and can be set up in a networked environment to be used in…

  16. The Starkey habitat database for ungulate research: construction, documentation, and use.

    Treesearch

    Mary M. Rowland; Priscilla K. Coe; Rosemary J. Stussy; [and others].

    1998-01-01

    The Starkey Project, a large-scale, multidisciplinary research venture, began in 1987 in the Starkey Experimental Forest and Range in northeast Oregon. Researchers are studying effects of forest management on interactions and habitat use of mule deer (Odocoileus hemionus hemionus), elk (Cervus elaphus nelsoni), and cattle. A...

  17. Implementation of the CDC translational informatics platform--from genetic variants to the national Swedish Rheumatology Quality Register.

    PubMed

    Abugessaisa, Imad; Gomez-Cabrero, David; Snir, Omri; Lindblad, Staffan; Klareskog, Lars; Malmström, Vivianne; Tegnér, Jesper

    2013-04-02

    Sequencing of the human genome and the subsequent analyses have produced immense volumes of data. The technological advances have opened new windows into genomics beyond the DNA sequence. In parallel, clinical practice generates large amounts of data. This represents an underused data source that has much greater potential in translational research than is currently realized. This research aims at implementing a translational medicine informatics platform to integrate the clinical data (disease diagnosis, disease activity, and treatment) of Rheumatoid Arthritis (RA) patients from Karolinska University Hospital and their research database (biobanks, genotype variants, and serology) at the Center for Molecular Medicine, Karolinska Institutet. Requirements engineering methods were utilized to identify user requirements. Unified Modeling Language and data modeling methods were used to model the universe of discourse and data sources. Oracle 11g was used as the database management system, and the clinical development center (CDC) was used as the application interface. Patient data were anonymized, and we employed authorization and security methods to protect the system. We developed a user requirement matrix, which provided a framework for evaluating three translational informatics systems. The implementation of the CDC successfully integrated the biological research database (15,172 DNA, serum, and synovial samples; 1,436 cell samples; and 65 SNPs per patient) and the clinical database (5,652 clinical visits) for the cohort of 379 patients, which presents three profiles. Basic functionalities provided by the translational medicine platform are research data management, development of bioinformatics workflows and analyses, sub-cohort selection, and re-use of clinical data in research settings. Finally, the system allowed researchers to extract subsets of attributes from cohorts according to specific biological, clinical, or statistical features. Research and clinical database integration is a real challenge and a road-block in translational research. Through this research we addressed the challenges and demonstrated the usefulness of the CDC. We adhered to ethical regulations pertaining to patient data, and we determined that the existing software solutions cannot meet the translational research needs at hand. We used RA as a test case since we have ample data on an active, longitudinal cohort.
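
    To make the sub-cohort selection functionality concrete, the sketch below combines a clinical attribute with a genotype attribute in a single query. The table layout and values are hypothetical, and SQLite merely stands in for the Oracle 11g back end described above.

```python
# Minimal sketch of sub-cohort selection across an integrated clinical
# and biological database. Schema and values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (patient_id INTEGER PRIMARY KEY, serology TEXT);
    CREATE TABLE genotypes (patient_id INTEGER, snp TEXT, allele TEXT);
    INSERT INTO patients VALUES (1, 'ACPA+'), (2, 'ACPA-'), (3, 'ACPA+');
    INSERT INTO genotypes VALUES (1, 'rs2476601', 'A'),
                                 (2, 'rs2476601', 'G'),
                                 (3, 'rs2476601', 'A');
""")

# Select a sub-cohort by combining a clinical and a genetic attribute.
rows = conn.execute("""
    SELECT p.patient_id
    FROM patients p JOIN genotypes g ON g.patient_id = p.patient_id
    WHERE p.serology = 'ACPA+' AND g.snp = 'rs2476601' AND g.allele = 'A'
""").fetchall()
print([r[0] for r in rows])  # -> [1, 3]
```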

  18. Implementation of the CDC translational informatics platform - from genetic variants to the national Swedish Rheumatology Quality Register

    PubMed Central

    2013-01-01

    Background Sequencing of the human genome and the subsequent analyses have produced immense volumes of data. The technological advances have opened new windows into genomics beyond the DNA sequence. In parallel, clinical practice generates large amounts of data. This represents an underused data source that has much greater potential in translational research than is currently realized. This research aims at implementing a translational medicine informatics platform to integrate the clinical data (disease diagnosis, disease activity, and treatment) of Rheumatoid Arthritis (RA) patients from Karolinska University Hospital and their research database (biobanks, genotype variants, and serology) at the Center for Molecular Medicine, Karolinska Institutet. Methods Requirements engineering methods were utilized to identify user requirements. Unified Modeling Language and data modeling methods were used to model the universe of discourse and data sources. Oracle 11g was used as the database management system, and the clinical development center (CDC) was used as the application interface. Patient data were anonymized, and we employed authorization and security methods to protect the system. Results We developed a user requirement matrix, which provided a framework for evaluating three translational informatics systems. The implementation of the CDC successfully integrated the biological research database (15,172 DNA, serum, and synovial samples; 1,436 cell samples; and 65 SNPs per patient) and the clinical database (5,652 clinical visits) for the cohort of 379 patients, which presents three profiles. Basic functionalities provided by the translational medicine platform are research data management, development of bioinformatics workflows and analyses, sub-cohort selection, and re-use of clinical data in research settings. Finally, the system allowed researchers to extract subsets of attributes from cohorts according to specific biological, clinical, or statistical features. Conclusions Research and clinical database integration is a real challenge and a road-block in translational research. Through this research we addressed the challenges and demonstrated the usefulness of the CDC. We adhered to ethical regulations pertaining to patient data, and we determined that the existing software solutions cannot meet the translational research needs at hand. We used RA as a test case since we have ample data on an active, longitudinal cohort. PMID:23548156

  19. United States Army Medical Materiel Development Activity: 1997 Annual Report.

    DTIC Science & Technology

    1997-01-01

    Describes the business planning and execution information management systems: the Project Management Division Database (PMDD), the Product Management Database System (PMDS), and the Special Users Database System. The existing systems, including the Financial Management System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was...

  20. 100 years of applied psychology research on individual careers: From career management to retirement.

    PubMed

    Wang, Mo; Wanberg, Connie R

    2017-03-01

    This article surveys 100 years of research on career management and retirement, with a primary focus on work published in the Journal of Applied Psychology. Research on career management took off in the 1920s, with most attention devoted to the development and validation of career interest inventories. Over time, research expanded to attend to broader issues such as the predictors and outcomes of career interests and choice; the nature of career success and who achieves it; career transitions and adaptability to change; retirement decision making and adjustment; and bridge employment. In this article, we provide a timeline for the evolution of the career management and retirement literature, review major theoretical perspectives and findings on career management and retirement, and discuss important future research directions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Large scale database scrubbing using object oriented software components.

    PubMed

    Herting, R L; Barnes, M R

    1998-01-01

    Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
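
    DBScrub itself is a Java program working over JDBC, but the core idea, replacing identifying values while keeping implicit cross-table relationships intact, can be sketched briefly. The sketch below, with a hypothetical schema and SQLite standing in for the real database, reuses one surrogate mapping across all tables so joins still line up after scrubbing.

```python
# Minimal sketch of relationship-preserving scrubbing: each distinct
# identifier gets one stable surrogate, reused in every table.
import sqlite3
import itertools

surrogates = {}
counter = itertools.count(1)

def scrub(value):
    """Map each distinct identifier to a stable surrogate so that
    cross-table joins still line up after scrubbing."""
    if value not in surrogates:
        surrogates[value] = f"PT{next(counter):06d}"
    return surrogates[value]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE visits (mrn TEXT, visit_date TEXT);
    CREATE TABLE labs   (mrn TEXT, test TEXT, result REAL);
    INSERT INTO visits VALUES ('12345', '1998-01-02');
    INSERT INTO labs   VALUES ('12345', 'HbA1c', 6.1);
""")
for table in ("visits", "labs"):
    for rowid, mrn in conn.execute(f"SELECT rowid, mrn FROM {table}").fetchall():
        conn.execute(f"UPDATE {table} SET mrn = ? WHERE rowid = ?",
                     (scrub(mrn), rowid))

# The same surrogate now appears in both tables, preserving the join.
print(conn.execute("SELECT * FROM visits").fetchall())
print(conn.execute("SELECT * FROM labs").fetchall())
```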

  2. A privacy-preserved analytical method for ehealth database with minimized information loss.

    PubMed

    Chen, Ya-Ling; Cheng, Bo-Chao; Chen, Hsueh-Lin; Lin, Chia-I; Liao, Guo-Tan; Hou, Bo-Yu; Hsu, Shih-Chun

    2012-01-01

    Digitizing medical information is an emerging trend that employs information and communication technology (ICT) to manage health records, diagnostic reports, and other medical data more effectively, in order to improve the overall quality of medical services. However, medical information is highly confidential and involves private information; even legitimate access to the data raises privacy concerns. Medical records provide health information on an as-needed basis for diagnosis and treatment, and the information is also important for medical research and other health management applications. Traditional privacy risk management systems have focused on reducing re-identification risk, and they do not consider information loss. In addition, such systems cannot identify and isolate data that carries a high risk of privacy violations. This paper proposes the Hiatus Tailor (HT) system, which ensures low re-identification risk for medical records while providing more authenticated information to database users and identifying high-risk data in the database for better system management. The experimental results demonstrate that the HT system achieves much lower information loss than traditional risk management methods, with the same risk of re-identification.
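
    The trade-off the HT system manages can be illustrated with the simplest textbook metrics: generalizing quasi-identifiers enlarges equivalence classes (lowering re-identification risk, roughly 1/k under k-anonymity) while destroying detail (information loss). The generalization rules and loss measure below are illustrative assumptions, not those of the paper.

```python
# Minimal sketch of the risk/information-loss trade-off: coarsening
# quasi-identifiers enlarges equivalence classes but loses detail.
from collections import Counter

records = [(34, "79104"), (36, "79106"), (52, "60311"), (55, "60313")]

def generalize(age, postal_code):
    """Coarsen age to a decade band; keep only the code's first digits."""
    decade = age // 10 * 10
    return (f"{decade}-{decade + 9}", postal_code[:3] + "**")

classes = Counter(generalize(a, z) for a, z in records)
k = min(classes.values())               # re-identification risk ~ 1/k
loss = 1 - len(classes) / len(records)  # crude information-loss proxy
print(f"k-anonymity: {k}, information loss: {loss:.2f}")
```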

  3. Scientific information repository assisting reflectance spectrometry in legal medicine.

    PubMed

    Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael; Zimmermann, Klaus; Liehr, Andreas W

    2012-06-01

    Reflectance spectrometry is a fast and reliable method for the characterization of human skin if the spectra are analyzed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg, a scientific information repository has been developed, which is a variant of an electronic laboratory notebook and assists in the acquisition, management, and high-throughput analysis of reflectance spectra in heterogeneous research environments. At the core of the repository is a database management system hosting the master data. It is filled with primary data via a graphical user interface (GUI) programmed in Java, which also enables the user to browse the database and access the results of data analysis. The latter is carried out via Matlab, Python, and C programs, which retrieve the primary data from the scientific information repository, perform the analysis, and store the results in the database for further usage.
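
    The retrieve-analyse-store cycle described above can be sketched compactly. The schema and the derived quantity are hypothetical, and SQLite stands in for the repository's actual database management system; the real analyses run in Matlab, Python, and C.

```python
# Minimal sketch of the repository's retrieve-analyse-store cycle:
# pull primary spectra, compute a derived quantity, write it back.
# Schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE spectra (id INTEGER PRIMARY KEY, wavelength_nm REAL,
                          reflectance REAL, subject TEXT);
    CREATE TABLE results (subject TEXT, mean_reflectance REAL);
    INSERT INTO spectra (wavelength_nm, reflectance, subject) VALUES
        (550, 0.42, 'S01'), (600, 0.47, 'S01'), (650, 0.52, 'S01');
""")

# Retrieve primary data, analyse, and store the result for further usage.
(mean_r,) = conn.execute(
    "SELECT AVG(reflectance) FROM spectra WHERE subject = ?", ("S01",)
).fetchone()
conn.execute("INSERT INTO results VALUES (?, ?)", ("S01", mean_r))
print(conn.execute("SELECT * FROM results").fetchall())
```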

  4. Design research about coastal zone planning and management information system based on GIS and database technologies

    NASA Astrophysics Data System (ADS)

    Huang, Pei; Wu, Sangyun; Feng, Aiping; Guo, Yacheng

    2008-10-01

    As littoral areas in possession of concentrated population, abundant resources, developed industry and active economy, the coastal areas are bound to become the forward positions and supported regions for marine exploitation. In the 21st century, the pressure that coastal zones are faced with is as follows: growth of population and urbanization, rise of sea level and coastal erosion, shortage of freshwater resource and deterioration of water resource, and degradation of fishery resource and so on. So the resources of coastal zones should be programmed and used reasonably for the sustainable development of economy and environment. This paper proposes a design research on the construction of coastal zone planning and management information system based on GIS and database technologies. According to this system, the planning results of coastal zones could be queried and displayed expediently through the system interface. It is concluded that the integrated application of GIS and database technologies provides a new modern method for the management of coastal zone resources, and makes it possible to ensure the rational development and utilization of the coastal zone resources, along with the sustainable development of economy and environment.

  5. Toward an open-access global database for mapping, control, and surveillance of neglected tropical diseases.

    PubMed

    Hürlimann, Eveline; Schur, Nadine; Boutsika, Konstantina; Stensgaard, Anna-Sofie; Laserna de Himpsl, Maiti; Ziegelbauer, Kathrin; Laizer, Nassor; Camenzind, Lukas; Di Pasquale, Aurelio; Ekpo, Uwem F; Simoonga, Christopher; Mushinge, Gabriel; Saarnak, Christopher F L; Utzinger, Jürg; Kristensen, Thomas K; Vounatsou, Penelope

    2011-12-01

    After many years of general neglect, interest has grown and efforts came under way for the mapping, control, surveillance, and eventual elimination of neglected tropical diseases (NTDs). Disease risk estimates are a key feature to target control interventions, and serve as a benchmark for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs providing open-access to the available survey data that is constantly updated and can be utilized by researchers and disease control managers to support other relevant stakeholders. We describe the steps taken toward the development of such a database that can be employed for spatial disease risk modeling and control of NTDs. With an emphasis on schistosomiasis in Africa, we systematically searched the literature (peer-reviewed journals and 'grey literature'), contacted Ministries of Health and research institutions in schistosomiasis-endemic countries for location-specific prevalence data and survey details (e.g., study population, year of survey and diagnostic techniques). The data were extracted, georeferenced, and stored in a MySQL database with a web interface allowing free database access and data management. At the beginning of 2011, our database contained more than 12,000 georeferenced schistosomiasis survey locations from 35 African countries available under http://www.gntd.org. Currently, the database is expanded to a global repository, including a host of other NTDs, e.g. soil-transmitted helminthiasis and leishmaniasis. An open-access, spatially explicit NTD database offers unique opportunities for disease risk modeling, targeting control interventions, disease monitoring, and surveillance. Moreover, it allows for detailed geostatistical analyses of disease distribution in space and time. With an initial focus on schistosomiasis in Africa, we demonstrate the proof-of-concept that the establishment and running of a global NTD database is feasible and should be expanded without delay.
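
    A georeferenced survey database of this kind chiefly needs to answer spatial queries, for example all survey locations within a bounding box of interest. The sketch below shows the idea with a hypothetical schema, with SQLite standing in for the MySQL back end.

```python
# Minimal sketch of a bounding-box query over georeferenced survey data.
# Schema and values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE surveys (location TEXT, lat REAL, lon REAL,
                                      year INTEGER, prevalence REAL)""")
conn.executemany("INSERT INTO surveys VALUES (?, ?, ?, ?, ?)", [
    ("Village A", -6.82, 39.27, 2008, 0.31),
    ("Village B", -6.90, 39.10, 2010, 0.12),
    ("Village C",  9.05, 38.74, 2009, 0.05),
])

# All survey locations inside a bounding box, e.g. for risk mapping.
bbox = (-8.0, -6.0, 38.5, 40.0)  # lat_min, lat_max, lon_min, lon_max
rows = conn.execute("""SELECT location, prevalence FROM surveys
                       WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?""",
                    bbox).fetchall()
print(rows)  # -> [('Village A', 0.31), ('Village B', 0.12)]
```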

  6. Ontology to relational database transformation for web application development and maintenance

    NASA Astrophysics Data System (ADS)

    Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful

    2018-03-01

    Ontology is used as the knowledge representation, while a database serves as the facts recorder, in a KMS (Knowledge Management System). In most applications, data are managed in a database system, updated through the application, and then transformed into knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, both the application and the database can be generated from the ontology. Most existing frameworks generate the application from its database. In this research, the ontology is used to generate the application. As data are updated through the application, a mechanism is designed to trigger an update to the ontology, so that the application can be rebuilt from the newest ontology. By this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer. In many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment, and a case study was conducted to prove the concept.
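
    A heavily simplified sketch of the ontology-to-relational direction: read OWL classes and their datatype properties with the third-party rdflib package and emit one CREATE TABLE statement per class. The ontology file name is an assumption, and a real transformation (including the framework above) must handle many more constructs, object properties, inheritance, and cardinality among them.

```python
# Minimal sketch: map each OWL class to a table, each datatype property
# whose domain is that class to a column. Requires: pip install rdflib
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("domain.ttl", format="turtle")  # hypothetical ontology file

for cls in g.subjects(RDF.type, OWL.Class):
    table = cls.split("#")[-1].lower()
    columns = ["id INTEGER PRIMARY KEY"]
    for prop in g.subjects(RDFS.domain, cls):
        if (prop, RDF.type, OWL.DatatypeProperty) in g:
            columns.append(f"{prop.split('#')[-1].lower()} TEXT")
    print(f"CREATE TABLE {table} ({', '.join(columns)});")
```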

  7. International Soil Carbon Network (ISCN) Database v3-1

    DOE Data Explorer

    Nave, Luke [University of Michigan] (ORCID:0000000182588335); Johnson, Kris [USDA-Forest Service; van Ingen, Catharine [Microsoft Research; Agarwal, Deborah [Lawrence Berkeley National Laboratory] (ORCID:0000000150452396); Humphrey, Marty [University of Virginia; Beekwilder, Norman [University of Virginia

    2016-01-01

    The ISCN is an international scientific community devoted to the advancement of soil carbon research. The ISCN manages an open-access, community-driven soil carbon database. This is version 3-1 of the ISCN Database, released in December 2015. It gathers 38 separate dataset contributions, totalling 67,112 sites with data from 71,198 soil profiles and 431,324 soil layers. For more information about the ISCN, its scientific community and resources, data policies and partner networks visit: http://iscn.fluxdata.org/.

  8. NNDC Stand: Activities and Services of the National Nuclear Data Center

    NASA Astrophysics Data System (ADS)

    Pritychenko, B.; Arcilla, R.; Burrows, T. W.; Dunford, C. L.; Herman, M. W.; McLane, V.; Obložinský, P.; Sonzogni, A. A.; Tuli, J. K.; Winchell, D. F.

    2005-05-01

    The National Nuclear Data Center (NNDC) collects, evaluates, and disseminates nuclear physics data for basic nuclear research and for applied nuclear technologies, including energy, shielding, medical applications, and homeland security. In 2004, to answer the needs of the nuclear data user community, the NNDC completed a project to modernize the data storage and management of its databases and began offering new nuclear data Web services. The principles of database and Web application development, as well as the related nuclear reaction and structure database services, are briefly described.

  9. Dynamic tables: an architecture for managing evolving, heterogeneous biomedical data in relational database management systems.

    PubMed

    Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis

    2007-01-01

    Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
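
    The vertical, object-attribute-value (EAV) layout the paper discusses can be shown in a few lines: sparse, evolving attributes fit into three columns, and adding a new attribute requires no schema change. SQLite is used purely for illustration; the paper's contribution is a sparse column-store engine, not this naive layout.

```python
# Minimal sketch of an entity-attribute-value (EAV) table: sparse,
# evolving attributes need no ALTER TABLE, at the cost of pivoting
# rows back together at query time.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE eav (entity_id INTEGER, attribute TEXT,
                                  value TEXT)""")
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "diagnosis", "epilepsy"),
    (1, "neuron_type", "pyramidal"),   # sparse: only some entities have it
    (2, "diagnosis", "migraine"),
])

# "Pivoting" entity 1 back into a conventional row, attribute by attribute.
row = dict(conn.execute(
    "SELECT attribute, value FROM eav WHERE entity_id = 1").fetchall())
print(row)  # -> {'diagnosis': 'epilepsy', 'neuron_type': 'pyramidal'}
```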

  10. Surviving the Glut: The Management of Event Streams in Cyberphysical Systems

    NASA Astrophysics Data System (ADS)

    Buchmann, Alejandro

    Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality-of-service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects imply collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de

  11. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    PubMed

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security, and openness. To address this, as of March 2011, organizations including RIKEN had published 192 mammalian, plant, and protein life sciences databases containing 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely, under the control of programming languages popularly used by bioinformaticians, such as Perl and Ruby. Researchers have successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing, and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
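
    In the spirit of a lightweight JSON interface, the sketch below consumes a JSON web service from Python's standard library. The URL and response structure are invented for illustration and are not the actual Semantic-JSON API of SciNetS.org.

```python
# Minimal sketch of consuming a lightweight JSON web interface from a
# scripting language. URL and response layout are hypothetical.
import json
import urllib.request

url = "https://example.org/semantic-json/records?limit=5"  # hypothetical
with urllib.request.urlopen(url) as response:
    records = json.load(response)  # assumed: a list of record objects

for record in records:
    print(record.get("id"), record.get("label"))
```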

  12. The Research of Spatial-Temporal Analysis and Decision-Making Assistant System for Disabled Person Affairs Based on Mapworld

    NASA Astrophysics Data System (ADS)

    Zhang, J. H.; Yang, J.; Sun, Y. S.

    2015-06-01

    This system combines the Mapworld platform with the informationization of disabled person affairs, using disabled persons' basic information as its central frame. Based on the disabled person population database, the affairs management system, and the statistical account system, the data were effectively integrated and a united information resource database was built. Through data analysis and mining, the system provides powerful data support for decision making, affairs management, and public service. It finally realizes the rationalization, normalization, and scientization of disabled person affairs management. It also makes significant contributions to the great-leap-forward development of the informationization of the China Disabled Person's Federation.

  13. Integrative medicine for managing the symptoms of lupus nephritis: A protocol for systematic review and meta-analysis.

    PubMed

    Choi, Tae-Young; Jun, Ji Hee; Lee, Myeong Soo

    2018-03-01

    Integrative medicine is claimed to improve symptoms of lupus nephritis. No systematic reviews have been performed on the application of integrative medicine for lupus nephritis in patients with systemic lupus erythematosus (SLE). Thus, this review will aim to evaluate the current evidence on the efficacy of integrative medicine for the management of lupus nephritis in patients with SLE. The following electronic databases will be searched for studies published from their dates of inception to February 2018: Medline, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL), as well as 6 Korean medical databases (Korea Med, the Oriental Medicine Advanced Search Integrated System [OASIS], DBpia, the Korean Medical Database [KM base], the Research Information Service System [RISS], and the Korean Studies Information Services System [KISS]), and 1 Chinese medical database (the China National Knowledge Infrastructure [CNKI]). Study selection, data extraction, and assessment will be performed independently by 2 researchers. The risk of bias (ROB) will be assessed using the Cochrane ROB tool. This systematic review will be published in a peer-reviewed journal and disseminated both electronically and in print. The review will be updated to inform and guide healthcare practice and policy. PROSPERO 2018 CRD42018085205.

  14. Development of research priorities in paediatric pain and palliative care

    PubMed Central

    Liossi, Christina; Anderson, Anna-Karenia; Howard, Richard F

    2016-01-01

    Priority setting for healthcare research is as important as conducting the research itself because rigorous and systematic processes of priority setting can make an important contribution to the quality of research. This project aimed to prioritise clinical therapeutic uncertainties in paediatric pain and palliative care in order to encourage and inform the future research agenda and raise the profile of paediatric pain and palliative care in the United Kingdom. Clinical therapeutic uncertainties were identified and transformed into patient, intervention, comparison and outcome (PICO) format and prioritised using a modified Nominal Group Technique. Members of the Clinical Studies Group in Pain and Palliative Care within National Institute for Health Research (NIHR) Clinical Research Network (CRN)-Children took part in the prioritisation exercise. There were 11 clinically active professionals spanning across a wide range of paediatric disciplines and one parent representative. The top three research priorities related to establishing the safety and efficacy of (1) gabapentin in the management of chronic pain with neuropathic characteristics, (2) intravenous non-steroidal anti-inflammatory drugs in the management of post-operative pain in pre-schoolers and (3) different opioid formulations in the management of acute pain in children while at home. Questions about the long-term effect of psychological interventions in the management of chronic pain and various pharmacological interventions to improve pain and symptom management in palliative care were among the ‘top 10’ priorities. The results of prioritisation were included in the UK Database of Uncertainties about the Effects of Treatments (DUETS) database. Increased awareness of priorities and priority-setting processes should encourage clinicians and other stakeholders to engage in such exercises in the future. PMID:28386399

  15. Improving agricultural knowledge management: The AgTrials experience

    PubMed Central

    Hyman, Glenn; Espinosa, Herlin; Camargo, Paola; Abreu, David; Devare, Medha; Arnaud, Elizabeth; Porter, Cheryl; Mwanzia, Leroy; Sonder, Kai; Traore, Sibiry

    2017-01-01

    Background: Opportunities to use data and information to address challenges in international agricultural research and development are expanding rapidly. The use of agricultural trial and evaluation data has enormous potential to improve crops and management practices. However, for a number of reasons, this potential has yet to be realized. This paper reports on the experience of the AgTrials initiative, an effort to build an online database of agricultural trials applying principles of interoperability and open access. Methods: Our analysis evaluates what worked and what did not work in the development of the AgTrials information resource. We analyzed data on our users and their interaction with the platform. We also surveyed our users to gauge their perceptions of the utility of the online database. Results: The study revealed barriers to participation and impediments to interaction, opportunities for improving agricultural knowledge management and a large potential for the use of trial and evaluation data.  Conclusions: Technical and logistical mechanisms for developing interoperable online databases are well advanced.  More effort will be needed to advance organizational and institutional work for these types of databases to realize their potential. PMID:28580127

  16. The personal receiving document management and the realization of email function in OAS

    NASA Astrophysics Data System (ADS)

    Li, Biqing; Li, Zhao

    2017-05-01

    This software is an independent system suitable for small and medium enterprises. It contains personal office, scientific research project management, and system management functions, runs independently in the relevant environment, and solves practical needs. It is developed using the currently popular B/S (browser/server) structure and ASP.NET technology, with the Windows 7 operating system, Microsoft SQL Server 2005 as the database, and Visual Studio 2008 as the development platform.

  17. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means of viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
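
    The multi-shot query style the abstract emphasizes might look like the following. Table and column names are invented, and SQLite stands in for the actual DIII-D relational database, which is reachable via ODBC from the listed languages and tools.

```python
# Minimal sketch of a cross-shot query over summary physics quantities.
# Schema and values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE summaries (shot INTEGER PRIMARY KEY,
                                        ip_mamps REAL, betan REAL)""")
conn.executemany("INSERT INTO summaries VALUES (?, ?, ?)", [
    (98549, 1.2, 2.1), (98550, 1.4, 2.9), (98551, 0.9, 1.7),
])

# One query spans many shots -- the "big picture" that per-experiment
# shot files cannot easily give.
rows = conn.execute("""SELECT shot, betan FROM summaries
                       WHERE betan > 2.0 ORDER BY shot""").fetchall()
print(rows)  # -> [(98549, 2.1), (98550, 2.9)]
```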

  18. Director of Duke Institute Wants To Make Medicine More of a Science.

    ERIC Educational Resources Information Center

    Wheeler, David L.

    1998-01-01

    The director of the Duke Clinical Research Institute (North Carolina) is committed to making better use of patient information for medical research, and is building a database from the institute's clinical trials. His approach is to provide biomedical researchers with daily involvement in medicine rather than management. (MSE)

  19. The Coral Trait Database, a curated database of trait information for coral species from the global oceans

    NASA Astrophysics Data System (ADS)

    Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C. L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.

    2016-03-01

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  20. The Coral Trait Database, a curated database of trait information for coral species from the global oceans

    PubMed Central

    Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C.L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.

    2016-01-01

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research. PMID:27023900

  1. The Coral Trait Database, a curated database of trait information for coral species from the global oceans.

    PubMed

    Madin, Joshua S; Anderson, Kristen D; Andreasen, Magnus Heide; Bridge, Tom C L; Cairns, Stephen D; Connolly, Sean R; Darling, Emily S; Diaz, Marcela; Falster, Daniel S; Franklin, Erik C; Gates, Ruth D; Harmer, Aaron; Hoogenboom, Mia O; Huang, Danwei; Keith, Sally A; Kosnik, Matthew A; Kuo, Chao-Yang; Lough, Janice M; Lovelock, Catherine E; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M; Pochon, Xavier; Pratchett, Morgan S; Putnam, Hollie M; Roberts, T Edward; Stat, Michael; Wallace, Carden C; Widman, Elizabeth; Baird, Andrew H

    2016-03-29

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism's function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  2. Solutions in radiology services management: a literature review.

    PubMed

    Pereira, Aline Garcia; Vergara, Lizandra Garcia Lupi; Merino, Eugenio Andrés Díaz; Wagner, Adriano

    2015-01-01

    The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. A basic, qualitative, exploratory literature review was conducted in the Scopus and SciELO databases, utilizing the Mendeley and Adobe Illustrator CC software. In the databases, 565 papers were identified, 120 of them available as free PDFs. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides services with interesting solutions such as benchmarking, CRM, the Lean approach, service blueprinting, and continuing education, among others. Literature review is an important tool for identifying problems and their respective solutions. However, considering the small number of studies approaching the management of radiology services, this remains a promising field for deeper research.

  3. Identification and management of elder physical abuse in the routine of dentistry - a systematic review.

    PubMed

    Silva, Luanderson O; Souza-Silva, Bianca N; de Alcântara Rodrigues, José Lucas; Rigo, Lilian; Cericato, Graziela O; Franco, Ademir; Paranhos, Luiz R

    2017-03-01

    To perform a systematic search of the literature in order to verify whether dentists are able to identify and manage cases of elder physical abuse. Dentists may play an important legal role, contributing to the management of abused patients through the identification of injuries to the face, head, and neck. The present systematic review was performed following the PRISMA Statement and was registered in the PROSPERO database. A search was conducted in the following electronic databases: PubMed, LILACS, SciELO, Embase, Web of Science, OpenGrey, and Google Scholar. Specifically, the last two databases were used to search the 'grey literature'. The research question was based on the PVO strategy for systematic exploratory reviews. Two examiners determined the eligibility criteria for selecting the studies and performed all the research steps. The initial search resulted in 842 studies, of which eight were considered eligible. Six studies used questionnaires to assess the perceptions, knowledge, and attitudes of dentists towards the identification and management of cases of elder abuse, while two studies assessed this information through personal interviews. Two studies were rated as high quality, while six reached moderate quality. Male and female dentists were assessed separately in six studies. Only three studies specified the aggressor. The dentists revealed insufficient knowledge of elder abuse. Most dentists are not able to identify and manage these cases in the clinical routine. © 2016 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.

  4. Big issues, small systems: managing with information in medical research.

    PubMed

    Jones, J; Preston, H

    2000-08-01

    The subject of this article is the design of a database system for handling files related to the work of the Molecular Genetics Department of the International Blood Group Reference Laboratory. It examines the specialist information needs identified within this organization, and it indicates how the design of the Rhesus Information Tracking System was able to meet current needs. Rapid Application Development prototyping forms the basis of the investigation, linked to interview, questionnaire, and observation techniques in order to establish requirements for interoperability. In particular, the place of this specialist database within the much broader information strategy of the National Blood Service will be examined. This unique situation is analogous to management activities in broader environments, and a number of generic issues are highlighted by the research.

  5. Economic analysis of centralized vs. decentralized electronic data capture in multi-center clinical studies.

    PubMed

    Walden, Anita; Nahm, Meredith; Barnett, M Edwina; Conde, Jose G; Dent, Andrew; Fadiel, Ahmed; Perry, Theresa; Tolk, Chris; Tcheng, James E; Eisenstein, Eric L

    2011-01-01

    New data management models are emerging in multi-center clinical studies. We evaluated the incremental costs associated with decentralized vs. centralized models. We developed clinical research network economic models to evaluate three data management models: centralized, decentralized with local software, and decentralized with shared database. Descriptive information from three clinical research studies served as inputs for these models. The primary outcome was total data management costs. Secondary outcomes included: data management costs for sites, local data centers, and central coordinating centers. Both decentralized models were more costly than the centralized model for each clinical research study: the decentralized with local software model was the most expensive. Decreasing the number of local data centers and case book pages reduced cost differentials between models. Decentralized vs. centralized data management in multi-center clinical research studies is associated with increases in data management costs.

  6. Economic Analysis of Centralized vs. Decentralized Electronic Data Capture in Multi-Center Clinical Studies

    PubMed Central

    Walden, Anita; Nahm, Meredith; Barnett, M. Edwina; Conde, Jose G.; Dent, Andrew; Fadiel, Ahmed; Perry, Theresa; Tolk, Chris; Tcheng, James E.; Eisenstein, Eric L.

    2012-01-01

    Background New data management models are emerging in multi-center clinical studies. We evaluated the incremental costs associated with decentralized vs. centralized models. Methods We developed clinical research network economic models to evaluate three data management models: centralized, decentralized with local software, and decentralized with shared database. Descriptive information from three clinical research studies served as inputs for these models. Main Outcome Measures The primary outcome was total data management costs. Secondary outcomes included: data management costs for sites, local data centers, and central coordinating centers. Results Both decentralized models were more costly than the centralized model for each clinical research study: the decentralized with local software model was the most expensive. Decreasing the number of local data centers and case book pages reduced cost differentials between models. Conclusion Decentralized vs. centralized data management in multi-center clinical research studies is associated with increases in data management costs. PMID:21335692
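
    The cost comparison in these two records reduces to simple arithmetic over the three cost streams named as outcomes: site costs, local data center costs, and the central coordinating center. The figures below are invented for illustration (the study's real inputs came from three clinical research studies), but they reproduce the qualitative finding that both decentralized models cost more, with the local-software variant the most expensive.

```python
# Minimal sketch of the three-model cost comparison. All figures are
# invented for illustration only.
def total_cost(site_cost_per_site, n_sites,
               local_center_cost, n_local_centers,
               central_center_cost):
    """Total = site costs + local data center costs + central center."""
    return (site_cost_per_site * n_sites
            + local_center_cost * n_local_centers
            + central_center_cost)

models = {
    "centralized":              total_cost(5_000, 20,      0, 0, 400_000),
    "decentralized, shared DB": total_cost(5_000, 20, 60_000, 4, 250_000),
    "decentralized, local sw":  total_cost(5_000, 20, 90_000, 4, 250_000),
}
for name, cost in sorted(models.items(), key=lambda kv: kv[1]):
    print(f"{name:>26}: ${cost:,}")
```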

  7. Information Management of Web Application Based Environmental Performance Management in Concentrating Division of PTFI

    NASA Astrophysics Data System (ADS)

    Susanto, Arif; Mulyono, Nur Budi

    2018-02-01

    The change of the environmental management system standard to its latest version, ISO 14001:2015, alters the data and information needed for decision making and for achieving objectives across the organization. Information management is the organization's responsibility for ensuring effectiveness and efficiency from the creation, storage, and processing of information through to its distribution, supporting operations and effective decision making in environmental performance management. The objective of this research was to set up an information management program, with technology adopted as its supporting component, in the PTFI Concentrating Division, so that it aligns with the organization's environmental management objectives under the ISO 14001:2015 standard. Materials and methods covered the technical aspects of information management, namely web-based application development using usage-centered design. The results showed that the use of Single Sign On made it easy for users to interact further with the environmental management system. The web-based application was developed by creating an entity relationship diagram (ERD), derived from the relational database schemas of several environmental performance databases in the Concentrating Division, and by performing information extraction focused on attributes, keys, and the determination of constraints.
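
    As an illustration of the ERD-to-schema step the abstract describes, the following is a minimal sketch of what a relational schema for environmental performance data might look like; the table and column names are hypothetical, not those of the PTFI system.

    ```python
    # Hypothetical relational schema of the kind an environmental-performance
    # ERD might translate into; names and constraints are illustrative only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE monitoring_point (
        point_id INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        location TEXT
    );
    CREATE TABLE parameter (
        parameter_id INTEGER PRIMARY KEY,
        name         TEXT NOT NULL,   -- e.g. 'TSS', 'pH'
        unit         TEXT NOT NULL
    );
    CREATE TABLE measurement (
        point_id     INTEGER NOT NULL REFERENCES monitoring_point(point_id),
        parameter_id INTEGER NOT NULL REFERENCES parameter(parameter_id),
        measured_at  TEXT NOT NULL,   -- ISO 8601 timestamp
        value        REAL NOT NULL CHECK (value >= 0)
    );
    """)
    print("schema created")
    ```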

  8. The Development of a Korean Drug Dosing Database

    PubMed Central

    Kim, Sun Ah; Kim, Jung Hoon; Jang, Yoo Jin; Jeon, Man Ho; Hwang, Joong Un; Jeong, Young Mi; Choi, Kyung Suk; Lee, Iyn Hyang; Jeon, Jin Ok; Lee, Eun Sook; Lee, Eun Kyung; Kim, Hong Bin; Chin, Ho Jun; Ha, Ji Hye; Kim, Young Hoon

    2011-01-01

    Objectives: This report describes the development process of a drug dosing database for ethical drugs approved by the Korea Food & Drug Administration (KFDA). The goal of this study was to develop a computerized system that supports physicians' prescribing decisions, particularly in regards to medication dosing. Methods: The advisory committee, composed of doctors, pharmacists, and nurses from the Seoul National University Bundang Hospital, pharmacists familiar with drug databases, KFDA officials, and software developers from the BIT Computer Co. Ltd., analyzed approved KFDA drug dosing information, defined the fields and properties of the information structure, and designed a management program used to enter dosing information. The management program was developed as a web-based system that allows multiple researchers to input drug dosing information in an organized manner. The process was improved by adding input fields and eliminating unnecessary existing fields as dosing information was entered, resulting in an improved field structure. Results: Usage and dosing information for a total of 16,994 drugs sold in the Korean market in July 2009, excluding those meeting the exclusion criteria (e.g., radioactive drugs, X-ray contrast media), were made into a database. Conclusions: The drug dosing database was successfully developed, and the dosing information for new drugs can be continually maintained through the management mode. This database will be used to develop drug utilization review standards and to provide appropriate dosing information. PMID:22259729
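
    A sketch of the kind of entry-time check such a management program might apply when researchers input dosing information is given below; the field names and rules are invented for illustration, not taken from the KFDA database.

    ```python
    # Hypothetical dosing record structure and entry-time checks; field names
    # are illustrative only, not the KFDA database's actual field structure.
    REQUIRED_FIELDS = {"drug_code", "route", "min_daily_dose",
                       "max_daily_dose", "unit"}

    def validate_dosing_record(record):
        """Return a list of problems; an empty list means the record can be saved."""
        errors = [f"missing field: {f}"
                  for f in sorted(REQUIRED_FIELDS - record.keys())]
        if not errors and record["min_daily_dose"] > record["max_daily_dose"]:
            errors.append("min_daily_dose exceeds max_daily_dose")
        return errors

    print(validate_dosing_record({"drug_code": "KR-0001", "route": "oral",
                                  "min_daily_dose": 5, "max_daily_dose": 10,
                                  "unit": "mg"}))   # -> []
    ```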

  9. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  10. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador

    PubMed Central

    Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    Objective: To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. Materials and methods: This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. Discussion: A newly adopted legal regime in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles, which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. Conclusion: The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research. PMID:22859644

  11. Privacy protection and public goods: building a genetic database for health research in Newfoundland and Labrador.

    PubMed

    Kosseim, Patricia; Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton

    2013-01-01

    To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. A newly adopted legal regime in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles, which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research.

  12. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    PubMed

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  13. PRAIRIEMAP: A GIS database for prairie grassland management in western North America

    USGS Publications Warehouse


    2003-01-01

    The USGS Forest and Rangeland Ecosystem Science Center, Snake River Field Station (SRFS) maintains a database of spatial information, called PRAIRIEMAP, which is needed to address the management of prairie grasslands in western North America. We identify and collect spatial data for the region encompassing the historical extent of prairie grasslands (Figure 1). State and federal agencies, the primary entities responsible for management of prairie grasslands, need this information to develop proactive management strategies to prevent prairie-grassland wildlife species from being listed as Endangered Species, or to develop appropriate responses if listing does occur. Spatial data are an important component in documenting current habitat and other environmental conditions, which can be used to identify areas that have undergone significant changes in land cover and to identify underlying causes. Spatial data will also be a critical component guiding the decision processes for restoration of habitat in the Great Plains. As such, the PRAIRIEMAP database will facilitate analyses of large-scale and range-wide factors that may be causing declines in grassland habitat and populations of species that depend on it for their survival. Therefore, development of a reliable spatial database carries multiple benefits for land and wildlife management. The project consists of 3 phases: (1) identify relevant spatial data, (2) assemble, document, and archive spatial data on a computer server, and (3) develop and maintain the web site (http://prairiemap.wr.usgs.gov) for query and transfer of GIS data to managers and researchers.

  14. Ten Years Experience In Geo-Databases For Linear Facilities Risk Assessment (Lfra)

    NASA Astrophysics Data System (ADS)

    Oboni, F.

    2003-04-01

    Keywords: geo-environmental, database, ISO 14000, management, decision-making, risk, pipelines, roads, railroads, loss control, SAR, hazard identification

    Abstract: During the past decades, characterized by the development of the Risk Management (RM) culture, a variety of RM models have been proposed by governmental agencies in various parts of the world. The most structured models appear to have originated in the field of environmental RM. These models are briefly reviewed in the first section of the paper, focusing on the difference between Hazard Management and Risk Management and on the need to use databases that allow retrieval of specific information and effective updating. The core of the paper reviews a number of different RM approaches, based on extensions of geo-databases, developed specifically for linear facilities (LF) in transportation corridors since the early 1990s in Switzerland, Italy, Canada, the US, and South America. The applications are compared in terms of methodology, capabilities, and the resources necessary for their implementation. The paper then considers the level of detail that applications and related data must attain. Common pitfalls of decision making based on hazards rather than on risks are discussed. The final sections describe the next generation of linear facility RA applications, including examples of results and a discussion of future methodological research. It is shown that geo-databases should be linked to loss control and accident reports in order to maximize their benefits. The links between RA and ISO 14000 (the environmental management code) are explicitly considered.

  15. Emissions & Measurements - Black Carbon

    EPA Pesticide Factsheets

    Emissions and Measurement (EM) research activities performed within the National Risk Management Research Lab (NRMRL) of EPA's Office of Research and Development (ORD) support measurement and laboratory analysis approaches to accurately characterize source emissions and near-source concentrations of air pollutants. They also support integrated Agency research programs (e.g., source to health outcomes) and the development of databases and inventories that help Federal, state, and local air quality managers and industry implement and comply with air pollution standards. EM research underway in NRMRL supports the Agency's efforts to accurately characterize, analyze, measure, and manage sources of air pollution. This pamphlet focuses on the EM research that NRMRL researchers conduct related to black carbon (BC), a pollutant of concern to EPA due to its potential impact on human health and climate change. There are extensive uncertainties in emissions of BC from stationary and mobile sources.

  16. A Conceptual Framework for Systematic Reviews of Research in Educational Leadership and Management

    ERIC Educational Resources Information Center

    Hallinger, Philip

    2013-01-01

    Purpose: The purpose of this paper is to present a framework for scholars carrying out reviews of research that meet international standards for publication. Design/methodology/approach: This is primarily a conceptual paper focusing on the methodology of conducting systematic reviews of research. However, the paper draws on a database of reviews…

  17. Protocol for developing a Database of Zoonotic disease Research in India (DoZooRI).

    PubMed

    Chatterjee, Pranab; Bhaumik, Soumyadeep; Chauhan, Abhimanyu Singh; Kakkar, Manish

    2017-12-10

    Zoonotic and emerging infectious diseases (EIDs) represent a public health threat that has been acknowledged only recently, although they have been on the rise for the past several decades. On average, one pathogen has emerged or re-emerged on a global scale every year since the Second World War. Low/middle-income countries such as India bear a significant burden of zoonotic and EIDs. We propose that the creation of a database of published, peer-reviewed research will open up avenues for evidence-based policymaking for targeted prevention and control of zoonoses. A large-scale systematic mapping of the published peer-reviewed research conducted in India will be undertaken. All published research will be included in the database, without quality screening, to broaden the scope of included studies. Structured search strategies will be developed for priority zoonotic diseases (leptospirosis, rabies, anthrax, brucellosis, cysticercosis, salmonellosis, bovine tuberculosis, Japanese encephalitis and rickettsial infections), and multiple databases will be searched for studies conducted in India. The database will be managed and hosted on a cloud-based platform called Rayyan. Individual studies will be tagged based on key preidentified parameters (disease, study design, study type, location, randomisation status and interventions, host involvement and others, as applicable). The database will incorporate already published studies, obviating the need for additional ethical clearances. The database will be made available online, and in collaboration with multisectoral teams, domains of enquiry will be identified and subsequent research questions raised. The database will be queried for these, and the resulting evidence will be analysed and published in peer-reviewed journals. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
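
    The study-tagging scheme the protocol describes can be illustrated with a small sketch; the records and tag vocabulary below are invented, not drawn from DoZooRI.

    ```python
    # Invented sketch of tagging studies with preidentified parameters; the
    # tag vocabulary and records are illustrative, not DoZooRI content.
    from dataclasses import dataclass, field

    @dataclass
    class StudyRecord:
        pmid: str
        disease: str          # e.g. 'leptospirosis'
        study_design: str     # e.g. 'cross-sectional'
        location: str
        tags: set = field(default_factory=set)

    database = [
        StudyRecord("100001", "rabies", "case series", "Kerala", {"host:canine"}),
        StudyRecord("100002", "anthrax", "outbreak report", "Odisha", {"host:bovine"}),
    ]

    # Querying the tagged database for one domain of enquiry:
    rabies_studies = [s.pmid for s in database if s.disease == "rabies"]
    print(rabies_studies)   # -> ['100001']
    ```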

  18. Review of telehealth stuttering management.

    PubMed

    Lowe, Robyn; O'Brian, Sue; Onslow, Mark

    2013-01-01

    Telehealth is the use of communication technology to provide health care services by means other than typical in-clinic attendance models. Telehealth is increasingly used for the management of speech, language and communication disorders. The aim of this article is to review telehealth applications to stuttering management. We conducted a search of peer-reviewed literature for the past 20 years using the Institute for Scientific Information Web of Science database, PubMed: The Bibliographic Database and a search for articles by hand. Outcomes for telehealth stuttering treatment were generally positive, but there may be a compromise of treatment efficiency with telehealth treatment of young children. Our search found no studies dealing with stuttering assessment procedures using telehealth models. No economic analyses of this delivery model have been reported. This review highlights the need for continued research about telehealth for stuttering management. Evidence from research is needed to inform the efficacy of assessment procedures using telehealth methods as well as guide the development of improved treatment procedures. Clinical and technical guidelines are urgently needed to ensure that the evolving and continued use of telehealth to manage stuttering does not compromise the standards of care afforded with standard in-clinic models.

  19. Research on spatio-temporal database techniques for spatial information service

    NASA Astrophysics Data System (ADS)

    Zhao, Rong; Wang, Liang; Li, Yuxiang; Fan, Rongshuang; Liu, Ping; Li, Qingyuan

    2007-06-01

    Geographic data should be described by spatial, temporal, and attribute components, but spatio-temporal queries are difficult to answer within current GIS. This paper describes research into the development and application of a spatio-temporal data management system based on the GeoWindows GIS software platform developed by the Chinese Academy of Surveying and Mapping (CASM). Facing the current, practical requirements of spatial information applications, and building on the existing GIS platform, we first established a spatio-temporal data model that integrates vector and grid data. Second, we solved the key technique of building temporal data topology and developed a spatio-temporal database management system using object-oriented methods. The system provides temporal data collection, storage, management, display, and query functions. Finally, as a case study, we explored the application of the spatio-temporal data management system using administrative region data for multiple historical periods of China as the basic data. With these efforts, the capacity of GIS to manage and manipulate temporal and attribute data has been enhanced, and a technical reference has been provided for the further development of temporal geographic information systems (TGIS).
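
    The temporal-slice queries such a system targets can be sketched simply: each feature version carries a validity interval, and a snapshot query filters on it. The data model below is a deliberate simplification, not the GeoWindows implementation.

    ```python
    # Simplified temporal-slice query: each feature version is valid over a
    # [start, end) interval; a snapshot keeps the versions valid at one date.
    from datetime import date

    regions = [
        {"name": "Region A", "boundary": "polygon_v1",
         "valid": (date(1950, 1, 1), date(1980, 1, 1))},
        {"name": "Region A", "boundary": "polygon_v2",
         "valid": (date(1980, 1, 1), date(9999, 12, 31))},
    ]

    def snapshot(features, at):
        """Return the feature versions valid at the given date."""
        return [f for f in features if f["valid"][0] <= at < f["valid"][1]]

    print(snapshot(regions, date(1975, 6, 1)))  # boundary as of 1975: polygon_v1
    ```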

  20. [Technical improvement of cohort constitution in administrative health databases: Providing a tool for integration and standardization of data applicable in the French National Health Insurance Database (SNIIRAM)].

    PubMed

    Ferdynus, C; Huiart, L

    2016-09-01

    Administrative health databases such as the French National Health Insurance Database - SNIIRAM - are a major tool to answer numerous public health research questions. However, the use of such data requires complex and time-consuming data management. Our objective was to develop and make available a tool to optimize cohort constitution within administrative health databases. We developed a process to extract, transform and load (ETL) data from various heterogeneous sources into a standardized data warehouse. This data warehouse is architected as a star schema corresponding to an i2b2 star schema model. We then evaluated the performance of this ETL using data from a pharmacoepidemiology research project conducted in the SNIIRAM database. The ETL we developed comprises a set of functionalities for creating SAS scripts. Data can be integrated into a standardized data warehouse. As part of the performance assessment of this ETL, we achieved integration of a dataset from the SNIIRAM comprising more than 900 million lines in less than three hours using a desktop computer. This enables patient selection from the standardized data warehouse within seconds of the request. The ETL described in this paper provides a tool which is effective and compatible with all administrative health databases, without requiring complex database servers. This tool should simplify cohort constitution in health databases; the standardization of warehouse data facilitates collaborative work between research teams. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
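
    A minimal sketch of loading source rows into an i2b2-style star schema (one observation fact table keyed to dimension tables) is shown below. It uses sqlite3 in place of a production warehouse, and the column details are illustrative rather than the SNIIRAM or i2b2 specifics.

    ```python
    # Toy ETL into an i2b2-style star schema; table names follow the i2b2
    # convention loosely, column details and keys are illustrative only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE patient_dimension (patient_num INTEGER PRIMARY KEY, birth_date TEXT);
    CREATE TABLE concept_dimension (concept_cd TEXT PRIMARY KEY, name_char TEXT);
    CREATE TABLE observation_fact (
        patient_num INTEGER REFERENCES patient_dimension(patient_num),
        concept_cd  TEXT REFERENCES concept_dimension(concept_cd),
        start_date  TEXT
    );
    """)

    # Extract (here: in-memory source rows), transform, load.
    source_rows = [("P1", "1960-02-01", "ATC:C09AA05", "2015-03-10")]
    for pid, birth, drug, dispensed in source_rows:
        patient_num = hash(pid) % 10**8          # toy surrogate key
        conn.execute("INSERT OR IGNORE INTO patient_dimension VALUES (?, ?)",
                     (patient_num, birth))
        conn.execute("INSERT OR IGNORE INTO concept_dimension VALUES (?, ?)",
                     (drug, drug))
        conn.execute("INSERT INTO observation_fact VALUES (?, ?, ?)",
                     (patient_num, drug, dispensed))

    # Cohort selection then becomes a fast query over the fact table:
    print(conn.execute("SELECT COUNT(DISTINCT patient_num) FROM observation_fact "
                       "WHERE concept_cd = 'ATC:C09AA05'").fetchone())  # (1,)
    ```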

  1. An object-oriented approach to the management of meteorological and hydrological data

    NASA Technical Reports Server (NTRS)

    Graves, S. J.; Williams, S. F.; Criswell, E. A.

    1990-01-01

    An interface to several meteorological and hydrological databases has been developed that enables researchers to efficiently access and interrelate data through a customized menu system. By extending a relational database system with object-oriented concepts, each user or group of users may have different 'views' of the data, allowing access in customized ways without altering the organization of the database. Applications to COHMEX and WetNet, two earth science projects within NASA Marshall Space Flight Center's Earth Science and Applications Division, are described.
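
    The idea of per-user 'views' over an unchanged relational store can be sketched as object wrappers that project the same rows differently; the class and field names below are hypothetical, not the system's actual design.

    ```python
    # Two object 'views' of one shared relation: each projects only the
    # fields its user group cares about, without altering the stored rows.
    ROWS = [  # shared underlying relation (hypothetical fields)
        {"station": "S1", "rainfall_mm": 12.7, "soil_moisture": 0.31, "t_air_c": 22.4},
        {"station": "S2", "rainfall_mm": 0.0,  "soil_moisture": 0.18, "t_air_c": 25.1},
    ]

    class HydrologyView:
        """A hydrologist's view: only water-related fields."""
        def records(self):
            return [{k: r[k] for k in ("station", "rainfall_mm", "soil_moisture")}
                    for r in ROWS]

    class MeteorologyView:
        """A meteorologist's view of the same rows."""
        def records(self):
            return [{k: r[k] for k in ("station", "rainfall_mm", "t_air_c")}
                    for r in ROWS]

    print(HydrologyView().records()[0])
    print(MeteorologyView().records()[0])
    ```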

  2. United States Air Force Summer Research Program -- 1993. Volume 4. Rome Laboratory

    DTIC Science & Technology

    1993-12-01

    Excerpt fragments: references on object-oriented concepts, databases, and applications (Addison-Wesley, Reading, MA, 1989) and Lano, K., "Z++, An Object-Orientated…"; the Target Filter Manager responds to requests for data and accesses the target database over TCP (janus.rl.af.mil, mensa.rl.af.mil); [Figure 12: contour plot of antenna pattern, QC2 algorithm]; "Updating Probabilistic Databases," Michael A…

  3. Sankofa pediatric HIV disclosure intervention cyber data management: building capacity in a resource-limited setting and ensuring data quality.

    PubMed

    Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R; Paintsil, Elijah

    2015-01-01

    Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure (https://hubzero.org), an open source software platform. The hub database components support: (1) data management - the "databases" component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection - the "forms" component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration - the "dataviewer" component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child-caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team.
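
    A minimal sketch of the submit-time error checking and validation such web case report forms perform is shown below; the field names and rules are invented, not the Sankofa forms' actual logic.

    ```python
    # Invented submit-time validation for a web case report form; the rules
    # shown are illustrative, not the Sankofa forms' actual logic.
    from datetime import date

    def process_submission(form):
        """Validate a submitted form; return (ok, list of errors)."""
        errors = []
        if not form.get("dyad_id"):
            errors.append("dyad_id is required")
        age = form.get("child_age")
        if age is None or not 0 <= age <= 17:
            errors.append("child_age must be between 0 and 17")
        visit = form.get("visit_date")
        if visit and visit > date.today().isoformat():
            errors.append("visit_date cannot be in the future")
        return (not errors, errors)

    print(process_submission({"dyad_id": "GH-0042", "child_age": 9,
                              "visit_date": "2015-06-01"}))  # -> (True, [])
    ```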

  4. Global Coordination and Standardisation in Marine Biodiversity through the World Register of Marine Species (WoRMS) and Related Databases

    PubMed Central

    Bouchet, Philippe; Boxshall, Geoff; Fauchald, Kristian; Gordon, Dennis; Hoeksema, Bert W.; Poore, Gary C. B.; van Soest, Rob W. M.; Stöhr, Sabine; Walter, T. Chad; Vanhoorne, Bart; Decock, Wim

    2013-01-01

    The World Register of Marine Species is an over 90% complete open-access inventory of all marine species names. Here we illustrate the scale of the problems with species names, synonyms, and their classification, and describe how WoRMS publishes online quality assured information on marine species. Within WoRMS, over 100 global, 12 regional and 4 thematic species databases are integrated with a common taxonomy. Over 240 editors from 133 institutions and 31 countries manage the content. To avoid duplication of effort, content is exchanged with 10 external databases. At present WoRMS contains 460,000 taxonomic names (from Kingdom to subspecies), 368,000 species level combinations of which 215,000 are currently accepted marine species names, and 26,000 related but non-marine species. Associated information includes 150,000 literature sources, 20,000 images, and locations of 44,000 specimens. Usage has grown linearly since its launch in 2007, with about 600,000 unique visitors to the website in 2011, and at least 90 organisations from 12 countries using WoRMS for their data management. By providing easy access to expert-validated content, WoRMS improves quality control in the use of species names, with consequent benefits to taxonomy, ecology, conservation and marine biodiversity research and management. The service manages information on species names that would otherwise be overly costly for individuals, and thus minimises errors in the application of nomenclature standards. WoRMS' content is expanding to include host-parasite relationships, additional literature sources, locations of specimens, images, distribution range, ecological, and biological data. Species are being categorised as introduced (alien, invasive), of conservation importance, and on other attributes. These developments have a multiplier effect on its potential as a resource for biodiversity research and management. As a consequence of WoRMS, we are witnessing improved communication within the scientific community, and anticipate increased taxonomic efficiency and quality control in marine biodiversity research and management. PMID:23505408

  5. Global coordination and standardisation in marine biodiversity through the World Register of Marine Species (WoRMS) and related databases.

    PubMed

    Costello, Mark J; Bouchet, Philippe; Boxshall, Geoff; Fauchald, Kristian; Gordon, Dennis; Hoeksema, Bert W; Poore, Gary C B; van Soest, Rob W M; Stöhr, Sabine; Walter, T Chad; Vanhoorne, Bart; Decock, Wim; Appeltans, Ward

    2013-01-01

    The World Register of Marine Species is an over 90% complete open-access inventory of all marine species names. Here we illustrate the scale of the problems with species names, synonyms, and their classification, and describe how WoRMS publishes online quality assured information on marine species. Within WoRMS, over 100 global, 12 regional and 4 thematic species databases are integrated with a common taxonomy. Over 240 editors from 133 institutions and 31 countries manage the content. To avoid duplication of effort, content is exchanged with 10 external databases. At present WoRMS contains 460,000 taxonomic names (from Kingdom to subspecies), 368,000 species level combinations of which 215,000 are currently accepted marine species names, and 26,000 related but non-marine species. Associated information includes 150,000 literature sources, 20,000 images, and locations of 44,000 specimens. Usage has grown linearly since its launch in 2007, with about 600,000 unique visitors to the website in 2011, and at least 90 organisations from 12 countries using WoRMS for their data management. By providing easy access to expert-validated content, WoRMS improves quality control in the use of species names, with consequent benefits to taxonomy, ecology, conservation and marine biodiversity research and management. The service manages information on species names that would otherwise be overly costly for individuals, and thus minimises errors in the application of nomenclature standards. WoRMS' content is expanding to include host-parasite relationships, additional literature sources, locations of specimens, images, distribution range, ecological, and biological data. Species are being categorised as introduced (alien, invasive), of conservation importance, and on other attributes. These developments have a multiplier effect on its potential as a resource for biodiversity research and management. As a consequence of WoRMS, we are witnessing improved communication within the scientific community, and anticipate increased taxonomic efficiency and quality control in marine biodiversity research and management.

  6. Perceived Self-Efficacy: A Concept Analysis for Symptom Management in Patients With Cancer.

    PubMed

    White, Lynn L; Cohen, Marlene Z; Berger, Ann M; Kupzyk, Kevin A; Swore-Fletcher, Barbara A; Bierman, Philip J

    2017-12-01

    Perceived self-efficacy (PSE) for symptom management plays a key role in outcomes for patients with cancer, such as quality of life, functional status, symptom distress, and healthcare use. Definition of the concept is necessary for use in research and to guide the development of interventions to facilitate PSE for symptom management in patients with cancer. This analysis will describe the concept of PSE for symptom management in patients with cancer. A database search was performed for related publications from 2006-2016. Landmark publications published prior to 2006 that informed the concept analysis were included. Greater PSE for symptom management predicts improved performance outcomes, including functional health status, cognitive function, and disease status. Clarification of the concept of PSE for symptom management will accelerate the progress of self-management research and allow for comparison of research data and intervention development.

  7. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases

    PubMed Central

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-01-01

    Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
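
    The general pattern such a lightweight JSON interface enables (fetch a fragment of linked data over HTTP, then traverse it in an ordinary scripting language) can be sketched as follows; the endpoint and response shape here are hypothetical, not the actual Semantic-JSON API.

    ```python
    # Generic fetch-and-traverse pattern over a JSON web service; the URL and
    # response shape are hypothetical, not the actual Semantic-JSON API.
    import json
    import urllib.request

    def fetch_records(url):
        """Fetch one JSON document and return its 'records' list."""
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["records"]

    # Usage (hypothetical endpoint):
    # for record in fetch_records("https://example.org/api/genes?locus=XYZ"):
    #     print(record["id"], record.get("linked_to", []))
    ```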

  8. Managers' Support for Employee Wellness Programs: An Integrative Review.

    PubMed

    Passey, Deborah G; Brown, Meagan C; Hammerback, Kristen; Harris, Jeffrey R; Hannon, Peggy A

    2018-01-01

    The aim of this integrative literature review is to synthesize the existing evidence regarding managers' support for employee wellness programs. The search utilized multiple electronic databases and libraries. Inclusion criteria comprised peer-reviewed research published in English, between 1990 and 2016, and examining managers' support in the context of a worksite intervention. The final sample included 21 articles for analysis. Two researchers extracted and described results from each of the included articles using a content analysis. Two researchers independently rated the quality of the included articles. Researchers synthesized data into a summary table by study design, sample, data collected, key findings, and quality rating. Factors that may influence managers' support include their organization's management structure, senior leadership support, their expected roles, training on health topics, and their beliefs and attitudes toward wellness programs and employee health. Managers' support may influence the organizational culture, employees' perception of support, and employees' behaviors. When designing interventions, health promotion practitioners and researchers should consider strategies that target senior, middle, and line managers' support. Interventions need to include explicit measures of managers' support as part of the evaluation plan.

  9. The unified database for the fixed target experiment BM@N

    NASA Astrophysics Data System (ADS)

    Gertsenberger, K. V.

    2016-09-01

    The article describes the database developed as the comprehensive data storage of the fixed-target experiment BM@N [1] at the Joint Institute for Nuclear Research (JINR) in Dubna. The structure and purposes of the BM@N facility will be briefly presented. The scheme of the unified database and its parameters will be described in detail. The BM@N database, implemented on the PostgreSQL database management system (DBMS), provides user access to the actual information of the experiment. The interfaces developed for access to the database will also be presented: one was implemented as a set of C++ classes to access the data without SQL statements; the other is a Web interface available on the Web page of the BM@N experiment.
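
    The paper's C++ classes hide SQL behind method calls; a minimal Python analogue of that access pattern is sketched below, with sqlite3 standing in for PostgreSQL and hypothetical table and column names.

    ```python
    # Sketch of a data-access class that hides SQL from the call site, in the
    # spirit of the paper's C++ interface; schema and names are invented.
    import sqlite3

    class RunCatalog:
        """Access experiment run records without writing SQL at the call site."""
        def __init__(self, conn):
            self._conn = conn
            self._conn.execute("CREATE TABLE IF NOT EXISTS run "
                               "(run_id INTEGER PRIMARY KEY, beam TEXT, energy REAL)")

        def add_run(self, run_id, beam, energy):
            self._conn.execute("INSERT INTO run VALUES (?, ?, ?)",
                               (run_id, beam, energy))

        def runs_with_beam(self, beam):
            cur = self._conn.execute(
                "SELECT run_id, energy FROM run WHERE beam = ?", (beam,))
            return cur.fetchall()

    catalog = RunCatalog(sqlite3.connect(":memory:"))
    catalog.add_run(1, "d", 4.0)
    print(catalog.runs_with_beam("d"))  # -> [(1, 4.0)]
    ```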

  10. Pathobiology and management of laboratory rodents administered CDC category A agents.

    PubMed

    He, Yongqun; Rush, Howard G; Liepman, Rachel S; Xiang, Zuoshuang; Colby, Lesley A

    2007-02-01

    The Centers for Disease Control and Prevention Category A infectious agents include Bacillus anthracis (anthrax), Clostridium botulinum toxin (botulism), Yersinia pestis (plague), variola major virus (smallpox), Francisella tularensis (tularemia), and the filoviruses and arenaviruses that induce viral hemorrhagic fevers. These agents are regarded as having the greatest potential for adverse impact on public health and therefore are a focus of renewed attention in infectious disease research. Frequently rodent models are used to study the pathobiology of these agents. Although much is known regarding naturally occurring infections in humans, less is documented on the sources of exposures and potential risks of infection to researchers and animal care personnel after the administration of these hazardous substances to laboratory animals. Failure to appropriately manage the animals can result both in the creation of workplace hazards if human exposures occur and in disruption of the research if unintended animal exposures occur. Here we review representative Category A agents, with a focus on comparing the biologic effects in naturally infected humans and rodent models and on considerations specific to the management of infected rodent subjects. The information reviewed for each agent has been curated manually and stored in a unique Internet-based database system called HazARD (Hazards in Animal Research Database, http://helab.bioinformatics.med.umich.edu/hazard/) that is designed to assist researchers, administrators, safety officials, Institutional Biosafety Committees, and veterinary personnel seeking information on the management of risks associated with animal studies involving hazardous substances.

  11. Data management for community research projects: A JGOFS case study

    NASA Technical Reports Server (NTRS)

    Lowry, Roy K.

    1992-01-01

    Since the mid 1980s, much of the marine science research effort in the United Kingdom has been focused into large scale collaborative projects involving public sector laboratories and university departments, termed Community Research Projects. Two of these, the Biogeochemical Ocean Flux Study (BOFS) and the North Sea Project incorporated large scale data collection to underpin multidisciplinary modeling efforts. The challenge of providing project data sets to support the science was met by a small team within the British Oceanographic Data Centre (BODC) operating as a topical data center. The role of the data center was to both work up the data from the ship's sensors and to combine these data with sample measurements into online databases. The working up of the data was achieved by a unique symbiosis between data center staff and project scientists. The project management, programming and data processing skills of the data center were combined with the oceanographic experience of the project communities to develop a system which has produced quality controlled, calibrated data sets from 49 research cruises in 3.5 years of operation. The data center resources required to achieve this were modest and far outweighed by the time liberated in the scientific community by the removal of the data processing burden. Two online project databases have been assembled containing a very high proportion of the data collected. As these are under the control of BODC their long term availability as part of the UK national data archive is assured. The success of the topical data center model for UK Community Research Project data management has been founded upon the strong working relationships forged between the data center and project scientists. These can only be established by frequent personal contact and hence the relatively small size of the UK has been a critical factor. However, projects covering a larger, even international scale could be successfully supported by a network of topical data centers managing online databases which are interconnected by object oriented distributed data management systems over wide area networks.

  12. Models, Tools, and Databases for Land and Waste Management Research

    EPA Pesticide Factsheets

    These publicly available resources can be used for such tasks as simulating biodegradation or remediation of contaminants such as hydrocarbons, measuring sediment accumulation at superfund sites, or assessing toxicity and risk.

  13. Construction and management of ARDS/sepsis registry with REDCap.

    PubMed

    Pang, Xiaoqing; Kozlowski, Natascha; Wu, Sulong; Jiang, Mei; Huang, Yongbo; Mao, Pu; Liu, Xiaoqing; He, Weiqun; Huang, Chaoyi; Li, Yimin; Zhang, Haibo

    2014-09-01

    The study aimed to construct and manage an acute respiratory distress syndrome (ARDS)/sepsis registry that can be used for data warehousing and clinical research. The workflow methodology and software solution of research electronic data capture (REDCap) was used to construct the ARDS/sepsis registry. Clinical data from ARDS and sepsis patients registered to the intensive care unit (ICU) of our hospital formed the registry. These data were converted to the electronic case report form (eCRF) format used in REDCap by trained medical staff. Data validation, quality control, and database management were conducted to ensure data integrity. The clinical data of 67 patients registered to the ICU between June 2013 and December 2013 were analyzed. Of the 67 patients, 45 (67.2%) were classified as sepsis, 14 (20.9%) as ARDS, and eight (11.9%) as sepsis-associated ARDS. The patients' information, comprising demographic characteristics, medical history, clinical interventions, daily assessment, clinical outcome, and follow-up data, was properly managed and safely stored in the ARDS/sepsis registry. Data efficiency was guaranteed by performing data collection and data entry twice weekly and every two weeks, respectively. The ARDS/sepsis database that we constructed and manage with REDCap in the ICU can provide a solid foundation for translational research on the clinical data of interest, and a model for development of other medical registries in the future.
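
    A sketch of the kind of quality-control pass a registry team might run over eCRF records before analysis is given below; the fields and plausibility limits are invented for illustration, not the registry's actual rules.

    ```python
    # Invented quality-control pass over registry records; the fields and
    # plausibility limits are illustrative only.
    RANGES = {"age": (0, 120), "apache_ii": (0, 71), "pao2_fio2": (0, 600)}

    def qc_report(records):
        """Flag missing or out-of-range values, record by record."""
        findings = []
        for i, rec in enumerate(records):
            for fld, (lo, hi) in RANGES.items():
                val = rec.get(fld)
                if val is None:
                    findings.append(f"record {i}: {fld} missing")
                elif not lo <= val <= hi:
                    findings.append(f"record {i}: {fld}={val} out of range")
        return findings

    print(qc_report([{"age": 54, "apache_ii": 18, "pao2_fio2": 142},
                     {"age": 203, "apache_ii": 25}]))
    # -> ['record 1: age=203 out of range', 'record 1: pao2_fio2 missing']
    ```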

  14. The development of a prototype intelligent user interface subsystem for NASA's scientific database systems

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Roelofs, Larry H.; Short, Nicholas M., Jr.

    1987-01-01

    The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components the development of an Intelligent User Interface (IUI). The intent of the latter is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose is to support the large number of potential scientific and engineering users who need space and land related research and technical data but have little or no experience with query languages or understanding of the information content or architecture of the databases involved. This technical memorandum presents a prototype Intelligent User Interface Subsystem (IUIS) using the Crustal Dynamics Project Database as a test bed for the implementation of CRUDDES (the Crustal Dynamics Expert System). The knowledge base has more than 200 rules and represents a single application view and the architectural view. Operational performance using CRUDDES has allowed non-database users to obtain useful information from the database previously accessible only to an expert database user or the database designer.
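
    The rule-based mapping such an interface layer performs (user vocabulary onto database fields) can be illustrated with a toy example; the rules below are invented and far simpler than the 200-plus rule CRUDDES knowledge base.

    ```python
    # Toy rule-based translation from a user's informal terms to database
    # column names; the vocabulary and columns are hypothetical.
    RULES = {
        "station": "site_name",
        "drift": "plate_velocity_mm_yr",
        "position": "geodetic_coordinates",
    }

    def translate(user_terms):
        """Map informal terms onto database column names via the rule table."""
        return [RULES.get(t, f"<unknown:{t}>") for t in user_terms]

    print(translate(["station", "drift"]))  # -> ['site_name', 'plate_velocity_mm_yr']
    ```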

  15. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    NASA Astrophysics Data System (ADS)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a fair measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system under shifting workloads. We give details of our design approach, which uses sophisticated pattern analysis and data mining techniques.
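
    Generating a shifting workload can be sketched as a request stream whose mix of operation types drifts phase by phase, in contrast to a homogeneous stream; the operation names and phase schedule below are invented for illustration, not the benchmark's actual parameters.

    ```python
    # Invented shifting-workload generator: the mix of operation types drifts
    # phase by phase instead of staying homogeneous.
    import random
    from collections import Counter

    PHASES = [  # (number of requests, operation-type weights)
        (1000, {"read": 0.8, "write": 0.15, "scan": 0.05}),  # browsing phase
        (1000, {"read": 0.4, "write": 0.5,  "scan": 0.1}),   # submission deadline
        (1000, {"read": 0.2, "write": 0.1,  "scan": 0.7}),   # reporting batch
    ]

    def workload():
        """Yield one operation type per request, phase by phase."""
        for n_requests, weights in PHASES:
            kinds, probs = zip(*weights.items())
            for _ in range(n_requests):
                yield random.choices(kinds, probs)[0]

    print(Counter(workload()))  # the proportions shift from phase to phase
    ```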

  16. NASA's computer science research program

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  17. The opportunities and obstacles in developing a vascular birthmark database for clinical and research use.

    PubMed

    Sharma, Vishal K; Fraulin, Frankie Og; Harrop, A Robertson; McPhalen, Donald F

    2011-01-01

    Databases are useful tools in clinical settings. The authors review the benefits and challenges associated with the development and implementation of an efficient electronic database for the multidisciplinary Vascular Birthmark Clinic at the Alberta Children's Hospital, Calgary, Alberta. The content and structure of the database were designed using the technical expertise of a data analyst from the Calgary Health Region. Relevant clinical and demographic data fields were included with the goal of documenting ongoing care of individual patients, and facilitating future epidemiological studies of this patient population. After completion of this database, 10 challenges encountered during development were retrospectively identified. Practical solutions for these challenges are presented. The challenges identified during the database development process included: identification of relevant data fields; balancing simplicity and user-friendliness with complexity and comprehensive data storage; database expertise versus clinical expertise; software platform selection; linkage of data from the previous spreadsheet to a new data management system; ethics approval for the development of the database and its utilization for research studies; ensuring privacy and limited access to the database; integration of digital photographs into the database; adoption of the database by support staff in the clinic; and maintaining up-to-date entries in the database. There are several challenges involved in the development of a useful and efficient clinical database. Awareness of these potential obstacles, in advance, may simplify the development of clinical databases by others in various surgical settings.

  18. The Evolution of DEOMI

    DTIC Science & Technology

    2017-09-15

    technology opens the world to information in the computer database to all learners without the use of a human teacher other than the controller or manager ...THE EVOLUTION DE MI DEFENSE EQU AL OPPORTU NITY MANAG EMENT INST ITUTE IDENTITY TITLE: Dr. G · NAME: William Ga ry Mc u1re RACE: White NDER...The Evolution of DEOMI Defense Equal Opportunity Management Institute Research Directorate Written by William Gary McGuire, PhD

  19. Building Connections among Lands, People and Communities: A Case Study of Benefits-Based Management Plan Development for the Gunnison Gorge National Conservation Area

    Treesearch

    Richard C. Knopf; Kathleen L. Andereck; Karen Tucker; Bill Bottomly; Randy J. Virden

    2004-01-01

    Purpose of Study: This paper demonstrates how a Benefits-Based Management paradigm has been useful in guiding management plan development for an internationally significant natural resource – the Gunnison Gorge National Conservation Area (GGNCA) in Colorado. Through a program of survey research, a database on benefits desired by various stakeholder groups was created....

  20. Integrative medicine for managing the symptoms of lupus nephritis

    PubMed Central

    Choi, Tae-Young; Jun, Ji Hee; Lee, Myeong Soo

    2018-01-01

    Background: Integrative medicine is claimed to improve symptoms of lupus nephritis. No systematic reviews have been performed on the application of integrative medicine for lupus nephritis in patients with systemic lupus erythematosus (SLE). Thus, this review will aim to evaluate the current evidence on the efficacy of integrative medicine for the management of lupus nephritis in patients with SLE. Methods and analyses: The following electronic databases will be searched for studies published from their dates of inception to February 2018: Medline, EMBASE and the Cochrane Central Register of Controlled Trials (CENTRAL), as well as 6 Korean medical databases (KoreaMed, the Oriental Medicine Advanced Search Integrated System [OASIS], DBpia, the Korean Medical Database [KMbase], the Research Information Service System [RISS], and the Korean Studies Information Services System [KISS]), and 1 Chinese medical database (the China National Knowledge Infrastructure [CNKI]). Study selection, data extraction, and assessment will be performed independently by 2 researchers. The risk of bias (ROB) will be assessed using the Cochrane ROB tool. Dissemination: This systematic review will be published in a peer-reviewed journal and disseminated both electronically and in print. The review will be updated to inform and guide healthcare practice and policy. Trial registration number: PROSPERO 2018 CRD42018085205. PMID:29595669

  1. Negative Effects of Learning Spreadsheet Management on Learning Database Management

    ERIC Educational Resources Information Center

    Vágner, Anikó; Zsakó, László

    2015-01-01

    A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…

  2. Efficacy of Noninvasive Stellate Ganglion Blockade Performed Using Physical Agent Modalities in Patients with Sympathetic Hyperactivity-Associated Disorders: A Systematic Review and Meta-Analysis.

    PubMed

    Liao, Chun-De; Tsauo, Jau-Yih; Liou, Tsan-Hon; Chen, Hung-Chou; Rau, Chi-Lun

    2016-01-01

    Stellate ganglion blockade (SGB) is mainly used to relieve symptoms of neuropathic pain in conditions such as complex regional pain syndrome and has several potential complications. Noninvasive SGB performed using physical agent modalities (PAMs), such as light irradiation and electrical stimulation, can be clinically used as an alternative to conventional invasive SGB. However, its application protocols vary and its clinical efficacy remains controversial. This study investigated the use of noninvasive SGB for managing neuropathic pain or other disorders associated with sympathetic hyperactivity. We performed a comprehensive search of the following online databases: Medline, PubMed, Excerpta Medica Database, Cochrane Library Database, Ovid MEDLINE, Europe PubMed Central, EBSCOhost Research Databases, CINAHL, ProQuest Research Library, Physiotherapy Evidence Database, WorldWideScience, BIOSIS, and Google Scholar. We identified and included quasi-randomized or randomized controlled trials reporting the efficacy of SGB performed using therapeutic ultrasound, transcutaneous electrical nerve stimulation, light irradiation using low-level laser therapy, or xenon light or linearly polarized near-infrared light irradiation near or over the stellate ganglion region in treating complex regional pain syndrome or disorders requiring sympatholytic management. The included articles were subjected to a meta-analysis and risk of bias assessment. Nine randomized and four quasi-randomized controlled trials were included. Eleven trials had good methodological quality with a Physiotherapy Evidence Database (PEDro) score of ≥6, whereas the remaining two trials had a PEDro score of <6. The meta-analysis results revealed that the efficacy of noninvasive SGB on 100-mm visual analog pain score is higher than that of a placebo or active control (weighted mean difference, -21.59 mm; 95% CI, -34.25, -8.94; p = 0.0008). Noninvasive SGB performed using PAMs effectively relieves pain of various etiologies, making it a valuable addition to the contemporary pain management armamentarium. However, this evidence is limited by the potential risk of bias.

  3. Software Sharing Enables Smarter Content Management

    NASA Technical Reports Server (NTRS)

    2007-01-01

    In 2004, NASA established a technology partnership with Xerox Corporation to develop high-tech knowledge management systems while providing new tools and applications that support the Vision for Space Exploration. In return, NASA provides research and development assistance to Xerox to advance its product line. The first result of the technology partnership was a new system called the NX Knowledge Network (based on Xerox DocuShare CPX). Created specifically for NASA's purposes, this system combines Netmark (practical database content management software created by the Intelligent Systems Division of NASA's Ames Research Center) with complementary software from Xerox's global research centers and with DocuShare. NX Knowledge Network was tested at the NASA Astrobiology Institute, and is widely used for document management at Ames, Langley Research Center, within the Mission Operations Directorate at Johnson Space Center, and at the Jet Propulsion Laboratory, for mission-related tasks.

  4. Disaster management and the critical thinking skills of local emergency managers: correlations with age, gender, education, and years in occupation.

    PubMed

    Peerbolte, Stacy L; Collins, Matthew Lloyd

    2013-01-01

    Emergency managers must be able to think critically in order to identify and anticipate situations, solve problems, make judgements and decisions effectively and efficiently, and assume and manage risk. Heretofore, a critical thinking skills assessment of local emergency managers had yet to be conducted that tested for correlations among age, gender, education, and years in occupation. An exploratory descriptive research design, using the Watson-Glaser Critical Thinking Appraisal-Short Form (WGCTA-S), was employed to determine the extent to which a sample of 54 local emergency managers demonstrated the critical thinking skills associated with the ability to assume and manage risk as compared to the critical thinking scores of a group of 4,790 peer-level managers drawn from an archival WGCTA-S database. This exploratory design suggests that the local emergency managers, surveyed in this study, had lower WGCTA-S critical thinking scores than their equivalents in the archival database with the exception of those in the high education and high experience group. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  5. Ethical management in the constitution of a European database for leukodystrophies rare diseases.

    PubMed

    Duchange, Nathalie; Darquy, Sylviane; d'Audiffret, Diane; Callies, Ingrid; Lapointe, Anne-Sophie; Loeve, Boris; Boespflug-Tanguy, Odile; Moutel, Grégoire

    2014-09-01

    The EU LeukoTreat program aims to connect, enlarge and improve existing national databases for leukodystrophies (LDs) and other genetic diseases affecting the white matter of the brain. Ethical issues have been placed high on the agenda by pairing the participating LD expert research teams with experts in medical ethics and with LD patient families and associations. The overarching goal is to apply core ethics principles to specific project needs and ensure patient rights and protection in research addressing the context of these rare diseases. This paper looks at how ethical issues were identified and handled at the project management level through the setting up of an ethics committee. Drawing on work performed as a co-construction between health professionals, ethics experts, and patient representatives, we set out the major ethical issues identified. The committee acts as the forum for tackling specific issues tied to data sharing and patient participation: the thin line between care and research, the need for a charter establishing the commitments binding health professionals, and the information items to be delivered. Ongoing feedback on the database, including delivering global results in a broad-audience format, emerged as a key recommendation. Information should be available to all patients in the partner countries developing the database and should be scaled to different patient profiles. This work led to a number of recommendations for ensuring transparency and optimizing the partnership between scientists and patients. Copyright © 2014 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  6. Coordination and standardization of federal sedimentation activities

    USGS Publications Warehouse

    Glysson, G. Douglas; Gray, John R.

    1997-01-01

    …precipitation information critical to water resources management. Memorandum M-92-01 covers primarily freshwater bodies and includes activities such as "development and distribution of consensus standards, field-data collection and laboratory analytical methods, data processing and interpretation, data-base management, quality control and quality assurance, and water-resources appraisals, assessments, and investigations." Research activities are not included.

  7. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and recording future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the FileMaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well, and the data can be exported to more powerful relational databases. This summary describes the contents, capabilities, and use of the LDEF databases, and explains how to get copies of the databases for your own research.

  8. Development of an electronic database for Acute Pain Service outcomes

    PubMed Central

    Love, Brandy L; Jensen, Louise A; Schopflocher, Donald; Tsui, Ban CH

    2012-01-01

    BACKGROUND: Quality assurance is increasingly important in the current health care climate. An electronic database can be used for tracking patient information and as a research tool to provide quality assurance for patient care. OBJECTIVE: An electronic database was developed for the Acute Pain Service, University of Alberta Hospital (Edmonton, Alberta) to record patient characteristics, identify at-risk populations, compare treatment efficacies and guide practice decisions. METHOD: Steps in the database development involved identifying the goals for use, the relevant variables to include, and a plan for data collection, entry and analysis. Protocols were also created for data cleaning and quality control. The database was evaluated with a pilot test using existing data to assess the data collection burden, accuracy and functionality of the database. RESULTS: A literature review resulted in an evidence-based list of demographic, clinical and pain management outcome variables to include. Time to assess patients and collect the data was 20 min to 30 min per patient. Limitations were primarily software related; initial data collection completion was only 65%, and accuracy of data entry was 96%. CONCLUSIONS: The electronic database was found to be relevant and functional for the identified goals of data storage and research. PMID:22518364
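
    A minimal sketch of the kind of schema such a service database might use, in Python with the standard-library sqlite3 module; the table and column names are hypothetical, loosely inferred from the variable groups named in the abstract, and do not reproduce the actual system.

      # Minimal sqlite3 sketch of an Acute Pain Service outcomes schema.
      # Table and column names are hypothetical, inferred from the variable
      # groups named in the abstract (demographic, clinical, pain outcomes).
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE patient (
          patient_id INTEGER PRIMARY KEY,
          age        INTEGER,
          sex        TEXT,
          asa_class  INTEGER                -- clinical risk characteristic
      );
      CREATE TABLE pain_assessment (
          assessment_id INTEGER PRIMARY KEY,
          patient_id    INTEGER REFERENCES patient(patient_id),
          assessed_at   TEXT,               -- ISO-8601 timestamp
          vas_score     REAL CHECK (vas_score BETWEEN 0 AND 10),
          analgesic     TEXT
      );
      """)

      # A data-cleaning check of the kind the pilot test relied on:
      # flag assessments whose entry is incomplete.
      incomplete = conn.execute(
          "SELECT assessment_id FROM pain_assessment WHERE vas_score IS NULL"
      ).fetchall()
      print(f"{len(incomplete)} assessments missing a pain score")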

  9. Fine-grained policy control in U.S. Army Research Laboratory (ARL) multimodal signatures database

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly; Grueneberg, Keith; Wood, David; Calo, Seraphin

    2014-06-01

    The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) consists of a number of colocated relational databases representing a collection of data from various sensors. Role-based access to this data is granted to external organizations such as DoD contractors and other government agencies through a client Web portal. In the current MMSDB system, access control is only at the database and firewall level. In order to offer finer grained security, changes to existing user profile schemas and authentication mechanisms are usually needed. In this paper, we describe a software middleware architecture and implementation that allows fine-grained access control to the MMSDB at a dataset, table, and row level. Result sets from MMSDB queries issued in the client portal are filtered with the use of a policy enforcement proxy, with minimal changes to the existing client software and database. Before resulting data is returned to the client, policies are evaluated to determine if the user or role is authorized to access the data. Policies can be authored to filter data at the row, table or column level of a result set. The system uses various technologies developed in the International Technology Alliance in Network and Information Science (ITA) for policy-controlled information sharing and dissemination. Use of the Policy Management Library provides a mechanism for the management and evaluation of policies to support finer grained access to the data in the MMSDB system. The GaianDB is a policy-enabled, federated database that acts as a proxy between the client application and the MMSDB system.
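
    A toy Python sketch of the row- and column-level filtering idea described above; the policy representation and all names here are hypothetical simplifications, since the real system uses the ITA Policy Management Library and GaianDB rather than anything like this evaluator.

      # Sketch of a policy enforcement filter applied to a result set
      # before it is returned to the client. Policy format is invented.
      from typing import Callable

      Row = dict  # one result-set row as {column: value}

      def make_policy(allowed_columns: set,
                      row_predicate: Callable[[Row], bool]):
          """Return a filter applying row- then column-level policy."""
          def apply(rows: list) -> list:
              visible = [r for r in rows if row_predicate(r)]        # row level
              return [{c: v for c, v in r.items() if c in allowed_columns}
                      for r in visible]                              # column level
          return apply

      # Example: a contractor role may see unclassified rows, but no sensor IDs.
      contractor_policy = make_policy(
          allowed_columns={"signature_id", "modality", "timestamp"},
          row_predicate=lambda r: r["classification"] == "UNCLASSIFIED",
      )

      result_set = [
          {"signature_id": 1, "modality": "acoustic", "timestamp": "2014-06-01",
           "sensor_id": "S-17", "classification": "UNCLASSIFIED"},
          {"signature_id": 2, "modality": "seismic", "timestamp": "2014-06-02",
           "sensor_id": "S-09", "classification": "RESTRICTED"},
      ]
      print(contractor_policy(result_set))  # only row 1, without sensor_id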

  10. Clinical records anonymisation and text extraction (CRATE): an open-source software system.

    PubMed

    Cardinal, Rudolf N

    2017-04-26

    Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
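
    Point (2), mapping patient identifiers to research pseudonyms with public cryptographic primitives, can be illustrated with a keyed hash. The sketch below uses HMAC-SHA-256 from the Python standard library; CRATE's actual construction and key handling are not reproduced here, so treat this purely as an illustration of the approach.

      # Keyed pseudonym generation: same input always yields the same
      # research identifier, and the mapping cannot be reversed without
      # the secret key. Key and ID format below are placeholders.
      import hmac
      import hashlib

      SECRET_KEY = b"replace-with-a-long-random-key"  # held by the trusted party

      def pseudonym(patient_identifier: str) -> str:
          """Map a patient identifier to a stable research pseudonym."""
          digest = hmac.new(SECRET_KEY, patient_identifier.encode("utf-8"),
                            hashlib.sha256).hexdigest()
          return "RID_" + digest[:16]  # truncated for readability

      print(pseudonym("NHS-123-456-7890"))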

  11. [A brief review of research on chronic disease management based on collaborative care model in China].

    PubMed

    Li, Huayan; Fuller, Jeffrey; Sun, Mei; Wang, Yong; Xu, Shuang; Feng, Hui

    2014-11-01

    To evaluate the situation of chronic disease management in China, and to seek methods for improving the collaborative management of chronic diseases in the community. We searched the literature published between January 2008 and November 2013 in databases such as the China Academic Journal Full-Text Database and PubMed. Screening followed strict inclusion and exclusion criteria, and the selected studies were summarized against a collaborative care model. A rough screen yielded 698 articles, of which 33 were finally selected. All studies involved support for patient self-management, but only 9 mentioned communication within the team, and 11 showed a clear division of labor within the team. Chronic disease management in Chinese communities thus displays some weaknesses: it needs general service teams with clear roles and responsibilities, so as to improve team members' service ability and provide patients with various forms of self-management services.

  12. Toward an Open-Access Global Database for Mapping, Control, and Surveillance of Neglected Tropical Diseases

    PubMed Central

    Hürlimann, Eveline; Schur, Nadine; Boutsika, Konstantina; Stensgaard, Anna-Sofie; Laserna de Himpsl, Maiti; Ziegelbauer, Kathrin; Laizer, Nassor; Camenzind, Lukas; Di Pasquale, Aurelio; Ekpo, Uwem F.; Simoonga, Christopher; Mushinge, Gabriel; Saarnak, Christopher F. L.; Utzinger, Jürg; Kristensen, Thomas K.; Vounatsou, Penelope

    2011-01-01

    Background After many years of general neglect, interest has grown and efforts have come under way for the mapping, control, surveillance, and eventual elimination of neglected tropical diseases (NTDs). Disease risk estimates are a key feature for targeting control interventions, and serve as a benchmark for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs providing open access to the available survey data, one that is constantly updated and can be utilized by researchers, disease control managers, and other relevant stakeholders. We describe the steps taken toward the development of such a database that can be employed for spatial disease risk modeling and control of NTDs. Methodology With an emphasis on schistosomiasis in Africa, we systematically searched the literature (peer-reviewed journals and ‘grey literature’), and contacted Ministries of Health and research institutions in schistosomiasis-endemic countries for location-specific prevalence data and survey details (e.g., study population, year of survey and diagnostic techniques). The data were extracted, georeferenced, and stored in a MySQL database with a web interface allowing free database access and data management. Principal Findings At the beginning of 2011, our database contained more than 12,000 georeferenced schistosomiasis survey locations from 35 African countries, available at http://www.gntd.org. Currently, the database is being expanded into a global repository including a host of other NTDs, e.g. soil-transmitted helminthiasis and leishmaniasis. Conclusions An open-access, spatially explicit NTD database offers unique opportunities for disease risk modeling, targeting control interventions, disease monitoring, and surveillance. Moreover, it allows for detailed geostatistical analyses of disease distribution in space and time. With an initial focus on schistosomiasis in Africa, we demonstrate proof-of-concept that the establishment and running of a global NTD database is feasible, and it should be expanded without delay. PMID:22180793
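
    The building block of such a database is a georeferenced survey table that can be filtered spatially. A minimal sketch follows, using Python's sqlite3 so it runs self-contained, even though the abstract says the real system is a MySQL database behind a web interface; the schema and figures here are invented.

      # Georeferenced prevalence queries over a toy survey table.
      # Schema, sites, and counts are hypothetical stand-ins.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE survey (
          location TEXT, country TEXT, lat REAL, lon REAL,
          year INTEGER, examined INTEGER, positive INTEGER)""")
      db.executemany("INSERT INTO survey VALUES (?,?,?,?,?,?,?)", [
          ("Site A", "Kenya",    -1.29, 36.82, 2009, 220, 61),
          ("Site B", "Tanzania", -6.16, 35.75, 2010, 180, 12),
      ])

      # Prevalence per location inside a bounding box: the building block
      # for the spatial risk modeling described in the abstract.
      rows = db.execute("""
          SELECT location, year, 100.0 * positive / examined AS prevalence_pct
          FROM survey
          WHERE lat BETWEEN -10 AND 0 AND lon BETWEEN 30 AND 40
          ORDER BY prevalence_pct DESC""").fetchall()
      for loc, year, prev in rows:
          print(f"{loc} ({year}): {prev:.1f}%")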

  13. A Multi-Purpose Data Dissemination Infrastructure for the Marine-Earth Observations

    NASA Astrophysics Data System (ADS)

    Hanafusa, Y.; Saito, H.; Kayo, M.; Suzuki, H.

    2015-12-01

    To open up data from a variety of observations, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has developed a multi-purpose data dissemination infrastructure. Although many observations have been made in the earth sciences, not all of the data are fully open. We believe data centers can provide researchers with a universal data dissemination service that handles various kinds of observation data with little effort. For this purpose the JAMSTEC Data Management Office has developed the "Information Catalog Infrastructure System (Catalog System)". This is a catalog management system that can create, renew and delete catalogs (i.e., databases) and has the following features. - The Catalog System does not depend on data types or on the granularity of data records. - By registering a new metadata schema to the system, a new database can be created on the same system without system modification. - As web pages are defined by cascading style sheets, databases can differ in look and feel, and in operability. - The Catalog System provides databases with basic search tools: search by text, selection from a category tree, and selection from a timeline chart. - For domestic users it creates Japanese and English pages at the same time, and has a dictionary to control terminology and proper nouns. As of August 2015 JAMSTEC operates 7 databases on the Catalog System. We expect to transfer existing databases to this system, or to create new databases on it. In comparison with a dedicated database developed for a specific dataset, the Catalog System is suitable for the dissemination of small datasets at minimum cost. Metadata held in the catalogs may be transferred to other metadata schemas for exchange with global databases or portals. Examples: JAMSTEC Data Catalog: http://www.godac.jamstec.go.jp/catalog/data_catalog/metadataList?lang=en ; JAMSTEC Document Catalog: http://www.godac.jamstec.go.jp/catalog/doc_catalog/metadataList?lang=en&tab=category ; Research Information and Data Access Site of TEAMS: http://www.i-teams.jp/catalog/rias/metadataList?lang=en&tab=list
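
    The core idea, that registering a metadata schema creates a new catalog on the same system without code changes, can be sketched in a few lines of Python; the function names and schema fields below are hypothetical simplifications of the Catalog System.

      # Schema-driven catalogs: registering a schema creates a new catalog,
      # and generic tools (here, text search) work on every catalog alike.
      catalogs = {}

      def register_schema(name, fields):
          """Registering a metadata schema creates a new catalog."""
          catalogs[name] = {"fields": fields, "records": []}

      def add_record(catalog, **metadata):
          schema = catalogs[catalog]
          unknown = set(metadata) - set(schema["fields"])
          if unknown:
              raise ValueError(f"fields not in schema: {unknown}")
          schema["records"].append(metadata)

      def search(catalog, text):
          """Basic text search, one of the generic tools the system provides."""
          return [r for r in catalogs[catalog]["records"]
                  if any(text.lower() in str(v).lower() for v in r.values())]

      register_schema("data_catalog", ["title", "platform", "start_date"])
      add_record("data_catalog", title="Kuroshio CTD casts", platform="R/V Mirai")
      print(search("data_catalog", "kuroshio"))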

  14. The Government Finance Database: A Common Resource for Quantitative Research in Public Financial Analysis

    PubMed Central

    Pierson, Kawika; Hand, Michael L.; Thompson, Fred

    2015-01-01

    Quantitative public financial management research focused on local governments is limited by the absence of a common database for empirical analysis. While the U.S. Census Bureau distributes government finance data that some scholars have utilized, the arduous process of collecting, interpreting, and organizing the data has made its adoption prohibitively costly and inconsistent. In this article we offer a single, coherent resource that contains all of the government financial data from 1967 to 2012, uses easy-to-understand natural-language variable names, and will be extended as new data become available. PMID:26107821

  15. The Government Finance Database: A Common Resource for Quantitative Research in Public Financial Analysis.

    PubMed

    Pierson, Kawika; Hand, Michael L; Thompson, Fred

    2015-01-01

    Quantitative public financial management research focused on local governments is limited by the absence of a common database for empirical analysis. While the U.S. Census Bureau distributes government finance data that some scholars have utilized, the arduous process of collecting, interpreting, and organizing the data has made its adoption prohibitively costly and inconsistent. In this article we offer a single, coherent resource that contains all of the government financial data from 1967 to 2012, uses easy-to-understand natural-language variable names, and will be extended as new data become available.

  16. Data Mining Research with the LSST

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.; Strauss, M. A.; Tyson, J. A.

    2007-12-01

    The LSST catalog database will exceed 10 petabytes, comprising several hundred attributes for 5 billion galaxies, 10 billion stars, and over 1 billion variable sources (optical variables, transients, or moving objects), extracted from over 20,000 square degrees of deep imaging in 5 passbands with thorough time-domain coverage: 1000 visits over the 10-year LSST survey lifetime. The opportunities are enormous for novel scientific discoveries within this rich time-domain ultra-deep multi-band survey database. Data Mining, Machine Learning, and Knowledge Discovery research opportunities with the LSST are now under study, with the potential for new collaborations to develop and contribute to these investigations. We will describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. We also give some illustrative examples of current scientific data mining research in astronomy, and point out where new research is needed. In particular, the data mining research community will need to address several issues in the coming years as we prepare for the LSST data deluge. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; algorithms for visual exploration of the data; indexing of multi-attribute, multi-dimensional astronomical databases (beyond RA-Dec spatial indexing) for rapid querying of petabyte databases; and more. Finally, we will identify opportunities for synergistic collaboration between the data mining research group and the LSST Data Management and Science Collaboration teams.
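
    As one concrete illustration of the indexing item in that agenda, a k-d tree over several catalog attributes supports fast neighborhood queries beyond plain RA-Dec indexing. The Python sketch below uses scipy's cKDTree on random stand-in data; a production petascale system would need far more elaborate structures, and mixed-unit attributes would first have to be rescaled.

      # Multi-attribute indexing sketch: a k-d tree over (RA, Dec, mag, color).
      # Data are random stand-ins; in practice attributes with different units
      # must be normalized before being indexed together.
      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(42)
      n = 100_000
      catalog = np.column_stack([
          rng.uniform(0, 360, n),     # RA (deg)
          rng.uniform(-90, 90, n),    # Dec (deg)
          rng.uniform(14, 27, n),     # r-band magnitude
          rng.uniform(-0.5, 2.0, n),  # g-r color
      ])

      tree = cKDTree(catalog)
      # All sources near a probe point in attribute space: a building block
      # for outlier identification and anomaly detection.
      probe = np.array([180.0, 0.0, 20.0, 0.3])
      neighbors = tree.query_ball_point(probe, r=1.0)
      print(f"{len(neighbors)} sources near probe in 4-D attribute space")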

  17. Prevention of MSD within OHSMS/IMS: a systematic review of risk assessment strategies.

    PubMed

    Yazdani, Amin; Wells, Richard

    2012-01-01

    The purpose of this systematic review was to identify and summarize the research evidence on the prevention of Musculoskeletal Disorders (MSD) within Occupational Health and Safety Management Systems (OHSMS) and Integrated Management Systems (IMS). Databases in business, management, engineering and health and safety were systematically searched and relevant publications were synthesized. The number of papers that could address the research questions was small. However, the review revealed that many of the techniques to address MSD hazards require substantial background knowledge and training. This may limit employees' involvement in the technical aspects of the risk assessment process. Also, these techniques did not usually fit with the techniques used by companies to address other risk factors within their management systems. This could result in MSD prevention becoming a separate issue that cannot be managed with company-wide tools. In addition, this review suggested that there is a research gap concerning MSD prevention within companies' management systems.

  18. CAPR - Theresa Guerin | Center for Cancer Research

    Cancer.gov

    Theresa Guerin oversees animal colony management and provides support in breeding experimental animal cohorts, preparing documentation for CAPR preclinical studies, and assisting in the design of drug treatment plans. She also maintains multiple database resources.

  19. 32 CFR 240.4 - Policy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...

  20. 32 CFR 240.4 - Policy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...

  1. 32 CFR 240.4 - Policy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...

  2. pLog enterprise-enterprise GIS-based geotechnical data management system enhancements.

    DOT National Transportation Integrated Search

    2015-12-01

    Recent efforts by the Louisiana Department of Transportation and Development (DOTD) and the Louisiana Transportation Research Center (LTRC) have developed a Geotechnical Information Database, with a Geographic Information System (GIS) interface....

  3. Development of an open source laboratory information management system for 2-D gel electrophoresis-based proteomics workflow

    PubMed Central

    Morisawa, Hiraku; Hirota, Mikako; Toda, Tosifusa

    2006-01-01

    Background In the post-genome era, most research scientists working in the field of proteomics are confronted with difficulties in management of large volumes of data, which they are required to keep in formats suitable for subsequent data mining. Therefore, a well-developed open source laboratory information management system (LIMS) should be available for their proteomics research studies. Results We developed an open source LIMS appropriately customized for 2-D gel electrophoresis-based proteomics workflow. The main features of its design are compactness, flexibility and connectivity to public databases. It supports the handling of data imported from mass spectrometry software and 2-D gel image analysis software. The LIMS is equipped with the same input interface for 2-D gel information as a clickable map on public 2DPAGE databases. The LIMS allows researchers to follow their own experimental procedures by reviewing the illustrations of 2-D gel maps and well layouts on the digestion plates and MS sample plates. Conclusion Our new open source LIMS is now available as a basic model for proteome informatics, and is accessible for further improvement. We hope that many research scientists working in the field of proteomics will evaluate our LIMS and suggest ways in which it can be improved. PMID:17018156

  4. 76 FR 59170 - Hartford Financial Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, CT...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-23

    ... Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, CT; Notice of Negative... Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, Connecticut (The Hartford, Corporate/EIT/CTO Database Management Division). The negative determination was issued on August 19, 2011...

  5. BRISK--research-oriented storage kit for biology-related data.

    PubMed

    Tan, Alan; Tripp, Ben; Daley, Denise

    2011-09-01

    In genetic science, large-scale international research collaborations represent a growing trend. These collaborations have demanding and challenging database, storage, retrieval and communication needs. These studies typically involve demographic and clinical data, in addition to the results from numerous genomic ('omics') studies such as gene expression, eQTL, genome-wide association and methylation studies, all of which present numerous challenges; hence the need for data integration platforms that can handle these complex data structures. Inefficient methods of data transfer and access control still plague research collaboration. As science becomes more and more collaborative in nature, the need for a system that adequately manages data sharing becomes paramount. The Biology-Related Information Storage Kit (BRISK) is a package of several web-based data management tools that provide a cohesive data integration and management platform. It was specifically designed to provide the architecture necessary to promote collaboration and expedite data sharing between scientists. The software, documentation, Java source code and a demo are available at http://genapha.icapture.ubc.ca/brisk/index.jsp. BRISK was developed in Java, and tested on an Apache Tomcat 6 server with a MySQL database. Contact: denise.daley@hli.ubc.ca.

  6. Strategies to promote adherence to treatment by pulmonary tuberculosis patients: a systematic review.

    PubMed

    Suwankeeree, Wongduan; Picheansathian, Wilawan

    2014-03-01

    The objective of this study is to review and synthesise the best available research evidence that investigates the effectiveness of strategies to promote adherence to treatment by patients with newly diagnosed pulmonary tuberculosis (TB). The search sought to find published and unpublished studies. The search covered articles published from 1990 to 2010 in English and Thai. The database search included the Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Cochrane Library, PubMed, Science Direct, Current Content Connect, Thai Nursing Research Database, Thai thesis database, Digital Library of Thailand Research Fund, Research of National Research Council of Thailand and Database of Office of Higher Education Commission. Studies were additionally identified from reference lists of all studies retrieved. Eligible studies were randomised controlled trials, as well as quasiexperimental studies, that explored different strategies to promote adherence to TB treatment among patients with newly diagnosed pulmonary TB. Two of the investigators independently assessed the studies and then extracted and summarised data from eligible studies. Extracted data were entered into Review Manager software and analysed. A total of 7972 newly diagnosed pulmonary TB patients participated in 10 randomised controlled trials and eight quasiexperimental studies. The studies reported on the effectiveness of a number of specific interventions to improve adherence to TB treatment among newly diagnosed pulmonary TB patients. These interventions included directly observed treatment (DOT) coupled with alternative patient supervision options, case management with DOT, short-course directly observed treatment, the intensive triad-model programme and an intervention package aimed at improved counselling and communication, decentralisation of treatment, patient choice of a DOT supporter and reinforcement of supervision activities. This review found evidence of a beneficial effect of DOT on medication adherence among TB patients in terms of cure rate and success rate. However, no beneficial effect of DOT was found on treatment completion rate. In addition, combined interventions to improve adherence to tuberculosis treatment included case management with the directly observed treatment short-course programme, the intensive triad-model programme and the intervention package. These interventions should be implemented by healthcare providers and tailored to local contexts and circumstances, wherever appropriate.

  7. BioWarehouse: a bioinformatics database warehouse toolkit

    PubMed Central

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D

    2006-01-01

    Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics. PMID:16556315

  8. BioWarehouse: a bioinformatics database warehouse toolkit.

    PubMed

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
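
    The loader concept, parsing heterogeneous source records into one warehouse schema with light semantic normalization, can be sketched as follows; the record fields and helper names are hypothetical, and the real loaders are C and Java programs targeting MySQL or Oracle rather than Python.

      # Two toy loaders normalize differently shaped source records into one
      # warehouse schema, so a single query spans both databases.
      def load_enzyme_record(rec):
          """Normalize an ENZYME-style flat record (fields invented)."""
          return {"type": "protein", "name": rec["DE"].rstrip("."),
                  "ec_number": rec["ID"], "source_db": "ENZYME"}

      def load_kegg_record(rec):
          """Normalize a KEGG-style record into the same warehouse shape."""
          return {"type": "protein", "name": rec["NAME"].split(";")[0],
                  "ec_number": rec.get("EC", ""), "source_db": "KEGG"}

      warehouse = [
          load_enzyme_record({"ID": "1.1.1.1", "DE": "Alcohol dehydrogenase."}),
          load_kegg_record({"NAME": "E1.1.1.1; adh", "EC": "1.1.1.1"}),
      ]

      # A multi-database query is now a plain scan (or SQL) over one schema:
      matches = [w for w in warehouse if w["ec_number"] == "1.1.1.1"]
      print(f"{len(matches)} records for EC 1.1.1.1 from "
            f"{[m['source_db'] for m in matches]}")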

  9. Management and Maintenance of a Very Large Small Mammal Database in a 25 Year Live-Trapping Study in the Chilean Semiarid Zone

    EPA Science Inventory

    Long-term ecological research programs represent tremendous investments in human labor and capital. The amount of data generated is staggering and potentially beyond the capacity of most research teams to fully explore. Since the funding of these programs comes predominately fr...

  10. A data and information system for processing, archival, and distribution of data for global change research

    NASA Technical Reports Server (NTRS)

    Graves, Sara J.

    1994-01-01

    Work on this project was focused on information management techniques for Marshall Space Flight Center's EOSDIS Version 0 Distributed Active Archive Center (DAAC). The centerpiece of this effort has been participation in EOSDIS catalog interoperability research, the result of which is a distributed Information Management System (IMS) allowing the user to query the inventories of all the DAACs from a single user interface. UAH has provided the MSFC DAAC database server for the distributed IMS, and has contributed to the definition and development of the browse image display capabilities in the system's user interface. Another important area of research has been the generation of value-based metadata through data mining. In addition, information management applications for local inventory and archive management, and for tracking data orders, were provided.
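
    The distributed IMS idea, one user query fanned out to every DAAC inventory with the results merged behind a single interface, can be sketched as below; the server class, inventory contents, and dataset names are all hypothetical.

      # Federated inventory query: one request, many inventory servers,
      # merged results. All names and granules are invented examples.
      class InventoryServer:
          def __init__(self, name, granules):
              self.name, self.granules = name, granules

          def query(self, dataset):
              return [dict(g, daac=self.name)
                      for g in self.granules if g["dataset"] == dataset]

      daacs = [
          InventoryServer("MSFC", [{"dataset": "SSM/I", "granule": "1992-001"}]),
          InventoryServer("GSFC", [{"dataset": "TOMS",  "granule": "1992-007"},
                                   {"dataset": "SSM/I", "granule": "1992-002"}]),
      ]

      def federated_query(dataset):
          """Single query against all inventories, as in the distributed IMS."""
          return [hit for server in daacs for hit in server.query(dataset)]

      print(federated_query("SSM/I"))  # hits from both DAACs via one interface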

  11. Integrated Geo Hazard Management System in Cloud Computing Technology

    NASA Astrophysics Data System (ADS)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.

    2016-11-01

    Geo-hazards can reduce environmental health and cause huge economic losses, especially in mountainous areas. To mitigate geo-hazards effectively, cloud computing technology is introduced for managing the geo-hazard database. Cloud computing technology and its services are capable of providing stakeholders with geo-hazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System comprises the network management and operation needed to monitor geo-hazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system provides an easily managed, flexible measuring system whose data management operates autonomously and can be controlled remotely by commands to collect data, using "cloud" computing. This paper aims to document the above relationship by identifying the special features and needs associated with effective geo-hazard database management using a "cloud" system. The system will later be used as part of the development activities, helping to minimize the frequency of geo-hazards and the risk in the research area.

  12. National information network and database system of hazardous waste management in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma Hongchang

    1996-12-31

    Industries in China generate large volumes of hazardous waste, which makes it essential for the nation to pay more attention to hazardous waste management. National laws and regulations, waste surveys, and manifest tracking and permission systems have been initiated. Some centralized hazardous waste disposal facilities are under construction. China's National Environmental Protection Agency (NEPA) has also obtained valuable information on hazardous waste management from developed countries. To effectively share this information with local environmental protection bureaus, NEPA developed a national information network and database system for hazardous waste management. This information network will have such functions as information collection, inquiry, and connection. The long-term objective is to establish and develop a national and local hazardous waste management information network. This network will significantly help decision makers and researchers because it will be easy to obtain information (e.g., experiences of developed countries in hazardous waste management) to enhance hazardous waste management in China. The information network consists of five parts: technology consulting, import-export management, regulation inquiry, waste survey, and literature inquiry.

  13. The intelligent user interface for NASA's advanced information management systems

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas, Jr.; Rolofs, Larry H.; Wattawa, Scott L.

    1987-01-01

    NASA has initiated the Intelligent Data Management Project to design and develop advanced information management systems. The project's primary goal is to formulate, design and develop advanced information systems that are capable of supporting the agency's future space research and operational information management needs. The first effort of the project was the development of a prototype Intelligent User Interface to an operational scientific database, using expert systems and natural language processing technologies. An overview of Intelligent User Interface formulation and development is given.

  14. Data dictionary and discussion for the midnite mine GIS database. Report of investigations/1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peters, D.C.; Smith, M.A.; Ferderer, D.A.

    1996-01-18

    A geographic information system (GIS) database has been developed by the U.S. Bureau of Mines (USBM) for the Midnite Mine and surroundings in northeastern Washington State (Stevens County) on the Spokane Indian Reservation. The GIS database was compiled to serve as a repository and source of historical and research information on the mine site. The database also will be used by the Bureau of Land Management and the Bureau of Indian Affairs (as well as others) for environmental assessment and reclamation planning for future remediation and reclamation of the site. This report describes the data in the GIS database and their characteristics. The report also discusses known backgrounds on the data sets and any special considerations encountered by the USBM in developing the database.

  15. Solutions in radiology services management: a literature review*

    PubMed Central

    Pereira, Aline Garcia; Vergara, Lizandra Garcia Lupi; Merino, Eugenio Andrés Díaz; Wagner, Adriano

    2015-01-01

    Objective The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. Materials and Methods Basic, qualitative, exploratory literature review in the Scopus and SciELO databases, utilizing Mendeley and Adobe Illustrator CC software. Results In the databases, 565 papers were identified, 120 of them available as free PDFs. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides such services with interesting solutions such as benchmarking, CRM, the Lean approach, Service Blueprinting, and continuing education, among others. Conclusion Literature review is an important tool for identifying problems and their respective solutions. However, considering the small number of studies approaching the management of radiology services, this is a promising field for the development of deeper studies. PMID:26543281

  16. TWRS technical baseline database manager definition document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acree, C.D.

    1997-08-13

    This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager.

  17. Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database

    NASA Technical Reports Server (NTRS)

    Mizukami, Masahi

    2004-01-01

    An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.

  18. NeuPAT: an intranet database supporting translational research in neuroblastic tumors.

    PubMed

    Villamón, Eva; Piqueras, Marta; Meseguer, Javier; Blanquer, Ignacio; Berbegall, Ana P; Tadeo, Irene; Hernández, Vicente; Navarro, Samuel; Noguera, Rosa

    2013-03-01

    Translational research in oncology is directed mainly towards establishing a better risk stratification and searching for appropriate therapeutic targets. This research generates a tremendous amount of complex clinical and biological data needing speedy and effective management. The authors describe the design, implementation and early experiences of a computer-aided system for the integration and management of data for neuroblastoma patients. NeuPAT facilitates clinical and translational research, minimizes the workload in consolidating the information, reduces errors and increases correlation of data through extensive coding. This design can also be applied to other tumor types. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Bioinformatics for Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Kathy A.

    2006-01-01

    For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.

  20. Geothopica and the interactive analysis and visualization of the updated Italian National Geothermal Database

    NASA Astrophysics Data System (ADS)

    Trumpy, Eugenio; Manzella, Adele

    2017-02-01

    The Italian National Geothermal Database (BDNG) is the largest collection of Italian geothermal data and was set up in the 1980s. It has since been updated in terms of both content and management tools: information on deep wells and thermal springs (with temperatures > 30 °C) is currently organized and stored in a PostgreSQL relational database management system, which guarantees high performance, data security and easy access through different client applications. The BDNG is the core of the Geothopica web site, whose webGIS tool allows different types of user to access geothermal data, to visualize multiple types of datasets, and to perform integrated analyses. The webGIS tool has recently been improved by two specially designed visualization tools that display data on well lithology and underground temperatures. This paper describes the contents of the database, its software and data updates, and the webGIS tool, including the new tools for lithology and temperature visualization. The geoinformation organized in the database and accessible through Geothopica is of use not only for geothermal purposes, but also for any kind of georesource or CO2 storage project requiring the organization of, and access to, deep underground data. Geothopica also supports project developers, researchers, and decision makers in the assessment, management and sustainable deployment of georesources.
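
    The kind of query underlying the webGIS tools can be illustrated as follows; the schema is a hypothetical simplification, and sqlite3 stands in for the PostgreSQL system the BDNG actually runs on so that the sketch stays self-contained.

      # Toy thermal-feature table and the 30 degC spring cut-off mentioned
      # in the abstract. All rows and names are invented examples.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("""CREATE TABLE thermal_feature (
          name TEXT, kind TEXT,          -- 'well' or 'spring'
          depth_m REAL, temperature_c REAL)""")
      db.executemany("INSERT INTO thermal_feature VALUES (?,?,?,?)", [
          ("Well-1",   "well",   2500, 240.0),
          ("Spring-A", "spring",    0,  41.5),
          ("Spring-B", "spring",    0,  22.0),   # below the 30 degC cut-off
      ])

      # Only springs above 30 degC belong in the database per the abstract.
      rows = db.execute("""SELECT name, temperature_c FROM thermal_feature
                           WHERE kind = 'spring' AND temperature_c > 30""").fetchall()
      print(rows)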

  1. A centralized informatics infrastructure for the National Institute on Drug Abuse Clinical Trials Network.

    PubMed

    Pan, Jeng-Jong; Nahm, Meredith; Wakim, Paul; Cushing, Carol; Poole, Lori; Tai, Betty; Pieper, Carl F

    2009-02-01

    Clinical trial networks (CTNs) were created to provide a sustaining infrastructure for the conduct of multisite clinical trials. As such, they must withstand changes in membership. Centralization of infrastructure (knowledge management, portfolio management, information management, process automation, and work policies and procedures) in clinical research networks facilitates consistency and ultimately research. In 2005, the National Institute on Drug Abuse (NIDA) CTN transitioned from a distributed data management model to a centralized informatics infrastructure to support the network's trial activities and administration. We describe the centralized informatics infrastructure and discuss the challenges we faced, to inform others considering such an endeavor. During the migration of the network from a decentralized to a centralized data center model, descriptive data were captured and are presented here to assess the impact of centralization. We present the framework for the informatics infrastructure and evaluative metrics. The network has decreased the time from last patient-last visit to database lock from an average of 7.6 months to 2.8 months. The average database error rate decreased from 0.8% to 0.2%, with a corresponding decrease in the interquartile range from 0.04%-1.0% before centralization to 0.01%-0.27% after centralization. Centralization has provided the CTN with integrated trial status reporting and the first standards-based public data share. A preliminary cost-benefit analysis showed a 50% reduction in data management cost per study participant over the life of a trial. A single clinical trial network comprising addiction researchers and community treatment programs was assessed, so the findings may not be applicable to other research settings. The identified informatics components provide the information and infrastructure needed for our clinical trial network. Post-centralization data management operations are more efficient and less costly, with higher data quality.

  2. [Quality management and participation into clinical database].

    PubMed

    Okubo, Suguru; Miyata, Hiroaki; Tomotaki, Ai; Motomura, Noboru; Murakami, Arata; Ono, Minoru; Iwanaka, Tadashi

    2013-07-01

    Quality management is necessary for establishing a useful clinical database in cooperation with healthcare professionals and facilities. The elements of such management are: 1) progress management of data entry; 2) liaison with database participants (healthcare professionals); and 3) modification of the data collection form. In addition, healthcare facilities joining a clinical database are expected to consider ethical issues and information security. Database participants should also consult ethical review boards and provide a consultation service for patients.

  3. Geoscience information integration and visualization research of Shandong Province, China based on ArcGIS engine

    NASA Astrophysics Data System (ADS)

    Xu, Mingzhu; Gao, Zhiqiang; Ning, Jicai

    2014-10-01

    To improve the efficiency of access to geoscience data, an efficient data model and storage solution should be used. In existing storage solutions, geoscience data are usually classified by format or coordinate system; when the data volume is large, this is not conducive to searching for geographic features. In this study, a geographical information integration system of Shandong province, China was developed based on ArcGIS Engine, .NET, and SQL Server. It uses the Geodatabase spatial data model and ArcSDE to organize and store spatial and attribute data, and establishes a geoscience database of Shandong. Seven function modules were designed: map browsing, database and subject management, layer control, map query, spatial analysis and map symbolization. The system's ability to browse and manage data by geoscience subject makes it convenient for geographic researchers and decision-making departments to use the data.

  4. PlantDB – a versatile database for managing plant research

    PubMed Central

    Exner, Vivien; Hirsch-Hoffmann, Matthias; Gruissem, Wilhelm; Hennig, Lars

    2008-01-01

    Background Research in plant science laboratories often involves usage of many different species, cultivars, ecotypes, mutants, alleles or transgenic lines. This creates a great challenge to keep track of the identity of experimental plants and stored samples or seeds. Results Here, we describe PlantDB – a Microsoft® Office Access database – with a user-friendly front-end for managing information relevant for experimental plants. PlantDB can hold information about plants of different species, cultivars or genetic composition. Introduction of a concise identifier system allows easy generation of pedigree trees. In addition, all information about any experimental plant – from growth conditions and dates over extracted samples such as RNA to files containing images of the plants – can be linked unequivocally. Conclusion We have been using PlantDB for several years in our laboratory and found that it greatly facilitates access to relevant information. PMID:18182106
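
    The concise-identifier idea can be sketched as follows: if every plant record carries its parents' identifiers, a pedigree tree falls out of a simple traversal. The ID format and data below are hypothetical, and PlantDB itself is an Access application rather than Python code.

      # Pedigree generation from parent identifiers. IDs are invented
      # examples; None marks founder lines with no recorded parents.
      from collections import defaultdict

      # plant_id -> (mother_id, father_id)
      parents = {
          "At-col0-001": (None, None),
          "At-ler1-001": (None, None),
          "At-F1-0001":  ("At-col0-001", "At-ler1-001"),
          "At-F2-0001":  ("At-F1-0001", "At-F1-0001"),  # selfed
      }

      children = defaultdict(list)
      for child, (m, f) in parents.items():
          for p in {m, f} - {None}:
              children[p].append(child)

      def print_pedigree(plant_id, depth=0):
          """Walk descendants to print a pedigree tree rooted at one line."""
          print("  " * depth + plant_id)
          for c in sorted(children[plant_id]):
              print_pedigree(c, depth + 1)

      print_pedigree("At-col0-001")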

  5. Geoinformatics in the public service: building a cyberinfrastructure across the geological surveys

    USGS Publications Warehouse

    Allison, M. Lee; Gundersen, Linda C.; Richard, Stephen M.; Keller, G. Randy; Baru, Chaitanya

    2011-01-01

    Advanced information technology infrastructure is increasingly being employed in the Earth sciences to provide researchers with efficient access to massive central databases and to integrate diversely formatted information from a variety of sources. These geoinformatics initiatives enable manipulation, modeling and visualization of data in a consistent way, and are helping to develop integrated Earth models at various scales, and from the near surface to the deep interior. This book uses a series of case studies to demonstrate computer and database use across the geosciences. Chapters are thematically grouped into sections that cover data collection and management; modeling and community computational codes; visualization and data representation; knowledge management and data integration; and web services and scientific workflows. Geoinformatics is a fascinating and accessible introduction to this emerging field for readers across the solid Earth sciences and an invaluable reference for researchers interested in initiating new cyberinfrastructure projects of their own.

  6. Research on Design Information Management System for Leather Goods

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Peng, Wen-li

    The idea of setting up a design information management system for leather goods was put forward to solve the problems existing in the current information management of leather goods. The working principles of the design information management system for leather goods were analyzed in detail. Firstly, the approach to acquiring design information for leather goods was introduced. Secondly, the methods of processing design information were introduced. Thirdly, the management of design information in the database was studied. Finally, the application of the system was discussed, taking shoe products as an example.

  7. Nuclear science abstracts (NSA) database 1948--1974 (on the Internet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Nuclear Science Abstracts (NSA) is a comprehensive abstract and index collection of the international nuclear science and technology literature for the period 1948 through 1976. Included are scientific and technical reports of the US Atomic Energy Commission, US Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Coverage of the literature since 1976 is provided by the Energy Science and Technology Database. Approximately 25% of the records in the file contain abstracts. These are from the following volumes of the print Nuclear Science Abstracts: Volumes 12-18, Volume 29, and Volume 33. The database contains over 900,000 bibliographic records. All aspects of nuclear science and technology are covered, including: Biomedical Sciences; Metals, Ceramics, and Other Materials; Chemistry; Nuclear Materials and Waste Management; Environmental and Earth Sciences; Particle Accelerators; Engineering; Physics; Fusion Energy; Radiation Effects; Instrumentation; Reactor Technology; Isotope and Radiation Source Technology. The database includes all records contained in Volume 1 (1948) through Volume 33 (1976) of the printed version of Nuclear Science Abstracts (NSA). This worldwide coverage includes books, conference proceedings, papers, patents, dissertations, engineering drawings, and journal literature. This database is now available for searching through the GOV. Research Center (GRC) service. GRC is a single online web-based search service to well-known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.

  8. TheHiveDB image data management and analysis framework.

    PubMed

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-06

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative.

  9. TheHiveDB image data management and analysis framework

    PubMed Central

    Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew

    2014-01-01

    The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative. PMID:24432000

  10. Overcoming barriers to a research-ready national commercial claims database.

    PubMed

    Newman, David; Herrera, Carolina-Nicole; Parente, Stephen T

    2014-11-01

    Billions of dollars have been spent on the goal of making healthcare data available to clinicians and researchers in the hopes of improving healthcare and lowering costs. However, the problems of data governance, distribution, and accessibility remain challenges for the healthcare system to overcome. In this study, we discuss some of the issues around holding, reporting, and distributing data, including the newest "big data" challenge: making the data accessible to researchers and policy makers. This article presents a case study in "big healthcare data" involving the Health Care Cost Institute (HCCI). HCCI is a nonprofit, nonpartisan, independent research institute that serves as a voluntary repository of national commercial healthcare claims data. Governance of large healthcare databases is complicated by the data-holding model and further complicated by issues related to distribution to research teams. For multi-payer healthcare claims databases, the 2 most common models of data holding (mandatory and voluntary) have different data security requirements. Furthermore, data transport and accessibility may require technological investment. HCCI's efforts offer insights from which other data managers and healthcare leaders may benefit when contemplating a data collaborative.

  11. Adolescent Asthma Self-Management: A Concept Analysis and Operational Definition.

    PubMed

    Mammen, Jennifer; Rhee, Hyekyun

    2012-12-01

    BACKGROUND: Adolescents with asthma have a higher risk of morbidity and mortality than other age groups. Asthma self-management has been shown to improve outcomes; however, the concept of asthma self-management is not explicitly defined. METHODS: We use the Norris method of concept clarification to delineate what constitutes the concept of asthma self-management in adolescents. Five databases were searched to identify components of the concept of adolescent asthma self-management, and lists of relevant subconcepts were compiled and categorized. RESULTS: Analysis revealed 4 specific domains of self-management behaviors: (1) symptom prevention; (2) symptom monitoring; (3) acute symptom management; and (4) communication with important others. These domains of self-management were mediated by intrapersonal/cognitive and interpersonal/contextual factors. CONCLUSIONS: Based on the analysis, we offer a research-based operational definition for adolescent asthma self-management and a preliminary model that can serve as a conceptual base for further research.

  12. The AMMA information system

    NASA Astrophysics Data System (ADS)

    Brissebrat, Guillaume; Fleury, Laurence; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Asencio, Nicole; Favot, Florence; Roussot, Odile

    2013-04-01

    The AMMA information system aims at expediting the communication of data and scientific results inside the AMMA community and beyond. It has already been adopted as the data management system by several projects and is meant to become a reference information system on the West African area for the whole scientific community. The AMMA database and the associated online tools have been developed and are managed by two French teams (IPSL Database Centre, Palaiseau, and OMP Data Service, Toulouse). The complete system has been fully duplicated and is operated by the AGRHYMET Regional Centre in Niamey, Niger. The AMMA database contains a wide variety of datasets: about 250 local observation datasets that cover geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health...), coming from either operational networks or scientific experiments and including historical data in West Africa from 1850; 1350 outputs of a socio-economic questionnaire; 60 operational satellite products and several research products; and 10 output sets of meteorological and ocean operational models and 15 of research simulations. Database users can access all the data using either the portal http://database.amma-international.org or http://amma.agrhymet.ne/amma-data. Different modules are available. The complete catalogue gives access to metadata (i.e., information about the datasets) that are compliant with international standards (ISO 19115, INSPIRE...). Registration pages enable users to read and sign the data and publication policy and to apply for a user database account. The data access interface enables users to easily build a data extraction request by selecting various criteria such as location, time, and parameters. At present, the AMMA database counts more than 740 registered users and processes about 80 data requests every month. In order to monitor day-to-day meteorological and environmental information over West Africa, some quick-look and report display websites have been developed. They met the operational needs of the observational teams during the AMMA 2006 (http://aoc.amma-international.org) and FENNEC 2011 (http://fenoc.sedoo.fr) campaigns, but they also enable scientific teams to share physical indices along the monsoon season (http://misva.sedoo.fr since 2011). A collaborative WIKINDX tool has been set online in order to manage scientific publications and communications of interest to AMMA (http://biblio.amma-international.org). The bibliographic database now counts about 1200 references and is the most exhaustive document collection about the African monsoon available to all. Every scientist is invited to make use of the different AMMA online tools and data. Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
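
    The data access interface described above essentially assembles a parameterized extraction request from the user's criteria. A minimal Python sketch of that idea follows; the endpoint and parameter names are invented for illustration and are not the portal's actual API.

        # Hypothetical sketch: build a data-extraction request from criteria.
        # The endpoint and parameter names are invented, not AMMA's real API.
        from urllib.parse import urlencode

        criteria = {
            "bbox": "-20,5,10,25",        # lon/lat window over West Africa
            "start": "2006-06-01",        # AMMA 2006 campaign period
            "end": "2006-09-30",
            "parameter": "precipitation",
        }
        print("https://database.example.org/extract?" + urlencode(criteria))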

  13. TAMEE: data management and analysis for tissue microarrays.

    PubMed

    Thallinger, Gerhard G; Baumgartner, Kerstin; Pirklbauer, Martin; Uray, Martina; Pauritsch, Elke; Mehes, Gabor; Buck, Charles R; Zatloukal, Kurt; Trajanoski, Zlatko

    2007-03-07

    With the introduction of tissue microarrays (TMAs) researchers can investigate gene and protein expression in tissues on a high-throughput scale. TMAs generate a wealth of data calling for extended, high level data management. Enhanced data analysis and systematic data management are required for traceability and reproducibility of experiments and provision of results in a timely and reliable fashion. Robust and scalable applications have to be utilized, which allow secure data access, manipulation and evaluation for researchers from different laboratories. TAMEE (Tissue Array Management and Evaluation Environment) is a web-based database application for the management and analysis of data resulting from the production and application of TMAs. It facilitates storage of production and experimental parameters, of images generated throughout the TMA workflow, and of results from core evaluation. Database content consistency is achieved using structured classifications of parameters. This allows the extraction of high quality results for subsequent biologically-relevant data analyses. Tissue cores in the images of stained tissue sections are automatically located and extracted and can be evaluated using a set of predefined analysis algorithms. Additional evaluation algorithms can be easily integrated into the application via a plug-in interface. Downstream analysis of results is facilitated via a flexible query generator. We have developed an integrated system tailored to the specific needs of research projects using high density TMAs. It covers the complete workflow of TMA production, experimental use and subsequent analysis. The system is freely available for academic and non-profit institutions from http://genome.tugraz.at/Software/TAMEE.
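
    The plug-in interface mentioned above is not specified in the abstract; the following minimal Python sketch shows how such an evaluation-algorithm plug-in registry is commonly structured. All class, method, and registry names are hypothetical, not TAMEE's actual API.

        # Hypothetical plug-in interface for tissue-core evaluation algorithms.
        from abc import ABC, abstractmethod

        class CoreEvaluator(ABC):
            """Base class an evaluation plug-in would implement (hypothetical)."""
            name: str  # identifier under which the plug-in is registered

            @abstractmethod
            def evaluate(self, core_image) -> dict:
                """Return scalar results for one stained tissue core."""

        class MeanIntensityEvaluator(CoreEvaluator):
            name = "mean_intensity"
            def evaluate(self, core_image) -> dict:
                # core_image is assumed to be a 2-D greyscale list of lists
                flat = [px for row in core_image for px in row]
                return {"mean_intensity": sum(flat) / len(flat)}

        REGISTRY = {}  # the host application would discover plug-ins here

        def register(evaluator):
            REGISTRY[evaluator.name] = evaluator

        register(MeanIntensityEvaluator())
        print(REGISTRY["mean_intensity"].evaluate([[0, 128], [255, 129]]))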

  14. A National Virtual Specimen Database for Early Cancer Detection

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel; Kincaid, Heather; Kelly, Sean; Thornquist, Mark; Johnsey, Donald; Winget, Marcy

    2003-01-01

    Access to biospecimens is essential for enabling cancer biomarker discovery. The National Cancer Institute's (NCI) Early Detection Research Network (EDRN) comprises and integrates a large number of laboratories into a network in order to establish a collaborative scientific environment to discover and validate disease markers. The diversity of both the institutions and the collaborative focus has created the need for establishing cross-disciplinary teams focused on integrating expertise in biomedical research, computation and biostatistics, and computer science. Given the collaborative design of the network, the EDRN needed an informatics infrastructure. The Fred Hutchinson Cancer Research Center, the National Cancer Institute, and NASA's Jet Propulsion Laboratory (JPL) teamed up to build an informatics infrastructure creating a collaborative, science-driven research environment despite the geographic separation and structural differences of the information systems that existed within the diverse network. EDRN investigators identified the need to share biospecimen data captured across the country and managed in disparate databases. As a result, the informatics team initiated an effort to create a virtual tissue database whereby scientists could search and locate details about specimens located at collaborating laboratories. Each database, however, was locally implemented and integrated into collection processes and methods unique to each institution. This meant that efforts to integrate databases needed to be done in a manner that did not require redesign or re-implementation of existing systems.
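
    The integration strategy described, querying each site's database in place rather than redesigning it, is essentially a mediator/adapter pattern. A minimal sketch of that pattern under stated assumptions (all class and field names are hypothetical):

        # Mediator/adapter sketch for a "virtual" specimen database.
        from dataclasses import dataclass

        @dataclass
        class SpecimenRecord:              # common result format across sites
            site: str
            specimen_id: str
            organ: str

        class SiteA:                       # stands in for a local SQL database
            _rows = [("A-001", "lung"), ("A-002", "colon")]
            def search(self, organ):
                return [SpecimenRecord("site_a", sid, o)
                        for sid, o in self._rows if o == organ]

        class SiteB:                       # stands in for a local REST service
            _docs = [{"id": "B-17", "tissue": "lung"}]
            def search(self, organ):
                return [SpecimenRecord("site_b", d["id"], d["tissue"])
                        for d in self._docs if d["tissue"] == organ]

        def federated_search(organ, sites):
            results = []
            for site in sites:
                results.extend(site.search(organ))  # each site answers locally
            return results

        print(federated_search("lung", [SiteA(), SiteB()]))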

  15. Short Fiction on Film: A Relational DataBase.

    ERIC Educational Resources Information Center

    May, Charles

    Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…

  16. 47 CFR 0.241 - Authority delegated.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...

  17. 47 CFR 0.241 - Authority delegated.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...

  18. 47 CFR 0.241 - Authority delegated.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...

  19. Strategic Plan 2011 to 2016

    DTIC Science & Technology

    2011-02-01

    search capability for Air Force Research Information Management System (AFRIMS) data as a part of federated search under DTIC Online Access...provide vetted requests to dataset owners. • Develop a federated search capability for databases containing limited distribution material. • Deploy

  20. Survey data and metadata modelling using document-oriented NoSQL

    NASA Astrophysics Data System (ADS)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year undergo metadata changes, yet they need to be stored in an integrated way so that statistical data can be obtained faster and more easily. A data warehouse (DW) can be used for this purpose; however, the change of variables in every period cannot be accommodated by a traditional DW, which cannot handle variable changes via Slowly Changing Dimensions (SCD). Previous research handled the change of variables in a DW by managing metadata with a multiversion DW (MVDW), designed using the relational model. Other studies have found that a nonrelational model in a NoSQL database offers faster read times than the relational model. We therefore propose managing metadata changes using NoSQL. This study proposes a DW model to manage change and algorithms to retrieve data whose metadata have changed. Evaluation of the proposed model and algorithms shows that a database with the proposed design can retrieve data with metadata changes properly. This paper contributes to comprehensive analysis of data with metadata changes (especially survey data) in integrated storage.
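
    The document-oriented idea at the heart of the proposal can be illustrated in a few lines: each survey wave is stored as a self-describing document carrying its own metadata, so variables may change between waves without any schema migration. A toy sketch in plain Python (in practice a document store such as MongoDB would hold these documents; all field names are invented):

        # Each wave embeds its own metadata, so variable changes need no migration.
        waves = [
            {"year": 2016,
             "metadata": {"inc": "monthly income (USD)"},
             "records": [{"id": 1, "inc": 1200}]},
            {"year": 2017,  # variable renamed and unit changed between waves
             "metadata": {"income_k": "annual income (kUSD)"},
             "records": [{"id": 1, "income_k": 15.6}]},
        ]

        # Retrieval consults each wave's metadata instead of a fixed schema.
        for wave in waves:
            print(wave["year"], "->", sorted(wave["metadata"]))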

  1. Improving Forsyth Technical Community College's Ability to Develop and Maintain Partnerships: Leveraging Technology to Develop Partnerships

    ERIC Educational Resources Information Center

    Murdock, Alan K.

    2017-01-01

    Forsyth Technical Community College (FTCC) faces a shortage of funding to meet the demands of students, faculty, staff, and businesses. Through this practitioner research, the utilization of the college's current customer relationship management (CRM) database was advanced. By leveraging technology, the researcher assisted the college in meeting the…

  2. Managing Content in a Matter of Minutes

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA software created to help scientists expeditiously search and organize their research documents is now aiding compliance personnel, law enforcement investigators, and the general public in their efforts to search, store, manage, and retrieve documents more efficiently. Developed at Ames Research Center, NETMARK software was designed to manipulate vast amounts of unstructured and semi-structured NASA documents. NETMARK is both a relational and object-oriented technology built on an Oracle enterprise-wide database. To ensure easy user access, Ames constructed NETMARK as a Web-enabled platform utilizing the latest in Internet technology. One of the significant benefits of the program was its ability to store and manage mission-critical data.

  3. YPED: An Integrated Bioinformatics Suite and Database for Mass Spectrometry-based Proteomics Research

    PubMed Central

    Colangelo, Christopher M.; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L.; Carriero, Nicholas J.; Gulcicek, Erol E.; Lam, TuKiet T.; Wu, Terence; Bjornson, Robert D.; Bruce, Can; Nairn, Angus C.; Rinehart, Jesse; Miller, Perry L.; Williams, Kenneth R.

    2015-01-01

    We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to a group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography–tandem mass spectrometry (LC–MS/MS) database search results, label and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED’s database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. PMID:25712262

  4. YPED: an integrated bioinformatics suite and database for mass spectrometry-based proteomics research.

    PubMed

    Colangelo, Christopher M; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L; Carriero, Nicholas J; Gulcicek, Erol E; Lam, TuKiet T; Wu, Terence; Bjornson, Robert D; Bruce, Can; Nairn, Angus C; Rinehart, Jesse; Miller, Perry L; Williams, Kenneth R

    2015-02-01

    We report a significantly enhanced bioinformatics suite and database for proteomics research called the Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to a group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography-tandem mass spectrometry (LC-MS/MS) database search results, label and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED's database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  5. Fleet-Wide Prognostic and Health Management Suite: Asset Fault Signature Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: (1) Diagnostic Advisor, (2) Asset Fault Signature (AFS) Database, (3) Remaining Useful Life Advisor, and (4) Remaining Useful Life Database. The paper focuses on the AFS Database of the FW-PHM Suite, which is used to catalog asset fault signatures. A fault signature is a structured representation of the information that an expert would use to first detect and then verify the occurrence of a specific type of fault. The fault signatures developed to assess the health status of generator step-up transformers are described in the paper. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.

  6. Sankofa pediatric HIV disclosure intervention cyber data management: building capacity in a resource-limited setting and ensuring data quality

    PubMed Central

    Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R.; Paintsil, Elijah

    2015-01-01

    Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure (https://hubzero.org), an open source software platform. The hub database components support: (1) data management – the “databases” component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection – the “forms” component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration – the “dataviewer” component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child–caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team. PMID:26616131
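
    As an illustration of the submit-time error checking and data validation performed by the forms component, here is a minimal validation sketch; the rules and field names are invented and are not the Sankofa Project's actual checks.

        # Hypothetical submit-time validation for a web case report form.
        def validate_crf(form):
            errors = []
            if not form.get("dyad_id"):
                errors.append("dyad_id is required")
            age = form.get("child_age")
            if age is None or not (1 <= age <= 17):
                errors.append("child_age must be between 1 and 17")
            if form.get("disclosed") not in {"yes", "no"}:
                errors.append("disclosed must be 'yes' or 'no'")
            return errors

        print(validate_crf({"dyad_id": "GH-0012", "child_age": 9, "disclosed": "no"}))  # []
        print(validate_crf({"child_age": 42}))  # three errors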

  7. Coordination and Data Management of the International Arctic Buoy Program

    DTIC Science & Technology

    1997-09-30

    ...which can drive sea ice models, and for input into climate change studies. Recent research using the IABP databases includes back and forward trajectory...present. Figure 2 shows the mean annual field of ice motion and sea level pressure. APPROACH: Coordination of the IABP falls into the categories of...products of the IABP are now also available on the World Wide Web. Our recent efforts to improve the database have been directed towards producing a

  8. Database Design Methodology and Database Management System for Computer-Aided Structural Design Optimization.

    DTIC Science & Technology

    1984-12-01

    Prepared for the Air Force Office of Scientific Research under Grant No. AFOSR 82-0322, December 1984. Unclassified; distribution unlimited.

  9. WOVOdat: A New Tool for Managing and Accessing Data of Worldwide Volcanic Unrest

    NASA Astrophysics Data System (ADS)

    Venezky, D. Y.; Malone, S. D.; Newhall, C. G.

    2002-12-01

    WOVOdat (World Organization of Volcano Observatories database of volcanic unrest) will for the first time bring together data of worldwide volcanic seismicity, ground deformation, fumarolic activity, and other changes within or adjacent to a volcanic system. Although a large body of data and experience has been built over the past century, we currently have no means of accessing that collective experience for use during crises and for research. WOVOdat will be the central resource of a data management system; other components will include utilities for data input and archiving, structured data retrieval, and data mining; educational modules; and links to institutional databases such as IRIS (global seismicity), UNAVCO (global GPS coordinates and strain vectors), and the Smithsonian's Global Volcanism Program (historical eruptions). Data will be geospatially and time-referenced, to provide four-dimensional images of how volcanic systems respond to magma intrusion, regional strain, and other disturbances prior to and during eruption. As part of the design phase, a small WOVOdat team is currently collecting information from observatories about their data types, formats, and local data management. The database schema is being designed such that responses to common, yet complex, queries are rapid (e.g., where else has similar unrest occurred and what was the outcome?) while also allowing for more detailed research analysis of relationships between various parameters (e.g., what do temporal relations between long-period earthquakes, transient deformation, and spikes in gas emission tell us about the geometry and physical properties of magma and a volcanic edifice?). We are excited by the potential of WOVOdat, and we invite participation in its design and development. Next steps involve formalizing and testing the design and developing utilities for translating data of various formats into common formats. The large job of populating the database will follow, and eventually we will have a great new tool for eruption forecasting and research.
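
    The example query quoted above ("where else has similar unrest occurred?") reduces to a filter over time- and geo-referenced unrest records. A minimal SQLite sketch of that idea follows; the schema and all numbers are illustrative only, not WOVOdat's actual design.

        # Toy time- and geo-referenced unrest table with a similarity query.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE unrest (
            volcano TEXT, lat REAL, lon REAL,
            start_date TEXT, lp_events_per_day REAL, uplift_mm REAL)""")
        db.executemany("INSERT INTO unrest VALUES (?,?,?,?,?,?)", [
            ("Pinatubo", 15.13,  120.35, "1991-04-02", 120.0, 30.0),
            ("Rainier",  46.85, -121.76, "2002-07-10",   5.0,  1.0),
        ])

        # "Similar unrest": high long-period seismicity plus significant uplift.
        rows = db.execute("""SELECT volcano, start_date FROM unrest
                             WHERE lp_events_per_day > 50 AND uplift_mm > 10""")
        print(rows.fetchall())  # [('Pinatubo', '1991-04-02')]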

  10. Integrated Electronic Health Record Database Management System: A Proposal.

    PubMed

    Schiza, Eirini C; Panos, George; David, Christiana; Petkov, Nicolai; Schizas, Christos N

    2015-01-01

    eHealth has attained significant importance as a new mechanism for health management and medical practice. However, the technological growth of eHealth is still limited by the technical expertise needed to develop appropriate products. Researchers are constantly developing and testing new software for building and handling clinical medical records, now renamed Electronic Health Record (EHR) systems; EHRs take full advantage of technological developments and at the same time provide increased diagnostic and treatment capabilities to doctors. A step to be considered for facilitating this aim is to involve the doctor more actively in building the fundamental steps for creating the EHR system and database. A global clinical patient record database management system can be created electronically by simulating real-life medical practice health record taking and by utilizing and analyzing the recorded parameters. This proposed approach demonstrates the effective implementation of a universal classic medical record in electronic form, a procedure by which clinicians are led to utilize algorithms and intelligent systems for their differential diagnosis, final diagnosis, and treatment strategies.

  11. Evolution of a Patient Information Management System in a Local Area Network Environment at Loyola University of Chicago Medical Center

    PubMed Central

    Price, Ronald N; Chandrasekhar, Arcot J; Tamirisa, Balaji

    1990-01-01

    The Department of Medicine at Loyola University Medical Center (LUMC) of Chicago has implemented a local area network (LAN) based Patient Information Management System (PIMS) as part of its integrated departmental database management system. PIMS consists of related database applications encompassing demographic information, current medications, problem lists, clinical data, prior events, and on-line procedure results. Integration into the existing departmental database system permits PIMS to capture and manipulate data in other departmental applications. Standardization of clinical data is accomplished through three data tables that verify diagnosis codes and procedure codes and provide a standardized set of clinical data elements. The modularity of the system, coupled with standardized data formats, allowed the development of a Patient Information Protocol System (PIPS). PIPS, a user-definable protocol processor, provides physicians with individualized data entry or review screens customized for their specific research protocols or practice habits. Physician feedback indicates that the PIMS/PIPS combination enhances their ability to collect and review specific patient information by filtering large amounts of clinical data.

  12. Microcomputer Database Management Systems for Bibliographic Data.

    ERIC Educational Resources Information Center

    Pollard, Richard

    1986-01-01

    Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)

  13. The Data Base and Decision Making in Public Schools.

    ERIC Educational Resources Information Center

    Hedges, William D.

    1984-01-01

    Describes generic types of databases--file management systems, relational database management systems, and network/hierarchical database management systems--with their respective strengths and weaknesses; discusses factors to be considered in determining whether a database is desirable; and provides evaluative criteria for use in choosing…

  14. The Crisis in Scholarly Communication, Open Access, and Open Data Policies: The Libraries' Perspective

    NASA Astrophysics Data System (ADS)

    Besara, Rachel

    2015-03-01

    For years, the cost of STEM databases has grown faster than the rate of inflation. Libraries have reallocated funds to continue to provide support to their scientific communities, but at many institutions they are reaching a point where they are no longer able to provide access to many databases considered standard for supporting research. A possible or partial alleviation of this problem is the federal open access mandate. However, this shift challenges the current model of publishing and data management in the sciences. This talk will discuss these topics from the perspective of research libraries supporting physics and the STEM disciplines.

  15. An architecture for a brain-image database

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.

    2000-01-01

    The widespread availability of methods for noninvasive assessment of brain structure has enabled researchers to investigate neuroimaging correlates of normal aging, cerebrovascular disease, and other processes; we designate such studies as image-based clinical trials (IBCTs). We propose an architecture for a brain-image database, which integrates image processing and statistical operators, and thus supports the implementation and analysis of IBCTs. The implementation of this architecture is described and results from the analysis of image and clinical data from two IBCTs are presented. We expect that systems such as this will play a central role in the management and analysis of complex research data sets.

  16. Online Monitoring of Induction Motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, Timothy R.; Agarwal, Vivek; Lybeck, Nancy Jean

    2016-01-01

    The online monitoring of active components project, under the Advanced Instrumentation, Information, and Control Technologies Pathway of the Light Water Reactor Sustainability Program, researched diagnostic and prognostic models for alternating current induction motors (IM). Idaho National Laboratory (INL) worked with the Electric Power Research Institute (EPRI) to augment and revise the fault signatures previously implemented in the Asset Fault Signature Database of EPRI’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Induction motor diagnostic models were researched using the experimental data collected by Idaho State University. Prognostic models were explored in the literature and through a limited experiment with a 40 HP motor to seed the Remaining Useful Life Database of the FW-PHM Suite.

  17. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    NASA Astrophysics Data System (ADS)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This was apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfalls. The specific climatic situation caused numerous damages with serious impact on settlements and infrastructure. Knowledge of the spatial distribution of landslides, their processes, and their characteristics is important to evaluate the potential risk that can arise from mass movements in those areas. In the frame of two projects, about 400 landslides were mapped and detailed data sets were compiled during the years 2011 to 2014 at the Franconian Alb. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (the Bavarian Environment Agency) within the project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide the potential for secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL, which provides the possibility to store spatial and geographic objects and to connect to several GIS applications, like GRASS GIS, SAGA GIS, QGIS, and GDAL, a geospatial library (Obe and Hsu 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured by using GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python, or Perl programming languages. It is possible to work directly with the (spatial) data entirety of the database in R. The inventory of the database includes, amongst others, information on location, landslide types and causes, geomorphological positions, geometries, hazards and damages, as well as assessments related to the activity of landslides. Furthermore, spatial objects are stored which represent the components of a landslide, in particular the scarps and the accumulation areas. Besides, waterways, map sheets, contour lines, detailed infrastructure data, digital elevation models, and aspect and slope data are included. Examples of spatial queries to the database are intersections of raster and vector data for calculating slope gradients or aspects of landslide areas, the creation of multiple overlaying sections for the comparison of slopes, and distances to the infrastructure or to the next receiving drainage. Further queries retrieve information on landslide magnitudes, distribution, and clustering, as well as potential correlations with geomorphological or geological conditions. The data management concept in this study can be implemented for any academic, public, or private use, because it is independent of any obligatory licenses. The created spatial database offers a platform for interdisciplinary research and socio-economic questions, as well as for landslide susceptibility and hazard indication mapping. Obe, R.O., Hsu, L.S. 2011. PostGIS in Action. Manning Publications, Stamford, 492 pp.
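
    One of the spatial queries described above, the distance from each landslide scarp to the next receiving drainage, might look as follows when issued through psycopg2; the table and column names are assumed, and a running PostgreSQL/PostGIS instance is required.

        # Sketch: nearest-waterway distance per scarp via PostGIS (names assumed).
        import psycopg2

        conn = psycopg2.connect("dbname=landslides user=gis")  # details assumed
        with conn.cursor() as cur:
            cur.execute("""
                SELECT s.landslide_id,
                       ST_Distance(s.geom::geography, w.geom::geography) AS dist_m
                FROM scarps AS s
                CROSS JOIN LATERAL (   -- nearest waterway via the KNN operator
                    SELECT geom FROM waterways ORDER BY geom <-> s.geom LIMIT 1
                ) AS w;
            """)
            for landslide_id, dist_m in cur.fetchall():
                print(landslide_id, round(dist_m, 1), "m to nearest drainage")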

  18. 47 CFR 52.101 - General definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...

  19. 47 CFR 52.101 - General definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...

  20. 47 CFR 52.101 - General definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...

  1. 47 CFR 52.101 - General definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...

  2. 47 CFR 52.101 - General definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...

  3. 47 CFR 0.241 - Authority delegated.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... individual database managers; and to perform other functions as needed for the administration of the TV bands... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers...

  4. "Hyperstat": an educational and working tool in epidemiology.

    PubMed

    Nicolosi, A

    1995-01-01

    The work of a researcher in epidemiology is based on studying literature, planning studies, gathering data, analyzing data, and writing results. He therefore needs to perform more or less simple calculations, to consult or quote literature, to consult textbooks about certain issues or procedures, and to look up specific formulas. There are no programs conceived as a workstation to assist the different aspects of the researcher's work in an integrated fashion. A hypertextual system was developed which supports different stages of the epidemiologist's work; it combines database management, statistical analysis and planning, and literature searches. The software was developed on the Apple Macintosh using Hypercard 2.1 as a database and HyperTalk as a programming language. The program is structured in 7 "stacks" or files: Procedures; Statistical Tables; Graphs; References; Text; Formulas; Help. Each stack has its own management system with an automated table of contents. Stacks contain "cards" which make up the databases and carry executable programs. The programs are of four kinds: association; statistical procedure; formatting (input/output); database management. The system performs general statistical procedures, procedures applicable to epidemiological studies only (follow-up and case-control), and procedures for clinical trials. All commands are given by clicking the mouse on self-explanatory "buttons". To perform calculations, the user only needs to enter the data into the appropriate cells and then click on the selected procedure's button. The system has a hypertextual structure: the user can go from one procedure to others following the preferred order of succession and according to built-in associations, and can access different levels of knowledge or information from any stack being consulted or operated. From every card, the user can go to a selected procedure to perform statistical calculations, to the reference database management system, to the textbook in which all procedures and issues are discussed in detail, to the database of statistical formulas with an automated table of contents, to statistical tables with an automated table of contents, or to the help module. The program has a very user-friendly interface and leaves the user free to use the same format he would use on paper. The interface does not require special skills; it reflects the Macintosh philosophy of using windows, buttons, and the mouse. This allows the user to perform complicated calculations without losing the "feel" of the data, to weigh alternatives, and to run simulations. This program shares many features with hypertexts: it has an underlying network database whose nodes consist of text, graphics, executable procedures, and combinations of these; the nodes in the database correspond to windows on the screen; the links between the nodes are visible as "active" text or icons in the windows; and the text is read by following links and opening new windows. The program is especially useful as an educational tool directed at medical and epidemiology students. The combination of computing capabilities with a textbook and databases of formulas and literature references makes the program versatile and attractive as a learning tool. The program is also helpful in the work done at the desk, where the researcher examines results, consults literature, explores different analytic approaches, plans new studies, or writes grant proposals or scientific articles.

  5. Research Models Used in Doctoral Theses on Sport Management in Turkey: A Content Analysis

    ERIC Educational Resources Information Center

    Atalay, Ahmet

    2018-01-01

    The aim of this study was to examine the methodological tendencies in the doctoral theses prepared in the field of Sports Management in Turkey between 2007 and 2016 and open to access in the database of the Council of Higher Education (CHE) National Theses Center. In this context, 111 doctoral theses prepared in the last…

  6. Coordination and Data Management of the International Arctic Buoy Programme (IABP)

    DTIC Science & Technology

    1998-01-01

    ...estimate the mean surface wind, which can drive sea ice models, and for input into climate change studies. Recent research using the IABP databases includes... Coordination and Data Management of the International Arctic Buoy Programme (IABP). Ignatius G. Rigor, Polar Science Center, Applied Physics Laboratory... the National Centers for Environmental Prediction underlaid. APPROACH: Coordination of the IABP involves distribution of information, resource

  7. [Discussion on developing a data management plan and its key factors in clinical study based on electronic data capture system].

    PubMed

    Li, Qing-na; Huang, Xiu-ling; Gao, Rui; Lu, Fang

    2012-08-01

    Data management has a significant impact on the quality control of clinical studies. Every clinical study should have a data management plan to provide overall work instructions and ensure that all of these tasks are completed according to Good Clinical Data Management Practice (GCDMP). Meanwhile, the data management plan (DMP) is an auditable document requested by regulatory inspectors and must be written in a manner that is realistic and of high quality. The significance of the DMP, the minimum standards and best practices provided by the GCDMP, the main contents of a DMP based on electronic data capture (EDC), and some key factors of the DMP influencing the quality of a clinical study are elaborated in this paper. Specifically, a DMP generally consists of 15 parts, namely: the approval page; the protocol summary; roles and training; timelines; database design, creation, maintenance, and security; data entry; data validation; quality control and quality assurance; the management of external data; serious adverse event data reconciliation; coding; database lock; data management reports; the communication plan; and abbreviated terms. Among them, the following three are regarded as the key factors: designing a standardized database for the clinical study, entering data in time, and cleansing data efficiently. In the last part of this article, the authors also analyze the problems in clinical research of traditional Chinese medicine using the EDC system and put forward some suggestions for improvement.

  8. User Interface on the World Wide Web: How to Implement a Multi-Level Program Online

    NASA Technical Reports Server (NTRS)

    Cranford, Jonathan W.

    1995-01-01

    The objective of this Langley Aerospace Research Summer Scholars (LARSS) research project was to write a user interface that utilizes current World Wide Web (WWW) technologies for an existing computer program written in C, entitled LaRCRisk. The project entailed researching data presentation and script execution on the WWW and then writing input/output procedures for the database management portion of LaRCRisk.

  9. Marketing Secondary Information Services: How and to Whom.

    ERIC Educational Resources Information Center

    Wolinsky, Carol Baker

    1983-01-01

    Discussion of the marketing of bibliographic databases focuses on defining the market, the purchasing process, and the purchase decision process for researchers, managers, and librarians. The application of marketing concepts to the purchase of online information services is noted. (EJS)

  10. Database Management Systems: New Homes for Migrating Bibliographic Records.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Bierbaum, Esther G.

    1987-01-01

    Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…

  11. 23 CFR 970.204 - Management systems requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...

  12. 23 CFR 970.204 - Management systems requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...

  13. 23 CFR 970.204 - Management systems requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...

  14. Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.

    ERIC Educational Resources Information Center

    Pieska, K. A. O.

    1986-01-01

    Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)

  15. 23 CFR 970.204 - Management systems requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...

  16. G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.

    PubMed

    Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H

    2009-01-01

    Structured data, including sets, sequences, trees, and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contains the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty being indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and its neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and to support fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and index structure are scalable to large databases, with smaller index size, faster index construction time, and faster query processing time compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
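
    The core idea, hashing a local feature for every node and letting graph similarity be measured through feature collisions, can be sketched in a few lines. This toy version simplifies the authors' feature definition to a node label plus the sorted labels of its neighbors.

        # Toy node-feature hashing in the spirit of G-hash (simplified features).
        from collections import Counter

        def local_features(graph):
            """graph: dict mapping node -> (label, [neighbor nodes])."""
            feats = Counter()
            for node, (label, nbrs) in graph.items():
                nbr_labels = tuple(sorted(graph[n][0] for n in nbrs))
                feats[hash((label, nbr_labels))] += 1
            return feats

        def kernel(g1, g2):
            f1, f2 = local_features(g1), local_features(g2)
            return sum(min(f1[k], f2[k]) for k in f1)  # histogram intersection

        # Two small molecule-like graphs sharing a C node with one O neighbor.
        g1 = {0: ("C", [1]), 1: ("O", [0])}
        g2 = {0: ("C", [1]), 1: ("O", [0, 2]), 2: ("H", [1])}
        print(kernel(g1, g2))  # 1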

  17. Climatic Data Integration and Analysis - Regional Approaches to Climate Change for Pacific Northwest Agriculture (REACCH PNA)

    NASA Astrophysics Data System (ADS)

    Seamon, E.; Gessler, P. E.; Flathers, E.; Sheneman, L.; Gollberg, G.

    2013-12-01

    The Regional Approaches to Climate Change for Pacific Northwest Agriculture (REACCH PNA) is a five-year USDA/NIFA-funded coordinated agriculture project to examine the sustainability of cereal crop production systems in the Pacific Northwest in relation to ongoing climate change. As part of this effort, an extensive data management system has been developed to enable researchers, students, and the public to upload, manage, and analyze various data. The REACCH PNA data management team has developed three core systems to encompass cyberinfrastructure and data management needs: 1) the reacchpna.org portal (https://www.reacchpna.org), the entry point for all public and secure information, with secure access by REACCH PNA members for data analysis, uploading, and informational review; 2) the REACCH PNA Data Repository, a replicated, redundant database server environment that allows for file and database storage and access to all core data; and 3) the REACCH PNA Libraries, which are functional groupings of data for REACCH PNA members and the public, based on their access level. These libraries are accessible through our https://www.reacchpna.org portal. The developed system is structured in a virtual server environment (data, applications, web) that includes a geospatial database/geospatial web server for web mapping services (ArcGIS Server), use of ESRI's Geoportal Server for data discovery and metadata management (under the ISO 19115-2 standard), Thematic Realtime Environmental Distributed Data Services (THREDDS) for data cataloging, and Interactive Python notebook server (IPython) technology for data analysis. REACCH systems are housed and maintained by the Northwest Knowledge Network project (www.northwestknowledge.net), which provides data management services to support research. Initial project data harvesting and meta-tagging efforts have resulted in the interrogation and loading of over 10 terabytes of climate model output, regional entomological data, agricultural and atmospheric information, as well as imagery, publications, videos, and other soft content. In addition, the outlined data management approach has focused on the integration and interconnection of hard data (raw data output) with associated publications, presentations, or other narrative documentation through metadata lineage associations. This harvest-and-consume data management methodology could additionally be applied to other research team environments that involve large and divergent data.

  18. Implementation of Remaining Useful Lifetime Transformer Models in the Fleet-Wide Prognostic and Health Management Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Lybeck, Nancy J.; Pham, Binh

    Research and development efforts are required to address aging and reliability concerns of the existing fleet of nuclear power plants. As most plants continue to operate beyond the license life (i.e., towards 60 or 80 years), plant components are more likely to incur age-related degradation mechanisms. To assess and manage the health of aging plant assets across the nuclear industry, the Electric Power Research Institute has developed a web-based Fleet-Wide Prognostic and Health Management (FW-PHM) Suite for diagnosis and prognosis. FW-PHM is a set of web-based diagnostic and prognostic tools and databases, comprised of the Diagnostic Advisor, the Asset Fault Signature Database, the Remaining Useful Life Advisor, and the Remaining Useful Life Database, that serves as an integrated health monitoring architecture. The main focus of this paper is the implementation of prognostic models for generator step-up transformers in the FW-PHM Suite. One prognostic model discussed is based on the functional relationship between the degree of polymerization (the most commonly used metric to assess the health of the winding insulation in a transformer) and the furfural concentration in the insulating oil. The other model is based on thermal-induced degradation of the transformer insulation. By utilizing transformer loading information, established thermal models are used to estimate the hot spot temperature inside the transformer winding. Both models are implemented in the Remaining Useful Life Database of the FW-PHM Suite. The Remaining Useful Life Advisor utilizes the implemented prognostic models to estimate the remaining useful life of the paper winding insulation in the transformer based on actual oil testing and operational data.
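
    The first model above rests on the functional relationship between furfural (2-FAL) concentration in the oil and the degree of polymerization (DP) of the paper insulation. One widely cited published fit is the Chendong (1991) correlation; whether the FW-PHM Suite uses this exact fit is an assumption made here for illustration.

        # Chendong correlation: log10(2-FAL in ppm) = 1.51 - 0.0035 * DP,
        # solved for DP. Whether FW-PHM uses this exact fit is an assumption.
        import math

        def degree_of_polymerization(furfural_ppm):
            return (1.51 - math.log10(furfural_ppm)) / 0.0035

        print(round(degree_of_polymerization(0.1)))  # ~717, healthy insulation
        print(round(degree_of_polymerization(5.0)))  # ~232, nearing end of life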

  19. [The future of clinical laboratory database management system].

    PubMed

    Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y

    1999-09-01

    To assess the present status of the clinical laboratory database management system, this study first explains the difference between the Clinical Laboratory Information System and the Clinical Laboratory System. Three kinds of database management system (DBMS) models were examined: the relational, tree, and network models. The relational model was found to be the best DBMS for the clinical laboratory database, based on our experience and the development of several clinical laboratory expert systems. As a future clinical laboratory database management system, an IC card system connected to an automatic chemical analyzer was proposed for personal health data management, and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.
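
    As a hypothetical illustration of why the relational model suits this domain, the sketch below stores laboratory results in normalized tables and retrieves them with a declarative join; the schema and values are invented for the example.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE test    (code TEXT PRIMARY KEY, description TEXT, unit TEXT);
        CREATE TABLE result  (patient_id INTEGER REFERENCES patient(id),
                              test_code  TEXT REFERENCES test(code),
                              value REAL, drawn_at TEXT);
        """)
        con.execute("INSERT INTO patient VALUES (1, 'Demo Patient')")
        con.execute("INSERT INTO test VALUES ('GLU', 'Serum glucose', 'mg/dL')")
        con.execute("INSERT INTO result VALUES (1, 'GLU', 95.0, '1999-09-01')")
        # Declarative joins across tables are the practical advantage over
        # hierarchical (tree) and network models for ad hoc laboratory queries.
        for row in con.execute("""SELECT p.name, t.description, r.value, t.unit
                                  FROM result r
                                  JOIN patient p ON p.id = r.patient_id
                                  JOIN test t ON t.code = r.test_code"""):
            print(row)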

  20. Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)

    NASA Technical Reports Server (NTRS)

    Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.

    2005-01-01

    The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.

  1. Impact of IPAD on CAD/CAM database university research

    NASA Technical Reports Server (NTRS)

    Leach, L. M.; Wozny, M. J.

    1984-01-01

    The IPAD program has provided direction, focus, and software products that have impacted CAD/CAM data base research and follow-on research. The relationship of IPAD to research projects involving the storage of geometric data in common data base facilities such as data base machines, the exchange of data between heterogeneous data bases, the development of IGES processors, the migration of large CAD/CAM data base management systems to noncompatible hosts, and the value of RIM as a research tool is described.

  2. 23 CFR 971.204 - Management systems requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...

  3. 23 CFR 971.204 - Management systems requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...

  4. 23 CFR 971.204 - Management systems requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...

  5. 23 CFR 971.204 - Management systems requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...

  6. Design of a decentralized reusable research database architecture to support data acquisition in large research projects.

    PubMed

    Iavindrasana, Jimison; Depeursinge, Adrien; Ruch, Patrick; Spahni, Stéphane; Geissbuhler, Antoine; Müller, Henning

    2007-01-01

    The diagnostic and therapeutic processes, as well as the development of new treatments, are hindered by the fragmentation of the information which underlies them. In a multi-institutional research study database, the clinical information system (CIS) contains the primary data input. A substantial share of the budget of large-scale clinical studies is often spent on data creation and maintenance. The objective of this work is to design a decentralized, scalable, reusable database architecture with lower maintenance costs for managing and integrating the distributed heterogeneous data required as the basis for a large-scale research project. Technical and legal aspects are taken into account based on various use case scenarios. The architecture contains four layers: decentralized data storage and access at the production source, a connector acting as a proxy between the CIS and the external world, an information mediator serving as a data access point, and the client side. The proposed design will be implemented in six clinical centers participating in the @neurIST project as part of a larger system for data integration and reuse for aneurysm treatment.
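
    The four-layer pattern lends itself to a compact sketch. Everything below is hypothetical (class and site names invented); it only illustrates the idea that data stay at the production source while a mediator offers clients a single access point.

        class Connector:
            # Proxy between one clinical information system and the outside world;
            # the data never leave their production source.
            def __init__(self, site_name, cis_records):
                self.site_name = site_name
                self._records = cis_records

            def query(self, criteria):
                # A real connector would also enforce local access and
                # anonymization policy before releasing anything.
                return [r for r in self._records if criteria(r)]

        class Mediator:
            # Single data access point that fans a query out to all connectors.
            def __init__(self, connectors):
                self.connectors = connectors

            def query(self, criteria):
                return {c.site_name: c.query(criteria) for c in self.connectors}

        # Client side: one call reaches every decentralized source.
        mediator = Mediator([Connector("site_a", [{"age": 54}]),
                             Connector("site_b", [{"age": 61}])])
        print(mediator.query(lambda r: r["age"] > 55))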

  7. To what extent do site-based training, mentoring, and operational research improve district health system management and leadership in low- and middle-income countries: a systematic review protocol.

    PubMed

    Belrhiti, Zakaria; Booth, Andrew; Marchal, Bruno; Verstraeten, Roosmarijn

    2016-04-27

    District health managers play a key role in the effectiveness of decentralized health systems in low- and middle-income countries. Inadequate management and leadership skills often hamper their ability to improve quality of care and effectiveness of health service delivery. Nevertheless, significant investments have been made in capacity-building programmes based on site-based training, mentoring, and operational research. This systematic review aims to review the effectiveness of site-based training, mentoring, and operational research (or action research) on the improvement of district health system management and leadership. Our secondary objectives are to assess whether variations in composition or intensity of the intervention influence its effectiveness and to identify enabling and constraining contexts and underlying mechanisms. We will search the following databases: MEDLINE, PsycInfo, Cochrane Library, CRD database (DARE), Cochrane Effective Practice and Organisation of Care (EPOC) group, ISI Web of Science, Health Evidence.org, PDQ-Evidence, ERIC, EMBASE, and TRIP. Complementary searches will be performed (hand-searching of journals and citation and reference tracking). Studies that meet the following PICO (Population, Intervention, Comparison, Outcome) criteria will be included: P: professionals working at district health management level; I: site-based training with or without mentoring, or operational research; C: normal institutional arrangements; and O: district health management functions. We will include cluster randomized controlled trials, controlled before-and-after studies, interrupted time series analyses, quasi-experimental designs, and cohort and longitudinal studies. Qualitative research will be included to contextualize findings and identify barriers and facilitators. Primary outcomes that will be reported are district health management and leadership functions. We will assess risk of bias with the Cochrane Collaboration's tools for randomized controlled trial (RCT) and non-RCT studies and Critical Appraisal Skills Programme checklists for qualitative studies. We will assess strength of recommendations with the GRADE tool for quantitative studies, and the CERQual approach for qualitative studies. Synthesis of quantitative studies will be performed through meta-analysis when appropriate. Best-fit framework synthesis will be used to synthesize qualitative studies. This protocol paper describes a systematic review assessing the effectiveness of site-based training (with or without mentoring programmes or operational research) on the improvement of district health system management and leadership. PROSPERO CRD42015032351.

  8. Efficacy of Noninvasive Stellate Ganglion Blockade Performed Using Physical Agent Modalities in Patients with Sympathetic Hyperactivity-Associated Disorders: A Systematic Review and Meta-Analysis

    PubMed Central

    Liao, Chun-De; Tsauo, Jau-Yih; Liou, Tsan-Hon

    2016-01-01

    Background Stellate ganglion blockade (SGB) is mainly used to relieve symptoms of neuropathic pain in conditions such as complex regional pain syndrome and has several potential complications. Noninvasive SGB performed using physical agent modalities (PAMs), such as light irradiation and electrical stimulation, can be clinically used as an alternative to conventional invasive SGB. However, its application protocols vary and its clinical efficacy remains controversial. This study investigated the use of noninvasive SGB for managing neuropathic pain or other disorders associated with sympathetic hyperactivity. Materials and Methods We performed a comprehensive search of the following online databases: Medline, PubMed, Excerpta Medica Database, Cochrane Library Database, Ovid MEDLINE, Europe PubMed Central, EBSCOhost Research Databases, CINAHL, ProQuest Research Library, Physiotherapy Evidence Database, WorldWideScience, BIOSIS, and Google Scholar. We identified and included quasi-randomized or randomized controlled trials reporting the efficacy of SGB performed using therapeutic ultrasound, transcutaneous electrical nerve stimulation, light irradiation using low-level laser therapy, or xenon light or linearly polarized near-infrared light irradiation near or over the stellate ganglion region in treating complex regional pain syndrome or disorders requiring sympatholytic management. The included articles were subjected to a meta-analysis and risk of bias assessment. Results Nine randomized and four quasi-randomized controlled trials were included. Eleven trials had good methodological quality with a Physiotherapy Evidence Database (PEDro) score of ≥6, whereas the remaining two trials had a PEDro score of <6. The meta-analysis results revealed that the efficacy of noninvasive SGB on 100-mm visual analog pain score is higher than that of a placebo or active control (weighted mean difference, −21.59 mm; 95% CI, −34.25, −8.94; p = 0.0008). Conclusions Noninvasive SGB performed using PAMs effectively relieves pain of various etiologies, making it a valuable addition to the contemporary pain management armamentarium. However, this evidence is limited by the potential risk of bias. PMID:27911934

  9. The development of an intelligent user interface for NASA's scientific databases

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Roelofs, Larry H.

    1986-01-01

    The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI effort is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. This paper presents the design concepts, development approach, and performance evaluation of a prototype Intelligent User Interface Subsystem (IUIS) supporting an operational database.

  10. Struggling with Excellence in All We Do: Is the Lure of New Technology Affecting How We Process Our Members’ Information

    DTIC Science & Technology

    2016-02-01

    The views expressed in this academic research paper are those of the author...is managed today is far too complex and riddled with risk. Why is a member's information duplicated across multiple disparate databases? To better...databases. The purpose of this paper is to provide a viable solution, within a given set of constraints, that the Air Force can implement. Utilizing the

  11. Distributed Access View Integrated Database (DAVID) system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described, followed by the 'books' and 'kits' level and the Universal Object Typer Management System level. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  12. Roadmap for the development of the University of North Carolina at Chapel Hill Genitourinary OncoLogy Database--UNC GOLD.

    PubMed

    Gallagher, Sarah A; Smith, Angela B; Matthews, Jonathan E; Potter, Clarence W; Woods, Michael E; Raynor, Mathew; Wallen, Eric M; Rathmell, W Kimryn; Whang, Young E; Kim, William Y; Godley, Paul A; Chen, Ronald C; Wang, Andrew; You, Chaochen; Barocas, Daniel A; Pruthi, Raj S; Nielsen, Matthew E; Milowsky, Matthew I

    2014-01-01

    The management of genitourinary malignancies requires a multidisciplinary care team composed of urologists, medical oncologists, and radiation oncologists. A genitourinary (GU) oncology clinical database is an invaluable resource for patient care and research. Although electronic medical records provide a single web-based record used for clinical care, billing, and scheduling, information is typically stored in a discipline-specific manner and data extraction is often not applicable to a research setting. A GU oncology database may be used for the development of multidisciplinary treatment plans, analysis of disease-specific practice patterns, and identification of patients for research studies. Despite the potential utility, there are many important considerations that must be addressed when developing and implementing a discipline-specific database. The creation of the GU oncology database including prostate, bladder, and kidney cancers with the identification of necessary variables was facilitated by meetings of stakeholders in medical oncology, urology, and radiation oncology at the University of North Carolina (UNC) at Chapel Hill with a template data dictionary provided by the Department of Urologic Surgery at Vanderbilt University Medical Center. Utilizing Research Electronic Data Capture (REDCap, version 4.14.5), the UNC Genitourinary OncoLogy Database (UNC GOLD) was designed and implemented. The process of designing and implementing a discipline-specific clinical database requires many important considerations. The primary consideration is determining the relationship between the database and the Institutional Review Board (IRB) given the potential applications for both clinical and research uses. Several other necessary steps include ensuring information technology security and federal regulation compliance; determination of a core complete dataset; creation of standard operating procedures; standardizing entry of free text fields; use of data exports, queries, and de-identification strategies; inclusion of individual investigators' data; and strategies for prioritizing specific projects and data entry. A discipline-specific database requires a buy-in from all stakeholders, meticulous development, and data entry resources to generate a unique platform for housing information that may be used for clinical care and research with IRB approval. The steps and issues identified in the development of UNC GOLD provide a process map for others interested in developing a GU oncology database. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Investigation of DBMS for Use in a Research Environment. Rand Paper Series 7002.

    ERIC Educational Resources Information Center

    Rosenfeld, Pilar N.

    This investigation of the use of database management systems (DBMS) in a research environment used the Rand Corporation as a case study. After a general introduction in section 1, eight sections present the major components of the study. Section 2 contains an overview of DBMS terminology and concepts, followed in section 3 by a general description…

  14. Information Management Tools for Classrooms: Exploring Database Management Systems. Technical Report No. 28.

    ERIC Educational Resources Information Center

    Freeman, Carla; And Others

    In order to understand how database software and online databases functioned in the overall curricula, the use of database management systems (DBMS) was studied at eight elementary and middle schools through classroom observation and interviews with teachers, administrators, librarians, and students. Three overall areas were addressed:…

  15. 78 FR 28756 - Defense Federal Acquisition Regulation Supplement: System for Award Management Name Changes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... Excluded Parties Listing System (EPLS) databases into the System for Award Management (SAM) database. DATES... combined the functional capabilities of the CCR, ORCA, and EPLS procurement systems into the SAM database... identification number and the type of organization from the System for Award Management database. 0 3. Revise the...

  16. An Examination of Selected Software Testing Tools: 1992

    DTIC Science & Technology

    1992-12-01

    ...historical test database, the test management and problem reporting tools were examined using the sample test database provided by each supplier. ...track the impact of new methods, organizational structures, and technologies. Metrics Manager is supported by an industry database that allows

  17. LIFE CYCLE MANAGEMENT OF MUNICIPAL SOLID WASTE

    EPA Science Inventory

    This is a large, complex project in which a number of different research activities are taking place concurrently to collect data, develop cost and LCI methodologies, construct a database and decision support tool, and conduct case studies with communities to support the life cyc...

  18. The Advent of Portals.

    ERIC Educational Resources Information Center

    Jackson, Mary E.

    2002-01-01

    Explains portals as tools that gather a variety of electronic information resources, including local library resources, into a single Web page. Highlights include cross-database searching; integration with university portals and course management software; the ARL (Association of Research Libraries) Scholars Portal Initiative; and selected vendors…

  19. Evaluation and implementation of BMPs for NCDOT's highway and industrial facilities : final report, May 2006.

    DOT National Transportation Integrated Search

    2006-05-01

    This research has provided NCDOT with (1) scientific observations to validate the pollutant removal performance of selected structural BMPs, (2) a database management option for BMP monitoring and non-monitoring sites, (3) pollution prevention pl...

  20. Developing a ubiquitous health management system with healthy diet control for metabolic syndrome healthcare in Taiwan.

    PubMed

    Kan, Yao-Chiang; Chen, Kai-Hong; Lin, Hsueh-Chun

    2017-06-01

    Self-management in healthcare allows patients to manage their health data anytime and anywhere for the prevention of chronic diseases. This study established a prototype ubiquitous health management system (UHMS) with healthy diet control (HDC) for people who need metabolic syndrome healthcare services in Taiwan. The system infrastructure comprises three portals and a database tier with mutually supportive components to achieve the functionality of diet diaries, nutrition guides, and health risk assessments for self-health management. With the diet, nutrition, and personal health database, the design enables analytical diagrams on the interactive interface to support a mobile application for diet diaries, a Web-based platform for health management, and modules of research and development for medical care. For database integrity, dietary data can be stored in offline mode prior to transfer between the mobile device and the server site in online mode. The UHMS-HDC was developed with open source technology for ubiquitous health management with personalized dietary criteria. The system integrates mobile, internet, and electronic healthcare services with the diet diary functions to manage users' healthy diet behaviors. Virtual patients were used to simulate the self-health management procedure, and the assessment functions were verified by capturing screen snapshots during the procedure. The proposed system was shown to be suitable for practical intervention. This approach details an expandable framework with collaborative components for the self-developed UHMS-HDC. The multi-disciplinary applications for self-health management can help healthcare professionals reduce medical resources and improve healthcare effects for patients who require monitoring of their personal health condition with diet control. The proposed system can be practiced for intervention in the hospital. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. A review of consumer involvement in evaluations of case management: consistency with a recovery paradigm.

    PubMed

    Marshall, Sarah L; Crowe, Trevor P; Oades, Lindsay G; Deane, Frank F; Kavanagh, David J

    2007-03-01

    This Open Forum examines research on case management that draws on consumer perspectives. It clarifies the extent of consumer involvement and whether evaluations were informed by recovery perspectives. Searches of three databases revealed 13 studies that sought to investigate consumer perspectives. Only one study asked consumers about experiences of recovery. Most evaluations did not adequately assess consumers' views, and active consumer participation in research was rare. Supporting an individual's recovery requires commitment to a recovery paradigm that incorporates traditional symptom reduction and improved functioning, with broader recovery principles, and a shift in focus from illness to well-being. It also requires greater involvement of consumers in the implementation of case management and ownership of their own recovery process, not just in research that evaluates the practice.

  2. USGS Nonindigenous Aquatic Species database with a focus on the introduced fishes of the lower Tennessee and Cumberland drainages

    USGS Publications Warehouse

    Fuller, Pamela L.; Cannister, Matthew; Johansen, Rebecca; Estes, L. Dwayne; Hamilton, Steven W.; Barrass, Andrew N.

    2013-01-01

    The Nonindigenous Aquatic Species (NAS) database (http://nas.er.usgs.gov) functions as a national repository and clearinghouse for occurrence data for introduced species within the United States. Included is locality information on over 1,100 species of vertebrates, invertebrates, and vascular plants introduced as early as 1850. Taxa include foreign (exotic) species and species native to North America that have been transported outside of their natural range. Locality data are obtained from published and unpublished literature, state, federal and local monitoring programs, museum accessions, on-line databases, websites, professional communications and on-line reporting forms. The NAS web site provides immediate access to new occurrence records through a real-time interface with the NAS database. Visitors to the web site are presented with a set of pre-defined queries that generate lists of species according to state or hydrologic basin of interest. Fact sheets, distribution maps, and information on new occurrences are updated as new records and information become available. The NAS database allows resource managers to learn of new introductions reported in their region or nearby regions, improving response time. Conversely, managers are encouraged to report their observations of new occurrences to the NAS database so information can be disseminated to other managers, researchers, and the public. In May 2004, the NAS database incorporated an Alert System to notify registered users of new introductions as part of a national early detection/rapid response system. Users can register to receive alerts based on geographic or taxonomic criteria. The NAS database was used to identify 23 fish species introduced into the lower Tennessee and Cumberland drainages. Most of these are sport fish stocked to support fisheries, but the list also includes accidental and illegal introductions such as Asian Carps, clupeids, various species popular in the aquarium trade, and Atlantic Needlefish (Strongylura marina) that was introduced via the newly-constructed Tennessee-Tombigbee Canal.

  3. Database Searching by Managers.

    ERIC Educational Resources Information Center

    Arnold, Stephen E.

    Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…

  4. Technology transfer at NASA - A librarian's view

    NASA Technical Reports Server (NTRS)

    Buchan, Ronald L.

    1991-01-01

    The NASA programs, publications, and services promoting the transfer and utilization of aerospace technology developed by and for NASA are briefly surveyed. Topics addressed include the corporate sources of NASA technical information and its interest for corporate users of information services; the IAA and STAR abstract journals; NASA/RECON, NTIS, and the AIAA Aerospace Database; the RECON Space Commercialization file; the Computer Software Management and Information Center file; company information in the RECON database; and services to small businesses. Also discussed are the NASA publications Tech Briefs and Spinoff, the Industrial Applications Centers, NASA continuing bibliographies on management and patent abstracts (indexed using the NASA Thesaurus), the Index to NASA News Releases and Speeches, and the Aerospace Research Information Network (ARIN).

  5. Mineral resources management based on GIS and RS: a case study of the Laozhaiwan Gold Mine

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Hua, Xianghong; Wang, Xinzhou; Ma, Liguang; Yuan, Yanbin

    2005-10-01

    With the development of digital information technology in the mining industry, the concepts of DM (Digital Mining) and MGIS (Mining Geographical Information System) have become a research focus, but they remain imperfect. How to effectively manage geological, surveying, and mineral-product-grade datasets is a key issue for sustainable development and standardized management in the mining industry. Based on existing combined GIS and remote sensing technology, we propose a model named DMMIS (Digital Mining Management Information System), which is composed of a database layer, an ActiveX layer, and a user interface layer. The system is used in the Laozhaiwan Gold Mine, Yunnan Province of China, demonstrating the feasibility of the research and development achievements stated in this paper. Finally, some conclusions and constructive advice for future research work are given.

  6. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.

  7. DoD Security Assistance Management Manual

    DTIC Science & Technology

    1988-10-01

    IDSS Administrator for U.S. Army Training Activities: TSASS Database Manager SATFA Attn: ATFA-I 2017 Cunningham Drive, 4th Floor Hampton VA 23666 DSN...Depot, Chambersburg, PA J. School of Engineering and Logistics, Red River Army Depot, Texarkana, TX K. Lone Star Ammunition Plant, Texarkana, TX L...Electronics Command, Ft. Monmouth, NJ U. Red River Army Depot, Texarkana, TX V. Army Aviation Research and Development Command, St. Louis, MO W

  8. Data management in clinical research: An overview

    PubMed Central

    Krishnankutty, Binny; Bellary, Shantala; Kumar, Naveen B.R.; Moodahadu, Latha S.

    2012-01-01

    Clinical Data Management (CDM) is a critical phase in clinical research that leads to the generation of high-quality, reliable, and statistically sound data from clinical trials, drastically reducing the time from drug development to marketing. Team members of CDM are actively involved in all stages of a clinical trial, from inception to completion, and should have adequate process knowledge to maintain the quality standards of CDM processes. Various procedures in CDM, including Case Report Form (CRF) design, CRF annotation, database design, data entry, data validation, discrepancy management, medical coding, data extraction, and database locking, are assessed for quality at regular intervals during a trial. In the present scenario, there is an increased demand to improve CDM standards to meet regulatory requirements and stay ahead of the competition through faster commercialization of the product. With the implementation of regulatory-compliant data management tools, the CDM team can meet these demands. Additionally, it is becoming mandatory for companies to submit data electronically. CDM professionals should meet appropriate expectations, set standards for data quality, and have a drive to adapt to rapidly changing technology. This article highlights the processes involved and provides the reader an overview of the tools and standards adopted, as well as the roles and responsibilities, in CDM. PMID:22529469
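
    The data validation and discrepancy management steps mentioned above reduce to a simple pattern: run edit checks over CRF records and queue failures as queries for the sites. The sketch below is hypothetical (field names and rules invented), not a description of any particular CDM tool.

        # Minimal sketch of a CDM-style edit check pass.
        records = [
            {"subject": "001", "visit": 1, "sbp": 128, "dob": "1962-04-30"},
            {"subject": "002", "visit": 1, "sbp": 400, "dob": ""},
        ]

        rules = [
            ("sbp out of range",   lambda r: not (60 <= r["sbp"] <= 260)),
            ("missing birth date", lambda r: not r["dob"]),
        ]

        discrepancies = [
            {"subject": r["subject"], "visit": r["visit"], "issue": issue}
            for r in records
            for issue, failed in rules
            if failed(r)
        ]
        print(discrepancies)  # queries to be resolved before database lock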

  9. ANTI-VIRAL EFFECTS OF MEDICINAL PLANTS IN THE MANAGEMENT OF DENGUE: A SYSTEMATIC REVIEW.

    PubMed

    Frederico, Éric Heleno Freira Ferreira; Cardoso, André Luiz Bandeira Dionísio; Moreira-Marconi, Eloá; de Sá-Caputo, Danúbia da Cunha; Guimarães, Carlos Alberto Sampaio; Dionello, Carla da Fontoura; Morel, Danielle Soares; Paineiras-Domingos, Laisa Liane; de Souza, Patricia Lopes; Brandão-Sobrinho-Neto, Samuel; Carvalho-Lima, Rafaelle Pacheco; Guedes-Aguiar, Eliane de Oliveira; Costa-Cavalcanti, Rebeca Graça; Kutter, Cristiane Ribeiro; Bernardo-Filho, Mario

    2017-01-01

    Dengue is considered an important arboviral disease. Safe, low-cost, and effective drugs that possess inhibitory activity against dengue virus (DENV) are greatly needed to combat dengue infection worldwide. Medicinal plants have been considered an important alternative for managing several diseases, including dengue. As authors have demonstrated the antiviral effect of medicinal plants against DENV, the aim of this study was to systematically review the published research concerning the use of medicinal plants in the management of dengue using the PubMed database. The search and selection of publications were made using the PubMed database following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA statement). Six publications met the inclusion criteria and were included in the final selection after thorough analysis. It is suggested that medicinal plant products could be used as potential anti-DENV agents.

  10. The effect of visualizing healthy eaters and mortality reminders on nutritious grocery purchases: an integrative terror management and prototype willingness analysis.

    PubMed

    McCabe, Simon; Arndt, Jamie; Goldenberg, Jamie L; Vess, Matthew; Vail, Kenneth E; Gibbons, Frederick X; Rogers, Ross

    2015-03-01

    To use insights from an integration of the terror management health model and the prototype willingness model to inform and improve nutrition-related behavior using an ecologically valid outcome. Prior to shopping, grocery shoppers were exposed to a reminder of mortality (or pain) and then visualized a healthy (vs. neutral) prototype. Receipts were collected post-shopping and the food items purchased were coded using a nutrition database. Compared with those in the control conditions, participants who received the mortality reminder and who were led to visualize a healthy eater prototype purchased more nutritious foods. The integration of the terror management health model and the prototype willingness model has the potential for both basic and applied advances and offers generative ground for future research. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  11. SeedStor: A Germplasm Information Management System and Public Database

    PubMed Central

    Horler, RSP; Turner, AS; Fretter, P; Ambrose, M

    2018-01-01

    Abstract SeedStor (https://www.seedstor.ac.uk) acts as the publicly available database for the seed collections held by the Germplasm Resources Unit (GRU) based at the John Innes Centre, Norwich, UK. The GRU is a national capability supported by the Biotechnology and Biological Sciences Research Council (BBSRC). The GRU curates germplasm collections of a range of temperate cereal, legume and Brassica crops and their associated wild relatives, as well as precise genetic stocks, near-isogenic lines and mapping populations. With >35,000 accessions, the GRU forms part of the UK’s plant conservation contribution to the Multilateral System (MLS) of the International Treaty for Plant Genetic Resources for Food and Agriculture (ITPGRFA) for wheat, barley, oat and pea. SeedStor is a fully searchable system that allows our various collections to be browsed species by species through to complicated multipart phenotype criteria-driven queries. The results from these searches can be downloaded for later analysis or used to order germplasm via our shopping cart. The user community for SeedStor is the plant science research community, plant breeders, specialist growers, hobby farmers and amateur gardeners, and educationalists. Furthermore, SeedStor is much more than a database; it has been developed to act internally as a Germplasm Information Management System that allows team members to track and process germplasm requests, determine regeneration priorities, handle cost recovery and Material Transfer Agreement paperwork, manage the Seed Store holdings and easily report on a wide range of the aforementioned tasks. PMID:29228298

  12. GCE Data Toolbox for MATLAB - a software framework for automating environmental data processing, quality control and documentation

    NASA Astrophysics Data System (ADS)

    Sheldon, W.; Chamblee, J.; Cary, R. H.

    2013-12-01

    Environmental scientists are under increasing pressure from funding agencies and journal publishers to release quality-controlled data in a timely manner, as well as to produce comprehensive metadata for submitting data to long-term archives (e.g. DataONE, Dryad and BCO-DMO). At the same time, the volume of digital data that researchers collect and manage is increasing rapidly due to advances in high frequency electronic data collection from flux towers, instrumented moorings and sensor networks. However, few pre-built software tools are available to meet these data management needs, and those tools that do exist typically focus on part of the data management lifecycle or one class of data. The GCE Data Toolbox has proven to be both a generalized and effective software solution for environmental data management in the Long Term Ecological Research Network (LTER). This open source MATLAB software library, developed by the Georgia Coastal Ecosystems LTER program, integrates metadata capture, creation and management with data processing, quality control and analysis to support the entire data lifecycle. Raw data can be imported directly from common data logger formats (e.g. SeaBird, Campbell Scientific, YSI, Hobo), as well as delimited text files, MATLAB files and relational database queries. Basic metadata are derived from the data source itself (e.g. parsed from file headers) and by value inspection, and then augmented using editable metadata templates containing boilerplate documentation, attribute descriptors, code definitions and quality control rules. Data and metadata content, quality control rules and qualifier flags are then managed together in a robust data structure that supports database functionality and ensures data validity throughout processing. A growing suite of metadata-aware editing, quality control, analysis and synthesis tools are provided with the software to support managing data using graphical forms and command-line functions, as well as developing automated workflows for unattended processing. Finalized data and structured metadata can be exported in a wide variety of text and MATLAB formats or uploaded to a relational database for long-term archiving and distribution. The GCE Data Toolbox can be used as a complete, light-weight solution for environmental data and metadata management, but it can also be used in conjunction with other cyber infrastructure to provide a more comprehensive solution. For example, newly acquired data can be retrieved from a Data Turbine or Campbell LoggerNet Database server for quality control and processing, then transformed to CUAHSI Observations Data Model format and uploaded to a HydroServer for distribution through the CUAHSI Hydrologic Information System. The GCE Data Toolbox can also be leveraged in analytical workflows developed using Kepler or other systems that support MATLAB integration or tool chaining. This software can therefore be leveraged in many ways to help researchers manage, analyze and distribute the data they collect.
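
    The toolbox itself is MATLAB, but the metadata-driven quality control idea carries over to any language. Purely as an illustration, the sketch below applies template-defined range rules to tabular data in Python with pandas; the template fields, column names, and flag convention are invented for the example.

        import pandas as pd

        template = {
            "water_temp_c": {"units": "degC", "qc_rules": [("range", -2.0, 40.0)]},
            "salinity_psu": {"units": "PSU",  "qc_rules": [("range",  0.0, 42.0)]},
        }

        data = pd.DataFrame({"water_temp_c": [12.1, 55.0, 13.4],
                             "salinity_psu": [30.2, 31.0, -1.0]})

        flags = pd.DataFrame("", index=data.index, columns=data.columns)
        for column, meta in template.items():
            for rule, low, high in meta["qc_rules"]:
                if rule == "range":
                    bad = (data[column] < low) | (data[column] > high)
                    flags.loc[bad, column] = "Q"  # mark questionable values

        print(flags)  # qualifier flags travel with the data, as in the toolbox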

  13. Managed care and inpatient mortality in adults: effect of primary payer.

    PubMed

    Hines, Anika L; Raetzman, Susan O; Barrett, Marguerite L; Moy, Ernest; Andrews, Roxanne M

    2017-02-08

    Because managed care is increasingly prevalent in health care finance and delivery, it is important to ascertain its effects on health care quality relative to that of fee-for-service plans. Some stakeholders are concerned that basing gatekeeping, provider selection, and utilization management on cost may lower quality of care. To date, research on this topic has been inconclusive, largely because of variation in research methods and covariates. Patient age has been the only consistently evaluated outcome predictor. This study provides a comprehensive assessment of the association between managed care and inpatient mortality for Medicare and privately insured patients. A cross-sectional design was used to examine the association between managed care and inpatient mortality for four common inpatient conditions. Data from the 2009 Healthcare Cost and Utilization Project State Inpatient Databases for 11 states were linked to data from the American Hospital Association Annual Survey Database. Hospital discharges were categorized as managed care or fee for service. A phased approach to multivariate logistic modeling examined the likelihood of inpatient mortality when adjusting for individual patient and hospital characteristics and for county fixed effects. Results showed different effects of managed care for Medicare and privately insured patients. Privately insured patients in managed care had an advantage over their fee-for-service counterparts in inpatient mortality for acute myocardial infarction, stroke, pneumonia, and congestive heart failure; no such advantage was found for the Medicare managed care population. To the extent that the study showed a protective effect of privately insured managed care, it was driven by individuals aged 65 years and older, who had consistently better outcomes than their non-managed care counterparts. Privately insured patients in managed care plans, especially older adults, had better outcomes than those in fee-for-service plans. Patients in Medicare managed care had outcomes similar to those in Medicare FFS. Additional research is needed to understand the role of patient selection, hospital quality, and differences among county populations in the decreased odds of inpatient mortality among patients in private managed care and to determine why this result does not hold for Medicare.

  14. Prognostic and health management of active assets in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Lybeck, Nancy; Pham, Binh T.

    This study presents the development of diagnostic and prognostic capabilities for active assets in nuclear power plants (NPPs). The research was performed under the Advanced Instrumentation, Information, and Control Technologies Pathway of the Light Water Reactor Sustainability Program. Idaho National Laboratory researched, developed, implemented, and demonstrated diagnostic and prognostic models for generator step-up transformers (GSUs). The Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software developed by the Electric Power Research Institute was used to perform diagnosis and prognosis. As part of the research activity, Idaho National Laboratory implemented 22 GSU diagnostic models in the Asset Fault Signature Database and two well-established GSU prognostic models for the paper winding insulation in the Remaining Useful Life Database of the FW-PHM Suite. The implemented models, along with a simulated fault data stream, were used to evaluate the diagnostic and prognostic capabilities of the FW-PHM Suite. Knowledge of the operating condition of plant assets gained from diagnosis and prognosis is critical for the safe, productive, and economical long-term operation of the current fleet of NPPs. This research addresses some of the gaps in the current state of technology development and enables effective application of diagnostics and prognostics to nuclear plant assets.

  15. Prognostic and health management of active assets in nuclear power plants

    DOE PAGES

    Agarwal, Vivek; Lybeck, Nancy; Pham, Binh T.; ...

    2015-06-04

    This study presents the development of diagnostic and prognostic capabilities for active assets in nuclear power plants (NPPs). The research was performed under the Advanced Instrumentation, Information, and Control Technologies Pathway of the Light Water Reactor Sustainability Program. Idaho National Laboratory researched, developed, implemented, and demonstrated diagnostic and prognostic models for generator step-up transformers (GSUs). The Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software developed by the Electric Power Research Institute was used to perform diagnosis and prognosis. As part of the research activity, Idaho National Laboratory implemented 22 GSU diagnostic models in the Asset Fault Signature Database and two well-established GSU prognostic models for the paper winding insulation in the Remaining Useful Life Database of the FW-PHM Suite. The implemented models, along with a simulated fault data stream, were used to evaluate the diagnostic and prognostic capabilities of the FW-PHM Suite. Knowledge of the operating condition of plant assets gained from diagnosis and prognosis is critical for the safe, productive, and economical long-term operation of the current fleet of NPPs. This research addresses some of the gaps in the current state of technology development and enables effective application of diagnostics and prognostics to nuclear plant assets.

  16. A global building inventory for earthquake loss estimation and risk management

    USGS Publications Warehouse

    Jaiswal, K.; Wald, D.; Porter, K.

    2010-01-01

    We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat's demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature. © 2010, Earthquake Engineering Research Institute.

  17. World Ocean Database and the Global Temperature and Salinity Profile Program Database: Synthesis of historical and near real-time ocean profile data

    NASA Astrophysics Data System (ADS)

    Boyer, T.; Sun, L.; Locarnini, R. A.; Mishonov, A. V.; Hall, N.; Ouellet, M.

    2016-02-01

    The World Ocean Database (WOD) contains systematically quality-controlled historical and recent ocean profile data (temperature, salinity, oxygen, nutrients, carbon cycle variables, biological variables) ranging from Captain Cook's second voyage (1773) to this year's Argo floats. The US National Centers for Environmental Information (NCEI) also hosts the Global Temperature and Salinity Profile Program (GTSPP) Continuously Managed Database (CMD), which provides quality-controlled near-real-time ocean profile data and higher-level quality-controlled temperature and salinity profiles from 1990 to the present. Both databases are used extensively for ocean and climate studies. Synchronization of these two databases will allow easier access to and use of comprehensive regional and global ocean profile data sets for ocean and climate studies. Synchronization consists of two distinct phases: 1) a retrospective comparison of data in WOD and GTSPP to ensure that the most comprehensive and highest quality data set is available to researchers without the need to individually combine and contrast the two datasets, and 2) web services to allow the constantly accruing near-real-time data in the GTSPP CMD and the continuous addition and quality control of historical data in WOD to be made available to researchers together, seamlessly.

  18. Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses

    PubMed Central

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are difficulty precisely recording information about complicated analytical experiments (metadata), existence of various databases with their own metadata descriptions, and low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/. PMID:25905099
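
    The essence of the TogoMD ID system is that every level of the metadata tree is separately addressable. The sketch below is a loose, hypothetical rendering of that idea; the ID syntax and field names are invented, not the actual TogoMD format.

        # Tree-structured metadata: study -> sample -> analytical method -> analysis.
        metadata = {
            "SE1": {"purpose": "salt-stress response", "samples": {
                "SE1_S01": {"species": "Oryza sativa", "methods": {
                    "SE1_S01_M01": {"platform": "LC-MS", "analyses": {
                        "SE1_S01_M01_D01": {"file": "peaks.tsv"},
                    }},
                }},
            }},
        }

        def resolve(tree, hier_id):
            # Walk the tree to the record addressed by a hierarchical ID,
            # e.g. "SE1_S01_M01" -> study SE1 / sample S01 / method M01.
            parts = hier_id.split("_")
            levels = ["samples", "methods", "analyses"]
            node = tree[parts[0]]
            for depth in range(1, len(parts)):
                node = node[levels[depth - 1]]["_".join(parts[:depth + 1])]
            return node

        print(resolve(metadata, "SE1_S01_M01"))  # the analytical-method record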

  19. Metabolonote: a wiki-based database for managing hierarchical metadata of metabolome analyses.

    PubMed

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics - technology for comprehensive detection of small molecules in an organism - lags behind the other "omics" in terms of publication and dissemination of experimental data. Among the reasons for this are difficulty precisely recording information about complicated analytical experiments (metadata), existence of various databases with their own metadata descriptions, and low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called "Togo Metabolome Data" (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.

  20. A multidisciplinary database for geophysical time series management

    NASA Astrophysics Data System (ADS)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the sampling period or sampling frequency. Our work describes in detail a methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization provides the ability to perform operations, such as queries and visualization, over many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
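
    The standardization step is the crux: heterogeneous series must land on a common time scale before they can be queried or plotted together. A minimal, hypothetical sketch in Python with pandas (signal names and sampling rates invented):

        import pandas as pd

        seismic = pd.Series([1.0, 1.2, 0.9],
                            index=pd.date_range("2013-01-01", periods=3, freq="10s"))
        infrasound = pd.Series([4.0, 4.4],
                               index=pd.date_range("2013-01-01", periods=2, freq="15s"))

        # Resample both series onto one 5-second grid so they share a time scale.
        common = pd.DataFrame({
            "seismic": seismic.resample("5s").interpolate(),
            "infrasound": infrasound.resample("5s").interpolate(),
        })
        print(common)  # one table, one time scale, ready for joint query/plotting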

  1. Global capacity, potentials and trends of solid waste research and management.

    PubMed

    Nwachukwu, Michael A; Ronald, Mersky; Feng, Huan

    2017-09-01

    In this study, the United States, China, India, the United Kingdom, Nigeria, Egypt, Brazil, Italy, Germany, Taiwan, Australia, Canada and Mexico were selected to represent the global community. This enabled an overview of solid waste management worldwide and between developed and developing countries. These are the countries that feature most in the International Conference on Solid Waste Technology and Management (ICSW) over the past 20 years. A total of 1452 articles directly on solid waste management and technology were reviewed and credited to their original country of research. Results show significant solid waste research potential globally, with the United States leading with 373 articles, followed by India with 230 articles. The rest of the countries are ranked in the order: UK > Taiwan > Brazil > Nigeria > Italy > Japan > China > Canada > Germany > Mexico > Egypt > Australia. Global capacity in solid waste management options is in the order: waste characterisation/management > waste biotech/composting > waste to landfill > waste recovery/reduction > waste in construction > waste recycling > waste treatment-reuse-storage > waste to energy > waste dumping > waste education/public participation/policy. It is observed that solid waste research potential is not a measure of solid waste management capacity. The results show more significant research impacts on solid waste management in developed countries than in developing countries, where economic, technological and societal factors are not strong. This article aims to motivate similar studies in each country, using solid waste research articles from other streamed databases to measure research impacts on solid waste management.

  2. An Online Library Catalogue.

    ERIC Educational Resources Information Center

    Alloro, Giovanna; Ugolini, Donatella

    1992-01-01

    Describes the implementation of an online catalog in the library of the National Institute for Cancer Research and the Clinical and Experimental Oncology Institute of the University of Genoa. Topics addressed include automation of various library functions, software features, database management, training, and user response. (10 references) (MES)

  3. epiPATH: an information system for the storage and management of molecular epidemiology data from infectious pathogens.

    PubMed

    Amadoz, Alicia; González-Candelas, Fernando

    2007-04-20

    Most research scientists working in the fields of molecular epidemiology, population and evolutionary genetics are confronted with the management of large volumes of data. Moreover, the data used in studies of infectious diseases are complex and usually derive from different institutions such as hospitals or laboratories. Since no public database scheme incorporating clinical and epidemiological information about patients and molecular information about pathogens is currently available, we have developed an information system, composed of a main database and a web-based interface, which integrates both types of data and satisfies requirements of good organization, simple accessibility, data security and multi-user support. From the moment a patient arrives at a hospital or health centre until the molecular sequences obtained from infectious pathogens are processed and analysed in the laboratory, a great deal of information is collected from different sources. We have divided the most relevant data into 12 conceptual modules around which we have organized the database schema. The schema is comprehensive, covering sample sources, samples, laboratory processes, molecular sequences, phylogenetic results, clinical tests and results, clinical information, treatments, pathogens, transmissions, outbreaks and bibliographic information. Communication between end users and the selected Relational Database Management System (RDBMS) is carried out by default through a command-line window or through a user-friendly, web-based interface which provides access and management tools for the data. epiPATH is an information system for managing clinical and molecular information from infectious diseases. It facilitates daily work related to infectious pathogens and the sequences obtained from them. This software is intended for local installation in order to safeguard private data, and gives advanced SQL users the flexibility to adapt it to their needs. The database schema, tool scripts and web-based interface are free software, but data stored on our database server are not publicly available. epiPATH is distributed under the terms of the GNU General Public License. More details about epiPATH can be found at http://genevo.uv.es/epipath.
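
    As a flavor of the modular design, here is a minimal Python/SQLite sketch of how three of the twelve conceptual modules might map onto relational tables; all table and column names are hypothetical, and the real epiPATH schema is far richer.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE patient (
            id INTEGER PRIMARY KEY, hospital TEXT, admitted DATE);
        CREATE TABLE sample (
            id INTEGER PRIMARY KEY,
            patient_id INTEGER REFERENCES patient(id),
            pathogen TEXT, collected DATE);
        CREATE TABLE sequence (
            id INTEGER PRIMARY KEY,
            sample_id INTEGER REFERENCES sample(id),
            gene TEXT, bases TEXT);
        """)

        # A typical integrated query: molecular sequences joined with their
        # clinical context, the combination the abstract emphasises.
        rows = conn.execute("""
            SELECT p.hospital, s.pathogen, q.gene
            FROM sequence q
            JOIN sample  s ON q.sample_id = s.id
            JOIN patient p ON s.patient_id = p.id
        """).fetchall()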

  4. Toward Soil Spatial Information Systems (SSIS) for global modeling and ecosystem management

    NASA Technical Reports Server (NTRS)

    Baumgardner, Marion F.

    1995-01-01

    The general objective is to conduct research to contribute toward the realization of a world soils and terrain (SOTER) database, which can stand alone or be incorporated into a more complete and comprehensive natural resources digital information system. The following specific objectives are focussed on: (1) to conduct research related to (a) translation and correlation of different soil classification systems to the SOTER database legend and (b) the interfacing of disparate data sets in support of the SOTER Project; (2) to examine the potential use of AVHRR (Advanced Very High Resolution Radiometer) data for delineating meaningful soils and terrain boundaries for small-scale soil survey (range of scale: 1:250,000 to 1:1,000,000) and terrestrial ecosystem assessment and monitoring; and (3) to determine the potential use of high-dimensional spectral data (220 reflectance bands with 10 m spatial resolution) for delineating meaningful soils boundaries and conditions for the purpose of detailed soil survey and land management.

  5. Insect barcode information system.

    PubMed

    Pratheepa, Maria; Jalali, Sushil Kumar; Arokiaraj, Robinson Silvester; Venkatesan, Thiruvengadam; Nagesh, Mandadi; Panda, Madhusmita; Pattar, Sharath

    2014-01-01

    The Insect Barcode Information System, called Insect Barcode Informática (IBIn), is an online database resource developed by the National Bureau of Agriculturally Important Insects, Bangalore. This database provides acquisition, storage, analysis and publication of DNA barcode records of agriculturally important insects, for researchers in India and other countries. It bridges a gap in bioinformatics by integrating molecular, morphological and distribution details of agriculturally important insects. IBIn was developed in PHP/MySQL using relational database management concepts. The database is based on a client-server architecture, in which many clients can access data simultaneously. IBIn is freely available online and is user-friendly. IBIn allows registered users to input new information, and to search and view information related to DNA barcodes of agriculturally important insects. This paper provides the current status of insect barcoding in India and a brief introduction to the IBIn database. http://www.nabg-nbaii.res.in/barcode.

  6. An online database for IHN virus in Pacific Salmonid fish: MEAP-IHNV

    USGS Publications Warehouse

    Kurath, Gael

    2012-01-01

    The MEAP-IHNV database provides access to detailed data for anyone interested in IHNV molecular epidemiology, such as fish health professionals, fish culture facility managers, and academic researchers. The flexible search capabilities enable the user to generate various output formats, including tables and maps, which should assist users in developing and testing hypotheses about how IHNV moves across landscapes and changes over time. The MEAP-IHNV database is available online at http://gis.nacse.org/ihnv/ (fig. 1). The database contains records that provide background information and genetic sequencing data for more than 1,000 individual field isolates of the fish virus Infectious hematopoietic necrosis virus (IHNV), and is updated approximately annually. It focuses on IHNV isolates collected throughout western North America from 1966 to the present. The database also includes a small number of IHNV isolates from Eastern Russia. By engaging the expertise of the broader community of colleagues interested in IHNV, our goal is to enhance the overall understanding of IHNV epidemiology, including defining sources of disease outbreaks and viral emergence events, identifying virus traffic patterns and potential reservoirs, and understanding how human management of salmonid fish culture affects disease. Ultimately, this knowledge can be used to develop new strategies to reduce the effect of IHN disease in cultured and wild fish.

  7. USGS cold-water coral geographic database-Gulf of Mexico and western North Atlantic Ocean, version 1.0

    USGS Publications Warehouse

    Scanlon, Kathryn M.; Waller, Rhian G.; Sirotek, Alexander R.; Knisel, Julia M.; O'Malley, John; Alesandrini, Stian

    2010-01-01

    The USGS Cold-Water Coral Geographic Database (CoWCoG) provides a tool for researchers and managers interested in studying, protecting, and/or utilizing cold-water coral habitats in the Gulf of Mexico and western North Atlantic Ocean.  The database makes information about the locations and taxonomy of cold-water corals available to the public in an easy-to-access form while preserving the scientific integrity of the data.  The database includes over 1700 entries, mostly from published scientific literature, museum collections, and other databases.  The CoWCoG database is easy to search in a variety of ways, and data can be quickly displayed in table form and on a map by using only the software included with this publication.  Subsets of the database can be selected on the basis of geographic location, taxonomy, or other criteria and exported to one of several available file formats.  Future versions of the database are being planned to cover a larger geographic area and additional taxa.

  8. Translation from the collaborative OSM database to cartography

    NASA Astrophysics Data System (ADS)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items that are very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is in development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The research project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector-tile web map and a mapping method to produce paper maps at a regional scale. The vector-tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn; the drawing automation and data management are part of the map creation, as is the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.
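
    As a toy illustration of this kind of tag-to-style translation, the Python sketch below maps OSM tags to invented cartographic classes; the rules and class names are assumptions for illustration and do not reflect Michelin's actual guidelines.

        # Hypothetical translation rules from OSM tags to a cartographic
        # style class; the real pipeline is far richer and also resolves
        # semantic heterogeneities in the tagging.
        STYLE_RULES = [
            ({"highway": "motorway"}, "road_major"),
            ({"highway": "primary"}, "road_primary"),
            ({"tourism": "attraction"}, "poi_tourism"),
            ({"building": None}, "building_footprint"),  # None = any value
        ]

        def style_class(tags):
            for rule, cls in STYLE_RULES:
                if all((v is None and k in tags) or tags.get(k) == v
                       for k, v in rule.items()):
                    return cls
            return "unstyled"

        print(style_class({"highway": "motorway", "ref": "A6"}))  # road_major
        print(style_class({"building": "yes"}))        # building_footprint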

  9. Database citation in supplementary data linked to Europe PubMed Central full text biomedical articles.

    PubMed

    Kafkas, Şenay; Kim, Jee-Hyub; Pi, Xingjun; McEntyre, Johanna R

    2015-01-01

    In this study, we present an analysis of data citation practices in full text research articles and their corresponding supplementary data files, made available in the Open Access set of articles from Europe PubMed Central. Our aim is to investigate whether supplementary data files should be considered as a source of information for integrating the literature with biomolecular databases. Using text-mining methods to identify and extract a variety of core biological database accession numbers, we found that the supplemental data files contain many more database citations than the body of the article, and that those citations often take the form of a relatively small number of articles citing large collections of accession numbers in text-based files. Moreover, citation of value-added databases derived from submission databases (such as Pfam, UniProt or Ensembl) is common, demonstrating the reuse of these resources as datasets in themselves. All the database accession numbers extracted from the supplementary data are publicly accessible from http://dx.doi.org/10.5281/zenodo.11771. Our study suggests that supplementary data should be considered when linking articles with data, in curation pipelines, and in information retrieval tasks in order to make full use of the entire research article. These observations highlight the need to improve the management of supplemental data in general, in order to make this information more discoverable and useful.
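
    As a rough illustration of this kind of extraction, the Python sketch below scans text with simplified accession-number patterns; the patterns are illustrative assumptions only, and the actual pipeline used a broader, curated set of database-specific rules.

        import re

        # Simplified, illustrative patterns; real accession grammars are
        # broader (e.g. UniProt's full regular expression has two branches).
        PATTERNS = {
            "uniprot": re.compile(r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b"),
            "genbank": re.compile(r"\b[A-Z]{1,2}[0-9]{5,6}\b"),
        }

        def extract_accessions(text):
            """Return {database: sorted candidate accessions found in text}."""
            return {db: sorted(set(p.findall(text)))
                    for db, p in PATTERNS.items()}

        supplement = "Alignments used P12345 and sequences AB123456, AB123457."
        print(extract_accessions(supplement))
        # {'genbank': ['AB123456', 'AB123457', 'P12345'],
        #  'uniprot': ['P12345']}
        # Note the overlap: in practice candidates must be disambiguated
        # against the source databases before they are counted as citations.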

  10. Nuclear Energy Infrastructure Database Description and User’s Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE’s infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  11. Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    2014-06-01

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on the development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features, based on technical examinations, that can be used to detect a specific fault type. At the most basic level, fault signatures comprise an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.
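
    A minimal Python sketch of the signature structure just described (asset type, fault type, and a set of fault features); the matching rule is a placeholder for illustration, not the Diagnostic Advisor's actual logic, and all names are assumptions.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class FaultFeature:
            name: str    # observable symptom, e.g. "coolant temperature rise"
            source: str  # signal or technical examination that reveals it

        @dataclass
        class AssetFaultSignature:
            asset_type: str              # e.g. "emergency diesel generator"
            fault_type: str              # the specific fault to be detected
            features: List[FaultFeature] = field(default_factory=list)

            def matches(self, observed_symptoms):
                # Placeholder rule: fire when every feature is observed.
                return {f.name for f in self.features} <= set(observed_symptoms)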

  12. Factors contributing to managerial competence of first-line nurse managers: A systematic review.

    PubMed

    Gunawan, Joko; Aungsuroch, Yupin; Fisher, Mary L

    2018-02-01

    To determine the factors contributing to the managerial competence of first-line nurse managers. Understanding factors affecting the managerial competence of nurse managers remains important for increasing the performance of organizations; however, there is sparse research examining factors that influence the managerial competence of first-line nurse managers. Systematic review. The search, conducted from April to July 2017, included 6 electronic databases: Science Direct, PROQUEST Dissertations and Theses, MEDLINE, CINAHL, EMBASE, and Google Scholar, for the years 2000 to 2017 with full text in English. Quantitative and qualitative research papers that examined relationships between managerial competence and antecedent factors were included. Quality assessment, data extraction, and analysis were completed on all included studies. Content analysis was used to categorize factors into themes. Eighteen influencing factors were examined and categorized into 3 themes: organizational factors, characteristics and personality traits of individual managers, and role factors. Findings suggest that the managerial competence of first-line nurse managers is multifactorial. Further research is needed to develop strategies for developing the managerial competence of first-line nurse managers. © 2017 John Wiley & Sons Australia, Ltd.

  13. Building Databases for Education. ERIC Digest.

    ERIC Educational Resources Information Center

    Klausmeier, Jane A.

    This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…

  14. MyMolDB: a micromolecular database solution with open source and free components.

    PubMed

    Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan

    2011-10-01

    Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications; the open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is mainly implemented in the scripting language Python, with a web-based interface for compound management and searching. Almost all of the searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine. Impressive searching speed has thus been achieved on large data sets, because no external CPU-consuming languages are involved in the key steps of the search. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
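
    A minimal sketch of the pure-SQL idea, assuming fingerprints folded into a single integer column with a precomputed bit count and a user-defined popcount function; MyMolDB's actual schema and queries may differ, and all names here are assumptions.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE molecule (
            id INTEGER PRIMARY KEY,
            smiles TEXT,        -- canonical SMILES for exact search
            fp INTEGER,         -- toy fingerprint folded into one integer
            fp_bits INTEGER)    -- precomputed popcount of fp""")
        conn.execute("INSERT INTO molecule VALUES (1, 'c1ccccc1', 45, 4)")

        conn.create_function("popcount", 1, lambda x: bin(x).count("1"))

        def similarity_search(query_fp, threshold=0.7):
            # Tanimoto = |A&B| / (|A| + |B| - |A&B|), evaluated inside SQL
            # so the database engine does the heavy lifting.
            qbits = bin(query_fp).count("1")
            return conn.execute(
                "SELECT id, smiles, tanimoto FROM ("
                "  SELECT id, smiles,"
                "    CAST(popcount(fp & :q) AS REAL)"
                "      / (fp_bits + :qb - popcount(fp & :q)) AS tanimoto"
                "  FROM molecule)"
                " WHERE tanimoto >= :t ORDER BY tanimoto DESC",
                {"q": query_fp, "qb": qbits, "t": threshold}).fetchall()

        print(similarity_search(41))   # 41 = 0b101001, Tanimoto 0.75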

  15. Correlates of quality of life among individuals with epilepsy enrolled in self-management research: From the US Centers for Disease Control and Prevention Managing Epilepsy Well Network.

    PubMed

    Sajatovic, Martha; Tatsuoka, Curtis; Welter, Elisabeth; Friedman, Daniel; Spruill, Tanya M; Stoll, Shelley; Sahoo, Satya S; Bukach, Ashley; Bamps, Yvan A; Valdez, Joshua; Jobst, Barbara C

    2017-04-01

    Epilepsy is a chronic neurological condition that places a substantial burden on patients and families. Quality of life may be reduced due to the stress of coping with epilepsy. For nearly a decade, the Centers for Disease Control and Prevention (CDC) Managing Epilepsy Well (MEW) Network has been conducting research on epilepsy self-management to address research and practice gaps. Studies have been conducted by independent centers across the U.S. Recently, the MEW Network sites collaboratively began compiling an integrated database to facilitate aggregate secondary analysis of completed and ongoing studies. In this preliminary analysis, correlates of quality of life in people with epilepsy (PWE) were analyzed from pooled baseline data from the MEW Network. For this analysis, data originated from 6 epilepsy studies conducted across 4 research sites and comprised 459 PWE. Descriptive comparisons assessed common data elements that included gender, age, ethnicity, race, education, employment, income, seizure frequency, quality of life, and depression. Standardized rating scales were used for quality of life (QOLIE-10) and for depression (Patient Health Questionnaire, PHQ-9). While not all datasets included all common data elements, baseline descriptive analysis found a mean age of 42 (SD 13.22), 289 women (63.0%), 59 African Americans (13.7%), and 58 Hispanics (18.5%). Most, 422 (92.8%), had completed at least high school, while 169 (61.7%) were unmarried, divorced/separated, or widowed. Median 30-day seizure frequency was 0.71 (range 0-308). Depression at baseline was common, with a mean PHQ-9 score of 8.32 (SD 6.04); 69 (29.0%) had depression in the mild range (PHQ-9 score 5-9) and 92 (38.7%) had depression in the moderate to severe range (PHQ-9 score >9). Lower baseline quality of life was associated with greater depressive severity (p<.001), more frequent seizures (p<.04) and lower income (p<.05). The MEW Network Integrated Database offers a unique opportunity for secondary analysis of data from multiple community-based epilepsy research studies. While the findings must be tempered by potential sample bias, i.e., a relative under-representation of men and relatively small samples of some racial/ethnic subgroups, results of analyses derived from this first integrated epilepsy self-management database have the potential to be useful to the field. Associations between depression severity and lower QOL in PWE are consistent with previous studies derived from clinical samples. Self-management efforts that focus on mental health comorbidity and seizure control may be one way to address modifiable factors that affect quality of life in PWE. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Validation of chronic obstructive pulmonary disease (COPD) diagnoses in healthcare databases: a systematic review protocol.

    PubMed

    Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro

    2016-06-01

    Healthcare databases are useful sources for investigating the epidemiology of chronic obstructive pulmonary disease (COPD), assessing longitudinal outcomes in patients with COPD, and developing disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system; the Read codes system or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) 2015 statement. Ethics approval is not required. The results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating healthcare databases. CRD42015029204. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  17. EMR Database Upgrade from MUMPS to CACHE: Lessons Learned.

    PubMed

    Alotaibi, Abduallah; Emshary, Mshary; Househ, Mowafa

    2014-01-01

    Over the past few years, Saudi hospitals have been implementing and upgrading Electronic Medical Record systems (EMRs) to ensure secure data transfer and exchange between EMRs. This paper focuses on the process of, and lessons learned in, upgrading a MUMPS database to the newer Caché database to ensure the integrity of electronic data transfer within a local Saudi hospital. This paper examines the steps taken by the departments concerned, their action plans and how the change process was managed. Results show that user satisfaction was achieved after the upgrade was completed. The system was stable and offered better healthcare quality to patients as a result of the data exchange. Hardware infrastructure upgrades improved scalability, and the software upgrade to Caché improved stability. Overall performance was enhanced and new functions (CPOE) were added during the upgrades. The lessons learned were: 1) involve higher management; 2) research the multiple solutions available in the market; 3) plan for a variety of implementation scenarios.

  18. The STP (Solar-Terrestrial Physics) Semantic Web based on the RSS1.0 and the RDF

    NASA Astrophysics Data System (ADS)

    Kubo, T.; Murata, K. T.; Kimura, E.; Ishikura, S.; Shinohara, I.; Kasaba, Y.; Watari, S.; Matsuoka, D.

    2006-12-01

    In Solar-Terrestrial Physics (STP), it has been pointed out that the circulation and utilization of observation data among researchers are insufficient. To achieve interdisciplinary research, we need to overcome these circulation and utilization problems. Against this background, the authors' group has developed a world-wide database that manages meta-data of satellite and ground-based observation data files. Until now, retrieving meta-data from the observation data and registering them in the database have been carried out by hand. Our goal is to establish the STP Semantic Web. The Semantic Web provides a common framework that allows a variety of data to be shared and reused across applications, enterprises, and communities. We also expect that secondary information related to observations, such as event information and associated news, will be shared over the networks. The most fundamental issue is who generates, manages and provides meta-data in the Semantic Web. We developed an automatic meta-data collection system for observation data using RSS (RDF Site Summary) 1.0. RSS1.0 is an XML-based markup language based on the RDF (Resource Description Framework), designed for syndicating news and the contents of news-like sites. RSS1.0 is used to describe STP meta-data, such as data file name, file server address and observation date. To describe STP meta-data beyond the RSS1.0 vocabulary, we defined original vocabularies for STP resources using RDF Schema. The RDF describes technical terms of the STP along with the Dublin Core Metadata Element Set, a standard for cross-domain information resource descriptions. Researchers' information on the STP is described with FOAF, an RDF/XML vocabulary for machine-readable metadata about people. Using RSS1.0 as the meta-data distribution method, the workflow from retrieving meta-data to registering them in the database is automated. This technique has been applied to several database systems, such as the DARTS database system and the NICT Space Weather Report Service. DARTS is a science database managed by ISAS/JAXA in Japan. We succeeded in automatically generating and collecting meta-data for CDF (Common Data Format) data, such as Reimei satellite data, provided by DARTS. We also created an RDF service for space weather reports and real-time global MHD simulation 3D data provided by NICT. Our Semantic Web system works as follows: the RSS1.0 documents generated at the data sites (ISAS and NICT) are automatically collected by a meta-data collection agent. The RDF documents are registered, and the agent extracts meta-data and stores them in Sesame, an open-source RDF database with support for RDF Schema inferencing and querying. The RDF database provides advanced retrieval processing that takes properties and relations into account. Finally, the STP Semantic Web provides automatic processing and high-level search not only for observation data but also for space weather news, physical events, technical terms and researcher information related to the STP.
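
    A small sketch, using the rdflib Python library, of an RSS1.0 item carrying Dublin Core meta-data for a single observation file; the URIs and field values are hypothetical, not actual DARTS records.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        RSS = Namespace("http://purl.org/rss/1.0/")
        DC = Namespace("http://purl.org/dc/elements/1.1/")

        g = Graph()
        g.bind("rss", RSS)
        g.bind("dc", DC)

        # Hypothetical meta-data record for one observation data file.
        item = URIRef("http://example.org/data/reimei_20061201.cdf")
        g.add((item, RDF.type, RSS.item))
        g.add((item, RSS.title, Literal("Reimei CDF data, 2006-12-01")))
        g.add((item, RSS.link,
               Literal("http://example.org/data/reimei_20061201.cdf")))
        g.add((item, DC.date, Literal("2006-12-01")))
        g.add((item, DC.creator, Literal("ISAS/JAXA DARTS (hypothetical)")))

        print(g.serialize(format="xml"))  # RDF/XML, as consumed by the agent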

  19. Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution

    NASA Astrophysics Data System (ADS)

    Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.

    2017-10-01

    Geo-spatio-temporal topology models are likely to become a key concept for checking the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of the energy and water supply of mega- or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, conclusions and an outlook on our future research are given, on the way to supporting the processing of geo-analytics and -simulations in a parallel and distributed system environment.
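
    A toy Python sketch of an incidence-graph consistency check, assuming the basic rule that every d-dimensional cell is bounded by (d-1)-dimensional cells; DB4GeO's actual model is considerably more elaborate, and all names here are invented.

        from collections import defaultdict

        class IncidenceGraph:
            """Toy incidence graph: nodes are cells (vertex, edge, face, ...)
            and an arc records that a d-cell is bounded by a (d-1)-cell."""

            def __init__(self):
                self.dim = {}                   # cell id -> dimension
                self.bounds = defaultdict(set)  # cell id -> boundary cells

            def add_cell(self, cell, dimension, boundary=()):
                self.dim[cell] = dimension
                self.bounds[cell] = set(boundary)

            def is_consistent(self):
                # Each boundary cell must exist and be one dimension lower.
                return all(
                    b in self.dim and self.dim[b] == self.dim[c] - 1
                    for c, bs in self.bounds.items() for b in bs)

        g = IncidenceGraph()
        g.add_cell("v1", 0)
        g.add_cell("v2", 0)
        g.add_cell("e1", 1, boundary={"v1", "v2"})
        assert g.is_consistent()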

  20. Managing equipment innovations in mining: A review.

    PubMed

    Trudel, Bryan; Nadeau, Sylvie; Zaras, Kazimierz; Deschamps, Isabelle

    2015-01-01

    Technological innovations in mining equipment have led to increased productivity and improved occupational health and safety (OHS) performance, but their introduction also brings new risks for workers. The aim of this study is to provide support for mining industry managers who are required to reconcile equipment choices with OHS and productivity. The method was an examination of the literature through interdisciplinary digital databases. Databases were searched using specific combinations of keywords and limited to studies dating back no farther than 1992. The "snowball" technique was also used, examining the references listed in the research articles initially identified in the databases. A total of 19 contextual factors were identified as having the potential to influence the OHS and productivity leverage of equipment innovations. The most often cited among these factors are the level of training provided to equipment operators, operator experience and age, supervisors' leadership abilities, and the maintenance of good relations within work crews. Interactions between these factors are not discussed in the mining innovation literature. It would be helpful to use a systems-thinking approach that incorporates interactions between the relevant actors and factors in order to properly define the most sensitive aspects of innovation management as it applies to mining equipment.

  1. The structure and emerging trends of construction safety management research: a bibliometric review.

    PubMed

    Liang, Huakang; Zhang, Shoujian; Su, Yikun

    2018-03-29

    Recently, construction safety management (CSM) practices and systems have become important topics for stakeholders concerned with protecting human resources. However, few studies have attempted to map the global research on CSM. A comprehensive bibliometric review was conducted in this study based on multiple methods. In total, 1172 CSM-related papers from the Web of Science Core Collection database were examined. The analyses focused on publication year, country and institute, publication source, author and research topics. The results indicated that the USA, China, Australia and the UK hold leading positions in CSM research. Two branches of journals were identified, namely the branch of engineering science and that of safety science and social science. Additionally, seven themes together with 28 specific topics were detected, allowing researchers to track the main structure and temporal evolution of CSM research. Finally, the main research trends and potential research directions are discussed to guide future research.

  2. Database Management: Building, Changing and Using Databases. Collected Papers and Abstracts of the Mid-Year Meeting of the American Society for Information Science (15th, Portland, Oregon, May 1986).

    ERIC Educational Resources Information Center

    American Society for Information Science, Washington, DC.

    This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…

  3. Data Sharing in Astrobiology: the Astrobiology Habitable Environments Database (AHED)

    NASA Astrophysics Data System (ADS)

    Bristow, T.; Lafuente Valverde, B.; Keller, R.; Stone, N.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.

    2016-12-01

    Astrobiology is a multidisciplinary area of scientific research focused on studying the origins of life on Earth and the conditions under which life might have emerged elsewhere in the universe. Understanding the complex questions in astrobiology requires the integration and analysis of data spanning a range of disciplines including biology, chemistry, geology, astronomy and planetary science. However, the lack of a centralized repository makes it difficult for astrobiology teams to share data and benefit from the resulting synergies. Moreover, in recent years federal agencies have begun requiring that the results of any federally funded scientific research be available and useful to the public and the science community. Astrobiology, like any other scientific discipline, needs to respond to these mandates. The Astrobiology Habitable Environments Database (AHED) is a central, high-quality, long-term searchable repository designed to help the community by promoting the integration and sharing of all the data generated by these diverse disciplines. AHED provides public, open access to astrobiology-related research data through a user-managed web portal implemented using the open-source software The Open Data Repository's (ODR) Data Publisher [1]. ODR-DP provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own databases or laboratory notebooks according to the characteristics of their data. AHED is then a collection of databases housed in the ODR framework that store information about samples, along with associated measurements, analyses, and contextual information about field sites where samples were collected, the instruments or equipment used for analysis, and the people and institutions involved in their collection. Advanced graphics are implemented, together with advanced online tools for data analysis (e.g., R, MATLAB, Project Jupyter: http://jupyter.org). A permissions system will be put in place so that data being actively collected and interpreted remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by SERA and NASA NNX11AP82A, MSL. [1] Stone et al. (2016) AGU, submitted.

  4. A Multimodal Database for a Home Remote Medical Care Application

    NASA Astrophysics Data System (ADS)

    Medjahed, Hamid; Istrate, Dan; Boudy, Jerome; Steenkeste, François; Baldinger, Jean-Louis; Dorizzi, Bernadette

    Home remote monitoring systems aim to make a protective contribution to the well-being of individuals (patients, elderly persons) who require moderate amounts of support for independent living, improving their everyday life. Existing research on these systems suffers from a lack of experimental data and of a standard medical database intended for their validation and improvement. This paper presents a multi-sensor environment for acquiring and recording a multimodal medical database, which includes physiological data (cardiac frequency, activity or agitation, posture, falls), environmental sounds and localization data. It provides graphical interface functions to manage, process and index these data. The paper focuses on the system implementation and its usage, and points out possibilities for future work.

  5. The Network Configuration of an Object Relational Database Management System

    NASA Technical Reports Server (NTRS)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as of all the features of the Oracle Server. The server is an object-relational database management system (DBMS). Through distributed processing, work is split between the database server and the client application programs: the DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.

  6. Knowledge production status of Iranian researchers in the gastric cancer area: based on the medline database.

    PubMed

    Ghojazadeh, Morteza; Naghavi-Behzad, Mohammad; Nasrolah-Zadeh, Raheleh; Bayat-Khajeh, Parvaneh; Piri, Reza; Mirnia, Keyvan; Azami-Aghdash, Saber

    2014-01-01

    Scientometrics is a useful method for the management of financial and human resources and has been applied many times in the medical sciences during recent years. The aim of this study was to investigate the status of science production by Iranian scientists in the gastric cancer field based on the Medline database. In this descriptive cross-sectional study, Iranian science production concerning gastric cancer during 2000-2011 was investigated based on Medline. After two stages of searching, 121 articles were found; we then reviewed publication date, author names, journal title, impact factor (IF), and the cooperation coefficient between researchers. SPSS 19 was used for statistical analysis. There was a significant increase in articles about gastric cancer published by Iranian researchers in the Medline database during 2006-2011. The mean cooperation coefficient between researchers was 6.14±3.29 persons per article. Articles in this field were published in 19 countries and 56 journals. Journals based in Thailand, England, and America published the most Iranian articles. Tehran University of Medical Sciences and Mohammadreza Zali had the most outstanding roles in publishing scientific articles. According to the results of this study, improving cooperation among researchers in conducting research and scientometric studies in other fields may have an important role in increasing both the quality and quantity of published studies.

  7. DATABASE OF THE NONINDIGENOUS SPECIES IN THE ESTUARIES OF CALIFORNIA, OREGON, AND WASHINGTON

    EPA Science Inventory

    The number and composition of the native and nonindigenous species is a key component in invasive species risk assessments and regional prioritizations. The problem for both managers and researchers is that this information is scattered in the peer-reviewed literature, gray literature...

  8. Integration of the NRL Digital Library.

    ERIC Educational Resources Information Center

    King, James

    2001-01-01

    The Naval Research Laboratory (NRL) Library has identified six primary areas that need improvement: infrastructure, InfoWeb, TORPEDO Ultra, journal data management, classified data, and linking software. It is rebuilding InfoWeb and TORPEDO Ultra as database-driven Web applications, upgrading the STILAS library catalog, and creating other support…

  9. The crustal dynamics intelligent user interface anthology

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Campbell, William J.; Roelofs, Larry H.; Wattawa, Scott L.

    1987-01-01

    The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose of such a service is to support the large number of potential scientific and engineering users that have need of space and land-related research and technical data, but have little or no experience in query languages or understanding of the information content or architecture of the databases of interest. This document presents the design concepts, development approach and evaluation of the performance of a prototype IUI system for the Crustal Dynamics Project Database, which was developed using a microcomputer-based expert system tool (M.1), the natural language query processor THEMIS, and the graphics software system GSS. The IUI design is based on a multiple view representation of a database from both the user and database perspective, with intelligent processes to translate between the views.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bower, J.C.; Burford, M.J.; Downing, T.R.

    The Integrated Baseline System (IBS) is an emergency management planning and analysis tool that is being developed under the direction of the US Army Nuclear and Chemical Agency (USANCA). The IBS Data Management Guide provides the background, as well as the operations and procedures, needed to generate and maintain a site-specific map database. Data and system managers use this guide to manage the data files and database that support the administrative, user-environment, database-management, and operational capabilities of the IBS. This document provides a description of the data files and structures necessary for running the IBS software and using the site map database.

  11. Chemical and mineralogical data and processing methods management system prototype with application to study of the North Caucasus Blybsky Metamorphic Complexes metamorphism PT-condition

    NASA Astrophysics Data System (ADS)

    Ivanov, Stanislav; Kamzolkin, Vladimir; Konilov, Aleksandr; Aleshin, Igor

    2014-05-01

    There are many methods of assessing the conditions of rock formation based on determining the composition of the constituent minerals. Our objective was to create a universal tool for processing the results of chemical analyses of minerals and for solving geothermobarometry problems, by creating a database of existing sensors and providing a user-friendly standard interface. Similar computer-assisted tools based upon large collections of sensors (geothermometers and geobarometers) are known; one example is the project TPF (Konilov A.N., 1999), a text-based sensor collection tool written in PASCAL. That application contained more than 350 different sensors and has been used widely in petrochemical studies (see A.N. Konilov, A.A. Grafchikov, V.I. Fonarev 2010 for a review). Our prototype uses the TPF project concept and is designed with modern application development techniques, which allows better flexibility. The main components of the designed system are three connected datasets: the sensor collection (geothermometers, geobarometers, oxygen geobarometers, etc.), petrochemical data and modeling results. All data are maintained by special management and visualization tools and reside in an SQL database. System utilities allow the user to import and export data in various file formats, edit records and plot graphs. The sensor database contains up-to-date collections of known methods, and new sensors may be added by the user. The measurement database is filled in by the researcher. The user-friendly interface gives access to all available data and sensors, automates routine work, reduces the risk of common user mistakes and simplifies information exchange between research groups. We used the prototype to evaluate the peak pressure during the formation of garnet-amphibolite apoeclogites, gneisses and schists of the Blybsky metamorphic complex of the Front Range of the Northern Caucasus. In particular, our estimate of the formation pressure range (18 ± 4 kbar) agrees with independent research results. The reported study was partially supported by RFBR, research project No. 14-05-00615.
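
    A minimal Python sketch of the sensor-registry concept, using a generic exchange-thermometer form T(K) = (A + B*P) / (C + R*ln Kd); the constants below are placeholders chosen only for plausible magnitude, not a calibrated sensor from the database.

        import math

        # Hypothetical sensor registry: each sensor maps measured mineral
        # compositions to an estimated temperature or pressure.
        SENSORS = {}

        def sensor(name):
            def register(fn):
                SENSORS[name] = fn
                return fn
            return register

        @sensor("garnet-biotite exchange thermometer (illustrative)")
        def grt_bt_thermometer(kd, pressure_kbar):
            # Generic form T(K) = (A + B*P) / (C + R*ln Kd); the constants
            # are placeholders, not a published calibration.
            A, B, C, R = 8460.0, 20.0, 23.0, 8.314
            t_kelvin = (A + B * pressure_kbar) / (C + R * math.log(kd))
            return t_kelvin - 273.15

        for name, fn in SENSORS.items():
            print(name, "->", round(fn(kd=0.2, pressure_kbar=10.0), 1), "degC")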

  12. Student Research Projects

    NASA Technical Reports Server (NTRS)

    Yeske, Lanny A.

    1998-01-01

    Numerous FY1998 student research projects were sponsored by the Mississippi State University Center for Air Sea Technology. This technical note describes these projects which include research on: (1) Graphical User Interfaces, (2) Master Environmental Library, (3) Database Management Systems, (4) Naval Interactive Data Analysis System, (5) Relocatable Modeling Environment, (6) Tidal Models, (7) Book Inventories, (8) System Analysis, (9) World Wide Web Development, (10) Virtual Data Warehouse, (11) Enterprise Information Explorer, (12) Equipment Inventories, (13) COADS, and (14) JavaScript Technology.

  13. [Selected aspects of computer-assisted literature management].

    PubMed

    Reiss, M; Reiss, G

    1998-01-01

    We report our own experiences with a database manager. Bibliographic database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager allows the user to enter summary information (a record) for articles, book sections, books, dissertations, conference proceedings, and so on. Other features that may be included in a database manager are the ability to import references from different sources, such as MEDLINE. The word-processing components generate reference lists and bibliographies in a variety of different styles, directly from a word-processor manuscript. The function and use of the software package EndNote 2 for Windows are described, and its advantages in fulfilling different requirements for citation style and the sort order of reference lists are emphasized.

  14. Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.

    ERIC Educational Resources Information Center

    Gutmann, Myron P.; And Others

    1989-01-01

    Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)

  15. Content Independence in Multimedia Databases.

    ERIC Educational Resources Information Center

    de Vries, Arjen P.

    2001-01-01

    Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…

  16. Use of a database for managing qualitative research data.

    PubMed

    Ross, B A

    1994-01-01

    In this article, a process for handling text data in qualitative research projects by using existing word-processing and database programs is described. When qualitative data are managed using this method, the information is more readily available and the coding and organization of the data are enhanced. Furthermore, the narrative always remains intact regardless of how it is arranged or re-arranged, and there is a concomitant time savings and increased accuracy. The author hopes that this article will inspire some readers to explore additional methods and processes for computer-aided, nonstatistical data management. The study referred to in this article (Ross, 1991) was a qualitative research project which sought to find out how teaching faculty in nursing and education used computers in their professional work. Ajzen and Fishbein's (1980) Theory of Reasoned Action formed the theoretical basis for this work. This theory proposes that behavior, in this study the use of computers, is the result of intentions and that intentions are the result of attitudes and social norms. The study found that although computer use was sometimes the result of attitudes, more often it seemed to be the result of subjective (perceived) norms or intervening variables. Teaching faculty apparently did not initially make reasoned judgments about the computers or the programs they used, but chose to use whatever was required or available.

  17. Multimedia Database at National Museum of Ethnology

    NASA Astrophysics Data System (ADS)

    Sugita, Shigeharu

    This paper describes the information management system at the National Museum of Ethnology, Osaka, Japan. The museum serves as a research center for cultural anthropology and operates many computer systems, such as an IBM 3090, a VAX 11/780 and a Fujitsu M340R. With these computers, distributed multimedia databases have been constructed in which not only bibliographic data but also artifact images, slide images, book page images, etc. are stored. The databases now hold about 1.3 million items. These data can be retrieved and displayed on multimedia workstations equipped with several displays.

  18. Beyond the online catalog: developing an academic information system in the sciences.

    PubMed Central

    Crawford, S; Halbrook, B; Kelly, E; Stucki, L

    1987-01-01

    The online public access catalog consists essentially of a machine-readable database with network capabilities. Like other computer-based information systems, it may be continuously enhanced by the addition of new capabilities and databases. It may also become a gateway to other information networks. This paper reports the evolution of the Bibliographic Access and Control System (BACS) of Washington University in end-user searching, current awareness services, information management, and administrative functions. Ongoing research and development and the future of the online catalog are also discussed. PMID:3315052

  19. Beyond the online catalog: developing an academic information system in the sciences.

    PubMed

    Crawford, S; Halbrook, B; Kelly, E; Stucki, L

    1987-07-01

    The online public access catalog consists essentially of a machine-readable database with network capabilities. Like other computer-based information systems, it may be continuously enhanced by the addition of new capabilities and databases. It may also become a gateway to other information networks. This paper reports the evolution of the Bibliographic Access and Control System (BACS) of Washington University in end-user searching, current awareness services, information management, and administrative functions. Ongoing research and development and the future of the online catalog are also discussed.

  20. Advanced telemedicine development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; George, J.E.; Gavrilov, E.M.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop a Java-based, electronic, medical-record system that can handle multimedia data and work over a wide-area network based on open standards, and that can utilize an existing database back end. The physician is to be totally unaware that there is a database behind the scenes and is only aware that he/she can access and manage the relevant information to treat the patient.

  1. Addition of a breeding database in the Genome Database for Rosaceae

    PubMed Central

    Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie

    2013-01-01

    Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. Development of new infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program and subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage lists individuals with parents in common and results from Individual Variety pages link to all data available on each chosen individual including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner and retrieve and compare performance data from multiple selections, years and sites, and to output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management. Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox PMID:24247530

  2. Addition of a breeding database in the Genome Database for Rosaceae.

    PubMed

    Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie

    2013-01-01

    Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. Development of new infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program and subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage lists individuals with parents in common and results from Individual Variety pages link to all data available on each chosen individual including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner and retrieve and compare performance data from multiple selections, years and sites, and to output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management. Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox.
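
    A small Python/SQLite sketch of a Search-by-Parentage style query over a toy pedigree table; the rows, names and schema are hypothetical, not the GDR's Chado-based structure.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE accession (id INTEGER PRIMARY KEY, name TEXT,
                                mother INTEGER, father INTEGER);
        INSERT INTO accession VALUES
         (1, 'Parent X', NULL, NULL),
         (2, 'Parent Y', NULL, NULL),
         (3, 'Selection A', 1, 2),
         (4, 'Selection B', 1, NULL);
        """)

        def shared_parentage(name):
            # List accessions sharing at least one parent with `name`.
            return conn.execute("""
                SELECT DISTINCT b.name FROM accession a
                JOIN accession b ON b.id != a.id
                 AND (b.mother IN (a.mother, a.father)
                      OR b.father IN (a.mother, a.father))
                WHERE a.name = ?""", (name,)).fetchall()

        print(shared_parentage('Selection A'))   # -> [('Selection B',)]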

  3. Development of expert systems for analyzing electronic documents

    NASA Astrophysics Data System (ADS)

    Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.

    2018-05-01

    The paper analyses a Database Management System (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment; it consists of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at the Faculty of Innovative Technology of Tomsk State University.

  4. The research and development of water resources management information system based on ArcGIS

    NASA Astrophysics Data System (ADS)

    Cui, Weiqun; Gao, Xiaoli; Li, Yuzhi; Cui, Zhencai

    Given the large amount of data and the complexity of data types and formats involved in water resources management, we built a water resources calculation model and established a water resources management information system based on the ArcGIS and Visual Studio .NET development platforms. The system integrates spatial data and attribute data organically and manages them uniformly. It can analyze spatial data, support bidirectional queries between maps and data, generate various charts and report forms automatically, link multimedia information, and manage the database. It can therefore provide spatial and static synthetic information services for the study, management and decision-making of water resources, regional geology, the eco-environment, and related fields.

  5. Demonstration of SLUMIS: a clinical database and management information system for a multi organ transplant program.

    PubMed Central

    Kurtz, M.; Bennett, T.; Garvin, P.; Manuel, F.; Williams, M.; Langreder, S.

    1991-01-01

    Because of the rapid evolution of the heart, heart/lung, liver, kidney and kidney/pancreas transplant programs at our institution, and because of the lack of an existing comprehensive database, we were required to develop a computerized management information system capable of supporting both the clinical and research requirements of a multifaceted transplant program. SLUMIS (ST. LOUIS UNIVERSITY MULTI-ORGAN INFORMATION SYSTEM) was developed for the following reasons: 1) to comply with the reporting requirements of various transplant registries, 2) to report to an increasing number of government agencies and insurance carriers, 3) to obtain updates of our operative experience at regular intervals, 4) to integrate the Histocompatibility and Immunogenetics Laboratory (HLA) for online test result reporting, and 5) to facilitate clinical investigation. PMID:1807741

  6. MetPetDB: A database for metamorphic geochemistry

    NASA Astrophysics Data System (ADS)

    Spear, Frank S.; Hallett, Benjamin; Pyle, Joseph M.; Adalı, Sibel; Szymanski, Boleslaw K.; Waters, Anthony; Linder, Zak; Pearce, Shawn O.; Fyffe, Matthew; Goldfarb, Dennis; Glickenhouse, Nickolas; Buletti, Heather

    2009-12-01

    We present a data model for the initial implementation of MetPetDB, a geochemical database specific to metamorphic rock samples. The database is designed around the concept of preservation of spatial relationships, at all scales, of chemical analyses and their textural setting. Objects in the database (samples) represent physical rock samples; each sample may contain one or more subsamples with associated geochemical and image data. Samples, subsamples, geochemical data, and images are described with attributes (some required, some optional); these attributes also serve as search delimiters. All data in the database are classified as published (i.e., archived or published data), public or private. Public and published data may be freely searched and downloaded. All private data are owned; permission to view, edit, download and otherwise manipulate private data may be granted only by the data owner; all such editing operations are recorded by the database to create a data version log. The sharing of data permissions among a group of collaborators researching a common sample is done by the sample owner through the project manager. User interaction with MetPetDB is hosted by a web-based platform based upon the Java servlet application programming interface, with the PostgreSQL relational database. The database web portal includes modules that allow the user to interact with the database: registered users may save and download public and published data, upload private data, create projects, and assign permission levels to project collaborators. An Image Viewer module provides for spatial integration of image and geochemical data. A toolkit consisting of plotting and geochemical calculation software for data analysis and a mobile application for viewing the public and published data are being developed. Future issues to address include population of the database, integration with other geochemical databases, development of the analysis toolkit, creation of data models for derivative data, and building a community-wide user base. It is believed that this and other geochemical databases will enable more productive collaborations, generate more efficient research efforts, and foster new developments in basic research in the field of solid earth geochemistry.
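    The sample/subsample/analysis hierarchy and the published/public/private visibility classes described above can be sketched as a minimal relational layout. All table and column names below are assumptions for illustration; the production MetPetDB schema (PostgreSQL) is considerably richer and spatially enabled.

```python
# Minimal sketch of a sample/subsample/analysis hierarchy with visibility classes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample (
    id         INTEGER PRIMARY KEY,
    owner      TEXT NOT NULL,
    visibility TEXT CHECK (visibility IN ('published','public','private')),
    latitude   REAL, longitude REAL   -- field location of the rock sample
);
CREATE TABLE subsample (
    id        INTEGER PRIMARY KEY,
    sample_id INTEGER REFERENCES sample(id)
);
-- Each spot analysis keeps x/y image coordinates so the textural
-- setting of the chemical data is preserved.
CREATE TABLE chemical_analysis (
    id           INTEGER PRIMARY KEY,
    subsample_id INTEGER REFERENCES subsample(id),
    mineral      TEXT, oxide TEXT, wt_percent REAL,
    spot_x REAL, spot_y REAL
);
""")
conn.execute("INSERT INTO sample VALUES (1, 'fsspear', 'public', 44.5, -72.0)")
# Only 'published' and 'public' rows are freely searchable:
rows = conn.execute(
    "SELECT id FROM sample WHERE visibility IN ('published','public')")
print(rows.fetchall())  # -> [(1,)]
```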

  7. NGS Catalog: A Database of Next Generation Sequencing Studies in Humans

    PubMed Central

    Xia, Junfeng; Wang, Qingguo; Jia, Peilin; Wang, Bing; Pao, William; Zhao, Zhongming

    2015-01-01

    Next generation sequencing (NGS) technologies have been rapidly applied in biomedical and biological research since their advent only a few years ago, and they are expected to advance at an unprecedented pace in the following years. To provide the research community with a comprehensive NGS resource, we have developed the database Next Generation Sequencing Catalog (NGS Catalog, http://bioinfo.mc.vanderbilt.edu/NGS/index.html), a continually updated database that collects, curates and manages available human NGS data obtained from published literature. NGS Catalog deposits publication information of NGS studies and their mutation characteristics (SNVs, small insertions/deletions, copy number variations, and structural variants), as well as mutated genes and gene fusions detected by NGS. Other functions include user data upload, NGS general analysis pipelines, and NGS software. NGS Catalog is particularly useful for investigators who are new to NGS but would like to take advantage of these powerful technologies for their own research. Finally, based on the data deposited in NGS Catalog, we summarized features and findings from whole exome sequencing, whole genome sequencing, and transcriptome sequencing studies for human diseases or traits. PMID:22517761

  8. Needed: Global Collaboration for Comparative Research on Cities and Health

    PubMed Central

    Gusmano, Michael K.; Rodwin, Victor G.

    2016-01-01

    Over half of the world’s population lives in cities and United Nations (UN) demographers project an increase of 2.5 billion more urban dwellers by 2050. Yet there is too little systematic comparative research on the practice of urban health policy and management (HPAM), particularly in the megacities of middle-income and developing nations. We make a case for creating a global database on cities, population health and healthcare systems. The expenses involved in data collection would be difficult to justify without some review of previous work, some agreement on indicators worth measuring, conceptual and methodological considerations to guide the construction of the global database, and a set of research questions and hypotheses to test. We, therefore, address these issues in a manner that we hope will stimulate further discussion and collaboration. PMID:27694667

  9. Needed: Global Collaboration for Comparative Research on Cities and Health.

    PubMed

    Gusmano, Michael K; Rodwin, Victor G

    2016-04-16

    Over half of the world's population lives in cities and United Nations (UN) demographers project an increase of 2.5 billion more urban dwellers by 2050. Yet there is too little systematic comparative research on the practice of urban health policy and management (HPAM), particularly in the megacities of middle-income and developing nations. We make a case for creating a global database on cities, population health and healthcare systems. The expenses involved in data collection would be difficult to justify without some review of previous work, some agreement on indicators worth measuring, conceptual and methodological considerations to guide the construction of the global database, and a set of research questions and hypotheses to test. We, therefore, address these issues in a manner that we hope will stimulate further discussion and collaboration. © 2016 by Kerman University of Medical Sciences.

  10. The Astrobiology Habitable Environments Database (AHED)

    NASA Astrophysics Data System (ADS)

    Lafuente, B.; Stone, N.; Downs, R. T.; Blake, D. F.; Bristow, T.; Fonda, M.; Pires, A.

    2015-12-01

    The Astrobiology Habitable Environments Database (AHED) is a central, high-quality, long-term searchable repository for archiving and collaborative sharing of astrobiologically relevant data, including morphological, textural and contextual images, and chemical, biochemical, isotopic, sequencing, and mineralogical information. The aim of AHED is to foster long-term innovative research by supporting integration and analysis of diverse datasets in order to: 1) help understand and interpret planetary geology; 2) identify and characterize habitable environments and pre-biotic/biotic processes; 3) interpret returned data from present and past missions; 4) provide a citable database of NASA-funded published and unpublished data (after an agreed-upon embargo period). AHED uses the online open-source software "The Open Data Repository's Data Publisher" (ODR - http://www.opendatarepository.org) [1], which provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own database according to the characteristics of their data and the need to share data with collaborators or the broader scientific community. This platform can also be used as a laboratory notebook. The database will have the capability to import and export in a variety of standard formats. Advanced graphics will be implemented, including 3D graphing, multi-axis graphs, error bars, and similar scientific data functions, together with advanced online tools for data analysis (e.g., the statistical package R). A permissions system will be put in place so that as data are being actively collected and interpreted, they will remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, Mars Science Laboratory Investigations. [1] Nate et al. (2015) AGU, submitted.

  11. MAGA, a new database of gas natural emissions: a collaborative web environment for collecting data.

    NASA Astrophysics Data System (ADS)

    Cardellini, Carlo; Chiodini, Giovanni; Frigeri, Alessandro; Bagnato, Emanuela; Frondini, Francesco; Aiuppa, Alessandro

    2014-05-01

    The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate available data, in order to characterize and quantify the phenomena at various scales. A new and detailed web database (MAGA: MApping GAs emissions) has been developed, and recently improved, to collect data on carbon degassing from volcanic and non-volcanic environments. The MAGA database allows researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and with the ingestion into the database of data from: i) a literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles, both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores, and ii) the revision and update of the Googas database on non-volcanic emissions of the Italian territory (Chiodini et al., 2008), in the framework of the Deep Earth Carbon Degassing (DECADE) research initiative of the Deep Carbon Observatory (DCO). For each geo-located gas emission site, the database holds images and descriptions of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitudes. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers expert on each site. In this phase data can be accessed on the network from a web interface; a data-driven web service, in which software clients can request data directly from the database, is planned to be implemented shortly. This way Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) could easily access the database, and data could be exchanged with other databases. At the moment the database includes: i) more than 1000 flux data points on volcanic plume degassing from the Etna and Stromboli volcanoes; ii) data from ~ 30 sites of diffuse soil degassing from the Neapolitan volcanoes, Azores, Canary Islands, Etna, Stromboli, and Vulcano Island, and several data on fumarolic emissions (~ 7 sites) with CO2 fluxes; iii) data from ~ 270 non-volcanic gas emission sites in Italy. We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed at exciting, inspiring, and encouraging participation among researchers. In addition, the possibility to archive location and qualitative information for gas emission sites not yet investigated could stimulate future research in the scientific community and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.
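    A hedged sketch of the kind of spatially referenced query such a database supports, using an in-memory SQLite table with invented column names and toy flux values; the real system is a full spatial RDBMS.

```python
# Toy spatially referenced emissions table and a bounding-box query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE emission_site (
    name TEXT, emission_type TEXT,   -- 'plume', 'diffuse', 'fumarole', ...
    lat REAL, lon REAL, co2_flux_t_per_day REAL)""")
conn.executemany("INSERT INTO emission_site VALUES (?,?,?,?,?)", [
    ("Etna",      "plume",    37.75, 14.99, 5000.0),  # illustrative values
    ("Stromboli", "plume",    38.79, 15.21,  800.0),
    ("Vulcano",   "fumarole", 38.40, 14.96,  200.0),
])

def sites_in_bbox(lat_min, lat_max, lon_min, lon_max):
    """Return gas emission sites inside a latitude/longitude window."""
    return conn.execute("""
        SELECT name, emission_type, co2_flux_t_per_day FROM emission_site
        WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?""",
        (lat_min, lat_max, lon_min, lon_max)).fetchall()

print(sites_in_bbox(38.0, 39.0, 14.0, 16.0))  # Stromboli and Vulcano
```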

  12. Development and operation of NEW-KOTIS : In-house technical information database of Nippon Kokan Corp.

    NASA Astrophysics Data System (ADS)

    Yagi, Yukio; Takahashi, Kaei

    The purpose of this report is to describe how the activities for managing technical information have been and are now being conducted by the Engineering department of Nippon Kokan Corp. In addition, as a practical example of database generation promoted by the department, the report covers all aspects of NEW-KOTIS (the background of its development, history, features, functional details, control and operation methods, use in search operations, and so forth). NEW-KOTIS (3rd-term system) is an "in-house technical information database system" which started operation in May 1987. This database system now contains approximately 65,000 information items (research reports, investigation reports, technical reports, etc.) generated within the company, and this information is available to anyone in any department through the network connecting all the company's facilities.

  13. A web-based, relational database for studying glaciers in the Italian Alps

    NASA Astrophysics Data System (ADS)

    Nigrelli, G.; Chiarle, M.; Nuzzi, A.; Perotti, L.; Torta, G.; Giardino, M.

    2013-02-01

    Glaciers are among the best terrestrial indicators of climate change and thus glacier inventories have attracted a growing, worldwide interest in recent years. In Italy, the first official glacier inventory was completed in 1925 and 774 glacial bodies were identified. As the amount of data continues to increase, and new techniques become available, there is a growing demand for computer tools that can efficiently manage the collected data. The Research Institute for Geo-hydrological Protection of the National Research Council, in cooperation with the Departments of Computer Science and Earth Sciences of the University of Turin, created a database that provides a modern tool for storing, processing and sharing glaciological data. The database was developed to meet the need to store heterogeneous information, which can be retrieved through a set of web search queries. The database's architecture is server-side, and was designed using open source software. The website interface, simple and intuitive, was intended to meet the needs of a distributed public: through this interface, any type of glaciological data can be managed, specific queries can be performed, and the results can be exported in a standard format. The use of a relational database to store and organize a large variety of information about Italian glaciers collected over the last hundred years constitutes a significant step forward in ensuring the safety and accessibility of such data. Moreover, the same benefits also apply to the enhanced operability for handling information in the future, including new and emerging types of data formats, such as geographic and multimedia files. Future developments include the integration of cartographic data, such as base maps, satellite images and vector data. The relational database described in this paper will be the heart of a new geographic system that will merge data, data attributes and maps, leading to a complete description of Italian glacial environments.

  14. Application of advanced data collection and quality assurance methods in open prospective study - a case study of PONS project.

    PubMed

    Wawrzyniak, Zbigniew M; Paczesny, Daniel; Mańczuk, Marta; Zatoński, Witold A

    2011-01-01

    Large-scale epidemiologic studies can assess health indicators that differentiate social groups, as well as important health outcomes such as the incidence of and mortality from cancer, cardiovascular disease and other conditions, establishing a solid knowledge base for managing the prevention of causes of premature morbidity and mortality. This study presents new advanced methods of data collection and data management, with ongoing data quality control and security, to ensure high-quality assessment of health indicators in the large epidemiologic PONS study (The Polish-Norwegian Study). The material for the experiment is the data management design of the large-scale population study in Poland (PONS), and the managed processes are applied to establishing a high-quality and solid knowledge base. The functional requirements of PONS data collection, supported by advanced web-based IT methods, are fulfilled by the IT system, which yields medical data of high quality with data security, data quality assessment, process control and evolution monitoring. Data from disparate, distributed sources of information are integrated into databases via software interfaces and archived by a multitask secure server. The implemented solution, built on modern database technologies and a remote software/hardware structure, successfully supports the research of the large PONS study project. Follow-up control of the consistency and quality of data analysis and of the processes of the PONS sub-databases shows data consistency of more than 99%. The project itself, through a tailored hardware/software application, shows the positive impact of Quality Assurance (QA) on the quality of outcome analysis results and on effective data management within a shorter time. This efficiency ensures the quality of the epidemiological data and health indicators by eliminating common errors in research questionnaires and medical measurements.
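    The follow-up consistency control described above amounts to running rule-based checks over each questionnaire record and reporting the pass rate. The rules and field names in this sketch are invented examples, not the PONS implementation.

```python
# Rule-based consistency checks over questionnaire records (illustrative rules).
def check_record(rec):
    """Return a list of consistency errors for one questionnaire record."""
    errors = []
    if not 18 <= rec.get("age", -1) <= 110:
        errors.append("age out of plausible range")
    if rec.get("systolic_bp", 0) <= rec.get("diastolic_bp", 0):
        errors.append("systolic must exceed diastolic pressure")
    if rec.get("smoker") == "never" and rec.get("cigarettes_per_day", 0) > 0:
        errors.append("non-smoker reports daily cigarettes")
    return errors

records = [
    {"age": 54, "systolic_bp": 130, "diastolic_bp": 85,
     "smoker": "never", "cigarettes_per_day": 0},
    {"age": 47, "systolic_bp": 80, "diastolic_bp": 95,
     "smoker": "current", "cigarettes_per_day": 10},
]
flagged = [(i, e) for i, r in enumerate(records) if (e := check_record(r))]
print(f"consistency: {1 - len(flagged) / len(records):.0%}")  # -> 50%
```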

  15. Transition Literature Review: Educational, Employment, and Independent Living Outcomes. Volume 3.

    ERIC Educational Resources Information Center

    Harnisch, Delwyn L.; Fisher, Adrian T.

    This review focuses on both published and unpublished literature in the areas of education, employment, and independent living outcomes across 13 handicapping conditions. Preliminary chapters describe the database system used to manage the literature identified, and discuss research methods in transition literature. Subsequent chapters then review…

  16. A Virtual "Hello": A Web-Based Orientation to the Library.

    ERIC Educational Resources Information Center

    Borah, Eloisa Gomez

    1997-01-01

    Describes the development of Web-based library services and resources available at the Rosenfeld Library of the Anderson Graduate School of Management at University of California at Los Angeles. Highlights include library orientation sessions; virtual tours of the library; a database of basic business sources; and research strategies, including…

  17. IRBAS: An online database to collate, analyze, and synthesize data on the biodiversity and ecology of intermittent rivers worldwide

    EPA Science Inventory

    Key questions dominating contemporary ecological research and management concern interactions between biodiversity, ecosystem processes, and ecosystem services provision in the face of global change. This is particularly salient for freshwater biodiversity and in the context of r...

  18. HYDROGEOLOGIC FOUNDATION IN SUPPORT OF ECOSYSTEM RESTORATION: BASE-FLOW LOADINGS OF NITRATE IN MID-ATLANTIC AGRICULTURAL WATERSHEDS

    EPA Science Inventory

    The study is a consortium between the U.S. Environmental Protection Agency (National Risk Management Research Laboratory) and the U.S. Geological Survey (Baltimore and Dover). The objectives of this study are: (1) to develop a geohydrological database for paired agricultural wate...

  19. Long-term ecosystem monitoring and change detection: the Sonoran initiative

    Treesearch

    Robert Lozar; Charles Ehlschlaeger

    2005-01-01

    Ecoregional Systems Heritage and Encroachment Monitoring (ESHEM) examines issues of land management at an ecosystem level using remote sensing. Engineer Research and Development Center (ERDC), in partnership with Western Illinois University, has developed an ecoregional database and monitoring capability covering the Sonoran region. The monitoring time horizon will...

  20. 1995 AAAS annual meeting and science innovation exposition: Unity in diversity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, M.S.; Heasley, C.

    1995-12-31

    Abstracts are presented from the 161st National Meeting of the American Association for the Advancement of Science. Topics include environmental technologies, genetics, physical science research, information management, nuclear weapon issues, and education. Individual topics have been processed separately for the United States Department of Energy databases.

  1. Bioinformatics: A History of Evolution "In Silico"

    ERIC Educational Resources Information Center

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…

  2. Computer Security Products Technology Overview

    DTIC Science & Technology

    1988-10-01

    The security products this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, and guards and gateways. A single product may provide only a portion of the needed protection, for example a password scheme, a file protection mechanism, or even a secure database management system.

  3. An Introduction to Database Management Systems.

    ERIC Educational Resources Information Center

    Warden, William H., III; Warden, Bette M.

    1984-01-01

    Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…

  4. Coverage of Google Scholar, Scopus, and Web of Science: a case study of the h-index in nursing.

    PubMed

    De Groote, Sandra L; Raszewski, Rebecca

    2012-01-01

    This study compares the articles cited in CINAHL, Scopus, Web of Science (WOS), and Google Scholar and the h-index ratings provided by Scopus, WOS, and Google Scholar. The publications of 30 College of Nursing faculty at a large urban university were examined. Searches by author name were executed in Scopus, WOS, and POP (Publish or Perish, which searches Google Scholar), and the h-index for each author from each database was recorded. In addition, the citing articles of their published articles were imported into a bibliographic management program. These data were used to determine an aggregated h-index for each author. Scopus, WOS, and Google Scholar provided different h-index ratings for authors and each database found unique and duplicate citing references. More than one tool should be used to calculate the h-index for nursing faculty because one tool alone cannot be relied on to provide a thorough assessment of a researcher's impact. If researchers are interested in a comprehensive h-index, they should aggregate the citing references located by WOS and Scopus. Because h-index rankings differ among databases, comparisons between researchers should be done only within a specified database. Copyright © 2012 Elsevier Inc. All rights reserved.
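    The h-index itself is a simple computation once citing references from the databases have been aggregated and deduplicated. A minimal sketch, assuming papers are keyed by identifier and citing articles are deduplicated by identifier across WOS and Scopus:

```python
# h-index over citation counts aggregated from two databases.
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Merge citing-article sets per paper, removing duplicates by identifier:
wos    = {"paper1": {"a", "b", "c"}, "paper2": {"d"}}
scopus = {"paper1": {"b", "c", "e"}, "paper2": {"d", "f"}}
merged = {p: wos.get(p, set()) | scopus.get(p, set()) for p in wos | scopus}
print(h_index(len(c) for c in merged.values()))  # -> 2
```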

  5. A Comprehensive Data Architecture for Multi-Disciplinary Marine Mammal Research

    NASA Astrophysics Data System (ADS)

    Palacios, D. M.; Follett, T.; Winsor, M.; Mate, B. R.

    2016-02-01

    The Oregon State University Marine Mammal Institute (MMI) comprises five research laboratories, each with specific research objectives, technological approaches, and data requirements. Among the types of data under management are individual photo-ID and field observations, telemetry (e.g., locations, dive characteristics, temperature, acoustics), genetics (and relatedness), stable isotope and toxicology assays, and remotely sensed environmental data. Coordinating data management that facilitates collaboration and comparative exploration among different researchers has been a longstanding challenge for our groups as well as for the greater wildlife research community. Research data are commonly stored locally in flat files or spreadsheets, with copies made and analyses performed with various packages without any common standards for interoperability, becoming a potential source of error. Database design, where it exists, is frequently arrived at ad-hoc. New types of data are generally tacked on when technological advances present them. A data management solution that can address these issues should meet the following requirements: be scalable, modular (i.e., able to incorporate new types of data as they arise), incorporate spatiotemporal dimensions, and be compliant with existing data standards such as DarwinCore. The MMI has developed a data architecture that allows the incorporation of any type of animal-associated data into a modular and portable format that can be integrated with any other dataset sharing the core format. It allows browsing, querying and visualization across any of the attributes that can be associated with individual animals, groups, sensors, or environmental datasets. We have implemented this architecture in an open-source geo-enabled relational database system (PostgreSQL, PostGIS), and have designed a suite of software tools (Python, R) to load, preprocess, visualize, analyze, and export data. This architecture could benefit organizations with similar data challenges.
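    The "shared core plus pluggable modules" idea can be sketched as a core occurrence record carrying module-specific attribute maps. Field names echo Darwin Core terms, but the classes are illustrative assumptions, not the MMI implementation.

```python
# Core animal-occurrence record with pluggable per-discipline modules.
from dataclasses import dataclass, field

@dataclass
class CoreOccurrence:
    occurrence_id: str
    scientific_name: str
    event_date: str            # ISO 8601 timestamp
    decimal_latitude: float
    decimal_longitude: float
    modules: dict = field(default_factory=dict)   # pluggable extensions

telemetry_fix = CoreOccurrence(
    "occ-0001", "Eschrichtius robustus", "2015-03-02T11:40:00Z",
    44.62, -124.05,
    modules={
        "telemetry": {"dive_depth_m": 42.0, "sst_c": 11.3},
        "genetics":  {"haplotype": "Hap-07"},
    },
)
# Any module can be queried uniformly through the shared core record:
print(telemetry_fix.modules.get("telemetry", {}).get("dive_depth_m"))
```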

  6. The HyMeX database

    NASA Astrophysics Data System (ADS)

    Brissebrat, Guillaume; Mastrorillo, Laurence; Ramage, Karim; Boichard, Jean-Luc; Cloché, Sophie; Fleury, Laurence; Klenov, Ludmila; Labatut, Laurent; Mière, Arnaud

    2013-04-01

    The international HyMeX (HYdrological cycle in the Mediterranean EXperiment) project aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean, with emphasis on high-impact weather events, inter-annual to decadal variability of the Mediterranean coupled system, and associated trends in the context of global change. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data, modelling studies, as well as post event field surveys and value-added products processing. Therefore HyMeX database incorporates various dataset types from different disciplines, either operational or research. The database relies on a strong collaboration between OMP and IPSL data centres. Field data, which are 1D time series, maps or pictures, are managed by OMP team while gridded data (satellite products, model outputs, radar data...) are managed by IPSL team. At present, the HyMeX database contains about 150 datasets, including 80 hydrological, meteorological, ocean and soil in situ datasets, 30 radar datasets, 15 satellite products, 15 atmosphere, ocean and land surface model outputs from operational (re-)analysis or forecasts and from research simulations, and 5 post event survey datasets. The data catalogue complies with international standards (ISO 19115; INSPIRE; Directory Interchange Format; Global Change Master Directory Thesaurus). It includes all the datasets stored in the HyMeX database, as well as external datasets relevant for the project. All the data, whatever the type is, are accessible through a single gateway. The database website http://mistrals.sedoo.fr/HyMeX offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments or by platform type. - Forms to document observations or products that will be provided to the database. - A shopping-cart web interface to order in situ data files. - Ftp facilities to access gridded data. The website will soon propose new facilities. Many in situ datasets have been homogenized and inserted in a relational database yet, in order to enable more accurate data selection and download of different datasets in a shared format. Interoperability between the two data centres will be enhanced by the OpenDAP communication protocol associated with the Thredds catalogue software, which may also be implemented in other data centres that manage data of interest for the HyMeX project. In order to meet the operational needs for the HyMeX 2012 campaigns, a day-to-day quick look and report display website has been developed too: http://sop.hymex.org. It offers a convenient way to browse meteorological conditions and data during the campaign periods.

  7. Lessons learned while building the Deepwater Horizon Database: Toward improved data sharing in coastal science

    NASA Astrophysics Data System (ADS)

    Thessen, Anne E.; McGinnis, Sean; North, Elizabeth W.

    2016-02-01

    Process studies and coupled-model validation efforts in geosciences often require integration of multiple data types across time and space. For example, improved prediction of hydrocarbon fate and transport is an important societal need which fundamentally relies upon synthesis of oceanography and hydrocarbon chemistry. Yet, there are no publicly accessible databases which integrate these diverse data types in a georeferenced format, nor are there guidelines for developing such a database. The objective of this research was to analyze the process of building one such database to provide baseline information on data sources and data sharing and to document the challenges and solutions that arose during this major undertaking. The resulting Deepwater Horizon Database was approximately 2.4 GB in size and contained over 8 million georeferenced data points collected from industry, government databases, volunteer networks, and individual researchers. The major technical challenges that were overcome were reconciliation of terms, units, and quality flags which were necessary to effectively integrate the disparate data sets. Assembling this database required the development of relationships with individual researchers and data managers which often involved extensive e-mail contacts. The average number of emails exchanged per data set was 7.8. Of the 95 relevant data sets that were discovered, 38 (40%) were obtained, either in whole or in part. Over one third (36%) of the requests for data went unanswered. The majority of responses were received after the first request (64%) and within the first week of the first request (67%). Although fewer than half of the potentially relevant datasets were incorporated into the database, the level of sharing (40%) was high compared to some other disciplines where sharing can be as low as 10%. Our suggestions for building integrated databases include budgeting significant time for e-mail exchanges, being cognizant of the cost versus benefits of pursuing reticent data providers, and building trust through clear, respectful communication and with flexible and appropriate attributions.
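    Reconciling terms, units and quality flags, the main technical hurdle reported above, is essentially a mapping step applied to every provider record. The mappings in this sketch are invented examples, not the project's actual tables.

```python
# Harmonize one provider's record onto a shared vocabulary, unit and flag set.
UNIT_FACTORS = {("ug/L", "mg/L"): 1e-3, ("ppb", "mg/L"): 1e-3,
                ("mg/L", "mg/L"): 1.0}
TERM_MAP = {"tot. PAH": "total_pah", "Total PAHs": "total_pah"}
FLAG_MAP = {"J": "estimated", "U": "non_detect", "": "accepted"}

def harmonize(analyte, value, unit, flag, target_unit="mg/L"):
    """Map a raw record onto shared terms, target units and quality labels."""
    return {
        "analyte": TERM_MAP.get(analyte, analyte),
        "value": value * UNIT_FACTORS[(unit, target_unit)],
        "unit": target_unit,
        "quality": FLAG_MAP.get(flag, "unknown"),
    }

print(harmonize("tot. PAH", 250.0, "ug/L", "J"))
# {'analyte': 'total_pah', 'value': 0.25, 'unit': 'mg/L', 'quality': 'estimated'}
```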

  8. Research mapping in North Sumatra based on Scopus

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Sitepu, R.; Rosmayati; Bakti, D.; Hardi, S. M.

    2018-02-01

    Research is needed to improve the capacity of human resources to manage natural resources for human well-being. Research is done by institutions such as universities or research institutes, but a picture of the research related to human welfare is not easy to obtain. Since research can be evidenced through scientific publications, databases of scientific publications can be used to observe research behaviour. Research mapping in North Sumatra needs to be done to see how well the research conducted matches development needs in North Sumatra; as an illustration, the profile of Universitas Sumatera Utara shows that the research conducted has 60% of its strength in the exact sciences.

  9. Software Tools Streamline Project Management

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Three innovative software inventions from Ames Research Center (NETMARK, Program Management Tool, and Query-Based Document Management) are finding their way into NASA missions as well as industry applications. The first, NETMARK, is a program that enables integrated searching of data stored in a variety of databases and documents, meaning that users no longer have to look in several places for related information. NETMARK allows users to search and query information across all of these sources in one step. This cross-cutting capability in information analysis has exponentially reduced the amount of time needed to mine data from days or weeks to mere seconds. NETMARK has been used widely throughout NASA, enabling this automatic integration of information across many documents and databases. NASA projects that use NETMARK include the internal reporting system and project performance dashboard, Erasmus, NASA's enterprise management tool, which enhances organizational collaboration and information sharing through document routing and review; the Integrated Financial Management Program; International Space Station Knowledge Management; Mishap and Anomaly Information Reporting System; and management of the Mars Exploration Rovers. Approximately $1 billion worth of NASA's projects are currently managed using Program Management Tool (PMT), which is based on NETMARK. PMT is a comprehensive, Web-enabled application tool used to assist program and project managers within NASA enterprises in monitoring, disseminating, and tracking the progress of program and project milestones and other relevant resources. The PMT consists of an integrated knowledge repository built upon advanced enterprise-wide database integration techniques and the latest Web-enabled technologies. The current system is in a pilot operational mode allowing users to automatically manage, track, define, update, and view customizable milestone objectives and goals. The third software invention, Query-Based Document Management (QBDM), is a tool that enables content or context searches, either simple or hierarchical, across a variety of databases. The system enables users to specify notification subscriptions where they associate "contexts of interest" and "events of interest" to one or more documents or collections of documents. Based on these subscriptions, users receive notification when the events of interest occur within the contexts of interest for the associated document or collections of documents. Users can also associate at least one notification time as part of the notification subscription, with at least one option for the time period of notifications.
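    A notification subscription in the spirit of QBDM pairs a document, a "context of interest" and an "event of interest" with a list of subscribers. The data model below is an assumption for illustration, not NASA's implementation.

```python
# Toy subscription store: (document, context, event) -> list of users.
from collections import defaultdict

subscriptions = defaultdict(list)

def subscribe(user, document, context, event):
    """Register a user's interest in an event within a document context."""
    subscriptions[(document, context, event)].append(user)

def notify(document, context, event):
    """Return users to notify when an event occurs in a context."""
    return subscriptions.get((document, context, event), [])

subscribe("alice", "MER-ops-plan", "thermal", "updated")
subscribe("bob",   "MER-ops-plan", "thermal", "updated")
print(notify("MER-ops-plan", "thermal", "updated"))  # ['alice', 'bob']
```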

  10. Outcomes of an investment in administrative data infrastructure: An example of capacity building at the Manitoba Centre for Health Policy.

    PubMed

    Orr, Justine; Smith, Mark; Burchill, Charles; Katz, Alan; Fransoo, Randy

    2016-12-27

    Using the Manitoba Centre for Health Policy as an example, this commentary discusses how even small investments in population health data can create a multitude of research benefits. The authors highlight that through infrastructure development such as acquiring databases, facilitating access to data and developing data management practices, new, innovative research can be achieved at relatively low cost.

  11. Differences among Major Taxa in the Extent of Ecological Knowledge across Four Major Ecosystems

    PubMed Central

    Fisher, Rebecca; Knowlton, Nancy; Brainard, Russell E.; Caley, M. Julian

    2011-01-01

    Existing knowledge shapes our understanding of ecosystems and is critical for ecosystem-based management of the world's natural resources. Typically this knowledge is biased among taxa, with some taxa far better studied than others, but the extent of this bias is poorly known. In conjunction with the publicly available World Register of Marine Species database (WoRMS) and one of the world's premier electronic scientific literature databases (Web of Science®), a text mining approach is used to examine the distribution of existing ecological knowledge among taxa in coral reef, mangrove, seagrass and kelp bed ecosystems. We found that for each of these ecosystems, most research has been limited to a few groups of organisms. While this bias clearly reflects the perceived importance of some taxa as commercially or ecologically valuable, the relative lack of research on other taxonomic groups highlights the problem that some key taxa, and the associated ecosystem processes they affect, may be poorly understood or completely ignored. The approach outlined here could be applied to any type of ecosystem for analyzing previous research effort and identifying knowledge gaps in order to improve ecosystem-based conservation and management. PMID:22073172
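    The tallying step of such a text-mining approach can be sketched as counting, per taxon name, how many abstracts mention it. Real pipelines match WoRMS names with synonym and spelling handling; the name list and abstracts here are toy inputs.

```python
# Count abstracts mentioning each taxon name (toy corpus, exact word match).
import re
from collections import Counter

taxa = ["Acropora", "Rhizophora", "Zostera", "Macrocystis"]
abstracts = [
    "Bleaching responses of Acropora assemblages on the reef crest...",
    "Growth of Zostera marina and Acropora under nutrient loading...",
    "Sediment trapping by Rhizophora mangle forests...",
]
effort = Counter()
for text in abstracts:
    for taxon in taxa:
        if re.search(rf"\b{taxon}\b", text):
            effort[taxon] += 1
print(effort.most_common())
# [('Acropora', 2), ('Zostera', 1), ('Rhizophora', 1)]
```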

  12. Sports medicine clinical trial research publications in academic medical journals between 1996 and 2005: an audit of the PubMed MEDLINE database.

    PubMed

    Nichols, A W

    2008-11-01

    Objective: To identify sports medicine-related clinical trial research articles in the PubMed MEDLINE database published between 1996 and 2005 and conduct a review and analysis of topics of research, experimental designs, journals of publication and the internationality of authorships. Hypothesis: Sports medicine research is international in scope with improving study methodology and an evolution of topics. Study design: Structured review of articles identified in a search of a large electronic medical database. Data source: PubMed MEDLINE database. Articles reviewed: Sports medicine-related clinical research trials published between 1996 and 2005, with review and analysis of articles that met inclusion criteria. Methods: Articles were examined for study topics, research methods, experimental subject characteristics, journal of publication, lead authors and journal countries of origin and language of publication. Results: The search retrieved 414 articles, of which 379 (345 English language and 34 non-English language) met the inclusion criteria. The number of publications increased steadily during the study period. Randomised clinical trials were the most common study type and the "diagnosis, management and treatment of sports-related injuries and conditions" was the most popular study topic. The knee, ankle/foot and shoulder were the most frequent anatomical sites of study. Soccer players and runners were the favourite study subjects. The American Journal of Sports Medicine had the highest number of publications and shared the greatest international diversity of authorships with the British Journal of Sports Medicine. The USA, Australia, Germany and the UK produced a good number of the lead authorships. In all, 91% of articles and 88% of journals were published in English. Conclusions: Sports medicine-related research is internationally diverse, clinical trial publications are increasing and the sophistication of research design may be improving.

  13. Prevention of musculoskeletal disorders within management systems: A scoping review of practices, approaches, and techniques.

    PubMed

    Yazdani, Amin; Neumann, W Patrick; Imbeau, Daniel; Bigelow, Philip; Pagell, Mark; Wells, Richard

    2015-11-01

    The purpose of this study was to identify and summarize the current research evidence on approaches to preventing musculoskeletal disorders (MSD) within Occupational Health and Safety Management Systems (OHSMS). Databases in business, engineering, and health and safety were searched and 718 potentially relevant publications were identified and examined for their relevance. Twenty-one papers met the selection criteria and were subjected to thematic analysis. There was very little literature describing the integration of MSD risk assessment and prevention into management systems. This lack of information may isolate MSD prevention, leading to difficulties in preventing these disorders at an organizational level. The findings of this review argue for further research to integrate MSD prevention into management systems and to evaluate the effectiveness of the approach. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. A geological model for the management of subsurface data in the urban environment of Barcelona and surrounding area

    NASA Astrophysics Data System (ADS)

    Vázquez-Suñé, Enric; Ángel Marazuela, Miguel; Velasco, Violeta; Diviu, Marc; Pérez-Estaún, Andrés; Álvarez-Marrón, Joaquina

    2016-09-01

    The overdevelopment of cities since the industrial revolution has shown the need to incorporate a sound geological knowledge in the management of required subsurface infrastructures and in the assessment of increasingly needed groundwater resources. Additionally, the scarcity of outcrops and the technical difficulty to conduct underground exploration in urban areas highlights the importance of implementing efficient management plans that deal with the legacy of heterogeneous subsurface information. To deal with these difficulties, a methodology has been proposed to integrate all the available spatio-temporal data into a comprehensive spatial database and a set of tools that facilitates the analysis and processing of the existing and newly added data for the city of Barcelona (NE Spain). Here we present the resulting actual subsurface 3-D geological model that incorporates and articulates all the information stored in the database. The methodology applied to Barcelona benefited from a good collaboration between administrative bodies and researchers that enabled the realization of a comprehensive geological database despite logistic difficulties. Currently, the public administration and also private sectors both benefit from the geological understanding acquired in the city of Barcelona, for example, when preparing the hydrogeological models used in groundwater assessment plans. The methodology further facilitates the continuous incorporation of new data in the implementation and sustainable management of urban groundwater, and also contributes to significantly reducing the costs of new infrastructures.

  15. Aquatic models, genomics and chemical risk management.

    PubMed

    Cheng, Keith C; Hinton, David E; Mattingly, Carolyn J; Planchart, Antonio

    2012-01-01

    The 5th Aquatic Animal Models for Human Disease meeting follows four previous meetings (Nairn et al., 2001; Schmale, 2004; Schmale et al., 2007; Hinton et al., 2009) in which advances in aquatic animal models for human disease research were reported, and community discussion of future direction was pursued. At this meeting, discussion at a workshop entitled Bioinformatics and Computational Biology with Web-based Resources (20 September 2010) led to an important conclusion: Aquatic model research using feral and experimental fish, in combination with web-based access to annotated anatomical atlases and toxicological databases, yields data that advance our understanding of human gene function, and can be used to facilitate environmental management and drug development. We propose here that the effects of genes and environment are best appreciated within an anatomical context - the specifically affected cells and organs in the whole animal. We envision the use of automated, whole-animal imaging at cellular resolution and computational morphometry facilitated by high-performance computing and automated entry into toxicological databases, as anchors for genetic and toxicological data, and as connectors between human and model system data. These principles should be applied to both laboratory and feral fish populations, which have been virtually irreplaceable sentinels for environmental contamination that results in human morbidity and mortality. We conclude that automation, database generation, and web-based accessibility, facilitated by genomic/transcriptomic data and high-performance and cloud computing, will potentiate the unique and potentially key roles that aquatic models play in advancing systems biology, drug development, and environmental risk management. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. SeedStor: A Germplasm Information Management System and Public Database.

    PubMed

    Horler, R S P; Turner, A S; Fretter, P; Ambrose, M

    2018-01-01

    SeedStor (https://www.seedstor.ac.uk) acts as the publicly available database for the seed collections held by the Germplasm Resources Unit (GRU) based at the John Innes Centre, Norwich, UK. The GRU is a national capability supported by the Biotechnology and Biological Sciences Research Council (BBSRC). The GRU curates germplasm collections of a range of temperate cereal, legume and Brassica crops and their associated wild relatives, as well as precise genetic stocks, near-isogenic lines and mapping populations. With >35,000 accessions, the GRU forms part of the UK's plant conservation contribution to the Multilateral System (MLS) of the International Treaty for Plant Genetic Resources for Food and Agriculture (ITPGRFA) for wheat, barley, oat and pea. SeedStor is a fully searchable system that allows our various collections to be browsed species by species through to complicated multipart phenotype criteria-driven queries. The results from these searches can be downloaded for later analysis or used to order germplasm via our shopping cart. The user community for SeedStor is the plant science research community, plant breeders, specialist growers, hobby farmers and amateur gardeners, and educationalists. Furthermore, SeedStor is much more than a database; it has been developed to act internally as a Germplasm Information Management System that allows team members to track and process germplasm requests, determine regeneration priorities, handle cost recovery and Material Transfer Agreement paperwork, manage the Seed Store holdings and easily report on a wide range of the aforementioned tasks. © The Author(s) 2017. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists.

  17. The AMMA database

    NASA Astrophysics Data System (ADS)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Collaboration between data producers and users, and the mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data from both data centres through a single web portal. This website is composed of different modules: - Registration: forms to register, read and sign the data use charter when a user visits for the first time. - Data access interface: a friendly tool for building a data extraction request by selecting various criteria such as location, time and parameters; the request can concern local, satellite and model data. - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard and free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through either of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.
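    The criteria-based extraction interface described above (location, time, parameters) can be sketched as a filter over catalogue entries. The catalogue records and field names below are invented for illustration.

```python
# Filter a toy data catalogue by parameter, bounding box and date.
from datetime import date

catalogue = [
    {"dataset": "radiosonde_niamey", "parameter": "humidity",
     "lat": 13.5, "lon": 2.1,
     "start": date(2006, 6, 1), "end": date(2006, 9, 30)},
    {"dataset": "rain_gauge_ouaga", "parameter": "precipitation",
     "lat": 12.4, "lon": -1.5,
     "start": date(1950, 1, 1), "end": date(2009, 12, 31)},
]

def extract(parameter=None, bbox=None, day=None):
    """Select entries matching optional criteria.

    bbox is (lat_min, lat_max, lon_min, lon_max)."""
    hits = []
    for entry in catalogue:
        if parameter and entry["parameter"] != parameter:
            continue
        if bbox and not (bbox[0] <= entry["lat"] <= bbox[1]
                         and bbox[2] <= entry["lon"] <= bbox[3]):
            continue
        if day and not (entry["start"] <= day <= entry["end"]):
            continue
        hits.append(entry["dataset"])
    return hits

print(extract(parameter="humidity", day=date(2006, 7, 15)))
# ['radiosonde_niamey']
```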

  18. Enhancing Chemical Inventory Management in Laboratory through a Mobile-Based QR Code Tag

    NASA Astrophysics Data System (ADS)

    Shukran, M. A. M.; Ishak, M. S.; Abdullah, M. N.

    2017-08-01

    The demand for an inventory management system that can provide a wealth of useful information from a single scan has made barcode-based laboratory inventory management inadequate. Since barcode technology cannot overcome this problem and is not capable of providing the information needed to manage the chemicals in the laboratory, employing QR code technology is the best solution. In this research, the main idea is to develop a standalone application running with its own database that is periodically synchronized with the inventory software hosted by the computer and connected to a specialized network as well. The first process required to establish this centralized system is to determine all inventory available in the chemical laboratory by referring to the documented data in order to develop the database. Several customizations and enhancements were made to the open source QR code technology to ensure the developed application is dedicated to its main purposes. At the end of the research, it was demonstrated that the system is able to track the position of all inventory items and show real-time information about the scanned chemical labels. This paper gives an overview of the QR tag inventory system that was developed and its implementation at the National Defence University of Malaysia's (NDUM) chemical laboratory.
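    Generating a chemical-inventory QR label is straightforward with the open-source qrcode package (installable via pip install qrcode[pil]); the payload fields below are illustrative, not the record format of the NDUM system.

```python
# Encode one chemical-inventory record as a QR label image.
import json

import qrcode  # pip install qrcode[pil]

label = {
    "id": "CHEM-00042",        # illustrative inventory identifier
    "name": "Acetone",
    "cas": "67-64-1",
    "location": "Cabinet B, shelf 2",
    "expiry": "2026-01-31",
}
# Encode the record as JSON so a single scan returns all inventory fields:
img = qrcode.make(json.dumps(label))
img.save("CHEM-00042.png")
```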

  19. Extending GIS Technology to Study Karst Features of Southeastern Minnesota

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Tipping, R. G.; Alexander, E. C.; Alexander, S. C.

    2001-12-01

    This paper summarizes ongoing research on karst feature distribution of southeastern Minnesota. The main goals of this interdisciplinary research are: 1) to look for large-scale patterns in the rate and distribution of sinkhole development; 2) to conduct statistical tests of hypotheses about the formation of sinkholes; 3) to create management tools for land-use managers and planners; and 4) to deliver geomorphic and hydrogeologic criteria for making scientifically valid land-use policies and ethical decisions in karst areas of southeastern Minnesota. Existing county and sub-county karst feature datasets of southeastern Minnesota have been assembled into a large GIS-based database capable of analyzing the entire data set. The central database management system (DBMS) is a relational GIS-based system interacting with three modules: GIS, statistical and hydrogeologic modules. ArcInfo and ArcView were used to generate a series of 2D and 3D maps depicting karst feature distributions in southeastern Minnesota. IRIS Explorer™ was used to produce high-quality 3D maps and animations using data exported from the GIS-based database. Nearest-neighbor analysis has been used to test sinkhole distributions in different topographic and geologic settings. All nearest-neighbor analyses to date indicate that sinkholes in southeastern Minnesota are not evenly distributed in this area (i.e., they tend to be clustered). More detailed statistical methods such as cluster analysis, histograms, probability estimation, correlation and regression have been used to study the spatial distributions of some mapped karst features of southeastern Minnesota. A sinkhole probability map for Goodhue County has been constructed based on sinkhole distribution, bedrock geology, depth to bedrock, GIS buffer analysis and nearest-neighbor analysis. A series of karst features for Winona County including sinkholes, springs, seeps, stream sinks and outcrops has been mapped and entered into the Karst Feature Database of Southeastern Minnesota. The Karst Feature Database of Winona County is being expanded to include all the mapped karst features of southeastern Minnesota. Air photos from the 1930s to the 1990s of the Spring Valley Caverns area in Fillmore County were scanned and geo-referenced into our GIS system. This technique has proved very useful for identifying sinkholes and studying the rate of sinkhole development.
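    A standard way to quantify the clustering reported by the nearest-neighbor analyses is the Clark-Evans ratio R (observed mean nearest-neighbor distance divided by the expectation under complete spatial randomness; R < 1 indicates clustering). This sketch assumes projected sinkhole coordinates in metres and a crude bounding-box area estimate.

```python
# Clark-Evans nearest-neighbor ratio for a toy clustered point pattern.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Toy "sinkholes": two tight clusters rather than a uniform scatter.
pts = np.vstack([rng.normal((1000, 1000), 50, (40, 2)),
                 rng.normal((4000, 3000), 50, (40, 2))])

tree = cKDTree(pts)
d, _ = tree.query(pts, k=2)        # k=2: nearest neighbour besides self
observed = d[:, 1].mean()

area = np.ptp(pts[:, 0]) * np.ptp(pts[:, 1])   # crude study-area estimate
expected = 0.5 / np.sqrt(len(pts) / area)      # CSR expectation
print(f"Clark-Evans R = {observed / expected:.2f}")  # << 1 => clustered
```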

  20. Development of a bird banding recapture database

    USGS Publications Warehouse

    Tautin, J.; Doherty, P.F.; Metras, L.

    2001-01-01

    Recaptures (and resightings) constitute the vast majority of post-release data from banded or otherwise marked nongame birds. A powerful suite of contemporary analytical models is available for using recapture data to estimate population size, survival rates and other parameters, and many banders collect recapture data for their project-specific needs. However, despite widely recognized, broader programmatic needs for more and better data, banders' recapture data are not centrally deposited and made available for use by others. To address this need, the US Bird Banding Laboratory, the Canadian Bird Banding Office and the Georgia Cooperative Fish and Wildlife Research Unit are developing a bird banding recapture database. In this poster we discuss the critical steps in developing the database, including: determining exactly which recapture data should be included; developing a standard record format and structure for the database; developing electronic means for collecting, vetting and disseminating the data; and, most importantly, developing metadata descriptions and individual data set profiles to facilitate the user's selection of appropriate analytical models. We provide examples of individual data sets to be included in the database, and we assess the feasibility of developing a prescribed program for obtaining recapture data from banders who do not presently collect them. It is expected that the recapture database eventually will contain millions of records made available publicly for a variety of avian research and management purposes.
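    The simplest use of such recapture data is a two-occasion abundance estimate; the sketch below uses the Lincoln-Petersen estimator with Chapman's correction. Analyses of real banding data use richer models (e.g., Cormack-Jolly-Seber), but the inputs are the kind of records this database would hold.

```python
# Two-occasion mark-recapture abundance estimate (Chapman's correction).
def chapman_estimate(marked_first, caught_second, recaptured):
    """Estimate population size from a two-occasion mark-recapture study."""
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1

# 200 birds banded, 150 caught later, 30 of those already banded:
print(round(chapman_estimate(200, 150, 30)))  # -> 978
```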

  1. MECP2 variation in Rett syndrome-An overview of current coverage of genetic and phenotype data within existing databases.

    PubMed

    Townend, Gillian S; Ehrhart, Friederike; van Kranen, Henk J; Wilkinson, Mark; Jacobsen, Annika; Roos, Marco; Willighagen, Egon L; van Enckevort, David; Evelo, Chris T; Curfs, Leopold M G

    2018-04-27

    Rett syndrome (RTT) is a rare monogenic disorder that causes severe neurological problems. In most cases, it results from a loss-of-function mutation in the gene encoding methyl-CpG-binding protein 2 (MECP2). Currently, about 900 unique MECP2 variations (benign and pathogenic) have been identified, and it is suspected that different mutations contribute to different levels of disease severity. For researchers and clinicians, it is important that genotype-phenotype information is available to identify disease-causing mutations for diagnosis, to aid in clinical management of the disorder, and to provide counseling for parents. In this study, 13 genotype-phenotype databases were surveyed for their general functionality and availability of RTT-specific MECP2 variation data. For each database, we investigated findability and interoperability alongside practical user functionality, and the type and amount of genetic and phenotype data. The main conclusions are that, as well as it being challenging to find these databases and the specific MECP2 variants held within them, interoperability is as yet poorly developed, and searching across databases requires considerable effort. Nevertheless, we found several thousand online database entries for MECP2 variations and their associated phenotypes, diagnosis, or predicted variant effects, which is a good starting point for researchers and clinicians who want to provide, annotate, and use the data. © 2018 The Authors. Human Mutation published by Wiley Periodicals, Inc.

  2. Database Systems. Course Three. Information Systems Curriculum.

    ERIC Educational Resources Information Center

    O'Neil, Sharon Lund; Everett, Donna R.

    This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…

  3. 23 CFR 972.204 - Management systems requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...

  4. 23 CFR 972.204 - Management systems requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...

  5. 23 CFR 972.204 - Management systems requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...

  6. 23 CFR 972.204 - Management systems requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...

  7. The Research on Safety Management Information System of Railway Passenger Based on Risk Management Theory

    NASA Astrophysics Data System (ADS)

    Zhu, Wenmin; Jia, Yuanhua

    2018-01-01

    Based on risk management theory and the PDCA cycle model, the safety production requirements of railway passenger transport are analyzed, and the establishment of a security risk assessment team is proposed to manage risk through fault tree analysis (FTA) combined with the Delphi method, from both qualitative and quantitative perspectives. A safety production committee is also established to carry out performance appraisal, further ensuring the correctness of risk management results, optimizing safety management business processes, and improving risk management capabilities. The basic framework and risk information database of the risk management information system for railway passenger transport safety are designed using Ajax, Web Services, and SQL technologies. The system implements risk management, performance appraisal, and data management functions, and provides an efficient and convenient information management platform for railway passenger safety managers.
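
    The abstract names the implementation technologies but shows no schema. A minimal sketch of what the risk information table might look like follows, with SQLite standing in for the SQL platform the paper names; the table and column names are hypothetical.

        # Hypothetical risk-information table for a passenger-safety
        # system; SQLite stands in for the SQL platform named in the paper.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE risk_item (
            risk_id    INTEGER PRIMARY KEY,
            hazard     TEXT NOT NULL,      -- hazard identified via FTA/Delphi
            likelihood INTEGER CHECK (likelihood BETWEEN 1 AND 5),
            severity   INTEGER CHECK (severity BETWEEN 1 AND 5),
            status     TEXT DEFAULT 'open' -- tracked through the PDCA cycle
        );
        """)
        conn.execute(
            "INSERT INTO risk_item (hazard, likelihood, severity) "
            "VALUES (?, ?, ?)",
            ("platform overcrowding at peak hours", 4, 3),
        )
        # Rank open risks by a simple likelihood x severity score
        for hazard, score in conn.execute(
            "SELECT hazard, likelihood * severity FROM risk_item "
            "WHERE status = 'open' ORDER BY 2 DESC"
        ):
            print(hazard, score)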

  8. Searching for religion and mental health studies required health, social science, and grey literature databases.

    PubMed

    Wright, Judy M; Cottrell, David J; Mir, Ghazala

    2014-07-01

    To determine the optimal databases to search for studies of faith-sensitive interventions for treating depression, we examined 23 health, social science, religion, and grey literature databases that had been searched for an evidence synthesis. Databases were prioritized by yield of (1) search results, (2) potentially relevant references identified during screening, (3) included references contained in the synthesis, and (4) included references that were available in the database. We assessed the impact of databases beyond MEDLINE, EMBASE, and PsycINFO by their ability to supply studies identifying new themes and issues. We also identified pragmatic workload factors that influence database selection. PsycINFO was the best-performing database on all priority lists. ArabPsyNet, CINAHL, Dissertations and Theses, EMBASE, Global Health, Health Management Information Consortium, MEDLINE, PsycINFO, and Sociological Abstracts were essential for our searches to retrieve the included references. Citation tracking activities and the personal library of one of the research teams contributed a significant number of unique, relevant references. Religion studies databases (Am Theo Lib Assoc, FRANCIS) did not provide unique, relevant references. Literature searches for reviews and evidence syntheses of religion and health studies should include social science, grey literature, and non-Western databases, as well as personal libraries and citation tracking activities. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Tamura, Haruki; Mezaki, Koji

    This paper describes the fundamental approach to technical information management at Mitsubishi Heavy Industries, Ltd., and the present status of those activities. It then introduces the background and history of the development of the Mitsubishi Heavy Industries Technical Information Retrieval System (MARON), which began service in May 1985, along with the problems encountered and the countermeasures taken against them. The system deals with databases covering information common to the whole company (in-house research and technical reports, holdings information for books, journals, and so on) and local information held in each business division or department. Anybody from any division can access these databases through the company-wide network. An in-house interlibrary loan subsystem called Orderentry supports the acquisition of original materials.

  10. Team X Spacecraft Instrument Database Consolidation

    NASA Technical Reports Server (NTRS)

    Wallenstein, Kelly A.

    2005-01-01

    In the past decade, many changes have been made to Team X's process of designing each spacecraft, with the purpose of making the overall procedure more efficient over time. One such improvement is the use of information databases from previous missions, designs, and research. By referring to these databases, members of the design team can locate relevant instrument data and significantly reduce the total time they spend on each design. The files in these databases were stored in several different formats with various levels of accuracy. During the past two months, efforts have been made to combine and organize these files. The main focus was the Instruments department, where spacecraft subsystems are designed based on mission measurement requirements. A common database was developed for all instrument parameters using Microsoft Excel, to minimize the time and confusion experienced when searching through files stored in several different formats and locations. Organizing this collection has made the files within it more easily searchable. Additionally, the new Excel database offers the option of importing its contents into a more capable database management system in the future. This potential for expansion enables the database to grow and acquire more search features as needed.
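
    The abstract notes that the Excel contents could later be imported into a more capable database management system; a hedged sketch of such a migration using pandas and SQLite follows (the file, sheet, and column names are hypothetical stand-ins for the Team X workbook).

        # Hypothetical migration of a consolidated Excel instrument
        # database into SQLite for indexed querying; names are illustrative.
        import sqlite3
        import pandas as pd

        df = pd.read_excel("instrument_database.xlsx", sheet_name="Instruments")
        with sqlite3.connect("instruments.db") as conn:
            df.to_sql("instruments", conn, if_exists="replace", index=False)
            conn.execute(
                "CREATE INDEX IF NOT EXISTS idx_type "
                "ON instruments (instrument_type)"
            )
            # Example search: instruments under a hypothetical mass budget
            rows = conn.execute(
                "SELECT name, mass_kg FROM instruments WHERE mass_kg < ?",
                (10.0,),
            ).fetchall()
        print(rows)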

  11. Regulatory administrative databases in FDA's Center for Biologics Evaluation and Research: convergence toward a unified database.

    PubMed

    Smith, Jeffrey K

    2013-04-01

    Regulatory administrative database systems within the Food and Drug Administration's (FDA) Center for Biologics Evaluation and Research (CBER) are essential to supporting its core mission as a regulatory agency. Such systems are used within FDA to manage information and processes surrounding the processing, review, and tracking of investigational and marketed product submissions. This is an area of increasing interest in the pharmaceutical industry and has been a topic at trade association conferences (Buckley 2012). Such databases in CBER are complex, not because of the type or relevance of the data to any particular scientific discipline, but because of the variety of regulatory submission types and processes the systems support using the data. Commonalities among different data domains of CBER's regulatory administrative databases are discussed. These commonalities have evolved enough to constitute real database convergence and provide a valuable asset for business process intelligence. Balancing review workload across staff, exploring areas of risk in review capacity, improving processes, and presenting a clear and comprehensive landscape of review obligations are just some of the opportunities of such intelligence. This convergence has occurred in the presence of the usual forces that tend to drive information technology (IT) systems development toward separate stovepipes and data silos. CBER has achieved a significant level of convergence through a gradual process, using a clear goal, agreed-upon development practices, and transparency of database objects, rather than through a single, discrete project or IT vendor solution. This approach offers a path forward for FDA systems toward a unified database.

  12. Making things happen through challenging goals: leader proactivity, trust, and business-unit performance.

    PubMed

    Crossley, Craig D; Cooper, Cecily D; Wernsing, Tara S

    2013-05-01

    Building on decades of research on the proactivity of individual performers, this study integrates research on goal setting and trust in leadership to examine manager proactivity and business unit sales performance in one of the largest sales organizations in the United States. Results of a moderated-mediation model suggest that proactive senior managers establish more challenging goals for their business units (N = 50), which in turn are associated with higher sales performance. We further found that employees' trust in the manager is a critical contingency variable that facilitates the relationship between challenging sales goals and subsequent sales performance. This research contributes to growing literatures on trust in leadership and proactivity by studying their joint effects at a district-unit level of analysis while identifying district managers' tendency to set challenging goals as a process variable that helps translate their proactivity into the collective performance of their units. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  13. The National Extreme Events Data and Research Center (NEED)

    NASA Astrophysics Data System (ADS)

    Gulledge, J.; Kaiser, D. P.; Wilbanks, T. J.; Boden, T.; Devarakonda, R.

    2014-12-01

    The Climate Change Science Institute at Oak Ridge National Laboratory (ORNL) is establishing the National Extreme Events Data and Research Center (NEED), with the goal of transforming how the United States studies and prepares for extreme weather events in the context of a changing climate. NEED will encourage the myriad, distributed extreme-events research communities to move toward the adoption of common practices and will develop a new database compiling global historical data on weather- and climate-related extreme events (e.g., heat waves, droughts, hurricanes) and related information about impacts, costs, recovery, and available research. Currently, extreme event information is not easy to access and is largely incompatible and inconsistent across web sites. NEED's database development will take into account differences in time frames, spatial scales, treatments of uncertainty, and other parameters and variables, and will leverage informatics tools developed at ORNL (i.e., the Metadata Editor [1] and Mercury [2]) to generate standardized, robust documentation for each database along with a web-searchable catalog. In addition, NEED will facilitate convergence on commonly accepted definitions and standards for extreme events data and will enable integrated analyses of coupled threats, such as hurricanes/sea-level rise/flooding and droughts/wildfires. Our goal and vision is that NEED will become the premier integrated resource for the general study of extreme events. References: [1] Devarakonda, Ranjeet, et al. "OME: Tool for generating and managing metadata to handle BigData." Big Data (Big Data), 2014 IEEE International Conference on. IEEE, 2014. [2] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94.

  14. Legacy2Drupal: Conversion of an existing relational oceanographic database to a Drupal 7 CMS

    NASA Astrophysics Data System (ADS)

    Work, T. T.; Maffei, A. R.; Chandler, C. L.; Groman, R. C.

    2011-12-01

    Content Management Systems (CMSs) such as Drupal provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have already designed and implemented customized schemas for their metadata. The NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) has ported an existing relational database containing oceanographic metadata, along with an existing interface coded in ColdFusion middleware, to a Drupal 7 Content Management System. This is an update on an effort described as a proof of concept in poster IN21B-1051, presented at AGU 2009. The BCO-DMO project has translated all the existing database tables, input forms, website reports, and other features present in the existing system into Drupal CMS features. The replacement features are made possible by the use of Drupal content types, CCK node-reference fields, a custom theme, and a number of other supporting modules. This presentation describes the process used to migrate content in the original BCO-DMO metadata database to Drupal 7, some problems encountered during migration, and the modules used to migrate the content successfully. Strategic use of Drupal 7 CMS features that enable three separate but complementary interfaces to oceanographic research metadata will also be covered: 1) a Drupal 7-powered user front end; 2) RESTful JSON web services, providing a MapServer interface to the metadata and data; and 3) a SPARQL interface to a semantic representation of the repository metadata, feeding a new faceted search capability currently under development. The existing BCO-DMO ontology, developed in collaboration with Rensselaer Polytechnic Institute's Tetherless World Constellation, makes strategic use of pre-existing ontologies and will be used to drive the semantically enabled faceted search capabilities planned for the site. At this point, the use of the semantic technologies included in the Drupal 7 core is anticipated. Using a public-domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 7 that are designed to support semantically enabled interfaces, will help prepare BCO-DMO and other science data repositories for interoperability between systems that serve ecosystem research data.
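
    The third interface above is a SPARQL endpoint; a minimal sketch of querying such a service over the standard SPARQL HTTP protocol follows (the endpoint URL and vocabulary are hypothetical, not BCO-DMO's actual service).

        # Hypothetical SPARQL query over HTTP; the endpoint and vocabulary
        # are illustrative, not the actual BCO-DMO service.
        import requests

        ENDPOINT = "https://example.org/sparql"
        QUERY = """
        SELECT ?dataset ?title WHERE {
          ?dataset a <http://purl.org/dc/dcmitype/Dataset> ;
                   <http://purl.org/dc/terms/title> ?title .
        } LIMIT 10
        """
        resp = requests.get(
            ENDPOINT,
            params={"query": QUERY},
            headers={"Accept": "application/sparql-results+json"},
            timeout=30,
        )
        resp.raise_for_status()
        for b in resp.json()["results"]["bindings"]:
            print(b["dataset"]["value"], "-", b["title"]["value"])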

  15. Combining new technologies for effective collection development: a bibliometric study using CD-ROM and a database management program.

    PubMed Central

    Burnham, J F; Shearer, B S; Wall, J C

    1992-01-01

    Librarians have used bibliometrics for many years to assess collections and to provide data for making selection and deselection decisions. With the advent of new technology--specifically, CD-ROM databases and reprint file database management programs--new cost-effective procedures can be developed. This paper describes a recent multidisciplinary study conducted by two library faculty members and one allied health faculty member to test a bibliometric method that used the MEDLINE and CINAHL databases on CD-ROM and the Papyrus database management program to produce a new collection development methodology. PMID:1600424

  16. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.
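
    To make the unit's guideline concrete: once record counts grow and lookups matter, an indexed database answers queries without the full scan a flat file requires. A small sketch with SQLite as a representative relational engine (the strain records are invented for illustration):

        # Illustration: indexed lookup vs. scanning flat files. SQLite
        # stands in as a representative relational engine; records invented.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute(
            "CREATE TABLE strain (id TEXT PRIMARY KEY, gene TEXT, phenotype TEXT)"
        )
        conn.executemany(
            "INSERT INTO strain VALUES (?, ?, ?)",
            (("S%06d" % i, "gene%d" % (i % 500), "wild-type")
             for i in range(100_000)),
        )
        # A flat file would need a full scan; the primary-key index
        # resolves this lookup in logarithmic time.
        print(conn.execute(
            "SELECT * FROM strain WHERE id = ?", ("S042000",)
        ).fetchone())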

  17. Value added data archiving

    NASA Technical Reports Server (NTRS)

    Berard, Peter R.

    1993-01-01

    Researchers in the Molecular Sciences Research Center (MSRC) of Pacific Northwest Laboratory (PNL) currently generate massive amounts of scientific data. The amount of data that will need to be managed by the turn of the century is expected to increase significantly. Automated tools that support the management, maintenance, and sharing of this data are minimal. Researchers typically manage their own data by physically moving datasets to and from long term storage devices and recording a dataset's historical information in a laboratory notebook. Even though it is not the most efficient use of resources, researchers have tolerated the process. The solution to this problem will evolve over the next three years in three phases. PNL plans to add sophistication to existing multilevel file system (MLFS) software by integrating it with an object database management system (ODBMS). The first phase in the evolution is currently underway. A prototype system of limited scale is being used to gather information that will feed into the next two phases. This paper describes the prototype system, identifies the successes and problems/complications experienced to date, and outlines PNL's long term goals and objectives in providing a permanent solution.

  18. Secure web book to store structural genomics research data.

    PubMed

    Manjasetty, Babu A; Höppner, Klaus; Mueller, Uwe; Heinemann, Udo

    2003-01-01

    Recently established collaborative structural genomics programs aim to significantly accelerate the crystal structure analysis of proteins. These large-scale projects require efficient data management systems to ensure seamless collaboration between different groups of scientists working towards the same goal. Within the Berlin-based Protein Structure Factory, the synchrotron X-ray data collection and subsequent crystal structure analysis tasks are located at BESSY, a third-generation synchrotron source. To organize file-based communication and data transfer at the BESSY site of the Protein Structure Factory, we have developed the web-based BCLIMS, the BESSY Crystallography Laboratory Information Management System. BCLIMS is a relational data management system powered by MySQL as the database engine and Apache HTTP as the web server. The database interface routines are written in the Python programming language. The software is freely available to academic users. Here we describe the storage, retrieval, and manipulation of laboratory information, mainly pertaining to the synchrotron X-ray diffraction experiments and subsequent protein structure analysis, using BCLIMS.
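
    The abstract states that the database interface routines are written in Python against MySQL; purely as a hedged illustration, one such DB-API style routine might look like the sketch below (the driver choice, connection parameters, table, and columns are hypothetical, not the actual BCLIMS code).

        # Hypothetical DB-API style interface routine; connection details
        # and schema are illustrative, not the actual BCLIMS implementation.
        import mysql.connector

        def record_diffraction_run(crystal_id, beamline, resolution_a):
            conn = mysql.connector.connect(
                host="localhost", user="lims", password="secret",
                database="bclims_demo",
            )
            try:
                cur = conn.cursor()
                cur.execute(
                    "INSERT INTO diffraction_run "
                    "(crystal_id, beamline, resolution_a) "
                    "VALUES (%s, %s, %s)",
                    (crystal_id, beamline, resolution_a),
                )
                conn.commit()
            finally:
                conn.close()

        record_diffraction_run("PSF-0123", "BL14.1", 1.9)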

  19. ANTI-VIRAL EFFECTS OF MEDICINAL PLANTS IN THE MANAGEMENT OF DENGUE: A SYSTEMATIC REVIEW

    PubMed Central

    Frederico, Éric Heleno Freira Ferreira; Cardoso, André Luiz Bandeira Dionísio; Moreira-Marconi, Eloá; de Sá-Caputo, Danúbia da Cunha; Guimarães, Carlos Alberto Sampaio; Dionello, Carla da Fontoura; Morel, Danielle Soares; Paineiras-Domingos, Laisa Liane; de Souza, Patricia Lopes; Brandão-Sobrinho-Neto, Samuel; Carvalho-Lima, Rafaelle Pacheco; Guedes-Aguiar, Eliane de Oliveira; Costa-Cavalcanti, Rebeca Graça; Kutter, Cristiane Ribeiro; Bernardo-Filho, Mario

    2017-01-01

    Background: Dengue is considered an important arboviral disease. Safe, low-cost, and effective drugs that possess inhibitory activity against dengue virus (DENV) are greatly needed to combat dengue infection worldwide. Medicinal plants have been considered an important alternative for managing several diseases, including dengue. As authors have demonstrated the antiviral effect of medicinal plants against DENV, the aim of this study was to systematically review the published research concerning the use of medicinal plants in the management of dengue using the PubMed database. Materials and Methods: Search and selection of publications were made using the PubMed database following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA statement). Results: Six publications met the inclusion criteria and were included in the final selection after thorough analysis. Conclusion: It is suggested that medicinal plant products could be used as potential anti-DENV agents. PMID:28740942

  20. Management of first-episode pelvic inflammatory disease in primary care: results from a large UK primary care database.

    PubMed

    Nicholson, Amanda; Rait, Greta; Murray-Thomas, Tarita; Hughes, Gwenda; Mercer, Catherine H; Cassell, Jackie

    2010-10-01

    Prompt and effective treatment of pelvic inflammatory disease (PID) may help prevent long-term complications. Many PID cases are seen in primary care, but it is not known how well management follows recommended guidelines. This study aimed to estimate the incidence of first-episode PID cases seen in UK general practice, describe their management, and assess its adequacy in relation to existing guidelines. It was a cohort study of UK general practices contributing to the General Practice Research Database (GPRD). Women aged 15 to 40 years consulting with a first episode of PID occurring between 30 June 2003 and 30 June 2008 were identified, based on the presence of a diagnostic code. The records within 28 days either side of the diagnosis date were analysed to describe management. A total of 3797 women with a first-ever coded diagnosis of PID were identified. Incidence fell during the study period from 19.3 to 8.9 per 10 000 person-years. Thirty-four per cent of cases had evidence of care elsewhere, while 2064 (56%) appeared to have been managed wholly within the practice. Of these 2064 women, 34% received recommended treatment including metronidazole, and 54% had had a Chlamydia trachomatis test, but only 16% received both. Management was more likely to follow guidelines in women in their 20s, and later in the study period. These analyses suggest that the management of PID in UK primary care, although improving, does not follow recommended guidelines for the majority of women. Further research is needed to understand the delivery of care in general practice and the coding of such complex syndromic conditions.
